At primary school, there was always one weird kid passing around an iPad open to an R-rated website, game or fan-fiction.
I didn’t have a phone until I was 12, or any social media apps until I was 16, which helped protect me from the dark rabbit holes that unrestricted internet access inevitably leads to. But I still couldn’t avoid the weird kid in the playground. No amount of monitoring can fully prevent kids from accessing inappropriate online content.
Teenagers are likely to find a way around the social media ban. Credit: iStock
As a teenager, I know that the blanket ban for under-16s, set to come into effect in December, probably won’t keep kids off social media. Whether they use fake birthdates to log into social media apps, or borrow someone else’s device, there’s always a workaround. Young people have been evading the watch of parents and authorities forever. If you owned a fake ID or had an older sibling buy you alcohol as a teenager, what makes you think that your child won’t find a way around this ban?
So, if we accept that many young people will continue to use social media despite the ban – because, let’s face it, they will – then something needs to be done to mitigate the harm they’ll be exposed to through that use.
The link between social media and mental health issues is well established. The medium is known to promote content that induces depression and eating disorders, or encourages and romanticises self-harm and suicide.
In my eyes, there are two main issues here that we can and should address to reduce this harm: the way social media apps actively push kids towards harmful content, and how kids are taught to engage with that content.
Social media apps are notorious for using algorithms that lead users towards content that is disturbing, depressing or enraging, because that’s what keeps them addicted.
My Instagram feed is dominated by videos about the female body, make-up and mental health, even though I’ve never sought out any of this content. But my algorithm knows I’m a teenage girl and seems to be trying to foster the insecurities it knows my demographic is susceptible to.
And my algorithm is relatively benign. In the US, parents who are suing social media companies for allegedly causing their children to take their own lives have reported that their children’s feeds were filled with material about “suicide, self-harm, and eating disorders”.
For social media companies, profits clearly come before teens’ mental health. So perhaps seriously jeopardising those profits would be the most effective way to force change.
While the impending social media ban threatens fines of up to $50 million for social media companies that do not take “reasonable steps” to prevent workarounds, that probably isn’t going to be enough of a punishment to create change. The term “reasonable steps” is too vague, and the profits made from having under-16s illegally using social media apps would likely outweigh the fines.
It’s instead worth looking to some of the more drastic steps that have been taken against social media companies in the US, for various reasons. The US government’s banning of TikTok, though relating to data privacy concerns rather than mental health, did effectively take the app offline in the US for a day (the ban was then postponed, but is due to come back into effect in September unless the app’s parent company, ByteDance, sells its American operations to a US-owned company).
This kind of broad government action against social media companies, threatening to entirely suspend their operations unless they cease recommending distressing or disturbing content to teenagers, might be worth trying in Australia.
But even if this doesn’t happen – if there’s no effective legislation from the government, and we can’t change the fact that kids will be exposed to dangerous content – one of the easiest and most important ways to reduce the harm of social media is education.
TikTok supporters and users at a March 2024 protest against the US banning the platform. Credit: AP
Parents and schools often warn us about online predators, but not about how we should deal with content that makes us feel bad about ourselves or other people. And that’s probably because adults and authorities don’t fully understand what we’re being exposed to.
If schools partnered with social media experts and psychologists to learn what kinds of content social media promotes to young people, what warning signs parents should look for if their child is at risk of internet-induced mental health issues, and how young people can disengage from harmful content or learn to deal with it better, then we might make some progress. It’s akin to giving kids and teenagers a vaccine against the social media virus, rather than trying to keep it out of the country.
Because, after all, social media doesn’t cease being a cesspit of negativity and danger once children turn 16. These highly powerful algorithms profit off worsening our mental health, and they’re relentless. Educating young people on how to critically engage with or distance themselves from harmful online content is a long-term form of protection.
Crisis support is available from Lifeline 13 11 14.
Saria Ratnam is a University of Melbourne arts student who has been interning at The Age.