Australia’s parliament just passed a groundbreaking law that bans kids under the age of 16 from using social media platforms, marking a significant shift in how the government is addressing the online safety of young people.
The legislation passed with bipartisan support, although it faced significant opposition from some members of the center-right Coalition and independent members of parliament. The law aims to protect young people’s mental health and well-being by restricting their access to platforms like TikTok, Instagram, Facebook and Snapchat. It is expected to take effect at the end of 2025, giving social media companies a year to comply.
While the ban seems popular with the general public, particularly those concerned with the mental health implications of social media use among children, it has also generated considerable debate. The Australian government argues that social media platforms contribute to a range of harms, including cyberbullying, addiction and exposure to harmful content. However, critics of the law — including many members of parliament, mental health experts and civil liberties groups — say that it represents an overreach and may do more harm than good.
The law requires social media companies to take “reasonable steps” to prevent users under the age of 16 from accessing their platforms, with fines of up to $50 million for failure to comply. Importantly, the law specifies that companies cannot compel users to provide government-issued identification, such as a digital ID, for age verification. Messaging apps, online gaming services and platforms that support health and education services (such as YouTube) will be exempt from the ban.
The swift passage of the legislation has raised concerns about the limited consultation and the rushed nature of the process, yet many see the law as a necessary step in protecting children from the dangers of the internet. Still, there are real questions about whether the ban will truly protect vulnerable users or whether it risks exacerbating other problems, such as digital exclusion and inequality.
Social media platforms have long been criticized for their addictive design and for engagement-driven algorithms that keep users interacting with content for longer, thereby increasing ad revenue. Those same algorithms also contribute to a range of problems, from the spread of misinformation and hate speech to the exacerbation of mental health issues, particularly among young people.
Social media has often been compared to Big Tobacco for its addictive nature and the harm it can cause. Experts argue that, much as the tobacco industry was forced to change its practices after years of public scrutiny, the tech industry must be held accountable for designing platforms that manipulate and exploit users, and for prioritizing profit over user well-being.
The challenge is regulating not the platforms themselves, but the business model that underpins them. Social media companies profit by keeping users engaged for as long as possible, and their algorithms are designed to encourage this engagement, often at the expense of users’ mental health. Social media itself is not inherently harmful, but the way these platforms are structured and operated can cause harm. In this context, the Australian ban fails to address the root cause of the problem: the exploitative nature of these platforms’ business models.
The law’s implementation challenges are considerable. Age verification is technologically possible, but still a complex and imperfect process. Social media platforms will face significant difficulties in ensuring that children under 16 cannot access their services without compromising user privacy or introducing security vulnerabilities. Children who are determined to access social media will likely find ways to bypass restrictions, raising the question of how effective the ban will actually be.
The law also risks isolating young people from a range of opportunities. Social media platforms are not just for entertainment; they are essential tools for education, connection and exploration. More importantly, they are integral to understanding and engaging with the digital world that is taking shape, and denying young people access to these platforms effectively shuts them out of it.
While mental health experts acknowledge that social media can have harmful effects, they also emphasize that it provides vital support for young people who might otherwise feel isolated. As Nicole Palfrey from the mental health organization Headspace argued during the Senate inquiry, social media offers “help-seeking” opportunities for children, especially those living in areas with limited access to other forms of support.
While the Australian social media ban may seem like a quick and simple solution to the complex problems associated with social media, it is ultimately a flawed approach. It fails to address the root causes of the issues and risks depriving children of the opportunities that social media can provide.
Instead of banning access to these platforms, we should focus on regulating the companies that operate them, holding them accountable for the harm they cause and creating a safer online environment for all users. The rights of children to education, connection and self-expression should be safeguarded. By focusing on regulation rather than restriction, governments can create an environment that balances the protection of young people with their right to access the digital resources they need to succeed.
Pari Esfandiari is the co-founder and president of the Global TechnoPolitics Forum, a member of the At-Large Advisory Committee at ICANN representing the European region, and a member of APCO Worldwide’s advisory board.