
Reflections on Australia's social media ban for under-16s

Only a few short weeks ago, our jaws collectively hit the floor when the Australian government announced a social media ban for under-16s. At this point, many details (including key definitions) are still unclear but, in a nutshell, the law introduces a legal minimum age of 16 to hold an account with certain social media platforms and requires these platforms to introduce reasonable age-gating practices.

The ban, which will come into force in late 2025, has sparked widespread discussion and debate, largely centring on its draconian nature, its potential implications for freedom of information and expression, and how it could inhibit the social development of children. While many social media and online platforms are already unavailable to under-13s, extending this restriction to under-16s affects a developing age group for whom social interactivity is far more important.

Those supporting the ban argue that certain social media platforms cultivate the spread of misinformation, hate speech, and harmful content, which, in turn, can contribute to violence, extremism, and cyberbullying. The overall objective is therefore to control the spread of dangerous content and protect children from online harm.

However, is an outright ban the most effective way to achieve this? Those against the ban, in addition to criticising its inhibitive nature and the current lack of detail, argue that it will ultimately be circumvented by the technological aptitude of Gen Alpha (e.g. bypassing age-gates by way of VPNs). Concerns have also been raised that the ban could push children to less regulated corners of the internet.

It is interesting to compare the ban to the approach we are seeing in the EU and in the UK. In the EU, through the introduction of the Digital Services Act, the focus has been on regulation through broadly requiring improved transparency, better moderation, and more proactive removal of harmful content. The UK approach is also regulation-focused but the Online Safety Act 2023 (which will start to take effect over the course of 2025) is even more direct and prescriptive, listing specific measures that should be implemented on services likely to be accessed by children. This approach is much more focused on children and the scale of fines that can be imposed by Ofcom is significantly higher.

The Australian government's decision to favour prohibition over increased regulation perhaps indicates exasperation with the social media giants and the perceived failure to implement effective age-gating.

One thing is certain: child safety will remain at the forefront of the digital regulatory agenda. It will be interesting to observe how these approaches compare with one another and what new tools and solutions are developed to comply with these requirements.


Tags

online safety act, data protection and privacy, online safety, technology, commentary