Despite widespread criticism of the proposed bill, and many questions as to whether it will actually deliver the intended benefit, the Australian government is pushing ahead with its restrictions on social media usage, which will see users aged under 16 banned from social apps.
Yesterday marked the latest stage of the process, with the government officially introducing the “Online Safety Amendment” bill in Parliament. The next step is for Parliament to vote on the bill, which is likely to happen next week.
And the government seems very keen to enact it, despite a broad range of experts voicing concerns about the impacts that it’ll have, and the practical realities of its enforcement.
But again, the government is keen to take action on behalf of parents everywhere, though as it stands, I’m not sure that this proposal is going to work as the government expects.
First off, there will be challenges in enforcement.
As per the bill:
“The Online Safety Amendment (Social Media Minimum Age) Bill 2024 (the Bill) amends the Online Safety Act 2021 (Online Safety Act), with the aim of establishing a minimum age for social media use, placing responsibility on social media platforms for the safety of their users.”
So the platforms themselves will be responsible for its enforcement, meaning that each individual app will seemingly have to implement its own systems to detect and block underage users.
Which they’ve never been able to do effectively. Every app already has its own detection systems in place, based on its current age restrictions, yet even industry-leading processes designed to weed out young users are not 100% effective. The Australian government acknowledges this, conceding that some youngsters will still be able to access social apps despite these regulations. Yet its stance is that introducing this into law is a step in the right direction either way, as it will, at the least, give parents a means to push back on their children’s requests to join social apps.
But more importantly, the Australian government is yet to provide a standard framework for how the apps will be measured, and thus found to be in violation of these laws. Right now, it seems like every app will be judged based on its own processes, which will mean significantly variable approaches to enforcement.
Meta, for example, has much more comprehensive age detection systems in place than, say, X, which has fewer checks and balances. In terms of enforcement, that seems like a minefield of inequality, one that will make this bill largely unenforceable. Without agreed standards of process, the better-resourced platforms will be significantly advantaged, while detection at scale will also be complex, given the insight available (or not) from each app.
This is part of the reason why Meta has argued that age detection should be conducted at the app store level, as that would ensure that all apps are held to the same, consistent standard. But that’s not as flashy as going after the perceived enemies in Facebook and TikTok, which looks better for a government that wants to be seen as standing up to big tech.
There has been discussion of an industry standard process for age detection, which the government will be looking to impose as part of the implementation phase. But the details of that process haven’t been revealed as yet, and those in charge of reviewing potential options on this front seem unconvinced that any of them will be effective.
With potential fines of up to US$32 million on the line, this seems like a major oversight, one that could render the whole proposal ineffective from the start. And that’s before you even get into questions as to whether we should be banning young teens from social apps at all.
Because the analysis on this front is varied, with some academics suggesting that social media plays a critical connective role for teens, while others point to its harmful impacts on certain users.
That last point is probably the most pertinent: social media will have different impacts on different users, and as such, a universal ban for all teenagers won’t be a “solution” to the perceived dangers in this respect.
Indeed, even the research that the Australian government cites in support of its teen ban proposal is not conclusive, with the author of one of the reports highlighted within the documentation noting that the government has misinterpreted his findings.
So, the bill will potentially be unenforceable, depending on the specific mechanisms in place, and ineffective, based on academic insight.
Oh, and also, messaging apps will be exempt.
At this stage, the bill will cover Reddit, Snapchat, TikTok, Facebook, Instagram and X, with messaging apps like Messenger and WhatsApp not part of the current proposal. Neither is YouTube, which would seem just as problematic for teens, based on the concerns raised about other social apps.
Newer platforms like Threads and Bluesky are also not currently listed in scope, which leaves a heap of potential holes in the proposed restriction of social media use.
Because even if kids are banned from these main apps, they’ll just go to other platforms instead. Many teens are already active on WhatsApp, and pushing them out of the major apps will see other alternatives gain traction.
And without definitive guidelines as to which apps will be included in the bill, based on user counts and/or other specifics, the government will need to table an amendment every time a new app gets popular with teens, which makes this even less workable as a solution.
Overall, the teen ban bill is an ill-advised, poorly structured approach to a problem that may not even exist.
But the government is keen to show parents that it’s taking action, so much so that it’s only allowing a 24-hour window to submit amendments. Which means that it may well become law very soon. But while the Australian Government wants to showcase “world leading” leadership in this case, it’s more likely to highlight the opposite: that policy makers remain largely out of touch with the modern online landscape.
If 16-year-olds are using social apps, 15-year-olds will find ways to do the same, as will 14-year-olds, as they look to keep up with their high school peers. So while the basic principle of protecting teens from online harms makes sense, banning them, and shielding them from those harms entirely, is unlikely to be the answer in the longer term.
App store-level restrictions would be more effective, if restriction is the route you want to take, while mandated cybersecurity education would be a better avenue, one that acknowledges the reality of the modern interactive landscape, and the dangers posed within it.