Australia could employ various technologies to enforce its social media ban for individuals under 16, although each option carries risks, according to a recent report.
The ban, designed to mitigate the detrimental effects of social media, is set to take effect in December and has generated interest from international leaders due to its pioneering nature.
The new legislation requires platforms to take reasonable steps to prevent users under 16 from creating accounts and to deactivate any existing ones.
While many parents back the initiative, experts have voiced concerns, particularly over data privacy and the reliability of age-verification technologies.
The federal government enlisted the UK-based Age Check Certification Scheme to explore potential enforcement technologies, leading to a final report that outlined several proposed methods.
These approaches included identity verification with government documents, parental consent, and AI technologies capable of assessing user age through physical characteristics and behavioral analysis. However, the report clarified that no single method could guarantee sufficient efficacy across all scenarios.
Identity document verification was identified as the most effective approach, but risks of prolonged data retention and information sharing with regulators raised privacy red flags.
The report also highlighted challenges in AI-based facial recognition, noting its high accuracy for adults but a significant margin of error among younger age groups. This raises the potential for both false approvals and unjustified account bans.
The report also reiterated concerns about parental verification methods and suggested that a layered approach to age assurance would be more reliable. It cautioned that tech companies must step up their efforts to prevent circumvention through techniques such as document forgery and VPN use.
Communications Minister Anika Wells acknowledged there is no universal solution, but said the report showed that age checks can be carried out in ways that are both effective and protective of privacy. She noted that social media companies, as leading developers of AI technology, have a responsibility to use those resources to keep children safe online.
The impending regulation allows for fines of up to AUD 50 million ($32.5 million; £25.7 million) for platforms failing to implement the necessary restrictions, with impacted services including Facebook, Instagram, Snapchat, and YouTube.
Polling suggests a majority of Australian adults support the ban, but mental health advocates argue it could isolate children or push them into less regulated corners of the internet. Rather than stricter bans, they propose strengthening content moderation to better equip young people for online challenges.