Twitch, the popular livestreaming platform best known for gaming broadcasts, has been added to Australia's impending social media ban for users under the age of 16. The decision expands the list of restricted platforms, which already includes giants such as Facebook, Instagram, TikTok, and Snapchat.
The ban takes effect on December 10, requiring Twitch and similar services to deactivate existing accounts held by users under 16 by January 9 of the following year. The initiative aims to reduce the risks associated with social media exposure, particularly exposure to harmful content.
According to Australia's eSafety Commissioner, Julie Inman Grant, Twitch was included because its primary function is online social interaction: it enables users to interact through content sharing and livestreaming, raising concerns about children engaging in potentially risky environments.
Ms. Inman Grant said the ban is intended to relieve the pressure and risks children face online. The Australian government expects platforms to comply with the new regulations or face substantial penalties that could run into the millions.
Twitch already bars users under 13 from holding accounts, and those aged 13 and over may use the platform only with parental permission. Under the new legislation, Twitch will further restrict access for younger audiences, a significant escalation in efforts to create safer online environments for children.
The ban also means that other platforms, such as YouTube, Reddit, and X (formerly Twitter), will need to take steps to comply with the restrictions. How the ban will be enforced is still under discussion, with options ranging from age verification via government IDs to algorithms that infer a user's age from their behavior.
As the policy approaches its launch, parents and other stakeholders are weighing the implications for children's access to digital content. While eSafety officials have expressed confidence in the directive, how effectively the ban can be implemented and monitored remains an open question.