Huge social media purge hits Australian teenagers as Meta deactivates accounts of under-16 users

Meta has started deactivating accounts it believes belong to Australians aged under 16, moving ahead of a landmark law that will make it illegal for major social platforms to host users under 16 without taking “reasonable steps” to prevent them. The first wave of removals began on 4 December 2025, a week before enforcement begins on 10 December 2025.
Meta says it has already notified affected users, primarily those aged 13–15, and will give them options to download or delete their content before accounts are closed. The company estimates roughly 350,000 Instagram accounts and 150,000 Facebook accounts in Australia could be affected, and it has blocked the creation of new under-16 accounts on the platforms covered by the measure.
How the removals and appeals will work
Meta has outlined a two-week notice process for users it believes are underage. Impacted users receive in-app messages, emails and SMS prompting them to either preserve their content or verify their age. Those who dispute Meta’s assessment can request a review and will be asked to verify their age via a “video selfie” or by supplying government ID such as a driver’s licence. Meta stresses that implementing the law is “an ongoing and multi-layered process” and has urged the government to adopt app-store level verification and parental-consent tools to reduce the need for repeated verification across apps. Independent reporting and tests suggest the age-verification systems are imperfect.
Guardian testing found the video-selfie checks (provided by third-party vendors) can correctly confirm some older users but struggle with edge cases, including users close to the age threshold and those from minority groups, which may push the company to request government ID more often than hoped. Critics warn that facial age estimation raises privacy and fairness concerns.
Why Australia is pushing the policy
The Australian government framed the ban as a public-health and child-safety measure. Communications Minister Anika Wells said the law is designed to protect “Generation Alpha” from manipulative algorithms and the harms of early social-media exposure, citing concerns about addiction-style design and exposure to harmful content. A July eSafety report found 96% of Australian children aged 10–15 had used at least one social platform, and large shares reported exposure to harmful content, cyberbullying and grooming-type behaviour.
Platform responses, pushback and practical problems
Meta and other platforms have argued the law is blunt and difficult to enforce at scale. Meta has warned it could be more effectively implemented through standardised, privacy-preserving age checks at app stores and by giving parents tools to approve under-16 access, rather than forcing platforms to rely on sometimes-inaccurate detection systems. YouTube has said the law risks making its service “less safe” for young people by removing parental controls and forcing them away from platforms that offer moderation and safety features.
The policy has also produced immediate practical problems: civil society groups, businesses and ordinary users have reported wrongful account closures; industry regulators have warned of teething problems; and the Telecommunications Industry Ombudsman has highlighted gaps in recourse for people wrongly locked out of accounts. Observers expect a spike in appeals and complaints in the coming weeks.
What to watch next
With the law due to be enforced from 10 December 2025, attention now turns to whether platforms can roll out technically and legally robust age-assurance systems without infringing privacy or excluding vulnerable users who rely on online services for social connection and support. Researchers and government bodies will also be studying the immediate social effects: whether the measure reduces exposure to online harms, and how families and children adapt. Several academic teams and child-welfare research centres have launched studies to track the ban’s impact on households and young people.


