
Two landmark US verdicts delivered in March 2026 found Meta (Facebook, Instagram) and Google’s YouTube liable for demonstrable harm to young users: one for negligently designing addictive social media platforms, the other (against Meta alone) for failing to protect children from sexual exploitation.
These decisions provide legal and evidential support for the regulatory approaches already underway in Australia and proposed in New Zealand. Rather than validating a new direction, the verdicts reinforce and legitimise age-restriction frameworks already being implemented, while offering case law that may strengthen enforcement and inform design requirements.
We have argued in past reports that the focus of social media regulation should be algorithm design, not individual content. These judgments validate that approach, opening the way for regulators to focus on the structural elements of social media’s impact rather than getting drawn into contentious arguments about the merits of political and social commentary.
A Los Angeles jury found Meta and YouTube liable for negligent design of their social media platforms after a seven-week trial. The plaintiff, identified as Kaley (aged 20 at the time of the verdict), testified that she began using YouTube at age 6 and Instagram at age 9, eventually spending ‘all day long’ on social media. By age 10 she had developed anxiety, depression, and body dysmorphia.
The jury determined that both platforms employed addictive design features, including infinite scroll, autoplay, and persistent notifications, while executives knew these features could harm young users.

One day before the Los Angeles verdict, a New Mexico jury found Meta liable for failing to protect children from child predators and sexual exploitation on Facebook and Instagram. The jury determined Meta violated state consumer protection laws and ordered the company to pay $375 million in civil penalties.
Significantly, internal Meta documents revealed that executives were aware that end-to-end encryption would shield approximately 7.5 million child sexual abuse material reports from law enforcement, yet implemented the policy anyway.
Both verdicts reflect a critical shift in legal strategy. Rather than focusing on platform content (which receives broad protection under Section 230 of the US Communications Decency Act), the cases targeted platform design and the companies’ knowledge that their design choices were addictive and harmful. This approach has been described as Big Tech’s ‘Big Tobacco moment’ – establishing that the companies knew their product was harmful and concealed that knowledge.
The Los Angeles case is the first of approximately 2,000 pending lawsuits against social media companies brought by parents and school districts, making this a bellwether verdict with potentially enormous precedential and financial implications. Of course, Meta and YouTube will appeal. But the trend is now clearly against them.
We have consistently argued in past reports that the regulatory focus should be on algorithms and other structural aspects of the social media industry. One of the misfortunes of the collapse of the Federal Government’s 2023 mis/disinformation proposals was that the draft legislation would have empowered the ACMA to demand information and data from social media platforms about the operation of their algorithms. This would have introduced some welcome transparency (see our report “Mis/disinformation regulation – benefits, risks, and one big gap”).
That task has since fallen to the eSafety Commissioner, who has suffered a series of setbacks in court when attempting to regulate content. In contrast, the Commissioner has found stronger ground when acting within a framework designed to structurally mitigate harm, as with the under-16 social media ban (see our report “In Defence of Australia’s eSafety Commission – Why Technology Needs Regulation”).
The jury’s core finding was that Meta and YouTube deliberately engineered their platforms to be addictive. Internal documents presented at trial showed that executives understood the addictive nature of features such as infinite scroll, autoplay, and persistent notifications.
The jury found that platform executives knew these design features could harm young users but failed to warn consumers. The distinction is legally significant: it is no longer enough for platforms to argue that many complex factors contribute to mental health issues. The jury determined that the platforms made young users who already had vulnerabilities or mental health struggles demonstrably worse. As one legal commentator noted, the verdict reflects a new standard: ‘it is the design of the platform…that is an enormous deal.’
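To make the design features at issue concrete, the sketch below shows how an infinite-scroll feed is typically built on the web, using the browser’s standard IntersectionObserver API. The endpoint and element names are illustrative assumptions, not drawn from either company’s code; the point is that the next page of content loads automatically, so the feed never offers a natural stopping point.

```typescript
// Minimal infinite-scroll sketch. The '/api/feed' endpoint and element IDs
// are hypothetical; IntersectionObserver is a standard browser API.
const feed = document.querySelector<HTMLElement>('#feed')!;
const sentinel = document.querySelector<HTMLElement>('#sentinel')!; // invisible marker at the bottom
let cursor: string | null = null; // server-supplied pagination token

async function loadNextPage(): Promise<void> {
  const res = await fetch(`/api/feed?cursor=${cursor ?? ''}`);
  const page: { items: string[]; nextCursor: string | null } = await res.json();
  for (const itemHtml of page.items) {
    const item = document.createElement('div');
    item.innerHTML = itemHtml;
    feed.appendChild(item); // append below existing content
  }
  cursor = page.nextCursor;
}

// Fetch more whenever the sentinel comes within 600px of the viewport.
// The user never clicks 'next page' -- loading is triggered by the act of
// scrolling itself, which is the design feature the jury focused on.
new IntersectionObserver(
  (entries) => { if (entries[0].isIntersecting) void loadNextPage(); },
  { rootMargin: '600px' },
).observe(sentinel);
```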
The plaintiff’s testimony detailed specific harms linked to platform use: anxiety, depression, body dysmorphia, reduced engagement with family and offline activities, and compulsive checking behaviours. The jury accepted that these harms were substantially caused by the addictive design of the platforms, not solely by pre-existing conditions or external factors.
Australia passed the Online Safety Amendment (Social Media Minimum Age) Act in late 2024, making it the first country to legislate a minimum age for social media platforms. The legislation came into force on 10 December 2025. Rather than imposing a complete ban, the law requires platforms to ‘take reasonable steps’ to prevent Australians under 16 from creating or maintaining accounts.
The legislation applies to social media platforms meeting three conditions: the sole or a significant purpose of the service is to enable online social interaction, users can link to or interact with other users, and users can post material. As of December 2025, the age-restricted platforms include Facebook, Instagram, Threads, TikTok, Snapchat, X, YouTube, Reddit, Kick, and Twitch.
Exempted services include messaging apps (WhatsApp, Messenger), gaming platforms (Discord, Roblox, Steam), educational services, and professional networking services.
Platforms are using multiple approaches to verify age, including live video selfies, email addresses, and official documents. The government’s Age Assurance Technology Trial (early 2025) concluded that age verification could be implemented without compromising privacy. Notably, as of February 2026, some young people were still able to circumvent the ban through VPNs and other workarounds, while others reported feeling cut off from their communication networks.
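As an illustration of how a layered ‘reasonable steps’ age gate might be structured, the sketch below chains several assurance signals and allows account creation only when at least one sufficiently confident signal places the user at 16 or over. The verifier functions, threshold, and types are hypothetical assumptions, not taken from any platform’s implementation or from eSafety guidance.

```typescript
// Hypothetical layered age-assurance check. The verifiers below are stubs
// standing in for real third-party age-estimation or document services.
type AgeSignal = { method: string; estimatedAge: number; confidence: number };

async function estimateAgeFromSelfie(selfie: Uint8Array): Promise<AgeSignal> {
  // Stub: a real system would call a facial age-estimation service.
  return { method: 'facial-estimation', estimatedAge: 17, confidence: 0.9 };
}

async function checkIdDocument(doc: Uint8Array): Promise<AgeSignal> {
  // Stub: a real system would call a document-verification service.
  return { method: 'id-document', estimatedAge: 18, confidence: 0.99 };
}

const MIN_AGE = 16;
const MIN_CONFIDENCE = 0.85; // illustrative threshold, not a regulatory figure

// Run whichever checks the user has supplied evidence for; cheaper, less
// invasive signals can be tried first to limit data collection.
async function mayCreateAccount(selfie?: Uint8Array, idDoc?: Uint8Array): Promise<boolean> {
  const checks: Promise<AgeSignal>[] = [];
  if (selfie) checks.push(estimateAgeFromSelfie(selfie));
  if (idDoc) checks.push(checkIdDocument(idDoc));
  const signals = await Promise.all(checks);
  return signals.some((s) => s.confidence >= MIN_CONFIDENCE && s.estimatedAge >= MIN_AGE);
}
```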
The legislation provides for civil penalties of up to AUD 49.5 million (approximately USD 33 million) for non-compliance. Notably, there are no penalties for young users or parents who access restricted platforms – the regulatory burden falls entirely on platforms. The eSafety Commissioner can issue regulatory guidance, compliance notices, and enforcement directions.
In May 2025, National Party MP Catherine Wedd introduced the Social Media (Age-Restricted Users) Bill, which would bar children under 16 from social media by requiring age verification. The bill is explicitly modelled on Australia’s approach. In October 2025, the New Zealand Government confirmed plans to progress the legislation. Prime Minister Christopher Luxon stated in May 2025 that he was ‘concerned by the harm social media can cause young New Zealanders’ and that restricting access for under-16s ‘would help protect our kids from bullying, harmful content and social media addiction.’
The bill has been positioned as a member’s bill rather than a government bill because of opposition from coalition partner ACT New Zealand, which has expressed a preference to ‘watch the implementation’ of Australia’s approach before committing to New Zealand legislation. However, the government has signalled that cross-party support is being sought, making passage likely in the current or next term.
Opposition to the bill has nonetheless come from diverse quarters.
The US verdicts validate Australia’s and New Zealand’s focus on platform design rather than user content. The juries’ finding that the platforms were deliberately engineered to be addictive – not merely that they host harmful content – provides a strong legal and evidential foundation for age-restriction frameworks. Both regulatory approaches are grounded in the same core insight: the danger lies in the design architecture, not in individual posts or images. The US case law demonstrates that this framing has legal weight in adversarial proceedings.
The internal documents and testimony presented in the US trials provide evidence that platform executives – particularly Mark Zuckerberg – knew about the addictive design of their platforms and the risks to young users. This evidence could support future regulatory actions in Australia and New Zealand if platforms fail to comply with age restrictions or if governments move toward design-change requirements. For instance, in the New Mexico case, the state’s lawyers revealed that Meta executives understood the impact of encryption decisions on child safety. This type of evidence could inform design standards imposed by eSafety in Australia or by future New Zealand legislation.
The verdicts establish that platforms can be held liable for harms flowing from design choices, even where pre-existing vulnerabilities exist. This is significant for A/NZ regulators because it undermines the defence that ‘not all harms come from our platform.’ The juries in both US cases found that platforms made vulnerable young people demonstrably worse through their design.
This precedent may embolden Australian and New Zealand regulators to pursue enforcement action and may make legal challenges to age-restriction laws (like those underway in Australia’s High Court) harder to sustain.
The verdicts reinforce that regulators should take implementation rigour seriously. The US cases showed that internal documents, employee communications, and design specifications are critical evidence. Australian and New Zealand regulators should ensure that compliance reporting from platforms is thorough and that platforms are required to disclose design changes made specifically to comply with age restrictions.
The US cases also showed that platforms will defend themselves vigorously; regulators should anticipate legal challenges and ensure their regulatory guidance is well-documented and evidence-based.
The verdicts add weight to a growing global movement. As of early 2026, countries including France, the United Kingdom, Malaysia, Germany, Italy, Greece, Spain, and Denmark are considering similar bans or restrictions. The US verdicts provide evidence that age restrictions are not ideological or untested; they are grounded in demonstrated harms and platform design practices. This momentum will likely accelerate New Zealand’s passage of its legislation and strengthen Australia’s resolve to defend its approach against legal and political challenges.
The US verdicts suggest that merely restricting access by age may not be sufficient long-term. If platforms can be found liable for addictive design, regulators may move toward mandating specific design changes (e.g., removing infinite scroll, limiting autoplay, capping notification frequency). Australia’s eSafety Commissioner has already indicated willingness to pursue such measures in phase 2 of implementation. New Zealand should anticipate similar moves if it passes age-restriction legislation.
The US verdicts show that once a platform design is shown to be addictive and to cause harm, the burden of proof subtly shifts: the platform must affirmatively demonstrate why the design is necessary and proportionate. This creates pressure to adopt safer defaults. Platforms may feel compelled to offer filtering, parental controls, and time-limiting features not as optional extras but as standard architecture.
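A minimal sketch of what such mandated safer defaults could look like as platform configuration appears below; every field name and value is an illustrative assumption rather than a published regulatory requirement.

```typescript
// Hypothetical safer-defaults configuration for accounts assessed as under 16.
// All names and limits are illustrative assumptions, not regulatory text.
interface MinorSafetyDefaults {
  infiniteScroll: boolean;                    // false = paginated feed with an explicit 'load more'
  autoplayNextVideo: boolean;                 // false = require a deliberate tap to continue
  maxPushNotificationsPerDay: number;         // cap on engagement prompts
  quietHours: { start: string; end: string }; // no notifications overnight
  dailyTimeLimitMinutes: number;              // soft cap with a prominent prompt
  parentalControlsEnabled: boolean;           // on by default, not opt-in
}

const defaultsForUnder16: MinorSafetyDefaults = {
  infiniteScroll: false,
  autoplayNextVideo: false,
  maxPushNotificationsPerDay: 5,
  quietHours: { start: '21:00', end: '07:00' },
  dailyTimeLimitMinutes: 60,
  parentalControlsEnabled: true,
};
```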
Australia and New Zealand should maintain alignment as they implement age restrictions. Platforms will seek to create single global systems to comply with multiple jurisdictions’ requirements. Divergence between Australian and New Zealand approaches could make compliance more costly and create loopholes. The verdicts provide a shared evidentiary base; both countries should reference them in future guidance to platforms and to defend against legal challenges.
The US cases relied heavily on internal platform documents. Both Australian and New Zealand regulators should establish robust transparency reporting requirements, including regular disclosure of engagement metrics, algorithm changes, and safety measures. This data will be invaluable if future enforcement action or design change orders are pursued.
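One possible machine-readable shape for such transparency reports is sketched below; the fields are assumptions chosen to mirror the categories named above (engagement metrics, algorithm changes, safety measures), not any regulator’s published format.

```typescript
// Hypothetical periodic transparency-report schema. All field names are
// illustrative assumptions mirroring the disclosure categories in the text.
interface TransparencyReport {
  platform: string;
  periodStart: string; // ISO 8601 date
  periodEnd: string;
  engagement: {
    dailyActiveMinors: number;      // accounts assessed as under 16
    averageSessionMinutes: number;
    notificationsPerUserPerDay: number;
  };
  algorithmChanges: Array<{
    date: string;
    description: string;            // plain-language summary of the change
    expectedEngagementImpact: string;
  }>;
  safetyMeasures: {
    underageAccountsRemoved: number;
    ageAssuranceMethodsInUse: string[];
  };
}
```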
Australia’s High Court challenges and potential New Zealand constitutional concerns will benefit from the US verdicts as evidence. Regulators should develop detailed regulatory impact statements and compliance guidance that reference the US case law and the international evidence base. This will strengthen defences against arguments that age restrictions are disproportionate or ineffective.
The March 2026 US verdicts against Meta and YouTube represent a landmark moment in platform accountability. They validate the design-focused regulatory approach that Australia has implemented and New Zealand is considering. The verdicts reinforce the existing trajectory in both countries: that platform design – specifically the deliberate engineering of addictive features – is the appropriate regulatory target, and that age-restriction regimes are a proportionate response to demonstrated harms.
For Australia, the verdicts provide evidence to defend its age-restriction law against High Court challenges and international criticism. They suggest that moving beyond age restrictions to mandated design changes (e.g., removal of infinite scroll, limits on algorithmic ranking) may be legally defensible and evidence-based.
For New Zealand, the verdicts strengthen the case for passage of age-restriction legislation aligned with Australia’s model. They also underscore the need for robust implementation, including transparency requirements and design standards that go beyond mere access restriction.
Both countries should use the evidence and precedent from the US cases to: (1) defend their regulatory frameworks against legal and political challenge; (2) refine compliance guidance and transparency standards; (3) prepare for evolution toward design-change requirements; and (4) maintain alignment with the emerging global consensus that platform design, not user content, is the regulatory priority.
The verdicts do not resolve implementation challenges – VPN circumvention, displacement risk, data privacy concerns – but they do provide a robust evidentiary foundation for treating those challenges as implementation problems, not regulatory failures. Both Australia and New Zealand should view these US judgments as validation to move forward with confidence, rigour, and ongoing adaptation as the evidence base evolves.
Venture Insights is an independent company providing research services to companies across the media, telco and tech sectors in Australia and New Zealand.
For more information go to ventureinsights.com.au or contact us at contact@ventureinsights.com.au.