This article was first published by the San Francisco Chronicle.
Two juries, in two different jurisdictions, have now delivered back-to-back verdicts that may come to define this moment in technology law — and the future of Silicon Valley.
In New Mexico last month, a jury imposed a $375 million penalty on Meta for misleading the public about the safety of its platforms for children. Just one day later, a Los Angeles jury found Meta and YouTube liable for intentionally designing addictive platforms that harmed a young user’s mental health, awarding damages and assigning the majority of responsibility to Meta.
While the legal outcomes will be parsed in the days ahead, the broader significance is clear.
These cases are not really about Meta or YouTube. They expose a growing mismatch between the risks digital systems are designed to produce and the legal frameworks we still use to govern them. As courts begin to scrutinize product design alongside content, the longstanding distinction between the two is becoming increasingly untenable.
Over the course of these trials, jurors were presented with evidence that should trouble anyone.
In New Mexico, investigators posing as minors created accounts and documented sexual solicitations, while internal documents and expert testimony pointed to addiction, mental health harms and Meta’s own awareness of those risks. In Los Angeles, the focus shifted even more directly to product design, with features like infinite scroll, algorithmic recommendation systems and appearance-altering filters characterized as deliberately engineered to maximize engagement, particularly among young users.
Meta’s defense across both cases was equally revealing. It argued that no platform serving billions of users can ever be entirely safe, that it has invested heavily in safeguards and that individual harms cannot be straightforwardly attributed to platform use. In other words, the defense was not that harm does not occur, but that harm is an unavoidable byproduct of scale.
That is precisely the problem.
For more than two decades, our legal systems have treated digital platforms as intermediaries rather than architects. The original sin here is a federal law, Section 230 of the Communications Decency Act, which reflects a world in which platforms merely hosted content and were therefore immune from liability for what users posted. But the evidence in the New Mexico and Los Angeles cases underscores a different reality. Platforms are not just conduits; they are designed environments that shape behavior. They optimize for engagement using tools that are not neutral or incidental.
What is striking is how these cases advance that legal theory in two different directions. The New Mexico case focused on exposure to harm, including sexual predators, explicit content and misrepresentations about safety. The Los Angeles case focused on the generation of harm, namely addiction, compulsive use and mental health impacts arising from core product features. Together, they collapse the distinction between content and design.
This is a subtle but profound shift. It moves the legal inquiry from “what users do” to “what systems are designed to produce.”
We have been here before. From tobacco to opioids, industries have long argued that harmful outcomes are either unintended or the responsibility of individual users. Courts, eventually, rejected that framing when evidence showed that companies understood the risks and engineered their products accordingly. Social media now appears to be entering that same phase.
What is needed is a shift toward true accountability. That means evaluating not just whether companies remove harmful content, but whether their systems predictably generate harm. It means asking whether engagement-based business models are compatible with child safety. And it means recognizing that when harms are systemic, liability cannot be avoided by pointing to individual bad actors.
These recent verdicts will not, on their own, resolve these questions. But taken together, I think they signal something more important: courts are increasingly willing to scrutinize not just what platforms host, but how they are built.
Already, this litigation is part of a broader wave across the United States and Canada examining social media addiction, youth mental health and platform accountability. The cumulative effect is to test whether courts are willing to treat technology companies not as passive platforms or “mere conduits,” but as responsible designers of complex socio-technical systems.
If they are, the implications will be profound.
For policymakers, the lesson is straightforward. Do not wait for courts to do the work of regulation. Ongoing efforts around online harms, artificial intelligence governance and platform accountability must move beyond content moderation toward structural oversight. This includes transparency into algorithms, independent auditing of risk and enforceable duties of care, particularly where children are concerned.
For companies, the message is more uncomfortable. The era of plausible deniability is ending. Claims that harms are inevitable at scale are increasingly untenable, both legally and politically.
It’s about time.