A New Mexico jury has ordered Meta to pay $375 million for violating the state’s Unfair Practices Act, but the financial penalty is the least concerning part of the ruling. The real danger lies in the legal theory used to reach that verdict—one that transforms essential security features into evidence of negligence.
The verdict is part of a broader wave of legal defeats for Big Tech. In Los Angeles, a separate jury found both Meta and YouTube liable for designing addictive products that harmed a young user, awarding $6 million in damages. While the public reaction has been largely celebratory, these cases signal a fundamental shift in how courts view social media: not as platforms hosting speech, but as “defective products.”
When privacy features become evidence
The most alarming aspect of the New Mexico case is how the court treated end-to-end encryption (E2EE). In 2023, Meta made E2EE the default for Facebook Messenger to protect user privacy. By any conventional security standard, E2EE is the gold standard for protecting billions of people from surveillance, data breaches and authoritarian regimes.
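For readers unfamiliar with the mechanics, a minimal sketch of the end-to-end principle helps explain why prosecutors find it so vexing. The sketch below uses PyNaCl's public-key box purely as an illustrative stand-in (Messenger's actual deployment is built on the Signal protocol and is far more involved; the keys and message here are hypothetical). The structural fact that matters: the relaying server never holds any key material, so it cannot read, scan or produce message contents for anyone.

```python
# Illustrative sketch of the end-to-end principle using PyNaCl
# (pip install pynacl). Not Messenger's actual protocol.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The platform relays `ciphertext` as opaque bytes. It holds no keys,
# so it cannot decrypt, scan or hand over the plaintext to anyone.

# Only Bob, holding his private key, can recover the message.
receiving_box = Box(bob_key, alice_key.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

That server-side blindness is exactly what the state characterizes as “shielding bad actors”; it is also what makes the protection meaningful in the first place.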

Yet the New Mexico attorney general successfully argued that this specific design choice enabled harm. The state’s logic: because predators use encrypted messages to groom minors and exchange illegal material, the encryption itself makes it harder for law enforcement to intervene. By choosing to encrypt, Meta allegedly “enabled” the crime.
New Mexico is now seeking court-mandated changes to “protect minors from encrypted communications that shield bad actors.” This creates a terrifying precedent in which a security tool designed to protect the vast majority of users is characterized as a weapon for a small minority of criminals.
If implementing encryption becomes “Exhibit A” in a negligence lawsuit, the incentive for tech companies to improve security vanishes. Why roll out a privacy-protective feature if a plaintiff’s lawyer can later frame it as “shielding bad actors”?
Context: the Section 230 bypass
Section 230 of the Communications Decency Act generally protects platforms from being held liable for the content users post. To get around this “shield,” lawyers are now advancing a “product design” theory. They argue that the problem isn’t the speech (which is protected), but the design of the app (which they claim is a defective product), effectively rendering Section 230 irrelevant.
The “smoking gun” problem in safety engineering
Beyond encryption, these trials are creating a dangerous incentive for corporate silence. Much of the evidence used against Meta and YouTube came from internal documents where employees flagged safety risks and debated the tradeoffs of certain features.
In a healthy engineering culture, you want these debates. You want safety teams to document risks and wrestle with difficult choices before a product launches. But when these good-faith deliberations are presented to a jury as “smoking guns” proving the company “knew and did it anyway,” the rational corporate response is to stop putting anything in writing.
The lesson general counsels across Silicon Valley are currently drawing is that inquiry is a liability and ignorance is a legal strategy. When companies stop conducting risk assessments and stop documenting internal warnings to head off future lawsuits, their platforms actually become less safe for everyone.
The broader fallout for the open internet
This “design liability” framework doesn’t stop at Meta. It applies to any communication tool ever invented. Predators use the postal service and telephones; those tools are not considered “defective” because they can be misused. Applying this logic to software creates a precarious environment for small platforms that lack the legal resources of a tech giant to fight these theories in court.
As Meta and Google appeal these losses, the tech industry is watching to see if “product design” becomes the new standard for regulating the internet. If it does, the trade-off for “child safety” may be the systematic dismantling of digital privacy and the end of transparent internal safety auditing.
If the law begins to treat security features as liabilities, will companies stop innovating on privacy altogether to avoid the courtroom?