Meta’s recent courtroom losses in New Mexico and Los Angeles point to a broader problem for technology companies. Internal research can help a company understand how its products affect people, but that same research can later be used in court to show what company leaders knew and when they knew it.
In these two cases, juries found that Meta failed to do enough to protect young users, and internal records played an important role in shaping that conclusion.
This issue could shape the next phase of the tech industry’s growth in AI. Companies building chatbots and other AI tools are funding safety and impact research, but legal risk may make some leaders less willing to support work that could expose harmful effects on users.
Why It Matters: Internal safety research can help technology companies understand harm, though it can also become evidence in court, which may pressure companies to limit what gets studied, documented, or shared. That concern carries added weight in AI, where reduced transparency could leave the public and outside oversight groups with less visibility into how these tools affect people.
- A Repeated Legal Pattern: Meta lost two separate cases in the same week, one in New Mexico and one in Los Angeles. The cases were different, though each centered on the claim that the company failed to protect young users from harm linked to its platforms. Jurors reviewed a large volume of internal company material, including emails, presentations, survey results, and research findings. That evidence helped support the view that Meta had access to information about risks on its services and did not respond strongly enough to those problems.
- When Research Becomes Evidence: For years, Meta employed researchers to study how its platforms affected users. That work included internal surveys and studies on issues such as harmful experiences faced by teenagers on Instagram and changes in emotional well-being linked to Facebook use. In court, those materials became important because they gave juries a record of what the company had learned internally. Plaintiffs used that research to argue that Meta’s public image as a safety-conscious company did not fully match the concerns documented inside the business.
- The Haugen Effect: A major turning point came in 2021, when former Facebook employee Frances Haugen leaked internal documents that drew public attention to the company’s own findings about product harms. Beyond the outside criticism it triggered, the leak changed how many people viewed internal research at large tech companies. Since then, concern has grown that some firms may scale back sensitive research or limit access to internal data when those findings create legal or reputational risk.
- AI Faces the Same Question: The same issue now applies to companies building AI systems. Firms such as Meta, OpenAI, Anthropic, and Google are studying how their models behave and how to make them safer, though there is much less public information about how these products affect people in everyday use. Researchers cited in the reporting warn that companies may focus heavily on technical model questions while giving less attention to human impact, especially for children. If internal studies on user harm are treated as legal liabilities, companies may become less willing to examine those effects openly.
- Independent Oversight Still Matters: Outside researchers and advocacy groups argue that independent review remains necessary even when companies conduct internal research. Internal documents can help establish what risks were known, how long they were understood, and whether company leaders acted on that knowledge. When access to such information narrows, it becomes harder for courts, regulators, and the public to judge whether safety efforts are meaningful. That concern is especially acute for AI tools, where many questions about long-term user impact remain unresolved.
Go Deeper -> Meta’s court losses spell potential trouble for AI research, consumer safety – CNBC
Trusted insights for technology leaders
Our readers are CIOs, CTOs, and senior IT executives who rely on The National CIO Review for smart, curated takes on the trends shaping the enterprise, from GenAI to cybersecurity and beyond.
Subscribe to our 4x a week newsletter to keep up with the insights that matter.