Meta has formally declined to sign the European Union’s new voluntary code of practice for general-purpose AI systems.
The code was published earlier this month to help companies prepare for compliance with the EU's AI Act, a wide-reaching regulation whose rules for general-purpose AI models take effect on August 2, 2025. The legislation aims to increase transparency, reduce risk, and establish standards for the development and deployment of AI across the region.
In a public statement, Meta’s global affairs chief, Joel Kaplan, criticized the code as going beyond the scope of the AI Act.
He argued that the document introduces legal uncertainties and imposes burdens that go well beyond the Act's requirements. Kaplan warned that these measures could throttle the development of frontier AI models in Europe and stunt European businesses looking to build products on top of them.
Meta’s decision not to sign adds to a growing list of companies expressing concern about the EU’s approach.
Why It Matters: The balance between innovation and oversight now sits at the center of the international conversation on AI. The EU is positioning itself as a global leader in AI governance, aiming to shape international norms and ensure safety and transparency. But Meta's rejection of the code reveals a deeper struggle over who gets to set the rules for emerging technologies. If large tech companies resist regulatory frameworks, the global AI landscape may fracture into regions with competing standards.

- Meta Rejects the Framework: Meta has officially declined to sign the European Union’s voluntary Code of Practice for general-purpose AI systems, describing the document as overreaching and misaligned with the legal boundaries of the AI Act itself. Designed as a guide for early compliance, the code outlines documentation requirements, content restrictions, and transparency expectations for AI model developers.
- Warning From Meta’s Policy Chief: Joel Kaplan delivered a sharp critique of the EU’s regulatory path, calling it a clear case of bureaucratic overreach. He warned that the code could slow or even derail progress in developing frontier AI systems. Kaplan argued that by imposing vague and expansive obligations, the EU risks strangling innovation and placing artificial constraints on businesses that depend on advanced AI models to fuel new products and services.
- The EU’s Regulatory Ambition: The European Commission sees the AI Act and its accompanying code as cornerstones of its strategy to lead the world in responsible AI governance. The goal is to embed safeguards such as data accountability and ethical design into the foundation of AI development. With the Act targeting “systemic-risk” models, the Commission hopes to establish an international benchmark for AI safety and trust. These standards may eventually extend far beyond the borders of the European Union.
- Broader Industry Divide: Several high-profile companies, including Airbus and ASML, have signed a joint letter urging the Commission to delay the AI Act's rollout by two years. Though OpenAI has committed to the framework, the industry remains divided over whether the EU's approach represents responsible regulation or regulatory overreach.
- A Global Crossroads: The friction between the EU and major AI developers reflects a deeper, structural conflict over who gets to define the rules of the road for these technologies. Europe’s push for enforceable ethics and safety stands in contrast to the more market-driven, flexible approach favored by companies like Meta. As the AI Act nears implementation, the EU’s strategy is being tested on a global stage.
Source: Meta refuses to sign EU's AI code of practice (TechCrunch)