Google Agrees to Comply with EU’s AI Code of Practice, Marking a Shift in Strategy

Google Aligns with EU’s AI Code Amid Industry Tensions

In a notable pivot, Google has confirmed it will sign the European Union's General-Purpose AI (GPAI) Code of Practice, a move that signals a shift in the company's stance on regulatory compliance in Europe. The decision follows mounting pressure from EU regulators and comes just weeks before enforcement begins under the AI Act, a legislative framework set to impact major tech companies.

What Is the EU AI Code of Practice?

Published on July 10, 2025, the Code is a voluntary framework developed by independent experts to help AI companies align with the incoming obligations of the AI Act. Although non-binding, it offers tech giants a way to demonstrate good faith in complying with EU requirements, especially for general-purpose AI models that pose “systemic risk,” a category that covers large-scale systems developed by Google, OpenAI, Anthropic, and Meta.

From August 2, 2025, obligations for general-purpose AI models begin to apply under the AI Act. Companies like Google have been given two years to achieve full compliance, and the voluntary Code acts as a stepping stone toward that goal.

Why Google Changed Course

Back in June, representatives from several major AI firms—including Google and OpenAI—had urged the European Commission to delay mandatory enforcement, citing concerns about innovation slowdowns. However, the EU’s firm regulatory stance appears to have pressured Google into revisiting its position.

“We remain concerned that aspects of the AI Act and Code—such as deviations from copyright law or requirements to disclose trade secrets—could hamper innovation,” said Kent Walker, Google’s President of Global Affairs. Nevertheless, he acknowledged that collaboration with regulators was essential to building trust in AI technologies.

Meta Still Pushes Back

Unlike Google, Meta has refused to sign the Code. The company claims that it introduces legal uncertainties and innovation bottlenecks, criticizing what it sees as overly strict EU regulation. Meta’s VP for Global Public Policy, Joel Kaplan, argued that these constraints run counter to the interests of developers and could set back AI progress in Europe.

What the Code Requires

By signing the Code, companies like Google agree to a range of responsible practices. These include:

  • Transparent documentation of AI tools and models
  • Avoiding the use of pirated content as training data and complying with EU copyright law
  • Respecting opt-out requests from content creators
  • Demonstrating safety and robustness in AI deployments

These measures are designed to increase accountability and minimize risks associated with the rapid deployment of powerful AI models.

Conclusion: A New Era of AI Accountability in Europe

Google’s decision to sign the Code signals more than regulatory compliance: it marks a strategic adaptation to Europe’s increasingly firm stance on AI oversight. As the AI Act rolls out, the industry is entering a phase in which transparency, ethics, and collaboration will be just as crucial as innovation. Whether other tech giants like Meta follow suit remains to be seen, but the tide is clearly turning toward greater accountability in artificial intelligence.
