Artificial Intelligence Act – European Parliament moves faster on regulation

The EU’s landmark Artificial Intelligence Act is now a done deal, even as AI development continues to raise ethical, social, and economic questions. 

EU lawmakers overwhelmingly voted to adopt the latest draft of the new regulation that will govern how companies use artificial intelligence (AI). That draft, introduced in May 2023, added amendments addressing the most recent advances in generative AI models.

The AI Act, first introduced in April 2021, was passed on 14 June 2023 to strictly regulate AI services and mitigate the risks associated with AI use. The new legislation has been more than two years in the making. The first draft included safeguards against biometric data exploitation, mass surveillance systems, and policing algorithms, and it predated the surge in the use of generative AI models that began towards the tail end of 2022. 

Highlights of the AI Act

AI industry leaders and stakeholders have praised the layered approach taken by the EU. According to Kevin Bocek, VP at Venafi, a standout feature of the AI Act is the assignment of identities to AI models, which are then run through conformity assessments for eventual registration in the EU database. 

The progressive approach has been endorsed across different spheres because it will enhance AI governance, maintain control, and safeguard people’s well-being. Self-assessment gives a high degree of much-needed flexibility: businesses and organizations will evaluate their own projects and classify them in line with the risk categories outlined in the AI Act (a simplified sketch of such a self-assessment appears after the list below). New AI tools and systems must comply in order to uphold safety and public trust. 

The Act introduces a tiered approach for AI models – low and minimal risk, limited risk, high risk, and unacceptable risk. 

  1. Low and minimal-risk AI tools will not be regulated;
  2. Limited-risk AI tools will need to be transparent;
  3. High-risk practices will be strictly regulated. The Act requires an EU database covering general-purpose and high-risk AI systems, together with an explanation of how, where, and when high-risk AI systems are deployed in the EU. The database must be freely and publicly accessible, easily navigable, and user-friendly, and must remain machine-readable and understandable. The legislation specifies that the database provide search functionalities allowing the general public to look up specific high-risk systems, locations, risk categories, and keywords;
  4. Unacceptable-risk AI models will be banned. Right from the outset, the AI Act will prohibit AI systems that present an unacceptable level of risk, including predictive policing tools such as those already in use in several US states and social scoring systems of the kind used in China to classify people based on their behavior. 
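To give a feel for how the tiered model might translate into a self-assessment, here is a minimal sketch in Python. The tier labels mirror the Act’s categories, but the classification rules, example use cases, and the `self_assess` function are hypothetical illustrations, not anything drawn from the legislation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk categories named in the AI Act (labels only; illustrative)."""
    MINIMAL = "low and minimal risk"        # not regulated
    LIMITED = "limited risk"                # transparency obligations
    HIGH = "high risk"                      # strict regulation, EU database registration
    UNACCEPTABLE = "unacceptable risk"      # banned outright

# Hypothetical classification rules for a self-assessment. A real assessment
# would follow the Act's annexes and official guidance, not this simplified map.
def self_assess(use_case: str) -> RiskTier:
    banned = {"social scoring", "predictive policing"}
    high_risk = {"biometric identification", "credit scoring", "hiring"}
    limited = {"chatbot", "generated content"}
    if use_case in banned:
        return RiskTier.UNACCEPTABLE
    if use_case in high_risk:
        return RiskTier.HIGH
    if use_case in limited:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(self_assess("hiring").value)          # high risk
print(self_assess("spam filtering").value)  # low and minimal risk
```

In practice an organization would document the reasoning behind each classification, since high-risk systems must go through conformity assessment and registration before deployment.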

Amid the hype around generative AI, the new legislation shines a spotlight on high-risk uses, including facial recognition technologies and other profiling systems. The new rules will set stringent limits on high-risk AI systems that can influence voters or harm people’s health or societal well-being. 

Content produced using popular generative AI models such as ChatGPT must be labeled as AI-generated, as stipulated in the AI Act. The Act also requires providers to publish summaries of the copyrighted data used to train their models. Businesses deploying high-risk AI systems will bear most of the obligations.

The AI Act is the first AI legislation in the world to impose heavy fines for non-compliance – up to $32 million or 6% of global annual turnover, whichever is higher. The EU’s AI Act is the strictest AI law in the world and will become the benchmark for regulations in other jurisdictions.
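As a quick illustration of how such a ceiling works, the sketch below takes the greater of the fixed amount and the turnover-based percentage. The function name and the example turnover figure are hypothetical; the $32 million and 6% values are simply the figures cited in this article.

```python
def fine_ceiling(global_annual_turnover: float,
                 fixed_cap: float = 32_000_000,   # fixed amount cited in this article
                 pct_cap: float = 0.06) -> float:
    """Return the maximum applicable fine: the greater of the fixed cap
    and the percentage of global annual turnover."""
    return max(fixed_cap, pct_cap * global_annual_turnover)

# Hypothetical company with $2 billion in global annual turnover:
print(f"{fine_ceiling(2_000_000_000):,.0f}")  # 120,000,000 -> the 6% cap applies
```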

Upcoming Regulations

The EU’s AI Act is the pioneering regulation of its kind anywhere in the world. Other countries may now follow suit and enact regulations that control the development of advanced AI systems. 

Back in April, EU lawmakers still working on the AI Act called for a global summit to explore how the development of advanced AI systems could be regulated. A few other countries have already begun working on their own AI regulations; Canada, for example, is drafting its Artificial Intelligence and Data Act.

The US and the UK, on the other hand, have adopted a different stance and are more cautious in their approach to regulating AI development. The UK reiterates that it has taken a pro-innovation approach to AI regulation, under which there will be no new regulations or regulatory body. Instead, responsibility for AI regulation will pass to existing regulators in the sectors where AI tools or systems are deployed. 

Recently, the UK announced an investment of $125 million in a Foundation Model Taskforce intended to spur the development of AI systems that will boost GDP. The jury is still out on whether the UK will abandon its light-touch approach to AI regulation in the face of increased public concern and the passage of the EU’s AI Act.

Hurdles to Implementation

Copyrighted Materials – the AI Act probably poses the biggest hurdle for ChatGPT and other generative AI systems that scrape vast amounts of text from the internet, much of which comes from copyrighted sources. 

Economic Ramifications – social media sites such as Reddit are considering charging for the use of their application programming interfaces (APIs). OpenAI, the creator of ChatGPT, relies heavily on data scraped for free from Reddit to train its AI systems. 

National Security Concerns – the European Council will have to thrash out the finer details among its members on biometric surveillance and facial recognition. Some EU countries would prefer to include exceptions for their defense and national security applications. 

Challenge to the Power of Tech Giants – the new regulations may put the EU on a collision course with global tech corporations. In May, Google left the EU out when launching its chatbot, Bard, across 180 countries and territories.

Increased Paperwork – EU member countries must now find considerable resources to hire AI experts to review and process the expected volume of paperwork. 

Conclusion

The enactment of the AI Act solidifies Europe’s position as the de facto leader in global tech regulation. The AI Act will influence AI policymaking across the globe. 

Just like the General Data Protection Regulation (GDPR), the AI Act will become the new benchmark. For example, Microsoft has indicated that it intends to extend the rights at the heart of GDPR to all consumers globally irrespective of whether they reside in Europe. Global tech leaders may borrow heavily from the EU’s AI Act to deploy AI tools and streamline AI practices rather than consider it a challenge to their power.

👉 Author: Alessandro Civati

👉 Blockchain Registration: https://lutinx.com/home/published/NTAw
