The Impact of the EU's AI Act on US Businesses

The European Union's AI Act (the “AI Act”), which was adopted by the European Parliament on March 13, 2024, is the first comprehensive legal framework aimed at regulating artificial intelligence (AI). The AI Act has global implications, establishing a foundation for trustworthy AI in Europe and signaling a shift toward more stringent AI governance worldwide. This article explores the potential impacts of the AI Act on US businesses and offers insights into navigating the evolving landscape of AI regulation.

Understanding the EU AI Act

The AI Act is a comprehensive attempt by the European Union to manage the complex challenges and risks posed by AI technologies while fostering innovation and ensuring the safety and rights of individuals and businesses. The AI Act seeks to categorize AI systems according to risk levels, ranging from unacceptable and high-risk to limited and minimal or no risk, and imposes obligations on AI developers and companies deploying AI systems that emphasize the importance of transparency, accountability, and ethics.


Assessing the Risk of AI Systems

The AI Act provides a regulatory framework defining four levels of risk for AI systems:

  • Unacceptable risk: All AI systems considered a clear threat to the safety, livelihoods, and rights of people will be banned. The European Parliament has provided examples of systems that pose unacceptable risk, including:

    • Cognitive behavioral manipulation: systems that target and manipulate people or specific vulnerable groups, such as voice-activated toys that encourage dangerous behavior in children;

    • Social scoring: systems that classify people based on behavior, socio-economic status, or personal characteristics;

    • Real-time and remote biometric identification systems, such as facial recognition used to identify individuals or groups of people in public.

The scope of banned biometric identification systems is unclear, particularly as it relates to law enforcement. For example, the European Parliament appears to have specifically banned predictive policing, indicating that systems that predict that individuals with certain “personality traits or characteristics” are more likely to commit, or to have committed, crimes are prohibited. However, geographic crime prediction systems that are currently in use, and that have been shown to reinforce racist stereotypes, can remain in use.

The Parliament further provided law enforcement exceptions: “real-time” remote biometric identification systems are allowed in “serious cases,” and identification after a significant delay is allowed to prosecute serious crimes, subject to court approval. The intended meaning of “serious cases,” and who decides which crimes qualify as “serious crimes” for real-time applications, remains unclear.

  • High risk: AI systems that could negatively impact safety or fundamental rights are considered high risk. The Parliament defines two categories of high-risk systems:

    • Systems used in products falling under the EU’s product safety legislation: toys, aviation, cars, medical devices, and industrial machines and equipment.

    • AI systems falling into specific areas that will have to be registered in an EU database:

      • Management and operation of critical infrastructure

      • Education and vocational training

      • Employment, worker management and access to self-employment

      • Access to and enjoyment of essential private services and public services and benefits

      • Law enforcement

      • Migration, asylum, and border control management

      • Assistance in legal interpretation and application of the law

High-risk AI systems will undergo a third-party assessment before entering European markets and after any substantial modification throughout the system's life cycle. Citizens affected by high-risk systems will be able to file complaints about them with designated national authorities.

  • Limited risk: For AI systems posing limited risk, the Act mandates specific transparency measures to inform users when they are interacting with AI, such as chatbots. In addition, AI-generated content, including text, images, audio, and video (e.g., deepfakes), must be readily identifiable or labeled.

  • Minimal or no risk: The vast majority of AI systems fall within the minimal or no risk category, which includes, for example, AI-based games, spam filters, and search systems.  

The Act also addresses “general-purpose” AI models by introducing transparency obligations and risk management requirements for their providers, carving out a separate classification for general-purpose systems that can be regulated further in the future.

Strategic Implications for US Businesses

US companies that adapt quickly to these regulations may find new opportunities for competitive advantage over companies that are slow to adapt. The EU represents one of the largest markets for AI-based products and services in the world, and its regulatory approach will likely influence many other jurisdictions, including Canada, Australia, India, Japan, China, and the US. To develop an international market successfully, businesses should stay ahead of potential regulatory changes by fostering a culture of compliance and ethical AI use.

At a minimum, US businesses must ensure their AI products and services comply with the AI Act to maintain access to the European market. While the Act aims to reduce compliance burdens by making regulatory requirements transparent, US companies developing high-risk or unacceptable-risk AI products may need to make significant adjustments to how their AI systems are designed, developed, and deployed. Those developing limited-risk and minimal- or no-risk products should be prepared to notify users when content is generated by an AI system, or to mark AI-generated outputs, if they are not doing so already.

Navigating the Transition

The transition to the new regulatory framework is an opportunity for US businesses to engage in early compliance and invest in ethical AI. Understanding and integrating the key obligations of the AI Act allows businesses to mitigate risks and seize a first-mover advantage. Emphasizing ethical AI practices and transparency not only ensures compliance but also builds trust with consumers and regulators.

Conclusion

The EU's AI Act is a landmark regulation that sets a new standard for AI governance globally. For US businesses, it presents a call to action to reevaluate their AI strategies in light of these new requirements and embrace the principles of trustworthy AI. Companies that adopt strategies to comply with the Act will lead the way in ethical AI development, build trust among consumers, and ensure access to global markets.
