Overview:

The European Union’s (EU) Artificial Intelligence Act (AI Act) has been making waves in the global AI community. Set to become the world’s first AI regulation, it establishes a legal framework for AI development and use to ensure the technology is safe, trustworthy, and transparent.

While the EU AI Act primarily applies to EU-based companies, it can also have significant implications for U.S. AI companies. I will delve into how the EU AI Act may accelerate compliance for U.S. AI companies, providing legal opinions and real-world examples.

Introduction:

The EU AI Act is a groundbreaking piece of legislation that seeks to regulate AI development and use within the EU. Scheduled to become law in 2024, it covers a wide range of AI applications and systems, from autonomous vehicles and facial recognition technology to recruitment software and healthcare algorithms. The AI Act defines three risk categories: unacceptable risk, high risk, and low or minimal risk. Although the AI Act is aimed primarily at EU-based companies, it can also affect U.S. AI companies that operate within the EU or provide services to EU customers. Even without explicitly offering services in the EU, U.S. AI companies would be wise to take note of the AI Act’s provisions and begin preparing for potential compliance requirements.
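To make the tiered approach concrete, the risk categories above can be sketched as a simple lookup. This is only an illustrative sketch: the example systems assigned to each tier are assumptions drawn from the kinds of applications discussed in this article, not a legal classification.

```python
# Illustrative sketch of the AI Act's risk tiers as described above.
# The tier names follow the article; the example systems are
# assumptions for illustration, not a legal determination.
RISK_TIERS = {
    "unacceptable risk": ["social scoring by public authorities"],
    "high risk": ["recruitment software", "healthcare algorithms"],
    "low or minimal risk": ["spam filters", "ai-enabled video games"],
}

def tier_of(system: str) -> str:
    """Return the risk tier for a listed example system, else 'unclassified'."""
    system = system.lower()
    for tier, examples in RISK_TIERS.items():
        if system in examples:
            return tier
    return "unclassified"

print(tier_of("Recruitment software"))  # high risk
```

In practice, classification under the Act depends on the system's intended purpose and context of use, which is exactly why the compliance decision process discussed later is so involved.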

Like the U.S. National Institute of Standards and Technology’s AI Risk Management Framework, the EU AI Act offers generic guidance rather than industry-specific protocols. As U.S. regulators absorb and refine the EU AI Act, they will likely apply similar measures to U.S. companies offering AI technology.

Lending credence to the impact the EU AI Act will have even on U.S. soil, seven prominent AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) have committed to self-regulating their AI development.

How Complex is the EU AI Act and How Hard is it to Comply?

Burges Salmon, a UK law firm focused on AI, has put in the hours and intellect to create a diagram showing the compliance processes the EU AI Act calls for. Be prepared to be both educated and intimidated by the details.

[Diagram: one half of the EU AI Act compliance decision-process chart.]

The full process chart can be found at: https://blog.burges-salmon.com/PostFile/102i9my

There are several areas of impact for U.S. AI Companies, including:

Compliance Costs: The AI Act’s compliance costs could be a significant barrier for U.S. AI companies, especially smaller start-ups and small and medium enterprises (SMEs). Under the AI Act, companies may need to invest in legal, technical, and human resources to ensure compliance with the new rules. This can include hiring AI ethicists, data protection officers, and other compliance specialists like DSG. The penalties for being out of compliance with the new act far outweigh the modest investment in DSG risk systems.

Legal and Regulatory Risks: The EU AI Act introduces new legal and regulatory risks for U.S. AI companies operating in the EU. Non-compliance with the AI Act can result in fines of up to 6% of a company’s global annual turnover or €30 million, whichever is higher. Additionally, companies found to be in violation of the AI Act could face reputational damage and loss of business opportunities.
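For a sense of scale, the penalty ceiling described above (the higher of 6% of global annual turnover or €30 million, per the draft text) can be computed directly. The turnover figure below is a hypothetical example, not data about any real company.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative ceiling on AI Act fines per the draft text:
    the higher of 6% of global annual turnover or EUR 30 million."""
    return max(0.06 * global_annual_turnover_eur, 30_000_000)

# Hypothetical company with EUR 2 billion in global annual turnover:
print(max_fine_eur(2_000_000_000))  # 120000000.0 -- 6% exceeds the EUR 30M floor
```

Even at modest turnover levels, the €30 million floor dominates, which is why compliance investment is small by comparison.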

As mentioned earlier, these financial and reputational penalties mean that sound business risk mitigation calls for adding a service like DSG’s AI control measures to the product cost structure.

Adoption of Best Practices: In response to the AI Act, U.S. AI companies may need to adopt best practices in AI development and use. This includes ensuring transparency, accountability, and explainability in AI systems, as well as addressing potential biases and ensuring data privacy and security. DSG offers these services. By adopting these best practices, U.S. AI companies can not only ensure compliance with the AI Act but also enhance the trustworthiness of their AI systems, making them more attractive to customers and investors alike.

Real-World Examples:

Microsoft: Microsoft has been a vocal advocate for responsible AI development and use. The company has published its AI principles, which include fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. By adopting these principles, Microsoft has demonstrated its commitment to responsible AI practices, which can help it navigate the compliance landscape introduced by the AI Act.

IBM: IBM has also been at the forefront of responsible AI development. The company has developed a set of AI ethics principles, similar to Microsoft, which include accountability, transparency, and explainability. IBM has also developed tools and services to help companies implement these principles, such as the AI Fairness 360 toolkit, which helps detect and mitigate bias in AI systems. By embracing these practices, IBM can position itself as a leader in responsible AI development and ensure compliance with the AI Act.

Google: Google has also been working to develop AI systems that adhere to its AI principles, which include being socially beneficial, avoiding creating or reinforcing unfair bias, being accountable to people, and upholding high standards of scientific excellence. By focusing on these principles, Google positions its AI systems for compliance with the AI Act and helps drive the responsible development and use of AI across the industry.

The EU AI Act is a landmark piece of legislation that will have significant implications for AI development and use around the world. While the AI Act primarily applies to EU-based companies, it will quite likely also affect U.S. AI companies that operate within the EU or provide services to EU customers. By adopting best practices in AI development and use, U.S. AI companies can both prepare for compliance with the AI Act and strengthen the trustworthiness of their AI systems, making them more attractive to customers and investors alike. As the AI Act comes into force, U.S. AI companies should take note of its provisions and begin preparing now.
