The year 2023 marked a pivotal moment for the legal, regulatory, and policy landscape surrounding artificial intelligence (AI). As public debate intensified and both commercial and public sector adoption of AI capabilities surged, several legal frameworks reached significant milestones. Notably, some of these frameworks predate the emergence of generative AI models, suggesting that more targeted regulations addressing advancing capabilities are likely to follow.

Gibson Dunn, one of the leading law firms with deep AI experience, has published a detailed review and outlook report on AI.

Below is a summary of some of the key trends identified by Gibson Dunn.

Key Developments in 2023

1. EU’s AI Act: Overcoming Challenges

The European Union’s (“EU”) AI Act faced near derailment due to the emergence of foundation models (referred to as “general purpose AI”). Nevertheless, it now approaches the finish line, set to become the first comprehensive AI law directly regulating AI systems based on inherent risk. The AI Act will have sweeping consequences beyond EU borders: as one of the broadest AI measures to date, it is likely to serve as a model for other nations crafting their own legislation.

2. U.S. Approach: Sector-Focused and Self-Regulatory

In contrast to the EU’s broad, framework-based approach, the United States continues to rely on a largely industry-driven, sector-focused, and self-regulatory approach to AI. Although efforts to develop a federal framework fell short, the landscape remains dynamic, with both state and national attempts at AI regulation continuing. Notable developments include:

  • A sweeping White House executive order on AI safety and data security
  • Voluntary private-sector commitments on transparency for cutting-edge AI applications
  • Regulatory guidance based on the National Institute of Standards and Technology (NIST) AI Risk Management Framework 1.0
  • Statements by agencies such as the Federal Trade Commission (FTC), Department of Justice (DOJ), Equal Employment Opportunity Commission (EEOC), Securities and Exchange Commission (SEC), and Consumer Financial Protection Bureau (CFPB). On March 7, 2024, the DOJ specifically announced its intent to pursue AI-related violations aggressively.
  • Ongoing Senate efforts to develop AI legislative and regulatory frameworks

3. Focus on Data Use and Privacy

Both federal and state levels sharpened their focus on the allegedly improper use of protected data (e.g., personal or copyrighted data) for model development and product improvement.

Currently, several states in the United States have AI legislation or are working on AI legislation. Some of these states include:

California: California has passed several AI-related laws, including the California Consumer Privacy Act (CCPA), which regulates the use of AI for consumer data privacy. The state is also working on the California Autonomous Vehicles Passenger Safety Act, which sets safety standards for autonomous vehicles.

Illinois: Illinois has enacted the Artificial Intelligence Video Interview Act, which requires companies to obtain consent from job applicants before using AI to analyze their video interviews. The state is also considering the Facial Recognition and Biometric Information Privacy Act, which would regulate the use of facial recognition technology.

Massachusetts: Massachusetts has passed the Algorithmic Management and Decision-Making Transparency Act, which requires companies to disclose the use of AI in their decision-making processes. The state is also working on the AI Education Act, which would create a grant program to support AI education in schools.

New York: New York has enacted the New York Stop Hacks and Improve Electronic Data Security Act (SHIELD Act), which regulates the use of AI for data security. The state is also considering the New York State Artificial Intelligence Act, which would create a task force to study the impact of AI on the economy and society.

Texas: Texas has passed the Texas Privacy Protection Act, which regulates the use of AI for data privacy. The state is also considering the Texas Artificial Intelligence Task Force Act, which would create a task force to study the impact of AI on the economy and society.

Washington: Washington has passed the Washington State AI in Government Act, which requires state agencies to evaluate the use of AI in their decision-making processes. The state is also considering the Washington State Artificial Intelligence and Privacy Act, which would regulate the use of AI for data privacy.

These are just a few examples; other states, such as Maryland, Michigan, and Utah, have also introduced or passed AI-related laws or are considering doing so. As AI technology continues to advance and become more prevalent, it is likely that more states will enact AI legislation in the future.

Outlook for 2024

1. Geopolitical Goals and Regulatory Models

AI’s geopolitical significance remains uncertain in a year when half the world’s population is set to cast ballots in elections. Governments worldwide will continue experimenting with different regulatory models to govern foundation models and other AI deployments. These models aim to achieve political, societal, and geopolitical objectives.

The tenor of current AI legislation is to penalize violations of AI-oriented regulations harshly. These penalties can amount to a hefty percentage of a company’s sales, making an investment in contracting with Data Science Group for AI control and monitoring a high-ROI, risk-mitigating proposition.

2. Societal Norms and Risk Awareness

As AI becomes more entrenched, evolving societal norms and risk awareness will play a crucial role. These developments will span various legal domains. For instance:

  • Competition and market authorities worldwide are signaling increased scrutiny of the market impact of leading AI companies.
  • The EU’s AI Act will require nearly all companies that use AI in their products, services, and supply chains to assess risk profiles and potential liability.
  • Comprehensive AI laws and governance tools are being proposed and debated worldwide.

3. Continued Regulatory Momentum

In the U.S., the FTC, California’s Privacy Protection Agency (CPPA), and other federal and state regulators will persist in establishing themselves as key agencies in this rapidly evolving space.

International, national, state, and industry entities are all shaping the AI regulatory landscape. Staying on top of this dynamic environment requires AI companies to work with outside specialists to monitor, integrate, and apply regulations that may overlap with, or contradict, one another across agencies and jurisdictions. DSG.AI works with AI companies globally to ensure compliance and control for both existing models and those under development.

Contact DSG for a free initial consultation about your AI assessment.