A Comprehensive Guide for AI Companies


The rapid growth of artificial intelligence (AI) has spurred a wave of regulatory activity in the United States. While the federal government has yet to enact comprehensive AI legislation, as the EU has done, a number of states have begun developing their own AI regulatory frameworks. This article provides an up-to-date overview of the current state-by-state AI regulatory landscape, its potential impact on AI companies, and best practices for navigating this complex and evolving environment.


As AI technology continues to advance and proliferate across industries, concerns regarding its potential impacts on privacy, security, and fairness have grown. In response, several U.S. states have developed their own AI regulatory frameworks, with more expected to follow suit in the coming years. While these state-level initiatives are not yet as comprehensive as the European Union’s Artificial Intelligence Act, they signal a growing interest in regulating AI at the state level.

A Summary of Different Jurisdictions and Their Regulations: The State-by-State AI Regulatory Landscape

California: California has been at the forefront of AI regulation in the United States, largely because it is one of the hubs of AI development. The California Consumer Privacy Act (CCPA), which took effect in January 2020, includes provisions that may apply to AI companies. The CCPA grants consumers the right to opt out of the sale of their personal information, which could impact AI companies that rely on consumer data to train and improve their algorithms.

Additionally, California lawmakers have proposed algorithmic accountability legislation that would require companies to conduct regular assessments of their AI systems to check for bias and other adverse impacts. If enacted, such requirements would likely have a significant impact on AI companies operating in California, which would need to invest in resources to conduct these assessments and implement any necessary corrective actions.

New York: New York has also taken steps to regulate AI, primarily through its proposed “Algorithmic Accountability Bill.” This bill, which has not yet been enacted, would require companies to conduct impact assessments of their AI systems and take steps to mitigate any negative impacts on individuals or groups. The bill would also establish a task force to study the potential impacts of AI on civil rights and liberties.

Illinois: Illinois has passed the Artificial Intelligence Video Interview Act, which regulates the use of AI in employment-related video interviews. The law requires employers to obtain consent from job applicants before using AI to analyze their facial expressions or other characteristics. This legislation could impact AI companies that develop or sell AI-based recruitment tools, as they may need to ensure compliance with Illinois’ unique requirements.

Although this legislation is narrowly focused, Illinois is expected to broaden its AI lawmaking toward a general framework similar to those emerging in other states.

Massachusetts: Massachusetts has proposed the “Artificial Intelligence Transparency Act,” which would require companies to disclose when they use AI to make decisions that impact individuals. The proposed legislation would also establish a task force to study the potential impacts of AI on civil rights and liberties.

South Carolina: Though South Carolina issued general AI recommendations in 2020, the state is now developing more specific AI legislation. Like other governments, South Carolina aims to encourage AI development while ensuring fairness, transparency, privacy, copyright protection, and safety. Its legislation is likely to follow guidelines similar to those of California and New York.

Impact on AI Companies: The state-by-state AI regulatory landscape has the potential to significantly impact AI companies, particularly those that operate in multiple states. Compliance costs could be a significant barrier for smaller startups and SMEs, as they may need to invest in legal, technical, and human resources to ensure compliance with the various state-level requirements.

Additionally, non-compliance with state-level AI legislation can result in fines, reputational damage, and loss of business opportunities. AI companies that fail to adapt to the evolving regulatory landscape may find themselves at a competitive disadvantage, as they may be unable to access certain markets or secure contracts with regulated entities.

The business case for employing DSG’s AI control and monitoring processes gets stronger with each new AI regulation.

Best Practices for Navigating the AI Regulatory Landscape:

Stay informed: AI companies should closely monitor developments in state-level AI regulation and stay up to date on any new legislation or proposed bills. This can help companies anticipate potential compliance requirements and adapt their business practices accordingly.

Conduct regular risk assessments: AI companies should conduct regular risk assessments to identify potential compliance risks and develop strategies to mitigate them. This can include reviewing existing AI systems and processes, identifying areas for improvement, and implementing corrective actions as needed.

Develop a compliance plan: AI companies should develop a comprehensive compliance plan that outlines their approach to state-level AI regulation. This plan should include policies and procedures for ensuring compliance with relevant legislation, as well as a process for addressing any non-compliance issues that arise. Engaging an independent, globally experienced AI control and monitoring company like DSG can further mitigate risks and support sustainable, compliant growth.

Engage with stakeholders: AI companies should engage with stakeholders, including regulators, policymakers, and industry groups, to stay informed about developments in AI regulation and advocate for policies that support innovation and growth in the AI sector.

The state-by-state AI regulatory landscape is complex and constantly evolving. While this can create challenges for AI companies, it also presents an opportunity for companies to demonstrate their commitment to responsible AI development and use. By staying informed, conducting regular risk assessments, developing a compliance plan, and engaging with stakeholders, AI companies can navigate the regulatory landscape and position themselves for success in the years to come.
