What Does AI Look Like Within The Regulatory Landscape

Dec 14 / Darren Winter

Below, we summarise the major developments surrounding the regulation of artificial intelligence in the US and elsewhere, and outline steps companies can take to avoid potential regulatory pitfalls.

While the US currently has no federal AI regulation in place, regulators have sent a clear signal that AI regulation is on the horizon.

The European Commission likewise has no binding legislation or concrete standards yet for regulating the development and use of AI. At the international and supranational levels, AI is governed through a patchwork of rules and voluntary standards rather than formal regulatory frameworks.

For example, application-specific rules for AI can range from an outright ban on particular AI technologies to detailed technical standards governing how AI systems are built, along with requirements for interpretability, standards for the development process, and rules on liability. Any such regulation would affect the use of AI tools across industries, and across functions within them, to varying degrees. The EU's proposed AI Act, for instance, would also reach other types of AI, requiring disclosure for lower-risk AI systems and banning certain categories of AI outright, though these lower-risk provisions will probably attract less trade and international regulatory attention.

Australia, at present, likewise has no specific regulatory framework governing AI development and use, and relies on existing legislation and standards until bodies such as Standards Australia develop new ones. Many reports indicate that AI legislation worldwide is still in its early days, with very few legislative provisions or other regulatory tools governing the development and use of AI.

In the recently released report Regulation of Artificial Intelligence, the Law Library of Congress examines the emerging regulatory and policy landscape surrounding AI across jurisdictions worldwide, including guidelines, ethics codes, and actions and statements by governments and their agencies.

To help organisations navigate the evolving landscape of proposed laws, regulations, and standards, the Responsible AI Institute (RAII) has mapped more than 200 AI-related international principles and policy documents, many of which are tracked by country in the OECD's AI Policy Observatory.

RAII's implementation framework is grounded in globally agreed-upon principles, incorporates requirements from both established and proposed regulations, and is tailored to specific industries, use cases, and settings. It can also inform finer-grained requirements, such as the documentation an AI system needs to demonstrate compliance with regulations. The framework covers existing laws as well as proposed approaches and requirements from laws and standards currently in development.
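
As a concrete illustration of what such compliance documentation might look like, here is a minimal Python sketch of a machine-readable record for a single AI system. The field names, risk tiers, and completeness check are our own illustrative assumptions, not part of RAII's actual framework.

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """Documentation for one deployed AI system, kept for audit purposes."""
    name: str
    intended_use: str
    risk_tier: str  # illustrative tiers, e.g. "minimal", "limited", "high"
    training_data_sources: list[str] = field(default_factory=list)
    applicable_regulations: list[str] = field(default_factory=list)
    evaluations_performed: list[str] = field(default_factory=list)

    def is_documented(self) -> bool:
        # Trivial completeness check: every field must be filled in before
        # the system can be signed off for deployment.
        return all([self.name, self.intended_use, self.risk_tier,
                    self.training_data_sources, self.applicable_regulations,
                    self.evaluations_performed])


record = AISystemRecord(
    name="loan-approval-model-v2",
    intended_use="Rank consumer credit applications for manual review",
    risk_tier="high",
    training_data_sources=["internal_applications_2018_2021"],
    applicable_regulations=["GDPR Art. 22", "ECOA"],
    evaluations_performed=["disparate-impact screen", "robustness tests"],
)
print(record.is_documented())  # True
```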

We have been researching how AI algorithms should be regulated and how AI systems should be deployed, guided by the key principles underlying the proposed regulatory frameworks, and we have helped companies across industries start and grow AI-driven initiatives. Regulators, government agencies, and consumer advocates are all heavily focused on addressing the unintended consequences that can arise as algorithms and AI are developed and used.

While the regulatory, government policy, and legal landscape is far from settled, companies can take the initiative to manage the risks of developing and using AI systems through thoughtful, responsible, and ethical AI programs.

A thoughtful, responsible AI program (or risk management framework) will incorporate widely accepted principles of AI development and use, many of which are central to current regulatory and public-policy discussions. For instance, the National Institute of Standards and Technology (NIST) is developing an AI Risk Management Framework, which addresses risks to AI systems across design, development, use, and evaluation.
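
Here is a minimal sketch of what a lifecycle risk register in that spirit might look like, using the four stages named above; the structure, severity levels, and example risks are illustrative assumptions rather than NIST's actual taxonomy.

```python
from collections import defaultdict

STAGES = ("design", "development", "use", "evaluation")


class RiskRegister:
    """Log risks against the lifecycle stage where they were identified."""

    def __init__(self) -> None:
        self._risks: dict[str, list[dict]] = defaultdict(list)

    def log(self, stage: str, description: str,
            severity: str, mitigation: str) -> None:
        if stage not in STAGES:
            raise ValueError(f"unknown lifecycle stage: {stage!r}")
        self._risks[stage].append({"description": description,
                                   "severity": severity,
                                   "mitigation": mitigation})

    def open_high_severity(self) -> list[tuple[str, str]]:
        """Return (stage, description) for every high-severity risk."""
        return [(stage, r["description"])
                for stage, risks in self._risks.items()
                for r in risks if r["severity"] == "high"]


register = RiskRegister()
register.log("design", "Training data under-represents some regions",
             severity="high", mitigation="Augment and re-sample the data")
register.log("use", "Model drift after deployment",
             severity="medium", mitigation="Scheduled re-evaluation")
print(register.open_high_severity())
# [('design', 'Training data under-represents some regions')]
```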

Companies adopting AI technologies are well advised to make sure their proposed uses do not violate restrictions set out in laws and regulations on privacy, non-discrimination, data protection, and other relevant frameworks. Regulations for autonomous vehicles (AVs) are among the most complex that any AI system faces and are therefore worth studying for their approaches to cybersecurity and liability.
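
On the non-discrimination point, one widely used screening heuristic is the "four-fifths rule" from US employment-selection guidance: if one group's selection rate falls below 80% of another's, the outcome warrants review. The sketch below assumes a binary decision and two groups; the data are illustrative, and no single metric substitutes for legal review.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)


def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high


# Model decisions (1 = approved) split by a protected attribute.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 0.625
group_b = [1, 0, 0, 1, 0, 0, 0, 0]   # selection rate 0.25

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40
if ratio < 0.8:  # the four-fifths screening threshold
    print("flag for review: possible adverse impact")
```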

AI laws and regulations fall into three broad themes: the regulation of AI systems themselves, liability and responsibility for those systems, and issues of privacy and security.

The ethical implications of AI are addressed, where they are addressed at all, by AI-specific codes, by companies concerned with good corporate social responsibility, or by research institutions (private or public) committed to ethical research and innovation. The ethical development and application of AI is mostly governed through codes of conduct and business-management-style documents.

In China, regulation of the ethical and legal underpinnings of AI development is in its early stages, but existing policies already give the state oversight of Chinese companies and of high-value data, including requirements that Chinese users' data be retained inside China, along with mandatory adherence to the People's Republic's national standards for AI, including those for big data, cloud computing, and industrial software.

Regulation is taking shape to ensure that organisations strike the right balance between innovation and consumer privacy. Nearly three years ago, organisations were building roadmaps for robust information and data management programs to meet the European Union's (EU) General Data Protection Regulation (GDPR). The GDPR remains the landmark legislation governing the data on which AI systems depend, and additional regulatory policies are now moving in the same direction.

These requirements are likely to lay the groundwork for future legislation similar in scope and impact to the GDPR, suggesting that the European Commission may be on the verge of proposing a dedicated AI law. Rather than putting forward draft regulations at this stage, the Commission has outlined the legal requirements that any regulatory framework should cover to ensure AI remains trustworthy and respects the values and principles of the European Union.

While we are confident that stratigraphy offers a useful heuristic for categorising existing AI regulatory measures, we want to focus on five themes that are critical for understanding the kinds of materials that make up this regulatory stratigraphy and the ways in which laws may eventually shape the development and adoption of AI.

Useful Links:

https://en.wikipedia.org/wiki/Regulation_of_artificial_intelligence
https://www.frontiersin.org/articles/10.3389/fcomp.2021.779957/full
https://blogs.loc.gov/law/2019/07/new-report-on-the-regulation-of-artificial-intelligence/
https://oecd.ai/wonk/emerging-regulatory-landscape-ai
https://www.lexology.com/library/detail.aspx?g=4c845762-e954-4f47-8809-f5ad0f5d3716
https://www.howtoregulate.org/artificial_intelligence/
https://www.ibm.com/blogs/journey-to-ai/2021/04/three-resources-to-help-you-understand-todays-data-and-ai-regulatory-landscape/
https://home.kpmg/ch/en/blogs/home/posts/2021/06/ai-regulatory-landscape.html
https://www.brookings.edu/blog/techtank/2022/02/01/the-eu-and-u-s-are-starting-to-align-on-ai-regulation/
https://www2.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2022/ai-regulation-trends.html
https://hbr.org/2021/09/ai-regulation-is-coming
https://www.jdsupra.com/legalnews/responsible-ai-managing-risk-in-an-9604009/
https://www.orrick.com/en/Insights/2021/11/US-Artificial-Intelligence-Regulation-Takes-Shape
https://www.mayerbrown.com/en/perspectives-events/publications/2022/01/contracting-for-ai-in-the-evolving-regulatory-landscape
