Emerging Issue: AI Governance
April 3, 2023
Advancements in Artificial Intelligence have dominated the news recently in fascinating ways. Generative AI chatbots demonstrated seemingly spontaneous responses to test users. Visual artists filed a class action lawsuit to protect their intellectual property against AI “digital scraping” technology. Unions representing creative professionals have begun to discuss the role generative AI will play in compensation structures. It is exciting to imagine the future with technology that can help us make creative, informed decisions about some of our most pressing problems.
But we must also imagine the negative implications of such powerful technology. AI has already been integrated into business and government, and public and private sector entities use it in many aspects of their operations every day. For example, financial services entities use AI to assist consumers with signing up for credit cards, applying for loans, and opening accounts, and to make credit decisions. Manufacturers use AI to coordinate logistics, conduct safety inspections, and automate robot-driven processes. Health care entities use AI to assist with diagnoses, monitor health data, and flag high-risk patients. Municipalities use AI facial-recognition technology and compile data for risk assessments that predict where crimes or abuse may occur. Even more broadly, employers in all sectors can use AI to screen resumes and sort applicants.
Imagine AI being used to make credit decisions, but the data fed into the system carries an ingrained bias about the meaning of certain demographic information. Imagine facial-recognition software being used to arrest someone, but the software cannot identify certain features with 100% accuracy. Imagine a health care treatment decision being made by AI, but the system does not take into account the risk of certain medication interactions. Some of these outcomes have already been identified, and they highlight the fundamental fact that the technology requires human insight at every level of development and use.
As the public learns more about this technology and its implications, calls to create comprehensive AI governance grow louder. Notably, conversations about AI governance include not only the technical aspects of AI but also concerns about how AI and algorithmic learning are used. In other words, the power of the technology depends on the decisions we make about how the system is made, sold, and used.
This is not only a fact of the technology, but a central consideration for users of AI who must comply with laws, rules, and regulations in the conduct of their businesses.
What does this all mean, and what is the conversation about it? Let’s start from the beginning.
What is AI?
AI does not have a universal definition. The New York State Comptroller’s Office defines AI as the ability of a machine to perform human cognitive functions, such as perceiving, evaluating, learning, and drawing conclusions based on external data. See Office of the New York State Comptroller, Division of State Government Accountability, Artificial Intelligence Governance audit report, February 16, 2023. The New York City Office of Technology and Innovation defines AI as “an umbrella term without precise boundaries, that encompasses a range of technologies and techniques of varying sophistication that are used to, among other tasks, make predictions, inferences, recommendations, rankings, or other decisions with data, and that includes topics such as machine learning, deep learning, supervised learning, unsupervised learning, reinforcement learning, statistical inference, statistical regression, statistical classification, ranking, clustering, and expert systems.” NYC OTI comment to Artificial Intelligence Governance.
AI in practice is often referred to as an “algorithmic tool,” defined by the NYS Comptroller as “any technology or computerized process that is derived from machine learning, AI, predictive analytics, or similar methods of data analysis and used to make decisions about and implement policies that materially impact the rights, liberties, benefits, safety, or interests of the public, including their access to services and resources for which they may be eligible.” See Artificial Intelligence Governance.
What is AI Governance?
AI governance can be broadly defined as the overall process of directing, managing, and monitoring the AI activities of an organization. It is important to understand that the current discussion about AI puts the burden for compliance on the user, not the manufacturer. Entities will therefore be responsible not only for knowing what AI they are using, but also what it is intended to do, how it works, and whether it has any negative or unintended effects or outcomes. A governance framework is intended to guide that evaluation process, either internally as a best practice or procedure, or externally to comply with governmental regulation.
What Laws and Regulations Apply To AI?
Importantly, several existing laws and regulatory schemes already affect the development, monitoring, and use of AI and automated decision-making systems. These include the NLRA (the NLRB considers intrusive employee-monitoring schemes to potentially violate the NLRA); the ADA (the DOJ will prosecute entities that use AI in a discriminatory manner); and anti-discrimination statutes generally (the EEOC considers the use of AI as potentially having discriminatory impacts on protected groups). Any governance framework in your business must ensure that your use of AI does not have unintended discriminatory or illegal effects.
National and International Laws and Regulations Concerning AI
The European Commission has developed and proposed a regulatory framework and coordinated plan on AI that aim to place clear requirements and obligations regarding specific uses of AI. The Commission plans to release the framework this year.
In 2022, Canada’s Minister of Innovation, Science, and Industry tabled Canada’s first attempt to formally regulate certain artificial intelligence systems as part of broader privacy reforms. Unlike the frameworks currently being discussed in the United States, the proposed Canadian law applies (in sum and substance) to AI designers, developers, and retailers who make AI systems designated as “high-impact systems” under the statute. The law provides that those entities must satisfy certain oversight and compliance requirements.
The United States does not have any comprehensive proposal or regulatory framework. The US Government Accountability Office has proposed an Accountability Framework for Federal Agencies and Other Entities that offers insight into what governance frameworks may look like in the future. Other agencies, including the Department of Defense and the United States Intelligence Community, have released proposed governance frameworks, but no proposal for putting them into effect. The FTC has released an advance notice of proposed rulemaking concerning automated decision-making systems in the context of its data security and sharing rulemaking.
State and Local Laws Concerning AI
Illinois and Maryland regulate automated employment decision-making tools and require intermittent auditing. In 2023, California, Connecticut, Colorado, and Virginia will give effect to legislation that governs automated decision-making with regard to privacy and personal/private data use in a consumer-oriented way (modeled after the European Union’s General Data Protection Regulation).
New York State does not have current or pending legislation concerning AI, but in New York City, Local Law 144 (going into effect this year after several delays) requires that a bias audit be conducted on an automated employment decision tool before the tool is used. LL 144 also requires that stakeholders be notified about the use of such tools.
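For context, the bias audits contemplated by laws like Local Law 144 generally involve comparing outcome rates across demographic groups. The sketch below is a simplified, hypothetical illustration of one such metric, a selection-rate impact ratio; the category names, sample data, and 0.8 review threshold (borrowed from the EEOC’s informal “four-fifths” rule of thumb) are illustrative assumptions, not requirements of any particular statute.

```python
# Hypothetical sketch: computing selection-rate impact ratios for an
# automated employment decision tool. Category names, sample data, and
# the 0.8 threshold (EEOC "four-fifths" rule of thumb) are illustrative
# assumptions, not requirements of Local Law 144.

from collections import defaultdict

def impact_ratios(decisions):
    """decisions: iterable of (category, selected) pairs, where selected
    is True if the tool advanced the candidate."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for category, was_selected in decisions:
        totals[category] += 1
        if was_selected:
            selected[category] += 1

    # Selection rate for each demographic category.
    rates = {c: selected[c] / totals[c] for c in totals}
    highest = max(rates.values())

    # Impact ratio: each category's rate relative to the highest rate.
    return {c: rate / highest for c, rate in rates.items()}

if __name__ == "__main__":
    sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
              + [("group_b", True)] * 25 + [("group_b", False)] * 75)
    for category, ratio in impact_ratios(sample).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{category}: impact ratio {ratio:.2f} ({flag})")
```

Even a simple metric like this shows why audits are required before a tool is deployed: disparities in outcomes are easy to measure once someone looks, but invisible if no one is required to look.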
In 2019, the New York City Mayor signed an Executive Order establishing an Algorithms Management and Policy Officer (“AMPO”) in the Mayor’s Office of Operations. The Executive Order (“EO 50”) tasked the AMPO with implementing a framework that included criteria to help identify and prioritize algorithmic tools and systems that support agency decision making. In February 2023, the Office of the New York State Comptroller released an audit report of the AMPO’s progress, describing how AI and algorithmic tools had been identified within City agencies.
Importantly, however, EO 50 also required the AMPO to create a framework for assessing algorithmic tools, considering their complexity, benefits, impacts, and other relevant characteristics, including the potential risk of harm to any individual or group arising from a tool’s use. The NYS Comptroller found that no such framework was created. As a result, no agency was instructed to conduct risk assessments or evaluate the impact of an algorithmic tool’s use, and there was no requirement to determine whether the algorithmic tools in use by various agencies were functioning as intended, providing a benefit, and not generating harmful, unintended consequences. The Comptroller also found that there were no policies to monitor whether algorithmic tools were being used fairly and responsibly and in accordance with AMPO’s governing principles. Clearly, the use and monitoring of AI must be a dynamic process for any entity to accomplish the goals articulated by the NYS Comptroller.
How Should You Begin to Think About Your AI Use In This Environment?
The federal government has determined that AI systems should be responsible, equitable, traceable, reliable, and governable. See GAO-21-519SP, Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities. Governments at all levels – from the EU to the federal government to municipal agencies – are grappling with how to integrate these core values into a governance and regulatory framework. The conversations are dynamic, but these principles appear to be a widely used foundation.
To provide some more detail about how governance is being developed, the Department of Defense (which issued its own guidance for use and development of AI systems) defined these terms:
Responsible: human beings exercise appropriate levels of judgment and responsibility for the development, deployment, use, and outcomes of DOD’s AI systems.
Equitable: taking deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.
Traceable: having an engineering discipline that is sufficiently advanced that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedures and documentation.
Reliable: having an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.
Governable: designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and allowing for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.
We’ll break down these terms and offer tips for implementing AI governance into your business in our forthcoming articles. We’ll also talk with practice leaders throughout the firm who can discuss specific AI use and implications, and how you can leverage AI, avoid expensive pitfalls, and plan for the next phase of governance requirements.
If you have questions or concerns about the impact of AI governance for you or your business, or would like to begin or supplement your AI governance framework, contact Lippes Mathias attorneys Caitlin O’Neil (coneil@lippes.com) or Jameson Tibbs (jtibbs@lippes.com), who are closely monitoring updates in the artificial intelligence space.