Federal Agencies Signal Collaborative and Increasing Enforcement Efforts Against Discrimination and Bias in Automated Systems

May 18, 2023

While state and federal agencies are developing regulations directly targeting automated decision-making systems, on April 25, 2023, the Consumer Financial Protection Bureau (CFPB), Department of Justice (DOJ), Equal Employment Opportunity Commission (EEOC), and Federal Trade Commission (FTC) issued a joint statement about their enforcement efforts to combat bias arising from the use of automated systems and artificial intelligence.  This article summarizes the recent enforcement actions, the statutes each agency relied upon, and key takeaways about federal oversight of the use of automated systems.  In short, the joint statement signals that enforcement actions are likely to increase, and companies should take steps now to ensure their use of automated systems complies with all applicable laws.
 
The joint statement acknowledged the general benefits of “automated systems,” which were defined to mean “software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions.”  But the focus of the joint statement was on the potential for these systems to perpetuate unlawful bias, automate unlawful discrimination, or produce other harmful outcomes.  Each of the agencies making the joint statement described its role in reducing these outcomes and ensuring automated systems develop and expand in a manner consistent with federal law.  Each agency also cited to its prior guidance in this space, and those resources are summarized herein as well.
 
Department of Justice Civil Rights Division
DOJ underscored that automated systems used to screen tenants for rental housing can result in harms actionable under the Fair Housing Act.  More specifically, DOJ noted that companies using algorithms to screen tenants could face liability if their practices disproportionately deny people of color access to housing opportunities.  The joint statement referenced a Statement of Interest filed by DOJ in Louis v. SafeRent, No. 22-cv-10800 (D. Mass. 2022), detailing the Fair Housing Act’s applicability to a defendant company that allegedly did not make housing decisions and did not own or lease the properties, but simply provided services to assist those making housing decisions.  DOJ concluded that the defendant’s services, which purported to replace human judgment with algorithms, produced the allegedly discriminatory effect and are therefore subject to the Fair Housing Act’s prohibition on discrimination.
 
The takeaway: DOJ is targeting enforcement efforts on housing providers and tenant screening companies to “ensure that all policies that exclude people from housing opportunities, whether based on algorithm or otherwise, do not have an unjustified disparate impact because of race, national origin or another protected characteristic.”
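
By way of illustration only, one common first step in disparate-impact analysis is to compare outcome rates across groups, as in the “four-fifths” heuristic used in adverse-impact testing.  The sketch below uses entirely hypothetical tenant-screening numbers and made-up group labels; the 0.8 threshold is a rule of thumb, not a legal standard, and courts and agencies weigh far more than a single ratio.

```python
# Hypothetical illustration: comparing approval rates across groups.
# The data, group labels, and 0.8 threshold are assumptions for this
# sketch; they are not drawn from the SafeRent litigation or from any
# standard announced in the joint statement.

# (approved, total applicants) per group -- made-up numbers
outcomes = {
    "group_a": (480, 600),
    "group_b": (270, 450),
}

# Approval rate for each group.
rates = {g: approved / total for g, (approved, total) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths heuristic
    print(f"{group}: approval rate {rate:.2%}, ratio {ratio:.2f} -> {flag}")
```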
 
Consumer Financial Protection Bureau
CFPB addressed the increasingly prevalent use of algorithmic decision-making in credit decisions, noting that reliance on complex, opaque, or novel technology is no defense for failing to provide an applicant with a statement of the specific reasons for an adverse action, or for otherwise failing to comply with consumer financial laws.  The Equal Credit Opportunity Act applies to all credit decisions, regardless of the technology used to make them, and a creditor will be held responsible for relying on an algorithm that violates the law.
 
The takeaway: creditors may not use complex algorithms if doing so leaves them unable to provide the specific and accurate reasons for adverse actions that the law requires.
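
As a purely illustrative sketch of the compliance problem: a creditor that scores applicants with a model must still be able to name the specific factors that drove a denial.  The hypothetical code below assumes a simple linear scoring model, so each feature’s contribution can be read off directly; the features, weights, and applicant values are invented, complex models require more involved explanation techniques, and nothing here reflects an approach endorsed by CFPB.

```python
# Hypothetical adverse-action reason codes from a simple linear credit
# model. All features, weights, and applicant values are invented for
# this sketch; actual Regulation B compliance is far more involved.

weights = {"credit_history_len": 0.4, "debt_to_income": -0.9,
           "recent_delinquencies": -1.2, "income": 0.6}
applicant = {"credit_history_len": 0.2, "debt_to_income": 0.8,
             "recent_delinquencies": 0.5, "income": 0.3}

# Contribution of each feature to the applicant's overall score.
contributions = {f: weights[f] * applicant[f] for f in weights}

# The most negative contributions become candidate reasons for denial.
reasons = sorted(contributions, key=contributions.get)[:2]
print("Principal reasons for adverse action:", reasons)
```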
 
Equal Employment Opportunity Commission
EEOC identified several algorithmic tools used in the employment process, including: resume scanners that prioritize applications using certain keywords; employee monitoring software that rates employees on the basis of their keystrokes or other factors; “chatbots” that assess job candidates’ qualifications and reject those who do not meet predefined requirements; video interviewing software that evaluates candidates’ facial expressions and speech patterns; and testing software that scores applicants and employees for “job fit” or “cultural fit.”  In EEOC’s view, these tools can lead to outcomes that may violate the Americans with Disabilities Act (ADA).
 
The most common ways an employer’s use of algorithmic decision-making tools could violate the ADA are failing to provide a “reasonable accommodation” necessary for a job applicant or employee to be rated fairly and accurately by the algorithm, and relying on a tool that “screens out” an individual with a disability.  An applicant or employee is “screened out” when a disability prevents them from meeting, or lowers their performance on, a selection criterion.  And of course, tools that directly ask about disabilities or medical information would violate the ADA.  EEOC has emphasized that employers may be held responsible for the actions of other entities, such as AI software vendors, that the employer has authorized to act on its behalf.
 
The takeaway: employers should ensure that applicants and employees are aware when algorithm-based tools are being used, and that reasonable accommodations are available.
 
Federal Trade Commission
FTC used the joint statement to highlight its history of regulating new and emerging technologies, including artificial intelligence.  It warned that AI tools can be inaccurate, biased, or discriminatory by design: they may suffer from inherent design flaws, produce inaccurate results when trained on unrepresentative datasets or data lacking context and meaning, and carry built-in or emergent bias and discrimination.  FTC is also concerned that AI tools can incentivize increasingly invasive forms of commercial surveillance.
 
From FTC’s perspective, relying on such tools could violate Section 5 of the FTC Act (which prohibits unfair or deceptive practices, including the sale or use of racially biased algorithms); the Fair Credit Reporting Act (where an algorithm is used to deny employment, housing, credit, insurance, or other benefits); or the Equal Credit Opportunity Act (where a company uses a biased algorithm that results in credit discrimination on the basis of a protected class).  FTC has enforced these laws in actions resulting in orders to destroy work products, including algorithms, trained on data collected in violation of the Children’s Online Privacy Protection Act.  The joint statement advised “market participants” to ensure that their automated tools do not have discriminatory impacts, that any claims about AI’s capabilities are substantiated, and that risks are assessed and mitigated before an AI system is deployed.
 
The takeaway: FTC will target enforcement actions not only at the outcomes of automated decision-making but also at cases where the data used to train the system was improperly or impermissibly collected.
 
The joint statement closed by noting that discrimination or other adverse outcomes can arise at any point in an automated system’s development, training, and deployment.  While enforcement actions currently center on the use of these systems, scrutiny of the models and algorithms themselves is likely to increase, along with calls for greater transparency at the development stage.
 
If you have questions or concerns about your organization’s use of AI, or require a review of your automated systems policies to work towards eliminating the potential for bias or discrimination, contact the Lippes Mathias artificial intelligence team co-leaders, Caitlin O’Neil (coneil@lippes.com) or Jameson Tibbs (jtibbs@lippes.com).
