Artificial Intelligence in the Workforce

On June 7, 2022, Conn Maciel Carey LLP partners Kara Maciel and Jordan Schwartz interviewed EEOC Commissioner Keith Sonderling about the EEOC’s recent focus on Artificial Intelligence (AI) and its impact on workplace discrimination. 

AI refers to a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”[1]  It often appears in software that performs tasks previously completed by human beings.  Relevant to the discussion with Commissioner Sonderling, employers can use AI in most employment and hiring decisions, such as whom to notify about a new position, whom to interview, and whom to select for a position. 

When making those decisions, employers can incur liability if they discriminate against an individual based on race, color, religion, sex, national origin, age, pregnancy, disability status, or genetic information.[2]  Unlawful discrimination can occur in two ways: disparate treatment and disparate impact.  Disparate treatment occurs when an employer intentionally discriminates against individuals, whereas disparate impact refers to unintentional discrimination, where an employer’s facially neutral policies or procedures negatively affect individuals in a particular protected class.  
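As one concrete illustration of how disparate impact is commonly screened for, the EEOC’s Uniform Guidelines on Employee Selection Procedures describe a “four-fifths” rule of thumb: a selection rate for any group that is less than 80% of the highest group’s rate is generally regarded as evidence of adverse impact. The following is a minimal sketch of that check in Python, using illustrative numbers rather than real data:

```python
# Rough sketch of the EEOC "four-fifths" rule of thumb.
# All selection rates below are illustrative, not real data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> bool:
    """Return True if every group's selection rate is at least
    80% of the highest group's rate (the EEOC rule of thumb)."""
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

# Hypothetical outcomes from an AI screening tool:
rates = {
    "group_a": selection_rate(50, 100),   # 0.50
    "group_b": selection_rate(30, 100),   # 0.30
}

# 0.30 is below 80% of 0.50, so this would flag potential adverse impact.
print(four_fifths_check(rates))
```

Failing this screen does not by itself establish liability, but it is the kind of proactive audit an employer could run on an AI tool’s outcomes.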

Employers should be aware, as Commissioner Sonderling stressed in his remarks, that AI technologies are only as good as the data and training used to develop them.  In numerous instances, employers that used AI tools to assist in employment and hiring decisions have been left with discriminatory results, and potential disparate impact liability, as a direct result of the technology.

Commissioner Sonderling offered some examples of ways that AI could unintentionally produce discriminatory results in employment decisions:

  • One employer used AI to search for potential future employees, and the AI discovered that the company’s current high-performing employees all arrived at the office early. Those employees were not strong performers because they arrived early; rather, they arrived early because they lived nearby.  The AI then correlated strong performance with the zip codes closest to the office and prioritized those zip codes in its candidate search. As a result, the search would produce discriminatory results if the zip codes nearest the office were disproportionately one race.  It would also be ineffective, because the AI had mistaken a correlation between strong employees and their zip codes for causation.
  • More broadly, AI searches trained on a current workforce are likely to reinforce the status quo.  In historically male-dominated fields like coding, AI is likely to treat male-associated terms as indicators of future strong performance, because the vast majority of the data it was given describes male employees.  
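The zip-code example above can be made concrete with a small, entirely synthetic sketch: a facially neutral screen on a feature that happens to correlate with a protected class produces sharply skewed selection rates by group. All data below is hypothetical:

```python
# Illustrative sketch (synthetic data): a facially neutral feature
# such as zip code can act as a proxy for a protected class.

from collections import Counter

# Hypothetical applicant pool: the zip code near the office ("10001")
# happens to be disproportionately one demographic group.
applicants = (
    [{"zip": "10001", "group": "A"}] * 80 +
    [{"zip": "10001", "group": "B"}] * 20 +
    [{"zip": "20002", "group": "A"}] * 20 +
    [{"zip": "20002", "group": "B"}] * 80
)

# A "neutral" screen that prefers the nearby zip code...
selected = [a for a in applicants if a["zip"] == "10001"]

# ...yields very different selection rates by group.
pool = Counter(a["group"] for a in applicants)   # A: 100, B: 100
picked = Counter(a["group"] for a in selected)   # A: 80,  B: 20

for group in sorted(pool):
    print(group, picked[group] / pool[group])    # A: 0.8, B: 0.2
```

Nothing in the screen mentions the protected class, yet the outcome is a 4-to-1 disparity, which is exactly the kind of result the Commissioner’s example describes.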

Commissioner Sonderling also touched on his concerns with biometrics and facial recognition software. Facial recognition software can be used in place of human interviewers, recording responses and registering interviewees’ facial expressions.  He explained how, in one instance, a popular facial recognition tool that had been trained and honed on light-skinned individuals was 99% effective at recognizing the facial expressions of light-skinned males, but its effectiveness dropped to 65-79% for dark-skinned females. That deficit could produce discriminatory results, as darker-skinned females would be less likely to receive fair evaluations during their interviews.
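One practical takeaway from that example is that aggregate accuracy can mask large per-group disparities, so any evaluation of such a tool should be broken out by demographic group. A sketch with synthetic numbers echoing the figures above:

```python
# Sketch: aggregate accuracy can hide large per-group disparities.
# The (predicted, actual) pairs below are synthetic, for illustration only.

def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that match."""
    return sum(p == a for p, a in pairs) / len(pairs)

# Hypothetical evaluation results, bucketed by demographic group:
results = {
    "lighter_skinned_male":  [(1, 1)] * 99 + [(0, 1)] * 1,    # 99% accurate
    "darker_skinned_female": [(1, 1)] * 65 + [(0, 1)] * 35,   # 65% accurate
}

# Pooled across groups, accuracy is 0.82 -- which looks acceptable...
overall = accuracy([p for pairs in results.values() for p in pairs])
print(f"overall: {overall:.2f}")

# ...until the results are broken out per group, exposing the gap.
for group, pairs in results.items():
    print(f"{group}: {accuracy(pairs):.2f}")
```

An employer vetting a vendor could reasonably ask whether this kind of per-group breakdown was part of the tool’s validation.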

Additionally, Commissioner Sonderling discussed the EEOC’s focus on the Americans with Disabilities Act (ADA), and how its existing requirements could apply to the use of AI in employment-related decision making.  He highlighted three potential issues that employers should be wary of when using AI: (1) failing to provide a reasonable accommodation necessary for an applicant to be fairly rated or reviewed by an AI algorithm; (2) AI that screens out individuals with disabilities because it treats disabilities as proxies for weaker candidates; and (3) AI tools that make disability-related inquiries that violate the ADA. 

Commissioner Sonderling emphasized that employers need to be hands-on and proactive in evaluating any AI tools they are considering using throughout the employment process. Employers can and should vet the vendors from whom they purchase such software by questioning them on how the AI was trained and developed.  Ineffective or poorly developed AI tools could increase an employer’s risk of future discrimination claims.


[1] As defined by Congress in the National Artificial Intelligence Initiative Act of 2020 at section 5002(3).

[2] Title VII of the Civil Rights Act of 1964 outlaws discrimination based on race, religion, national origin, color, and sex; the Age Discrimination in Employment Act of 1967 outlaws discrimination based on age; the Pregnancy Discrimination Act of 1978 outlaws discrimination based on pregnancy; and the Genetic Information Nondiscrimination Act of 2008 outlaws discrimination based on genetic information.