Employer Pays $365,000 Where Automated Screening Software Excludes Older Applicants


Published on: Thursday, January 4, 2024 | By: Joshua C. Hausman

The recent settlement serves as a cautionary tale for employers using artificial intelligence, as such tools may cause them to run afoul of nondiscrimination law under a disparate impact theory.

By: Joshua C. Hausman, Esq.

Artificial intelligence (“AI”) has been a hot topic since the release of ChatGPT in late 2022. While use cases for this technology are often flippant and silly (a recent AI-produced example of Frank Sinatra singing a song by the band Green Day comes to mind), there is little doubt that AI has the potential to transform the way much of the world works. The most obvious potential of the technology, at least at its current stage of development, lies in the automation of tasks previously viewed as largely mechanical or formulaic. One such example in the employment context is the use of AI tools to screen job applicants. For several months, the Equal Employment Opportunity Commission (“EEOC”) has cautioned that biases built into these tools, whether introduced intentionally or inherited inadvertently from the data set on which the AI was trained, could cause employers to run afoul of nondiscrimination law.

On September 11, 2023, the EEOC announced that it had reached a settlement in an age discrimination case brought against an employer that had used software to automatically screen job applicants. According to the EEOC, the employer, iTutorGroup, which provides online tutoring services, had programmed its application software to automatically reject female applicants age 55 or older and male applicants age 60 or older. Allegedly, more than two hundred (200) otherwise qualified applicants were rejected because of their age. The practice was apparently discovered when a suspicious applicant lowered their age and, as a result, made it through the automated screen. As part of the settlement, iTutorGroup will pay $365,000 to applicants rejected due to their age and will also be required to provide training and adopt policy updates to address discriminatory practices.

The Age Discrimination in Employment Act (“ADEA”) makes it unlawful for employers to discriminate against persons forty (40) years of age or older. While it should be obvious that intentionally excluding older job applicants based on their age violates the law, the settlement is nevertheless regarded as significant because it represents one of the EEOC’s first successful enforcement actions against a company using automated hiring tools since the agency announced its Artificial Intelligence and Algorithmic Fairness Initiative in 2021. Since that time, the EEOC has repeatedly referred to AI as a “new civil rights frontier.” When the EEOC announced the lawsuit against iTutorGroup in May 2022, EEOC Chair Charlotte A. Burrows emphasized: “Even when technology automates the discrimination, the employer is still responsible.” The recent settlement confirms that the EEOC will take enforcement action against employers whose automation tools run afoul of nondiscrimination law.

Of primary concern to employers considering the use of AI tools, however, is the fact that discrimination need not be intentional to be unlawful. “Disparate impact” discrimination occurs when facially neutral employment practices have a disproportionate adverse effect on members of a protected class. As part of its broader AI Initiative, the EEOC published a technical assistance document in May 2023 on “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.”

In that guidance, the EEOC cautioned that just as employers should monitor their “traditional decision-making procedures” for a potential disparate impact, so too should they monitor non-traditional tools such as AI that are used to make employment decisions. This is because an employer’s “selection procedures,” defined as the procedures by which an employer makes employment decisions, including but not limited to hiring, promotion, or firing, may violate Title VII on a disparate impact basis. The result does not change when an employer’s selection procedures are assisted by, or even wholly reliant upon, AI tools. The EEOC also reminded employers that they may be held responsible for discriminatory employment practices carried out by an outside vendor or agent that the employer has authorized to act on its behalf.

“Artificial intelligence” is something of a misnomer, at least in the forms presently available to the public. Large language models such as ChatGPT are “trained” on datasets, and that data may carry inherent biases that, if not properly monitored and controlled for, can manifest in skewed outputs from the AI tool. In light of this, the EEOC suggests that employers using such tools first ask the vendor what processes have been put in place to evaluate whether use of the tool results in a substantially lower selection rate for individuals in a protected class. Merely asking for this reassurance, however, will not be enough to protect an employer if a disparate impact on a protected class nevertheless occurs.

Therefore, the EEOC recommends that employers also conduct ongoing self-analyses, including of the outcomes associated with their AI tools, to determine whether their employment practices have a disproportionately adverse impact on a protected class, and either to correct those practices or to confirm that the tool is nevertheless job related and consistent with business necessity. Where use of the tool violates the four-fifths rule (i.e., produces a selection rate for a protected classification that is less than 80% of the rate for the group with the highest selection rate) and does not meet the job-relatedness and business necessity standard, the employer should revise the practice to eliminate the adverse impact.
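
To make the four-fifths rule concrete, the short Python sketch below compares selection rates between two applicant groups and flags ratios below the 80% threshold. The group labels and applicant counts are hypothetical, and a flagged ratio is only a signal that closer review is warranted, not a legal conclusion.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    Returns each group's impact ratio; under the four-fifths rule,
    a ratio below 0.8 suggests possible adverse impact.
    """
    benchmark = max(rates.values())
    return {group: rate / benchmark for group, rate in rates.items()}

# Hypothetical screening outcomes (counts are illustrative only).
outcomes = {
    "under_40": {"applicants": 200, "selected": 100},     # 50% selected
    "40_and_over": {"applicants": 150, "selected": 45},   # 30% selected
}

rates = {g: selection_rate(o["selected"], o["applicants"]) for g, o in outcomes.items()}
for group, ratio in four_fifths_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

In this illustration, applicants forty and over are selected at 60% of the younger group's rate, below the four-fifths threshold, which under the EEOC's guidance would prompt the employer either to validate the procedure as job related and consistent with business necessity or to revise it.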

Takeaways:

• An employer that programmed its application software to screen out applicants above a certain age was required to pay $365,000 pursuant to a settlement with the EEOC.

• The settlement confirms that the EEOC will pay careful attention to employers whose use of tools to assist with employment decisions, including AI tools, results in a violation of nondiscrimination law.

• AI tools can result in inadvertent unlawful discrimination under a disparate impact theory where biases in the data or in the tool itself produce outcomes that are disproportionately adverse to members of a protected class.

• The EEOC cautions employers to monitor both traditional and non-traditional decision-making tools for potential adverse impact, and either to confirm that the procedure meets the job-relatedness and business necessity standard or to revise the procedure to eliminate the adverse impact.

Bottom Line:

The recent settlement and the EEOC’s guidance over the last year make clear that the agency will pay careful attention to unlawful discrimination caused by AI and similar tools. Employers seeking to implement these new technologies must remain cognizant that the use of such tools may inadvertently violate nondiscrimination law where it results in a disparate impact on members of a protected classification. The EEOC therefore cautions employers to monitor the outcomes associated with their AI-assisted employment procedures, just as they would traditional, non-AI tools, to assess for the presence of a disparate impact.