AI Hiring Poses Discrimination Risk: A Cautionary Tale


The use of artificial intelligence (AI) in recruitment and employment decisions can be extremely tempting to employers, saving both money and the countless HR hours devoted to the task when it was an all-human operation. Employers may choose to use these tools, but there are pitfalls lurking in AI that can run afoul of anti-discrimination laws. This article examines a recent cautionary example affecting employers across numerous industries, as well as recent regulatory and legislative efforts to address the issue.

A recent decision from a federal court in California shows that employers need to be careful to avoid potential discriminatory impact if they use AI in employment decisions. The Equal Employment Opportunity Commission (EEOC) and local lawmakers, including the New York City government, are also taking steps to ensure that the adoption of this emerging technology is compliant with anti-discrimination laws.

In Mobley v. Workday, Inc., the United States District Court for the Northern District of California allowed a job applicant’s claims under Title VII of the Civil Rights Act and other federal anti-discrimination statutes to proceed against a human resources technology vendor, Workday, Inc. (Workday). The plaintiff, Derek Mobley, alleged that Workday’s AI screening tools, which have become common in recruiting circles, discriminated against him and others on the basis of race, age, and disability.

The Court held that Mobley’s allegations sufficiently stated a claim against Workday under the theory that Workday was an “agent” of the employers using Workday’s AI tools. Based on the allegations in the complaint, Workday provides companies with a platform on the customer/employer’s website to collect, process, and screen job applications. Mobley–an African American man over the age of 40, who suffers from anxiety and depression–alleged that he applied to more than 100 positions with companies that use Workday’s screening tools, and that his application was denied in every case despite his qualifications. In one case, he was rejected at 1:50 a.m., less than an hour after he submitted the application.

While the plaintiff in this case elected to sue only Workday rather than the prospective employers themselves–presumably a strategic decision given the sheer number of prospective employers involved–the decision has ramifications for employers too. The Court’s reliance on a theory of liability that Workday was an agent of the employers indicates that liability for the allegedly discriminatory employment decisions could flow up the chain to the employers that utilized the AI tools, and indeed, much of the Court’s analysis addressed Workday and the employers interchangeably.

In holding that Workday could be held liable as an employer’s agent under the anti-discrimination laws, the Court relied on a broad policy argument that allowing “companies to escape liability for hiring decisions by saying that function has been handed over to someone else (or here, artificial intelligence)” would create a “gap” that would cut against the remedial purposes of the anti-discrimination laws. Similarly, the Court held that “[n]othing in the language of the federal anti-discrimination statutes or the case law interpreting those statutes distinguishes between delegating functions to an automated agent versus a live human one,” and “[d]rawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era.”

The Court also held that the allegations were sufficient to allege discriminatory impact through the use of the AI screening tools: “The sheer number of rejections and the timing of those decisions, coupled with the [complaint’s] allegations that Workday’s AI systems rely on biased training data, support a plausible inference that Workday’s screening algorithms were automatically rejecting Mobley’s applications based on a factor other than his qualifications, such as a protected trait.” The Court held that “the rejection emails Mobley allegedly received in the middle of the night” give rise “to a plausible inference that the decision was automated.”

While Mobley’s claims survived Workday’s motion to dismiss based on the alleged discriminatory impact of the AI screening tools, the Court held that he was unable to demonstrate that Workday intended the alleged discrimination, and therefore it dismissed the intentional discrimination claims. The decision paves the way for the case to proceed to discovery, which will likely focus on how the screening algorithm worked, any anti-bias testing that was conducted, and the decisions that the tool made with respect to other applicants in protected classes (after all, Mobley brought the case as a putative class action).

Significantly, the EEOC inserted itself into the Mobley case by filing an amicus curiae brief in favor of the plaintiff and against Workday. It is plain that the EEOC is proactively monitoring and addressing the involvement of AI in recruiting, hiring, and other employment practices. In fact, the EEOC’s Strategic Enforcement Plan for Fiscal Years 2024-2028 states that one of the EEOC’s priorities is eliminating discrimination in recruitment and hiring practices that use “technology, including artificial intelligence and machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.”

State and local lawmakers are also attempting to get ahead of employers’ use of emerging AI technologies. For example, New York City passed a law, with enforcement beginning in July of last year, that prohibits an employer from using an “automated employment decision tool” to screen a candidate or employee for an employment decision unless the employer complies with certain requirements intended to prevent discriminatory impact, including: auditing the tool annually for bias; publishing a public summary of the audit; and providing certain notices to applicants and employees who are subject to the screening tool. The New York City law’s definition of “automated employment decision tool” covers not only the delegation of employment decisions to the tool, but also the use of the tool to “substantially assist” the employment decision:

The term “automated employment decision tool” means any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons. The term “automated employment decision tool” does not include a tool that does not automate, support, substantially assist or replace discretionary decision-making processes and that does not materially impact natural persons, including, but not limited to, a junk email filter, firewall, antivirus software, calculator, spreadsheet, database, data set, or other compilation of data.

The New York City law requires the bias audit of the AI tool to be conducted by an independent auditor, i.e., an individual who does not work for the employer utilizing the tool or for the vendor that developed the tool. The law’s implementing regulations contain detailed specifications of the scores and ratios that must be calculated as part of the bias audit. Notably, the City’s Frequently Asked Questions (FAQ) guidance clarifies that the AI vendor can engage the independent auditor and coordinate the collection of data for the bias audit. One would expect many vendors offering AI employment tools to assume the burden of coordinating the bias audit as a selling point for their product. The FAQs emphasize, however, that the law places ultimate responsibility on the employer to ensure that the bias audit was done before the AI tool is used.
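
For illustration, the core of such an audit is a comparison of selection rates across demographic categories. A minimal sketch of that arithmetic, in Python, appears below; the category labels, counts, and code structure are hypothetical and do not track the regulations’ prescribed methodology.

# Illustrative sketch only: hypothetical applicant data and categories.
# Computes a selection rate for each demographic category and an impact
# ratio relative to the category with the highest selection rate.
from collections import defaultdict

# Hypothetical screening outcomes: (category, advanced_by_tool)
records = [
    ("Category A", True), ("Category A", True), ("Category A", False),
    ("Category B", True), ("Category B", False), ("Category B", False),
]

totals = defaultdict(int)
advanced = defaultdict(int)
for category, advanced_flag in records:
    totals[category] += 1
    if advanced_flag:
        advanced[category] += 1

selection_rates = {c: advanced[c] / totals[c] for c in totals}
highest_rate = max(selection_rates.values())

for category, rate in sorted(selection_rates.items()):
    impact_ratio = rate / highest_rate  # 1.0 for the most-selected category
    print(f"{category}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f}")

In this hypothetical, Category A’s selection rate is 0.67 and Category B’s is 0.33, yielding an impact ratio of roughly 0.50 for Category B, the kind of disparity a bias audit is designed to surface.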

Although the New York City law has been in effect for over a year, we are not aware of any enforcement actions to date, no court appears to have had occasion to interpret the statute, and some observers have commented that the law, while groundbreaking, is all bark and no bite. It remains to be seen whether a case like Mobley will spur heightened enforcement activity in this area.

To avoid running afoul of anti-discrimination laws, employers will want to review how AI tools work and how they are being used. If decision-making ultimately remains with a human, and the technology is not itself deciding whether to advance a candidate, the risk of inadvertent discriminatory impact caused by the technology may be reduced. If you use AI tools to screen or advance applicants without human review, the Mobley decision is a warning that you could be held liable if the tools discriminate, even inadvertently. You also need to ensure compliance with any applicable local laws, such as New York City’s auditing and notice requirements.

Whether or not a local law applies, it is important to confirm that the AI tools are thoroughly tested for potential bias and that such testing is appropriately documented. Because some AI tools continue to “learn” and evolve based on the data fed into them, the tools should also be monitored continuously over time for compliance with anti-discrimination laws.

If you are an employer using an AI tool, you may intend to rely on the software vendor for such testing and monitoring. If so, you may want to protect your company in the vendor contract, for example by requiring the vendor to provide representations and warranties that the AI tool complies with all applicable anti-discrimination laws and regulations, and by including an indemnification clause in the event a claim is asserted against your company based on the alleged discriminatory impact of the AI tool.

Matthew C. Daly is a partner with Golenbock Eiseman Assor Bell and Peskoe in New York. Alexander W. Leonard is the chair of the firm’s employment and labor practice.

Reprinted with permission from the New York Law Journal.

For further assistance please contact your primary Golenbock attorney or the attorneys listed below:

Matthew C. Daly, Partner, Litigation
(212) 907-7329
Email: mdaly@golenbock.com

Alexander W. Leonard, Partner, Chair of Labor & Employment Group
(212) 907-7378
Email: aleonard@golenbock.com

Golenbock Eiseman Assor Bell & Peskoe LLP

Golenbock Eiseman Assor Bell & Peskoe LLP is a Manhattan-based business law firm with a broad-based practice that offers corporate, complex litigation, labor & employment, real estate, reorganization, intellectual property, tax, and trust & estate expertise. The firm provides high value, sophisticated counsel and representation for its domestic and international clients while maintaining a hands-on, personalized approach to all matters.

The firm represents entrepreneurial, portfolio, and institutional clients, ranging from start-ups to Fortune 500 companies, with a specific focus on the mid-market segment. Among our clients are private corporations, public companies, private equity firms, venture capital firms, individual investors, and entrepreneurs.

Golenbock is a member of the Alliott Global Alliance, which was named to Band 1 of global law firm alliances by Chambers Guides, the prestigious international legal survey. Alliott numbers 215 firms in 94 countries on six continents, and helps member firms partner with others in countries around the globe.

Golenbock Eiseman Assor Bell & Peskoe LLP uses Client Alerts to inform clients and other interested parties of noteworthy issues, decisions and legislation that may affect them or their businesses. A Client Alert should not be construed or relied upon as legal advice. This Client Alert may be considered advertising under applicable state laws.
© GEABP (2024)