In the latest development, on July 12, 2024, in Mobley v. Workday, Inc., an Artificial Intelligence (AI) algorithmic bias and discrimination case in the U.S. District Court for the Northern District of California, the Court denied in part Workday, Inc.'s ("Workday") Motion to Dismiss. The Court allowed Plaintiff to pursue novel claims that Workday – a third-party software vendor providing AI employment screening tools – may be liable for the discriminatory effects of those tools under federal anti-discrimination laws.
The lawsuit was filed under Title VII of the Civil Rights Act of 1964 (“Title VII”), the Civil Rights Act of 1866 (42 U.S.C. § 1981), the Age Discrimination in Employment Act of 1967 (“ADEA”), and the Americans with Disabilities Act Amendments Act of 2008 (“ADAAA”).
On January 19, 2024, the Court granted Workday's Motion to Dismiss, with leave to amend, holding that Mobley had not sufficiently alleged facts demonstrating that Workday qualifies as an "employment agency" subject to liability under federal anti-discrimination laws. Mobley filed his Amended Complaint in February 2024, adding two theories of liability: that Workday should be held liable either as an "indirect employer" or as an employer's "agent." Mobley is seeking class certification for applicants or former applicants who are African American, over 40, and/or have a disability.
The Equal Employment Opportunity Commission (EEOC) also filed an amicus brief on April 9, 2024, arguing that vendors of AI screening tools like Workday could be subject to Title VII, the Americans with Disabilities Act, and the ADEA.
In this latest ruling on July 12, 2024, U.S. District Judge Rita Lin rejected Workday's bid to dismiss the lawsuit. The judge ruled that Workday could be covered by federal anti-discrimination laws as an agent of its customers because it performs screening functions that those customers would normally carry out themselves. However, Judge Lin dismissed the claims that Workday engaged in intentional discrimination based on age and race.
The case is ongoing, with several claims still proceeding. The ruling allows the lawsuit to move forward and sets the stage for further legal developments concerning the use of AI in hiring practices.
Key Takeaways
- This case could set an important precedent on the legal implications of using AI in employment functions.
- The Court’s decision establishes that third-party vendors who furnish AI screening tools to employers may be held liable as “agents” of those employers under Title VII, the ADAAA, and the ADEA.
- The Court stated that there is no meaningful distinction between “software decision-makers” and “human decision-makers” when determining coverage as an agent under the anti-discrimination laws, noting that to hold otherwise would lead to undesirable outcomes. For example, the Court reasoned, employers could then delegate “discriminatory programs” to third-party software tools instead of humans, while the third parties who created the software escaped liability.
- Employers should be aware that, in addition to filing the amicus brief, the EEOC has released a technical assistance document on considerations for incorporating automated systems into employment decisions.
- The EEOC takes the position that employers can be held liable under Title VII for selection procedures that use an algorithmic decision-making tool if the procedure discriminates on a basis prohibited by Title VII, even if the tool is designed or administered by another entity, such as a software vendor.
- The ongoing interest in AI is part of a broader federal effort to regulate and manage the technology. President Biden’s executive order outlines a comprehensive approach to AI regulation, including developing best practices for mitigating AI’s potential harms to employees.
- This regulatory focus will likely lead to increased scrutiny and potential guidance from the Department of Labor and other agencies such as the EEOC.
Practical Implications
Employers will need to develop proactive procedures to ensure that they do not engage in biased or discriminatory practices in their pursuit of operational efficiencies. While not the stated focus of this case, the underlying driver here is transparency. Explainability of AI results will be a key factor going forward, particularly as such systems become more advanced. Embedded bias is a known focal point and will need to be addressed by employers.
This raises the broader question of algorithmic transparency in AI systems, which will no doubt become an increasingly important issue. Thorough algorithmic and impact assessments, combined with detailed audit trails of actual decision-making, will be necessary.
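To make the idea of an impact assessment concrete, the sketch below applies the EEOC’s “four-fifths rule,” a conventional first-pass check that compares selection rates across demographic groups. It is a minimal, hypothetical illustration (the group labels and outcome counts are invented for demonstration), not a substitute for full statistical validation or legal review:

```python
# Minimal four-fifths (80%) rule check on hypothetical screening outcomes.
# Group names and counts are invented for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who passed the screen."""
    return selected / applicants

# Hypothetical outcomes per group: (applicants selected, total applicants)
outcomes = {
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {group: selection_rate(s, n) for group, (s, n) in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    # Impact ratio: this group's selection rate relative to the
    # highest-rate group.
    impact_ratio = rate / highest_rate
    # A ratio below 0.8 is a conventional red flag for adverse impact
    # and should prompt deeper statistical and legal review.
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, "
          f"impact ratio {impact_ratio:.2f} [{flag}]")
```

An audit trail would log these rates and ratios for each screening run, so that the basis for any automated decision can later be reconstructed and explained.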
These validation mechanisms, in addition to human oversight, are already required by the European Union’s AI Act. The Act enters into force this week (August 1, 2024) and applies equally to providers and deployers of AI systems located outside the E.U. that do business in Europe. I will cover the implications of the E.U. AI Act in more detail in a forthcoming article.
There are, however, other moves closer to home. U.S. state laws are beginning to follow the European model to a greater or lesser extent. I recently wrote briefly about Colorado’s new AI legislation. Other states and cities are beginning to adopt a similar approach. The reality is that automated decision-making without appropriate oversight is neither an ethical nor a sustainable path. Appropriate mechanisms are required to verify the fairness of treatment.
Vigilance around these issues, proactive procedures, and staying well-informed about the evolving legal and regulatory landscape will all be essential for employers as these cases and initiatives move forward.
About R. Scott Jones
I am a Partner in Generative Consulting, an attorney and CEO of Veritai. I am a frequent writer on matters relating to Generative AI and its successful deployment, both from a user perspective and that of the wider community.