Workday Asks the Court to Dismiss Age Discrimination Claims: AI Hiring Moves to the Center of a Nationwide Collective Action. What Should Chinese HR Professionals in North America Watch For?

HR technology giant Workday has taken its latest legal step in a closely watched AI hiring lawsuit. In a filing submitted to the court on January 21, 2026, Workday formally asked the judge to dismiss the plaintiffs' "disparate impact" age discrimination claims, arguing that the Age Discrimination in Employment Act (ADEA) extends those protections to current employees, not to job applicants, and that the applicants' age-based claims should therefore be dismissed. The motion is the latest development in Mobley v. Workday and marks the case's entry into its core legal contest.
The lawsuit was first filed in 2023 by a group of job applicants who allege that Workday's AI recruiting and screening tools produce systematically adverse outcomes for protected groups, including older applicants, and therefore discriminate. In 2025, the court allowed the case to proceed as a nationwide collective action, turning an individual dispute into a case with nationwide reach. The judge has also ordered Workday to produce a complete list of employers that use its HiredScore technology, further widening the case's potential impact. Workday has responded publicly that its AI tools neither identify nor use protected characteristics such as race, age, or disability, and that final decisions remain in human hands.
Legally, Workday's current strategy does not turn on whether the algorithm is biased. It focuses on a more basic question: do job applicants have standing to bring a "disparate impact" claim at all? In other words, the company hopes to narrow the scope of the case through statutory interpretation. Whether or not the court ultimately accepts the argument, the move itself shows that AI hiring is shifting from a technical question to a judicial one.
For Chinese HR professionals in North America, this deserves particular attention. Many NACSHR community members work at small and mid-sized companies, multi-state teams, or startups, where HR often covers recruiting, compliance, employee relations, and systems administration all at once. In practice, when a company buys an ATS or AI screening tool, go-live is treated as an efficiency upgrade; but once a candidate challenges a screening result or files a complaint, the person who has to explain the process, produce the records, and respond to the demand letter is usually HR, not the technology vendor.
That is the real signal from the Workday case: the algorithm does not share the employer's liability. Even when screening is performed by a system, the law still treats it as the employer's employment action. A company cannot avoid risk by saying "the system decided automatically," and HR cannot fully escape responsibility by calling it "a tool problem."
Looking more broadly, Workday is not an isolated case. Eightfold AI has also faced litigation scrutiny over FCRA compliance issues in its hiring workflows. Although the two cases involve different legal frameworks, the ADEA and the FCRA, they point to the same trend: once an algorithm affects a candidate's employment opportunities, it is treated as a hiring decision in its own right and must face the same, or even stricter, regulation and scrutiny. The HR technology industry has entered an era of heightened compliance.
Meanwhile, the regulatory environment keeps tightening. Several states, including California, have begun requiring companies that use automated hiring tools to offer candidates an opt-out mechanism and to conduct risk assessments and transparency disclosures. Rules like these effectively bring "algorithmic governance" into HR's day-to-day compliance work, rather than leaving it as an internal matter for the technology team.
Against this backdrop, the HR skill set is quietly changing. We used to focus on time to hire, conversion rates, and cost control; the more important questions going forward are whether the system is explainable, auditable, keeps records, and can stand up to a regulator's inquiry. If you cannot clearly explain the screening logic or produce proof of compliance, the gains from greater efficiency can be wiped out by a single lawsuit.
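To make "auditable and keeps records" concrete, here is a minimal, hypothetical Python sketch of the kind of decision record an HR team might retain for each automated screening step. The ScreeningDecision type and every field name are illustrative assumptions for this article, not any vendor's actual schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List, Optional
import json

@dataclass
class ScreeningDecision:
    """One retained record per automated screening step (illustrative fields only)."""
    application_id: str             # internal reference, not the candidate's identity
    job_requisition_id: str         # which opening the decision relates to
    tool_name: str                  # which screening tool produced the result
    tool_version: str               # version matters if the logic changes over time
    criteria_evaluated: List[str]   # the objective criteria the tool checked
    outcome: str                    # "advanced" or "rejected"
    outcome_reason: str             # plain-language explanation of the decisive criterion
    human_reviewer: Optional[str]   # who reviewed or overrode the result, if anyone
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: write one decision out as JSON so it can be produced later
# in an audit, a regulator's inquiry, or a candidate dispute.
record = ScreeningDecision(
    application_id="APP-2026-00123",
    job_requisition_id="REQ-0456",
    tool_name="resume-screener",
    tool_version="1.4.2",
    criteria_evaluated=["required license", "minimum years of experience"],
    outcome="advanced",
    outcome_reason="meets license and experience minimums",
    human_reviewer="hr.analyst@example.com",
)
print(json.dumps(asdict(record), indent=2))
```

The specific fields matter less than the principle: if a candidate or regulator asks why an application was rejected, the answer should exist somewhere other than inside the vendor's model.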
For Chinese HR colleagues in the NACSHR community, these cases are not distant big-company news; they are risk reminders directly relevant to daily work. Whatever the size of the company, the moment it starts using AI hiring tools it is operating under the same legal framework. A truly mature digital upgrade is not simply switching on more automation; it is finding the balance among efficiency, compliance, and trust.
Workday's current legal maneuvering may only be the beginning of this shift, but it already outlines the trend clearly: the next phase of competition in hiring will not be about who is smarter, but about who is more compliant, more explainable, and more accountable. That is the new reality every HR professional in North America will have to face.
Workday is seeking dismissal of disparate impact age discrimination claims brought by job applicants in the ongoing Mobley v. Workday lawsuit, arguing that the Age Discrimination in Employment Act (ADEA) does not extend such protections to applicants. In a court filing on January 21, 2026, the company stated that the law’s “plain language” limits disparate impact claims to employees, not candidates. The case, originally filed in 2023 and certified as a nationwide collective action in 2025, alleges that Workday’s AI recruiting tools discriminated based on age and other protected factors. Workday denies the claims, asserting that its AI systems neither use nor identify protected characteristics. The dispute highlights growing legal and compliance risks tied to AI-driven hiring technologies. Meanwhile, states including California are tightening regulations, requiring opt-out mechanisms and risk assessments for automated decision tools. The case could significantly shape how HR technology vendors and employers deploy AI in recruitment.
January 27, 2026
Agency Law and the Workday Lawsuit
The article discusses the agency-law issues raised by the Workday lawsuit. The plaintiff alleges that Workday's AI screening tools discriminated against him on the basis of race, age, and disability. The case raises the question of whether an HR technology vendor can be held directly liable for discriminatory outcomes. The legal complexities include AI's role in hiring decisions, agency liability, and the potential implications for employers and AI developers alike. The case is a reminder for employers to proceed carefully when deploying AI hiring tools and to guard against legal risk; AI developers must likewise ensure their products do not discriminate, because the lawsuit could set an important precedent.
Editor's Note
Agency Law and the Workday Lawsuit
Agency law is so old that it used to be called master and servant law. (That's different from slavery, where human beings were considered the legal property of other humans based on their race, gender, and age, which is partly why we have discrimination laws.)
Today, agency laws refer to principals and agents. All employees are agents of their employer, who is the principal. And employers can have nonemployee agents too when they hire someone to do things on their behalf. Generally, agents owe principals a fiduciary duty to act in the principal's best interest, even when that isn't the agent's best interest.
Agency law gets tricky fast because you have to figure out who is in charge, what authority was granted, whether the person acting was inside or outside that authority, what duty applies, and who should be held responsible as a matter of fairness and public policy.
Generally, the principal is liable for the acts of the agent, sometimes even when the agent acts outside their authority. And agents acting within their authority are rarely liable for their actions unless it also involves intentional wrongs, like punching someone in the nose.
Enter discrimination, which is generally a creature of statute that may or may not be consistent with general agency law even when the words used are exactly the same.
Discrimination is generally an intentional wrong, but employees are not usually directly liable for discrimination because making employment decisions is part of the way employment works and the employer is always liable for those decisions.
The big exception is harassment because harassment, particularly sexual harassment, is never part of someone's job duties. So in harassment cases, the individual harasser is liable but the employer may not be unless they knew what was going on and didn't do anything about it.
It's confusing and makes your head hurt. And that's just federal discrimination law. Other employment laws, both state and federal, deal with agent liability differently.
Now, let's move to the Workday lawsuit. In that case, the plaintiff is claiming that Workday was an agent of the employer, but not in the sense of someone the employer was directing. They are claiming that Workday has independent liability as an employer too because they were acting like an employer in screening and rejecting applicants for the employer.
But that's kinda the whole point of HR Technology—to save the employer time and resources by doing some of the work. The software doesn't replace the employer's decision making and the employer is going to be liable for any discrimination regardless of whether and how the employer used their software.
If this were a products liability case, the answer would turn on how the product was designed to be used and how the employer used it. But this is an employment law and discrimination case. So, the legal question here is whether a company that makes HR Technology can also be directly liable for discriminatory outcomes when the employer uses that technology.
We don't have an answer to that yet and won't for a while. That's because this case is just at the pleading stage and hasn't been decided based on the evidence. What's happened so far is Workday filed a motion to dismiss based on the allegations in the complaint. Basically, Workday said, "Hey, we're just a software company. We don't make employment decisions; the employer does. It's the employer who is responsible for using our software in a way that doesn't discriminate. So, please let us out of the case." Then the plaintiff and EEOC said it's too soon to decide that. If all of the allegations in the lawsuit are considered true, then the plaintiff has made viable legal claims against Workday.
Those claims are that Workday's screening function acts like the employer in evaluating applications and rejecting or accepting them for the next level of review. This is similar to what third party recruiters and other employment agencies do and those folks are generally liable for those decisions under discrimination law. In addition, Workday could even be an agent of the employer if the employer has directly delegated that screening function to the software.
We're not yet at the question of whether a software company is really an agent of the employer or is even acting like an employment agency. And even if it is, whether it's the kind of agency that has direct liability or whether it's just the employer who ends up liable. This will all depend on statutory definitions and actual evidence about how the software is designed, how it works, and how the employer used it.
We also aren't at the point where we look at the contracts between the employer and Workday, how liability is allocated, whether there are indemnity clauses, and whether these types of contractual defenses even apply if Workday meets the statutory definition of an employer or agent who can be liable under Title VII.
Causation will also be a big issue because how the employer sets up the software, its level of supervision of what happens with the software, and what's really going on in the screening process will all be extremely important.
The only thing that's been decided so far is that the plaintiff filed a viable claim against Workday and the lawsuit can proceed. Here are the details of the case and some good general advice for employers using HR Technology in any employment decision making process.
- Heather Bussing
AI Workplace Screener Faces Bias Lawsuit: 5 Lessons for Employers and 5 Lessons for AI Developers
by Anne Yarovoy Khan, John Polson, and Erica Wilson
at Fisher Phillips
A California federal court just allowed a frustrated job applicant to proceed with an employment discrimination lawsuit against an AI-based vendor after more than 100 employers that use the vendor’s screening tools rejected him. The judge’s July 12 decision allows the class action against Workday to continue based on employment decisions made by Workday’s customers on the theory that Workday served as an “agent” for all of the employers that rejected him and that its algorithmic screening tools were biased against his race, age, and disability status. The lawsuit can teach valuable lessons to employers and AI developers alike. What are five things that employers can learn from this case, and what are five things that AI developers need to know?
AI Job Screening Tool Leads to 100+ Rejections
Here is a quick rundown of the allegations contained in the complaint. It’s important to remember that this case is in the very earliest stages of litigation, and Workday has not yet even provided a direct response to the allegations – so take these points with a grain of salt and recognize that they may even be proven false.
Derek Mobley is a Black man over the age of 40 who self-identifies as having anxiety and depression. He has a degree in finance from Morehouse College and extensive experience in various financial, IT help-desk, and customer service positions.
Between 2017 and 2024, Mobley applied to more than 100 jobs with companies that use Workday’s AI-based hiring tools – and says he was rejected every single time. He would see a job posting on a third-party website (like LinkedIn), click on the job link, and be redirected to the Workday platform.
Thousands of companies use Workday’s AI-based applicant screening tools, which include personality and cognitive tests. The tools interpret a candidate’s qualifications through advanced algorithmic methods and can automatically reject them or advance them along the hiring process.
Mobley alleges the AI systems reflect illegal biases and rely on biased training data. He notes that his race could be inferred from his graduation from a historically Black college, his age from his graduation year, and his mental disabilities from the personality tests.
He filed a federal lawsuit against Workday alleging race discrimination under Title VII and Section 1981, age discrimination under the ADEA, and disability discrimination under the ADA.
But he didn’t file just any type of lawsuit. He filed a class action claim, seeking to represent all applicants like him who weren’t hired because of the alleged discriminatory screening process.
Workday asked the court to dismiss the claim on the basis that it was not the employer making the employment decision regarding Mobley, but after over a year of procedural wrangling, the judge gave the green light for Mobley to continue his lawsuit.
Judge Gives Green Light to Discrimination Claim Against AI Developer
Direct Participation in Hiring Process is Key – The judge’s July 12 order says that Workday could potentially be held liable as an “agent” of the employers who rejected Mobley. The employers allegedly delegated traditional hiring functions – including automatically rejecting certain applicants at the screening stage – to Workday’s AI-based algorithmic decision-making tools. That means that Workday’s AI product directly participated in the hiring process.
Middle-of-the-Night Email is Critical – One of the allegations Mobley raises to support his claim that Workday’s AI decision-making tool automatically rejected him was an application he submitted to a particular company at 12:55 a.m. He received a rejection email less than an hour later at 1:50 a.m., making it appear unlikely that human oversight was involved.
“Disparate Impact” Theory Can Be Advanced – Once the judge decided that Workday could be a proper defendant as an agent, she then allowed Mobley to proceed against Workday on a “disparate impact” theory. That means the company didn’t necessarily intend to screen out Mobley based on race, age, or disability, but that it could have set up selection criteria that had the effect of screening out applicants based on those protected criteria. In fact, in one instance, Mobley was rejected for a job at a company where he was currently working on a contract basis doing very similar work.
Not All Software Developers On the Hook – This decision doesn’t mean that all software vendors and AI developers could qualify as “agents” subject to a lawsuit. Take, for example, a vendor that develops a spreadsheet system that simply helps employers sort through applicants. That vendor shouldn’t be part of any later discrimination lawsuit, the court said, even if the employer later uses that system to purposefully sort the candidates by age and rejects all those over 40 years old.
5 Tips for Employers
This lawsuit could just as easily have been filed against any of the 100+ employers that rejected Mobley, and they still may be added as parties or sued in separate actions. That is a stark reminder that employers need to tread carefully when implementing AI hiring solutions through third parties. A few tips:
Vet Your Vendors – Ensure your AI vendors follow ethical guidelines and have measures in place to prevent bias before you deploy the tool. This includes understanding the data they use to train their models and the algorithms they employ. Regular audits and evaluations of the AI systems can help identify and mitigate potential biases – but it all starts with asking the right questions at the outset of the relationship and along the way.
Work with Counsel on Indemnification Language – It’s not uncommon for contracts between business partners to include language shifting the cost of litigation and resulting damages from employer to vendor. But make sure you work with counsel when developing such language in these instances. Public policy doesn’t often allow you to transfer the cost of discriminatory behavior to someone else. You may want to place limits on any such indemnity as well, like certain dollar amounts or several months of accrued damages. And you’ll want to make sure that your agreements contain specific guidance on what type of vendor behavior falls under whatever agreement you reach.
Consider Legal Options – Should you be targeted in a discrimination action, consider whether you can take action beyond indemnification when it comes to your AI vendors. Breach of contract claims, deceptive business practice lawsuits, or other formal legal actions to draw the third party into the litigation could work to shield you from shouldering the full responsibility.
Implement Ongoing Monitoring – Regularly monitor the outcomes of your AI hiring tools. This includes tracking the demographic data of applicants and hires to identify any patterns that may suggest bias or a potential disparate impact; one common way to quantify this is shown in the sketch after this list. This proactive approach can help you catch and address issues before they become legal problems.
Add the Human Touch – Consider where you will insert human decision-making at critical spots along your hiring process to prevent AI bias, or the appearance of bias. While an automated process that simply screens check-the-box requirements such as necessary licenses, years of experience, educational degrees, and similar objective criteria is low risk, completely replacing human judgment when it comes to making subjective decisions stands at the peak of riskiness when it comes to the use of AI. And make sure you train your HR staff and managers on the proper use of AI when it comes to making hiring or employment-related decisions.
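As a concrete companion to the "Implement Ongoing Monitoring" tip above, one widely used heuristic for spotting potential disparate impact is the EEOC's four-fifths (80%) rule of thumb: a group whose selection rate falls below 80% of the highest group's rate warrants a closer look. The following is a minimal Python sketch under that assumption; the group labels and counts are hypothetical, and a real analysis would involve counsel and proper statistical testing.

```python
def selection_rates(counts):
    """counts maps group -> (applicants, selected); returns group -> selection rate."""
    return {group: selected / applicants
            for group, (applicants, selected) in counts.items()}

def four_fifths_flags(counts, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` (default 80%)
    of the highest group's rate, per the EEOC four-fifths rule of thumb."""
    rates = selection_rates(counts)
    best = max(rates.values())
    return {group: rate / best
            for group, rate in rates.items()
            if rate / best < threshold}

# Hypothetical monitoring data: (applicants, advanced past automated screening)
counts = {
    "under_40": (400, 120),    # 30% selection rate
    "40_and_over": (300, 60),  # 20% selection rate
}
print(four_fifths_flags(counts))
# {'40_and_over': 0.666...}: below the 0.8 threshold, so worth investigating
# before it turns into the kind of pattern alleged in the Workday case.
```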
5 Tips for Vendors
While not a complete surprise given all the talk from regulators and others in government regarding concerns with bias in automated decision-making tools, this lawsuit should grab the attention of any developer of AI-based hiring tools. When taken in conjunction with the recent ACLU action against Aon Consulting for its use of AI screening platforms, it seems the period of government merely expressing concerns has given way to action. While plaintiffs’ attorneys and government enforcement officials have typically focused on employers when it comes to alleged algorithmic bias, it was only a matter of time before they turned their attention to the developers of these products. Here are some practical steps AI vendors can take now to deal with the threat.
Commit to Trustworthy AI – Make sure the design and delivery of your AI solutions are both responsible and transparent. This includes reviewing marketing and product materials.
Review Your Work – Engage in a risk-based review process throughout your product’s lifecycle. This will help mitigate any unintended consequences.
Team With Your Lawyers – Work hand-in-hand with counsel to help ensure compliance with best practices and all relevant workplace laws – and not just law prohibiting intentional discrimination, but also those barring the unintentional “disparate impact” claims as we see in the Workday lawsuit.
Develop Bias Detection Mechanisms – Implement robust testing and validation processes to detect and eliminate bias in your AI systems. This includes using diverse training data and regularly updating your algorithms to address any identified biases.
Lean Into Outside Assistance – Collaborate with external auditors or third-party reviewers to ensure impartiality in your bias detection efforts.
Original article: https://www.salary.com/newsletters/law-review/agency-law-and-the-workday-lawsuit/
August 10, 2024
Judge Allows AI Bias Lawsuit Against Workday to Proceed

Workday faces a class action over alleged bias in its AI screening software. Judge Rita Lin of the U.S. District Court for the Northern District of California ruled that Workday may be treated as an employer covered by federal anti-discrimination laws because it performs screening functions that its customers would ordinarily perform themselves. The ruling could have significant implications for legal liability when AI is used in hiring. The suit was brought by Derek Mobley, who says he was rejected more than 100 times by companies that are Workday customers because he is Black, over 40, and suffers from anxiety and depression. The EEOC has warned employers that they may be held liable if they fail to prevent screening software from having discriminatory effects.
July 15 (Reuters) – A federal judge in California denied Workday's attempt to dismiss a proposed class action alleging that the artificial intelligence software the company uses to screen job applicants for other businesses incorporates existing biases.
In the first ruling of its kind, U.S. District Judge Rita Lin said on Friday that Workday can be treated as an employer covered by federal workplace discrimination laws because it performs screening functions that its customers would ordinarily carry out themselves.
Lin declined to dismiss several claims brought in 2023 by Derek Mobley, who says that because he is Black, over 40, and suffers from anxiety and depression, he was rejected for more than 100 positions at companies that contract with Workday.
The case is the first proposed class action to challenge the use of AI screening software and could set an important precedent on the legal implications of using AI to automate hiring and other employment functions, technology that most large companies now use.
Lin dismissed the claims that Workday intentionally discriminated on the basis of race and age. She also ruled that the company cannot be treated as an "employment agency" under anti-bias laws because, unlike staffing firms, it does not procure jobs for workers.
A Workday spokesperson said in a statement that the company was pleased Lin dismissed some of the claims. "We are confident that as the case moves to the next stage we will easily rebut the remaining claims, because we will have the opportunity to challenge their accuracy directly," the spokesperson said.
Mobley's lawyer did not immediately respond to a request for comment. The lawsuit alleges that Workday trains its AI software to screen for the best applicants using data about companies' existing employees, without accounting for the existing discrimination that data may reflect.
Mobley accuses Workday of race, age, and disability discrimination in violation of Title VII of the Civil Rights Act of 1964 and other federal anti-discrimination laws. The proposed class could include hundreds of thousands of people.
Workday has argued that it is not covered by workplace bias laws because it was not Mobley's prospective employer and is not an employment agency that can be held liable for discrimination, since it does not make hiring decisions for its customers.
But Lin said on Friday that anti-bias laws are meant to protect workers broadly and to prevent employers from escaping liability by outsourcing tasks such as screening applicants, and that Workday can be held liable as an agent of its customers.
"(The lawsuit) plausibly alleges that Workday's customers delegated traditional hiring functions, including rejecting applicants, to the algorithmic decision-making tools Workday provides," wrote Lin, an appointee of Democratic President Joe Biden.
The U.S. Equal Employment Opportunity Commission, which enforces federal laws prohibiting workplace discrimination, had urged Lin in an April brief to let the case proceed. The agency has warned employers that they may be held legally responsible if they fail to prevent screening software from having discriminatory effects.