California Employers Must Take GINA Seriously: How One Medical Exam Form Can Trigger $1 Million in Compliance Risk
NACSHR Overview: Even a seemingly routine medical exam form can put a company on the wrong side of GINA. NACSHR's latest California employer compliance guide notes that GINA protects more than DNA test reports; it also covers family medical history. Dollar General was sued by the EEOC for requiring applicants, during pre-employment medical exams, to disclose family members' histories of cancer, diabetes, and heart disease, and ultimately settled for $1 million. For California employers the stakes are higher: federal GINA generally applies to employers with 15 or more employees, while California's FEHA can apply starting at 5. Companies should focus on three categories of documents: pre-employment medical exam forms, wellness program questionnaires, and FMLA/CFRA/ADA/FEHA medical certification requests. The most practical fix is to delete family medical history questions and add a GINA Safe Harbor notice to every request for medical information.
The United States has a federal law dedicated to protecting employees' genetic information. Violating it does not require malicious discrimination; a single badly designed medical exam form is enough.
Start with a Real Case
In 2017, the EEOC (Equal Employment Opportunity Commission) sued Dollar General (the nationwide discount retail chain, formally Dolgencorp, LLC), alleging that pre-employment medical exams at its Bessemer, Alabama distribution center required applicants to disclose family members' histories of cancer, diabetes, and heart disease, in violation of the federal GINA (Genetic Information Nondiscrimination Act) and the ADA (Americans with Disabilities Act).
These questions look routine; many medical exam forms are designed the same way.
In 2022, the court ruled that Dollar General's practice violated GINA. In October 2023, the case settled for $1 million, covering roughly 500 affected individuals. The company was also required to revise all related policies and implement mandatory annual GINA compliance training.
The EEOC's official statement was explicit: "Asking applicants during the hiring process about the medical histories of their grandparents, parents, or children violates GINA. Employers are prohibited from obtaining such information, whether or not it is used to deny employment."
Dollar General is not a small company. It has a full legal team, an HR department, and a compliance budget, and it still made this mistake. As a California small business owner, your risk may well be no smaller.
What Is GINA?
GINA, the Genetic Information Nondiscrimination Act, was signed into federal law in 2008 and is codified at 42 U.S.C. §2000ff.
The law's core logic is simple: an employee's genetic information may affect neither their insurance benefits nor their employment opportunities.
The law has two parts. Title I governs health insurance, is enforced by the Department of Labor's EBSA, and prohibits group health plans from adjusting premiums or denying coverage based on genetic information. Title II governs employment, is enforced by the EEOC, and prohibits employers from acquiring, using, or disclosing employees' genetic information in any way.
Both parts bind California employers. An employer that sponsors a group health plan carries obligations on both tracks at once.
"Genetic Information" Means More Than a DNA Test Report
This is where most employers go wrong.
According to the DOL's official FAQ, all of the following count as genetic information protected by GINA: the employee's own genetic test results; the genetic test results of the employee's family members; the family medical history of the employee or their family members; the fact that the employee or a family member has sought or received genetic counseling; and the genetic information of a fetus carried by the employee or of an embryo held through assisted reproduction.
Family medical history is the most frequent risk point.
A father with coronary artery disease, a mother with breast cancer, a grandfather with diabetes: once information like this in a medical record comes into the employer's view, the risk has already arisen.
To be clear, an employee's own diagnosed conditions are health information, not genetic information, and fall outside GINA's scope. An employee's age and sex are likewise not genetic information.
What Can't an Employer Do?
According to official EEOC guidance, all of the following are unlawful.
On the employment side, an employer may not acquire genetic information about employees or their family members in any way; may not use genetic information in hiring, firing, promotion, compensation, or any other employment decision; may not disclose an employee's genetic information to third parties; and may not retaliate against an employee for filing a GINA complaint or refusing to provide genetic information.
On the health plan side, a group health plan may not adjust group premiums based on genetic information; may not collect family medical history before or during enrollment; and may not use reward mechanisms in health questionnaires to induce employees to provide genetic information.
The Three Most Common Traps
Trap 1: Medical Certification Forms
An employee requests FMLA or CFRA leave, and you ask for a doctor's certification. The doctor casually notes "family history of diabetes" or "father had heart disease" on the form. The moment that form reaches your hands, you have acquired genetic information.
Under the law, if the employer did not state up front "please do not provide genetic information," even inadvertent acquisition can create liability. Patching this gap is extremely simple: add one standard notice to the form, covered in detail below.
Trap 2: Wellness Program Questionnaires
Your company offers a wellness program, and employees who complete a Health Risk Assessment questionnaire receive a premium discount or a gift card. If the questionnaire includes family medical history and the reward is tied to completion, that counts as "collecting genetic information for underwriting purposes" and directly violates GINA.
The recommended practice is strict separation: a questionnaire with no family medical history may be tied to rewards; a questionnaire containing family medical history must be fully decoupled from rewards and kept separate from the enrollment process.
Trap 3: Pre-Employment Medical Exams
This is the exact trap Dollar General stepped into. A family medical history section on a pre-employment exam form is the most direct, unambiguous violation, with no gray area whatsoever. The fix is also the simplest: delete the section entirely and add a GINA Safe Harbor notice.
How Does a Company Achieve GINA Compliance?
Step 1: Post the Official Poster
The EEOC requires all employers with 15 or more employees to display the "Know Your Rights: Workplace Discrimination is Illegal" poster in the workplace; it explicitly lists genetic information as a protected category. Failing to post it, or posting an outdated version, carries a fine of approximately $698 per occurrence (the 2025 inflation-adjusted federal amount). The poster is a free download from the EEOC website, and electronic posting also satisfies the requirement. Poster download: www.eeoc.gov/poster
Or scan the QR code at the end of this article, add our WeChat assistant hinacshr, note "海报" (poster), and we'll send it to you for free.
Step 2: Add a Safe Harbor Notice to Every Request for Medical Certification
The EEOC provides the following Safe Harbor language in the federal regulations at 29 CFR Part 1635. Once it is included, even if a doctor still fills in family medical history, the employer is protected by GINA's safe harbor provision upon receiving it, and the compliance risk drops sharply.
Copy and use the following text as-is:
"The Genetic Information Nondiscrimination Act of 2008 (GINA) prohibits employers and other entities covered by GINA Title II from requesting or requiring genetic information of an individual or family member of the individual, except as specifically allowed by this law. To comply with this law, we are asking that you not provide any genetic information when responding to this request for medical information. 'Genetic information' as defined by GINA, includes an individual's family medical history, the results of an individual's or family member's genetic tests, the fact that an individual or an individual's family member sought or received genetic services, and genetic information of a fetus carried by an individual or an individual's family member or an embryo lawfully held by an individual or family member receiving assistive reproductive services."
Add this text to the top or bottom of every document that requests medical information: FMLA/CFRA medical certification forms, sick-leave documentation requests, ADA/FEHA reasonable accommodation forms, and so on. Set it up once and it stays effective.
Step 3: Audit Your Wellness Program Questionnaires
Review the questionnaire design of your current wellness program and confirm that family medical history questions are strictly separated from any reward mechanism. If necessary, split the questionnaire into two independent versions.
Step 4: Update the Employee Handbook
Add a GINA policy statement to the handbook, covering the company's commitment not to collect genetic information, the employee complaint channel, and confidentiality requirements for storing genetic information. This step is not mandatory, but in a dispute it effectively demonstrates the employer's compliance intent.
How Expensive Is a Violation?
Violation / Consequence:
- Failure to post the official EEOC poster: approximately $698 per occurrence (2025)
- Substantive violation, employer with 15–100 employees: damages capped at $50,000
- Substantive violation, 101–200 employees: damages capped at $100,000
- Substantive violation, 201–500 employees: damages capped at $200,000
- Substantive violation, 501 or more employees: damages capped at $300,000
These caps exclude back pay and attorneys' fees; actual total awards often far exceed them.
What Else Should California Employers Watch For?
Federal GINA applies at a threshold of 15 employees, but California's FEHA (Fair Employment and Housing Act) applies starting at 5 employees. It is enforced by the CRD (California Civil Rights Department), and its protection of genetic information is essentially as strict as federal GINA's.
Employer size and applicable law:
- 1–4 employees: parts of FEHA apply
- 5–14 employees: FEHA
- 15 or more employees: GINA + FEHA
The California CRD requires employers to post the FEHA anti-discrimination notice; the Chinese version is likewise provided free of charge. If more than 10% of employees at a workplace speak Chinese as their primary language, posting the Chinese version is mandatory.
Download link: https://calcivilrights.ca.gov/posters/?openTab=1
Or scan the QR code at the end of this article, add our WeChat assistant, note "海报" (poster), and we'll send it to you for free.
Questions? Contact Us
NACSHR (the North American Chinese Human Resources Association) has long focused on the HR compliance and employee benefits needs of Chinese-owned employers in California.
Email: nacshr818@gmail.com
Website: compliance.nacshr.org
ADA
March 26, 2026
Agency Law and the Workday Lawsuit
This article discusses the agency-law questions raised by the Workday lawsuit. The plaintiff claims that Workday's AI screening tool discriminated against him on the basis of race, age, and disability. The case raises the question of whether an HR technology vendor can be held directly liable for discriminatory outcomes. The legal complexities include AI's role in hiring decisions, agency liability, and the potential implications for employers and AI developers alike. The case is a reminder to employers to proceed carefully when deploying AI hiring tools and to guard against legal risk. AI developers must likewise ensure their products do not discriminate, because the lawsuit may set an important legal precedent.
Editor's Note
Agency Law and the Workday Lawsuit
Agency law is so old that it used to be called master and servant law. (That's different from slavery, where human beings were considered the legal property of other humans based on their race, gender, and age, which is partly why we have discrimination laws.)
Today, agency laws refer to principals and agents. All employees are agents of their employer, who is the principal. And employers can have nonemployee agents too when they hire someone to do things on their behalf. Generally, agents owe principals a fiduciary duty to act in the principal's best interest, even when that isn't the agent's best interest.
Agency law gets tricky fast because you have to figure out who is in charge, what authority was granted, whether the person acting was inside or outside that authority, what duty applies, and who should be held responsible as a matter of fairness and public policy.
Generally, the principal is liable for the acts of the agent, sometimes even when the agent acts outside their authority. And agents acting within their authority are rarely liable for their actions unless it also involves intentional wrongs, like punching someone in the nose.
Enter discrimination, which is generally a creature of statute that may or may not be consistent with general agency law even when the words used are exactly the same.
Discrimination is generally an intentional wrong, but employees are not usually directly liable for discrimination because making employment decisions is part of the way employment works and the employer is always liable for those decisions.
The big exception is harassment because harassment, particularly sexual harassment, is never part of someone's job duties. So in harassment cases, the individual harasser is liable but the employer may not be unless they knew what was going on and didn't do anything about it.
It's confusing and makes your head hurt. And that's just federal discrimination law. Other employment laws, both state and federal, deal with agent liability differently.
Now, let's move to the Workday lawsuit. In that case, the plaintiff is claiming that Workday was an agent of the employer, but not in the sense of someone the employer was directing. They are claiming that Workday has independent liability as an employer too because they were acting like an employer in screening and rejecting applicants for the employer.
But that's kinda the whole point of HR Technology—to save the employer time and resources by doing some of the work. The software doesn't replace the employer's decision making and the employer is going to be liable for any discrimination regardless of whether and how the employer used their software.
If this were a products liability case, the answer would turn on how the product was designed to be used and how the employer used it. But this is an employment law and discrimination case. So, the legal question here is whether a company that makes HR Technology can also be directly liable for discriminatory outcomes when the employer uses that technology.
We don't have an answer to that yet and won't for a while. That's because this case is just at the pleading stage and hasn't been decided based on the evidence. What's happened so far is Workday filed a motion to dismiss based on the allegations in the complaint. Basically, Workday said, "Hey, we're just a software company. We don't make employment decisions; the employer does. It's the employer who is responsible for using our software in a way that doesn't discriminate. So, please let us out of the case." Then the plaintiff and EEOC said it's too soon to decide that. If all of the allegations in the lawsuit are considered true, then the plaintiff has made viable legal claims against Workday.
Those claims are that Workday's screening function acts like the employer in evaluating applications and rejecting or accepting them for the next level of review. This is similar to what third party recruiters and other employment agencies do and those folks are generally liable for those decisions under discrimination law. In addition, Workday could even be an agent of the employer if the employer has directly delegated that screening function to the software.
We're not yet at the question of whether a software company is really an agent of the employer or is even acting like an employment agency. And even if it is, whether it's the kind of agency that has direct liability or whether it's just the employer who ends up liable. This will all depend on statutory definitions and actual evidence about how the software is designed, how it works, and how the employer used it.
We also aren't at the point where we look at the contracts between the employer and Workday, how liability is allocated, whether there are indemnity clauses, and whether these types of contractual defenses even apply if Workday meets the statutory definition of an employer or agent who can be liable under Title VII.
Causation will also be a big issue because how the employer sets up the software, its level of supervision of what happens with the software, and what's really going on in the screening process will all be extremely important.
The only thing that's been decided so far is that the plaintiff filed a viable claim against Workday and the lawsuit can proceed. Here are the details of the case and some good general advice for employers using HR Technology in any employment decision making process.
- Heather Bussing
AI Workplace Screener Faces Bias Lawsuit: 5 Lessons for Employers and 5 Lessons for AI Developers
by Anne Yarovoy Khan, John Polson, and Erica Wilson
at Fisher Phillips
A California federal court just allowed a frustrated job applicant to proceed with an employment discrimination lawsuit against an AI-based vendor after more than 100 employers that use the vendor’s screening tools rejected him. The judge’s July 12 decision allows the class action against Workday to continue based on employment decisions made by Workday’s customers on the theory that Workday served as an “agent” for all of the employers that rejected him and that its algorithmic screening tools were biased against his race, age, and disability status. The lawsuit can teach valuable lessons to employers and AI developers alike. What are five things that employers can learn from this case, and what are five things that AI developers need to know?
AI Job Screening Tool Leads to 100+ Rejections
Here is a quick rundown of the allegations contained in the complaint. It’s important to remember that this case is in the very earliest stages of litigation, and Workday has not yet even provided a direct response to the allegations – so take these points with a grain of salt and recognize that they may even be proven false.
Derek Mobley is a Black man over the age of 40 who self-identifies as having anxiety and depression. He has a degree in finance from Morehouse College and extensive experience in various financial, IT help-desk, and customer service positions.
Between 2017 and 2024, Mobley applied to more than 100 jobs with companies that use Workday’s AI-based hiring tools – and says he was rejected every single time. He would see a job posting on a third-party website (like LinkedIn), click on the job link, and be redirected to the Workday platform.
Thousands of companies use Workday’s AI-based applicant screening tools, which include personality and cognitive tests. They then interpret a candidate’s qualifications through advanced algorithmic methods and can automatically reject them or advance them along the hiring process.
Mobley alleges the AI systems reflect illegal biases and rely on biased training data. He notes the fact that his race could be identified because he graduated from a historically Black college, his age could be determined by his graduation year, and his mental disabilities could be revealed through the personality tests.
He filed a federal lawsuit against Workday alleging race discrimination under Title VII and Section 1981, age discrimination under the ADEA, and disability discrimination under the ADA.
But he didn’t file just any type of lawsuit. He filed a class action claim, seeking to represent all applicants like him who weren’t hired because of the alleged discriminatory screening process.
Workday asked the court to dismiss the claim on the basis that it was not the employer making the employment decision regarding Mobley, but after over a year of procedural wrangling, the judge gave the green light for Mobley to continue his lawsuit.
Judge Gives Green Light to Discrimination Claim Against AI Developer
Direct Participation in Hiring Process is Key – The judge’s July 12 order says that Workday could potentially be held liable as an “agent” of the employers who rejected Mobley. The employers allegedly delegated traditional hiring functions – including automatically rejecting certain applicants at the screening stage – to Workday’s AI-based algorithmic decision-making tools. That means that Workday’s AI product directly participated in the hiring process.
Middle-of-the-Night Email is Critical – One of the allegations Mobley raises to support his claim that Workday’s AI decision-making tool automatically rejected him was an application he submitted to a particular company at 12:55 a.m. He received a rejection email less than an hour later at 1:50 a.m., making it appear unlikely that human oversight was involved.
“Disparate Impact” Theory Can Be Advanced – Once the judge decided that Workday could be a proper defendant as an agent, she then allowed Mobley to proceed against Workday on a “disparate impact” theory. That means the company didn’t necessarily intend to screen out Mobley based on race, age, or disability, but that it could have set up selection criteria that had the effect of screening out applicants based on those protected criteria. In fact, in one instance, Mobley was rejected for a job at a company where he was currently working on a contract basis doing very similar work.
Not All Software Developers On the Hook – This decision doesn’t mean that all software vendors and AI developers could qualify as “agents” subject to a lawsuit. Take, for example, a vendor that develops a spreadsheet system that simply helps employers sort through applicants. That vendor shouldn’t be part of any later discrimination lawsuit, the court said, even if the employer later uses that system to purposefully sort the candidates by age and rejects all those over 40 years old.
5 Tips for Employers
This lawsuit could just as easily have been filed against any of the 100+ employers that rejected Mobley, and they still may be added as parties or sued in separate actions. That is a stark reminder that employers need to tread carefully when implementing AI hiring solutions through third parties. A few tips:
Vet Your Vendors – Ensure your AI vendors follow ethical guidelines and have measures in place to prevent bias before you deploy the tool. This includes understanding the data they use to train their models and the algorithms they employ. Regular audits and evaluations of the AI systems can help identify and mitigate potential biases – but it all starts with asking the right questions at the outset of the relationship and along the way.
Work with Counsel on Indemnification Language – It’s not uncommon for contracts between business partners to include language shifting the cost of litigation and resulting damages from employer to vendor. But make sure you work with counsel when developing such language in these instances. Public policy doesn’t often allow you to transfer the cost of discriminatory behavior to someone else. You may want to place limits on any such indemnity as well, like certain dollar amounts or several months of accrued damages. And you’ll want to make sure that your agreements contain specific guidance on what type of vendor behavior falls under whatever agreement you reach.
Consider Legal Options – Should you be targeted in a discrimination action, consider whether you can take action beyond indemnification when it comes to your AI vendors. Breach of contract claims, deceptive business practice lawsuits, or other formal legal actions to draw the third party into the litigation could work to shield you from shouldering the full responsibility.
Implement Ongoing Monitoring – Regularly monitor the outcomes of your AI hiring tools. This includes tracking the demographic data of applicants and hires to identify any patterns that may suggest bias or have a potential disparate impact. This proactive approach can help you catch and address issues before they become legal problems.
Add the Human Touch – Consider where you will insert human decision-making at critical spots along your hiring process to prevent AI bias, or the appearance of bias. While an automated process that simply screens check-the-box requirements such as necessary licenses, years of experience, educational degrees, and similar objective criteria is low risk, completely replacing human judgment when it comes to making subjective decisions stands at the peak of riskiness when it comes to the use of AI. And make sure you train your HR staff and managers on the proper use of AI when it comes to making hiring or employment-related decisions.
5 Tips for Vendors
While not a complete surprise given all the talk from regulators and others in government regarding concerns with bias in automated decision making tools, this lawsuit should grab the attention of any developer of AI-based hiring tools. When taken in conjunction with the recent ACLU action against Aon Consulting for its use of AI screening platforms, it seems the time for government expressing concerns has been replaced with action. While plaintiffs’ attorneys and government enforcement officials have typically focused on employers when it comes to alleged algorithmic bias, it was only a matter of time before they turned their attention to the developers of these products. Here are some practical steps AI vendors can take now to deal with the threat.
Commit to Trustworthy AI – Make sure the design and delivery of your AI solutions are both responsible and transparent. This includes reviewing marketing and product materials.
Review Your Work – Engage in a risk-based review process throughout your product’s lifecycle. This will help mitigate any unintended consequences.
Team With Your Lawyers – Work hand-in-hand with counsel to help ensure compliance with best practices and all relevant workplace laws – and not just law prohibiting intentional discrimination, but also those barring the unintentional “disparate impact” claims as we see in the Workday lawsuit.
Develop Bias Detection Mechanisms – Implement robust testing and validation processes to detect and eliminate bias in your AI systems. This includes using diverse training data and regularly updating your algorithms to address any identified biases.
Lean Into Outside Assistance – Meanwhile, collaborate with external auditors or third-party reviewers to ensure impartiality in your bias detection efforts.
Original article: https://www.salary.com/newsletters/law-review/agency-law-and-the-workday-lawsuit/