Artificial intelligence (AI) is transforming the workplace. From résumé screeners to video interview tools and performance monitoring software, automated decision-making promises speed and efficiency. But for employers, these tools carry serious legal risks. When algorithms affect who gets hired, promoted, or fired, employers remain responsible under federal, state, and local laws. Missteps can trigger discrimination lawsuits, regulatory enforcement, and reputational damage. In this article, we’ll break down the federal employment laws, state and local AI regulations, recent lawsuits and enforcement actions, and a compliance framework employers can use to stay ahead.
Why Is Automated Hiring and Firing Legally Risky?
AI systems can unintentionally replicate or amplify human bias. For example:
- Résumé screeners may downgrade applicants from women’s colleges.
- Video interview tools may disadvantage candidates with disabilities.
- Productivity algorithms may penalize older employees or pregnant workers.
The key legal issue is adverse impact (also called disparate impact) on protected groups. Employers cannot avoid liability by blaming the vendor: if your AI tool screens people out unfairly, you are on the hook.
Federal Employment Laws Governing AI Tools
Title VII of the Civil Rights Act
Title VII prohibits discrimination based on race, color, religion, sex (including sexual orientation and gender identity), and national origin. In 2023, the EEOC clarified that AI hiring tools are subject to the same adverse impact analysis as traditional selection tests. Employers must be able to show that a tool causing a disparate impact is job-related and consistent with business necessity, and should explore less discriminatory alternatives.
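To make "adverse impact analysis" concrete, the most common rule of thumb is the EEOC's four-fifths (80%) rule: if one group's selection rate is less than 80% of the most-favored group's rate, the disparity is generally treated as evidence of adverse impact. The short Python sketch below illustrates the arithmetic with entirely hypothetical pass-through numbers for a résumé screener; the four-fifths rule is a screening heuristic, not a legal safe harbor, and real analyses should be reviewed with counsel and, where appropriate, a statistician.

```python
# Hypothetical illustration of the EEOC "four-fifths" (80%) rule of thumb.
# The applicant counts below are invented for demonstration only.

applicants = {"group_a": 400, "group_b": 300}   # candidates screened by the tool
advanced   = {"group_a": 200, "group_b": 105}   # candidates the tool advanced

rates = {g: advanced[g] / applicants[g] for g in applicants}
best_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    flag = "potential adverse impact" if impact_ratio < 0.8 else "within 4/5 guideline"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({flag})")
```

In this made-up example, the second group's selection rate is 35% against 50% for the first group, an impact ratio of 0.70. Because that falls below the four-fifths benchmark, it would ordinarily prompt a closer validation review.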
Americans with Disabilities Act (ADA)
AI hiring tools cannot “screen out” qualified candidates with disabilities. For example, requiring all applicants to take a timed typing test without accommodation may violate the ADA. Employers must provide reasonable accommodations in any AI-mediated assessment.
Age Discrimination in Employment Act (ADEA)
AI tools cannot favor younger applicants. In 2023, the EEOC settled with iTutorGroup for $365,000 after its system allegedly auto-rejected applicants over 40.
Federal Trade Commission (FTC)
The FTC has warned it will pursue unfair or deceptive AI practices, including undisclosed bias or false claims about AI’s fairness. Employers and vendors alike fall under scrutiny.
State and Local AI Hiring Laws
Several states and cities have taken the lead in regulating AI employment tools:
- New York City Local Law 144: Requires bias audits of automated employment decision tools (AEDTs), public posting of results, and candidate notices.
- Illinois: The Artificial Intelligence Video Interview Act mandates candidate notice and consent, plus demographic reporting obligations. Amendments to the Illinois Human Rights Act further regulate discriminatory AI.
- Maryland: Employers must obtain written waivers before using facial recognition in interviews.
- Colorado AI Act (SB 24-205): Effective 2026, this law requires risk management, impact assessments, disclosures, and AG notification within 90 days of detecting algorithmic discrimination.
- California (CCPA/CPRA ADMT Rules): Pending final approval, these regulations would require notices, access rights, and appeal processes for AI tools in employment decisions.
AI Hiring Lawsuits and Enforcement Actions
EEOC v. iTutorGroup
The EEOC’s first AI discrimination settlement demonstrated that employers face liability when their algorithms reject candidates based on age. The key facts in this case are as follows:
- Parties: The case was brought by the U.S. Equal Employment Opportunity Commission (EEOC) against iTutorGroup, Inc., Tutor Group Limited, and Shanghai Ping’An Intelligent Education Technology Co., Ltd.
- What the company does: iTutorGroup provides English-language tutoring remotely (often to students in China) using U.S.-based tutors working from home.
- Alleged discrimination mechanism: The EEOC alleged that in early 2020 iTutorGroup’s online hiring software was programmed to automatically reject female applicants aged 55 and older and male applicants aged 60 and older. More than 200 U.S.-based qualified applicants were rejected on that basis.
- How it was discovered: The EEOC complaint says that one rejected applicant applied with her true birth date, was rejected, then reapplied (same qualifications) but using a more recent birth date—and got an interview. This was used as evidence of discriminatory treatment.
Mobley v. Workday
A proposed class action alleges that Workday’s AI hiring software discriminated against applicants based on race, age, and disability. A federal court has allowed portions of the case to proceed, signaling potential exposure not only for employers but also for software vendors.
Monitoring and Firing Risks
Wearables and monitoring systems used for productivity scoring can run afoul of the ADA and Title VII if they collect health or disability-related data or disproportionately penalize protected groups, and terminations driven by their output carry the same adverse impact risk as automated hiring decisions.
Compliance Framework for Employers
To reduce risk, employers should adopt a proactive compliance program:
- Inventory AI Tools: Create a register of all automated systems used in hiring, promotion, evaluation, and termination.
- Bias Testing & Audits: Run adverse impact analyses regularly (a minimal monitoring sketch appears after this list). In NYC, arrange independent bias audits and publish results.
- ADA Accommodations: Offer alternatives for applicants with disabilities.
- Human Oversight: Ensure real human review of automated decisions.
- Vendor Contracts: Require documentation, cooperation on audits, and jurisdiction-specific compliance support.
- Transparency & Notices: Inform candidates when AI is used, describe its purpose, and provide appeal mechanisms where required.
- Monitoring & Logging: Continuously test tools for disparate impact and maintain logs for regulators.
- Feature Selection: Avoid proxies (like zip code or video background) that may encode bias.
- Training: Educate HR and managers on AI compliance obligations.
- Recordkeeping: Preserve audits, notices, validation studies, and vendor assurances.
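As a rough illustration of the "Bias Testing & Audits" and "Monitoring & Logging" items above, the Python sketch below (with hypothetical column names, file paths, and a hypothetical tool identifier) shows how decision logs from an automated tool might be re-checked on a schedule: selection rates are computed per group, impact ratios below the four-fifths benchmark are flagged, and each run is appended to an audit log that can later be produced for a regulator or an independent auditor. A real program would pull from your ATS or HRIS and should be designed with counsel.

```python
"""Hypothetical periodic disparate-impact check over decision logs.

Assumes a CSV export (decisions.csv) with one row per candidate and two
columns: 'group' (self-reported demographic category) and 'advanced'
(1 if the tool advanced the candidate, 0 otherwise). The column names,
file paths, and 0.8 threshold are illustrative, not prescribed by statute.
"""
import csv
import json
from collections import defaultdict
from datetime import datetime, timezone

def impact_ratios(path="decisions.csv"):
    totals, advanced = defaultdict(int), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["group"]] += 1
            advanced[row["group"]] += int(row["advanced"])
    rates = {g: advanced[g] / totals[g] for g in totals if totals[g]}
    best = max(rates.values()) or 1.0  # guard against a run where no one advanced
    return {g: {"selection_rate": round(r, 3),
                "impact_ratio": round(r / best, 3),
                "flagged": r / best < 0.8}
            for g, r in rates.items()}

def log_audit(results, log_path="bias_audit_log.jsonl"):
    entry = {"run_at": datetime.now(timezone.utc).isoformat(),
             "tool": "resume_screener_v2",  # hypothetical tool identifier
             "results": results}
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    results = impact_ratios()
    log_audit(results)
    for group, stats in results.items():
        print(group, stats)
```

Keeping these log entries alongside vendor documentation, validation studies, and any independent audit reports supports the recordkeeping item above and makes it easier to respond to an agency charge or a bias-audit inquiry.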
Key Takeaways
- Federal laws already apply: Title VII, ADA, and ADEA govern AI hiring and firing tools.
- States add new obligations: NYC bias audits, Illinois video interview rules, Maryland consent, Colorado’s AI Act, and California’s ADMT regulations are raising the bar.
- Lawsuits are growing: iTutorGroup and Workday cases highlight real risks.
Employers that adopt structured compliance programs—combining bias testing, accommodations, transparency, and human oversight—will not only reduce litigation risk but also build trust with candidates and employees.
Conclusion
AI in employment is here to stay, but so is the law. Employers that treat AI hiring and firing tools as regulated employment tests—with careful audits, documentation, and human oversight—will be best positioned to leverage AI responsibly while staying on the right side of the law. Please feel free to contact our law firm to speak with an artificial intelligence attorney.