Employers are increasingly using Automated Decision Systems (“ADS”) – tools powered by Artificial Intelligence (“AI”) and algorithms – to streamline recruitment and manage the employee lifecycle. Uses so far include resume screening, facial recognition in video interviews, performance evaluation and management, employee engagement, and even benefits enrollment. As adoption of these technologies accelerates by the day, California is stepping in with significant new regulations effective on October 1, 2025 that will govern employers’ use of ADS in hiring and employment practices.
While AI-based technologies such as ADS offer efficiency and scale, they also present significant risks, particularly the potential for discriminatory outcomes and disparate impact. That is in part because, despite its “intelligence,” AI can’t grasp context, consider nuance, or apply empathy in decision-making (at least, not yet).
To address and help reduce these tools’ potentially unlawful discriminatory outcomes on candidates and employees, in March 2025, the California Civil Rights Council released “Proposed Employment Regulations Regarding Automated-Decision Systems.” The regulations were officially approved by the Office of Administrative Law on June 27, 2025 and are expected to become effective on October 1, 2025. Once effective, they will have the same legal effect as any other state regulations issued under the Fair Employment and Housing Act (FEHA) and will, for the first time in California, make algorithmic bias expressly unlawful in the employment context.
This post explains what employers, HR professionals, and employment attorneys need to know now before the regulations go into effect.
What are Automated Decision Systems?
The regulations define Automated Decision Systems or ADS as: “a computational process that makes a decision or facilitates human decision making regarding an employment benefit, as defined in section 11008(i) of these regulations. An Automated-Decision System may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.”
The regulations also define related technology terms including “algorithm,” “machine learning,” and “Automated-Decision System Data.”
The Substance of the New Regulations
Under the regulations:
- Use of ADS is unlawful where the system or its selection criteria discriminate against an applicant or an employee, unless the use is justified by a permissible defense such as business necessity.
- Use of ADS is unlawful where a technology or algorithm may screen out applicants based on their schedule availability.
  - Such screening criteria may have a disparate impact on people who, for example, observe a religious Sabbath, are disabled, have a relationship with a disabled person, or have medical restrictions.
  - Use of these screening criteria is presumptively unlawful unless it is job-related, consistent with business necessity, and accompanied by a mechanism for an applicant to request accommodations.
- Use of ADS is unlawful where tools measure an applicant’s traits such as reaction time, physical dexterity, facial expressions, or tone of voice.
  - These tools, which include ADS used in applicant video interviews, may inadvertently disadvantage applicants with disabilities.
  - Employers may be required to provide reasonable accommodations or alternative assessments to affected applicants.
- Employers are subject to new recordkeeping requirements and must retain ADS-related records (including data on selection criteria, use of ADS tools, and personnel records created through the use of ADS tools) for four years from the date of creation or of the related personnel action.
- Employers are liable for third-party ADS vendors and software providers whose tools create discriminatory outcomes.
  - In a particularly significant provision, the regulations make employers legally responsible for the actions of the third-party vendors that provide or operate their ADS tools and related software.
  - If a business relies on a vendor whose system or tool violates the regulations or otherwise produces discriminatory outcomes, liability may attach to the business as well as to the vendor.
Suggested Action Items: “AI Risk Mitigation” Steps for Employers and HR Leaders
Businesses with employees in California may wish to consider the following steps to reduce their risk under the new regulations:
- Audit, revise, and update policies now to comply with the new regulations, and then annually to incorporate updates to California law. This includes implementing effective Anti-Bias Testing Protocols for internal use in assessing technology options (a simple illustration of one such check appears after this list).
- Review (and/or implement) cybersecurity protocols to ensure protection of confidential information, as defined in your Employee Handbook or in Confidentiality Agreements.
- Audit your company’s technology and software systems, including reviewing any existing sites that now incorporate AI and ADS tools.
  - Identify any ADS or algorithmic tools used in recruiting, hiring, performance evaluations, or workforce management.
- Choose vendors wisely by, for example, asking about their anti-bias testing protocols before you sign up for their software or site (e.g., How was the AI trained? Has the ADS been tested for bias? Can the ADS’s decisions be overridden or adjusted?).
  - Ensure there are records that comprehensively show how anti-bias testing protocols are implemented in vendor audits.
- Review vendor agreements to make sure they include robust indemnity provisions, transparency obligations, and compliance assurances.
- Train your team on the risks inherent in relying on tools that use ADS, AI, and algorithms.
  - Make sure the final say on any applicant or employee decision is made by a trained, qualified human being.
  - Ensure your team has processes for verifying any factual research obtained through AI, and that no algorithm-based decision is relied on without due diligence.
- Ensure that employees and applicants are aware of the process for submitting requests for reasonable accommodation pursuant to your policies and applicable law.
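For teams building out the Anti-Bias Testing Protocols mentioned above, the sketch below shows one common screening metric, the “four-fifths rule” from the EEOC’s Uniform Guidelines, applied to hypothetical ADS selection outcomes. The group labels, counts, and function names are illustrative assumptions rather than anything drawn from the California regulations, and a flagged ratio is a prompt for closer review with counsel, not a legal conclusion.

```python
# Minimal sketch of one common anti-bias check: the "four-fifths rule"
# used under the EEOC Uniform Guidelines as a rough screen for adverse
# impact. All group names and counts below are hypothetical; a real
# protocol would be designed with counsel and validated statistically.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group that the ADS advanced."""
    return selected / applicants if applicants else 0.0

def adverse_impact_ratios(results: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    `results` maps a group label to (selected, total applicants).
    A ratio below 0.8 is the conventional flag for possible adverse impact.
    """
    rates = {group: selection_rate(sel, total) for group, (sel, total) in results.items()}
    top_rate = max(rates.values())
    return {group: (rate / top_rate if top_rate else 0.0) for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes from an ADS resume-screening tool.
    outcomes = {"Group A": (48, 100), "Group B": (30, 100)}
    for group, ratio in adverse_impact_ratios(outcomes).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```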
Final Thoughts
While ADS and related AI technology can increase efficiency and streamline logistics, as we all know, HR decisions are rarely black and white. Nuance matters. As Fox Rothschild partner Sahara Pynes wisely opined in 2023, “technology is no substitute for a human touch.”
Once effective, California’s new regulations will send a clear message about ADS tools: employers can’t hide behind the algorithm; the law still applies; and the smartest approach blends new technology with old-school compliance.
We will continue to monitor the status of the regulations and provide updates when they are effective.