Artificial intelligence has increasingly found its way into hiring systems and processes, delivering a timely solution to growing application volumes and the soaring complexity of skills requirements for a constantly evolving workforce. However, the difficult-to-scrutinize nature of AI-based decision-making has prompted several states and localities to create laws regulating the use of AI in hiring. In this article, we look at these laws and what your organization needs to know in order to comply.
Regulations Governing AI Use in Selection Systems
California
In June 2025, the California Civil Rights Council amended the state’s code of regulations with a new rulemaking action that defined several AI-related terms. It also made it unlawful for an employer or other covered entity “to use an automated-decision system or selection criteria that discriminates against an applicant or employee or a class of applicants or employees on a basis protected by the Fair Employment and Housing Act.”
Examples added to the regulations include:
- How using an automated-decision system that measures an applicant’s skill, dexterity, reaction time, and/or other abilities or characteristics may discriminate against individuals with certain disabilities.
- How the use of an automated-decision system to analyze an applicant’s tone of voice, facial expressions, or other physical characteristics may discriminate based on a range of characteristics protected under the Fair Employment and Housing Act.
The amendments make it clear that in the event of a claim, the availability of anti-bias testing “or similar proactive efforts to avoid unlawful discrimination” will be considered. Provisions concerning automated-decision systems data were also added to the code of regulations’ rules around the preservation of records, with employers instructed to retain this information for four years from the date of the record’s creation.
Effective: October 1, 2025
Applies to: Employers with five or more employees
California has also made changes to the California Consumer Privacy Act, though they will not take effect until 2027. These changes require employers that use automated decision-making technology in hiring and employment processes to notify applicants/employees that such tools are in use.
Employers must offer applicants and employees the ability to opt out of such systems, unless the employer allows any automated decision to be appealed to a human decision maker for review.
The amendments require employers to perform risk assessments of their automated hiring tools and to respond to information requests with plain language explanations of the purpose, logic, and outcomes of tool use.
Effective: January 1, 2027
Applies to: Employers subject to the California Consumer Privacy Act, meaning those that meet any of the following:
- Have annual revenue of over $25 million
- Buy, sell, or share the personal information of 100,000 or more California residents or households
- Derive 50% or more of annual revenue from the sale of California residents’ personal information
CATCH UP ON THE LATEST TRENDS | ‘The 6 Biggest Compliance Themes From SHRM Blueprint’
Colorado
Senate Bill 24-205 amended the Colorado Revised Statutes “Concerning consumer protections in interactions with artificial intelligence systems.” The new provisions make it clear that both selection tool developers and deployers have a duty to avoid “algorithmic discrimination.” They require employers to create AI risk management policies and to complete impact assessments for all AI systems. Any business that deploys an AI system must publish a statement disclosing its use and explaining the information it collects.
The law also requires employers to give consumers the ability to correct incorrect data processed by the AI, while also providing an appeals process for consequential decisions.
Effective: February 1, 2026
Applies to: All employers, though some requirements do not apply when the employer:
- Has fewer than 50 full-time employees
- Does not use its own data to train the AI
- Makes an AI impact assessment available to users
New York City
The administrative code of New York City was amended in 2021 to include a new subchapter on automated employment decision tools. The law took effect in January 2023 and has been actively enforced since July of the same year. It states that automated employment decision tool use is prohibited unless the employer can:
- Demonstrate that the tool has been subject to a bias audit within the last year; and
- Provide a summary of the results of the bias audit on the organization’s website
Employers must also disclose the use of automated employment decision tools to employees or candidates. This disclosure must detail the purpose of the tool and the characteristics it will look for when making its determination.
Applies to: All employers
FUTURE CONSIDERATIONS FOR COMPLIANCE | ‘The False Claims Act: How DEI Risk Is Evolving, the Hefty Fines the DOJ Is Threatening, and How Affirmity Can Help’
Texas
The Texas Responsible Artificial Intelligence Governance Act is an amendment to Section 503.001 of the state’s Business and Commerce Code. It prohibits the development of AI systems for a range of purposes, including discrimination. It also establishes an entity called the Texas Artificial Intelligence Advisory Council, charged with conducting educational programs on AI for state agencies and local governments, and issuing reports on AI-related topics to inform the Texas legislature’s policies.
While the act states that “A person may not develop or deploy an artificial intelligence system with the intent to unlawfully discriminate against a protected class in violation of state or federal law,” it also establishes that “a disparate impact is not sufficient by itself to demonstrate an intent to discriminate.”
Effective: January 1, 2026
Applies to: All employers
Regulations Governing AI Use in Facial Recognition and Video Interviews
Illinois
Part of a slightly older wave of AI-related laws, Illinois’ act governing the use of artificial intelligence to analyze video interviews has been in effect since 2020. Employers that use such methods must notify each applicant prior to the interview, provide information about how this AI process works, and obtain the participant’s consent. Any videos recorded for this purpose should only be shared “with persons whose expertise or technology is necessary in order to evaluate an applicant’s fitness for a position”, and applicants can request that these videos be destroyed (with employers given 30 days to comply).
Employers that rely solely on this methodology must collect and report the race and ethnicity data of applicants and hires to the Illinois Department of Commerce and Economic Opportunity by December 31 each year.
Applies to: All employers
DIVE DEEP INTO AI IN OUR WHITE PAPER | ‘The Influence of Artificial Intelligence on Organizational Diversity and Hiring Regulations’
Maryland
Maryland has a law similar to Illinois’, effective as of October 2020, that requires employers to obtain consent from an applicant before using “certain facial recognition service technologies during an interview.” The law does not mention artificial intelligence specifically, instead prohibiting the creation of a machine-interpretable “facial template” by any technology that analyzes facial features.
The waiver that applicants sign to consent must include their name, the date of the interview, and confirmation that they have read and understood what they’re being asked to sign.
Applies to: All employers
Ensure your use of AI is legal and effective with Affirmity’s AI risk assessment services. Contact our team today to get started.
About the Author
Grace Mazar oversees marketing at Affirmity with direct responsibility for go-to-market strategy, product marketing and positioning, demand generation, and digital marketing.
A seasoned B2B marketing leader with over 20 years of experience, Grace applies her deep industry knowledge and proven outcomes-based marketing tactics to reach and influence targeted audiences.