Artificial Intelligence: How Are Regulators Responding to the New Tech Frontier for Hiring and HR?

The rise of new AI-based solutions for candidate assessment, and the complex interplay of benefits and risks that the technology promises, have prompted responses at the local, state, and national levels. In this article, Affirmity Principal Business Consultant Patrick McNiel, PhD, takes a closer look at the key current and upcoming guidelines and regulations you’ll need to understand if your organization is considering deploying AI in its talent decision processes.

The Response From the Equal Employment Opportunity Commission and Department of Justice

The EEOC released a technical assistance document in 2022 called ‘The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees’. This document is primarily concerned with three things:

  1. Ensuring a process to provide reasonable accommodations is in place when AI tools are used.
  2. Ensuring safeguards are put into place that prevent individuals with disabilities from being improperly screened out even when they can do the job with or without accommodation.
  3. Ensuring AI tools don’t inadvertently make prohibited disability-related inquiries or conduct medical exams.

The EEOC additionally released similar Title VII-related guidance, called ‘Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964’. The EEOC also has a wider Artificial Intelligence and Algorithmic Fairness Initiative.


Further Responses at the Federal Level

Meanwhile, the DOJ has released its own guidance, ‘Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring’. This guidance:

  1. States that employers using AI tools must consider how those tools could affect individuals with different disabilities.
  2. Covers employer obligations when using AI.

Further to this guidance, the EEOC and DOJ, along with the Federal Trade Commission and the Consumer Financial Protection Bureau, issued a ‘Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems’ in April 2023. In summary, the agencies stated that they are watching the development of AI closely and looking for uses that could cause unlawful discrimination or violate other federal laws. Key existing concerns were said to be:

  • Uses of data that can create biased systems
  • The opacity of AI systems preventing assessments of fairness
  • Issues of design and use that can adversely affect various populations

The OFCCP has previously stated via an FAQ on employee selection procedures that “Irrespective of the level of technical sophistication involved, OFCCP analyzes all selection devices for adverse impact.” This means organizations using AI algorithms must hold them to the same standards, laid out in the Uniform Guidelines on Employee Selection Procedures, that they apply to any other selection test.
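As a rough illustration of the kind of adverse impact analysis the Uniform Guidelines describe, the widely cited four-fifths (80%) rule compares each group’s selection rate against the highest group’s rate. The sketch below uses entirely hypothetical numbers, and real analyses typically add statistical significance testing rather than relying on the ratio alone:

```python
# Illustrative four-fifths (80%) rule check, in the spirit of the
# Uniform Guidelines on Employee Selection Procedures.
# All figures below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of applicants who passed the selection step."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, float]:
    """Compare each group's selection rate with the highest rate.

    A ratio below 0.8 is commonly treated as initial evidence of
    adverse impact (further statistical tests would normally follow).
    """
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical outcomes from an AI-scored screening step
rates = {
    "group_a": selection_rate(48, 100),  # 0.48
    "group_b": selection_rate(30, 100),  # 0.30
}
ratios = four_fifths_check(rates)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_b ratio ≈ 0.30 / 0.48 = 0.625
print(flagged)  # group_b falls below the 0.8 threshold
```

The same check applies whether the “selection device” is a paper test or an AI scoring algorithm, which is precisely the OFCCP’s point.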

Finally, at the federal level, organizations should be aware of the Algorithmic Accountability Act, a bill introduced in February 2022. If passed, it would require companies deploying AI for decisions that could be discriminatory to assess those systems and mitigate any negative impacts.

AN OFCCP SCHEDULING LETTER REFRESHER | ‘What Are the 26 Items on the OFCCP’s Scheduling Letter and Itemized Listing?’

State and Local-Level Laws to Be Aware Of

Perhaps the most well-known state-level AI-related law currently on the books is the Illinois Artificial Intelligence Video Interview Act. This act focuses on informed consent, transparency, and privacy. If employers use video interview technology that assesses the facial features, expressions, speech, or other characteristics of applicants for selection purposes, then they must:

  • Notify applicants in writing that this is being done
  • Describe what’s being assessed by the AI
  • Obtain written consent before the interview
  • Destroy any recorded videos within 30 days of an applicant’s request to do so

Maryland has a law that covers similar ground, requiring an applicant’s written consent and a waiver.

New York City, meanwhile, has its Automated Employment Decision Tool (AEDT) law, which prohibits the use of such tools unless they have been subject to a bias audit within one year of use. Organizations must notify candidates that AI is being used and tell them which characteristics are being assessed.
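The bias audits required by the law center on “impact ratios”. As a sketch only (the scores below are hypothetical, and the city’s published rules define the exact methodology), one common approach for continuously scored tools compares each category’s rate of scoring above the pooled median:

```python
# Sketch of an AEDT-style "impact ratio" computation for a tool that
# produces continuous scores. Hypothetical data; consult the official
# rules for the authoritative bias-audit method.
from statistics import median

# Hypothetical AEDT scores keyed by demographic category
scores = {
    "category_a": [82, 75, 90, 68, 77, 85],
    "category_b": [70, 64, 81, 59, 66, 72],
}

# Pooled median across all scored candidates
all_scores = [s for group in scores.values() for s in group]
cutoff = median(all_scores)

# "Scoring rate": share of each category scoring above the pooled median
scoring_rates = {
    group: sum(s > cutoff for s in vals) / len(vals)
    for group, vals in scores.items()
}

# Impact ratio: each category's scoring rate relative to the highest rate
best = max(scoring_rates.values())
impact_ratios = {g: r / best for g, r in scoring_rates.items()}
print(impact_ratios)  # low ratios for a category suggest possible bias
```

A low impact ratio for any category would be the kind of audit finding that demands investigation before the tool is deployed.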

California currently has draft regulations that would require an annual impact assessment and safeguards against algorithmic discrimination. The proposed regulations go as far as requiring organizations to allow candidates to opt out of being subject to automated decision tools.


International Responses: The EU’s Artificial Intelligence Act

The European Union has a proposed piece of legislation, the Artificial Intelligence Act, that would have broad effects on AI use. This legislation classifies AI systems into three risk categories: limited risk, high risk, and unacceptable risk.

  • Limited-risk AI: AI systems, such as social media engines, natural language processing, and spell check, that need only comply with minimal transparency requirements.
  • High-risk AI: AI systems with the potential to negatively affect safety or fundamental rights. This category includes product categories within the EU’s product safety legislation and systems that fall within eight key categories:
    • Biometric identification and categorization of natural persons
    • Management and operation of critical infrastructure
    • Education and vocational training
    • Employment, worker management, and access to self-employment
    • Access to (and enjoyment of) essential private and public services and benefits
    • Law enforcement
    • Migration, asylum, and border control management
    • Assistance in legal interpretation and application of the law
  • Unacceptable AI: Prohibited AI usage that is considered a threat to people, such as social scoring systems (classifying people based on behavior, socio-economic status, or personal characteristics), real-time and remote biometric identification systems, and systems that manipulate people in general, or specific vulnerable groups.

Hiring and various other employment decisions fall under the high-risk category and will attract additional requirements under the act. Providers would need to register their systems in an EU-wide database and comply with a wide range of risk management, testing, technical robustness, training data and data governance, transparency, human oversight, and cybersecurity requirements. This would require a level of AI documentation and interpretability that may not currently be feasible.

Though still in draft form, this law, if passed, would apply to the 27 countries of the European Union (around half a billion people) and would likely have a global impact similar to that of the EU’s adoption and implementation of the GDPR in 2016-18.

Executive Order 14110

The most recent development in this space is the Biden administration’s October 2023 Executive Order 14110 on ‘Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’. For our purposes, the most significant section of the wide-ranging 36-page order is Section 6(b)(i), which directs the Secretary of Labor to “develop and publish principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits.” This promises “specific steps for employers to take with regard to AI”, covering:

  • Job displacement risks and career opportunities related to AI
  • Labor standards and job quality, including issues related to the equity, protected activity, compensation, health, and safety implications of AI
  • Implications for workers of employers’ AI-related collection and use of data about them

Society for Industrial and Organizational Psychology Guidelines

Looking beyond the regulators, it’s worth considering what professional associations have to say about AI. One such association is the Society for Industrial and Organizational Psychology (SIOP). In January 2022, SIOP issued a set of recommendations that should carry considerable weight, given the association’s standing as the technical authority on employment testing. Their guidelines are:

  1. AI-based assessments should produce scores that are considered fair and unbiased.
  2. The content and scoring of AI-based assessments should be clearly related to the job.
  3. AI-based assessments should produce scores that predict future job performance (or other relevant outcomes) accurately.
  4. AI-based assessments should produce consistent scores that measure job-related characteristics (e.g. upon re-assessment).
  5. All steps and decisions relating to the development and scoring of AI-based assessments should be documented for verification and auditing.

ALSO ON THE BLOG | ‘Affirmative Action Scope: When Is an AAP Required, Who Should Be Included, and How?’

Continue Reading About AI’s Implications for Diversity and Hiring

This article is an extract from our white paper, “The Influence of Artificial Intelligence on Organizational Diversity and Hiring Regulations: The Possibilities and Dangers of the New Tech Frontier”. The white paper explores how AI and machine learning technologies are already finding their way into organizational hiring processes while considering the potential benefits and risks of integrating AI in this way.


Download the full White Paper today.

Root out bias and adverse impact at key decision points in your talent acquisition processes. Contact Affirmity today to learn more about our talent acquisition process reviews.

About the Author

Patrick McNiel, PhD, is a principal business consultant for Affirmity. Dr. McNiel advises clients on issues related to workforce measurement and statistical analysis, diversity and inclusion, OFCCP and EEOC compliance, and pay equity. Dr. McNiel has over ten years of experience as a generalist in the field of Industrial and Organizational Psychology and has focused on employee selection and assessment for most of his career. He received his PhD in I-O Psychology from the Georgia Institute of Technology. Connect with him on LinkedIn.

Talk to an Expert or Request a Demo

Let Affirmity help your HR and compliance teams with expert consulting services, data analysis, training, and software to optimize your affirmative action and D&I programs.

