Use of Artificial Intelligence (AI) Tools by Employers Can Violate the Americans with Disabilities Act

In a story reported by the Washington Post, HireVue’s artificial intelligence (AI) software was found to have assessed over a million video job interviews. Its autonomous interview system asked questions of candidates, filmed their responses, and then used the resulting video and audio to assess candidates for jobs, such as in investment banking and accounting. The AI attempts to predict how a candidate will perform on the job based on how they act in an interview—their gestures, pose, lean, as well as their tone and cadence—and the content of their responses. This process produces an employability score, which employers use to decide who advances in the application process.

Yet a number of ethical AI observers have been sharply critical of HireVue. In the Washington Post story, AI Now Institute Co-Founder Meredith Whittaker calls this development “profoundly disturbing” and the underlying methodology “pseudoscience.” Princeton Computer Science Professor Arvind Narayanan says this is “AI whose only conceivable purpose is to perpetuate societal biases.” Scientific evidence suggests that accurately inferring emotions from facial expressions is very difficult, and it stands to reason that inferring personality traits is even harder, if it is possible at all.

What has not been noted, however, is the way in which these systems likely discriminate against people with disabilities. The problem is that even when a person with a disability has qualities that are a strong fit for a job, the AI is unlikely to recognize those qualities and may assign that candidate a low score.

Characteristics such as typical enunciation and speaking at a specific pace might correlate with being an effective salesperson. Further, perhaps leaning forward with one arm on the table signals an interpersonal comfort that prior high-performing salespersons often display. The AI system would have identified these relationships from the “training data”—the video interviews and sales outcomes collected from current employees. However, people with disabilities will not benefit if their qualities manifest physically in a way the algorithm has not seen in that training data. If their facial attributes or mannerisms differ from the norm, they get no credit, even if their traits would be just as beneficial on the job.
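To make this failure mode concrete, here is a minimal sketch in Python (scikit-learn, fully synthetic data, hypothetical feature names such as forward_lean); it illustrates the coverage problem in general, not HireVue's actual pipeline. A model trained only on candidates whose confidence registered as a forward lean assigns a low score to an equally capable candidate whose confidence cannot register that way:

```python
# Minimal sketch of the coverage problem, on synthetic data with
# hypothetical feature names. Not a reconstruction of any vendor's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: two interview-derived signals such a pipeline
# might extract from video and audio.
n = 1000
forward_lean = rng.normal(0.5, 0.2, n)   # posture signal seen in training
steady_pace = rng.normal(0.5, 0.2, n)    # speech-cadence signal
success = (0.8 * forward_lean + 0.2 * steady_pace
           + rng.normal(0, 0.1, n)) > 0.5

X = np.column_stack([forward_lean, steady_pace])
model = LogisticRegression().fit(X, success)

# A strong candidate whose confidence shows through channels the pipeline
# never measured (e.g., a wheelchair user who cannot "lean in"). Their
# features fall outside the learned pattern, so the model scores them low
# regardless of actual ability.
candidate = np.array([[0.0, 0.5]])           # no forward lean, typical pace
print(model.predict_proba(candidate)[0, 1])  # low "employability" score
```

The candidate is not penalized for anything they did wrong; the model simply has no evidence that their pattern of features can succeed, which is exactly the "no credit" problem described above.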

Disability advocates consider this a serious problem. Sheri Byrne-Haber, Head of Accessibility at VMware, has argued that “the range of characteristics of disability is very, very broad,” contributing to this algorithmic discrimination problem. Shari Trewin, an Accessibility Researcher at IBM, agrees, arguing: “The way that AI judges people is by who it thinks they’re similar to, even when it may never have seen anybody similar to them. That is a fundamental limitation in terms of fair treatment for people with disabilities.”

To address this problem, AI training data would have to include many people with diverse disabilities. Since each job type has a distinct model, this would have to be true for many different models (for context, HireVue has over 200 of them). While it is possible AI software could include such a range of individuals, doing so would take a tremendous effort. Without diverse training data, an AI system cannot learn the characteristics demonstrated by people with disabilities who later succeeded. With some of their qualities ignored, these candidates would be pushed toward the middle of the distribution. And since most applicants for any specific job do not get hired, applicants who resemble no high-performing past employees do not stand a chance.

On its ethical AI page, HireVue says it actively promotes equal opportunity “regardless of gender, ethnicity, age, or disability status.” Further, HireVue does allow people with disabilities to request more time for questions, and it implements other accommodations as requested by the employer. However, the core problem of drawing inferences from videos of people with disabilities remains. In the Post’s reporting, Nathan Mondragon, the chief industrial-organizational psychologist at HireVue, says that facial actions can make up 29% of a person’s employability score.

Broadly, this issue of coverage (whether the training data contain enough relevant examples) is a genuine concern when applying AI systems to people with disabilities. Potentially relevant to this software, research shows that speech recognition works poorly for people with atypical speech patterns, such as a deaf accent. Google researchers have demonstrated that some AI models treat language about having a disability as inherently negative. As another problematic example, imagine how driverless cars might learn typical human movements in order to stay out of the path of pedestrians. This is a type of situation in which humans still dramatically outperform AI: choosing not to narrowly interpret a situation based only on what we have seen before.
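The speech-recognition gap in that research can be quantified with a standard metric, word error rate (WER). The sketch below (invented transcripts, the open-source jiwer library) shows the kind of comparison one might run; a large WER gap means any downstream scoring that depends on the transcript systematically misreads candidates with atypical speech:

```python
# Sketch of measuring a speech-recognition disparity with word error rate.
# The transcripts are invented for illustration.
import jiwer

# Assumed data: what each speaker actually said vs. what the recognizer heard.
samples = {
    "typical_speech": ("schedule the product demo for tuesday",
                       "schedule the product demo for tuesday"),
    "atypical_speech": ("schedule the product demo for tuesday",
                        "shed you the prod demo or two day"),
}

for group, (reference, hypothesis) in samples.items():
    error_rate = jiwer.wer(reference, hypothesis)  # fraction of words wrong
    print(f"{group}: WER = {error_rate:.2f}")
```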

About 13% of Americans have a disability of some kind, and they already suffer worse employment outcomes. Their unemployment rate stands at 6.1%, twice that of people without disabilities. Americans without disabilities also out-earn their peers with disabilities: $35,000 to $23,000 in median earnings over the past year, according to the Census Bureau. Specifically relevant to facial analysis, estimates suggest that around 500,000 to 600,000 people in the United States have been diagnosed with a craniofacial condition, meaning an abnormality of the face or head. Additionally, millions of Americans have autism spectrum disorder, one of many conditions that can manifest in atypical facial expressions or speech patterns.

While systems like these may embolden recent calls for facial recognition bans, there are other policy implications as well. For algorithms that are crucial in hiring, companies should publicly release bias audit reports—summaries of the predictions made across subgroups, especially protected classes—rather than simply claiming their models have been evaluated and are bias-free. Further, the Equal Employment Opportunity Commission (EEOC) should review these systems and issue guidance on whether these systems violate the Americans with Disabilities Act. While there are many positive applications of AI for people with disabilities, we need to be especially careful that AI for video and audio analysis treats all people fairly. (1)
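As a sketch of what such a bias audit summary might compute, the snippet below (hypothetical column names, made-up outcomes) derives per-group selection rates and the adverse-impact ratio that the four-fifths rule of thumb in the EEOC's Uniform Guidelines examines; a real audit would use actual applicant outcomes, larger samples, and tests of statistical significance:

```python
# Sketch of a subgroup summary for a bias audit report. Column names and
# numbers are hypothetical; one row per applicant.
import pandas as pd

df = pd.DataFrame({
    "advanced": [1, 0, 1, 1, 0, 0, 1, 0, 0, 0],  # did the tool pass them on?
    "group":    ["no_disability"] * 5 + ["disability"] * 5,
})

# Selection rate per group, and the ratio of the lowest to the highest rate.
rates = df.groupby("group")["advanced"].mean()
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"adverse impact ratio: {impact_ratio:.2f}")  # below 0.80 flags a disparity
```

One practical caveat: disability status is frequently unknown or undisclosed, and, as noted above, disabilities are highly heterogeneous, so a single "disability" subgroup can mask the very cases that matter most.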

As the Washington Post story on HireVue’s software implies, employers may inadvertently violate the Americans with Disabilities Act (ADA) if they use AI to assess job applicants and employees. In response, the U.S. Equal Employment Opportunity Commission recently released a technical assistance document, “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” focused on preventing discrimination against job seekers and employees with disabilities. Based on the ADA, its regulations, and existing policy guidance, the document outlines issues that employers should consider to ensure that the use of software tools in employment does not disadvantage workers or applicants with disabilities in ways that violate the ADA, and it highlights promising practices to reduce the likelihood of disability discrimination.

The most common ways employers may violate the ADA are by not providing a “reasonable accommodation”; by using an algorithmic decision-making tool that “screens out” individuals with disabilities; and by using such tools in ways that violate the ADA’s restrictions on disability-related inquiries and medical examinations.

Employers increasingly use AI and other software tools to help them select new employees, monitor performance, and determine pay or promotions. Employers may give computer-based tests to applicants or use software to score applicants’ resumes. Many of these tools use algorithms or AI, and they may result in unlawful discrimination against people with disabilities in violation of the ADA.

The Department of Justice and the EEOC released a technical assistance document about disability discrimination when employers use AI and other software tools to make employment decisions. This document:

  • Provides examples of the types of technological tools that employers are using;
  • Clarifies that, when designing or choosing technological tools, employers must consider how their tools could impact different disabilities;
  • Explains employers’ obligations under the ADA when using algorithmic decision-making tools, including when an employer must provide a reasonable accommodation; and
  • Provides information for employees on what to do if they believe they have experienced discrimination.

Consistent with the ADA, its regulations, and existing policy guidance, the EEOC recommends the following steps to reduce the chances of screening out someone because of a disability:

  • make sure the user interface is accessible to as many individuals with disabilities as possible;
  • present materials in alternative formats; and
  • determine whether the algorithm disadvantages individuals with disabilities.

The document also says the tool used by the employer should indicate the availability of reasonable accommodations; provide clear instructions for requesting reasonable accommodations in advance of the assessment; and provide all applicants and employees with as much information about the tool as possible.

“Algorithmic tools should not stand as a barrier for people with disabilities seeking access to jobs,” said Assistant Attorney General Kristen Clarke of the Justice Department’s Civil Rights Division. “This guidance will help the public understand how an employer’s use of such tools may violate the Americans with Disabilities Act, so that people with disabilities know their rights and employers can take action to avoid discrimination.”

“New technologies should not become new ways to discriminate. If employers are aware of the ways AI and other technologies can discriminate against persons with disabilities, they can take steps to prevent it,” said EEOC Chair Charlotte A. Burrows. “As a nation, we can come together to create workplaces where all employees are treated fairly. This new technical assistance document will help ensure that persons with disabilities are included in the employment opportunities of the future.”

The EEOC’s technical assistance document is part of its Artificial Intelligence and Algorithmic Fairness Initiative, which works to ensure that software, including AI, used in hiring and other employment decisions complies with the federal civil rights laws that the EEOC enforces. In addition to its technical assistance, the EEOC released a summary document providing tips for job applicants and employees. (2)

For more information on the Justice Department’s Civil Rights Division and its disability work, please visit www.justice.gov/crt. For more information on the ADA, please call the Justice Department’s toll-free ADA information line at 800-514-0301 (TDD 800-514-0383) or visit www.ada.gov.

WorkSaver Systems structures all Physical Ability Tests (PATs) to be in full compliance with all federal and state laws pertaining to employment discrimination. For more information on our process, visit our web site at www.worksaverystems.com or call us at (800) 414-2174.

References:

  1. https://www.brookings.edu/blog/techtank/2019/10/31/for-some-employment-algorithms-disability-discrimination-by-default/
  2. https://www.eeoc.gov/newsroom/us-eeoc-and-us-department-justice-warn-against-disability-discrimination