Could AI be preventing you from getting the job? Know your rights.

By Anthony May

The increasing use of artificial intelligence (AI) in the workplace has its benefits. But for many workers and prospective workers, it also raises concerns because of the potentially negative, and discriminatory, effects it can have in hiring and on the job. This article explains how prospective and current employees can recognize when AI is putting their rights at stake and highlights some of the legal protections afforded to workers when an employer uses AI in a discriminatory way.

Three federal laws primarily address discrimination in employment: Title VII of the Civil Rights Act of 1964 (Title VII), the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA). Legal theories interpreting these laws often prohibit employers from using, or contracting with companies to use, technology that screens out applicants or denies applicants or employees the benefits of employment. These laws are meant to protect applicants' rights when they are being considered for employment and to prevent discrimination on the basis of race, color, religion, sex (including sexual orientation), age, national origin, veteran status, disability status, or genetic information. Disparate impact claims can be brought against a company that uses AI in a way that results in certain applicants, such as women or persons of a particular race, being disproportionately excluded from consideration, whether intentionally or not. But how can an employee or candidate tell when they are being discriminated against?

For starters, AI can discriminate against applicants before you even know a job opportunity exists. In December 2022, Real Women in Trucking, a non-profit formed by female commercial-motor-vehicle drivers, filed a charge of discrimination with the Equal Employment Opportunity Commission (EEOC) against Meta Platforms, Inc. (formerly Facebook). The charge asserts that Meta violated both Title VII and the ADEA by using an algorithm that targeted job ads based on the gender and age of users, disproportionately showing certain positions to male truck drivers while preventing potential female applicants from seeing those same advertisements. According to Real Women in Trucking's counsel, although women make up 54% of people job searching on Facebook, and people over 55 make up 28%, these groups are routinely shown only a fraction of open positions because of algorithmic bias: “Facebook [Meta] is one of the go-to resources for these life opportunities [and] the consequences of this kind of discrimination are far reaching.”

Secondly, AI used in job assessments or performance tests can disparately screen out applicants, particularly those with disabilities. For example, if an applicant is required to take a timed math test on a computer as part of a job application, an individual with a disability that impacts their dexterity might perform poorly and be removed from consideration. Similarly, if an employer tests a blind applicant on visual memory and the test is inaccessible with a screen reader, that applicant may be excluded even though their memory is in fact sufficient for the job.

A third way AI can disparately affect individuals with disabilities is when it is used to conduct video interviews of prospective applicants and misinterprets disability-related mannerisms. An applicant could be eliminated because of “micro-expressions,” facial movements, or physical reactions caused by a disability that have nothing to do with the job. In those instances, employers, including those in Maryland, may be required by law to notify you that you are being recorded and assessed. These are only a few examples of how AI can deny qualified applicants fair and equal employment opportunities in the hiring process.

Apart from its effects on the hiring process, AI is also capable of discrimination when assessing a company’s current employees’ wages and promotional opportunities. In 2023, CBS News reported on a study conducted by UC Law San Francisco describing how gig workers, such as Uber drivers, are subjected to algorithmic wage discrimination: “Algorithmic wage discrimination allows firms to personalize and differentiate wages for workers in ways unknown to them, paying them to behave in ways that the firm desires, perhaps [paying] as little as the system determines that they may be willing to accept[.]”

Moreover, some AI tools use predictive analysis to evaluate current employees and can exhibit bias when identifying the factors and patterns the company views as indicative of success in a given role. In this instance, data is used to identify candidates for promotions or raises. But when the data the system uses to discern who is “successful” is itself biased, for example because it is drawn from a group of “historically successful” employees who are all middle-aged white men, the system should be evaluated through bias audits to prevent the company from promoting only those who share the characteristics favored by the biased inputs.
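To make the idea of a bias audit a little more concrete, one common first pass is simply comparing selection rates across groups using the EEOC’s “four-fifths” rule of thumb, under which a group’s selection rate below 80% of the highest group’s rate is treated as possible evidence of adverse impact. The sketch below is purely illustrative, with hypothetical group labels and numbers; a real audit would be far more rigorous and would involve legal and statistical review.

```python
# Illustrative sketch only: a simple selection-rate comparison inspired by the
# EEOC's "four-fifths" rule of thumb, often used as a first pass in bias audits.
# All group labels and counts below are hypothetical.

def selection_rates(outcomes):
    """outcomes maps group name -> (number selected, number of applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def flag_adverse_impact(outcomes, threshold=0.8):
    """Return groups whose selection rate falls below `threshold` times the
    highest group's rate (the four-fifths rule of thumb), with their ratios."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: rate / top_rate
            for group, rate in rates.items()
            if rate / top_rate < threshold}

# Hypothetical promotion data: 60 of 200 men promoted vs. 25 of 180 women.
data = {"men": (60, 200), "women": (25, 180)}
print(flag_adverse_impact(data))
# Women's selection rate (~0.14) is roughly 46% of men's (~0.30), so it is flagged.
```

A check like this does not prove discrimination on its own, but it is the kind of pattern a bias audit is designed to surface before a company relies on the system for promotion or pay decisions.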

The above examples only scratch the surface of the ways employers can use AI to discriminate, but they can help you spot discrimination when it is happening and recognize when you can take action to address it.

Here are some tips for exercising your rights to prevent or remediate AI discrimination:

  • Educate Yourself: While there are federal laws that aim to prevent discrimination generally, many states are at the forefront of introducing new laws specifically targeting discriminatory uses of AI. Familiarize yourself with the unique laws of your state so you can stay on top of employers who aren’t complying with the regulations in place to prevent discrimination. The more you know about the laws and protections in place, the easier it is to protect your rights.
  • Request a Reasonable Accommodation: Unless it would pose an undue hardship, an employer or prospective employer is required to provide you with a reasonable accommodation if you are a person with a disability. This could include things like alternative testing or an interview with a person in HR. In many instances, you should be notified that AI is being used to make employment decisions. Contact HR to determine how it is being used and request a reasonable accommodation. If you are not hired, contact the employer’s HR department to determine whether the decision was based on an assessment for which you were entitled to an accommodation and, if you were not given one, request to be reevaluated. If an employer asks you for additional information or proof of disability, familiarize yourself with your privacy rights under the ADA before submitting anything. If an employer denies your request, consider filing a charge of discrimination with the EEOC.
  • Consult with an Attorney: If you have reason to believe that you were denied an employment opportunity due to AI bias, contact our office to explore options such as filing a charge of discrimination with the EEOC or pursuing litigation against the employer.
  • Strength in Numbers: If you are part of an organized labor movement or trade organization, or even if you aren’t, talk with others about potential barriers to employment. You may recognize patterns that can lead to potential class action suits. For example, in 2023, the EEOC settled a claim against an online tutoring company following allegations that its use of AI “automatically reject[ed] female applicants aged 55 or older and male applicants aged 60 or older,” and rejected “more than 200 qualified applicants based in the United States because of their age.” Among other things, the settlement required the employer to pay $365,000 in damages and to adopt new policies and training to remediate the discrimination that occurred and prevent it from happening again.

We know that AI in the workforce is here for the long run, but that does not mean that an employer can use it to discriminate. I have written and presented extensively on the intersection of AI and employment law, most recently presenting on this topic at the Society for Human Resource Management (SHRM) Talent Conference & Expo and at the 2024 National Employment Lawyers Association (NELA) Annual Convention. You can read my blog series, Algorithmically Excluded, here. If you have questions about this matter or feel like you’ve been subject to discrimination in the hiring process, please call Brown, Goldstein & Levy at 410-962-1030 for a consultation and learn more about my practice here.

If you are an employer who wants to conduct an AI bias audit or learn more about the ways you can prevent AI bias in your hiring and promotion processes, contact us at Inclusivity Consulting. We offer practical and legal analysis and guidance, education and training, strategic diversity assessments and more to help businesses remain accessible.

Inclusivity helps businesses, organizations, government agencies, and industry groups navigate the rapidly changing landscape of civil rights and achieve real inclusion of people with disabilities in their workforces and communities.
