
Fortune - News
Workday and Amazon’s alleged AI employment biases are among myriad ‘oddball results’ that could exacerbate hiring discrimination
Following allegations that workplace management software firm Workday has an AI-assisted platform that discriminates against prospective employees, human resources and legal experts are sounding the alarm on AI hiring tools. “If the AI is built in a way that is not attentive to the risks of bias…then it can not only perpetuate those patterns of exclusion, it could actually worsen it,” law professor Pauline Kim told Fortune.

Though AI hiring tools aim to streamline hiring for a growing pool of applicants, technology meant to open doors for a wider array of prospective employees may actually be perpetuating decades-long patterns of discrimination.

AI hiring tools have become ubiquitous, with 492 of the Fortune 500 companies using applicant tracking systems to streamline recruitment and hiring in 2024, according to job application platform Jobscan. While these tools can help employers screen more job candidates and identify relevant experience, human resources and legal experts warn that improper training and implementation of hiring technologies can amplify biases.

    Research offers stark evidence of AI’s hiring discrimination. The University of Washington Information School published a study last year finding that in AI-assisted resume screenings across nine occupations using 500 applications, the technology favored white-associated names in 85.1% of cases and female-associated names in only 11.1% of cases. In some settings, Black male participants were disadvantaged compared to their white male counterparts in up to 100% of cases.

    “You kind of just get this positive feedback loop of, we’re training biased models on more and more biased data,” Kyra Wilson, a doctoral student at the University of Washington Information School and the study’s lead author, told Fortune. “We don’t really know kind of where the upper limit of that is yet, of how bad it is going to get before these models just stop working altogether.”

    Some workers claim to see evidence of this discrimination outside of experimental settings. Last month, five plaintiffs, all over the age of 40, claimed in a collective action lawsuit that workplace management software firm Workday has discriminatory job applicant screening technology. Plaintiff Derek Mobley alleged in an initial lawsuit last year that the company’s algorithms caused him to be rejected from more than 100 jobs over seven years on account of his race, age, and disabilities.

    Workday denied the discrimination claims and said in a statement to Fortune the lawsuit is “without merit.” Last month the company announced it received two third-party accreditations for its “commitment to developing AI responsibly and transparently.”

    “Workday’s AI recruiting tools do not make hiring decisions, and our customers maintain full control and human oversight of their hiring process,” the company said. “Our AI capabilities look only at the qualifications listed in a candidate’s job application and compare them with the qualifications the employer has identified as needed for the job. They are not trained to use—or even identify—protected characteristics like race, age, or disability.”

    It’s not just hiring tools with which workers are taking issue. A letter sent to Amazon executives, including CEO Andy Jassy, on behalf of 200 employees with disabilities claimed the company flouted the Americans with Disabilities Act. Amazon allegedly had employees make decisions on accommodations based on AI processes that don’t abide by ADA standards, The Guardian reported this week. Amazon told Fortune its AI does not make any final decisions around employee accommodations.

    “We understand the importance of responsible AI use, and follow robust guidelines and review processes to ensure we build AI integrations thoughtfully and fairly,” a spokesperson told Fortune in a statement.

    How could AI hiring tools be discriminatory?

    Just as with any AI application, the technology is only as smart as the information it’s being fed. Most AI hiring tools work by screening resumes or evaluating interview responses, according to Elaine Pulakos, CEO of talent assessment developer PDRI by Pearson. They’re trained on a company’s existing model of assessing candidates, meaning that if the models are fed a company’s existing data—such as demographic breakdowns showing a preference for male candidates or Ivy League universities—they are likely to perpetuate hiring biases that can lead to “oddball results,” Pulakos said.
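    The dynamic Pulakos describes can be illustrated with a toy sketch. The data and the two-group setup below are hypothetical, and the “model” is deliberately the simplest possible one (a per-group hire rate): a screener trained to mimic past decisions reproduces whatever preference those decisions encoded, even when candidates are otherwise identical.

```python
# Hypothetical illustration: a screener trained on historical hiring
# outcomes absorbs the preferences embedded in those outcomes.
from collections import defaultdict

# Historical records: (school_type, hired). Every candidate here is equally
# qualified; only the school flag varies, yet past recruiters favored one group.
history = [("elite", True)] * 80 + [("elite", False)] * 20 \
        + [("other", True)] * 30 + [("other", False)] * 70

# "Train" the simplest possible model: the observed hire rate per group.
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for school, hired in history:
    counts[school][0] += hired
    counts[school][1] += 1

score = {group: hires / total for group, (hires, total) in counts.items()}
print(score)  # {'elite': 0.8, 'other': 0.3}

# Two otherwise identical candidates now receive very different screening
# scores, purely because of the pattern the model absorbed from past data.
```

Any screening decisions this model makes then feed back into the next round of training data, which is the “positive feedback loop” Wilson describes below.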

    “If you don’t have information assurance around the data that you’re training the AI on, and you’re not checking to make sure that the AI doesn’t go off the rails and start hallucinating, doing weird things along the way, you’re going to get weird stuff going on,” she told Fortune. “It’s just the nature of the beast.”

    Much of AI’s bias comes from human bias; according to Washington University law professor Pauline Kim, AI hiring discrimination is rooted in human hiring discrimination, which remains prevalent today. A landmark 2023 Northwestern University meta-analysis of 90 studies across six countries found persistent and pervasive biases, including that employers called back white applicants on average 36% more than Black applicants and 24% more than Latino applicants with identical resumes.

    The rapid scaling of AI in the workplace can fan these flames of discrimination, according to Victor Schwartz, associate director of technical product management of remote work job search platform Bold.

    “It’s a lot easier to build a fair AI system and then scale it to the equivalent work of 1,000 HR people, than it is to train 1,000 HR people to be fair,” Schwartz told Fortune. “Then again, it’s a lot easier to make it very discriminatory, than it is to train 1,000 people to be discriminatory.”

    “You’re flattening the natural curve that you would get just across a large number of people,” he added. “So there’s an opportunity there. There’s also a risk.”

    How HR and legal experts are combatting AI hiring biases

    While employees are protected from workplace discrimination through the Equal Employment Opportunity Commission and Title VII of the Civil Rights Act of 1964, “there aren’t really any formal regulations about employment discrimination in AI,” said law professor Kim. 

    Existing law prohibits both intentional discrimination and disparate impact discrimination, which refers to discrimination that results from a neutral-appearing policy, even when it is not intended.

    “If an employer builds an AI tool and has no intent to discriminate, but it turns out that overwhelmingly the applicants that are screened out of the pool are over the age of 40, that would be something that has a disparate impact on older workers,” Kim said.

    Though disparate impact theory is well established in the law, Kim said, President Donald Trump has made clear his hostility toward it, seeking to eliminate disparate impact liability through an executive order in April.

    “What it means is agencies like the EEOC will not be pursuing or trying to pursue cases that would involve disparate impact, or trying to understand how these technologies might be having a disparate impact,” Kim said. “They are really pulling back from that effort to understand and to try to educate employers about these risks.”

    The White House did not immediately respond to Fortune’s request for comment.

    With little indication of federal-level efforts to address AI employment discrimination, politicians at the local level have attempted to address the technology’s potential for prejudice. A New York City ordinance, for example, bans employers and agencies from using “automated employment decision tools” unless the tool has passed a bias audit within a year of its use.

    Melanie Ronen, an employment lawyer and partner at Stradley Ronon Stevens & Young, LLP, told Fortune other state and local laws have focused on increasing transparency on when AI is being used in the hiring process, “including the opportunity [for prospective employees] to opt out of the use of AI in certain circumstances.”

    The firms behind AI hiring and workplace assessments, such as PDRI and Bold, have said they’ve taken it upon themselves to mitigate bias in the technology, with PDRI CEO Pulakos advocating for human raters to evaluate AI tools ahead of their implementation.

    Bold technical product management director Schwartz argued that while guardrails, audits, and transparency should be key in ensuring AI is able to conduct fair hiring practices, the technology also had the potential to diversify a company’s workforce if applied appropriately. He cited research indicating women tend to apply to fewer jobs than men, doing so only when they meet all qualifications. If AI on the job candidate’s side can streamline the application process, it could remove hurdles for those less likely to apply to certain positions.

    “By removing that barrier to entry with these auto-apply tools, or expert-apply tools, we’re able to kind of level the playing field a little bit,” Schwartz said.

    This story was originally featured on Fortune.com
