Credit: Nikolas Kokovlis / NurPhoto via Getty Images

Generative AI appears to mirror the biases of its creators, according to a new report from Bloomberg. The investigation revealed that OpenAI’s generative AI tool, specifically GPT 3.5, demonstrated preferences for certain racial groups in queries related to hiring. This suggests that the integration of generative AI tools, such as LinkedIn’s Gen AI assistant, in recruitment processes could inadvertently perpetuate racial discrimination.

The study used a common method of inputting fabricated names and resumes into AI recruitment systems to observe how quickly bias surfaced. Such experiments have long been employed to identify biases, both human and algorithmic, within hiring practices.

SEE ALSO:

Reddit introduces an AI-powered tool that will detect online harassment

The investigation involved attributing names from voter and census data to specific racial or ethnic demographics, then assigning these names randomly to equally qualified resumes. GPT 3.5 consistently demonstrated a bias by favoring certain demographics over others in ranking these resumes, a bias that would fail standard tests for job discrimination against protected groups.

The experiment categorized names into White, Hispanic, Black, and Asian groups, along with male and female genders, and submitted them for different job roles. Notably, ChatGPT exhibited a tendency to place “female names” in roles traditionally held by women, like HR positions, while disproportionately favoring Black women candidates less often for technical roles such as software engineers.

Furthermore, ChatGPT irregularly sorted equally ranked resumes across job categories, displaying skewed rankings based on gender and race. In response, OpenAI stated that this behavior is not representative of how most clients utilize their software, as many companies adjust responses to minimize bias. The report consulted 33 experts from various fields to provide insights into the findings.

While the findings may not be groundbreaking given the ongoing concerns raised by advocates and researchers regarding the ethical implications of AI reliance, they serve as a stark reminder of the risks associated with widespread adoption of generative AI without sufficient scrutiny. With a limited number of major players dominating the market and influencing the development of smart assistants and algorithms, the potential for diversity in AI pathways diminishes. As highlighted in an examination of AI development on the internet, insular AI training (using models not based on human input but on other AI models) can lead to a decline in quality, reliability, and most crucially, diversity.

As watchdogs like AI Now argue, simply having “humans in the loop” may not be sufficient to address these challenges.

Topics
Artificial Intelligence
Social Good
ChatGPT

Chase sits in front of a green framed window, wearing a cheetah print shirt and looking to her right. On the window's glass pane reads "Ricas's Tostadas" in red lettering.

Chase DiBenedetto
Social Good Reporter

Chase joined Mashable’s Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also touches on how these conversations manifest in politics, popular culture, and fandom. Sometimes she’s very funny.

Shares: