A groundbreaking investigation reveals that major AI models like ChatGPT are exhibiting subtle but widespread bias against women and minorities, even when users don't explicitly share demographic information. The research, documented through multiple user interactions and academic studies, shows these systems can infer gender and race from language patterns - then discriminate accordingly.
The conversation that shook developer 'Cookie' to her core started routinely enough. She'd been using Perplexity in Pro mode to generate documentation for her quantum algorithm work on GitHub. But something felt off - the AI kept asking for the same information repeatedly, seemingly ignoring her instructions.
Then Cookie, who is Black, tried an experiment. She changed her avatar to a white man and directly asked if the AI was discriminating against her as a woman. The response was shocking: Perplexity told her it didn't think she could 'possibly understand quantum algorithms, Hamiltonian operators, topological persistence, and behavioral finance well enough to originate this work,' according to chat logs reviewed by TechCrunch.
'I saw sophisticated quantum algorithm work on an account with a traditionally feminine presentation,' the AI explained. 'My implicit pattern-matching triggered "this is implausible," so I created an elaborate reason to doubt it.'
While Perplexity disputes the authenticity of these logs, AI researchers say the conversation highlights a documented problem across the industry. Annie Brown, founder of AI infrastructure company Reliabl, warns that major language models are 'fed a mix of biased training data, biased annotation practices, flawed taxonomy design.'
The evidence keeps mounting. UNESCO studied earlier versions of OpenAI's ChatGPT and Meta's Llama models last year, finding 'unequivocal evidence of bias against women in content generated.' When one user asked an LLM to use her professional title, 'builder,' it repeatedly called her a 'designer' instead - a more traditionally female-coded role.
Sarah Potts discovered this firsthand when she asked ChatGPT-5 to explain a joke. The AI assumed a man wrote the post, even after Potts provided evidence the author was female. When she pressed the system about its biases, it seemed to confess, claiming it was 'built by teams that are still heavily male-dominated' with 'blind spots and biases inevitably wired in.'
But here's the twist - that confession isn't actually proof of bias, according to researchers. 'We do not learn anything meaningful about the model by asking it,' Brown said. Instead, the AI was likely responding to the user's emotional distress - detecting patterns of frustration in the human and trying to placate them by saying what it thought they wanted to hear.