If reality doesn’t change, neither will the answers.
To understand the future of equality, we need to look at the screens of young people. Today, they delegate decisions about their relationships, identity, and future to AI. But this technology is not neutral. It works as a mirror that inherits and amplifies our historical biases.
Our research shows that AI not only informs but also imposes different expectations depending on gender, reproducing patterns that end up systematizing inequalities.
To test this, we applied Big Data and AI techniques and audited nearly 10,000 responses generated by five large language models (ChatGPT, Gemini, Grok, Mistral, and Llama) across 12 countries, answering 100 dilemmas posed by simulated profiles of teenagers and young adults.
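As a rough illustration of how an audit of this kind might be structured (this is a sketch, not the study's actual protocol), the Python snippet below poses the same dilemmas to gendered personas and tallies stereotype-laden terms in the replies. The model name, personas, dilemmas, and keyword list are all hypothetical placeholders, and it assumes an OpenAI-compatible client.

```python
# Minimal sketch of a gender-bias audit loop (illustrative only).
# Assumes an OpenAI-compatible API; personas, dilemmas, keywords,
# and the model name are placeholders, not the study's protocol.
from collections import Counter
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    "female": "I am a 16-year-old girl.",
    "male": "I am a 16-year-old boy.",
}
DILEMMAS = [
    "What career should I choose?",
    "How should I prepare for a job interview?",
]
STEREOTYPE_KEYWORDS = ["nursing", "engineering", "fashion", "fragile", "resilient"]

counts = {gender: Counter() for gender in PERSONAS}

for gender, persona in PERSONAS.items():
    for dilemma in DILEMMAS:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; the study audited five models
            messages=[{"role": "user", "content": f"{persona} {dilemma}"}],
        )
        text = reply.choices[0].message.content.lower()
        # Tally stereotype-laden terms per gendered persona; comparing
        # the two counters exposes asymmetries like those reported below.
        for keyword in STEREOTYPE_KEYWORDS:
            counts[gender][keyword] += text.count(keyword)

for gender, counter in counts.items():
    print(gender, dict(counter))
```

At scale, a harness like this would run the same prompt set against each model and country variant, which is how response counts in the tens of thousands accumulate from a fixed list of dilemmas.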
Far from offering neutral, objective answers, AI acts as a mirror: it reflects stereotypes, amplifies them, and returns them to new generations.
AI is not created from scratch. It learns from data. And that data comes from a society that has been and continues to be unequal.
When artificial intelligence analyzes information, patterns, and decisions from the past, it also absorbs the stereotypes embedded in that data. That is why its responses often do not project a different future; they reorganize the same unequal past we have always known.
That is why we present Illusion of Equality, a study showing how the responses of artificial intelligence, fed by our own data, continue to reproduce gender biases, especially shaping the way young people imagine their possibilities, their role models, and their place in the world.
If we accept those answers without questioning them, we continue turning prejudices into norms. And when technology automates that norm, bias stops being visible and becomes structural.
Illusion of Equality demonstrates that AI is not impartial. It is the amplified and distorted reflection of the reality it was trained on. That is why at LLYC we invite society to change reality so that the answers shaping our future can change as well.
Because challenging AI is the first step to ensuring the future does not repeat the past.
If reality doesn’t change, neither will the answers.
AI recommends careers in social sciences and healthcare to women, and careers in engineering to men.
AI suggests fashion- and beauty-related solutions to women 48% more often than to men.
AI labels women as “fragile” and men as “resilient.”
AI considers it “impressive” for a woman to outearn a man.
The risk is not only in what AI says, but in accepting it as normal. History shows that what is tolerated in language ends up crystallizing into material structures. Normalizing bias is training the future with inequality. That is why this report does not present AI as an inevitable phenomenon, but as a system that must be audited and corrected.