Structural bias is inherent in the construction of artificial intelligence and is therefore reproduced in its responses. Today, models are deployed in contexts where their outputs have real-world consequences: systems that evaluate credit, screen résumés, suggest diagnoses, and determine bail amounts. Inequality is one of the most significant challenges facing modern societies, and its persistence has been analyzed by numerous theorists.
Word2vec appeared in 2013, and its function was to convert words into mathematical coordinates and then perform arithmetic with them. What has been injustice for generations, algorithms transform into data; worse still, that data can become anchored as truth. In this framework, algorithms digitize the habitus: they convert what Bourdieu would call symbolic violence into code.
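To make the idea of "mathematical coordinates" concrete, here is a minimal sketch in Python with made-up three-dimensional vectors (real Word2vec embeddings have hundreds of dimensions; these toy values are contrived to mirror the anecdote below, not taken from any real model): an analogy query is just vector arithmetic followed by a nearest-neighbor search.

```python
import numpy as np

# Toy 3-D "embeddings": contrived illustrative values, not real Word2vec output.
vocab = {
    "man":      np.array([1.0, 0.0, 0.2]),
    "woman":    np.array([0.0, 1.0, 0.2]),
    "doctor":   np.array([0.8, 0.2, 0.9]),
    "nurse":    np.array([0.2, 0.8, 0.9]),
    "engineer": np.array([0.9, 0.1, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: how closely two coordinate vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Doctor - Man + Woman": plain arithmetic on the coordinates...
target = vocab["doctor"] - vocab["man"] + vocab["woman"]

# ...then the "answer" is the nearest remaining word by cosine similarity.
candidates = {w: v for w, v in vocab.items() if w not in ("doctor", "man", "woman")}
answer = max(candidates, key=lambda w: cosine(candidates[w], target))
print(answer)  # -> "nurse" with these contrived vectors
```

The point is mechanical: whatever regularities sit in the coordinates, the arithmetic returns them as answers.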
Models learn from massive text datasets, which reflect the world as it has been described, not as it should be from an ethical and moral standpoint. They learn from the history and language produced by civilization, and in doing so they carry its biases and inequities forward intact. The biggest problem today is that the information sources feeding current AI systems are largely the same ones that trained Word2vec.
Although it may seem like a mere engineering problem, an algorithm that returns “Nurse” when asked for “Doctor,” as in the Word2vec episode recounted below, is reproducing the language upon which we have built our society. This is why the use of LLMs forces us into an ethical discussion about what these tools learn and from whom, for they are becoming embedded in the social fabric.
A clear example, used by Christian, is the story of the first word calculator, Word2vec, developed by Google. Christian recounts that two years later, at an informal meeting, a doctoral student and a researcher opened their laptops and began playing with Word2vec. Up until that moment, everything was fine: the analogies behaved as expected. Then they typed a new equation: Doctor – Man + Woman. They of course expected the answer to be “Doctor,” but the calculator returned: “Nurse.”
They repeated the exercise with another equation: Merchant – Man + Woman = Housewife. The alignment problem forces us to be aware of the habitus that feeds these systems and to warn of the inequalities that artificial intelligence reproduces.
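For readers who want to reproduce the episode rather than take it on faith, the query can still be run against pretrained word vectors. This is a sketch assuming the gensim library and its downloadable "word2vec-google-news-300" model; the exact ranking depends on the model version.

```python
import gensim.downloader as api

# Download/load the pretrained Google News Word2vec vectors (a large file,
# roughly 1.6 GB); the name comes from gensim's downloader catalog.
vectors = api.load("word2vec-google-news-300")

# "Doctor - Man + Woman": gensim adds the positive vectors, subtracts the
# negative ones, and returns the nearest words by cosine similarity.
for word, score in vectors.most_similar(positive=["doctor", "woman"],
                                        negative=["man"], topn=3):
    print(f"{word}\t{score:.3f}")

# Widely reported runs of this query rank "nurse" at or near the top,
# consistent with the result Christian describes.
```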