Algorithms applied to people are seductive because they offer a sense of objectivity, without our stopping to ask why the algorithm is giving that answer and not another, or what assumptions about the human condition went into building it. Like numbers or statistics, they project that image of fundamental “truth”, however dubious the method by which the result was reached.
Which brings us to The Turing Normalizing Machine:
The Turing Normalizing Machine is an experimental research in machine learning that identifies and analyzes the concept of social normalcy. Each participant is presented with a video line up of 4 previously recorded participants and is asked to point out the most normal-looking of the 4. The person selected is examined by the machine and is added to its algorithmically constructed image of normalcy. The kind participant’s video is then added as a new entry on the database.
And:
Conducted and presented as a scientific experiment TNM challenges the participants to consider the outrageous proposition of algorithmic prejudice. The responses range from fear and outrage to laughter and ridicule, and finally to the alarming realization that we are set on a path towards wide systemic prejudice ironically initiated by its victim, Turing.
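The project does not document the system’s internals, but the participation loop described above is simple enough to sketch. Below is a minimal, purely illustrative Python sketch: it assumes each recorded video is reduced to a feature vector, and that the machine’s “image of normalcy” is a running average of the vectors voted most normal. The names (NormalizingMachine, play_round, distance) and the vector representation are assumptions made for the sketch, not details of the actual installation.

```python
import random

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class NormalizingMachine:
    """Toy model of the TNM loop; the real system is not documented
    at this level of detail, so every modeling choice here is assumed."""

    def __init__(self, feature_dim):
        self.normalcy = [0.0] * feature_dim  # running "image of normalcy"
        self.votes = 0                       # how many selections so far
        self.database = []                   # vectors of recorded participants

    def play_round(self, participant):
        # A line-up of 4 previously recorded participants is presented.
        lineup = random.sample(self.database, 4)
        # Stand-in for the human judgment: the "most normal-looking" is
        # simulated as the vector closest to the current normalcy image.
        chosen = min(lineup, key=lambda v: distance(v, self.normalcy))
        # The selection is folded into the algorithmically constructed
        # image of normalcy (an incremental mean).
        self.votes += 1
        self.normalcy = [n + (c - n) / self.votes
                         for n, c in zip(self.normalcy, chosen)]
        # The new participant's own video joins the database.
        self.database.append(participant)

# Usage: seed a few participants so a first line-up of 4 is possible.
machine = NormalizingMachine(feature_dim=3)
for _ in range(4):
    machine.database.append([random.random() for _ in range(3)])
machine.play_round([random.random() for _ in range(3)])
print(machine.normalcy)
```

Even in this toy version the feedback loop is visible: each vote nudges the “normal” reference point, which in turn shapes what later votes are measured against.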
And an interview with one of the creators, “The Turing Normalizing Machine. An experiment in machine learning & algorithmic prejudice”, addresses precisely the fact that we already use algorithms of this kind:
Usually society doesn’t get to decide what is good or even normal for society. The decision often comes from ‘the top’. If ever such algorithm to determine normality was ever applied, could we trust people to help decide who looks normal or who isn’t?
While I agree that top-down role models influence the image of what’s considered normal or abnormal, it is the wider society who absorbs, approves and propagates these ideas. Whether we like it or not, such algorithms are already used and are integrated into our daily lives. It happens when Twitter’s algorithms suggests who we should follow, when Amazon’s algorithms offers what we should consume, when OkCupid’s algorithms tells us who we should date, and when Facebook’s algorithms feeds us what it believes we would ‘like’.
In this way, we steadily reduce the enormous variety of human experience to a small set of numbers that we take to be objective, the result of a scientific process. And it is in that scientism that the greatest danger lies.