How to (de)racialise artificial intelligence?

Ana Valdivia, Youssef M. Ouled, Javier Sánchez and Paula Guerra Cáceres, members of Algorace, opened the session of presentations in Madrid with this round table. At the start, Guerra Cáceres outlined the three dimensions in which racism operates: institutional, social and individual, and recalled the definition of racialisation and the different connotations it carries depending on whether it refers to white or non-white people.

Along these lines, Paula Guerra Cáceres asserted that “artificial intelligence operates within the framework of structural racism: it makes the person disappear and replaces them with decisions based on historical patterns and prejudices”. So what needs to be done to (de)racialise AI? According to Guerra Cáceres, “we have to educate people about how it affects us and about which public administrations and companies are developing these technologies”. To de-racialise artificial intelligence, Guerra Cáceres proposed several ideas:

  • encourage public debate on the scope of AI use;
  • find out how public administrations and private companies are using these technologies;
  • involve the groups of people affected by AI;
  • promote the inclusion of racialised people in AI-related careers, for example through scholarships.

According to Algorace, white profiles dominate the technical positions in which artificial intelligence is developed. That is why the collective is compiling a directory of racialised professionals, “because other perspectives are needed to be able to design fairer algorithms”, Sánchez explained.

Another notable aspect of the collective’s work, which brings this knowledge closer to the general public, is that it documents problematic artificial intelligence systems and explains them in plain language. According to Sánchez, the collective also challenges the dominant Anglo-Saxon technological philosophy and looks for cases closer to home, so that people can identify with European experiences and logics.

“It is important to remember that artificial intelligence does not understand contexts and makes a simplified reading of the facts; it only understands zeros and ones,” warned Sánchez. In turn, Valdivia presented the case of a racially biased questionnaire used by the Ertzaintza, the Basque police, to assess the risk that survivors of gender violence would be assaulted again by their aggressor. The questionnaire was scored according to the women’s answers, and the resulting risk was higher if the aggressor was of migrant origin, even though statistics in Spain show that most aggressors are locals.
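To make the mechanism concrete, the sketch below shows how an additive, points-based questionnaire of this kind can encode bias. It is a hypothetical illustration: the item names and weights are invented for the example and do not reproduce the actual Ertzaintza form.

```python
# Hypothetical sketch of an additive risk questionnaire. NOT the real
# Ertzaintza instrument: items and weights here are invented to show the
# mechanism only.

RISK_WEIGHTS = {
    "previous_assaults": 3,         # illustrative weight
    "threats_with_weapons": 4,      # illustrative weight
    "aggressor_migrant_origin": 2,  # the kind of item criticised at the talk:
                                    # origin alone raises the score
}

def risk_score(answers: dict[str, bool]) -> int:
    """Sum the weights of every item answered 'yes'."""
    return sum(w for item, w in RISK_WEIGHTS.items() if answers.get(item))

# Two otherwise identical cases diverge only on the aggressor's origin:
case_local = {"previous_assaults": True, "aggressor_migrant_origin": False}
case_migrant = {"previous_assaults": True, "aggressor_migrant_origin": True}

print(risk_score(case_local))    # 3
print(risk_score(case_migrant))  # 5 -> flagged as higher risk on origin alone
```

Because the score is a simple sum, a single demographic item is enough to push otherwise identical cases into different risk bands.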

“We have to be clear that artificial intelligence has a patriarchal and colonialist origin, at the service of the powerful,” Valdivia said. As a solution, Valdivia proposed that organisations and social movements re-appropriate technology and artificial intelligence systems and use them to their own advantage.

If you want to know more about the sections of the report that Ana Valdivia and Javier Sánchez are working on together with the rest of the Algorace team, and to hear the answers to the questions posed by the audience, we recommend watching the full video here!

Read all the #JornadasDAR summaries here!
