Community struggle, transparent public policies and an auditable state agency

As we explained in previous posts, at the end of May we co-organised JornadasDAR (Democracy, Algorithms and Resistances) together with Algorace, Algorights and Lafede.cat. During the three days of the conference we had the opportunity to listen to presentations and talks about artificial intelligence that is more democratic, decolonial and respectful of human rights. In this post we share summaries of the sessions that took place in Barcelona; here you can watch the full videos.

Why and how to involve communities affected by Artificial Intelligence

A panel with Safia Oulmane from GHETT’UP, Anna Colom from Democratic Society and Judith Membrives i Llorens from the AI working group in Barcelona and the Algorights collective, moderated by the journalist Moha Gerehou.

During the conversation, Safia Oulmane stressed the importance of working as a community and finding a team to fight alongside rather than giving up. “We have to go on, go on and go on. The younger generation looks up to us and we can be an example for a more diverse and inclusive technology,” she said.

Anna Colom shared the concept of the “civic lottery”, used in deliberative democracy, in which a representative sample of the population (of a neighbourhood, city or country) takes part in assemblies. In theory, this system gives everyone the same opportunity to participate and voice their opinion, since people are chosen at random from lists, but Colom questioned how inclusive it really is.

In relation to this, and more specifically to the question of how to prevent public policies from reproducing discrimination, Colom responded: “The principles of justice must be integrated into the implementation of these policies and there must be a space for monitoring.”

Membrives i Llorens emphasised that there must be diversity in technology, which is essential for there to be representation. She added that training is very important for understanding the implications of AI, but that it is also essential to understand the narrative frameworks.

Citizen participation in the implementation of artificial intelligence in the public sector

This session featured Divij Joshi from the OGP Alliance, Aidan Peppin from the Ada Lovelace Institute and Gemma Galdón from Fundación Éticas, and was facilitated by Gala Pin.

Divij Joshi reviewed different experiences in which the use of AI had led to discrimination and exclusion, such as:

  • the Dutch government’s use of an algorithm to determine social benefits
  • the UK government’s cancellation of A-level exams, with university admission decided by a cut-off grade generated from several variables
  • the use of facial recognition by law enforcement agencies, exposed by the Black Lives Matter movement.

Joshi pointed out two recurring shortcomings when it comes to applying transparency to artificial intelligence procedures. On the one hand, there is the technical barrier of complexity and lack of knowledge. On the other, the political economy of how an algorithm is implemented. The question, according to Joshi, is how citizens can participate in these spaces. To this end, he emphasised the need to involve citizens from the very beginning, in the spaces of accountability but also of design.

Aidan Peppin presented the example of the Citizens’ Biometrics Council, an initiative to address the issue of facial recognition. Peppin explained that council members were chosen at random according to various parameters to ensure the diversity of the council. The aim was to assess when the use of biometric data is appropriate and when it is not.

Several recommendations and needs emerged from the council’s work:

  • new, stronger and more forceful regulation
  • an independent legal body to oversee the use of biometric data
  • standards on the use of this type of data.

In the case of Éticas, the first organisation to propose auditing algorithms in Spain, a guide for the external auditing of algorithms was developed in view of the difficulty of reaching an agreement with the government to audit systems such as the VioGén programme. The external audit of that system concluded, among other things, that police were delegating the assessment of possible cases of gender-based violence to the algorithm.

A powerful idea that came up at the table was that techno-solutionism is not the answer to the problems of vulnerable communities, and the emphasis was placed on resistance as a form of action.

When asked what the ideal state agency for the oversight of artificial intelligence should look like in order to function well, Joshi, Peppin and Galdón responded:

  • It must deliver what is promised after its creation.
  • It must take a socio-technical point of view and a holistic view of the algorithm.
  • It is important to know who to contact in order to be able to make a complaint.
  • It should be accountable to the people and communities that the AI harms.
  • It should be overseen by an external body.
  • It should be participatory and democratic.
  • It should be able to inspect and impose sanctions.
  • It must have the powers to operate effectively.

Remember you can watch the videos from JornadasDAR on our YouTube channel!
