Transparent public policies and auditability

This session featured Divij Joshi of the OGP Alliance, Aidan Peppin of the Ada Lovelace Institute and Gemma Galdón of Fundación Éticas, and was facilitated by Gala Pin.

Divij Joshi reviewed several cases in which the use of AI had led to discrimination and exclusion, such as:

  • the Dutch government's use of an algorithm to determine social benefits
  • the UK government's cancellation of A-level exams, with university admission determined instead by an algorithmically generated grade based on several variables
  • the use of facial recognition by law enforcement agencies, brought to light by the Black Lives Matter movement.

Joshi pointed out two recurring shortcomings in applying transparency to artificial intelligence procedures: on the one hand, the technical barrier of complexity and expertise; on the other, the political economy of how an algorithm is implemented. The question, according to Joshi, is how citizens can participate in these spaces. To this end, he emphasises the need to involve citizens from the very beginning, not only in spaces of accountability but also of design.

Aidan Peppin presented the Citizen Biometric Council, an initiative addressing the issue of facial recognition. A citizen council of 50 members was convened in Manchester and Bristol. Peppin explained that members were chosen at random, but according to various parameters intended to ensure the council's diversity rather than its representativeness. The aim was to assess when the use of biometric data is appropriate and when it is not.

Several recommendations and needs emerged from the council's work:

  • new, stronger and more forceful regulation
  • an independent legal body to oversee the use of biometric data
  • standards governing the use of this type of data.

Éticas, the first organisation to propose auditing algorithms in Spain, developed a guide to the external auditing of algorithms after finding it difficult to reach an agreement with the government to audit systems such as the VioGén programme. The external audit of that system yielded conclusions such as that the police were delegating the assessment of possible cases of male violence to the algorithm.

The panel also discussed accountability policies and the difficulty citizens face in participating in their implementation, touching on transparency, access to information and participation. The speakers asked how we can audit algorithms and how we can approach public organisations to be entrusted with those audits, hearing about cases such as RisCanvi and InfoJobs.

A powerful idea that emerged from the panel was that techno-solutionism is not the answer to the problems of vulnerable communities, and the emphasis was placed on resistance as a form of action.

When asked what an ideal state agency for the oversight of artificial intelligence should look like in order to function well, Joshi, Peppin and Galdón explained:

  • It must deliver what was promised at its creation.
  • It must take a socio-technical point of view and a holistic view of the algorithms it oversees.
  • It must be clear whom to contact in order to make a complaint.
  • It should be accountable to the people and communities that AI harms.
  • It should be overseen by an external body.
  • It should be participatory and democratic.
  • It should be able to inspect and impose sanctions.
  • It must have sufficient powers to operate effectively.

Read all the #JornadasDAR summaries here!
