The second round table of our Artificial Intelligence and Human Rights conference focused on the different social scenarios in which artificial intelligence is used. We were treated to four presentations:
- Algorithmic transparency: the case of the Bono Social (electricity subsidy). David Cabo, from the Civio Foundation.
- How to incorporate technology in the defence of human rights. Adriana Ribas of Amnesty International.
- You tell me who you’re with and I’ll tell you how I’ll cure you. César Nebot, expert in artificial intelligence and health.
- Observatory of Algorithms with Social Impact (OASI): Sara Suárez, Eticas Research and Consulting.
David Cabo, co-director of Civio, addressed algorithmic transparency through the case of the Bono Social. Cabo explained that in 2018 Civio began to investigate the system that grants aid towards electricity bills to families at risk of social exclusion, after a change to the programme in 2017 left many people rejected for the aid. “Just as we have the right to know the laws, we have the right to know how the programs that make decisions work,” Cabo says. At the time, it was reported that many people had been unable to access the aid because the process was “complex and there was a lot of misinformation about it”. Civio was able to prove that there was an error in the application that granted the aid and, to prevent this from happening again in other applications, took drastic measures: in September 2019 it went to court against the Transparency Council, requesting access to the application’s source code. For now, the court’s ruling is still pending, and we will have to wait for the courts to resume activity post-Covid.
When it comes to health, artificial intelligence and big data, César Nebot points to the Hippocratic oath, warning that the four principles on which it rests pose challenges in the current context. Will artificial intelligence shift control of the Hippocratic oath towards tycoons and holding companies? Nebot stresses that artificial intelligence allows us to find patterns efficiently through algorithms. When asked whether we should make decisions based purely on data or whether other considerations belong in research, Nebot stated that it is also important to take a moral perspective into account. He imagines a scenario in which data linking income and medical history “could [influence] an organ transplant”. To learn about some of the challenging questions Nebot is currently working on, don’t miss the whole video!
Sara Suárez presented the Observatory of Algorithms with Social Impact (OASI), a search engine created by the Eticas Foundation with the aim of consolidating and classifying opaque, discriminatory algorithms. “We know that these algorithms can have an undesirable effect and we want to put a tool at the service of citizens to increase awareness of this problem,” explained Suárez. She also warned that “every algorithm must be auditable, and special attention must be paid to those that can be discriminatory”. To convey the potential impact of AI-based decisions, Suárez showed – as you will see in the video – several real cases in which algorithms have reinforced existing prejudices.
Adriana Ribas explained how technology is incorporated into the defence of human rights at Amnesty International. “The biases that already exist can be reinforced by the use of artificial intelligence,” said Ribas. She shared three approaches Amnesty takes when it comes to technology and human rights. First, citizens can be educated, through accessible short courses, to take advantage of technology and to understand its risks. “There is also an opportunity for advocacy from civil society organisations,” says Ribas. That is why Amnesty International also builds alliances with the technology sector, establishing dialogue and contact with specialised companies in order to bring a human rights approach to the development of artificial intelligence projects. Several entities have signed the Toronto Declaration, calling for guarantees against discrimination in AI systems. In addition, says Ribas, technology also serves as a platform or tool that allows the organisation to maximise its work, as in the case of the Citizen Evidence Lab, a platform with very interesting public-access sources. Finally, Ribas highlights the need for transparency: Amnesty therefore runs public campaigns and conducts advocacy in order to highlight and combat algorithmic discrimination.
At the end of the video, our speakers share their ideas about the role civil society must play in the future to ensure transparency and minimise AI discrimination. Here are some links mentioned during the meeting:
- On spurious correlations
- Gangs Matrix, discrimination in the Greater London area
- Ethical guidelines for COVID-19 tracing apps, in Nature