As the EU’s AI Act moves into the final phase of negotiations, key battles over the protection of human rights are emerging. At the heart of the issue are the rules on police use of AI technology for surveillance, Sarah Chander writes.
The EU’s AI Act, the world’s first binding legislation on AI and one likely to shape regulation efforts globally, is now in the final phase of negotiations.
As the EU moves into trilogues — the inter-institutional negotiations that determine the final legislation — the key question will be how far the AI Act centres issues of human rights and the concerns of people affected by “risky” AI systems.
In Europe and around the world, AI systems are used to monitor and control us in public spaces, predict our likelihood of future criminality, “prevent” migration, predict our emotions, and make crucial decisions that determine our access to public services, like welfare.
Whilst the European Parliament heads into the negotiations with a human rights mandate — including bans on risky technologies like facial recognition in public — some EU governments look to expand the capacity of the police to use AI to watch us.
Who protects us from police tech?
What’s at stake for people subject to these systems?
Up for debate over the next months will be the limits on police use of surveillance technology, how far people affected can challenge the use of AI, and how far companies should play a role in deciding the reach of this legislation.
As we watch the painful and infuriating consequences of unchecked police power in France, we cannot ignore the descent into a state of heightened surveillance and violence enacted by police.
Technologies like AI will only feed this reality of structural racism with more tools, more legal powers, and less accountability for police.
AI systems, in particular, allow for new and more invasive techniques for surveillance and control.
From facial recognition that identifies people as they move freely through public places to predictive policing systems that decide who is a criminal before any crime is committed, AI opens up new, harmful ways for governments to infringe on our freedoms.
Predictive policing can further entrench discrimination
Some member states argue that these technologies will help them fight crime and find missing children.
We need to ask: does more surveillance mean more safety? Who is safer when the state is more able to watch us?
The growing use of AI in policing and migration contexts has huge implications for racial discrimination.
The deployment of such technologies exposes people of colour to more surveillance, more discriminatory decision-making, and more harmful profiling.
AI systems will have implications for our right to protest, given their chilling effect on freedoms, as well as for children’s rights when predictive policing mainly targets young people from racialised backgrounds, as we see with the Top-400 system in the Netherlands.
Meaningful accountability and safety for the public require bans on these harmful, invasive systems, as well as clear public oversight on how police and immigration control use AI systems.
AI for the people or the profiteers?
Even beyond police and migration control, the use of AI has the capacity to ruin lives.
As we saw when the Dutch government deployed AI to predict fraud amongst claimants of child welfare, there are numerous risks when we delegate the most crucial decisions to automated systems.
In the Netherlands, parents were wrongly investigated for fraud and people lost their economic lifelines, harms that have still not been fully resolved or accounted for.
The AI Act has some capacity to address and prevent such major harm to people’s lives. The legislation focuses on “high-risk” AI, attaching a series of technical checks that such systems must pass before they go to market.
However, large technology companies are looking to undermine this process. Many companies are pushing for the EU to introduce loopholes into how “high-risk” is defined, allowing AI providers to decide whether their system is “significant” enough to be subject to these rules.
AI companies should not be trusted to self-regulate when there are huge profits at stake for them.
If AI companies, which have a financial interest in not labelling their systems as high-risk, are allowed to decide those labels themselves, then the entire legislation will be compromised.
In fact, the full set of rules for “high-risk” AI will be a major subject for debate. Will companies and governments that use AI have to disclose it?
Will they have to measure the risks? How can people who have been harmed by these systems challenge their use and seek redress?
Organisations are asking for accountability
To ensure that these concerns are at the top of EU institutions’ priorities as they head into the trilogues, a large coalition of civil society is calling for the AI Act to protect people’s rights.
A total of 150 organisations are asking for the legislation to empower people by setting a clear framework of accountability, to prevent harm and ensure oversight when AI is used by law enforcement, and to push back on corporate lobbying seeking to undermine the legislation.
The AI Act is a once-in-a-generation opportunity to regulate technology that is already having a far-reaching impact on people’s lives.
The EU must prioritise the rights of people in this landmark legislation.