There have been further calls from EU institutions to outlaw biometric surveillance in public.
In a joint opinion published today, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, have called for draft EU regulations on the use of artificial intelligence technologies to go further than the Commission’s proposal in April — urging that the planned legislation should be beefed up to include a “general ban on any use of AI for automated recognition of human features in publicly accessible spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, in any context”.
Such technologies are simply too harmful to EU citizens’ fundamental rights and freedoms — like privacy and equal treatment under the law — to permit their use, is the argument.
The EDPB is responsible for ensuring the harmonized application of the EU’s privacy rules, while the EDPS oversees EU institutions’ own compliance with data protection law and also provides legislative guidance to the Commission.
EU lawmakers’ draft proposal on regulating applications of AI contained restrictions on law enforcement’s use of biometric surveillance in public places — but with very wide-ranging exemptions which quickly attracted major criticism from digital rights and civil society groups, as well as a number of MEPs.
The EDPS himself quickly urged a rethink. Now he’s gone further, with the EDPB joining him in the criticism.
The EDPB and the EDPS have jointly fleshed out a number of concerns with the EU’s AI proposal — while welcoming the overall “risk-based approach” taken by EU lawmakers — saying, for example, that legislators must be careful to ensure alignment with the bloc’s existing data protection framework to avoid rights risks.
“The EDPB and the EDPS strongly welcome the aim of addressing the use of AI systems within the European Union, including the use of AI systems by EU institutions, bodies or agencies. At the same time, the EDPB and EDPS are concerned by the exclusion of international law enforcement cooperation from the scope of the Proposal,” they write.
“The EDPB and EDPS also stress the need to explicitly clarify that existing EU data protection legislation (GDPR, the EUDPR and the LED) applies to any processing of personal data falling under the scope of the draft AI Regulation.”
As well as calling for the use of biometric surveillance to be banned in public, the pair have urged a total ban on AI systems using biometrics to categorize individuals into “clusters based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights”.
That’s an interesting concern in light of Google’s push, in the adtech realm, to replace behavioral micromarketing of individuals with ads that address cohorts (or groups) of users, based on their interests — with such clusters of web users set to be defined by Google’s AI algorithms.
(It’s interesting to speculate, therefore, whether Google’s FLoC (Federated Learning of Cohorts) proposal risks creating legal discrimination risk — based on how individual web users are grouped together for ad-targeting purposes. Certainly, concerns have been raised over the potential for FLoC to scale bias and predatory advertising. And it’s also notable that Google avoided running early FLoC tests in Europe, likely owing to the EU’s data protection regime.)
In another recommendation today, the EDPB and the EDPS also express a view that the use of AI to infer emotions of a natural person is “highly undesirable and should be prohibited” — except for what they describe as “very specified cases, such as some health purposes, where the patient emotion recognition is important”.
“The use of AI for any type of social scoring should be prohibited,” they go on — touching on one use-case that the Commission’s draft proposal does suggest should be entirely prohibited, with EU lawmakers evidently keen to avoid any China-style social credit system taking hold in the region.
However, by failing to include a prohibition on biometric surveillance in public in the proposed regulation, the Commission is arguably risking just such a system being developed on the sly — i.e. by not banning private actors from deploying technology that could be used to track and profile people’s behavior remotely and en masse.
Commenting in a statement, the EDPB’s chair Andrea Jelinek and the EDPS Wiewiórowski argue as much, writing [emphasis ours]:
“Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places. Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms. This calls for an immediate application of the precautionary approach. A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI. The proposed regulation should also prohibit any type of use of AI for social scoring, as it is against the EU fundamental values and can lead to discrimination.”
In their joint opinion they also express concerns about the Commission’s proposed enforcement structure for the AI regulation, arguing that data protection authorities (within Member States) should be designated as national supervisory authorities (“pursuant to Article 59 of the [AI] Proposal”) — pointing out the EU DPAs are already enforcing the GDPR (General Data Protection Regulation) and the LED (Law Enforcement Directive) on AI systems involving personal data; and arguing it would therefore be “a more harmonized regulatory approach, and contribute to the consistent interpretation of data processing provisions across the EU” if they were given competence for supervising the AI Regulation too.
They are also not happy with the Commission’s plan to give itself a predominant role in the planned European Artificial Intelligence Board (EAIB) — arguing that this “would conflict with the need for an AI European body independent from any political influence”. To ensure the Board’s independence the proposal should give it more autonomy and “ensure it can act on its own initiative”, they add.
The Commission has been contacted for comment.
Update: A spokesperson for the Commission pointed out that the GDPR — in principle — already prohibits the use of remote biometric systems for identification purposes unless limited exceptions apply (such as processing taking place for reasons of substantial public interest).
The official also noted that to be lawful any such exceptional use must be based on EU or national law; be duly justified, proportionate and subject to adequate safeguards; and would also have to comply with the EU Charter of Fundamental Rights.
“With the new regulation we want to create complementary rules to the data protection acquis, but not ban something that is a priori already prohibited (with certain narrow exceptions),” the spokesperson told us.
“For law enforcement purposes we propose to ban the use of real-time remote biometric identification systems in publicly accessible spaces which are not strictly needed to protect exhaustively listed law enforcement purposes (regarding victims of crime such as missing children; specific and imminent threats to life and human safety such as an impending terrorist attack; or investigation/detection of a limited list of serious crimes).
“This leads to an overall comprehensive EU approach that provides sufficient protection and limits the use of those systems to the strict minimum necessary and proportionate for the protection of overriding public interests.”
“The GDPR lists several grounds, which could justify this kind of processing of ‘special categories’ of personal data, including biometric data,” they added. “The one mainly relevant ground here is ‘substantial public interest on the basis of EU or national law’. The law has to be proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable safeguards.
“The Law Enforcement Directive also lists several grounds, the most relevant one as an ‘authorisation by EU or national law’. Under the Law Enforcement Directive, ‘sensitive data’ may only be processed when strictly necessary and subject to appropriate safeguards for the right and freedoms of the data subjects.”
The Commission official also pointed to “a number of beneficial applications outside the law enforcement field”, where they suggested biometric surveillance could provide for a public good, such as helping visually impaired people — saying that that was why it had opted for “a more differentiated approach”, and decided against an outright ban.
The AI Regulation is one of a number of digital proposals unveiled by EU lawmakers in recent months. Negotiations between the different EU institutions — and lobbying from industry and civil society — continue as the bloc works toward adopting new digital rules.
In another recent and related development, the UK’s information commissioner warned last week over the threat posed by big data surveillance systems that are able to make use of technologies like live facial recognition — although she claimed it’s not her place to endorse or ban a technology.
But her opinion makes it clear that many applications of biometric surveillance may be incompatible with the UK’s privacy and data protection framework.