Three facial-recognition stories have led the EU regulatory press in the past twelve months.
In March 2025 the Swedish government tabled Proposition 2025/26:150, authorising Swedish police to use real-time AI facial recognition in public spaces for serious crimes, with prosecutor authorisation and oversight from the Privacy Protection Authority. The law took effect on 1 January 2026, making Sweden one of the first EU member states to legislate explicitly for live police biometric identification.
In February 2025 the EU AI Act’s prohibited-practices article became enforceable, banning real-time remote biometric identification by law enforcement in public spaces (with narrow carve-outs for terrorism, missing-person, and major-crime cases) and banning the untargeted scraping of facial images from the internet or CCTV to build face-recognition databases. Maximum penalties under that article are €35 million or 7% of global turnover, whichever is higher. The next compliance milestone, on 2 August 2026, brings the high-risk system requirements into force.
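The penalty ceiling is a simple maximum of two quantities. A minimal sketch (the function name and the example turnover figure are ours, not from the Act):

```python
def max_article5_fine(worldwide_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for a prohibited-practice
    violation under the AI Act: the higher of EUR 35 million and 7% of
    worldwide annual turnover."""
    # Compute 7% via integer-friendly arithmetic to avoid float drift.
    return max(35_000_000.0, worldwide_turnover_eur * 7 / 100)

# For a firm with EUR 1 billion in worldwide turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million floor:
print(max_article5_fine(1_000_000_000))  # 70000000.0
```

The floor matters for small operators: below EUR 500 million in turnover, the €35 million figure is the binding ceiling.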
In October 2025 the EU’s new Entry/Exit System started operating at Schengen external borders. The system became fully operational on 10 April 2026. Every non-EU national crossing the border now has their face and fingerprints recorded into a central biometric database administered by eu-LISA.
These three threads have shaped the public conversation about facial recognition in Europe. None of them produced the largest unconsented facial-recognition exposure that EU residents face in 2026. That exposure was produced by two non-state operators that the EU has formally found to be in violation of multiple regulations, that have refused to comply with administrative orders, and that continue to operate against EU residents today.
This article maps the visible regulation alongside the operating reality, and argues that the gap between them is the structural feature most worth attention.
What the visible state of regulation actually says
The Swedish law is narrowly drawn. Real-time facial recognition by police is permitted for serious crimes (carrying four-year-or-more prison sentences), terrorism prevention, and certain missing-person cases. Each deployment requires advance authorisation from a prosecutor or, in emergencies, retroactive authorisation within 24 hours. All use is reported to the Privacy Protection Authority. The law is in force; deployments are subject to procedural review.
The EU AI Act’s posture on law-enforcement biometric identification is layered. Article 5(1)(d) prohibits real-time remote biometric identification in public spaces, with narrow exceptions broadly aligned to the categories Sweden’s law also lists. Authorisation must include a fundamental-rights impact assessment, registration of the system in the EU database, and respect for temporal, geographic, and personal limitations. Post-incident facial recognition (analysing recorded footage after the fact) is classified as high-risk rather than prohibited, meaning it must meet the high-risk requirements that come into force on 2 August 2026.
The Entry/Exit System is the largest biometric programme in the EU’s external-border architecture. EES holds facial images and fingerprints (children under twelve provide only the facial image) for non-EU travellers crossing into Schengen. The data is held by eu-LISA under purpose-bound retention rules and accessible to border-control authorities for the stated functions of the system.
A 2024 survey by AlgorithmWatch counted at least eleven EU member states already running operational police facial recognition: Austria, Finland, France, Germany, Greece, Hungary, Italy, Latvia, Lithuania, Slovenia, and the Netherlands. Most are forensic deployments (post-incident analysis against custody-image databases). The Dutch police database holds approximately 1.3 million images, including individuals who were never charged.
Each of these systems has procedural attachments: legal basis, oversight body, retention rule, redress mechanism. They sit inside a regime that is auditable in principle, even when the audit process is contested.
The AI Act red lines that already apply
Two prohibitions in Article 5 of the EU AI Act are directly relevant to this discussion, and both have been enforceable since 2 February 2025.
The first is Article 5(1)(d), the prohibition on real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes. The exceptions are listed exhaustively (terrorism, kidnapping and human-trafficking victim search, identification of suspects in serious enumerated crimes) and require ex-ante or near-immediate post-hoc judicial or administrative authorisation. This is the article that frames Sweden’s law: the exception structure under Article 5(1)(d) is precisely what Sweden’s legislation implements.
The second is Article 5(1)(e), the prohibition on placing on the market, putting into service, or using AI systems that “create or expand facial-recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.” This is a categorical ban with no law-enforcement carve-out. The European Commission’s implementation guidelines, published in early 2025, clarified that targeted searches using a specific image to find a specific individual are permitted, and that face-image datasets used purely for AI training or testing without identification are not in scope. Database-building scraping is.
Article 5(1)(e) does not name any company, but it describes Clearview AI’s business model exactly. Clearview’s product is a face-recognition database built by scraping billions of images from publicly accessible internet sources, sold for identity matching to law-enforcement and other clients. As of February 2025 that activity is illegal on two parallel grounds: it has been a GDPR violation since 2022 across five jurisdictions, and it is now a prohibited AI practice under the AI Act, with maximum administrative fines of €35 million or 7% of worldwide turnover.
The non-state layer that is already running
Clearview AI is the larger and better-documented of the two operators worth examining.
Between 2022 and 2024, five European supervisory authorities found Clearview’s processing of EU residents’ biometric data to violate the GDPR and issued fines totalling more than €100 million. Italy’s Garante imposed €20 million in February 2022. Greece’s Hellenic DPA imposed €20 million in July 2022. France’s CNIL imposed €20 million in October 2022, followed by an additional €5.2 million penalty in May 2023 for non-compliance with the original order. The UK’s ICO issued £7.5 million in May 2022; that fine was vacated by a First-tier Tribunal in October 2023, then reinstated by the Upper Tribunal in October 2025, which held that Clearview’s processing constitutes monitoring of UK-resident behaviour and falls within the territorial scope of UK GDPR. The Dutch DPA imposed €30.5 million in May 2024, alongside four enforcement orders backed by non-compliance penalties of up to €5.1 million in total.
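The arithmetic of the fines listed above, kept in their original currencies (the figures are the ones named in this section; a sketch, not an official tally):

```python
# EUR-denominated fines against Clearview AI, in millions of EUR,
# as itemised in the text above.
eur_fines = {
    "Garante (IT), Feb 2022": 20.0,
    "Hellenic DPA (GR), Jul 2022": 20.0,
    "CNIL (FR), Oct 2022": 20.0,
    "CNIL (FR) non-compliance, May 2023": 5.2,
    "Dutch DPA (NL), May 2024": 30.5,
}
ico_fine_gbp = 7.5  # ICO (UK), May 2022, in millions of GBP

total_eur = sum(eur_fines.values())
print(f"EUR-denominated total: EUR {total_eur:.1f}m, plus GBP {ico_fine_gbp}m (ICO)")
# EUR-denominated total: EUR 95.7m, plus GBP 7.5m (ICO)
```

The Dutch non-compliance penalties of up to €5.1 million are conditional and excluded from the sum; converting the ICO fine at recent exchange rates pushes the aggregate past €100 million.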
Clearview has paid none of these fines, has not appointed an EU representative under Article 27 GDPR, and has in some cases declined to respond to formal notices. Because the company is US-domiciled with no EU assets, there is no straightforward enforcement route for a fine even after exhaustion of appeals. To date, none of the European fines have been collected.
In September 2024 the Dutch DPA opened an investigation into the personal liability of Clearview’s directors. Dutch law allows the regulator to hold individuals personally accountable for corporate GDPR violations they directed or knowingly permitted. This is the first time the mechanism has been deployed against an offshore operator at this scale, and the first attempt to reach individuals that corporate structure does not protect.
PimEyes is the second case, and it is structurally similar. PimEyes operates a public-facing facial-image search engine that continuously scrapes the open internet, indexes faces, and lets paying users find every public image on which a given face appears. Its corporate domicile is offshore (variously Belize and Georgia), with a Polish operating presence historically. A complaint against PimEyes was filed with the Hamburg DPA in July 2020 by an affected data subject. The Hamburg DPA acknowledged that PimEyes was operating unlawfully under the GDPR. It then took no substantive enforcement action for more than four years.
In 2025 the privacy NGO noyb sued the Hamburg DPA, arguing that the regulator’s inaction was itself unlawful and that the offshore location of an operator cannot be a reason to abandon enforcement of European data-protection law. A Hamburg DPA decision followed in late 2025; substantive enforcement against PimEyes itself remains pending at the time of writing.
The two cases share a structural pattern. The operator is US- or offshore-domiciled, processes EU-resident biometric data, has been formally found to violate EU law, continues to operate, and treats fines and enforcement orders as costs to be ignored. The compliance posture is constructed to make enforcement administratively expensive while the operating economics absorb each ruling as a sunk cost.
What this means for an EU resident’s face
If you have ever appeared in a public image online (a LinkedIn profile photo, a tagged Facebook image, a news article, a wedding announcement, a conference attendee list, a sports-club photo), you are highly likely to be in Clearview’s index. The company’s claimed scale of more than 30 billion images was reported in 2023 and has not contracted.
PimEyes is consumer-accessible. Anyone with a credit card can upload your face and receive a list of public images where you appear, with source URLs. The service is widely used in unrelated investigative contexts and has been documented in stalking and domestic-abuse cases. Nothing in its operating posture stops a private user with no investigative training from running the search.
The right-of-access mechanic exists for both. Article 15 GDPR allows any data subject to demand from any controller a copy of their personal data, the categories of recipients, the purposes, and the retention period. We covered the mechanic at length in our analysis of GDPR Article 15 as a corporate-side reconnaissance vector. Compliant controllers respond inside the statutory one-month window of Article 12(3), extendable by two further months for complex requests. Clearview and PimEyes do not. Article 17 erasure operates on the same logic: it works against compliant operators and is ignored by the rest.
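The Article 12(3) clock can be computed mechanically. A sketch assuming the common "same day next month, clamped to month-end" reading of "one month" (the function name is ours, and that reading is an assumption, not the only one):

```python
from datetime import date
from calendar import monthrange

def article12_deadline(received: date, extension_months: int = 0) -> date:
    """Response deadline under GDPR Article 12(3): one month from receipt,
    extendable by up to two further months for complex requests. 'One month'
    is read as the same day of a later month, clamped to that month's
    last day."""
    months = 1 + min(extension_months, 2)
    m = received.month - 1 + months
    year, month = received.year + m // 12, m % 12 + 1
    day = min(received.day, monthrange(year, month)[1])
    return date(year, month, day)

# A request received on 31 January 2026 is due by the end of February:
print(article12_deadline(date(2026, 1, 31)))  # 2026-02-28
```

With the maximum extension, a request received on 15 January 2026 would be due 15 April 2026; the point of the sketch is that the deadline is short and fully determinate, so non-response is unambiguous.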
The instruments fail in proportion to the operator’s compliance posture, not uniformly across the layer. For every European data broker or registered service that responds to an Article 15 request inside the deadline, there is a Clearview that does not. A reader’s exposure is the union of the compliant and the non-compliant subset, with the most invasive collection generally sitting in the non-compliant subset. We mapped that layer structurally in our inventory of the data layers you did not author.
Why the conversation is louder where less is at stake
Procedural systems generate procedural debate. Sweden’s law produces parliamentary argument over scope, oversight, retention, proportionality, and civil-liberties safeguards. The AI Act produces compliance-industry argument over the boundaries of carve-outs, the scope of high-risk classification, and the timing of secondary legislation. The EES produces civil-society argument over data-minimisation, retention periods, and third-country sharing. In each case there is procedural input from both sides.
Non-state operators that have declined to engage with regulation work differently. The EDPB issues a press release and national DPAs impose fines; the fines go unpaid and collection never follows; the next press release notes the same facts at a higher number. Civil-liberties organisations sue inactive regulators, regulators respond by announcing new investigations, and operations continue regardless. The conversation flattens because one party never speaks.
This is not an argument that the visible regulation is unimportant. The Swedish law, the AI Act, and the EES will materially shape how state and state-adjacent facial recognition operates in the EU for the next decade. Each of those programmes is bounded, auditable, and subject to remedy. The bounded layer is the easier one to analyse because there is something on both sides of the analysis.
The harder analytical exercise is the unbounded layer. The most invasive face-recognition exposure most EU residents have in 2026 was assembled by operators who have declined the procedural game. Closing that gap is a different category of problem from regulating Sweden’s police: it requires either jurisdictional reach the EU does not currently have, or novel enforcement mechanics like the Dutch director-liability investigation, or commercial pressure (payment processors, hosting providers, third-party data exits) the regulators have so far been reluctant to deploy at scale. Each of those is harder than passing the next law.
The next twelve months will indicate which path the EU takes. The Hamburg DPA’s substantive response to noyb’s litigation, and the Dutch DPA’s resolution of the personal-liability investigation, will say more about the future of facial-recognition enforcement in Europe than the dozen pieces of secondary legislation due to be issued under the AI Act in the same period.
Sources
Sweden Proposition 2025/26:150 — police real-time facial recognition
- Library of Congress Global Legal Monitor: Sweden government bill on facial recognition and DNA genealogy
- Biometric Update: Sweden proposes law on live facial recognition
EU AI Act — prohibited practices and timeline
- EU AI Act Article 5 (full text)
- Future of Privacy Forum: the ban on untargeted scraping of facial images
- Future of Privacy Forum: restrictions on real-time RBI for law enforcement
Entry/Exit System (EES)
- European Commission: Entry/Exit System overview
- European Commission: EES fully operational 10 April 2026
Clearview AI EU enforcement
- EDPB: Italian SA fines Clearview AI €20 million (2022)
- EDPB: Hellenic DPA fines Clearview AI €20 million (2022)
- EDPB: French SA fines Clearview AI €20 million (2022)
- Dutch DPA: €30.5 million fine and four enforcement orders against Clearview (May 2024)
- ICO: UK Upper Tribunal judgment on Clearview AI (October 2025)
- Solomon investigation: how Clearview dodged fines across Europe
PimEyes — Hamburg DPA inactivity and noyb litigation
Operational police facial recognition in EU member states