States of Surveillance

European Data Mining in the Name of Welfare-Fraud Detection

Artificial Intelligence (AI) has recently become all the rage around the world, filling some people with awe and others with terror. Just like any new technology, it can be used for good or for nefarious purposes. And right on schedule, governments worldwide are conducting experiments with AI, often without public awareness. While there has been limited reporting on the topic, the focus has mainly been on predictive policing and risk assessments within the criminal justice system. However, there is an area where even more extensive experiments are taking place, and they are targeting vulnerable populations with minimal scrutiny.

Opaque Technologies

Extensive welfare-fraud detection systems are now being used in welfare states, such as Denmark and other Scandinavian countries. Ranging from basic spreadsheets to sophisticated predictive algorithms, these systems generate “scores” that can have life-altering consequences for millions. The sales pitch for them emphasizes their potential to recover large amounts of fraudulently obtained funds from public resources. But public authorities have typically resisted calls for transparency, citing increased fraud risks or the need to protect proprietary technology. The portrayal of benefit cheats often reinforces the stereotype of the “undeserving poor,” and the debate over these issues in Europe, which has generous welfare states, is highly politically charged.

Consulting firms, which are often the algorithm vendors, tend to exaggerate the extent of welfare fraud, claiming it to be around 5 percent of benefits spending, while national auditors’ offices estimate it to be much lower, somewhere between 0.2 and 0.4 percent. Differentiating between honest mistakes and intentional fraud in complex public systems is admittedly challenging, but when opaque technologies are deployed to try to distinguish one from the other, or even simply to identify the latter, they pose significant risks to all. Millions of individuals are now being subjected to scoring by these systems based on data-mining operations that take place out of the public eye, as fraud controllers gain the authority to deeply interrogate citizens’ lives. Being flagged by the “suspicion machine” can have severe consequences, and anyone can become a suspect.1

Denmark

As a case in point, consider Denmark, which spends a whopping 28 percent of its GDP on state benefits. Despite claims made by Bernie Sanders, Denmark is not a socialist country, but its citizens do pay high taxes in exchange for first-class benefits. The average Danish taxpayer contributes 36 percent of his income to the state in return for “free” universal healthcare, education, and elder services.2 Such a system works well in the small Nordic countries, where people must endure long, dark winters with very cold temperatures, thus necessitating more highly coordinated public-spending programs to protect their most vulnerable. Such schemes would be quite unworkable in the United States because of its much more varied demography and climate.

But in recent years, this amount of government spending has become a hot-button issue in Denmark, and the Danish government has cut back on public spending, even as its population is aging. Further complicating the matter, back in 2011, KMD, one of Denmark’s largest IT companies, estimated that up to 5 percent of all welfare payments in the country—an order of magnitude higher than most other European countries—were fraudulent. So Denmark set out to crack down on benefit fraud, and its fraud-detection system has since ballooned in the scope of its surveillance and ethnic profiling. Under the aegis of Annika Jacobsen, the head of the data-mining unit at the Danish Public Benefits Administration, the agency has tripled the number of state databases it accesses from three to nine.

According to Gabriel Geiger at Wired, the Danish system now collects highly personal data on individuals, including information related to their taxes, homes, autos, relationships, employers, and travel. Other variables considered include nationality (a variable whose use is now deemed illegal in many Scandinavian countries), connections to non-EU countries through “family relations,” marital status (including “presumed partner”), duration of marriage, living arrangements, home size, income, previous residence outside Denmark, call history with the Public Benefits Administration, and even the residency status of taxpayers’ children. All this data gets fed into sophisticated models said to have the ability to analyze it and then predict potential cases of fraud. But while these programs are highly intrusive, which is problematic in itself, Morten Bruun Jonassen, a decision-maker at the hard end of the automated fraud-control system in Copenhagen’s social services department, says only a “very small” number of the cases he encounters involve actual fraud.3
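To make the mechanics concrete, here is a minimal sketch of how such a scoring system might work in principle. The feature names, weights, and threshold below are entirely invented for illustration; the internals of the actual Danish model have not been disclosed, and this is only a generic logistic-scoring pattern applied to the categories of data Wired reported.

```python
# Hypothetical illustration only: the feature names, weights, and threshold
# are invented for exposition and do NOT reflect the actual Danish model,
# whose internals remain undisclosed.
import math

# Toy per-case features, loosely based on the categories Wired reported
# (foreign ties, presumed partner, residence abroad, call history).
case = {
    "foreign_ties": 1,        # connections to non-EU countries (binary)
    "presumed_partner": 1,    # flagged cohabitation mismatch (binary)
    "months_abroad": 4,       # recent residence outside Denmark
    "benefit_calls": 7,       # calls to the Public Benefits Administration
}

# Invented weights standing in for a trained model's coefficients.
weights = {
    "foreign_ties": 1.2,
    "presumed_partner": 0.8,
    "months_abroad": 0.15,
    "benefit_calls": 0.05,
}
bias = -2.0

def risk_score(features, weights, bias):
    """Logistic 'suspicion score' in [0, 1] from a weighted feature sum."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

score = risk_score(case, weights, bias)
flagged = score > 0.5  # cases above the threshold go to human fraud controllers
print(f"score={score:.2f} flagged={flagged}")
```

The point of the sketch is that the human consequence, being flagged for investigation, hinges on a single numeric threshold applied to opaque, vendor-chosen weights, which is precisely why transparency advocates object to these systems.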

Other European Countries

Several other countries, including France, the Netherlands, Ireland, Spain, Poland, and Italy, have also turned to similar AI-based algorithms in response to political pressure to crack down on welfare fraud. The Netherlands in particular serves as a cautionary tale against excessive reliance on technology. In 2021, a childcare benefits scandal forced the resignation of the entire Dutch government. Approximately 20,000 families were wrongly accused of fraud because officials treated minor mistakes, like a missing signature, as indications of fraudulent activity, and welfare recipients were forced to repay significant sums they had received as benefits.

The scandal was enough to bring down the Dutch government, but after reviewing documents obtained from Denmark’s fraud-control program, Geiger says the Danish system goes well beyond the Dutch one.

Implications for the United States

As a keen student of American current affairs, I understand the United States to be currently under a radical, far-left administration. According to a report by Thomas Catenacci from the New York Post, William Henck, a former Internal Revenue Service (IRS) lawyer who was forced out of his job for accusing his superiors of malfeasance back in 2017, came out as a whistleblower, warning that the greatly expanded IRS was intending to audit millions of small businesses and middle-class taxpayers.4 It’s almost certain the IRS will employ AI algorithms similar to those used in Denmark and other European countries, harvesting personal data to conduct these audits.

In an even more sinister move, according to Kyle Seraphin at UncoverDC, the Federal Bureau of Investigation (FBI) has recently received approval from the Biden administration to keep an eye on Roman Catholics, and especially those who attend the Latin Mass.5 As Charlie Kirk, founder of Turning Point USA, put it, “The FBI was just caught plotting to target Catholics who attend Latin Mass using SPLC [Southern Poverty Law Center] rhetoric as justification to treat them as enemies of the state.”6 Would the FBI be keen to use this new and powerful data-gathering and surveillance technology to close in on future suspects? Why not?

Raising awareness about these underhanded data-harvesting methods being deployed on an unsuspecting public is the first step toward putting an end to their game.

Notes
1. https://www.lighthousereports.com/investigation/suspicion-machines/.
2. https://www.reuters.com/article/us-denmark-election-welfare-insight-idUSKCN1SZ0IC.
3. https://www.wired.com/story/algorithms-welfare-state-politics/.
4. https://nypost.com/2022/08/16/ex-irs-whistleblower-says-middle-class-targeted-under-inflation-bill/.
5. https://www.uncoverdc.com/2023/02/08/the-fbi-doubles-down-on-christians-and-white-supremacy-in-2023/.
6. https://twitter.com/charliekirk11/status/1623860261429084160.

is the author of eight books on amateur and professional astronomy. His latest book is Choosing & Using Binoculars: A Guide for Stargazers, Birders and Outdoor Enthusiasts (Springer Publishing, 2023).

This article originally appeared in Salvo, Issue #66, Fall 2023 Copyright © 2026 Salvo | www.salvomag.com https://salvomag.com/article/salvo66/states-of-surveillance

