This time last year, Dutch politics was in utter chaos. In the middle of an ongoing pandemic, the Dutch government resigned over its mismanagement of large sums of childcare benefit funds. The tax authority had wrongly accused thousands of families, many of them low-income or immigrant households, of fraud and ordered them to repay benefits worth tens of thousands of euros overnight. In many cases, an error as harmless as a missing signature led the authorities to label citizens as fraudsters. The institution was also reported to have discriminated against citizens with dual nationality. When residents turned to the government for redress, it chose to resign, thereby admitting that a grave injustice had been done to innocent people, many of whom were forced into penury within days.
Between 2012 and 2019, the tax authority imposed serious administrative burdens on 26,000 parents by falsely deeming them fraudulent on the basis of suspicions generated by a “self-learning” algorithm. The erroneous suspension of benefits forced many of those under scrutiny to quit jobs, navigate complex appeals processes, file for bankruptcy, and endure psychological distress, as formal complaints took up to two years to be addressed. A mix of human oversight to keep the automated process in check and prompt redress of people’s grievances would have eased these burdens and prevented the scandal from destabilizing the Netherlands’ long-standing government.
Studies highlight that the childcare benefit model was prone to fraud from its inception, as there was no mechanism to verify the benefit sum on a case-by-case basis or to regulate the use of fake bills for childminder services. This led some agencies to engage in “organized fraud,” the blame for which fell on beneficiaries, since the authority placed the onus of correct claiming and delivery of benefits on parents alone. The institution also lacked the capacity to suspend a benefit pending investigation, which meant that suspicion alone led to the termination of allowances. Moreover, the tax authority divulged no details while a probe was underway. These factors levied significant learning and compliance costs on parents, who had to turn to the courts to understand why their benefits were taken away and how they could prove their innocence. After 2012, politicians ordered the bureaucracy to tighten the noose around fraudulent practices, which led to the adoption of “collective punishment” on an 80-20 principle (treating flagged groups as 80% fraudulent and 20% innocent).
Long after rumors emerged that racial profiling was baked into the automated system, the tax authority admitted, in May last year, that it had used “dual nationality” as an indicator of whether someone was likely to commit fraud. Advocates representing families claim that “foreign-looking names” came directly under the lens, which led immigrants and minorities to endure more acute burdens than others. The underlying racism and discrimination among people in power created a deep institutional bias in society and imposed psychological strain on parents, some of whom had to opt out of employment to attend to their children. Many families also suffered divorces and separations, living in fear of displacement and imprisonment, which added to their psychological woes. When victims asked the government to come to their aid in this period, central ministries resorted to blame avoidance and did nothing to ease the burdens. It was only in 2018, when media coverage of the suspicion-flagging process accelerated, that Parliament took note.
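To make the mechanism concrete, the sketch below shows how a risk model that accepts dual nationality as an input feature encodes discrimination by construction. This is a minimal, hypothetical illustration: the feature names and weights are assumptions for the sake of the example, since the tax authority’s actual model was never published.

```python
# Hypothetical illustration only: a simple linear risk score in which
# "dual_nationality" appears as an input feature. Any nonzero weight on
# a protected attribute makes the outcome depend on it by construction.

applicant = {
    "missing_signature": 1,      # a harmless paperwork error
    "claim_amount_eur": 8000,
    "dual_nationality": 1,       # protected attribute used as a predictor
}

# Illustrative weights; the real model's internals were never disclosed.
weights = {
    "missing_signature": 0.30,
    "claim_amount_eur": 0.00002,
    "dual_nationality": 0.40,
}

risk = sum(weights[k] * applicant[k] for k in weights)
print(f"risk score: {risk:.2f}")
# An otherwise identical file without dual nationality scores 0.40 lower,
# so the same paperwork error can push one family over a fraud threshold
# but not the other.
```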
The scandal illustrates how the design of automated administrative infrastructure can produce burdensome outcomes when human intervention is negligible. Every stage of the fraud detection process was devoid of human interaction between suspected beneficiaries and the authorities, so decisions were formed from a distance, based on risk modeling and documentation analysis. Moreover, each stage of the administrative process responsible for flagging fraud dictated the rules of the stage that followed. The result was a lack of individual assessment at every level, which multiplied the costs for citizens at the receiving end.
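A minimal sketch of such a pipeline, under the assumption of a fixed flagging threshold and stages that inherit each other’s labels as ground truth, shows how a single upstream score can cascade into termination and full repayment without a human checkpoint anywhere. All stage names, fields, and values below are hypothetical, not the authority’s actual code.

```python
# Hypothetical sketch of the design failure described above: each stage
# consumes the previous stage's label as ground truth, and no path ever
# routes a case to a human reviewer.

FRAUD_THRESHOLD = 0.5  # illustrative cutoff

def stage_risk_model(case: dict) -> dict:
    # Decision made "from a distance": a score, not a conversation.
    case["flagged"] = case["risk_score"] > FRAUD_THRESHOLD
    return case

def stage_enforcement(case: dict) -> dict:
    # The upstream flag dictates this stage's rule: suspicion alone
    # terminates the allowance, with no option to suspend it pending
    # investigation and no individual assessment.
    if case["flagged"]:
        case["benefits"] = "terminated"
        case["repayment_demanded"] = True
    return case

def stage_recovery(case: dict) -> dict:
    # Downstream, the inherited label again drives the decision,
    # multiplying the cost of a single upstream error.
    if case.get("repayment_demanded"):
        case["repay_in_full_immediately"] = True
    return case

case = {"risk_score": 0.62}  # e.g., inflated by a missing signature
for stage in (stage_risk_model, stage_enforcement, stage_recovery):
    case = stage(case)  # note: no stage routes the case to a human

print(case)
```

The point of the sketch is structural: because each function trusts the label produced by the one before it, a review step inserted at any single stage would have broken the cascade.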
The gross injustice also reveals how institutional racism and class divides are writ large even in the most advanced societies. The automated infrastructure normalized discrimination, allowing it to gather momentum unchecked. Political structures devoid of inherent biases are an anomaly, but when a government-supported framework makes it the norm to segregate on the basis of color, class, or creed, a societal breakdown is only natural. De-stigmatization would require tactfully replacing those in power with sensitive, public-minded bureaucrats and politicians who do not push minorities to the brink. An open channel of communication between politicians and citizens is key to ensuring stability and making people feel heard.
Personalizing the response to different kinds of suspicious claims could also have softened the blow on citizens, since the tax authority placed everyone, from the agencies exploiting loopholes to the parents struggling with compliance and learning roadblocks in filling out claim details, under the same fraud bracket. It is imperative to leave room for nuance when developing a system that processes applications at scale. Unless the government successfully eases these glaring administrative burdens, access to welfare will remain restricted to the more “well-off” sections of society, and the smooth functioning of the system will remain a far-fetched dream.