Friday, June 18

Virginia Eubanks: “The government uses technology against the working class because it fears its power”



A machine can decide whether you are entitled to public aid, whether you are granted a bank loan or whether you get access to social housing. The use of artificial intelligence (AI) tools that feed on our data to automate complex decisions like these is increasingly common. That system, however, has a victim: the poor.

For three years, Virginia Eubanks, associate professor of political science at the University at Albany (New York), investigated how, in the United States, the use of algorithms in social assistance services, in the judicial system and in neighborhood surveillance serves to punish and discipline the working class and the homeless. Hers is an exhaustive study of how technological progress is deepening the discrimination and social exclusion of the most vulnerable, a model that, as she herself has experienced, is spreading to the middle class. ‘Automating Inequality’ (Captain Swing) is released this Monday.

-We ignore the poor as if they were something alien to us

-In the US we think that poverty only affects a small part of the population, people with particular pathologies, but that is empirically false. More than half of the population will fall below the poverty line at some point in adulthood, and almost 65% will turn to social assistance. The social systems we agree to build for the chronically poor, racial minorities, the disabled, migrants and the marginalized will end up trapping us all.

-Is that what happened to you?

-Yes. Jason, my partner, was beaten up by robbers. They left him with many injuries, a very serious brain injury and post-traumatic stress. A week earlier we had taken out new health insurance, but it was canceled. I suspect it was because they were investigating us to see whether we were committing insurance fraud. Although a few weeks later they restored the coverage, we have slipped down the economic ladder because he became disabled and cannot work, and I work part-time so I can take care of him. Now we are part of that aid system. And it wasn’t because of any bad decisions we made.

More than half of the population will fall below the poverty line in adulthood

-Has this contempt for the poor grown in parallel with hostility to the welfare state?

-The marginalization of the poor did not start with neoliberalism; it has been here for a long time. I looked at when tools began to be used against the poor and traced it back to 1601, when the idea that poverty is a moral failing was established. We tend to decontextualize these surveillance technologies, but they were created by the politics and culture we have carried with us since then.

-AI systems are tested in poor neighborhoods. Are the poor being used as guinea pigs?

-For many of them, saying no to social assistance is not an option. If you do not have medical care or food aid and a social worker sees that you lack food at home, you run the risk of having your children sent to foster care. The working class has a lot of power, and the state uses these technologies because it fears it. You could refuse to give your data to the health system, but then you would die on the street.

“No reason justifies denying a five-year-old girl access to health services. It is an inhumane and unacceptable way of organizing society.”

-These technologies end up classifying the poor as either good or bad.

-Austerity has been internalized, the idea that there is not enough for everyone, so tools must be created to carry out a moral rationing that determines who most deserves aid. The US is very different from Europe, where there is a basic agreement that human rights exist and that there is a floor below which no one may fall.

One of my biggest fears is that the use of this AI technology makes that sorting into good and bad seem logical, reasonable and fair, when in fact it leaves us stranded. In none of the families I spoke to was there a legitimate reason to deny a five-year-old girl access to health care. It is an inhumane and unacceptable way of organizing society.

In Indiana, the use of algorithms to distribute public aid ended up wrongly denying a million benefit applications. If these cases had not been uncovered, we would still think that these technologies are perfect, that the machine never fails.

We engage in magical thinking about technology: we irrationally trust that its decisions are more reliable than human ones. But over these years I have seen many people realize that they were wrong and that the algorithms carried racist and class biases.

These tools are used politically to override the decisions of working people, racial minorities and women. The problem is that when the algorithm’s decision differs from that of the worker who knows the situation in the neighborhood, the algorithm’s decision ends up prevailing, and that has a lot to do with race and gender and with our distrust of social workers.

“These tools are used politically to override the decisions of working people, racial minorities and women”

-You point out that these systems divide the working class. Does that hinder its political organization?

-In the United States, the social assistance office is a horrible, dangerous, dirty, crowded and heavily guarded place, but it is also a very good place for organizing, because angry people come together and, by talking, build networks of solidarity. These offices have always given rise to riots and resistance. Access to social assistance must be made easier, but these technologies strip away that human contact.

-Does their use, then, respond to the ideological aim of shrinking the public sector and replacing politics with the algorithm?

-The designer of the family screening tool used in Allegheny County, Pennsylvania, said that when these data systems are deployed, red tape can be completely eliminated. For many data scientists this is a decision about efficiency, not politics, but they are sometimes very naive.

One of the greatest dangers of these systems is that they are policies that pretend not to be. Using technology to make lists of who deserves help or to decide where resources go is a way of saying that, as a community, we don’t want to grapple with those tough decisions. In Los Angeles there are 66,000 homeless people. That is not fixed with technology. We should have a discussion about our values, about why we allow that to happen, and about what we are prepared to give up to fix it.

“One of the greatest dangers of these systems is that they are policies that pretend not to be”

-Another danger is that these AI systems create a situation of asymmetric power, in which only whoever owns the algorithm holds power. Does that make us even more vulnerable?

-We focus on technical solutions such as auditing algorithms, more transparency or regulating technology companies, but the families I speak with, even if they lack expert knowledge of these systems, understand their impact. Those are good but partial solutions, because the problem is rooted in how we explain poverty and how we value the lives of others. And in this country there is no agreement that all human beings have the same rights.

-But even those who use the algorithm do not understand how it reaches the conclusions it draws…

-True. I have seen it in cases where a citizen is accused of owing a debt because the algorithm says so. The state has no proof of it, but since the citizen has no proof to refute it either, the courts end up siding with the state. In just two states I have seen tens of thousands of such cases, and there may be millions across the country. The more sophisticated algorithms become, the more important it is to understand how they work. However, I do not want people to think it is so complicated that we cannot fight it, because there are ways to detect it without having studied computing.

-The use of these AI tools is not regulated in the US because regulation is said to slow down technological innovation. Do you think the new European legislation will allow greater respect for human rights?

-I am very excited about a regulation that recognizes the inappropriate uses of these tools. Banning facial recognition systems is crucial. But beyond digital surveillance, these punitive technologies are operating in many places that people don’t see, such as public assistance programs.

-Algorithms make mathematical decisions, not ethical ones. Can there be social progress if we use biased data that replicates the injustices of the past?

-I believe the solution will not come from the people who create this technology or from the agencies that use it. Power concedes nothing without a demand. We are at a dangerous moment in which we divide the poor into the good ones, those who are poor because of the pandemic, and the bad ones, those who were already poor before. But this moment also allows us to rethink many things that seemed natural or inevitable to us. In Australia they have doubled the aid they give to poor people because of the pandemic, and before that it seemed impossible. Citizens must push to deal with all these problems.


www.informacion.es
