Artificial intelligence is everywhere these days, from the Alexa virtual assistant in your kitchen to the algorithms that decide on your suitability for a job or a mortgage. But what exactly is it? The definition matters because, to a great extent, it dictates how we think about AI's impact.

If AI is something that outperforms humans by definition, it seems logical to trust it to identify people who should be stopped and searched via facial recognition, say, or to make judgements on which offenders should get probation. If it is solely about algorithms, it becomes a lot easier to sweep aside bias and injustice as mere technical issues.

Kate Crawford takes a broader view. Co-founder of the AI Now Institute at New York University and a researcher at Microsoft Research and the École Normale Supérieure in Paris, she has spent the best part of two decades investigating the political and social implications of AI. In her new book, Atlas of AI, she also looks at the global infrastructure that underpins the rise of this technology.

She argues that AI, far from being something abstract and objective, is both material and intrinsically linked to power structures. The way it is made involves extracting resources from people and the planet, and the way it is used reflects the beliefs and biases of those who wield it. Only when we come to terms with this, says Crawford, will we be able to chart a just and sustainable future with AI.

Timothy Revell: What is AI?

Kate Crawford: I think of it in three ways. Technically speaking, it is an ecosystem of techniques that we can put under the banner of machine learning. Secondly, it’s about social practices: who is designing the systems and who is deciding which problems to solve. And finally there is infrastructure, the process of massive data harvesting and where it is going.

Why do we tend to focus on the technology itself rather than its effects?

There’s a tendency to be blinded by innovation. Back in the 1960s, Joseph Weizenbaum, who created the first ever chatbot, Eliza, noticed that people were completely prepared to be taken in by the powerful delusion that AI systems were entirely autonomous technical boxes that could engage with us as thinking entities. He said there was a trap we would fall into: we would focus too much on technical innovation and not on the deeper social impacts these systems would have. Weizenbaum wrote about these issues in the mid-1970s and we still haven’t learned that lesson.

You say in your new book that AI is neither artificial nor intelligent. What do you mean?

Often when people think about artificial intelligence, they’ll think about binary code and maths, or something that’s ethereal and in the cloud, or they might think about a series of corporate products like Alexa, Siri or Google’s search algorithm. But none of these things are artificial – in fact they are profoundly material. They only function because of large amounts of data scraped from the internet and an enormous extraction of resources, including minerals, energy and the human labour needed to label the data used by AI systems. In this sense, AI is a material system, one that very much comes from humans, is created by humans and, more widely, from the earth.

Then we think about intelligence. There’s a trap we have fallen into ever since the very early days of AI: assuming that computers are like the human mind. The writer and engineer Ellen Ullman once wrote that the belief that the mind is like a computer, and vice versa, has infected thinking in the computer sciences for so long that it has become like an original sin. We don’t look at how these systems are different to human intelligence. They’re doing statistical analysis at scale and that’s very useful for some things. But let’s be really clear: it’s not like human intelligence.

How does thinking of AI like human intelligence cause problems?

One phenomenon I discuss in my book is the idea of enchanted determinism, the belief that these systems are both magical and at the same time can provide insights about all of us in ways that are superhuman. This means we’re not expecting these systems to produce forms of bias and discrimination. Nor do we focus on the ways in which they’re constructed and their limitations.

What have you learned about how products that use AI are made, and the impact that has on people and the environment?

One of the most eye-opening projects I’ve worked on was “Anatomy of an AI System” with Vladan Joler at the University of Novi Sad in Serbia. We traced the life cycle of a single Amazon Echo, the voice-enabled AI system. It was remarkable how difficult it was to track where all of the components came from, to study the ways in which user data is harvested and processed, all the way through to the devices being disposed of in e-waste tips in countries like Ghana and Pakistan.

That project inspired me to look deeper into the full logistical pathways and supply chains of the AI industry. AI requires a lot of industrial infrastructure. When I started researching the book, I began by focusing on hardware. But over the past few years we’ve all learned a lot about how much energy AI consumes. If you look at cutting-edge systems like OpenAI’s GPT-3, a language model that produces human-like text, they are extremely energy intensive. There is a sizeable carbon footprint and we need to contend with it. Combine that with the labour exploitation that happens on digital piecework services like Amazon Mechanical Turk and you can start to see the ways in which AI can be understood as an extractive industry.

You say that it is inherently political too. How?

Artificial intelligence is politics all the way down. From the way in which data is collected, to the automated classification of personal characteristics like gender, race, emotion or sexual identity, to the way in which those tools are built and who experiences the downsides.

Time and time again we’ve seen that people who are already marginalised are the ones who experience the worst harms from large-scale artificial intelligence systems. We’ve seen communities of colour targeted by predictive policing systems, immigrants surveilled and tracked by deportation tools, and people with disabilities cut off from support services due to poorly designed healthcare algorithms.

I’m optimistic when I see people starting to demand greater justice, transparency and accountability. We’ve seen widespread student protests in the UK over algorithmic mismanagement in the education system and we’ve seen substantial public pushback around facial recognition in the US.

Are we also seeing government pushback? Like when the Australian government drafted legislation for big tech firms to pay for content from news organisations and Facebook responded by briefly turning off all news for Australians on its platform?

It was horrifying to see that. It was a signal from Facebook to the world that said: “If you pass laws that we don’t like, we will simply take our toys and go home.” And given how many countries right now are looking to produce much stricter regulation of the tech sector, it looks like a troubling kind of strongman tactic.

Are tech companies any different to powerful companies that have gone before them?

Tech companies have taken on the roles of states in terms of things like providing civic infrastructure. Facebook, for example, has spent huge amounts of money to convince populations that it is the place where you communicate with family, where student groups put up their information, where you connect with your communities. What was so extraordinary to see was that this civic infrastructure can be switched off at any minute. The power of technology companies has in some ways leapfrogged the power of states, and this is very unusual.

What can we do about that?

We have a long way to go, but I’m actually optimistic. Think about the car. Cars didn’t have safety belts for decades, but now laws mandating them have been passed around the world. You can also think about the way that some countries have extremely strong food safety regulations that have a real impact on people’s lives. We have to come up with similar policies to control the harmful impacts of artificial intelligence.

In terms of the bias built into AI and the unjust outcomes it produces, are we just seeing the tip of the iceberg?

If you think about the biggest stories about bias in AI over the past decade, they’ve come about because an investigative journalist, a whistle-blower or a researcher has discovered a particular issue. But there are myriad issues that have never been made public, which is why we need to shift our focus from the idea that bias is something requiring a tech fix, to looking at the ways in which discrimination is built into the DNA of these systems, such as in the data sets used to train them.

What are the most problematic uses of AI you can see coming down the track?

One I find particularly concerning is so-called emotion detection. There are companies that use this in hiring tools, so that when you’re doing a job interview, the micromovements in your face are mapped to all sorts of interpretations of what you might be thinking and feeling, often judged against previous successful applicants. One of the problems with that is that you end up hiring people who look and sound like your existing workforce.

There is also a tool that has been marketed to shopping malls that scans people’s faces for emotions that supposedly indicate someone is about to steal from shops. What was the training data for that, and what are the assumptions about what somebody looks like when they are shoplifting?

Does the underlying technology of emotion detection work?

It has been almost entirely demolished. Psychologist Lisa Feldman Barrett looked at every single paper that’s ever been written on this question and found no reliable correlation between the expression on your face and your internal emotional state. That, frankly, is known to anyone who has had their picture taken by a photographer who said “smile”.

What is really interesting is how the assumption becomes ingrained in a field like machine learning. It is a case of the theory fitting the tools. Machine learning can look at movements of the face, so if the theory says there are universal emotions that can be detected from microexpressions, then AI can be used. Or misused. And it can end up being applied in something as important as education or criminal justice.

When it comes to the future of AI, are you an optimist or a pessimist?

I’m a sceptical optimist. I am optimistic about the ways in which we think about the next generation of civic infrastructure. How do we make sure infrastructures are going to really serve us, and in ways that can’t just be switched off in the middle of a political negotiation, as we saw with Facebook and Australia?


The conversation about climate change has reached a point where we are going to have to think about the impact technical systems have on the planet, from an energy and natural resources perspective. I’m also optimistic that, in some ways, AI allows us to have conversations about how we want to live. Those conversations have often been quite segmented: debates about labour rights, climate justice and data protection have mostly happened in very separate silos, but right now artificial intelligence touches each one. This is the moment to bring those issues together.

So the detrimental effects of AI, which is still in its infancy, can be reversed?

The important thing to remember is that no technology is inevitable. Just because something is designed doesn’t mean it has to be widely deployed. And just because something has always been done a certain way doesn’t mean we can’t change it.

That is the most important thing when we think about labour exploitation, environmental degradation and the mass harvesting of data, all of which can be profoundly detrimental. These are all practices that can change, and the great legacy of industry over the past 300 years or so is that industries do change once they are regulated. We can remake these systems and there’s profound political hope in that.