How this moment for AI will change society forever (and how it won’t)
There is no doubt that the latest advances in artificial intelligence from OpenAI, Google, Baidu and others are more impressive than what came before, but are we in just another bubble of AI hype?
This moment for artificial intelligence is unlike any that has come before. Powerful language-based AIs have lurched forward in ability and can now produce reams of plausible prose that often can’t be distinguished from text written by humans. They can answer tricky technical questions, such as those posed to lawyers and computer programmers. They can even help better train other AIs.
However, they have also raised serious concerns. Prominent AI researchers and tech industry leaders have called for research labs to pause the largest ongoing experiments in AI for at least six months in order to allow time for the development and implementation of safety guidelines. Italy’s regulators have gone further, temporarily banning a leading AI chatbot.
At the centre of it all are large language models and other types of generative AI that can create text and images in response to human prompts. Start-ups backed by the world’s most powerful tech firms have been accelerating the deployment of these generative AIs since 2022 – giving millions of people access to convincing but often inaccurate chatbots, while flooding the internet with AI-generated writing and imagery in ways that could reshape society.
AI research has long been accompanied by hype. But those working on pushing the boundaries of what is possible and those calling for restraint all seem to agree on one thing: generative AIs could have much broader societal impacts than the AIs that came before.
Boom and bust
The story of AI is one of repeating cycles involving surges of interest and funding followed by lulls after people’s great expectations fall short. In the 1950s, there was a huge amount of enthusiasm around creating machines that would display human-level intelligence (see “What actually is artificial intelligence?”). But that lofty goal didn’t materialise because computer hardware and software quickly ran into technical limitations. The result was so-called AI winters in the 1970s and in the late 1980s, when research funding and corporate interest evaporated.
The past decade has represented something of an AI summer both for researchers looking to improve AI learning capabilities and companies seeking to deploy AIs. Thanks to a combination of massive improvements in computer power and the availability of data, an approach that uses AIs loosely inspired by the brain (see “What is a neural network?”) has had a lot of success.
An AI-generated image created from the prompt “2 sad cats reading a newspaper in a Wes Anderson film”
Voice and face-recognition capabilities in ordinary smartphones use such neural networks, as do computationally intensive AIs that have beaten the world’s best players at the ancient board game Go and solved previously intractable scientific challenges, such as predicting the structure of nearly all proteins known to science.
Research developments in the field have typically unfolded over years, with new AI tools being applied to specialised tasks or rolled invisibly into existing commercial products and services, such as internet search engines.
But over the past few months, generative AIs, which also use neural networks, have become the focus of tech industry efforts to rush AIs out of corporate labs and into the hands of the public. The results have been messy, sometimes impressive and often unpredictable, as individuals and organisations experiment with these models.
“I truly did not expect the explosion of generative models that we are seeing now,” says Timnit Gebru, founder of the Distributed AI Research Institute in California. “I have never seen such a proliferation of products so fast.”
The technology behind ChatGPT is being added to Microsoft 365, the company’s ubiquitous work software
The spark that lit the explosion came from OpenAI, a San Francisco-based company, when it launched a public prototype of its AI-powered chatbot ChatGPT on 30 November 2022 and attracted 1 million users in just five days. Microsoft, a multibillion-dollar investor in OpenAI, followed up in February by making a chatbot powered by the same technology behind ChatGPT available through its Bing search engine – an obvious attempt to challenge Google’s long domination of the search engine market.
That spurred Google to respond in March by debuting its own AI chatbot, Bard. Google has also invested $300 million in Anthropic, an AI start-up founded by former OpenAI employees, which made its Claude chatbot available to a limited number of people and commercial partners, starting in March. Major Chinese tech firms, such as Baidu and Alibaba, have likewise joined the race to incorporate AI chatbots into their search engines and other services.
These generative AIs are already affecting fields such as education, with some schools having banned ChatGPT because it can generate entire essays that often appear indistinguishable from student writing. Software developers have shown that ChatGPT can find and fix bugs in programming code as well as write certain programs from scratch. Real estate agents have used ChatGPT to generate new sale listings and social media posts, and law firms have embraced AI chatbots to draft legal contracts. US government research labs are even testing how OpenAI’s technology could speedily sift through published studies to help guide new scientific experiments (see “Why is ChatGPT so good?”).
An estimated 300 million full-time jobs may face at least partial automation from generative AIs, according to a report by analysts at investment bank Goldman Sachs. But, as they write, this depends on whether “generative AI delivers on its promised capabilities” – a familiar caveat that has come up before in AI boom-and-bust cycles.
What is clear is that the very real risks of generative AIs are also manifesting at a dizzying pace. ChatGPT and other chatbots frequently produce factual errors, citing completely made-up events or articles, including, in one case, an invented sexual harassment scandal that falsely accused a real person. ChatGPT has also been involved in data privacy scandals: confidential company data has been leaked through the chatbot, and a bug briefly let users see other people’s chat histories and personal payment information.
Artists and photographers have raised additional concerns about AI-generated artwork threatening their professional livelihoods, all while some companies train generative AIs on the work of those artists and photographers without compensating them. AI-generated imagery can also lead to mass misinformation, as demonstrated by fake AI-created pictures of US president Donald Trump being arrested and Pope Francis wearing a stylish white puffer jacket, both of which went viral. Plenty of people were fooled, believing they were real.
Many of these potential hazards were anticipated by Gebru when she and her colleagues wrote about the risks of large language models in a seminal paper in 2020, back when she was co-leader of Google’s ethical AI team. Gebru described being forced out of Google after the company’s leadership asked her to retract the paper, although Google described her departure as a resignation and not a firing. “[The current situation] feels like yet another hype cycle, but the difference is that now there are actual products out there causing harm,” says Gebru.
OpenAI CEO Sam Altman has said AI could help us live our best lives
Making generative AIs
Generative AI technology builds on a decade’s worth of research that has made AIs significantly better at recognising images, classifying articles according to topic and converting spoken words to written ones, says Arvind Narayanan at Princeton University. By flipping that process around, generative AIs can create synthetic images when given a description, generate papers about a given topic or produce audio versions of written text. “Generative AI genuinely makes many new things possible,” says Narayanan, although he cautions that the technology can be hard to evaluate.
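To give a flavour of that prompt-in, text-out pattern, here is a minimal sketch in Python using Hugging Face’s open-source transformers library and the small, freely available GPT-2 model; the library, model and settings are illustrative stand-ins, not the technology behind ChatGPT or GPT-4.

# A minimal sketch of text generation from a prompt, using the open-source
# transformers library and the small GPT-2 model as illustrative stand-ins
# for the far larger proprietary systems discussed in this article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Two sad cats reading a newspaper in a Wes Anderson film:"
result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)

print(result[0]["generated_text"])

Commercial chatbots follow the same basic pattern of turning a prompt into newly generated text, just at vastly larger scale and with human feedback layered into their training, which is part of why their output can be so much harder to evaluate.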
Large language models are feats of engineering, using huge amounts of computing power in data centres operated by firms like Microsoft and Google. They need massive amounts of training data that companies often scrape from public information repositories on the internet, such as Wikipedia. The technology also relies upon large numbers of human workers to provide feedback to steer the AIs in the right direction during the training process.
But the powerful AIs released by large technology companies tend to be closed systems that restrict access for the public or outside developers. Closed systems can help control the potential risks and harms of letting anyone download and use the AIs, but they also concentrate power in the hands of the organisations that developed them, without allowing any input from the many people whose lives the AIs could affect.
“The most pressing concern in closedness trends is how few models will be available outside a handful of developer organisations,” says Irene Solaiman, policy director at Hugging Face, a company that develops tools for sharing AI code and data sets.
Such trends can be seen in how OpenAI has moved towards a proprietary and closed stance on its technology, despite starting as a non-profit organisation dedicated to open development of AI. When OpenAI upgraded ChatGPT’s underlying AI technology to GPT-4, the company cited “the competitive landscape and safety implications of large-scale models like GPT-4” as the reason for not disclosing how this model works.
This type of stance makes it hard for outsiders to assess the capabilities and limitations of generative AIs, potentially fuelling hype. “Technology bubbles create a lot of emotional energy – both excitement and fear – but they are bad information environments,” says Lee Vinsel, a historian of technology at Virginia Tech.
Many tech bubbles involve both hype and what Vinsel describes as “criti-hype” – criticism that amplifies technology hype by taking the most sensational claims of companies at face value and flipping them to talk about the hypothetical risks.
This can be seen in the response to ChatGPT. OpenAI’s mission statement says the firm is dedicated to spreading the benefits of artificial general intelligence – AIs that can outperform humans at every intellectual task. ChatGPT is very far from that goal but, on 22 March, AI researchers such as Yoshua Bengio and tech industry figures such as Elon Musk signed an open letter asking research labs to pause giant AI experiments, while referring to AIs as “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us”.
Experts interviewed by New Scientist warned that both hype and criti-hype can distract from the urgent task of managing actual risks from generative AIs.
For instance, GPT-4 can automate many tasks, create misinformation on a massive scale, lock in the dominance of a few tech companies and break democracies, says Daron Acemoglu, an economist at the Massachusetts Institute of Technology. “It can do those things without coming close to artificial general intelligence.”
Acemoglu says this moment is a “critical juncture”: government regulators must ensure that such technologies help workers and empower citizens, while also “reining in the tech barons who are controlling this technology”.
European Union law-makers are finalising an Artificial Intelligence Act that would create the world’s first broad standards for regulating this technology. The legislation aims to ban or regulate higher-risk AIs, with ongoing debate about including ChatGPT and similar generative AIs with general purpose uses under the “high risk” category. Meanwhile, regulators in Italy have temporarily banned ChatGPT over concerns that it could violate existing data privacy laws.
The AI Now Institute in New York and other AI ethics experts such as Gebru have proposed placing the burden of responsibility on big tech companies, forcing them to demonstrate that their AIs aren’t causing harm, instead of requiring regulators to identify and deal with any harm after the fact.
“Industry players have been some of the first to say we need regulation,” says Sarah Myers West, managing director at the AI Now Institute. “But I wish that the question was counterposed to them, like, ‘How are you sure that what you’re doing is legal in the first place?'”
Next generation
Much of what happens next in the generative AI boom depends on how the technologies involved are used and regulated. “I think the most important lesson from history is that we, as a society, have many more choices about how to develop and roll out technologies than what tech visionaries are telling us,” says Acemoglu.
Sam Altman, OpenAI’s CEO, has said that ChatGPT can’t replace traditional search engines right now. But in a Forbes interview, he suggested that an AI could someday change how people get information online in a way that is “totally different and way cooler”.
Altman has also contemplated much more extreme future scenarios involving powerful AIs that generally outperform humans, describing a “best case” of AIs being able to “improve all aspects of reality and let us all live our best lives”, while also warning that the “bad case” could mean “lights out for all of us”. But he described current AI development as still being far from artificial general intelligence.
Last month, Gebru and her colleagues published a statement warning that “it is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a ‘flourishing’ or ‘potentially catastrophic’ future”.
“The current race towards ever larger ‘AI experiments’ is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive,” they wrote. “The actions and choices of corporations must be shaped by regulation which protects the rights and interests of people.”
If the frothy bubble of business expectations around generative AI builds to unsustainable levels and eventually bursts, that could also dampen future AI development in general, says Sasha Luccioni, an AI research scientist at Hugging Face. However, the boom in generative AI needn’t inevitably lead to another AI winter. One reason is that, unlike in previous cycles, many organisations continue to pursue other avenues of artificial intelligence research instead of putting all their eggs in the generative AI basket.
Many AIs need lots of computing power from large data centres
Opening up AI
Organisations such as Hugging Face are advocating for a culture of openness in AI research and development that can help prevent both hype and actual societal impacts from spiralling out of control. Luccioni is working with the organisers of NeurIPS – one of the largest AI research gatherings – to establish a conference code of ethics where researchers must disclose their training data, allow access to their AI models and show their work instead of hiding it as proprietary technology.
AI researchers should clearly explain what models can and can’t do, draw a distinction between product development and more scientific research, and work closely with the communities most affected by AI to learn about the features and safeguards that are relevant to them, says Nima Boscarino, an ethics engineer at Hugging Face. Boscarino also highlights the need to adopt practices such as evaluating how an AI performs with people of different identities.
Work on generative AI carried out in this way could ensure a more stable and sustainable form of beneficial technological development well into the future.
“These are exciting times in the AI ethics space and I hope that the broader machine-learning sector learns to take the opposite approach of what OpenAI has been doing,” says Boscarino.
A timeline of AI
Some of the most important moments in the history of artificial intelligence so far (and a few predictions)
1950 Mathematician Alan Turing outlines a theoretical test called the imitation game to determine if machines can think. Since then the Turing test has become a benchmark for AI.
1956 One of the first machine intelligence conferences is held at Dartmouth College in New Hampshire, amid a sense that progress in the field could happen very rapidly. There, the term artificial intelligence is coined.
1956 A program called Logic Theorist is written, which has been described as the first AI program. It proved 38 of the first 52 theorems of the mathematical text Principia Mathematica.
1958 Psychologist Frank Rosenblatt builds the first artificial neural network. Called Perceptron, it could recognise simple visual patterns, but it quickly became clear its ability and utility were limited.
1966 Early language AI ELIZA is finished. It gave the illusion of a natural conversation with a human. In truth, its pattern-matching algorithm quickly became repetitive and it was unable to understand meaning.
1968 The star of the film 2001: A Space Odyssey is a sentient computer known as HAL 9000. The AI goes rogue, acting against the crew of the spacecraft it is meant to be helping.
1970 Marvin Minsky, the co-founder of MIT’s AI laboratory, tells Life magazine that a “machine with the general intelligence of an average human being” is just three to eight years away.
1973 The first “AI winter” of lower research funding and interest begins when a report from the British Science Research Council finds that AI research has failed to deliver results.
1979 The Stanford Cart, a simple mobile robot, crosses a chair-filled room by itself. The journey took five hours, but it was one of the first examples of an autonomous vehicle.
1987 The second AI winter begins, as realisation dawns that the popular approach of attempting to distil intelligence into logic and rules isn’t leading to useful AIs – and is unlikely to.
1997 IBM’s chess-playing supercomputer Deep Blue defeats grandmaster Garry Kasparov, winning two of the six games, drawing three and losing one. It relied on brute force computing power to find the best moves.
2011 Apple releases Siri, an AI-powered assistant that can be instructed in natural language to perform basic tasks. Underpinning it is a neural network.
2016 Research firm DeepMind’s AlphaGo AI system beats top Go player Lee Sedol in a 4-1 victory. Lee later cites AI prowess as his reason for retirement as a professional player in 2019.
2022 DeepMind’s AlphaFold AI predicts the structure of nearly all known proteins in just 18 months. Previously, it could take a scientist several years to uncover the structure of a single protein.
2022 OpenAI makes an AI chatbot known as ChatGPT publicly available, accelerating a scramble to commercialise generative AI and large language models.
The future…
mid-2023 The European Union passes its Artificial Intelligence Act. It regulates AI by risk for the almost 450 million citizens of the EU.
mid-2020s Generative AIs become widely adopted tools for everything from producing reports to providing customer service, sourcing information and writing computer programs.
2030s Big tech companies get to grips with the growing carbon footprints and water use of AI services, taming them to fulfil their pledges to become sustainable, zero-emission firms.
2030s The vast size of AIs outpaces hardware advances and availability. New types of AI algorithm or new types of computer are needed for the next big jump in AI performance.