Artificial Intelligence Isn’t Actually That Amazing
The reality can't match the hype.
by James Meadway
19 August 2024
The seven biggest US tech companies lost a combined $800bn in value at the start of August, as investors dumped their shares in an overnight panic about an imminent US recession combined with creeping awareness that the “AI revolution” had been dramatically overhyped. The slump brought to a close an extraordinary 18 months following the public release of OpenAI’s breakout ChatGPT large language model in November 2022, with its seemingly uncanny ability to produce human-like writing and conversation. A partial rebound in valuations has not shifted the brewing sense of unease that AI will not come close to matching the extraordinary hype it has produced. Both Goldman Sachs and ING have produced reports in the last few months warning of AI’s excessive costs and limited benefits.
The hype had reached cartoonish heights. Idle speculation about possible machine “consciousness” and the imminent prospect of an all-powerful supercomputer – so-called Artificial General Intelligence – was pushed by AI hype merchants like OpenAI’s Sam Altman, helping balloon the valuation of his own company and others in the tech space. Chip designer Nvidia, whose GPU semiconductors have been repurposed from gaming to AI applications, briefly became the world’s most valuable company in June as businesses scrambled to obtain its dedicated AI chips – essential for the process of “training” AI models on vast quantities of data.
Critical to understanding why AI is a bubble is recognising that it is only an extension of existing, very familiar technologies. For two decades, the core tech business model has hinged on taking user data in huge quantities and processing it, gaining valuable insights about consumer behaviour and selling these to advertisers. With the arrival of smartphones in the late 2000s, an entire technological infrastructure was rapidly assembled to enable the remorseless, minute-by-minute collection of user data. Today, there are 5.35bn people online – more than the 4.18bn who have access to home sanitation.
That mindboggling aggregation of human data provides one part of the raw material for AI. Combined with dedicated processors, of the kind Nvidia supply, the amount of data is now so vast that hitherto unfeasible new applications can be developed – most strikingly in the creation of computer software seemingly able to hold a conversation. The results can appear, to our human eyes, near-magical: the talking, intelligent computer has been a dream of science fiction for as long as computers have existed. Meanwhile, fantastical artistic creations are apparently available with just a few keystrokes. It is little surprise that AI has sparked off such extraordinary hype. But it remains, fundamentally, an extension of the data-extraction industry that we have all become entangled in over the last two decades.
What is happening with AI is that the operations of data extraction are now running at such a large scale that science-fiction results appear possible. But because it is an extractive industry, and because it has to run at such huge scale, there are hard limits to what current AI technologies can do. That, in turn, suggests the stock market valuations of tech companies are likely to be wildly out of line with the real economics – a classic bubble.
The first barrier is that the raw material of human data is running out. One calculation in the Wall Street Journal suggests that AI will run out of data – from the entire internet, as produced by all of humanity – as early as 2026. AI companies have taken to using AI-generated data to try to train their machines, but this produces what a recent academic paper called “model collapse”: AI stops working when it has to feed on itself. And the more the internet becomes flooded with “AI slop”, the less useful AI will become. This “inhuman centipede”, as tech writer Cory Doctorow calls it, will not survive.
At the other end of the data-extraction machine is the hardware needed to run its software. The more data that is fed into the computers running that software, the more resource-intensive they become. Data centres are mushrooming across the globe to keep up with demand: Microsoft is currently opening a new data centre somewhere on the planet every three days. But these data centres, crammed with the servers running the processing software, demand vast resources. A typical Google data centre uses as much electricity as 80,000 households, for example, while a new Amazon data centre in Pennsylvania has a nuclear power plant dedicated to keeping it supplied with electricity. Keeping those humming servers cool requires huge volumes of water: a new hyperscale data centre will typically consume the same amount of water daily as 40,000 people. It’s no wonder that protests against these monsters are starting to multiply, from Chile to Ireland. In Britain, Labour’s own plans for their rapid expansion are likely to run hard into England’s already over-stretched water supply.
There are hard limits to what this generation of AI is likely to deliver, and that means the bubble will burst – the reality cannot match the hype. Before it collapses, some genuinely useful applications, in drug discovery for instance, will be drowned out by the generation of profit-chasing “slop” – and, more ominously, by the rapid extension of AI technologies to military purposes, like Israel’s notorious “Lavender” system, used to generate thousands of targets for the IDF in Gaza.
As climate change worsens and resource constraints become apparent across the globe, harder questions need to be asked about the extraordinary commitment we are making to technologies increasingly geared towards profit and war.
James Meadway is an economist.