When will the Artificial Intelligence bubble burst?

Paris Marx / After the recent volatility in the stock markets, talk of an artificial intelligence (AI) bubble has begun. To be clear, the concerns are not unfounded. Ever since the launch of ChatGPT in late 2022, skeptics have called AI another case of technological fantasy and financial speculation, because the course of these cycles is well known.

Recently, voices from the financial sector, such as Goldman Sachs and Sequoia Capital, have joined the chorus. Tech stock prices are clearly overvalued, partly because of the hype that has built up around generative AI. The question is not whether the markets will correct, but when it will happen and how deep the crisis will be.

I don't claim to have a crystal ball or a definitive answer, but I think it's time to consider what will happen after the markets change direction, instead of focusing only on the present. Tech bubbles follow cycles, and there is a period after the crash when attention shifts elsewhere, before the troubled technology gets back on its feet. The cryptocurrency bubble burst in 2022, but the industry is not dead: it is one of the main financiers of the current election campaign in the United States, seeking candidates willing to pass softer laws for its activities.

Precarious gig work is no longer in the spotlight either, even as Uber continues its campaign to exclude workers from the protections of labor law, with no small consequences. Meanwhile, smart glasses are back, and social networks continue to cause social harm despite the debate around them. We can't let the same thing happen with generative AI.

Chatbots (software capable of imitating human conversation) and image generators are probably being used in more concrete ways than cryptocurrencies ever were, but that may also mean that, when the hype is over, they will be used to people's detriment in other ways. We absolutely need to figure out how to prevent this.

In the early days of the chatbot craze, OpenAI CEO Sam Altman made big promises about what large language models (LLMs) meant for the future of humanity. According to Altman, with this type of artificial intelligence, doctors and teachers would become chatbots, and eventually everyone would have their own personal assistant, capable of helping them with their every need. If his predictions had come true, it wouldn't be hard to see the consequences for jobs. The problem is that those statements were pure fantasy.

Meanwhile, it has become clear that large language models have limitations, though many companies don't want to acknowledge them, because doing so could dampen the enthusiasm that inflates their market value. The problem of so-called "hallucinations" (information completely invented by AI software) has no known solution, and the idea that these technologies will improve endlessly by ingesting ever-larger amounts of data has been called into question by the minimal progress of the newest models.

However, when the bubble bursts, chatbots and image generators will not end up in the dustbin of history. Rather, we will see a reassessment of the areas where their use makes more sense, and if attention quickly shifts elsewhere, this may happen without much resistance. Visual artists and video game workers could see their working conditions deteriorate further due to artificial intelligence, especially if artists lose the lawsuits they have brought against companies for using their work without permission to train AI software.

Things could get worse still. Microsoft is already partnering with data analytics company Palantir to provide AI to the US military and intelligence services, while governments around the world are asking how to use generative AI to cut the cost of public services, often without considering the potential harm that can result from tools capable of producing false information.

Dan McQuillan, author of Resisting AI (Bristol University Press, 2022), points to this problem, arguing that it is the fundamental reason we should reject these technologies. There are already many examples of algorithms being used to harm people who depend on public benefits, immigrants, and other vulnerable groups. We risk seeing these situations repeated or even multiplied. When the AI bubble bursts, some investors will lose money, some companies will close, and some workers will lose their jobs. There will be long debates on the front pages of newspapers and websites, and on social networks. But the lasting damage described above will be the hardest to spot right away, because it will fade from view as attention shifts to whatever Silicon Valley promotes as the basis for its next investment cycle.

All the benefits that Altman and his colleagues promised will disappear, just as the promises of the gig economy, the metaverse, cryptocurrencies and many other things have disappeared. However, the harmful uses of the technology will remain unless steps are taken to limit them. And the generative AI tools themselves are only one part of the AI bubble.

There's another element to keep in mind: the giant data centers filled with thousands of servers that Amazon, Google, Microsoft and others are building around the world to power the computing future they're working toward. This infrastructure is sparking protests around the world over its enormous water and energy consumption, but the tech giants are pushing ahead, investing hundreds of billions of dollars to build it.

The future of data centers could take two different paths. After the first phase of the pandemic, Amazon found that it had overestimated future demand for online commerce and had started building far more warehouses than it needed. In 2022, in a bid to cut costs, it canceled many new distribution center projects. Something similar could happen with data centers; for example, some are already questioning the construction of three such facilities in Auckland, New Zealand.

On the other hand, we can think back to the dot-com bubble of the early 2000s, which spurred the construction of fiber optic infrastructure. The United States ended up with far more fiber than it needed at the time, though that capacity later supported the expansion of the digital economy.

More recently, after the bursting of the cryptocurrency bubble, some of the computing power used for mining operations, that is, for creating new coins and verifying transactions, was diverted to training artificial intelligence models. Ultimately, the major cloud providers will likely rein in their expansion plans when the generative AI bubble deflates, but they won't cancel them entirely.

Microsoft, Amazon and Google grow richer as our lives and the services we use require ever more computing power, regardless of the environmental and social consequences. They'll want to ensure that whatever comes after generative AI needs ever-larger and more powerful servers, to keep us dependent on them.

Make no mistake: there is a bubble around artificial intelligence, and the moment it bursts may be closer than expected. It is important to understand the distortions it is driving in the tech sector and in society at large, but we must also be ready for what comes after the crash.

Generative AI is less useful than the industry has led us to believe, and it requires enormous computing power to perform even the simplest tasks. When the buzz dies down, there will surely be a further push to make some of its applications more efficient, and companies will try to keep them running, regardless of their social and environmental consequences. We will have to keep opposing these efforts, even after attention in the tech sector shifts to the next source of collective enthusiasm. / PARIS MARX is a Canadian journalist and writer specializing in technology and urban planning.

This article was first published in his newsletter "Disconnect", published by Bota.al and reposted by Tiranapost.al.