Why this analyst says the AI bubble is 17 times bigger than the dot-com bubble

New York

At this point, even the concept of an “AI bubble” appears to be a bubble. (In fact, Deutsche Bank analysts said last month that the “AI bubble” bubble has already burst.)

Maybe some corners of the internet are tired of bubble talk. That doesn’t make the market any less bubbly.

Just this week, the Financial Times reported that 10 AI startups (without a dollar of profit among them) have added nearly $1 trillion in market value over the past 12 months. (That is, to use a technical term, bananas.)

Even as Wall Street analysts and the tech media increasingly question the hype, drawing uncomfortable comparisons to the late 1990s, the AI industry’s response has been to shrug and watch valuations climb ever higher. The AI faithful believe the technology will disrupt (hopefully in a good way!) virtually every aspect of modern life, from phone operating systems to pharmaceuticals to finance. And even if there is a bubble, proponents say, the dot-com bubble gave us companies like Amazon, and the internet became, well, the internet.

Plenty of skeptics are pushing back against the AI hype machine, though few professional market analysts have done so as stridently as Julien Garran, a partner at the British research firm MacroStrategy Partnership.

Earlier this month, Garran published a report arguing that we are in “the largest and most dangerous bubble the world has ever seen.” He concludes that the “misallocation of capital in the United States” makes the current frenzy 17 times larger than the dot-com bubble and four times larger than the 2008 housing bubble.

Needless to say, this is a bold statement about a phenomenon that is very difficult to predict.

I sat down (virtually) with Garran earlier this week to talk about bubbles and why he thinks AI fervor is, to quote his report, not just “a little bad” but rather “the antithesis of socioeconomic progress.”

The following interview has been edited for length and clarity.

Nightcap: Your latest report on AI caused a lot of buzz among finance and tech media junkies like me. Can you walk me through the main points?

Garran: At the center of the note is a golden rule I’ve developed, which is that if you use a large AI language model to build an application or a service, it will never be commercially viable.

One reason is the way they were built. The original large language models were built using vectors to estimate the statistical probability of words following one another in a sentence. And although they are very clever and take a lot of engineering to build, they are also very limited.

The second is the way LLMs were applied to coding. What they have learned (the code that exists, both in and out of the public domain) means they are effectively showing you snippets of code memorized by heart. That, again, is limiting if you want to start developing new applications.

And the third set of problems, in terms of how it’s built, has to do with scaling. At a certain point there’s a real problem with how much you have to spend to upgrade these models. I would say it’s clear that (the developers) have hit a scaling wall. Otherwise, they would be releasing better and better models every time they came to market with a new product. And since GPT-4 came out in March 2023, they haven’t really raised the bar significantly.

Nightcap: What about the argument that ChatGPT, while not perfect, is capable of doing low-level grunt work and could boost productivity?

Garran: There are certain shitty jobs: some parts of management, consulting, jobs where people don’t check whether you’re doing it right or can’t tell whether you’re doing it right. So you can argue that you can replace nonsense with nonsense, and yes, okay, I’m willing to accept that you probably can, but that doesn’t actually make it any more useful.

Nightcap: So how should regular people think about all the huge sums of money floating around the industry?

Garran: The AI ecosystem can’t really sustain itself. You have Nvidia making a ton of money… Everyone else (the data centers, the LLM developers, the software developers building on LLMs) is racking up huge losses.

So to keep the process going, you need continuous financing; it looks like a perpetual funding round. But despite all that, there is no obvious way to turn this into a profit. It’s hope over realistic expectations… When you run out of investors, everything falls apart.

Nightcap: Are investors really pulling back?

Garran: The willingness (of venture capital) to fund some of these startups, especially software developers, is starting to decline because valuations are so high. That basically leaves you with SoftBank, which has had to raise a lot of debt against its shareholdings to fund the first tranche of its OpenAI commitment, and which still has a second, larger tranche to fund.

You have foreign states like, say, Saudi Arabia. But not many countries have unlimited purchasing power. And that leaves Nvidia as sort of the last man standing.

Garran argues that large language models can be impressive predictors of text, but they are not the economic disruptors that AI evangelists claim them to be.

Nightcap: Is the bubble starting to deflate, or is it still growing?

Garran: With AI, I can’t say it’s started to deflate. We’re only a week past the all-time highs, so it would be a bit arrogant to say that was definitely the top. But it’s certainly getting close.

Nightcap: I have to ask, because I ask myself this all the time: What if you’re wrong? What if the hype is real?

Garran: Well, there are two ways I would be wrong.

One is that it takes longer to break than I thought. Which, to be honest, it already has. And what happens if I’m wrong in that sense is that they simply keep building things that aren’t fundamentally useful to economic society.

If it continues for another year or two because they manage to persuade someone to provide funding, more people will be doing things that won’t pay off. The future won’t be as bright. Future (gross domestic product) will be lower than it would be if they didn’t do these things and instead went about doing mundane things that people would really value.

And if I’m completely wrong… (and) someone comes up with “superintelligence,” well, that changes the world completely, and we would depend on whoever controlled those systems. We could be in some kind of utopia, or we could be in a “Brave New World.” Or we could be in a dystopia like “Player Piano,” Kurt Vonnegut’s novel, where no one has a job except the few people who live in their ivory towers.

To be honest, I think that is beyond our current ability as an industrial society to achieve. If that starts to change, I’ll change my mind very quickly. I just haven’t seen it.
