Written before Anthropic unveiled Mythos. The article still largely holds up, but keep that in mind.
Question 1: The definition of "AI"?
The past few years have been interesting as someone who was aware of machine learning before the release of OpenAI's ChatGPT, or even before OpenAI built GPT-1. "Artificial Intelligence" seems like an obvious rebrand of machine learning to make it more public-facing.
If you disregard generative AI and LLMs, we have used some form of machine learning/AI since the early 2000s for a wide variety of tasks: recommendation algorithms, spam filters, and even DeepMind's 2018 protein structure prediction model, AlphaFold 1.
The market is largely basing the current valuations of the frontier AI labs on LLMs and generative AI, not the other branches of machine learning. ChatGPT's release proved what machine learning could do, making it highly visible to investors and innovators and kickstarting this AI wave.
Question 2: Is AI actually useful?
The obvious answer is yes, right? I think LLMs (along with tool calling, RAG, reasoning, and all the other add-ons) have immense potential for a lot of use cases that were historically out of reach for traditional technology and computing.
Question 3: Is AI actually, actually useful? AKA. Does it actually make companies money?
Most CEOs have yet to see a measurable difference in productivity or profit from AI tools. That will definitely change, especially in select industries like coding and marketing. But the money being shoveled into these companies is not premised on being useful to a couple of industries; the frontier AI labs foresee a future where every commercial entity is paying to use their models.
There is an interesting tension here: this highly visible, "useful" technology hasn't really translated into real-world productivity increases (yet). Yes, cost per token will come down, companies will have access to far more compute (we're building a lot, and I mean a lot, of data centers), and people will eventually learn to use these tools better as they become more popular. That is the least the labs can hope for.
But a very realistic scenario is AI being "interesting and/or useful" yet falling just under the threshold where it makes financial sense for most industries. At that point, you have Cloud 2.0: incredible technology, almost like magic, but only useful for certain industries. It's still gonna make a lot of money. But it's not the paradigm-shifting futuristic vision that people in the industry are selling.
Question 4: Do they have the money?
AI really does feel like a technology tailor-made to extract the maximum amount of venture capital. It's a risky, high-potential technology with an incredible ceiling, and in a lot of ways, winner takes all. A VC can invest in 100 companies; only 5 have to win to recoup the investments 30 times over.
As long as the image of AI is "this technology will change society in ways we can't imagine (subtext: and we are going to make a lot of money with it)", I don't think the big AI players will have any issues with funding.
Question 5: What's the end goal? AGI?
The neural networks behind LLMs are internalizing the underlying ruleset of English, which has patterns of reasoning built into it. Is that enough for machines to actually "think" and be "intelligent" by themselves? That's an open question.
Anthropic's Dario Amodei said it's a scale problem: if they build enough compute, there is a chance of achieving AGI, which, funnily enough, conveniently aligns with the same buildout needed to meet demand and make these models financially viable.
A much more plausible (and comforting) explanation is not that AGI is imminent, but that the incentives strongly reward talking as if it is. The "society will be unrecognizable due to AGI" rhetoric drums up enough investment and public interest to reach the tipping point where cost per token and model usability become financially worthwhile for large-scale, industry-wide adoption.
That's their gambit. Whether it works or not is still an open question. But if it doesn't, the consequences unfortunately won't be confined to the companies making the bet.