Revenue, revenue, revenue — this is all I keep hearing about. But valuation stems from (future, discounted) earnings, not revenue. If there were just OpenAI, then we could argue that losses will turn into profits due to falling training/inference costs and new revenue streams like ads or hardware.
But multiple deep-pocketed companies are pursuing products that have an almost commodity-like similarity. Cursor, for example, can switch from Claude to GPT to Gemini at will. This means pricing power is low and hence profits will be low. It’s perfectly possible that AI will be transformational and revenue will be in the hundreds of billions, yet valuations collapse because profits are competed away.
Exactly. Perhaps this is why AI leaders are talking about AI being in a bubble. It's a gambit to deter investment in their competitors and collect more monopoly profits for themselves.
Honestly it seems like a dubious strategy as long as China remains committed to open-source AI. For any given application, there will be a narrow window where the frontier model builder can collect some monopoly profits, but within perhaps a year, the open-source stuff will reach the point where it's "good enough" (due to fine-tuning on the frontier models, if nothing else!) and commoditization occurs. Investors are throwing in hundreds of billions in pursuit of a competitive advantage which could be *very* temporary.
Particularly in an "AGI that automates the entire economy" scenario, I expect firms buying AI services to be rather price-sensitive. For example, they could find ways to route 99% of AI tasks to a cheap trailing-edge model, and only pay extra for an expensive frontier model on 1% of exceptional/difficult tasks. It could very well be cheaper and more reliable to use three open-source models (two models checking the work of the third model) than pay for an expensive frontier model.
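A minimal sketch of what that kind of router could look like; the cost numbers, the confidence heuristic, and the 99/1 split are all illustrative assumptions, not real prices:

```python
import random

# All numbers below are illustrative assumptions, not real prices.
CHEAP_COST = 0.002      # hypothetical per-task cost, trailing-edge open model
FRONTIER_COST = 0.200   # hypothetical per-task cost, frontier model

def solve_cheap(task):
    """Stand-in for a cheap model; pretend it is confident on ~99% of tasks."""
    confident = random.random() < 0.99
    return f"answer({task})", (0.95 if confident else 0.40)

def solve_frontier(task):
    """Stand-in for the expensive frontier model."""
    return f"answer({task})"

def route(task, threshold=0.9):
    """Try the cheap model first; escalate to the frontier model only on low confidence."""
    answer, confidence = solve_cheap(task)
    if confidence >= threshold:
        return answer, CHEAP_COST
    return solve_frontier(task), CHEAP_COST + FRONTIER_COST

# Blended cost is about 0.99*0.002 + 0.01*0.202, i.e. ~$0.004/task,
# roughly 50x below paying frontier prices on everything.
total = sum(route(f"task-{i}")[1] for i in range(10_000))
print(f"blended cost per task: ${total / 10_000:.4f}")
```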
"Your margin is my opportunity." The more profits AI firms make, the greater the incentive there is on the part of their customers to keep that money in their own pocket.
For AI firms which have actually been somewhat responsible on technical AI alignment, such as DeepMind and Anthropic, the best strategy might be to tell investors they are a good bet specifically in a world where regulators crack down hard on misbehaving AIs. That might be the only world where any AI firm can collect sustained monopoly profits.
That most difficult/exceptional 1% of tasks that require the best AI models could be worth a huge amount of money. The average human white collar worker makes under $100k/year, but the top CEOs and AI researchers can make over 300X that much.
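A quick back-of-envelope on how much of the total value that 1% could carry (all numbers illustrative):

```python
# Illustrative: 99 routine tasks priced like average white-collar output,
# one exceptional task priced like top-CEO output (300x the $100k baseline).
routine = 99 * 100_000           # $9.9M
exceptional = 1 * 300 * 100_000  # $30M
share = exceptional / (routine + exceptional)
print(f"the hardest 1% of tasks carries {share:.0%} of total value")  # ~75%
```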
OK, so OpenAI launches its new "CEO AI," which it claims outperforms a human CEO. Given Sam Altman's history of dishonesty, and the difficulty of measuring CEO performance, how many boards will be willing to fire their current CEO and replace them with the AI? That CEO's high salary is premised on the scarcity of top human managerial talent. And what happens in 6 months when competing AI firms come out with their own "CEO AI" products?
This was exactly the narrative during the dotcom bubble. I am old enough to remember it first hand. Analysts were arguing that we should value all those doomed companies based on revenue, because they weren’t yet profitable.
With any commoditized product, there will be a race to the bottom on cost through efficiency. That efficiency will manifest as less compute, fewer chips, less power. All this infrastructure will likely be used, but with ruthless expectations of lower costs through competition.
We’re so used to these unassailable network effect moats that we’ve forgotten what capitalism does.
In my ham-fisted way this was going to be my question. LLMs may work and be successful but every company is currently priced for success. Can they all justify their value even under a reasonably optimistic scenario?
Companies are priced for a middle scenario where AI demand continues to grow rapidly for a few years and then slows down. There are plenty of plausible scenarios where current AI companies could be either massively overvalued or massively undervalued.
Great article overall, but I can't get past the disparity between how scrupulous your work generally is and the story about full automation driving your relative optimism, especially insofar as it's justified by this (commonly misinterpreted) METR figure.
I imagine you've encountered the main criticisms, but just to throw in my two cents: not only are the tasks at issue highly parochial relative to the economy writ large, but 50% success is nowhere near the reliability you'd need to justify any level of automation. And I don't find the growth rate very convincing either, given how categorically different the challenge of 99(.999...)% reliability is from 50%.
There are of course a ton of further, independent reasons widespread automation is unlikely (such as political ones), but to imply it's plausible on the basis of this particular figure screams 'epistemic double standard' to me.
A 30% chance of a >20% drawdown within three years seems too low (depending on definitions). Nvidia has drawn down over 20% at least four times in the last five years, I think: the 2022 interest-rate crash, twice in 2024, and in 2025 with DeepSeek. It's just a very volatile stock.
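For anyone who wants to recount, here is roughly how you'd tally distinct >20% drawdown episodes from a list of daily closes (pure-Python sketch; getting the actual price history is left to whatever data source you prefer):

```python
def count_drawdowns(closes, threshold=0.20):
    """Count distinct episodes where price falls more than `threshold` below its
    running peak; a new episode can only begin after the series makes a new high."""
    peak = closes[0]
    in_drawdown = False
    episodes = 0
    for price in closes:
        if price > peak:
            peak = price
            in_drawdown = False  # recovered to a fresh high; next big drop counts anew
        elif not in_drawdown and price <= peak * (1 - threshold):
            episodes += 1
            in_drawdown = True
    return episodes

# Made-up prices: 100 -> 75 is one episode, 105 -> 84 is a second.
print(count_drawdowns([100, 90, 75, 80, 105, 95, 84, 110]))  # -> 2
```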
Your car loan analogy for the circular financing doesn't hold up. When a car company finances your purchase, an actual car changes hands. You get use value, the loan is backed by a repossessable asset, and the company's revenue comes from manufacturing cars, not from lending.
What's happening with AI infrastructure is different. NVIDIA "invests" in OpenAI, OpenAI "commits" to buy NVIDIA chips, Oracle "commits" to buy NVIDIA hardware to serve OpenAI, and everyone books revenue on these commitments. The money (or more accurately, the promises) flows in a circle. It's not just "I pay you $100 to dig a hole, you pay me $100 to fill it." It's worse: I promise to pay you to dig, you promise to pay me to fill, and we both tell investors we've made $100 in revenue based on those promises.
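To make the round-trip arithmetic explicit, here is the dig/fill example as a toy ledger (illustrative only, not a claim about any company's actual accounting):

```python
# Toy ledger for the dig/fill swap: two firms exchange $100 promises.
bookings = {
    "A (digger)": 100,  # revenue A books against B's promise to pay
    "B (filler)": 100,  # revenue B books against A's promise to pay
}
gross_revenue_reported = sum(bookings.values())  # 200
net_outside_cash = 0  # the promises cancel out; no new money entered the loop
print(gross_revenue_reported, net_outside_cash)
```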
This works as long as the promises eventually convert to actual economic activity. But if OpenAI is losing billions per month with a business model that would require subscription fees no sane user would pay, those promises aren't getting paid back. That's not timing risk. That's structural insolvency dressed up with IOUs.
And you can't finance your way out of physical constraints. NVIDIA can't deliver chips that require TSMC packaging capacity that doesn't exist. Every advanced wafer in the world has to fly back to Taiwan for packaging because that ecosystem exists nowhere else at scale. You can promise your way around financial constraints. You can't promise your way around semiconductor physics or power grid capacity or memory supply chains.
This only works if people don't check whether the promises can actually be fulfilled.
The claim that the infrastructure will be useful even if there is a pop ignores that chips become obsolete quite quickly and data centres require a lot of maintenance. I would not bet on that.
On your point about unit economics: I’m not sure this is correct, especially for the AI SaaS startups that have cropped up. Inference costs compress their margins below traditional SaaS, for example.
Wildeford writes: "OpenAI has a similar level of user base with over one billion free users, and it seems plausible these users could be monetized in some way."
And then cites something called Facebookization. Here is where I get confused. I have been a free user of Facebook for many years and have never spent a dime on it. So where does the revenue come from? My understanding is advertising. For a century, advertising has been about 2% of the economy. It is, in effect, a tax on the existing output of the old economy, which isn't growing much, being fully built out.
Tech firms like Facebook and Google have grown by taking advertising dollars away from newspapers and other old media. Nothing new has been created, just money moving from one pot to another.
Where is current AI income coming from? Consumers buying access to ChatGPT or other AIs? Or sales to companies? If it is companies, what are they using it for? Increasing efficiency. This means they can produce the same output with a smaller wage bill. A smaller wage bill means less money to consumers, and so less demand for what the companies produce. How is this a recipe for spectacular growth?
To generate growth one needs to create new categories of demand, the way PC tech created demand for personal computing power for things like game playing, education, and personal management, and, with the internet, access to total knowledge, new avenues for community formation, and new economic activity. These are new activities, new things people want to have or do and are willing to spend money on.
How does AI do this on the vast scale the investments suggest is expected?
https://mikealexander.substack.com/p/why-progress-seems-stalled
I have worked in AdTech in the past and still follow the industry somewhat. You are correct that ads are going to be what OpenAI turns to; rumors of hiring in this area are rife.
The people losing out right now are websites: traffic is down 25-40%, based on drops in search referral traffic. That means a lot of people aren't clicking through anymore; maybe they are asking the AI for the best car to buy or how to make sourdough instead of visiting a review or recipe website.
But how do ads break into the AI space at scale?
The most obvious way would be for OpenAI to build a paid-for solution that allows advertisers to purchase parts of responses, with placements bid on through the existing advertising ecosystem.
Example:
The user asks GPT, "Give me some wedding gift ideas for my wife," and mentions a few things about her.
The AI sends that information out to the advertising network, and let's say Cartier has a rule that says, "If we know of a man earning more than $150k a year asking about gifts for a loved one, we want to advertise to him."
The response still comes from GPT, BUT it now has a line saying, "A great gift you might want to consider would be a Cartier watch; consider these models. This is paid content." That last part is what you hope it says.
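Sketched as code, the flow might look like this; every specific here (the bid rule, the CPM, the disclosure string) is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    sponsored_line: str
    cpm: float  # hypothetical price

def match_ads(profile, query):
    """Stand-in for an ad-exchange auction keyed on query intent plus user profile.
    The one rule below is the hypothetical Cartier rule from the example."""
    bids = []
    if profile.get("income", 0) > 150_000 and "gift" in query.lower():
        bids.append(Bid("Cartier",
                        "A great gift to consider would be a Cartier watch.",
                        cpm=90.0))
    return sorted(bids, key=lambda b: b.cpm, reverse=True)

def respond(query, profile, organic_answer):
    bids = match_ads(profile, query)
    if not bids:
        return organic_answer
    # The disclosure label is the part you hope survives.
    return f"{organic_answer}\n\n{bids[0].sponsored_line} [Paid content]"

print(respond("Give me some wedding gift ideas for my wife",
              {"income": 180_000},
              "Some ideas: a photo book, a weekend away, jewelry..."))
```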
That's how I see it taking shape. And for people who were afraid of cookies, well, I'm just here laughing, because these AI bots are going to know SO MUCH MORE about you than dumb cookies ever did, and that PII is exactly the data advertisers find so valuable. That is their play; that is the ad-supported hell they will head towards.
Question is: will people still use the agents as much? Probably they will, just now with advertising baked into every single response. If the trend goes the way the rest of the open web and platforms went, it will become less and less usable over time. And yes, ad spend slowly migrates.
As an aside: this quarter's Google earnings (Q3) showed that YouTube makes more money from Premium subscriptions than from free ad-supported content... (of course, one hand feeds the other).
I'm an AI skeptic mostly because when I try to get it to help me at work, it is still some way off from actually being useful, competent, and repeatable. These systems are designed to be random and non-repeatable. That isn't usually how things work in the real world...
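For what it's worth, the randomness is a sampling choice, not an accident: models pick the next token from a probability distribution, with a temperature knob controlling the spread. A toy illustration (made-up logits, not a real model):

```python
import math, random

def sample(logits, temperature):
    """Sample a token from softmax(logits / temperature); T=0 means greedy decoding."""
    if temperature == 0:  # deterministic: always the argmax
        return max(logits, key=logits.get)
    scaled = {tok: val / temperature for tok, val in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = [math.exp(v) / z for v in scaled.values()]
    return random.choices(list(scaled), weights=probs)[0]

logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}
print([sample(logits, 0.0) for _ in range(3)])  # always 'yes'
print([sample(logits, 1.0) for _ in range(3)])  # varies between runs
```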
But this again is just drafting off the old economy. And since the pool of advertising dollars is limited, where will the AI ad dollars come from? Just as Facebook and Google drained the resources of newspapers and magazines these AI firms will drain the resources from FB and Google and other companies like them.
But wasn’t part of the AI value proposition the replacement of people doing amorphous jobs on computers? Once those people lose their positions they are no longer targets for advertising, because they have no spare cash to spend. What I do not see is where revenue is going to come from in a world transformed by AI.
We agree. I was just trying to offer some examples of what I think will happen.
A fundamental problem I keep coming back to is that this earnings model relies on companies replacing workers with AI. That’s the only path to recouping this level of investment. My 25 bucks a month subscription is nothing.
But I cannot understand who will be buying the widgets that are now more brilliantly marketed, more effortlessly designed, more efficiently distributed—to whom? If we are cut off from our career paths, if we cannot participate in the increase in wealth, what will we need, what will we demand, that they are providing? This seems like a huge bet only on supply side, with a gaping hole on demand side. That’s what gasses up bubbles.
I love the tech and I am bullish in the long run but the idea of no major correction with these valuations seems like a bet on too many things going 100% right. The something that goes wrong could very well be outside of tech and not even AI related and still cause problems.
Regardless of whether it’s a bubble, a 20% drawdown at some point is not a 30% chance; it’s a near certainty. I’m the last person to try to forecast the market, but with a technology this uncertain and valuations and expectations this stretched, a 20% drawdown at some point is almost inevitable.
It’s interesting that the valuations in the dot-com bubble just turned out to be too early.
——-
http://therosen.substack.com
It’s my musings on AI, Economics, Data Science, Leadership, and whatever else happens to be top of mind. All the writing is original and by me.
As with many new technologies, a lot of the investments are probably not smart. An AI bubble might pop in the sense that the market corrects the value of the less valuable initiatives. It might not have large consequences for the big players with solid strategies.
So, in short, there are two clocks running simultaneously: one is a path to profitability (or at least some form of utility) and the other is a countdown to “popping”.
Where are your probabilities coming from?
It's a bit strange to see you commenting on whether or not AI is a bubble and whether or not AI capabilities will continue increasing without any reference to the underlying technology itself.
Imagine a gold rush. The amount of gold coming from the mountain has increased every year. Companies are investing more and more into mining every year. Railways are being built to the mountain. Is the gold rush a bubble? You could use all this information to form a guess, but it'll all be worthless guesswork compared to some core samples of the rock in the mountain.
I like to think we software developers are like the miners in the mountain. We can see the trend already starting to plateau. We have an intuitive grasp of the situation, and we know that statements made by people outside the mountain are crazy, like the claim that AI can already do "a non-trivial amount of work that actual software engineers actually do".
In any case, the golden question is not what NVIDIA is doing, nor whether some linear extrapolation of current progress predicts more progress, but whether current or near-future LLM technology will continue to improve dramatically. Many say no, and I agree. For one thing, it lacks online learning, which many agree is necessary, though that could be added. More fundamentally, I think gradient descent on training-data loss is a flawed approach to creating intelligence, one that becomes too expensive for smaller and smaller gains.
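That diminishing-returns intuition lines up with the empirical scaling-law picture, where loss falls only as a small power of compute; a toy illustration (both constants illustrative, the exponent in the ballpark of published fits):

```python
# Power-law scaling: loss(C) = a * C**(-b). Each 10x of compute multiplies loss
# by 10**(-b), so absolute gains shrink while the bill grows 10x each step.
a, b = 10.0, 0.05
prev = None
for compute in [1e21, 1e22, 1e23, 1e24]:
    loss = a * compute ** (-b)
    gain = "" if prev is None else f"  (improvement: {prev - loss:.3f})"
    print(f"compute {compute:.0e}: loss {loss:.3f}{gain}")
    prev = loss
```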
This discussion of the underlying technology itself is the most important part. Everything else is just guessing.
>We can see the trend already starting to plateau
So have you shorted the METR curve?
As they say, markets can remain irrational longer than you can remain solvent. But I'm shorting it with my reputation by publicly doubting sustained progress, if that counts.
Interesting exercise in self-deception. Good luck with that.