Discussion about this post

Shine:

Revenue, revenue, revenue — this is all I keep hearing about. But valuation stems from (future, discounted) earnings, not revenue. If there were just OpenAI, then we could argue that losses will turn into profits due to falling training/inference costs and new revenue streams like ads or hardware.

But multiple deep-pocketed companies are pursuing products that have an almost commodity-like similarity. Cursor, for example, can switch from Claude to GPT to Gemini at will. This means pricing power is low and hence profits will be low. It’s perfectly possible that AI will be transformational and revenue will be in the hundreds of billions, yet valuations collapse because profits are competed away.
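The commenter's point can be made concrete with a toy discounted-cash-flow calculation. The figures below (revenue, margins, discount rate) are entirely hypothetical; the sketch just shows that two firms with identical revenue can have wildly different present values once competition compresses margins:

```python
# Toy DCF: valuation follows discounted future earnings, not revenue.
# All numbers are illustrative assumptions, not real company data.

def present_value(earnings, discount_rate):
    """Discount a stream of annual earnings back to today."""
    return sum(e / (1 + discount_rate) ** t
               for t, e in enumerate(earnings, start=1))

revenue = [100e9] * 10  # hypothetical $100B/year of revenue for 10 years

high_margin = [0.20 * r for r in revenue]    # 20% net margin (pricing power)
competed_away = [0.02 * r for r in revenue]  # 2% margin (commodity pricing)

pv_high = present_value(high_margin, 0.10)
pv_low = present_value(competed_away, 0.10)
print(f"PV at 20% margin: ${pv_high / 1e9:.0f}B")
print(f"PV at  2% margin: ${pv_low / 1e9:.0f}B")
```

Same hundreds of billions in revenue in both cases; the valuation differs by an order of magnitude purely because profits were competed away.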

Nathan Witkin:

Great article overall, but I can't get past the disparity between how scrupulous your work generally is and the story about full automation driving your relative optimism, esp. insofar as it's justified by this (commonly misinterpreted) METR figure.

I imagine you've encountered the main criticisms, but just to throw in my two cents: not only are the tasks at issue highly parochial relative to the economy writ large, but 50% success is nowhere near the reliability you'd need to justify any level of automation. And I don't find the growth rate very convincing either, given how categorically different the challenge of 99(.999...)% reliability is relative to 50%.
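The gap between 50% and 99.999% reliability can be illustrated with a simple compounding argument. Assuming (hypothetically) that a workflow chains n independent steps, each succeeding with probability p, end-to-end success is p**n; the step counts and rates below are illustrative, not figures from the METR study:

```python
# Sketch: why 50% per-task success collapses over multi-step workflows.
# Assumes independent steps; all numbers are illustrative.

def chain_success(p, n):
    """Probability that all n independent steps succeed at per-step rate p."""
    return p ** n

for p in (0.50, 0.99, 0.999):
    print(f"per-step p = {p}: 10-step workflow succeeds "
          f"{chain_success(p, 10):.4%} of the time")
```

At 50% per step, a 10-step workflow succeeds less than 0.1% of the time, whereas 99.9% per step still yields ~99% end to end, which is why the jump from 50% to "many nines" is a categorically different problem.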

There are of course a ton of further, independent reasons widespread automation is unlikely (such as political ones), but to imply it's plausible on the basis of this particular figure screams 'epistemic double standard' to me.
