Podcast: AGI by 2030? Two forecasters discuss
"I'm coming around to the maybe-lines-just-keep-going-up-indefinitely thing"
Ever wanted to know my personal opinion on the question of “AGI by 2030”? Think it would be even better if there was a second top forecaster in the conversation?
Well you’re in luck because I recorded a podcast with Robert de Neufville at Telling the Future! You can listen to it here:
…you can also find it on Substack or Apple Podcasts.
~
Here’s my summary of what we talked about… sorry for writing about myself in the third person, but it was what made the most sense.
The big question for the podcast: “What are the chances we develop AGI by 2030?”
Wildeford says it’s hard to forecast AGI without clarifying what “AGI” means. People throw around the term, but definitions vary:
Some require an AI that can replace all current remote worker jobs cheaply.
Others say it must win a Nobel Prize in every field, which verges on superintelligence.
De Neufville acknowledges we want to talk about “AGI” because that is what people talk about. But it is problematic because we don’t really know what it means.
De Neufville suggests AGI might be akin to a “smart human being across many domains.”
Wildeford proposes using the standard of automating roughly “99% of current remote worker jobs” at a cost advantage relative to equivalently skilled human work.
Will AGI take all of our jobs?
It’s unclear how labor markets would adapt if AI can do nearly everything. De Neufville notes that historically, new technology has often spared human workers by creating new jobs, but if an AI does nearly all tasks better and cheaper, there won’t really be any jobs for humans to shift into. The disruption could be unprecedented.
“Moravec’s paradox”
De Neufville points out that in certain narrow domains (e.g., arithmetic, playing Go), AI already surpasses humans. However, a six-year-old can still outdo most computers in broad real-world understanding.
Wildeford references Moravec’s paradox: the idea that tasks hard for humans (like complex arithmetic) can be trivial for AI, while tasks easy for humans (like robust image recognition or playing certain video games) long remained difficult for AI. We evolved specialized instincts for day-to-day survival, but not for large-scale math.
Wildeford thinks incremental improvements and “scaling magic” are so far bridging many gaps, though there remain “stunning” failures at times.
The data wall
Current large language models train on massive amounts of text (essentially the entire internet). We can get more compute but we can’t get a second internet. Will we “run out” of new training data? Will this be an issue for AI?
One approach — synthetic data. Have AI generate further data for itself, especially in domains where correctness is checkable (e.g., code generation, math proofs); see the sketch below. Human labelers could also filter out low-quality AI-generated data. Together this could effectively expand the training set.
Another approach — multi-epoch training. Re-train on the same dataset multiple times. Currently typical training might use two epochs. Possibly up to 10 epochs is still useful, though with diminishing returns.
Thirdly, multimodal data. AI can train on images, video, audio, etc., which broadens the data pool. Still finite, but could “buy more time.”
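To make the synthetic-data idea concrete, here is a minimal Python sketch of filtering in a checkable domain like code generation: generate many candidate solutions, keep only the ones that actually pass tests, and add those to the training set. The `model_generate` function is a hypothetical stand-in for a real model call, not any particular API.

```python
# Sketch of synthetic-data filtering in a checkable domain (code generation).
# `model_generate` is a hypothetical stand-in for a real model API call.
from typing import List, Tuple

def model_generate(prompt: str, n: int) -> List[str]:
    """Hypothetical: ask a model for n candidate solutions to a coding prompt."""
    raise NotImplementedError("replace with a real model call")

def passes_tests(candidate_src: str, tests: List[Tuple[tuple, object]], fn_name: str) -> bool:
    """Verify correctness by executing the candidate against known input/output pairs."""
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)  # define the candidate function
        fn = namespace[fn_name]
        return all(fn(*args) == expected for args, expected in tests)
    except Exception:
        return False

def build_synthetic_set(prompt: str, tests: List[Tuple[tuple, object]], fn_name: str, n: int = 32) -> List[str]:
    """Generate n candidates and keep only the ones that verifiably work."""
    return [c for c in model_generate(prompt, n) if passes_tests(c, tests, fn_name)]

# Usage sketch: only verified solutions get added to the training corpus.
# verified = build_synthetic_set(
#     "Write a Python function add(a, b) that returns a + b.",
#     tests=[((1, 2), 3), ((0, 0), 0)],
#     fn_name="add",
# )
```

Math proofs work the same way, with a proof checker playing the role of the test harness.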
Multiple ways to scale AI
Any single way of scaling AI may hit a wall. But there are other ways to scale, and new ones could emerge.
Historically, people focused on scaling compute/training, but newer breakthroughs include:
Chain-of-thought reasoning (letting models “work out” problems step by step; a sketch combining this with tool scaffolding follows this list).
Post-training with specialized data.
Scaffolding with tools (internet access, ability to read papers, run code, etc.).
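A rough sketch of how the first and third ideas fit together: a loop that prompts a model to reason step by step and lets it call tools before answering. Here `generate` is a hypothetical stand-in for whatever model API is in use, and the TOOL/ANSWER convention is made up for illustration, not a standard protocol.

```python
# Sketch of tool scaffolding plus step-by-step reasoning.
# `generate` is a hypothetical model call; the TOOL:/ANSWER: protocol is illustrative only.

def generate(transcript: str) -> str:
    raise NotImplementedError("replace with a real model call")

TOOLS = {
    # Toy calculator tool; a real scaffold might expose search, code execution, paper lookup, etc.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_with_tools(question: str, max_steps: int = 5) -> str:
    transcript = (
        f"Question: {question}\n"
        "Reason step by step. Reply with TOOL:<name>:<input> to use a tool, "
        "or ANSWER:<text> when you are done.\n"
    )
    for _ in range(max_steps):
        reply = generate(transcript)
        transcript += reply + "\n"
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        if reply.startswith("TOOL:"):
            _, name, arg = reply.split(":", 2)
            transcript += f"RESULT: {TOOLS[name](arg)}\n"  # feed the tool result back in
    return "no answer within the step budget"
```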
Forecasting approach
How would you forecast a question like “AGI by 2030?”
Bottom-up: track AI performance on benchmarks (e.g., passing certain tests, scoring well on standardized tasks) and see how performance is trending.
Some argue we can just extrapolate AI’s exponential curve. Others say every exponential eventually hits a logistic slowdown. Wildeford acknowledges the difficulty: he has tried calling the top before, only for progress to resume.
Top-down: estimate the hardware/algorithmic progress needed for AGI, sometimes comparing to the human brain’s compute or the brain’s “floating point operations.”
Wildeford notes estimates range from 10^21 to 10^30 FLOPs, plus the complexity of evolutionary “pre-training.”
It’s hard to compare neural hardware to silicon, which makes these estimates uncertain.
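A back-of-the-envelope illustration of why that range matters so much: with an assumed current frontier run of roughly 10^26 FLOP and an assumed ~4x per year growth in training compute (both numbers are assumptions for illustration, not figures from the podcast), the different requirement estimates imply wildly different arrival times.

```python
# Back-of-the-envelope: years of compute growth needed to reach various "AGI FLOP" estimates.
# Both constants below are assumptions for illustration, not figures from the podcast.
import math

CURRENT_RUN_FLOP = 1e26   # assumed size of today's largest training runs
GROWTH_PER_YEAR = 4.0     # assumed growth in training compute per year

for required in (1e21, 1e24, 1e27, 1e30):
    if required <= CURRENT_RUN_FLOP:
        print(f"{required:.0e} FLOP: already reachable under these assumptions")
    else:
        years = math.log(required / CURRENT_RUN_FLOP) / math.log(GROWTH_PER_YEAR)
        print(f"{required:.0e} FLOP: roughly {years:.1f} more years of growth")
```

The point isn’t the specific numbers; it’s that the same method yields anything from “already here” to “well past 2030” depending on which requirement estimate you believe.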
De Neufville draws parallels to early Covid exponential growth. People denied it at first, but it kept rising. Wildeford sees a similar “exponential denialism” around AI. However, plateaus may still appear. Both remain uncertain, describing a wide forecast range.
Probabilities for AGI by 2030
Wildeford places the “over/under” at 2030: he gives ~50% odds that AGI arrives by the end of 2030.
De Neufville mostly agrees, seeing enough plausible variation to keep that date as a median guess.
Fast vs. slow
De Neufville asks if AGI might rapidly become superintelligent (a “fast takeoff”).
Wildeford references Eliezer Yudkowsky’s concept of a superintelligence that self-improves in months. Wildeford’s stance is more moderate: yes, AI can automate its own R&D, but real-world hardware constraints remain.
If an AI automates AI research, progress accelerates well beyond historical norms, but maybe not in “months.”
Additionally, physical tasks like factory build-outs still take time. Even superhuman planning must contend with real-world resource constraints.
Nonetheless, large productivity gains could come from replacing entire R&D teams with the equivalent of massive numbers of AI workers.
Risks from AI
Immediate “mundane” risks
Bad actors (terrorists, rogue states) could leverage AI capabilities for bioweapons or cyberattacks.
Dangerous virological knowledge that was previously limited to a few experts could become widely accessible via AI.
Cyberattacks could scale up drastically if automated by AI.
Defensive uses of AI may mitigate some of this, but arms races are likely.
Longer-term disruptive effects
Economic and labor upheaval if AI replaces a huge fraction of jobs. Social/personal meaning crisis if people can’t tie identity to work.
Geopolitical tensions: states may fight over who controls transformative AI.
De Neufville and Wildeford describe AI as a “Pandora’s box” requiring robust governance and strategic foresight. Many concerns are tangled together, from job disruption to existential risk.
Government engagement
Wildeford thinks government must step up, but this isn’t happening yet. Tech CEOs themselves often say they want democratic input, arguing that decisions about how to transform society can’t be left solely to private-sector fiduciary logic. But so far, that democratic input hasn’t materialized.
Paris AI Summit disappointment: De Neufville references a recent summit in Paris aimed at AI governance, but came away pessimistic, finding it unproductive. Wildeford sees it as a missed chance: it neither solved nor meaningfully addressed the global coordination problem.
Muddle-through theory
Wildeford offers that historically, societies often wait until the last minute to tackle big problems (e.g., nuclear arms treaties only after Hiroshima, serious Covid measures only once cases were high).
However, De Neufville is skeptical about “muddling through” if the crisis is unrecoverable.
Call for dialogue
Both see open public engagement, discussion, and legislative awareness as essential.
They acknowledge the risk that it might not ramp up quickly enough, but hope that raising awareness via podcasts and Substack helps.
If AGI were to cause mass unemployment, what about "The devil finds work for idle hands"? Many research papers allege that unemployed masses of men often cause insurgencies/civil wars, or else their leaders solve the problem by starting wars with neighbors.
Scaling reinforcement learning, rather than scaling pretraining, is why I'm hopeful about AGI. This includes optimizing models to get correct answers on logic, math, coding, and real-world analytical tasks. It also includes training them to navigate virtual environments, which will help teach them executive functioning.
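To make the “checkable correctness as a reward signal” idea concrete, here is a toy sketch; it is nothing like real LLM post-training, just the shape of the idea. A trivial policy over candidate strategies gets reward 1 only when its answer verifiably checks out, and reinforcement shifts its preferences toward whatever earned reward.

```python
# Toy illustration of RL with a verifiable reward: strategies that produce correct,
# checkable answers get reinforced. Real LLM post-training is far more involved.
import random

problems = [((7, 8), 15), ((12, 30), 42), ((9, 4), 13)]   # (inputs, correct sum)
strategies = [
    lambda a, b: a + b,        # correct strategy
    lambda a, b: a * b,        # wrong strategy
    lambda a, b: a + b + 1,    # wrong strategy
]
weights = [1.0, 1.0, 1.0]      # the "policy": preferences over strategies

for _ in range(200):
    args, target = random.choice(problems)
    i = random.choices(range(len(strategies)), weights=weights)[0]   # sample an action
    reward = 1.0 if strategies[i](*args) == target else 0.0          # verifiable correctness check
    weights[i] += 0.1 * reward                                       # reinforce what worked

print(weights)   # the correct strategy's weight grows; the wrong ones stay flat
```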