Ten Takes on the Paris AI Action Summit
France wants to race, but isn't looking at the road ahead
About the author: Peter Wildeford is an AI analyst and top forecaster, ranked in the top 1% every year since 2022. Here, he shares the news and analysis that informs his forecasts.
Author’s note: There was so much AI news this week that I’m coming at you with a three-part series: today starting with the AI Summit; tomorrow covering recent updates on AI with Sam Altman, Elon Musk, and Donald Trump; and on Saturday with my typical “Weekend Links” with more AI as well as the world outside of AI. Subscribe so you don’t miss it!
1: The vibes have shifted… and will shift again
Back in November 2023 (approximately two decades ago in AI years), there was an international summit at Bletchley Park in the UK called the “AI Safety Summit”. It was a high-level global meeting where world leaders, tech CEOs, and policymakers came together to think about balancing AI opportunities with emerging national security risks from future, more advanced AI models. A second such summit was held in Seoul.
The Paris “AI Action Summit” is the third in this global conversation, following Bletchley and Seoul. But this Summit had a very different vibe. As apparent from the name alone, France’s goal was to shift the conversation away from AI safety and toward capitalizing on AI’s opportunities and “winning the AI race”.
Major vibe shifts often lead to overcorrection and recency bias. Many now take the Paris Summit to mean that AI safety concerns are obsolete, potentially never to be seen again.
However, such sweeping changes in consensus rarely last. Keep in mind that in each of 2022, 2023, 2024, and 2025, the consensus around AI has been very different. I expect the consensus in 2026 to again be quite different. Consider COVID, for example: initial dismissal of the threat swung to predictions of permanent societal transformation, and both extremes were proved wrong. Reality typically settles between such poles, and I think the AI discussion will too.
2: Vance was right about a lot, but the devil will be in the details
A lot of attention was on US Vice President JD Vance’s speech: what he would say about AI, and what tone it would set for America.
And Vance was right about a lot of things:
Vance is right that “we face the extraordinary prospect of a new Industrial Revolution, one on par with the invention of the steam engine or Bessemer steel”, but I think this actually underestimates what AI can and will do.
Vance is right that the US should aim to be the partner of choice for the world, especially when the most compelling alternative is currently China. I hope other democracies can create a more vibrant free market for AI products in which the US can compete.
Vance is right that geopolitical security — for both the US and France — depends on the most powerful AI systems being built in the US with American-designed and manufactured chips and thus it is imperative to “safeguard American AI and chip technologies from theft and misuse”.
Vance is right that the US should not “go it alone” on AI and should continue to partner with the rest of the free world.
Vance is right that AI must remain free from ideological bias, and that AI should not be co-opted into a tool for authoritarian censorship.
Vance is right that excessive regulation of the tech sector is an issue, especially in Europe. We need to be careful in how we approach regulating AI, as good anticipatory regulation is hard.
Another thing Vance is right about is that we’d all do better to look at AI with optimism rather than trepidation. But this is also where the problem lies: looking at AI with optimism doesn’t mean you can simply wish the risks away. We need a careful balancing of both.
For example, it was interesting to see Vance then outline a “pro-worker” path for AI. Here, Vance is overly optimistic, asserting without solid empirical evidence that AI will not “replace human beings”. Replacing human beings is generally what happens when you have machines that can do everything a human can do, only better and cheaper. Vance may have “plans to keep it that way” so that humans are not replaced, but what are those plans, exactly?
Additionally, Vance is wrong to see pushback on safety as mainly a conspiracy to benefit “Big Tech”. It’s clear that Big Tech generally opposes regulation and lobbies hard against it, not for it. Meta, one of the top ten companies by total lobbying money, is not lobbying for regulation.
The Trump administration is building an AI Action Plan. The devil will be in the details, and the administration needs to understand that AI can enable a significantly better future, but only if its trajectory is proactively understood. There will be serious national security implications, and we need to be vigilant.
3: The summit declaration was disappointing and I’m glad the US and UK didn’t sign it
The Paris Summit Declaration, the “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet”, consisted mainly of platitudes around “diversity”, “inclusivity”, and “multi-stakeholder” approaches, plus a lot of nonsensical emphasis on how much energy AI uses, even though each ChatGPT query uses only about as much energy as watching a YouTube video for two minutes.
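As a rough sanity check on that comparison: using commonly cited (and contested) estimates of roughly 3 Wh per ChatGPT query and roughly 0.08 kWh per hour of video streaming, two minutes of video works out to 0.08 kWh/h × (2/60) h ≈ 2.7 Wh, about the same as one query. Treat both figures as back-of-envelope assumptions on my part, not settled numbers.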
At the same time, the declaration ignored many pressing concerns about emerging risks from advanced AI systems, failing to address the warning from the Bletchley Declaration, which France signed, that “substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent”. This is concerning given that we may not have many more summits before we have to reckon with these risks becoming far more real.
As a result of focusing on the wrong issues, this ill-advised statement fractured the international community, and the US and UK didn’t sign on. Other signatories of the Bletchley Declaration that haven’t signed the Paris Statement include Israel, Saudi Arabia, Turkey, and the Philippines.
4: The International AI Safety Report was well done, and France is dangerously wrong to dismiss it
The International AI Safety Report is a 298-page report that synthesizes current understanding of general-purpose AI risks and capabilities, meant as a systematic attempt to create a shared global scientific understanding of advanced AI risks and mitigation strategies. It’s no joke: it was chaired by Turing Award winner Yoshua Bengio, authored by 100 leading AI experts, and commissioned by the UK government at the Bletchley Summit with the support of over 30 nations, including France.
The report highlighted the uncertain path ahead: future capability advancements in AI could range from slow to extremely rapid. But across that range, there is scientific evidence of emerging risks to national security as AI capabilities grow, such as AI enabling hacking or biological attacks, or even permanently evading human oversight and control.
The report also highlights that AI risk management techniques are nascent but that progress is possible. The problem is that we don’t have much time to make that progress, and the Paris Summit chose not to build toward it at all. Instead, the Summit relegated the Report to one of many side events and otherwise ignored it. I think France will eventually come to regret this decision.
5: It’s great that AI companies continued voluntary commitments… but where is France’s Mistral?
In the run-up to the Summit, leading AI companies continued to create and iterate on their voluntary safety commitments, as they had promised to do. METR has a good run-down here, and I especially applaud new commitments from Meta, Amazon, Microsoft, and xAI. It’s an important milestone that every top AI company investing billions of dollars into AI is now covered by a voluntary commitment. This shows voluntary commitments continuing to work as designed, and shows the Summit’s strong role in galvanizing them. I hope companies continue to innovate on these commitments.
However, one notable absence was France’s own Mistral AI. Founded by former Google DeepMind and Meta employees, Mistral AI is considered France’s “national champion” in AI. They committed at the Seoul Summit to launch a framework, but it is nowhere to be seen… and, more concerningly, it seems like no one cares. This is a key example of the downside of voluntary commitments: they only work insofar as people care enough to shame companies into keeping them.
6: The Summit didn’t do the one thing it was supposed to do
A spokesperson for the UK aptly explained why the UK didn’t sign the declaration: “We felt the declaration didn't provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it.”
This is right, and it points to a huge failure on Paris’s part.
The role of international summits on AI is to collectively understand where AI is going, get on the same page about the promises and perils, and help chart a course for what a good AI-enabled world looks like. Who else is supposed to do this? Companies can’t do it themselves, and even individual governments face collective action problems. This is exactly what international summits are for. But the Paris Summit didn’t do that at all.
One important role of voluntary commitments is for governments to tell companies what outcomes they want and where they want commitments to go, and to pressure companies to do more. Voluntary commitments from companies are still mostly lacking in detail. They need to define clear thresholds for when AI would be dangerous enough to warrant safeguards, build evaluation capacity for noticing when AI meets those thresholds, and have a plan for responding if evaluations show those thresholds have been met; a minimal sketch of that structure follows below.
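To make the thresholds-evaluations-responses structure concrete, here is a minimal illustrative sketch. It is entirely my own construction: the capability names, scores, and thresholds are hypothetical and do not come from any company’s actual framework.

```python
# Hypothetical sketch of a "thresholds -> evaluations -> responses" commitment.
# All capability names and numbers are illustrative, not any real policy.

DANGER_THRESHOLDS = {
    "autonomous_cyberattack": 0.5,  # eval score above which safeguards are required
    "bioweapon_uplift": 0.3,
}

def required_responses(eval_scores: dict[str, float]) -> list[str]:
    """Map evaluation results to pre-committed responses."""
    actions = []
    for capability, threshold in DANGER_THRESHOLDS.items():
        if eval_scores.get(capability, 0.0) >= threshold:
            actions.append(f"pause deployment pending safeguards: {capability}")
    return actions or ["continue with routine monitoring"]

# Example: one capability crosses its pre-committed threshold.
print(required_responses({"autonomous_cyberattack": 0.6, "bioweapon_uplift": 0.1}))
```

The point is not the code but the shape of the commitment: thresholds set in advance, evaluations run against them, and responses decided before the results come in.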
Creating and improving voluntary commitments was a key theme of the Bletchley and Seoul Summits, but was absent in Paris. This is a problem because companies can’t shape a transformative technology like AI without citizen input, and they are not going to invest more in these commitments without public pressure and oversight.
Paris didn’t take even a single step in this direction, and in neglecting it, the French government abdicated its role in shepherding the international community toward proper understanding and oversight of AI.
It would be ideal not to repeat this mistake next time. The next Summit looks set to be hosted by India, which seems close to France in its current approach to AI, so I imagine the default is for these mistakes to be repeated. But maybe that can be avoided.
7: France raised a lot of money for AI, but there are still question marks before declaring it “back in the AI race”
Instead, the main outcome of the Paris AI Action Summit seems to have been a successful fundraising campaign for French AI. After the $50B I mentioned in my previous weekend links, we now have $62B more in reported investments ($112B in total), including $20B from Canadian investment firm Brookfield. As a result, French President Macron boasted that France is “back in [the] race” on AI. And I agree this puts France significantly more on track to compete financially with American tech companies than I was expecting.
However, there are still a lot of question marks. Remember that American hyperscalers are spending $100B per year. For the French funding, it’s not clear how many years the money is spread over or whether it will recur year over year. We also still have no idea how the money will actually be spent, and how it is deployed will matter a great deal.
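To put rough numbers on the gap (my own back-of-envelope, with an assumed timeframe, since the announcements don’t specify one): if the $112B were spread over, say, five years, that would be $112B ÷ 5 ≈ $22B per year, versus the roughly $100B per year cited above for American hyperscalers. Even under that assumption, France would be investing at roughly a fifth of that rate.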
Additionally, Europe doesn’t have the best track record when it comes to efficiently deploying capital at scale. So there’s still a long road ahead to catch up to the US, and while money is the most important ingredient, it will take much more than money. Europe isn’t famous for building big things quickly.
We’ll see. For now, I will update my list of AI players to the following:
OpenAI with Microsoft and/or Oracle (Stargate)
Anthropic with Amazon and Google
Google
xAI
Meta
France? Mistral?
8: French AI acceleration is a good thing
Regardless of whether France is “in” the AI race or not, I do think it is good for France to see a lot of AI investment:
Potentially this helps existing AI systems become more widely adopted, which seems like a good thing. Further adoption of existing AI systems would lead to improved education, science, healthcare, and quality of life.
Insofar as France and the EU can stay competitive with the US, I think it is helpful for the US to have a “worthy competitor” that isn’t China. There should be a global conversation about AI, and it should ideally be among multiple free, democratic nations that can keep each other in check.
Insofar as France and the EU can stay competitive with the US, that’s also good for free market dynamics, which typically improve outcomes for consumers. DeepSeek already put pressure on American companies to lower their prices; maybe French innovation can do the same.
French AI acceleration is likely good for American industry: France will probably buy a lot of American AI products and rent a lot of American cloud compute.
It’s much better to see data centers built in France than in China or the UAE. We need to ensure that all aspects of AI are under the control of democracies.
France having more skin in the game from developing the technology will hopefully help Europe become more vibrant and more pragmatic in its policymaking around tech issues. It would be good for France and Europe to have more direct experience creating and evaluating these AI systems. I think Europe would benefit from this energy.
A sizable amount of French AI investment ($400M in initial funding, potentially scaling to $2B+) is going into Current AI, a foundation to help lower-income countries integrate AI into their development. This also seems like a clearly good thing.
9: It’s good to see more AISIs, but this needs to be coupled with real investment in state capacity
AI Safety Institutes — as initially established by the US, UK, Canada, EU, Japan, and Singapore — were originally designed to be technical government institutions with a clear mandate to understand and evaluate AI systems, generally with a technology-first approach that informs rather than regulates.
It’s good to see more countries creating similar entities. China has created the Chinese AI Safety and Development Association (CNAISDA), which seems to have good buy-in from the Chinese AI ecosystem as well as a fairly technocratic focus on evaluating AI systems.
France also established the National Institute for AI Evaluation and Safety (INESIA), which seems like a welcome step to complement the EU AI Office’s evaluation capacity. India also launched an AI Safety Institute. But in both cases, the “institute” is mainly a joint program between existing organizations without new resourcing. Even China’s CNAISDA is a consortium of eight major Chinese institutions rather than a new body.
This new trend of repackaging existing organizations creates the possibility of interorganizational tension and doesn’t necessarily add state capacity, at a time when the most important thing is for all governments to have a good handle on what is going on with AI.
10: We need more urgency
Kevin Roose writes in the New York Times that “policymakers can’t seem to grasp how soon powerful A.I. systems could arrive, or how disruptive they could be.” Instead, according to Roose, it’s been like “watching policymakers on horseback, struggling to install seatbelts on a passing Lamborghini”.
Roose is right. If AI keeps developing at the pace we’ve observed for the past two years, and at the pace Sam Altman, Demis Hassabis, and Dario Amodei say is coming, this conversation needs to become far more urgent.
We can’t just offer platitudes about the importance of worker autonomy; we need actual plans for how to empower workers in the AI age. We can’t just dismiss the ideas of AI-enabled cyberattacks or biological warfare; we need evaluations, thresholds, and response plans.
To be clear, adoption of and investment in AI is good. But innovation doesn’t come for free: successful technological innovation has always gone hand in hand with acknowledging emerging risks and making plans to address them. That’s how humanity benefits from AI.
But the current path is to sideline and ignore this nuanced discussion. We need the vibes to shift back, and soon. Because time may not be on our side.
If you want my analysis even faster, follow me on Twitter or BlueSky.
For this article, I would also like to thank Caroline Jeanmaire for contributing analysis.