Weekend Links #8: Canadian and Indian AI, pool on the moon
Also better dinner parties and the seven habits of highly depolarizing people
AI
Power laws come for AI and geopolitics, like everything else. A few countries — US and China mainly — are the standouts and thus get the vast majority of coverage, as they should.
Scrolling the Chatbot Arena Leaderboard (a very imperfect1 but nonetheless popular ranking of LLMs), the first five models are all American. Then China comes in with DeepSeek in 6th place. Afterwards, the models are mostly American but with Chinese models like Alibaba’s Qwen2.5-Max in 10th, Zhipu’s GLM-4-Plus in 13th, StepFun’s Step-2-16K-Exp in 14th, Tencent’s Hunyuan-Turbo also in 14th, and 01 AI’s Yi-Lightning in 24th.
Outside of the US and China, we have Canada's Cohere with Command A sharing 12th place and France's Mistral with Mistral Large in 48th place2. Then there's a large gap down to 64th place, where Jamba-1.5-Large sits from Israel's AI21 Labs. After that, there are no models from any other country3 on the leaderboard except for falcon-180b-chat from the UAE's Technology Innovation Institute in 148th place. Though Sutskever's mysterious Safe Superintelligence Inc is a dual Israeli-US company and should be factored into this list somewhere — they're not currently ranked because they don't have a released product to rank, as they famously do not make products, just superintelligence.
In prior editions of “The Power Law”, we’ve covered top AI development across the US, China, Israel, and France. But they’re not the whole story and sometimes you do need to zoom out.
So today let’s explore some of these lesser discussed countries: Canada, the UAE, and India…
~
Canada invests, but doesn’t bring the cash
Cohere Inc. is a Canadian AI company focused on enterprise customers. Their latest model, Command A, aims to deliver high quality at a lower price. Command A is part of Cohere's broader AI ecosystem that includes multilingual models (Aya Expanse), retrieval systems (Embed and Rerank), and workplace systems (North and Compass). Personally, I struggle to see what Command A offers companies that they can't get from Google/Anthropic/OpenAI, especially with Google's Gemma 3 launch.
Canada is interested in getting involved in AI but isn't putting that much money into it. So it was interesting to see Canada announce an investment of up to $240M in Cohere's $725M project to increase domestic Canadian AI compute capacity, as part of the broader $2 billion Canadian Sovereign AI Compute Strategy. The investment will support a new multi-billion-dollar AI data center in Canada, scheduled to come online in 2025, and makes Cohere the first funding recipient of the AI Compute Challenge announced in December 2024.
The problem is that “multi-billion-dollar” just isn’t that much these days. Stargate’s build out is $100B/yr and other American companies like Microsoft, Amazon, Meta, and Google are investing at a similar scale. It will be hard for other countries to keep up.
And interestingly, more Canadian AI investment seems to be occurring in France rather than Canada. The Canadian fund Brookfield has pledged €20 billion to French AI by 2030. This includes a large data center and renewable energy production.
~
The UAE has cash but also geopolitical tensions
While UAE doesn’t have top models, they’ve done fairly well in AI and definitely have a lot of cash in the race. Recall that MGX has invested in colossal AI projects such as the “Stargate project” involving OpenAI infrastructure in the US. They’ve also invested billions with France.
A special episode of CSIS's "AI Policy Podcast" examines the United Arab Emirates (UAE) and its ambitious push to become a major global hub for AI. The hosts and guests traveled to the UAE to meet officials, industry representatives, and investors, analyzing the country's emerging role.
Some tidbits from the podcast:
The UAE is no longer merely oil-rich but has the determination to establish itself in global AI infrastructure. UAE officials aim to derive 20% of non-oil GDP from AI by 2031. Sovereign wealth funds (notably Mubadala) and closely related private vehicles (notably MGX) back huge AI infrastructure deals.
UAE leadership — especially Tahnoun bin Zayed Al Nahyan, the UAE National Security Advisor and brother to the President — directly steers multiple AI ventures like G42 and MGX.
Two reasons for Microsoft and other companies to invest in the UAE: UAE’s co-investment money and market access. For example, the Microsoft–G42–Kenya deal for data centers involved UAE capital that lowered risk for Microsoft. It also gives Microsoft wide Middle East and African market access. This deal was brokered in part by the UAE’s diplomatic reach and government relationships.
But there’s a concern — UAE is a “swing state” in US-China tech competition. This means that giving them access to US tech could be key to winning them over, but UAE might pass US tech to Chinese entities. Any large-scale AI data center in the UAE thus needs US government approval to bring in high-end chips.
Key Emirati figures promised the US that, in AI infrastructure, they would distance themselves from Chinese gear. G42 claims to have “ripped and replaced” over $1B of Huawei equipment, removing Chinese-manufactured components from data centers. There is, however, evidence the UAE continues major dealings with Chinese tech in other sectors; “decoupling” may be isolated to G42 or certain projects.
The podcast group toured a site run by G42's data center subsidiary Khazna, where Microsoft Azure sections are physically walled off and accessible only to Microsoft employees. Floor-to-ceiling turnstiles, badge checks, and separate security cameras are meant to maintain strict partitioning. But even with physical separation, advanced threat actors (e.g., Chinese intelligence) might try cyber intrusions. True security depends on robust digital protocols and constant audits, not just physical locks.
There is a debate among US officials about the priority of controlling the “assets” (data centers) versus keeping “secrets” (model weights) since advanced AI also depends on scale and compute capacity.
~
India just wants to build
Speaking of emerging markets in AI, let's also talk about India. India co-chaired the previous international "AI Action Summit" with France, which was well attended by world leaders, and will host the next Summit in 2026. How will India be a player in AI?
India has a lot of potential in AI with millions of educated tech workers and a strong manufacturing base. IndiaAI commits substantial funding (~$1.2B) for AI research, startups, and supercomputing infrastructure. Domestic initiatives like the Bhashini language platform show India's commitment to developing homegrown AI solutions addressing local needs. And there are amazing technological feats like Aadhaar, the world's largest biometric ID system covering 1.4 billion Indians, and the Unified Payments Interface, which processes over 10 billion transactions monthly. Reliance and Adani Group are also heavily investing.
But India is plagued with infrastructure challenges like chronic power outages and connectivity issues outside major cities. There’s also substantial brain drain where the best Indian AI talent often leaves for American companies. This likely limits what India can do in competing with American and Chinese AI companies on the world stage.
It was interesting watching Modi on the Lex Fridman podcast where Modi stated that “AI is incomplete without India” given its vast talent pool.
Here’s some further notes on India’s AI geopolitical positioning:
India isn't naturally aligned with Western democracies. India has maintained "strategic autonomy" since independence and will continue to do so.
India's approach is pragmatic realpolitik: they'll work with whoever serves their interests in each specific domain rather than picking a side across the entire US-China competition. India's continued Russian oil purchases despite sanctions demonstrate that they won't simply follow Western diplomatic priorities.
However, India is part of the Quadrilateral Security Dialogue between India, the US, Japan, and Australia which provides a platform for cooperation on technology and security issues, including AI safety and semiconductor supply chains.
Strained India-China relations heavily influence India’s technology strategy and provide some strategic overlap with the US:
After the 2020 Galwan Valley border clash, Modi's government banned 200+ Chinese apps, restricted Chinese investment, and is actively working to reduce dependencies. India has furthermore effectively excluded Huawei from 5G deployment, aligning with Western security concerns.
However, despite these tensions, China remains among India's largest trading partners ($135B+ in 2023).
The Quad's tech agenda explicitly aims to “reduce reliance on China” in networks by developing open 5G solutions and to coordinate on AI ethics to counter techno-authoritarian models.
The US CHIPS Act and the Diffusion Framework encourage partnerships with friendly nations. Micron — an American producer of memory and data storage — is building a $2.75B testing facility in Gujarat. Other India-US initiatives on semiconductor workforce development are also underway as India positions itself as a manufacturing alternative to China.
India sees AI primarily through the lens of economic development:
Modi's government is heavily focused on AI autonomy and digital sovereignty: they want to build domestic capability, not just import Western tech. This approach aligns with Modi's broader Atmanirbhar Bharat (self-reliant India) vision.
India’s large rural population could greatly benefit from AI in agriculture (precision farming) and healthcare (telemedicine).
Modi frames AI not just as an industry but as a means to “transform people's lives” and drive socio-economic progress.
India is positioning itself as a bridge between the West and the Global South, especially in AI.
India's recent G20 presidency focused on “responsible, human-centric AI governance” and digital public infrastructure.
At the G7, Modi urged leaders to keep AI “creative, not destructive” and ensure AI is “transparent, fair, secure, accessible and responsible.”
In upcoming negotiations around AI before and at the next Summit, India is likely to rally countries in Asia, Africa, and Latin America around shared demands: technology transfer, capacity building, and affordable access to AI benefits.
India's regulatory philosophy is cautious but flexible, especially compared to the EU approach. India currently lacks an AI-specific law, preferring soft guidance and sectoral rules. This approach balances fostering innovation and mitigating risk, much like the US narrative of responsible AI innovation.
India signed the 2023 Bletchley Declaration on AI safety.
India has been active in the Global Partnership on AI4.
Indian tech firms worry that the EU's stringent requirements will raise compliance costs.
The Ministry of Electronics and IT (MeitY) issued advisories in 2023-24 urging online platforms to self-regulate AI tools.
~
Get hired to do cool AI stuff
If you liked the above analysis and are interested in getting paid to do some of this yourself, now’s your chance. This week we have paid entry-level fellowships across London, Berkeley, and Cambridge:
Accelerate into pivotal AI safety work with the Pivotal Research Fellowship
The Pivotal Research Fellowship is accepting applications for a focused, 9-week mentored research program (June 30 - Aug 29) in London, designed to fast-track talented researchers working on AI safety, AI governance, and/or the intersection of AI and biological risks.
Fellows collaborate directly with expert mentors, tackling critical challenges around the responsible development of AI.
A £5000 stipend, meals, and comprehensive relocation support provided.
Ideal candidates are driven individuals committed to ensuring AI develops safely — prior experience in AI is helpful but not mandatory.
Apply by April 9.
~
Dive deep into AI alignment with MATS
The ML Alignment & Theory Scholars (MATS) Program is accepting applications for its Summer 2025 cohort (June 16 - Aug 22) in Berkeley, CA.
MATS is a 10-week program (with an optional six-month extension) that connects promising researchers with leading mentors in AI alignment, interpretability, and governance.
Scholars will be mentored to conduct original research, as well as attend expert-led seminars, and network within Berkeley’s thriving AI alignment community.
$12,000 stipend plus compensated travel, lodging, office space, and meals on weekdays.
No prior alignment experience necessary, just ambition and talent.
Apply by April 18.
~
Shape the future of AI at Cambridge this summer
Cambridge ERA is offering paid 8-week summer fellowships (June 29 - Aug 24) in Cambridge, UK for researchers and entrepreneurs interested in studying risks from advanced AI systems.
This in-person program at the University of Cambridge focuses on technical AI safety, governance approaches, and the critical intersection between the two.
~£5700 stipend, compensated lodging, compensated meals during work hours, and visa/transport coverage while working on mentored research projects.
Beyond the research opportunities, fellows benefit from 30+ events and develop lasting connections within Cambridge's AI community.
The program welcomes talented individuals at any career stage from around the world.
Apply by April 8.
~
And two more opportunities:
Shape global AI dialogues through remote research
The Safe AI Forum (SAIF) is recruiting 6-12 month remote research fellows to develop and execute high-impact projects addressing extreme AI risks through international collaboration.
SAIF runs the International Dialogues on AI Safety (IDAIS) program, bringing together scientists and governance experts to foster global cooperation.
Fellows can join at various experience levels (Research Fellow, Senior Research Fellow, or Special Projects Fellow) to work on projects ranging from international coordination research to developing safety standards or academic collaboration initiatives.
The fellowship is fully remote and can be done from one of 160+ countries.
Exceptional candidates may even propose their own projects aligned with SAIF's mission.
The position pays $80K-140K annually (location-adjusted) with benefits.
Ideal candidates are impact-driven self-starters with strong prioritization skills who can work independently while communicating effectively in a remote environment.
Experience with international relations or specific knowledge of China is particularly valuable for this role.
~
Help run the show behind innovation policy in DC
The Institute for Progress (IFP) is hiring an Operations Manager to ensure their DC-based think tank runs smoothly while they work to accelerate scientific and technological progress.
This role combines office management, event coordination, and operational improvement projects with flexibility for part-time or full-time arrangements.
Salary $60,000-$80,000/yr with excellent benefits including unlimited PTO, health coverage, paid parental leave, and 5% retirement contribution. Visa sponsorship may be possible.
The ideal candidate is detail-oriented, high-agency, and excited about creating an efficient environment for policy entrepreneurs.
Apply by April 20 (applications are reviewed on a rolling basis).
~
Lifestyle
The seven habits of highly depolarizing people
Politics is very polarized these days. How can we turn down the heat? The Seven Habits of Highly Depolarizing People:
When disagreeing with others, look for shared values and criticize from that common ground.
Avoid binary thinking and recognize that many conflicts involve trade-offs rather than good vs. evil.
Practice intellectual humility through doubt and qualification of statements.
Keep dialogue open, even when frustrated, as ending conversation often leads to demonization.
I realize that was only four habits, but that’s how good summarization works!
~
Dinner parties’ true purpose: meaningful conversations, not food+decor
In “the ultimate guide to holding world class dinner parties”, Auren Hoffman challenges common assumptions about what makes a dinner party successful, arguing that elements like food quality and fancy settings are far less important than careful guest curation and conversation management.
Instead, the key to an outstanding dinner party is maintaining a single conversation for all attendees, which requires limiting the group to 12 or fewer people and having a skilled moderator. Hoffman recommends inviting guests sequentially rather than all at once, sending pre-dinner questions for participants to consider, and ensuring everyone can hear each other clearly. The article specifically warns against common pitfalls like having speakers (which Hoffman calls “boring”) or hosting fundraiser dinners.
~
Whimsy
Swimming on the Moon is cool
XKCD's "What If" explores "What if there were a pool on the Moon?" It finds the physics of lunar swimming would create a remarkable combination of familiar and alien experiences. While floating and underwater swimming would feel similar to Earth, since water density and inertia are independent of gravity, the reduced lunar gravity would enable swimmers to launch themselves 1-2 meters out of the water naturally. The water itself would behave differently too, with larger waves and more "splashy" behavior due to the reduced gravity.
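The jump-height claim follows from basic projectile motion: a swimmer leaving the water with vertical speed v rises to a peak height of h = v²/(2g), so for the same exit speed, height scales inversely with surface gravity (about 6x higher on the Moon). Here's a minimal sketch; the 2.0 m/s exit speed is my own illustrative assumption, not a figure from the article.

```python
# Peak height after leaving the water: h = v^2 / (2 * g)
G_EARTH = 9.81  # surface gravity, m/s^2
G_MOON = 1.62   # lunar surface gravity, m/s^2

def launch_height(exit_speed_m_s: float, gravity: float) -> float:
    """Peak height (m) above the water for a given vertical exit speed."""
    return exit_speed_m_s ** 2 / (2 * gravity)

v = 2.0  # m/s, assumed vertical exit speed from a strong push off the water
print(f"Earth: {launch_height(v, G_EARTH):.2f} m")  # ~0.20 m
print(f"Moon:  {launch_height(v, G_MOON):.2f} m")   # ~1.23 m
```

With that assumed exit speed, an Earth push-off worth ~0.2 m becomes ~1.2 m on the Moon, consistent with the 1-2 meter figure above.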
1. The core problem with using Chatbot Arena to rank models is that models are ranked by normal humans who don't really know enough to judge advanced model capabilities these days and ask overly simplistic questions. Also, companies put different amounts of effort into "optimizing" for ranking well on the Arena — this is likely why Claude does so poorly. And then there may be fraud and trolling (see here and here).
2. I don't think this should be taken to mean that Cohere is more advanced than Mistral. Firstly, the Arena is not a great way to compare model abilities. Secondly, Cohere just recently released Command A whereas Mistral has not released a new model in a while.
3. DeepMind is being counted as a US company, not a UK company, as it is fully part of Google. I am counting companies as headquartered in the principal headquarters of their parent company.
4. GPAI just merged into the OECD AI Policy Observatory, and notably the OECD does not include India. But a condition of the merger allows India to continue to participate.