15 fellowships and 10 roles for AI policy this October
Fully-funded entry-level fellowships and other full-time roles
Have you been reading about AI lately and wondering what’s up and how you can help? Getting AI right is the greatest challenge of our time, and we need a wide variety of people to pitch in!
In this article I will highlight some currently-hiring AI policy roles that I personally endorse and think highly of.1
If you read this blog, you’re likely the target audience for many of these roles. If you’re career searching right now, or just career browsing, consider applying!
~
[Remote] Train for an AI Policy career with paid AI Law/Policy Research Fellowships
The Institute for Law & AI is hiring for multiple full-time paid seasonal research fellowships across three tracks: US Law & Policy, EU Law, and Legal Frontiers. These programs offer law students, professionals, and academics the chance to conduct cutting-edge AI law research with close mentorship from LawAI’s research staff. Alumni have landed roles at the US Commerce Department, EU AI Office, UK AI Safety Institute, leading AI labs, academia, and top think tanks.
The three research tracks:
US Law & Policy: Requires an understanding of US legal principles for AI regulation. Summer includes an in-person week in DC/Berkeley and admission to the Summer Institute on Law and AI.
EU Law: Focuses on the EU AI Act and EU legal frameworks. Summer includes an in-person week in Cambridge and admission to the Cambridge Forum on Law and AI (August 18-23).
Legal Frontiers: Open to interdisciplinary backgrounds (CS, policy, economics, history, psychology, physics); legal training is not required. Research topics include AI Agents & Rule of Law and International Regulatory Institutions. Summer includes admission to the Workshop on Law-Following AI.
There are two seasonal cohorts available:
Winter Fellowships (2026 January 26 - May 8) - unfortunately, applications are now closed.
Summer Fellowships (2026 June-August, dates vary by track) - applications close on January 30.
Compensation is $1,500/week (US track) or €1,000/week (EU, Legal Frontiers). Fellows working in-person from LawAI’s Cambridge, UK office receive an additional €1,000/week for living costs. The EU track strongly encourages Cambridge-based work; other tracks are remote-first with optional in-person opportunities.
You’ll work with mentors to design tailored research projects (law review articles, policy briefs, reports), with considerable discretion over outputs. No prior AI expertise required — they want people who can apply strong research abilities to AI law questions. Past fellows include law students, PhD candidates, professionals transitioning into AI policy, and legal academics.
~
[Cambridge UK] 8-Week fully-funded full-time AI Safety Research Fellowship
The ERA Fellowship offers 8 weeks of fully-funded AI safety research (2026 February 2 - March 27) where you’ll work on technical safety, governance, or technical AI governance projects with weekly mentorship from expert researchers. The program covers everything: competitive stipend, meals, lodging, transport, and visa support.
ERA welcomes researchers and entrepreneurs at any career stage working on mitigating risks from advanced AI systems. You’ll join ~30 fellows working alongside Cambridge’s AI safety community, with seminars and events over the 8 weeks plus dedicated research management support and compute resources. The program is especially interested in projects that unite technical and policy research — leveraging the technical substrate of AI (architectures, algorithms, interfaces) to support policy goals.
Research areas include AI alignment and control, autonomous AI agents, cybersecurity and preventing model weight exfiltration, and sociotechnical challenges. No prior AI safety research experience is required — they want talented individuals motivated to contribute.
Applications close October 30th. The fellowship will be based in-person in Cambridge, UK for the full 8 weeks (2026 February 2 - March 27).
~
[DC] Part-Time AI Policy Fellowship for Conservative Professionals
The Foundation for American Innovation’s Conservative AI Policy Fellowship is an 8-week, work-compatible program (2026 January 23 - March 30) designed for conservative policy professionals who want to develop AI literacy without leaving their day job. The program is fully funded, with a $1,500 stipend and all meals and retreat expenses covered.
This is explicitly for conservatives exploring questions like “What does a conservative vision for AI policy look like?” and “How should the US maintain AI leadership amid competition with China?”
The time commitment is ~5 hours/week: weekly Friday lunch sessions (12-1:30pm), occasional Tuesday evenings (6:30-8:30pm), and one mandatory weekend retreat (February 20-22).
Topics include national security implications of AI, export controls, China competition, energy/reindustrialization, and AI’s impact on children and families. Fellows produce a 2-page policy memo with mentor guidance.
The target audience is early- to mid-career policy analysts, think tank staff, Hill staffers, legal professionals, government employees, and private-sector tech professionals. No prior AI experience required — the program teaches technical fundamentals for policy discussions, not engineering. Past fellows include Senate Commerce Committee staff, Heritage Foundation analysts, and congressional aides.
Applicants must be US citizens. Applications close October 31. The final cohort will be announced in mid-December, and the fellowship runs 2026 January 23 - March 30. All weekday sessions will be in-person in downtown DC near Metro.
~
[Brussels/EU] Launch a European AI Policy Career with 8-Week Program + Paid Placement
The Talos fellowship is a three-part program designed to accelerate European AI policy careers: an 8-week online fundamentals course, 7-day Brussels policymaking summit, and optional 4-6 month paid placement at leading organizations like The Future Society, OECD.AI, Centre for European Policy Studies, or the Centre for Future Generations.
The program focuses on safe and responsible deployment of advanced AI. In this fellowship, you’ll complete weekly readings with expert guest speakers covering EU AI governance, regulatory approaches, digital governance, AI infrastructure/industrial strategy, geopolitics/national security, international AI regulation, and economic integration. You’ll write a policy brief on your chosen topic to be published. The Brussels summit (2026 March 21-27) includes expert speakers on policy design/implementation, Q&As and networking with cutting-edge AI governance practitioners, and practical role-playing workshops.
This fellowship has a strong preference for EU citizens (as many top EU institutional roles require EU citizenship). Fellows should have finished undergrad and are best positioned if they hold a Master’s or PhD in ML, public policy, or another related field.
Applications close October 25th. The online program runs 2026 January 28 - March 18, the in-person Brussels summit is March 21-27, and the career placements occur across 2026 April-October.
~
[Remote] Accelerator for Scientists Leading Ambitious Coordinated Research Programs
Brains is a 15-week part-time accelerator (2026 February-June) training scientists and technologists to design and execute coordinated research programs — the kind of projects too big for a single academic lab but too research-heavy for startups. Think DARPA-style programs that led to the internet, GPS, and mRNA vaccines. Fellows could become program managers at government ARPAs, run programs within established nonprofits, or raise money to create their own focused research organizations.
The program provides training (calibrating risk/reward, thinking at program level, best practices for program design/management), mentorship from DARPA/ARPA-E veterans helping refine ideas for execution and impact, and networks connecting fellows to funders, partners, and peers. Each week you will meet with an experienced mentor, spend several hours on personalized activities, attend small group meetings with peers, and join panels/fireside chats with founders, ARPA leaders, and philanthropists.
The program is remote-first with two in-person events: a kickoff workshop (2026 February 24-26) and the Brains Showcase (2026 June 9). Target audience: talented scientists and technologists with ambitious research visions ranging from carbon management to chronic disease to observing the universe — visions beyond the scope of individual labs, startups, or large companies.
Applications close November 18. You can see the first cohort and a supercut of their showcase.
~
[Remote] 30-Hour Course to Build AI Safety Solutions + Up to $50K Funding for AI Safety Entrepreneurs
BlueDot Impact’s AGI Strategy course condenses years of self-study into 30 hours of structured learning on how to protect humanity from AI risks. The course comes in either a 6-day intensive (5h/day) or a 6-week part-time (5h/week) format. Each session includes 2-3 hours of reading/writing plus a 2-hour facilitated Zoom discussion with ~8 peers and an AI safety expert. The course is pay-what-you-want, with a free option available.
After the course, you will be invited to write a proposal for how you would tackle AI risks, and if your final proposal is strong, you’ll receive $10-50K in funding to kickstart your transition into impactful AI safety work. This could involve starting a company or non-profit, doing policy entrepreneurship, or pursuing high-impact research.
The course is primarily for entrepreneurs and operators building AI safety solutions. BlueDot partners with Entrepreneur First, Institute for Progress, 50 Years VC, Seldon Lab, and Halcyon Futures to accelerate promising projects.
Another strength of BlueDot is access to the 4000+ alumni network, including people at OpenAI, the UK AI Security Institute, the UN, Anthropic, DeepMind, NATO, OECD, and Stanford HAI. Completing the course gives you access to this builder community focused on ambitious actions to make AI go well.
The next course cohort starts October 27th, so apply by October 19th. New cohorts start monthly.
~
[Remote] Part-Time 12-Week Research Fellowship in AI Governance, Safety & Philosophy
Future Impact Group’s fellowship is a part-time, remote-first program (running from early 2025 December to early 2026 March) where you work as a research associate on specific projects in AI governance, technical AI safety, or AI philosophy. The commitment is 8+ hours per week, with mentorship from experienced project leads.
There are three tracks:
AI Policy: Policy & governance (shaping rules/standards/institutions nationally and internationally), plus economy/ethics/society work (managing AI’s effects on economies, societies, power structures)
Technical AI safety: Technical safety projects (LLM reward-seeking, cooperative AI definitions, interpretability)
AI Philosophy: Philosophical fundamentals (coexistence with advanced AI, decision-making under uncertainty) and foundational research (consciousness models, eliciting LLM preferences, individuating digital minds, evaluating normative competence)
The program provides co-working sessions, troubleshooting support, career guidance, opening/closing events, networking opportunities, research sprints, and guest speakers.
Applications close October 19th.
~
[London] 5-Week ML Bootcamp to Launch Your Technical AI Safety Career
ARENA, the Alignment Research Engineer Accelerator, is running its 7th cohort from 2026 January 5 - February 6 in-person in Shoreditch, London. The goal of the program is to upskill people in ML, specifically as it relates to the alignment of large language models, so that they can contribute to technical AI alignment (e.g., as research engineers at major orgs or as independent researchers). Travel and accommodation are fully covered.
To apply, you need to code well in Python and have solid math fundamentals (linear algebra, calculus, probability). Otherwise there is no single profile — recent cohorts include diverse academic and professional backgrounds. They want people who genuinely care about AI safety, understand how they might contribute to technical safety work, and see how ARENA fits their goals. The program runs careers events and stays connected with alumni after graduation.
Applications are now open until October 18th.
~
[Remote] Fully-Funded PhD & Postdoc Fellowships in AI Safety & US-China Governance
Future of Life Institute’s Vitalik Buterin Fellowships fund PhD students and postdocs working on AI safety and/or US-China AI governance research. These fellowships exist to build a “vibrant AI existential safety research community free from financial conflicts of interest,” and they come with a notable and unorthodox condition: if you take a job at Anthropic, DeepMind, Meta, OpenAI, or xAI within 2 years of completing the fellowship, you must donate half your gross compensation monthly to charity.
There are three tracks:
US-China AI Governance PhD Fellowships: Involves work on US-China AI competition and reducing the risks involved in managing that competition, global governance mechanisms to prevent AI race dynamics, institutional designs for cooperation, and/or comparative approaches to AI risk management. The fellowship covers full tuition + fees for 5 years with extension funding possible, plus a $40,000 annual stipend at US/UK/Canada universities and a $10,000 research fund for travel/compute resources. There are also annual workshops and networking events.
Technical PhD Fellowships: Involves work on interpretability, verification, alignment, cybersecurity, deception-resistant objectives, and/or formal analysis methods — technical work that reduces the risk of AI causing existential catastrophes. The funding terms are the same as the US-China track: full tuition + fees for 5 years with extension funding possible, a $40,000 annual stipend at US/UK/Canada universities, a $10,000 research fund, and annual workshops and networking events.
Technical Postdoctoral Fellowships: An $80,000 annual stipend plus a $10,000 research fund. Requires securing a mentor and host institution beforehand.
There are no geographic limitations to this work. The PhD fellowships are open to both current and prospective PhD students and require advisor confirmation of support for your research direction. Apply by November 21.
~
[Brussels, EU only] Help the EU AI Office implement the EU AI Act
The EU AI Office’s AI Safety Unit (A3) is hiring for the following profiles:
People with both legal experience and AI safety/policy experience
People with experience in technical AI safety
People with experience with risk management
People with experience with AI and cybersecurity (e.g., cyber-offensive evaluation, model-weight security)
People with experience with AI and biosecurity
People with experience in operations
The way the EU Commission hires is via a super generic expression of interest form, but if you are an EU citizen and fit any of the above, please consider applying. Applications will continue to be reviewed on a rolling basis.
~
[Remote] Stop Human Extinction from Biological Catastrophes
Normally I focus on roles related to AI, since I think it’s important and it’s the field I know best. However, biosecurity is important in its own right (remember COVID?) and is also a critical AI threat vector: one of the most concrete ways AI leads to harm is via AI-generated bioweapons.
Andrew Snyder-Beattie runs Open Philanthropy’s biosecurity program and his team has a concrete “four pillars” plan they think can cut biorisk in half or more:
Pillar 1: Personal protective equipment, especially via elastomeric respirators
Pillar 2: Biohardening buildings, especially via propylene glycol vapor (the same chemical used in fog machines and vapes)
Pillar 3: Early detection, especially via pathogen-agnostic metagenomic sequencing
Pillar 4: Rapid, reactive medical countermeasures
You can see more in the official article and in this podcast discussion.
The reason this is important: fewer than 100 people globally are working full-time on implementing the above four pillars, and many critical projects have literally zero full-time staff. But there are multiple open roles to implement this defense strategy.
If you have good pre-existing domain knowledge in biology, manufacturing, logistics, or entrepreneurship, please consider one of the following roles:
You can apply to be a grantmaker at Open Philanthropy to deploy tens of millions annually in biosecurity funding. And this isn’t reviewing applications — it’s entrepreneurial headhunting and creating the projects you want to exist. Half the team has bio PhDs, half don’t. Program Associate applications close October 20.
They are recruiting for a CEO and a full team for a PPE nonprofit to manufacture/distribute elastomeric respirators at scale. These masks cost $5-10, last 20 years, provide 100x better protection than N95s, and can be reused for 6 months straight. Stockpiling them for every American would cost 1% of annual missile defense spending. The team needs manufacturing experts, product designers, logistics specialists, and global health experts.
The world desperately needs more biohardening researchers, as currently there are approximately 0 full-time people doing this. These researchers would validate strategies for protecting buildings using propylene glycol vapor (already mass-produced for fog machines), improvised air filtration, and surface sterilization. It would be a hands-on engineering role testing whether you can turn homes into clean rooms using household materials.
They also need medical countermeasures researchers to develop rapid-response antivirals and antibiotics using AI-driven molecular design.
Fill out this expression of interest form to explore the above.
~
[DC] AI Policy Research and Operations Roles at Conservative Think Tank
The Foundation for American Innovation is hiring for two AI policy roles. FAI is a right-of-center nonprofit think tank with a “politics of builders, hackers, and founders” focused on advancing tech policy that supports innovation and American competitiveness. Both positions are Washington DC-based with hybrid work from an office in Union Market. Applications are reviewed on a rolling basis.
Research Fellow / Senior Fellow - AI Policy ($75K-115K Research Fellow / $130K-175K Senior Fellow): Produce original nonpartisan research shaping federal and state AI policy debates. Act as policy entrepreneur identifying opportunities where FAI can offer intellectual leadership and equip policymakers. Focus areas include cybersecurity/national resilience (protecting critical infrastructure from AI threats), AI diffusion (reducing deployment barriers), tech competition (ensuring US leadership in semiconductors/manufacturing/AI systems), AI and society (analyzing AI agents’ effects on culture/governance/families), or public-sector adoption (accelerating federal AI integration).
This role involves writing research and commentary, translating technical debates for policymakers and the public, shaping FAI programming (panels, workshops, convenings), and periodic travel to major tech hubs. Senior Fellows also develop donor relationships.
Role requirements include exceptional communication skills, intellectual rigor, entrepreneurial initiative, and fluency bridging technical and policy conversations. Research Fellows need 2-6 years of relevant experience (highly qualified recent grads are encouraged to apply), whereas Senior Fellows need 6+ years plus a publication record in policy journals/national outlets. A law degree is helpful but not required.
Program Manager - AI Policy ($70K-90K): Run operations for the Conservative AI Policy Fellowship (their part-time 8-week program for DC professionals), coordinate new programs and major events, and provide operational support to the Director and Senior Fellows on hiring/grantwriting/systems. You’d facilitate team meetings, track priorities, and ensure accountability. The role needs outstanding organizational skills, the ability to manage multiple priorities, exceptional attention to detail, proactive problem-solving, and comfort owning program execution end-to-end. Event/program coordination experience is helpful but not required.
~
[Remote] Build Metaculus’s Consulting Practice Using Forecasting for High-Stakes Decisions
Metaculus is an online forecasting platform with increasing demand from organizations wanting to harness collective intelligence for geopolitical risk, frontier AI governance, and public health preparedness.
Metaculus is hiring a Head of Consulting Services to scale their forecasting consulting practice, which aims to help governments, NGOs, think tanks, and corporations use forecasting to make better decisions on critical global challenges. This role represents a chance to translate crowd forecasting expertise into actionable insights for decision-makers while building sustainable revenue that supports their broader mission: building epistemic infrastructure for the world’s most important challenges. In this role, you’ll own the consulting practice end-to-end: developing the client pipeline, managing relationships throughout engagements, ensuring high-quality deliverables, and building the team and processes to scale impact.
The compensation is $200-250K + performance bonus + equity. The role is remote from anywhere in the world, but the candidate must be available 8am-12pm Pacific time for core collaboration hours.
The role calls for 5+ years of consulting experience at a major consulting or quantitative firm, including experience owning client outcomes, project management/delivery P&L, and personnel management. The ideal candidate has a strong business development track record, ideally with a mix of government, NGO, and corporate clients.
~
[DC] Shape AI Policy Narratives as ARI’s Communications Manager
Americans for Responsible Innovation is hiring a Communications Manager to expand their media presence on AI and emerging tech policy. ARI is a bipartisan nonprofit focused on thoughtful AI governance that protects the public while fostering innovation. The role pays $105K-125K with a hybrid schedule (in-person Tue-Thu, remote Mon and Fri). There is a priority deadline of October 10th.
In this role, you’d manage proactive and reactive media communications, draft and pitch op-eds (landing pieces in national outlets), lead media monitoring (quarterly metrics and daily press clips), draft press releases and policy statements, collaborate on strategic media campaigns supporting policy objectives, build and manage media relationships as the main point of contact for press inquiries, support virtual and in-person events with media presence, and assist with website maintenance.
The role requires 3-4 years of communications experience at an advocacy group, think tank, or multi-client consultancy, or in government. Applicants must demonstrate an ability to synthesize complicated policy into digestible products, command of AP Style with a strong eye for detail, and on-the-record experience with media outlets. A tech/innovation policy background and existing relationships with reporters covering biosecurity and national security beats are preferred.
~
[Berkeley CA] Launch an AI Safety career with a full-time paid fellowship + mentorship from top researchers
The Astra Fellowship boasts an 80%+ placement rate getting people into full-time AI safety roles, including at top AI companies (e.g., Anthropic, OpenAI, DeepMind) and other top organizations like Redwood Research and METR. Their next cohort is a fully-funded 3-6 month research program starting January 5th, covering both technical AI safety work and AI policy work.
The role comes with a competitive stipend plus a generous compute budget, and involves in-person work at their Berkeley, CA research center with ~150 network participants. Their mentor roster is stacked: Jan Leike, Buck Shlegeris, Owain Evans, Ethan Perez, Ryan Greenblatt, and dozens more across governance, security, empirical research, strategy, and field-building.
Prior AI safety experience is not required — they explicitly want people from adjacent fields. Many of their most impactful fellows came from outside the field. The program provides dedicated placement support plus incubation services if you want to launch your own AI safety initiative. International applicants are welcome with visa support.
Unfortunately, applications are now closed.
Disclaimers: A job being featured here means that I like the people who work there and the organization, but does not mean that I endorse or agree with all of their opinions and policy positions. In many cases, I strongly disagree! However, I think AI policy should be a ‘big tent’ enterprise with robust debate across many differing perspectives, so I did not apply an ideological filter to these jobs. I recommend applicants do their own research into what each organization stands for and apply accordingly.
Additionally, I am featuring jobs based on my own independent judgement and I am not directly associated with any of the opportunities listed here (unless otherwise noted). I cannot answer questions about any of the roles I am not directly involved with.