Now is a great time to get a career in AI policy
Fully-funded fellowships, full-time roles, and part-time AI policy programs
These roles have all closed; this post remains as an archive for historical purposes. You can see the latest roles (as of Oct 6) here.
~
~
If you’re looking for a career and want to consider AI policy work at an illustrious think tank or in Congress, now is a great time: many organizations are hiring.
I’ve selected some roles at organizations I think highly of and personally endorse. If you like this blog, you’re likely the target audience for many of these roles. If you’re career searching right now, consider applying!
~
[DC or remote/global] Shape AI’s impact on national security and work with me!
First up is me! I work at the Institute for AI Policy and Strategy (IAPS), a DC + remote think tank focused on US federal AI policy (and also some work in the UK).
We’re hiring for several roles:
Researchers and Senior Researchers for our Frontier Security team, offering $81K-$220K/yr to work on cutting-edge AI security challenges that directly influence US defense and national security policy.
The ideal candidate has 1+ years (Researcher) or 5+ years (Senior Researcher) of prior experience in research, policy, or technical roles.
Work will focus specifically on tackling critical questions around autonomous AI systems in cyber warfare, developing AI security technologies for high-stakes applications, and advising government stakeholders on AI preparedness.
Researchers and Senior Researchers for our Compute Policy team. Also a salary of $81K-$220K/yr depending on experience. The ideal candidate has prior experience working with AI, tech policy, chips/semiconductors, export controls, data center security, hardware-based access controls, or supply chains.
Senior Researcher - AI International Strategy to play a central role conducting research and engaging policymakers. Salary of $111K-$220K/yr. The ideal candidate has strong, well-developed views on AI and geopolitics.
Senior Research Manager (or Director) in either Frontier Security or Compute Policy to help lead our respective research teams and manage a team of 3-6 researchers. Here we are prioritizing research management and project management skills, plus a solid understanding of the AI landscape. Salary of $111K-$220K/yr. 6+ years of research/policy experience with prior management experience is key.
A (Senior) Programs Associate to help build the AI policy talent pipeline and architect our flagship AI Policy Fellowship ($70K-$110K/yr). As the second hire on the Programs team, you'll design immersive curricula, secure participation from top policymakers and AI experts, coach fellows ranging from recent grads to senior professionals, and help shape how talented people transition into high-impact AI policy careers.
The role starts with a 3-6 month contract (expected to convert to permanent) and involves orchestrating the fellowship's 2-week DC intensive between September 1 and November 21.
Ideal candidates bring strong project management skills, a passion for mentorship, and existing networks in tech/AI policy.
Some details about all the roles at IAPS:
The roles are fully remote and open to non-Americans; we can hire in most countries, though we have a preference for candidates based in DC or SF. We can also provide visa and relocation support.
While the roles list minimum experience levels, we welcome applications from candidates with significantly more experience than this! Our wide salary ranges let us compensate experience appropriately.
We are open to part-time work as long as you can do 20+ hours per week.
Apply by August 10.
We are hosting a Q&A webinar on July 21.
~
[DC or Europe] Work on important US tech policy questions at RAND
RAND’s Technology and Security Policy Center (TASP) is rapidly expanding with openings for researchers, research leads, and project managers across their Compute, US AI Policy, Europe, and Talent teams.
Led by experts like Lennart Heim (Compute team) and Brodi Kotila + Chris Byrd (US AI Policy team), these roles offer unparalleled access to classified government work, direct policy influence, and the ability to translate cutting-edge AI governance concepts into implementable proposals.
They're seeking ML engineers, semiconductor experts, policy wonks, and excellent generalists who can handle both rapid-response briefings and longer-term strategic projects.
RAND's unique position provides strong credibility with US and allied governments plus a track record of real policy impact, combining technical expertise with national security policymaking. They are hiring in both the US and Europe, with some roles closing August 3 and others August 10.
Apply here or message Michael Aird on Twitter with questions. These roles are open to non-Americans.
~
[DC] Launch your career shaping America's AI and biotech policy
The Horizon Fellowship offers an unparalleled pathway into US emerging technology policy with a 100% placement rate at federal agencies, congressional offices, and think tanks.
This fully-funded program provides $113,000/year (or $75,000 for junior fellows) to work on critical AI, biotechnology, and emerging tech challenges in Washington DC, in a 1-2 year placement in Congress, the Executive Branch, or a DC-based think tank.
Fellows receive intensive policy training, mentorship from senior leaders, and join a tight-knit community of public service-oriented experts — with many alumni converting to permanent roles at NSC, OSTP, Commerce, and other key institutions.
Applications close August 28 for the 2026 cohort, which starts with training in January 2026. You must already be eligible to work in the US (US citizenship is not required) and be willing to relocate to DC.
~
[Remote US] Bridge academia and government to shape AI policy
Princeton's Laboratory for AI seeks Policy Fellows to embed with federal and state agencies in a unique program offering $130K-$180K to advance responsible AI development from inside government.
Fellows will spend a year at agencies like the US AI Safety Institute, HHS, state regulators, or OSTP, conducting risk assessments, evaluating AI systems, and advising leadership on critical policy decisions.
This rare opportunity requires 7+ years of AI/ML/tech policy experience and an advanced degree, with placements starting between July and December this year.
The role is remote with travel to DC and Princeton. Applications are reviewed on a rolling basis, and positions may fill quickly. Applicants must already be eligible to work for US federal/state agencies (citizenship or permanent residency typically required).
~
[DC] Work for the Center for a New American Security’s AI Security Initiative
The Center for a New American Security (CNAS) is expanding their initiative on AI security and stability. This coordinated hiring effort spans multiple seniority levels, all focused on advancing the national security community’s understanding of AI risks and developing concrete policy solutions.
At the leadership level, CNAS seeks a Fellow/Senior Fellow for their Technology and National Security Program to lead research on mitigating risks from advanced AI capabilities, including AI-enabled cyber and biological weapons development, misaligned AI behaviors, and recursive AI improvement.
They're also hiring an Associate Fellow with 3-5 years of experience to conduct deep research on AI security while engaging senior government officials and technology leaders.
Supporting these senior researchers will be two Research Assistants/Associates — one focused on AI risks and governance, another specializing in compute governance and semiconductor policy.
A Project Coordinator will orchestrate the entire initiative, managing complex logistics, tracking deliverables, and supporting strategic policy development.
All positions offer three-year appointments based in Washington DC. These roles are open to non-Americans who are willing to relocate (relocation assistance available).
~
[London] Help the UK AI Security Institute study transformative AI
The UK AI Security Institute (UKAISI) is the world’s largest government team dedicated to understanding AI capabilities and risks. They have a lot of awesome experts, and they’re seeking a Transformative AI Policy Analyst to bring cutting-edge technical AI insights into cross-government policy work. The team is small and agile, highly impact-focused, and works on fast feedback cycles.
This 24-month fixed-term role pays £44,195-£48,620 + 28.97% pension and offers the chance to directly shape how the UK government responds to AI’s most critical security challenges: chemical/bio weapons development, cyber-attacks, fraud, and loss of control scenarios.
They’re looking for candidates with an excellent strategic sense of impactful interventions for transformative AI, deep technical insight, exceptional autonomy to execute loosely defined projects, very strong communication skills to translate complex technical work for senior decision-makers, and outstanding organizational abilities. That makes this role an excellent fit for ‘Power Law’ blog readers! Applications are only open to UK nationals.
The role involves leading interactions with cross-government assessment partners, producing rapid-turnaround briefs for senior officials, and coordinating technical work across teams. Based in London with hybrid working (40-60% office), applications close July 31, 2025.
~
[Brussels, EU only] Help the EU AI Office implement the EU AI Act
The EU AI Office’s AI Safety Unit (A3) is still hiring for many roles across legal, operations, policy, and technical work (especially on biorisk or cyber-risk). This is the part of the EU AI Office tasked with evaluating and monitoring emerging risks and incidents from large-scale AI models.
The Commission usually hires via very generic job postings, and they need a variety of expertise across a range of seniorities. If you are an EU citizen with prior technical, policy, or operational experience, these could be great roles for you to explore. Applications will continue to be reviewed on a rolling basis.
~
[London] Help BlueDot build the future of AI talent
BlueDot Impact is an AI workforce non-profit: they are trying to figure out what AI policy work is needed to make AI go well, then build the workforce to make it happen. For this role, BlueDot is looking for someone who will have total autonomy to run programs (courses, bootcamps, whatever moves the needle) that train top talent and place them in critical AI roles.
The role pays $110K-$160K/yr with flexibility on seniority. Requirements include policymaking experience (UK/US/EU/China preferred) and deep AI governance knowledge — but they value depth in one area over shallow breadth.
The position is London-based with a strong in-person preference, and they're evaluating candidates on a rolling basis with first offers in early August 2025.
~
[DC] Help Encode move AI ideas into actual legislation
Encode AI, an advocacy powerhouse working on state and federal legislation across America, seeks a Policy Analyst/Director to drive their next policy push. Encode's track record speaks for itself — they don't just write white papers, they convert ideas into action and get them across the finish line through coalition building, political strategy, and hands-on advocacy.
This $110,000+ DC-based role is potentially open to both early-career and experienced candidates. It involves drafting legislative text and amendments, building coalitions across government and industry, and helping manage multiple concurrent campaigns under pressure. The ideal candidate brings high agency, intellectual agility, and strong writing skills, and is ready to tackle everything from emergency weekend responses to strategic long-term planning. Applications will continue to be reviewed on a rolling basis.
~
[Remote US] Run ops for the team at the forefront of AI safety lawmaking
The Secure AI Project, famous for generating California’s SB 1047 proposal, has learned a lot from the experience and is back with a new variety of bipartisan state-level legislation. They are hiring for both an Operations Manager/Head of Operations and a Chief of Staff (both roles $115K-$160K+/yr) to build the infrastructure for their AI safety policy work.
Secure AI Project is a 501(c)(4) nonprofit responding to overwhelming demand from state legislators who want to pass bills protecting the public from frontier AI risks. You'd be an early team member handling the full operations stack—compliance, finance, HR, and program management—for an organization poised to create meaningful AI safety laws across multiple states while federal action stalls.
The role is remote anywhere in the US, but with a preference for working from Berkeley, CA. The ideal candidate brings 3+ years of operations/finance experience (or exceptional writing/communication skills), thrives in fast-paced environments, and gets impatient when projects aren't moving forward. Experience with 501(c)(4)s or startups is a major plus. They're evaluating applications on a rolling basis, so apply ASAP through their application form.
~
[CLOSED] Define AI governance at the Centre for the Governance of AI
The Centre for the Governance of AI (GovAI) is recruiting for two roles — Research Scholars and Research Fellows — shaping how humanity navigates the AI transition. GovAI has helped define the field since 2018, with alumni now working on AI governance at DeepMind, OpenAI, and Anthropic, and in government positions.
Research Scholars receive £75K-£95K for a one-year visiting position with remarkable flexibility. Research Fellows are experienced researchers earning £80K-£100K on two-year renewable contracts who drive core research priorities while mentoring junior staff.
Both roles offer extraordinary freedom to pursue impactful work across frontier AI safety, international governance, compute policy, AI economics, and technical governance.
Based primarily in London, with US options available. Applications are already closed, but consider applying next cycle. These roles are open to both Americans and non-Americans.
~
[CLOSED] Help The Future Society prepare for AI crises before they happen
The Future Society seeks a Research Fellow contractor to help prepare the US government for potential crises involving the most powerful AI systems, including eventual AGI — building concrete operational playbooks and mapping complex response networks for these scenarios.
This $440-$523/day contract role for 1-6 months (with potential for permanent placement) is based in Washington DC and involves strategic research on large-scale AI risk scenarios and crisis response mechanisms, along with coordinating private expert consultations to stress-test response strategies.
The role is best for those with some technical grounding in areas like AI cybersecurity, evaluations, alignment, risk modeling, or crisis response. This isn't theoretical work - you'll be doing work that could define humanity's future. The position offers direct mentorship and the chance to shape national preparedness strategies at a critical moment.
Applications are already closed, so this listing is kept for archival purposes.
~
~
Disclaimer: A job being featured here means that I like the organization and the people who work there, but it does not mean that I endorse or agree with all of their opinions and policy positions. In many cases, I strongly disagree with some aspects. However, I think AI policy should be a ‘big tent’ enterprise with robust debate across many differing perspectives, so I did not apply an ideological filter to these jobs. I recommend applicants do their own research into what each organization stands for and apply accordingly.