40 new opportunities to shape AI policy
Research, entrepreneurship, government, engineering, media, operations, and more!
These roles are all closed; this post is now an archive for historical purposes. You can see the latest (as of Oct 6) here.
~
~
Have you been reading about AI lately and wondering what’s up and how you can help? Readers of this blog know that getting AI right is the greatest challenge of our time, and we need a wide variety of people to pitch in.
In this article I will highlight some AI policy roles that I personally endorse and think highly of¹. If you read this blog, you’re likely the target audience for many of these roles. If you’re career searching right now, or just career browsing, consider applying!
~
[DC] Launch Your Career Shaping America's AI and Biotech Policy
The Horizon Fellowship offers an unparalleled pathway into US emerging technology policy with a 100% placement rate at federal agencies, congressional offices, and think tanks.
This fully-funded program provides $113,000/year (or $75,000 for junior fellows) to work on critical AI, biotechnology, and emerging tech challenges in Washington DC either in Congress, the Executive Branch, or a DC-based think tank in a 1-2 year placement.
Fellows receive intensive policy training, mentorship from senior leaders, and join a tight-knit community of public service-oriented experts — with many alumni converting to permanent roles at NSC, OSTP, Commerce, and other key institutions.
Applications close August 28 for the 2026 cohort, with training starting in January 2026. Must already be eligible to work in the US (though US citizenship is not required) and be willing to relocate to DC.
~
[London] The UK government continues to build their AI security work
The AI Security Institute (AISI), the world's largest government team dedicated to AI safety and security, is continuing to hire across research, engineering, and policy. Born from the 2023 Bletchley Park AI Safety Summit, AISI is the UK government's official answer to ensuring humanity doesn't lose control of frontier AI.
These aren’t typical government jobs – AISI operates like a tech startup with the backing and influence of a major government. These roles involve working alongside alumni from Anthropic, DeepMind, OpenAI, Google, and top universities on genuinely existential challenges: preventing AI from enabling bioweapons development, stopping catastrophic cyber attacks, and ensuring we maintain control over increasingly powerful AI systems.
Current roles include:
Senior Testing Product & Strategy Lead: Own the long-term vision for AISI's frontier AI testing program. This role requires 5+ years in tech/AI, deep LLM knowledge, and a proven product leadership track record. Rolling deadline for applications.
Software Engineering Manager - Core Technology: Lead small teams building critical infrastructure (Inspect framework, model hosting). This role requires engineering management experience, Python expertise, and experience working with research teams. Rolling deadline for applications.
Research Manager - Control Team: Lead ~4 researchers developing ‘AI Control’ science to ensure potentially misaligned systems can still be safely used. This role requires research/engineering management experience and a strong understanding of the ‘AI Control’ field. Rolling deadline for applications.
Research Assistant Residency - Human Influence (6-month fixed term, £65k): Support research on AI persuasion/manipulation/deception. This role is ideal for MSc grads/early PhD students with a computational social science or psychology background. This role closes Sept 8.
Private Secretary (2 roles, £33-40k): Executive support to senior leadership - one for Strategy Portfolio, one for Technical Research (requires SC clearance), involves managing calendars, coordinating high-level meetings, and driving strategic initiatives. This role closes Sept 3.
Transformative AI Engagement Officer (£44-49k): Develop cross-government AGI preparedness strategy, brief ministers and officials on transformative AI risks/opportunities. This role requires excellent communication and political awareness. This role closes Sept 2.
Research Scientist - Strategic Awareness: Conduct deep dives on AI trajectories and impacts for government decision-makers. This role requires expertise in AI forecasting, scaling laws, or specific impact areas (e.g., labor markets, loss of control).
Safeguards Technical Governance Researcher: Bridge technical safeguards research with policy/governance and develop actionable frameworks for managing fine-tuning risks, differential access, and evaluation outputs.
Research Scientist - Safeguards: Develop attacks/defenses for LLMs and adversarial testing of frontier systems. This role requires hands-on LLM research experience and ML publications.
Research Scientist - ChemBio: Lead work on AI-enabled chemical/biological threats. This role requires bio/chem science or ML background plus biosecurity expertise and you must be eligible for ‘Developed Vetting’ (largely only for UK nationals).
Software Engineer - Core Technology: Build tools/infrastructure for AI safety research. This role requires Python expertise and production code experience.
Where not otherwise specified, roles usually involve a salary range of £65K-£145K.
Roles are based in London (Whitehall), with hybrid working at 40-60% office time. Roles are generally open to UK nationals, EU/EEA citizens with settled status, and those with Commonwealth work rights. Some roles are more restrictive and limited to UK nationals only.
~
[DC] Shape US Tech Policy on China Competition in the House CCP Committee
The House Select Committee on the CCP is hiring a Professional Staff Member (Technology Staffer) in Washington DC. This role is about as close as it gets to being on the frontlines of US tech–national security policymaking. The position is on the Minority (Democratic) side of the committee. They are looking for someone who can dig deep into export controls, dual-use technologies, and the strategic implications of AI, quantum, biotech, and semiconductors—and then translate that expertise into actionable recommendations for Congress.
You’ll be expected to research, draft memos and proposals, prep hearings, and engage stakeholders across government, industry, think tanks, and academia. The job demands subject-matter expertise in at least one emerging tech area, a strong grasp of export control regimes (EAR, ITAR, Wassenaar, etc.), and 3–5+ years of experience in related national security or innovation policy. Hill, executive branch, or tech-policy experience is strongly preferred; Mandarin is a plus. Applications are reviewed on a rolling basis.
~
[DC] Shape Science and Energy Policy from the House Science, Space, and Technology Committee
Ranking Member Zoe Lofgren (D-CA) of the House Science, Space, and Technology Committee is hiring a Professional Staff Member to drive oversight of the Department of Energy and other science/tech agencies. This role puts you at the center of congressional work on energy innovation and environmental policy — from drafting legislation to briefing Members, shaping hearings, and engaging with stakeholders.
Applicants must be US citizens with substantive prior experience in DOE research/technology or energy/environmental policy, plus sharp writing, communication, and organizational skills. A Capitol Hill or government/industry background is preferred. An advanced degree or technical expertise is a bonus.
Materials (cover letter + resume) are due by September 5 to SciResumes@mail.house.gov.
~
[Bay Area/Remote] Run operations at a fast-growing AI research institute, FAR.AI
FAR.AI is a fast-growing non-profit AI research institute based in Berkeley (with remote employees) that does AI safety research (everything from jailbreaking models to researching alignment) and also runs conferences and workshops. They are hiring for a few operational roles.
They are hiring for a Senior Project Manager on Events to lead some of their conferences and workshop work. The job is full-time (remote US or in-person Berkeley, CA) with $115K–$150K salary plus travel and equipment covered. This role involves owning strategy and execution across different event types (from multi-day international gatherings to technical workshops) and different modalities (working across research, comms, and ops teams while coordinating speakers, sponsors, and venues). Expect one fully covered trip per month, occasional evening/weekend work, and the chance to directly support the researchers and policymakers shaping AI’s future.
This role will be performed best by someone with 5+ years of event project leadership (complex, large-scale, or cross-functional), mastery of logistics/vendor management, strong project management tool fluency (Asana, Airtable, etc.), and the ability to operate independently under pressure. A global mindset and sharp communication skills are musts. Prior exposure to technical, policy, or mission-driven events is a plus.
FAR.AI is also hiring for a People Operations Generalist to help scale a thoughtful, people-first culture as the team doubles in size over the next 18 months.
The role is full-time, based in Berkeley (hybrid possible), with $85K–$110K salary plus travel, equipment, and catered meals. In this role you’d support recruiting, onboarding, compliance, and engagement—ensuring a seamless employee experience from first interview to team retreats.
They want someone with 3–5 years of prior experience in HR/people ops, sharp organizational and communication skills, and comfort working independently in a dynamic environment. Extra points for prior experience in distributed teams, startups, or mission-driven nonprofits.
If those roles aren’t cool enough for you, there’s also the chance to lead it all: FAR.AI is seeking a Chief Operating Officer to lead all operations during this period of rapid growth. Reporting directly to the CEO and incoming President, the COO will manage Finance, People, and Business Operations, scaling the backbone that supports cutting-edge research, major field-building events, and global collaborations.
The COO role is a full-time role based in Berkeley, CA (preferred, hybrid/remote possible) with $175K–$250K compensation plus visa sponsorship, travel, equipment, and catered meals. FAR.AI wants a proven operator with 7+ years of senior leadership, strong financial acumen (prior experience with multi-million dollar budgets, compliance, and risk), and a track record of scaling systems and teams in high-growth, mission-driven environments. Experience in nonprofits, think tanks, or R&D-heavy orgs is a plus, but an AI background isn’t required.
~
[London/Remote] Magnify the impact of AI Governance writing at GovAI
Do you like meticulous analysis and the chance to judge others? The Centre for the Governance of AI (GovAI) is hiring a Senior Research Editor, a role that combines both of these passions, to supercharge the GovAI publications pipeline.
Low editing capacity is now a bottleneck at GovAI, so this is a good opportunity to be a force multiplier — help turn draft research into polished outputs that reach policymakers, researchers, and industry leaders making critical decisions about AI. This means both hands-on editing and building the systems that let GovAI scale from ~15 to 30+ researchers without losing quality. For strong candidates, the role can also include strategic input into research priorities.
Compensation is £76k–£114k (~$99k–$148k) plus benefits, with London location preferred but remote (esp. US) possible, and visa sponsorship available. Strong editing skills are essential; AI governance expertise is a plus but not mandatory. Apply by Aug 31.
~
[DC] Kick off your national security career with CNAS’s NextGen Fellowship
The Center for a New American Security (CNAS) is now accepting applications for the 2026 Shawn Brimley Next Generation National Security Leaders Fellowship. This is a year-long, part-time program designed to support emerging national security professionals who (a) are US citizens, (b) are ages 27–35, and (c) have at least four years of prior professional experience in researching or executing U.S. national security policy.
Fellows will get opportunities for leadership development, networking, and will cap the experience with a week-long international study tour. Fellows also participate in monthly dinners with senior figures — past speakers have included Madeleine Albright, Stanley McChrystal, Jeh Johnson, and Mike Gallagher. NextGen sessions are usually held 5-6 times per fellowship year, in the evenings.
The program is free to join (participants cover their own travel costs) and is based primarily at the CNAS headquarters in Washington DC. Applications close Sep 21.
~
[Remote-US] Turn Research Into a Startup That Shapes AI’s Future with 5050
5050 (run by Fifty Years) is a free, 13-week program for scientists and engineers who want to build startups tackling civilization-scale problems in AI safety, alignment, and beyond. Applications are open until September 14, with the next cohort running September–December in San Francisco, Boston, and remotely.
The entire cohort will go to SF (travel and lodging paid for) for the kickoff weekend (early Sept) and a 3-day off-the-grid experience (Oct 24-26). After the kickoff, every week program members will join office hours with 50Y Partners and afternoon workshops focused on entrepreneurship skills. Workshops are held in-person in SF and Boston, and remotely for participants across the rest of the US.
The program offers mentorship from leaders like Wojciech Zaremba (OpenAI co-founder), Ross Girshick (deep learning pioneer), and Jaan Tallinn (leading entrepreneur and investor). Alumni so far have launched 78 companies with a 95% seed raise success rate, from cancer immunotherapies to decarbonizing shipping. If selected, you’d join a tight cohort of ambitious peers, learn how to turn research into a company, avoid costly mistakes, and explore whether entrepreneurship is right for you.
There are no fees, no equity agreements, and no other strings attached — 5050 is designed to accelerate founders building startups that make AI safe, interpretable, and aligned with human flourishing.
~
[Brussels/EU] Lead EU and Global AI Governance at the Ada Lovelace Institute
The Ada Lovelace Institute is hiring a Head of EU and Global AI Governance to lead its Brussels-based policy and research agenda. This is a senior role shaping how AI and data are governed across Europe and globally, with direct influence in forums like the European Commission, Parliament, Council, UNESCO, and OECD. The role pays from €75,000/year (approx. €6,250/month pre-tax, plus bonus and allowances), is offered on a 2-year contract, and comes with flexible hours, strong benefits, and a hybrid Brussels/London setup.
Ada is looking for a strategic, well-connected policy leader with deep expertise in AI/data governance, experience in legislative or regulatory analysis, and comfort navigating complex EU and international landscapes. Strong research and communication skills, stakeholder engagement, and staff management are required. The deadline to apply is 9:30am BST on September 8.
~
[London] Work on UK policy, also at the Ada Lovelace Institute
The Ada Lovelace Institute is also hiring a Policy Researcher to help shape how AI and data governance evolves in the UK and beyond. Salary starts at £41,767, with a 2-year fixed-term contract, based in London with flexible hybrid options. Applications close 9:30am BST on Sep 15. This is a strong fit for an early-career researcher who wants to make a tangible impact on emerging AI policy.
You’ll work closely with Ada’s Public Policy Lead on projects in law, policy, and governance. They’re looking for someone with strong research and policy chops, ideally with experience in AI/data governance, law, or regulation. Familiarity with Ada’s methods—policy/legal analysis, expert convenings, public deliberation, surveys—is a plus, but they also welcome fresh perspectives from fields like computer science, data science, or futures thinking.
~
[London] Help Demonstrate AI Risks at Apollo
Apollo Research is hiring a 6-month, full-time Evals Demonstration Engineer (London-based, £7,500/month) to design and deliver demonstrations that translate technical AI evaluation findings into compelling formats for policymakers and other non-technical decision-makers. Applications are open until September 10 and reviewed on a rolling basis (early submission encouraged). This role starts as a contract role, but strong performance could lead to a permanent position.
The role requires technical fluency (Python, Inspect framework, ability to run/modify evals) and exceptional communication skills to craft demos, visualizations, and live presentations that resonate with policymakers. Prior experience presenting to government or think tank audiences and creativity in choosing the right medium (interactive, video, report) are core. You’ll collaborate directly with Apollo’s evals and governance leads (Marius Hobbhahn, Charlotte Stix) and produce outputs like live demos, policy-oriented visuals, and blog posts. The role is in-person in London, and UK work eligibility is prioritized (though exceptional candidates elsewhere are invited to apply).
~
[Remote] The Future of Life Institute seeks an editor and investigator
If you want to use storytelling craft to shape the world’s response to transformative technology, the Future of Life Institute is seeking a full-time, remote Editor to lead the creation of written, broadcast, and video content that raises awareness of both the risks and opportunities of advanced AI. Reporting directly to the Director of Communications, the Editor will shape everything from op-eds and blog posts to short films, PSAs, and large-scale media campaigns.
Compensation ranges from $90,000–$190,000 depending on experience and geography. Applications are due September 8. Candidates should bring proven editing/writing experience in journalism, screenwriting, or related fields, strong knowledge of AI safety issues (including alignment, misuse, AGI risks), and the ability to communicate complex ideas in a clear, evocative way. Bonus points for top-tier publication credits, ghostwriting, or campaign management experience.
FLI is also hiring a full-time, remote AI Safety Investigator (salary $90k–$150k + benefits) to document and analyze safety practices at the biggest AI companies, explain incidents to the public, and lead FLI’s flagship AI Safety Index. The role mixes research, field investigation, and communications — building networks inside corporations, rapidly analyzing incidents, and translating findings into clear metrics and visualizations that shape public and policy debates.
They want someone self-directed and sharp at following leads, ideally with a background in journalism, research, and/or AI safety, plus the ability to communicate technical concepts clearly. Occasional Bay Area travel is required. Applications are due September 4.
~
[SF] Help Launch AI Safety Projects at the Center for AI Safety
The Center for AI Safety (CAIS) is hiring a Special Projects Associate and a Special Projects Manager in San Francisco to drive new initiatives at the frontiers of AI safety. In these roles, you’d work directly with CAIS leadership to scout opportunities, design project plans, manage budgets and timelines, and turn ambitious ideas into operational reality. With AI safety communications hitting millions and public curiosity accelerating, CAIS is looking to scale fast — and this is a chance to help shape that surge into impactful, mission-aligned projects.
Both positions are full-time, on-site and offer strong compensation ($100K–140K for Associate, $120K–150K for Manager). Requirements are a bachelor’s degree (advanced degree optional), 1–2 years’ experience (Associate) or 2–4 years (Manager) in startups/ops/consulting/project management, proven ability to learn new domains quickly, and genuine interest in AI safety.
~
CAIS is also hiring a Finance Manager in San Francisco to take full ownership of financial operations across both its nonprofit arms — a 501c3 and a 501c4. The role oversees everything from audits, tax filings, and nonprofit governance policies to payroll, expense systems, and donor reporting — working directly with leadership, legal counsel, and the operations team to keep the organization financially sound and compliant.
The role offers $110K–150K, hybrid flexibility, and strong benefits. Candidates should bring 4+ years in finance or nonprofit accounting, with experience managing audits, budgets, and compliance processes; familiarity with Xero, Ramp, Rippling, Stripe, or similar platforms is a plus. Experience across both 501c3 and 501c4 entities is preferred but not required.
~
Lastly, CAIS is looking for a Director of Public Engagement in San Francisco to shape how the world understands the risks of advanced AI. You’ll lead multi-channel campaigns, craft narratives that resonate with diverse audiences, and serve as a visible spokesperson for one of the most influential organizations in the field.
CAIS is looking for someone with proven chops in campaign strategy, media production, and public communication—ideally someone who’s run large-scale awareness efforts before. The position offers $140k–$170k/year and strong benefits.
Applications for all three roles are reviewed on a rolling basis.
~
[Berkeley CA] Build AI Demos with CivAI
CivAI, a nonprofit based in Berkeley, is hiring a Senior Software Engineer to create interactive demos that bring AI’s capabilities and risks to life. Instead of publishing papers, CivAI builds tangible products/demos that policymakers, journalists, and the public can use first-hand. This helps people intuitively grasp what advanced AI can and can’t do. CivAI’s work has already reached 60+ government offices and been featured on ABC, NPR, CNN, and WaPo.
This is a $150k–$200k full-time, on-site role for a generalist engineer. The team values both hacker spirit and strong UI/UX instincts. 5+ years of experience is preferred (though strong candidates with 3+ years will be considered). Applications are on a rolling basis.
~
[Remote-US] Run Projects at the Edge of AI and Biosecurity with RAND’s Meselson Center
RAND’s new Meselson Center is hiring an AI/Bio Research Project Manager (2-year term, $75k–$156k, hybrid/remote eligible) to drive high-impact work at the intersection of frontier AI security and biological risk reduction. As both AI and biotech risks accelerate and increasingly overlap, RAND wants a project manager who can keep ambitious research and policy projects moving fast and effectively.
This role involves coordinating interdisciplinary teams, managing research deliverables, supporting recruiting rounds, overseeing budgets/contracts, and helping run workshops that convene top talent. Requirements include at least 4 years of project management/operations experience (a Master’s or PMP can substitute), strong organizational/communication skills, and the ability to juggle multiple high-stakes projects. Preferred locations are DC, Santa Monica, Boston, or Pittsburgh, but remote is possible. Clearance eligibility is a plus but not required. Applications are open now on a rolling basis.
~
[London] Do operations at 80,000 Hours
80,000 Hours is a non-profit that’s helped thousands of people shift their careers toward the world’s most pressing problems and is now focused on helping people get careers working on AI risk. Their growing team (now ~35 staff) needs more operational backbone: from ops generalists to specialists in events, recruiting, people ops, office management, executive support, and video production. They’re also seeking an IT security and data privacy specialist to strengthen internal systems.
This is a bit meta, but perhaps the best way to help with AGI risk is to do operations for the organization that coaches people into AGI-related careers. Leverage! They’re looking for people who are organized, detail-oriented, clear communicators, flexible, and motivated by 80,000 Hours’ mission.
Compensation is £41k–£75k for generalist roles and £55k–£80k for the IT security role, with strong benefits. Roles are full-time, London-based by default (visa sponsorship possible), but some remote arrangements are considered. The deadline to apply is September 1.
~
[Remote/Global] Run Operations That Power AI Safety Projects
Rethink Priorities is hiring a Special Projects Associate/Coordinator for a 5-month parental leave cover in a full-time remote setting to keep high-impact initiatives running smoothly. This is a $70–90k/year equivalent role (prorated) starting late September/early October 2025, with benefits and possible extension. You’ll work across time zones (UK/California overlap required) and serve as the operations lead for 2–3 projects advancing safe and aligned AI, from budgeting and compliance to HR support, contracts, and project management.
The job is for someone who’s comfortable with generalist ops work (finance, HR, compliance, project management), proactive in solving problems, and eager to support initiatives like launching new AI safety orgs, running fellowships, and incubating policy talent. Bonus points if you’ve touched US nonprofit finance/compliance. Applications are open now and judged on a rolling basis — ideal start date is Sept 29 (latest Oct 6, 2025).
~
[Remote] Train on AGI Strategy with BlueDot Impact’s Course
BlueDot Impact is offering a free AGI Strategy Course designed to prepare people to shape the future of artificial general intelligence. BlueDot, a non-profit spun out of Cambridge that has already trained 5,000+ AI safety professionals, built this program to give participants the tools, frameworks, and community to engage with the highest-stakes policy, technical, and governance challenges of our time. Alumni have gone on to roles at OpenAI, Anthropic, NATO, the UN, OECD, and national AI directorates.
The course is virtual, free, and highly structured, with small peer groups, expert facilitators, and flexible pacing: either a 6-day intensive (5 hrs/day) or a 6-week part-time option (5 hrs/week). Every month a new round starts, with rolling applications. Participants will spend 2–3 hours preparing for each live discussion and then dive into guided conversations with other motivated peers. If you want to pivot into AI safety or policy—or level up from a related background—this is one of the few high-signal entry points with a strong track record of career impact.
~
The go-to for navigating emerging tech policy
The US government and allied institutions urgently need digital and scientific talent to keep pace with transformative tech. But the entry points are fragmented and opaque.
Emergingtechpolicy.org is a great one-stop guide for anyone who wants to break into public service careers at the intersection of technology and policy. Whether you’re a student mapping out your first steps, a technologist pivoting into policy, or a policy pro tackling new tech domains, it offers structured pathways, curated resources, and expert advice.
The content spans everything from career essentials (fit testing, resumes, networking) to policy institutions (Congress, think tanks, federal agencies, ARPAs, intelligence community). It also profiles real opportunities — internships, fellowships, full-time roles — and helps demystify how to navigate the AI policy world, with a special focus on DC.
~
Disclaimers: A job being featured here means that I like the people who work there and the organization, but does not mean that I endorse or agree with all of their opinions and policy positions. In many cases, I strongly disagree! However, I think AI policy should be a ‘big tent’ enterprise with robust debate across many differing perspectives, so I did not apply an ideological filter to these jobs. I recommend applicants do their own research as to what each organization stands for and apply accordingly.
Additionally, I am featuring jobs based on my own independent judgement and I am not directly associated with any of the opportunities listed here (unless otherwise noted). I cannot answer questions about any of the roles I am not directly involved with.
~
Magnificent!
I suspect that I'm first to comment -- after 7 hours -- because everyone else must be busy responding to your 40 opportunities.
You said of the UK AISI jobs:
> Roles are generally open to UK nationals, EU/EEA with settled status, and those with commonwealth work rights. Some roles are more restrictive and limited to UK nationals only.
... but in fact some (most?) of the jobs can accept applicants of other nationalities as well. I'd refer to each job page for details.