About the AI Safety Institute

The AI Safety Institute (AISI), launched at the 2023 Bletchley Park AI Safety Summit, is the world's first state-backed organization dedicated to advancing AI safety in the public interest. Our mission is to prevent extreme risks from advanced AI systems, including cyber attacks on critical infrastructure, AI-enhanced chemical and biological threats, large-scale societal disruptions, and potential loss of control over increasingly powerful AI. In just one year, we've assembled one of the world's largest and most respected model evaluation teams, featuring world-renowned scientists and many senior researchers from leading AI labs such as Anthropic, DeepMind, and OpenAI.

At AISI, we're building the premier institution for shaping both technical AI safety and AI governance. We conduct cutting-edge research, develop novel evaluation tools, and provide crucial insights to governments and international partners. By joining us, you'll collaborate with the brightest minds in the field, directly shape global AI policies, and tackle complex challenges at the forefront of technology and ethics. Whether you're a researcher, engineer, or policy expert, at AISI you're not just advancing your career – you're positioned to have one of the most significant impacts on humanity's future in the age of artificial intelligence.

The Systemic Safety and Responsible Innovation team 

AISI is expanding our Systemic Safety team. This team is focused on identifying and catalyzing interventions that could advance the field of AI safety and strengthen the systems and infrastructure in which AI systems are deployed. The team will lead the effort to shape the AI ecosystem to ensure safe, scalable, and responsible AI development.

The team will work with the private sector, academia, and model developers to develop actionable technical innovations that governments or industry could implement. We are recruiting a Workstream Lead to spearhead this effort, grow the team, and drive forward efforts to ensure that AI’s integration into society is safe and responsible. 

Role Summary 

As the Workstream Lead for this team, you will build and lead a multidisciplinary team focused on pushing systemic safety forward as an agenda and creating the global environment for responsible innovation. Your responsibilities will be to: 

  • Develop and deliver a strategy to advance systemic AI safety research 
  • Develop a market-shaping agenda, seeking to create a system of third-party evaluators and catalyze broader safety innovation. 
  • Build and lead a talent-dense, mission-aligned, multidisciplinary team focused on systemic AI safety 
  • Forge strong links with partners in academia, industry, government and civil society organisations, to identify and catalyze high-impact research collaborations.  
  • Create and manage an agile and effective set of interventions for AISI to deploy, including grants, competitions, and partnership arrangements. You will have high autonomy to explore what is possible. 
  • Collaborate across AISI’s Research Unit teams to reach shared views on high-potential technical directions, and then work together to find ways of catalyzing these amongst non-AISI partners. 
  • Act as part of AISI’s leadership team, setting the culture and supporting staff. 

The position offers the successful candidate an opportunity to play a high-impact role as AISI’s bridge into industry and academia. You will play a central role in ensuring that AI safety and innovation become the remit of an increasing number of players. 

Person specification: 

You may be a good fit if you have some of the following skills, experience, and attitudes: 

  • A track record of working to ensure positive outcomes for all of society from the creation of AI systems 
  • Experience managing a broad portfolio of externally delivered projects, grant programs, and/or other funding initiatives. A deep understanding of funding mechanisms would be beneficial. 
  • History of developing market-shaping innovations to grow sectors through investments and other activities. 
  • Proven ability to catalyze innovation through targeted investments; experience doing so in AI or the broader tech sector is additionally desirable. 
  • Skilled at identifying and assessing promising proposals. You can spot high-potential projects that deliver meaningful results. 
  • Excellent written and verbal communication skills. 
  • Strong relationship management skills: capable of building and maintaining long-term relationships with academic institutions and industry partners. 
  • Strategic thinker with a focus on outcomes.  
  • Familiarity with the academic research and innovation funding landscape.  
  • Comprehensive understanding of the field of AI and large language models (e.g. GPT-4), including current issues and risks. 

Additional Information

Internal Fraud Database 

The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. Participating government organisations provide these details to the Cabinet Office, and the civil servants concerned are banned from further employment in the civil service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information please see the Internal Fraud Register.

Security

Successful candidates must undergo a criminal record check and must meet the security requirements before they can be appointed. The level of security needed is counter-terrorist check (opens in a new window). See our vetting charter (opens in a new window). People working with government assets must complete baseline personnel security standard (opens in a new window) checks.

Nationality requirements

We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).

Working for the Civil Service

The Civil Service Code (opens in a new window) sets out the standards of behaviour expected of civil servants. We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles (opens in a new window). The Civil Service embraces diversity and promotes equal opportunities. As such, we run a Disability Confident Scheme (DCS) for candidates with disabilities who meet the minimum selection criteria. The Civil Service also offers a Redeployment Interview Scheme to civil servants who are at risk of redundancy, and who meet the minimum requirements for the advertised vacancy.

Diversity and Inclusion

The Civil Service is committed to attracting, retaining and investing in talent wherever it is found. To learn more please see the Civil Service People Plan (opens in a new window) and the Civil Service Diversity and Inclusion Strategy (opens in a new window).
