About the AI Safety Institute

The AI Safety Institute (AISI), launched at the 2023 Bletchley Park AI Safety Summit, is the world's first state-backed organisation dedicated to advancing AI safety in the public interest. Our mission is to assess and mitigate risks from frontier AI systems, including cyber attacks on critical infrastructure, AI-enhanced chemical and biological threats, large-scale societal disruptions, and potential loss of control over increasingly powerful AI. In just one year, we've assembled one of the largest and most respected model evaluation teams, featuring renowned scientists and senior researchers from leading AI labs such as Anthropic, DeepMind, and OpenAI.

At AISI, we're building the premier institution for impacting both technical AI safety and AI governance. We conduct cutting-edge research, develop novel evaluation tools, and provide crucial insights to governments, companies, and international partners. By joining us, you'll collaborate with the brightest minds in the field, directly shape global AI policies, and tackle complex challenges at the forefront of technology and ethics. Whether you're a researcher, engineer, or policy expert, at AISI you're not just advancing your career; you're positioned to have a significant impact in the age of artificial intelligence.

Autonomous Systems

We're focused on extreme risks from autonomous AI systems - those capable of interacting with the real world. To address this, we're advancing the state of the science in risk modeling, incorporating insights from other safety-critical and adversarial domains, while developing our own novel techniques. We're also empirically evaluating these risks - building out one of the world's largest agentic evaluation suites, as well as pushing forward the science of model evaluations, to better understand the risks and predict their materialisation.

Role Summary

As a manager, you'll head up a multi-disciplinary team of scientists, engineers and domain experts focused on the risks we are investigating. Your team is given a great deal of autonomy to pursue research directions and build evaluations that relate to your team's over-arching threat model. This includes coming up with ways of breaking down the space of risks, as well as designing and building ways to evaluate them. All of this is done within an extremely collaborative environment, where everyone does a bit of everything. Some of the areas we focus on include:

  • Research and Development (R&D). Investigating AI systems' potential to conduct research, particularly in sensitive areas. This includes studying AI capabilities in developing dual-use technologies, unconventional weapons, and accelerating AI and hardware (GPU) development.
  • Self-replication. Researching the potential for AI systems to autonomously replicate themselves across networks and studying their ability to establish persistence.
  • Human influence. Assessing AI models' capacity to manipulate, persuade, or coerce individuals and groups. This covers techniques for general human influence, key individual manipulation, social fabric alteration, and the accumulation of social and political power.
  • Dangerous resource acquisition. Examining AI models' ability to navigate restricted or illegal domains for acquiring resources or services. This encompasses research into general acquisition of dual-use resources, circumvention of embargoes and acquisition of human assets.
  • Deceptive alignment. Evaluating AI systems' potential to display deceptive behaviours. This includes research into AI's ability to misrepresent its capabilities, conceal its true objectives, and strategically behave in ways that may not align with its actual goals or knowledge.

How you can contribute

In this role, you will manage a team of exceptional, highly motivated individuals (within Autonomous Systems we have people from OpenAI, DeepMind, Anthropic, Meta and AWS). You'll also grow your team from the outset: getting stuck in with sourcing the specific people we want, often world-leading experts in their domains, then helping run the full hiring process and getting them excited about the role.

Within your team, you're expected to provide excellent management, including building strong relationships with the other team members and giving regular feedback and coaching. You'll receive mentorship and coaching both from the work-stream lead (Alan Cooney) and from the broader group of research directors in AISI (people like Geoffrey Irving and Yarin Gal). In addition, we have a very strong learning culture, including paper-reading groups and Friday afternoons dedicated to deep learning.

Person Specification

You may be a good fit if you have some of the following skills, experience and attitudes:

  • Experience people managing strong research or engineering teams, with well thought-out views on management philosophy and style.
  • Experience coaching team members and providing feedback.
  • Strong understanding of large language models (e.g. GPT-4). This includes both a broad understanding of the literature, as well as hands-on experience with either research or research engineering.
  • Strong written and verbal communication skills.
  • A track record of helping teams achieve exceptional things.
  • Experience working with world-class multi-disciplinary teams (e.g. in a top-3 lab).
  • Experience working within a research team that has delivered multiple exceptional scientific breakthroughs in deep learning (or a related field).
  • Strong track-record of academic excellence (e.g. multiple spotlight papers at top-tier conferences).
  • Experience acting as a bar raiser for interviews.

Salary & Benefits

We are hiring individuals at all levels of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows:

  • L4: £85,000 - £95,000
  • L5: £105,000 - £115,000
  • L6: £125,000 - £135,000

There are a range of pension options available which can be found through the Civil Service website.


Selection Process

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.

Required Experience

We select based on skills and experience regarding the following areas:

  • Coaching & Mentoring
  • Feedback
  • Building Relationships
  • Strategy
  • Management Styles
  • Engineering Leadership
  • Research Leadership
  • Verbal Communication
  • Written Communication
  • AI Safety Knowledge
  • Frontier AI Knowledge & Experience
  • Model Evaluations Knowledge
  • Teamwork
  • Interpersonal skills
  • Tackling challenging problems
  • Learning through coaching

Desired Experience

We may additionally factor in experience with any of the areas that our work-streams specialise in:

  • Autonomous systems
  • Cyber security
  • Chemistry or Biology
  • Safeguards
  • Safety Cases
  • Societal Impacts

Additional Information

Internal Fraud Database 

The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. Participating government organisations provide these details to the Cabinet Office, and the individuals concerned are banned from further employment in the Civil Service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC, as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the Civil Service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.

Security

Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.

Nationality requirements

We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.

Working for the Civil Service

The Civil Service Code sets out the standards of behaviour expected of civil servants. We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles. The Civil Service embraces diversity and promotes equal opportunities. As such, we run a Disability Confident Scheme (DCS) for candidates with disabilities who meet the minimum selection criteria. The Civil Service also offers a Redeployment Interview Scheme to civil servants who are at risk of redundancy and who meet the minimum requirements for the advertised vacancy.

Diversity and Inclusion

The Civil Service is committed to attracting, retaining and investing in talent wherever it is found. To learn more, please see the Civil Service People Plan and the Civil Service Diversity and Inclusion Strategy.
