About the AI Safety Institute

The AI Safety Institute (AISI), launched at the 2023 Bletchley Park AI Safety Summit, is the world's first state-backed organization dedicated to advancing AI safety for the public interest. Our mission is to prevent extreme risks from advanced AI systems, including cyber attacks on critical infrastructure, AI-enhanced chemical and biological threats, large-scale societal disruptions, and potential loss of control over increasingly powerful AI. In just one year, we've assembled one of the planet's largest and most respected model evaluation teams, featuring world-renowned scientists and many senior researchers from leading AI labs such as Anthropic, DeepMind, and OpenAI.

At AISI, we're building the premier institution for impacting both technical AI safety and AI governance. We conduct cutting-edge research, develop novel evaluation tools, and provide crucial insights to governments and international partners. By joining us, you'll collaborate with the brightest minds in the field, directly shape global AI policies, and tackle complex challenges at the forefront of technology and ethics. Whether you're a researcher, engineer, or policy expert, at AISI, you're not just advancing your career – you're positioned to have one of the most significant impacts on humanity's future in the age of artificial intelligence.

Role Description

The AI Safety Institute research unit is looking for exceptionally motivated and talented people to join its Safeguard Analysis Team. 

Interventions that secure a system from abuse by bad actors will grow in importance as AI systems become more advanced and integrated into society. The AI Safety Institute’s Safeguard Analysis Team researches such interventions, which it refers to as 'safeguards', evaluating protections used to secure current frontier AI systems and considering what measures could and should be used to secure such systems in the future. 

The Safeguard Analysis Team takes a broad view of security threats and interventions. It's keen to hire researchers with expertise in developing and analysing attacks and protections for systems based on large language models, but is also keen to hire security researchers who have historically worked outside of AI, such as (non-exhaustively) in computer security, information security, web technology policy, and hardware security. Diverse perspectives and research interests are welcomed.

The Team seeks people whose skillsets lean towards Research Scientist, Research Engineer, or both, recognising that some technical staff may prefer work that spans or alternates between engineering and research responsibilities. The Team's priorities include research-oriented responsibilities – like assessing the threats to frontier systems and developing novel attacks – and engineering-oriented ones, such as building infrastructure for running evaluations.

In this role, you’ll receive mentorship and coaching from your manager and the technical leads on your team. You'll also regularly interact with world-famous researchers and other incredible staff, including alumni from Anthropic, DeepMind, OpenAI and ML professors from Oxford and Cambridge.

In addition to Junior roles, Senior, Staff and Principal RE positions are available for candidates with the required seniority and experience.

Person Specification

You may be a good fit if you have some of the following skills, experience and attitudes:

  • Experience working on machine learning, AI, AI security, computer security, information security, or some other security discipline in industry, in academia, or independently.
  • Experience working with a world-class research team composed of both scientists and engineers (e.g. in a top-3 lab).
  • Red-teaming experience against any sort of system.
  • Strong written and verbal communication skills.
  • Comprehensive understanding of large language models (e.g. GPT-4), including both a broad understanding of the literature and hands-on experience with tasks like pre-training or fine-tuning LLMs.
  • Extensive Python experience, including understanding the intricacies of the language, the good vs. bad Pythonic ways of doing things and much of the wider ecosystem/tooling.
  • Ability to work in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team, while figuring out the best and most efficient ways to solve a particular problem.
  • Bringing your own voice and experience, together with an eagerness to support your colleagues, a willingness to do whatever is necessary for the team's success, and a drive to find new ways of getting things done within government.
  • A sense of mission, urgency, and responsibility for success, with demonstrated problem-solving abilities and a preparedness to acquire any missing knowledge needed to get the job done.
  • Writing production-quality code.
  • Improving technical standards across a team through mentoring and feedback.
  • Designing, shipping, and maintaining complex tech products.

Salary & Benefits

We are hiring individuals at all ranges of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows:

  • L3: £65,000 - £75,000
  • L4: £85,000 - £95,000
  • L5: £105,000 - £115,000
  • L6: £125,000 - £135,000
  • L7: £145,000

There are a range of pension options available which can be found through the Civil Service website.

Selection Process

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.

Required Experience

This job advert encompasses a range of possible research and engineering roles within the Safeguard Analysis Team. The 'required' experiences listed below should be interpreted as examples of the expertise we're looking for, as opposed to a list of everything we expect to find in one applicant:

  • Writing production quality code
  • Writing code efficiently
  • Python
  • Frontier model architecture knowledge
  • Frontier model training knowledge
  • Model evaluations knowledge
  • AI safety research knowledge
  • Security research knowledge
  • Research problem selection
  • Research science
  • Written communication
  • Verbal communication
  • Teamwork
  • Interpersonal skills
  • Tackle challenging problems
  • Learn through coaching

Additional Information

Internal Fraud Database 

The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. Civil servants on the register are banned for 5 years from further employment in the civil service. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.

Security

Successful candidates must undergo a criminal record check and must meet the security requirements before they can be appointed. The level of security needed is counter-terrorist check (opens in a new window). See our vetting charter (opens in a new window). People working with government assets must complete baseline personnel security standard (opens in a new window) checks.

Nationality requirements

We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).

Working for the Civil Service

The Civil Service Code (opens in a new window) sets out the standards of behaviour expected of civil servants. We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles (opens in a new window). The Civil Service embraces diversity and promotes equal opportunities. As such, we run a Disability Confident Scheme (DCS) for candidates with disabilities who meet the minimum selection criteria. The Civil Service also offers a Redeployment Interview Scheme to civil servants who are at risk of redundancy, and who meet the minimum requirements for the advertised vacancy.

Diversity and Inclusion

The Civil Service is committed to attracting, retaining and investing in talent wherever it is found. To learn more please see the Civil Service People Plan (opens in a new window) and the Civil Service Diversity and Inclusion Strategy (opens in a new window).
