About the AI Security Institute

The AI Security Institute is the world's largest government team dedicated to understanding AI capabilities and risks. 

Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications, including the potential for AI to assist in the development of chemical and biological weapons, to carry out cyber-attacks, and to enable crimes such as fraud, as well as the possibility of loss of control. 

The risks from AI are not science fiction; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we’re building a unique and innovative organisation to prevent AI’s harms from impeding its potential. 

About the Team 

As AI systems become more advanced, the potential for misuse of their cyber capabilities may pose a threat to the security of organisations and individuals. Cyber capabilities also form common bottlenecks in scenarios across other AI risk areas, such as harmful outcomes from biological and chemical capabilities and from autonomous systems. One approach to better understanding these risks is to conduct robust empirical tests of AI systems, so we can measure how capable they currently are at performing cyber security tasks. 

The AI Security Institute’s Cyber Evaluations Team is developing first-of-its-kind government-run infrastructure to benchmark the progress of advanced AI capabilities in the domain of cyber security. Our goal is to carry out and publish scientific research supporting a global effort to understand the risks and improve the safety of advanced AI systems. Our current focus is on building difficult cyber tasks against which we can measure the performance of AI agents. 

We are building a cross-functional team of cybersecurity researchers, machine learning researchers, research engineers and infrastructure engineers to help us create new kinds of capability and safety evaluations, and to scale up our capacity to evaluate frontier AI systems as they are released. 

 


We are also open to hiring technical generalists with a background spanning many of these areas, as well as threat intelligence experts with a focus on researching novel cyber security risks from advanced AI systems. 

 

RESPONSIBILITIES 

As a Cyber Security Researcher at AISI, your role will range from helping design our overall research strategy and threat model to working with research and infrastructure engineers to build environments and challenges against which to benchmark the capabilities of AI systems. You may also coordinate teams of internal and external cyber security experts in open-ended probing exercises to explore the capabilities of AI systems, or explore the interactions between narrow cyber automation tools and general-purpose AI systems. 

 Your day-to-day responsibilities could include: 

  • Designing CTF-style challenges and other methods for automatically grading the performance of AI systems on cyber-security tasks. 
  • Advising ML research scientists on how to analyse and interpret results of cyber capability evaluations. 
  • Writing reports, research papers and blog posts to share our research with stakeholders. 
  • Helping to evaluate the performance of general purpose models when they are augmented with narrow red-teaming automation tools such as Wireshark, Metasploit, and Ghidra. 
  • Keeping up-to-date with related research taking place in other organisations. 

 

PERSON SPECIFICATION 

 You will need experience in at least one of the following areas: 

  • Proven experience related to cyber-security red-teaming, such as: 
      • Penetration testing 
      • Cyber range design 
      • Competing in or designing CTFs 
      • Developing automated security testing tools 
      • Bug bounties, vulnerability research, or exploit discovery and patching 
  • Communicating the outcomes of cyber security research to a range of technical and non-technical audiences. 
  • Familiarity with cybersecurity tools and platforms such as Wireshark, Metasploit or Ghidra. 
  • Software skills in one or more relevant domains such as network engineering, secure application development, or binary analysis. 

 

This role might be a great fit if: 

  • You have a strong interest in helping improve the safety of AI systems. 
  • You are active in the cyber security community and enjoy keeping up to date with new research in this field. 
  • You have previous experience building or measuring the impact of new automation tools on cyber red-teaming workflows. 

 

Core requirements 

  • You should be able to spend at least 4 days per week working with us 
  • You should be able to join us for at least 24 months 
  • You should be able to work from our office in London (Whitehall) for parts of the week, but we provide flexibility for remote work 

 
Salary & Benefits

We are hiring individuals at all levels of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; each salary comprises a base salary and a technical talent allowance, plus additional benefits as detailed on this page.

  • Level 3 - Total Package £65,000 - £75,000 inclusive of a base salary £35,720 plus additional technical talent allowance of between £29,280 - £39,280
  • Level 4 - Total Package £85,000 - £95,000 inclusive of a base salary £42,495 plus additional technical talent allowance of between £42,505 - £52,505
  • Level 5 - Total Package £105,000 - £115,000 inclusive of a base salary £55,805 plus additional technical talent allowance of between £49,195 - £59,195
  • Level 6 - Total Package £125,000 - £135,000 inclusive of a base salary £68,770 plus additional technical talent allowance of between £56,230 - £66,230
  • Level 7 - Total Package £145,000 inclusive of a base salary £68,770 plus additional technical talent allowance of £76,230

This role sits outside of the DDaT pay framework because its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

Government Digital and Data Profession Capability Framework

There are a range of pension options available which can be found through the Civil Service website. 

 


Additional Information

Internal Fraud Database 

The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. In such instances, civil servants are banned from further employment in the Civil Service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters attempt to reapply for roles in the Civil Service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.

Security

Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.

 

Nationality requirements

We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).

Working for the Civil Service

The Civil Service Code (opens in a new window) sets out the standards of behaviour expected of civil servants. We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles (opens in a new window). The Civil Service embraces diversity and promotes equal opportunities. As such, we run a Disability Confident Scheme (DCS) for candidates with disabilities who meet the minimum selection criteria. The Civil Service also offers a Redeployment Interview Scheme to civil servants who are at risk of redundancy, and who meet the minimum requirements for the advertised vacancy.

Diversity and Inclusion

The Civil Service is committed to attracting, retaining and investing in talent wherever it is found. To learn more, please see the Civil Service People Plan (opens in a new window) and the Civil Service Diversity and Inclusion Strategy (opens in a new window).
