About the AI Security Institute

The AI Security Institute is the world's largest team in government dedicated to understanding AI capabilities and risks.

Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications, including the potential for AI to assist in the development of chemical and biological weapons, its use in carrying out cyber-attacks and enabling crimes such as fraud, and the possibility of loss of control.

The risks from AI are not sci-fi; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we're building a unique and innovative organisation to prevent AI's harms from impeding its potential.

This role sits outside of the DDaT pay framework because its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

We will review applications on an ongoing basis.

Applications will close on 9 March at 23:59, anywhere on Earth.

AISI is starting a team to focus on AI alignment. This team will examine ways to prevent models from autonomously attempting to cause harm. The team’s research will be led by Geoffrey Irving, AISI’s Chief Scientist. 

This exciting new team is part of AISI’s Solutions Group, which will also examine ways to prevent misuse risks (our Safeguards team) and ways to prevent models from causing harm, even if they are autonomously attempting to do so (our Control team).   

ROLE SUMMARY 

As a Research Scientist working on alignment, you will: 

  • Conduct foundational research, alongside AISI experts and external collaborators, that pushes forward the frontier of our understanding of how to make highly advanced AI systems safe
  • Break down the problem by producing safety case sketches for highly-capable AI systems using alignment and scalable oversight techniques 
  • Make recommendations about the best places to direct funding for alignment research, and supervise this external research in a 'PI'-type role

The role offers a unique opportunity to work closely alongside the world’s best technical talent, including Geoffrey Irving who leads the alignment workstream, as well as external experts, partner organisations and policy makers. There will be significant scope to contribute to the overall vision and strategy of the alignment team as an early hire.  

We will initially focus on AI safety via debate (Irving et al., 2018), as that’s our current area of expertise, but we would be excited to hire people to focus on other alignment agendas. 

RESPONSIBILITIES 

This role offers the opportunity to progress deep technical work at the frontier of AI safety and governance. Your work would likely include:  

  • Detailed research on alignment. We think it's important to get into the details. This might mean trying to write a safety case sketch based on existing techniques.
  • High-level research into alignment (e.g. what novel methods might be used, and what properties one would need for these arguments to be correct). 
  • Input into our strategy, which focuses on: how to get more alignment work to occur, which approaches are worthy of further study, and how to improve the chances that future frontier AI systems (AGI and post-AGI) are safe.
  • Collaboration with external partners (e.g. frontier labs, academics) on joint research into alignment. 

Person Specification 

We are interested in hiring individuals across a range of seniority and experience levels within this team, including for Senior ML Research Scientist positions. Calibration on final title, seniority and pay will take place as part of the recruitment process. We encourage all candidates who would be interested in joining to apply.

You may be a good fit if you have some of the following skills, experience and attitudes: 

  • Relevant machine learning research experience in industry, open-source collectives, or academia, in a field related to machine learning, AI, AI security, or computer security.
  • Broad knowledge of existing approaches to alignment (T-shaped: some deep knowledge, lots of shallow knowledge). 
  • Strong writing ability. 
  • Ability to work autonomously, in a self-directed way and with high agency, thriving in a constantly changing environment and a steadily growing team, while figuring out the best and most efficient ways to solve a particular problem.
  • Bring your own voice and experience, along with an eagerness to support your colleagues, a willingness to do whatever is necessary for the team's success, and a readiness to find new ways of getting things done within government.
  • Have a sense of mission, urgency, and responsibility for success, demonstrating problem-solving abilities and a preparedness to acquire any missing knowledge necessary to get the job done.
  • Comprehensive understanding of large language models (e.g. Claude 3.5). This includes both a broad understanding of the literature and hands-on experience with things like pre-training or fine-tuning LLMs.
  • Direct research experience (e.g. PhD in a technical field and/or spotlight papers at NeurIPS/ICML/ICLR).   
  • Experience working with world-class multi-disciplinary teams, including both scientists and engineers (e.g. in a top-3 lab).   

Salary & Benefits 

We are hiring individuals at all levels of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is set out below; each salary comprises a base salary, a technical allowance and additional benefits as detailed on this page.

  • Level 3 - Total Package £65,000 - £75,000, inclusive of a base salary of £35,720 plus an additional technical talent allowance of between £29,280 and £39,280
  • Level 4 - Total Package £85,000 - £95,000, inclusive of a base salary of £42,495 plus an additional technical talent allowance of between £42,505 and £52,505
  • Level 5 - Total Package £105,000 - £115,000, inclusive of a base salary of £55,805 plus an additional technical talent allowance of between £49,195 and £59,195
  • Level 6 - Total Package £125,000 - £135,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of between £56,230 and £66,230
  • Level 7 - Total Package £145,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of £76,230

 


Additional Information

Internal Fraud Database 

The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. Civil servants dismissed for internal fraud are banned from further employment in the Civil Service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the Civil Service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information please see the Internal Fraud Register.

Security

Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.

 

Nationality requirements

We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).

Working for the Civil Service

The Civil Service Code (opens in a new window) sets out the standards of behaviour expected of civil servants. We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles (opens in a new window). The Civil Service embraces diversity and promotes equal opportunities. As such, we run a Disability Confident Scheme (DCS) for candidates with disabilities who meet the minimum selection criteria. The Civil Service also offers a Redeployment Interview Scheme to civil servants who are at risk of redundancy, and who meet the minimum requirements for the advertised vacancy.

Diversity and Inclusion

The Civil Service is committed to attracting, retaining and investing in talent wherever it is found. To learn more please see the Civil Service People Plan (opens in a new window) and the Civil Service Diversity and Inclusion Strategy (opens in a new window).
