About the AI Security Institute
The AI Security Institute is the world's largest government team dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, its use in carrying out cyber-attacks, its role in enabling crimes such as fraud, and the possibility of loss of control.
The risks from AI are not sci-fi; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we're building a unique and innovative organisation to prevent AI's harms from impeding its potential.
Control Team
Our team focuses on ensuring that frontier AI systems can be effectively controlled even if they are misaligned. To achieve this, we are working to advance the state of conceptual research into control protocols and corresponding safety cases. We will also conduct realistic empirical research on mock frontier AI development infrastructure, to identify flaws in theoretical approaches and refine them accordingly.
Role Summary
You will be part of a team of 11 researchers, including people with experience in the control agenda and/or experience at frontier labs. Your work will involve a mix of both conceptual and empirical research, with the core goal of making substantial improvements in the robustness of control protocols across major labs, particularly as progress continues towards AGI.
Research partnerships with frontier AI labs will also be a significant part of your role. This will include collaborating on promising research directions (e.g., more realistic empirical experiments in settings that closely mimic lab infrastructure), as well as supporting development of control-based safety cases.
You will report to Alan Cooney, our team lead. You will also receive research mentorship from our research directors, including Geoffrey Irving and Yarin Gal. You will have excellent access to compute, drawing on resources from both our research platform team and the UK's Isambard supercomputer (5,000 H100s).
Person Specification
You may be a good fit if you have some of the following skills, experience and attitudes. Please note that you don’t need to meet all of these criteria, and if you're unsure, we encourage you to apply.
- Experience working in a research team or group that has delivered exceptional research in deep learning or a related field.
- Comprehensive understanding of frontier AI development, including key processes involved in research, data collection & generation, pre-training, post-training and safety assessment.
- Proven track record of academic excellence, demonstrated by novel research contributions and spotlight papers at top-tier conferences (e.g., NeurIPS, ICML, ICLR).
- Exceptional written and verbal communication skills, with the ability to convey complex ideas clearly and effectively to diverse audiences.
- Extensive experience in collaborating with multi-disciplinary teams, including researchers and engineers, and leading high-impact projects.
- A strong desire to improve the global state of AI safety.
- Prior experience working on control is desirable but not required for this role.
Salary & Benefits
We are primarily hiring at levels 5-7 (L5-L7) of the following scale. The full range of salaries is listed below.
- Level 3 - Total Package £65,000 - £75,000
- Level 4 - Total Package £85,000 - £95,000
- Level 5 - Total Package £105,000 - £115,000
- Level 6 - Total Package £125,000 - £135,000
- Level 7 - Total Package £145,000
Additional Information
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations; the civil servants concerned are banned from further employment in the civil service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and obtain baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).