About the AI Security Institute
The AI Security Institute is the world's largest government team dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications, including AI's potential to assist with the development of chemical and biological weapons, its use in carrying out cyber-attacks and enabling crimes such as fraud, and the possibility of loss of control.
The risks from AI are not sci-fi; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we’re building a unique and innovative organisation to prevent AI’s harms from impeding its potential.
Autonomous Systems
We're focused on loss of control risks from frontier AI systems. To address these, we're advancing the state of the science in risk modelling, incorporating insights from other safety-critical and adversarial domains while developing our own novel techniques. We're also empirically evaluating these risks, building out one of the world's largest agentic evaluation suites and pushing forward the science of model evaluations to better understand the risks and predict when they might materialise. Lastly, we are developing novel mitigations that, for example, attempt to prevent models from intentionally underperforming on dangerous capability evaluations.
Role Summary
As a research engineer, you'll work as part of a multi-disciplinary team including scientists, engineers and domain experts on the risks that we are investigating. Your team is given huge amounts of autonomy to chase research directions and build evaluations that relate to its overarching threat model. This includes coming up with ways of breaking down the space of risks, as well as designing and building ways to evaluate them. All of this is done within an extremely collaborative environment, where everyone does a bit of everything. Some of the areas we focus on include:
- Self-replication. Researching the potential for AI systems to autonomously replicate themselves across networks and establish persistence.
- AI R&D. Investigating AI systems' potential to iteratively improve themselves, which could lead to an intelligence explosion.
- Safety sabotage. Evaluating AI systems' potential to undermine safety work, for example by sabotaging safety research.
You’ll receive coaching from your manager and mentorship from the principal research engineer on our team. We have a very strong learning & development culture to support this, including Friday afternoons devoted to deep reading and various weekly paper reading groups.
Person Specification
You may be a good fit if you have some of the following skills, experience and attitudes. Please note that you don’t need to meet all of these criteria, and if you're unsure, we encourage you to apply.
- Writing production-quality code.
- Designing, shipping, and maintaining complex tech products.
- Improving technical standards across a team, through mentoring and feedback.
- Strong written and verbal communication skills.
- Experience working within a multi-disciplinary team that includes both scientists and engineers.
- Strong understanding of large language models. This can include a broad understanding of the literature and/or hands-on experience with tasks like pre-training or fine-tuning LLMs.
- Extensive Python experience, including the wider ecosystem and tooling.
Salary & Benefits
We are hiring individuals at all levels of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; each salary comprises a base salary and a technical allowance, plus additional benefits as detailed on this page.
- Level 3 - Total Package £65,000 - £75,000
- Level 4 - Total Package £85,000 - £95,000
- Level 5 - Total Package £105,000 - £115,000
- Level 6 - Total Package £125,000 - £135,000
- Level 7 - Total Package £145,000
Additional Information
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. Civil servants dismissed for internal fraud are banned from further employment in the Civil Service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the Civil Service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.