About the AI Safety Institute
The AI Safety Institute (AISI), launched at the 2023 Bletchley Park AI Safety Summit, is the world's first state-backed organisation dedicated to advancing AI safety in the public interest. Our mission is to assess and mitigate risks from frontier AI systems, including cyber attacks on critical infrastructure, AI-enhanced chemical and biological threats, large-scale societal disruptions, and potential loss of control over increasingly powerful AI. In just one year, we've assembled one of the largest and most respected model evaluation teams, featuring renowned scientists and senior researchers from leading AI labs such as Anthropic, DeepMind, and OpenAI.
At AISI, we're building the premier institution for shaping both technical AI safety and AI governance. We conduct cutting-edge research, develop novel evaluation tools, and provide crucial insights to governments, companies, and international partners. By joining us, you'll collaborate with the brightest minds in the field, directly shape global AI policies, and tackle complex challenges at the forefront of technology and ethics. Whether you're a researcher, engineer, or policy expert, at AISI you're not just advancing your career – you're positioned to have significant impact in the age of artificial intelligence.
About the Team
As AI systems become more advanced, the potential for misuse of their cyber capabilities may pose a threat to the security of organisations and individuals. Cyber capabilities are also a common bottleneck in scenarios across other AI risk areas, such as harmful outcomes from biological and chemical capabilities and from autonomous systems. One approach to better understanding these risks is to conduct robust empirical tests of AI systems, so we can see how capable they currently are at performing cyber security tasks.
The AI Safety Institute’s Cyber Evaluations Team is developing first-of-its-kind government-run infrastructure to benchmark the progress of advanced AI capabilities in the domain of cyber security. Our goal is to carry out and publish scientific research supporting a global effort to understand the risks and improve the safety of advanced AI systems. Our current focus is on building difficult cyber security tasks against which we can measure the performance of AI agents.
We are building a cross-functional team of cybersecurity researchers, machine learning researchers, research engineers and infrastructure engineers to help us create new kinds of capability and safety evaluations, and to scale up our capacity to evaluate frontier AI systems as they are released.
The AI Safety Institute research unit is looking for exceptionally motivated and talented Research Engineers to work with a range of cyber security and policy specialists to measure the capabilities of AI systems against scenarios covered by our risk models – with a focus on their performance on tasks related to cyber security.
You will play a key role in designing and running experiments on frontier models. These could range from measuring the uplift that AI systems might provide to malicious attackers, to developing mitigations that prevent misuse of AI systems or better defend against AI-enabled cyber attacks.
In this role, you’ll receive mentorship and coaching from your manager and the technical leads on your team. You'll also regularly interact with world-famous researchers and other incredible staff (including alumni from Anthropic, DeepMind, OpenAI and ML professors from Oxford and Cambridge).
In addition to junior roles, Senior, Staff and Principal Research Engineer positions are available for candidates with the required seniority and experience.
Person Specification
You may be a good fit if you have some of the following skills, experience and attitudes:
- Relevant experience in industry, relevant open-source collectives, or academia in a field related to machine learning, AI, AI security, or computer security
- Experience building software systems to meet research requirements, including having led or been a significant contributor to relevant software projects, demonstrating cross-functional collaboration skills.
- Knowledge of training, fine-tuning, scaffolding, prompting, deploying, and/or evaluating current cutting-edge machine learning systems such as large language models.
- Knowledge of statistics.
- A strong curiosity in understanding AI systems and studying the security implications of this technology.
- Motivated to conduct research that is not only curiosity driven but also solves concrete open questions in governance and policy making.
- Ability to work autonomously and with high agency in a self-directed way, thriving in a constantly changing environment and a steadily growing team while figuring out the best and most efficient way to solve a given problem.
- Your own voice and experience, together with an eagerness to support your colleagues, a willingness to do whatever is necessary for the team’s success, and a drive to find new ways of getting things done within government.
- A sense of mission, urgency, and responsibility for success, demonstrating problem-solving abilities and preparedness to acquire any missing knowledge necessary to get the job done.
- Comprehensive understanding of large language models (e.g. GPT-4). This includes both a broad understanding of the literature, as well as hands-on experience with things like pre-training or fine tuning LLMs.
The following are also nice-to-have:
- Relevant cyber security expertise
- Extensive Python experience, including understanding the intricacies of the language, the good vs. bad Pythonic ways of doing things and much of the wider ecosystem/tooling.
- Direct research experience (e.g. PhD in a technical field and/or spotlight papers at NeurIPS/ICML/ICLR).
- Experience working with world-class multi-disciplinary teams, including both scientists and engineers (e.g. in a top-3 lab).
- Experience acting as a bar raiser for interviews
Salary & Benefits
We are hiring individuals at all ranges of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows:
- L3: £65,000 - £75,000
- L4: £85,000 - £95,000
- L5: £105,000 - £115,000
- L6: £125,000 - £135,000
- L7: £145,000
There are a range of pension options available which can be found through the Civil Service website.
Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
Required Experience
We select based on skills and experience regarding the following areas:
- Research problem selection
- Research science
- Writing code efficiently
- Python
- Frontier model architecture knowledge
- Frontier model training knowledge
- Model evaluations knowledge
- AI safety research knowledge
- Written communication
- Verbal communication
- Teamwork
- Interpersonal skills
- Tackling challenging problems
- Learning through coaching
Desired Experience
We additionally may factor in experience with any of the areas that our work-streams specialise in:
- Autonomous systems
- Cyber security
- Chemistry or Biology
- Safeguards
- Safety Cases
- Societal Impacts
Additional Information
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. Participating government organisations report these details to the Cabinet Office, and the civil servants concerned are banned from further employment in the Civil Service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC, as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters attempt to reapply for roles in the Civil Service. In this way the policy is enforced and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and obtain Baseline Personnel Security Standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for candidates eligible for Counter-Terrorist Check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter for details.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.