About the AI Safety Institute
The AI Safety Institute (AISI), launched at the 2023 Bletchley Park AI Safety Summit, is the world's first state-backed organisation dedicated to advancing AI safety in the public interest. Our mission is to assess and mitigate risks from frontier AI systems, including cyber attacks on critical infrastructure, AI-enhanced chemical and biological threats, large-scale societal disruptions, and potential loss of control over increasingly powerful AI. In just one year, we've assembled one of the largest and most respected model evaluation teams, featuring renowned scientists and senior researchers from leading AI labs such as Anthropic, DeepMind, and OpenAI.
At AISI, we're building the premier institution for impact on both technical AI safety and AI governance. We conduct cutting-edge research, develop novel evaluation tools, and provide crucial insights to governments, companies, and international partners. By joining us, you'll collaborate with the brightest minds in the field, directly shape global AI policies, and tackle complex challenges at the forefront of technology and ethics. Whether you're a researcher, engineer, or policy expert, at AISI you're not just advancing your career – you're positioned to have significant impact in the age of artificial intelligence.
We will review applications on an ongoing basis.
As a Machine Learning Research Scientist working on safety cases, you will conduct foundational research to help take this ambitious new pillar of AISI’s work forward. By building our understanding of how AI safety cases could be developed, you will help to expand AISI’s programme of technical work beyond the existing workstreams focused on evaluating model capabilities and safeguards.
Safety cases are structured arguments that a system is unlikely to cause significant harm if deployed in a particular setting; they are already used as standard in other industries. As the AI frontier develops, we expect safety cases to become an important tool for mitigating AI safety risks, whereby AI companies set out detailed arguments for how they have ensured their models are safe. Even though the field is far from knowing how to write a detailed safety case, we believe it is possible to significantly develop our understanding now of what a good one would look like.
In this role, you would help push this understanding forward, both through direct technical research and through technical collaborations with external researchers and organisations. Key areas include the open problems that must be overcome to increase our confidence in specific safety agendas, and agenda-specific evaluations such as control and alignment evaluations.
The role offers a unique opportunity to work closely alongside the world’s best technical talent, including Chief Scientist Geoffrey Irving, who leads the safety case workstream, as well as talented Policy / Strategy leads and other Research Engineers and Research Scientists. You will also collaborate with external topic experts, partner organisations and policymakers to coordinate and build on external research. As an early hire, you will have significant scope to contribute to the overall vision and strategy of the safety case team.
We view work on safety cases at AISI as a critical component of the overall safety story, alongside our existing workstreams focused on evaluations of dangerous capabilities and safeguard effectiveness.
Responsibilities
This role offers the opportunity to progress deep technical work at the frontier of AI safety and governance. Your work would likely include:
- Detailed research on safety cases. We think it’s important to get into the details, so this might mean trying to write a comprehensive safety case based on evals for a certain hazard, or developing novel methods for AI control or formal verification.
- High-level research into safety case material (e.g. what methods might be used, and what properties one would need for these arguments to be correct).
- Input into our strategy, which focuses on how to get more safety case work to occur, what form safety cases should take, and how to improve the chance that the results improve safety.
- Collaboration with external partners (e.g. labs, academics) on joint research into safety cases.
- Research organisational work (assembling teams, organising workshops) to create an environment where safety case work occurs.
Person Specification
We are interested in hiring individuals at a range of seniority and experience within this team, including in Senior ML Research Scientist positions. Calibration on final title, seniority and pay will take place as part of the recruitment process. We encourage all candidates who would be interested in joining to apply.
You may be a good fit if you have some of the following skills, experience and attitudes:
- Research experience in machine learning, AI, AI security, or computer security, whether in industry, relevant open-source collectives, or academia.
- Broad knowledge of technical safety methods (T-shaped: some deep knowledge, lots of shallow knowledge).
- Strong writing ability.
- Motivated to conduct technical research with an emphasis on direct policy impact rather than exploring novel ideas.
- Ability to work autonomously with high agency, thriving in a constantly changing environment and a steadily growing team while finding the best and most efficient ways to solve a particular problem.
- Willingness to bring your own voice and experience, eagerness to support your colleagues, and readiness to do whatever is necessary for the team’s success, including finding new ways of getting things done within government.
- A sense of mission, urgency, and responsibility for success, with strong problem-solving abilities and a preparedness to acquire any missing knowledge necessary to get the job done.
- Comprehensive understanding of large language models (e.g. GPT-4), including both a broad understanding of the literature and hands-on experience with tasks such as pre-training or fine-tuning LLMs.
- Direct research experience (e.g. PhD in a technical field and/or spotlight papers at NeurIPS/ICML/ICLR).
- Experience working with world-class multi-disciplinary teams, including both scientists and engineers (e.g. in a top-3 lab).
Salary & Benefits
We are hiring individuals across a range of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows:
- L3: £65,000 - £75,000
- L4: £85,000 - £95,000
- L5: £105,000 - £115,000
- L6: £125,000 - £135,000
- L7: £145,000
The Department for Science, Innovation and Technology offers a competitive mix of benefits including:
- A culture of flexible working, such as job sharing, homeworking and compressed hours.
- Automatic enrolment into the Civil Service Pension Scheme, with an average employer contribution of 27%.
- A minimum of 25 days of paid annual leave, increasing by 1 day per year up to a maximum of 30.
- An extensive range of learning & professional development opportunities, which all staff are actively encouraged to pursue.
- Access to a range of retail, travel and lifestyle employee discounts.
- The Department operates a discretionary hybrid working policy, which provides for a combination of working from your place of work and from your home in the UK. The current expectation for staff is to attend the office or non-home-based location for 40-60% of the time over the accounting period.
Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
The interview process may vary from candidate to candidate; however, you should expect a typical process to include some technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), and conversations with your workstream lead. The process will culminate in a conversation with members of the senior team here at AISI.
Candidates should expect to go through some or all of the following stages once an application has been submitted:
- Initial interview
- Second interview
- Technical take home test
- Third interview and review of take home test
- Final interview with members of the senior team
Required Experience
We select based on skills and experience regarding the following areas:
- Research science
- Frontier model architecture knowledge
- Frontier model training knowledge
- AI safety research knowledge
- Written communication
- Verbal communication
- Safety cases or safety systems knowledge
- Research problem selection
Desired Experience
We may additionally factor in experience with any of the areas that our workstreams specialise in:
- Autonomous systems
- Cyber security
- Chemistry or Biology
- Safeguards
- Safety Cases
- Societal Impacts
Additional Information
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations, and the civil servants concerned are then banned from further employment in the civil service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information please see the Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and obtain Baseline Personnel Security Standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for Counter-Terrorist Check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.