About the AI Security Institute
The AI Security Institute is the world's largest team within a government dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications, including the potential for AI to assist in the development of chemical and biological weapons, its use in cyber-attacks and crimes such as fraud, and the possibility of loss of control.
The risks from AI are not science fiction; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we're building a unique and innovative organisation to prevent AI's harms from impeding its potential.
Research Scientist/Engineer, Biological and Chemical Models | AI Security Institute
London, UK
About the AI Security Institute
The AI Security Institute (AISI), launched at the 2023 Bletchley Park AI Safety Summit, is the world's first state-backed organisation dedicated to advancing AI security for the public interest. Our mission is to assess and mitigate risks from frontier AI systems, including cyber attacks on critical infrastructure, AI-enhanced chemical and biological threats, large-scale societal disruptions, and potential loss of control over increasingly powerful AI. In just one year, we've assembled one of the largest and most respected research teams, featuring renowned scientists and senior researchers from leading AI labs such as Anthropic, DeepMind, and OpenAI.
At AISI, we're building the premier institution for shaping both technical AI safety and AI governance. We conduct cutting-edge research, develop novel evaluation tools, and provide crucial insights to governments, companies, and international partners. By joining us, you'll collaborate with the brightest minds in the field, directly shape global AI policies, and tackle complex challenges at the forefront of technology and ethics. Whether you're a researcher, engineer, or policy expert, at AISI you're not just advancing your career; you're positioned to have a significant impact in the age of artificial intelligence.
RESEARCH SCIENTIST/ENGINEER, BIOLOGICAL AND CHEMICAL MODELS
We are looking for an experienced Research Scientist/Engineer who specialises in AI/ML applied to engineering biology/chemistry.
You will be part of a cross-cutting team within the Chem-Bio workstream at the AI Security Institute. Our role is to advance the security science of advanced AI models, including LLMs and more specialised AI models in biology and chemistry, and to inform the wider policy environment.
You will join a team researching specialised models in biology and chemistry, with the goal of evaluating the capabilities of frontier biological models and developing evaluations, benchmarks and technical safeguards for these tools. This is a technical role ideally suited to someone with a strong machine learning background and experience in computational scientific research.
The workstream is situated within the AISI Research Unit and reports to the Chem-Bio workstream lead. This post requires Developed Vetting (DV), and any continued employment will be conditional on obtaining and maintaining this level of clearance. This is a UK-nationals-only post, as it is a reserved position. More detail on security clearances can be found on the UK Government website.
ROLE SUMMARY
- Join a small, talent-driven, multidisciplinary team evaluating state-of-the-art machine learning models for engineering biology.
- Set up and deploy frontier models and evaluate them in line with AISI's research objectives.
- Write code to assess the capabilities of frontier models in areas such as protein design, structure prediction and biological foundation models.
- Explore research questions at the intersection of AI and biosecurity and help communicate results to inform wider cross-Government efforts.
PERSON SPECIFICATION
We strongly encourage you to apply even if you feel you only meet some of the criteria listed here.
Essential Criteria
- Background in machine learning, having worked directly on training, tuning or evaluating machine learning models using PyTorch or similar.
- Experience working on frontier biological AI models, such as protein or genomic language models, structure prediction models (e.g. AlphaFold) or protein design models (e.g. RFDiffusion).
- Proficient at coding in Python.
Desirable Criteria
- A background in computational biology, with understanding of the provenance and limitations of omics data, and the challenges of building predictive models using these data types.
- Strong background in biology or chemistry, with an understanding of protein biochemistry and experimental assays used to validate protein design.
- Good scientific research experience, and a motivation to follow research best practices to solve open questions at the intersection of AI and biosecurity.
- Experience writing production-level code that is scalable, robust and easy to maintain, ideally in Python.
- Experience working in small cross-functional teams, including both scientists and engineers.
- Experience in communicating technical work to a mixture of technical and non-technical audiences.
Clearance Criteria
Whilst AISI generally encourages applications from individuals of any nationality, the unique nature of this role and the elevated security clearance it requires mean we can only accept applicants capable of meeting Developed Vetting (DV) criteria, which is largely restricted to UK nationals. This statement therefore supersedes our general nationality policy, as described in the footnote of this post. For further clarification on vetting, please visit the GOV.UK page listed below:
National security vetting: clearance levels - GOV.UK
Salary & Benefits
We are hiring individuals across a range of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within that range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking. The full salary ranges are listed below; each package comprises a base salary and a technical allowance, plus additional benefits as detailed on this page.
- Level 3 - Total Package £65,000 - £75,000, comprising a base salary of £35,720 plus an additional technical talent allowance of between £29,280 and £39,280
- Level 4 - Total Package £85,000 - £95,000, comprising a base salary of £42,495 plus an additional technical talent allowance of between £42,505 and £52,505
- Level 5 - Total Package £105,000 - £115,000, comprising a base salary of £55,805 plus an additional technical talent allowance of between £49,195 and £59,195
- Level 6 - Total Package £125,000 - £135,000, comprising a base salary of £68,770 plus an additional technical talent allowance of between £56,230 and £66,230
- Level 7 - Total Package £145,000, comprising a base salary of £68,770 plus an additional technical talent allowance of £76,230
This role sits outside of the DDaT pay framework, given that its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
A range of pension options is available; details can be found on the Civil Service website.
Key Words: computational protein design, structural biology
Additional Information
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations; civil servants dismissed in such circumstances are banned from further employment in the civil service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information please see the Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.