About the AI Safety Institute

The AI Safety Institute (AISI), launched at the 2023 Bletchley Park AI Safety Summit, is the world's first state-backed organization dedicated to advancing AI safety for the public interest. Our mission is to prevent extreme risks from advanced AI systems, including cyber attacks on critical infrastructure, AI-enhanced chemical and biological threats, large-scale societal disruptions, and potential loss of control over increasingly powerful AI. In just one year, we've assembled one of the planet's largest and most respected model evaluation teams, featuring world-renowned scientists and many senior researchers from leading AI labs such as Anthropic, DeepMind, and OpenAI.

At AISI, we're building the premier institution for impacting both technical AI safety and AI governance. We conduct cutting-edge research, develop novel evaluation tools, and provide crucial insights to governments and international partners. By joining us, you'll collaborate with the brightest minds in the field, directly shape global AI policies, and tackle complex challenges at the forefront of technology and ethics. Whether you're a researcher, engineer, or policy expert, at AISI, you're not just advancing your career – you're positioned to have one of the most significant impacts on humanity's future in the age of artificial intelligence.

The Crime and Social Destabilisation (Societal Impacts) Team:  

AISI is launching a new Crime and Social Destabilisation workstream, focussed on assessing and mitigating societal-level harms caused by advanced AI systems, particularly those arising from criminal activity, including mis/disinformation, radicalisation, social engineering, and fraud.  

The team will be responsible for advancing the state of the science in evaluating these risks, with the goal of ensuring that AI systems do not become tools for large-scale societal disruption. We are starting by recruiting an ambitious workstream lead to spearhead this work. 

The workstream will be situated within AISI’s Research Unit, and you will report to Chris Summerfield, our Societal Impacts Research Director.  

Role Summary  

As workstream lead, you will build and direct a new team to evaluate and mitigate some of the most pressing societal-level risks that Frontier AI systems may exacerbate, including radicalisation, misinformation, fraud, and social engineering. You will need to: 

  • Build and lead a talent-dense, multidisciplinary, and mission-driven team; 
  • Develop and deliver a strategy for building a cutting-edge crime and social destabilisation research agenda; 
  • Develop cutting-edge evaluations for these threat models that can reliably assess the capabilities of Frontier AI systems; 
  • Deliver additional impactful research by overseeing a diverse portfolio of research projects, potentially including a portfolio of externally delivered research; 
  • Ensure that research outcomes are disseminated to relevant stakeholders within government and the wider community; 
  • Forge relationships with key partners in industry, academia, and across Government, including the national security community; 
  • Act as part of AISI’s overall leadership team, setting the culture and supporting staff. 

The position offers a unique opportunity to push forward an emerging field while being part of a fast-growing organization with a distinctive presence in AI research and governance. 

Person specification: 

You may be a good fit if you have some of the following skills, experience, and attitudes: 

  • A track record of working to ensure positive outcomes for all of society from the creation of AI systems. 
  • A strong track record of leading multidisciplinary teams to deliver multiple exceptional scientific breakthroughs or high-quality products. We’re looking for evidence of an ability to lead exceptional teams. 
  • Strong experience with mentorship of more junior team members. 
  • Comprehensive understanding of large language models (e.g. GPT-4), including both a broad understanding of the literature and hands-on experience leading work that involves pre-training or fine-tuning LLMs. 
  • Demonstrable commitment to improving scientific standards and rigour through the development and implementation of best-practice research methods. 
  • Excellent communication skills, with a track record of translating complex research findings into actionable insights for policy makers. 
  • Experience working at the intersection of criminal activity and technology, including digital platforms and artificial intelligence. 

This post requires Security Clearance (SC) as a minimum, and a willingness to undergo Developed Vetting (DV) if required. This is a UK Nationals only post, as it is a reserved position. More detail on Security Clearances can be found on the UK Government website. 


Additional Information

Internal Fraud Database 

The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. In such instances, civil servants are banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.

Security

Successful candidates must undergo a criminal record check. Successful candidates must meet the security requirements before they can be appointed. The level of security needed is counter-terrorist check (opens in a new window). See our vetting charter (opens in a new window). People working with government assets must complete baseline personnel security standard (opens in a new window) checks.

Nationality requirements

We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).

Working for the Civil Service

The Civil Service Code (opens in a new window) sets out the standards of behaviour expected of civil servants. We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles (opens in a new window). The Civil Service embraces diversity and promotes equal opportunities. As such, we run a Disability Confident Scheme (DCS) for candidates with disabilities who meet the minimum selection criteria. The Civil Service also offers a Redeployment Interview Scheme to civil servants who are at risk of redundancy, and who meet the minimum requirements for the advertised vacancy.

Diversity and Inclusion

The Civil Service is committed to attracting, retaining and investing in talent wherever it is found. To learn more, please see the Civil Service People Plan (opens in a new window) and the Civil Service Diversity and Inclusion Strategy (opens in a new window).
