About the AI Security Institute
The AI Security Institute is the world's largest government team dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, its use in cyber-attacks and crimes such as fraud, and the possibility of loss of control.
The risks from AI are not science fiction; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we’re building a unique and innovative organisation to prevent AI’s harms from impeding its potential.
Criminal Misuse (Societal Impacts) Team:
AISI is launching a new Criminal Misuse workstream, focussed on assessing and mitigating societal-level harms caused by advanced AI systems, particularly criminal activity such as mis/disinformation, radicalisation, social engineering, and fraud.
The team will be responsible for advancing the state of the science in evaluating these risks, with the goal of ensuring that AI systems do not become tools for large-scale societal disruption. We are starting by recruiting an ambitious workstream lead to spearhead the work.
The workstream will be situated within AISI’s Research Unit, and you will report to Chris Summerfield, our Societal Impacts Research Director.
Role Summary
As workstream lead, you will build a new team to evaluate and mitigate some of the most pressing societal-level risks that frontier AI systems may exacerbate, including radicalisation, misinformation, fraud, and social engineering. You will need to:
- Build and lead a talent-dense, multidisciplinary, and mission-driven team;
- Develop and deliver a strategy for building a cutting-edge crime and social destabilisation research agenda;
- Develop cutting-edge evaluations for these threat models that can reliably assess the capabilities of frontier AI systems;
- Deliver additional impactful research by overseeing a diverse portfolio of research projects, potentially including externally delivered research;
- Ensure that research outcomes are disseminated to relevant stakeholders within government and the wider community;
- Forge relationships with key partners in industry, academia, and across Government, including the national security community;
- Act as part of AISI’s overall leadership team, setting the culture and supporting staff.
The position offers a unique opportunity to push forward an emerging field while being part of a fast-growing organisation at the forefront of AI research and governance.
Person specification:
You may be a good fit if you have some or all of the following skills, experience, and attitudes:
- A track record of working to ensure positive outcomes for all of society from the creation of frontier AI systems.
- A strong track record of leading multidisciplinary teams to deliver multiple exceptional scientific breakthroughs or high-quality products. We’re looking for evidence of an ability to lead exceptional teams.
- Strong experience with mentorship of more junior team members.
- Previous research experience with frontier AI systems, including both a broad understanding of the literature and hands-on experience leading work that involves pre-training or fine-tuning LLMs.
- Demonstrable commitment to improving scientific standards and rigour, through the development and implementation of best practice research methods.
- Excellent communication skills, with a track record of translating complex research findings into actionable insights for policy makers.
- Experience working at the intersection of criminal activity and technology, including digital platforms and artificial intelligence.
Salary & Benefits
We are hiring individuals at a range of seniority and experience levels within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is set out below; each comprises a base salary and a technical allowance, plus additional benefits as detailed on this page.
- Level 3 - Total Package £65,000 - £75,000 inclusive of a base salary £35,720 plus additional technical talent allowance of between £29,280 - £39,280
- Level 4 - Total Package £85,000 - £95,000 inclusive of a base salary £42,495 plus additional technical talent allowance of between £42,505 - £52,505
- Level 5 - Total Package £105,000 - £115,000 inclusive of a base salary £55,805 plus additional technical talent allowance of between £49,195 - £59,195
- Level 6 - Total Package £125,000 - £135,000 inclusive of a base salary £68,770 plus additional technical talent allowance of between £56,230 - £66,230
- Level 7 - Total Package £145,000 inclusive of a base salary £68,770 plus additional technical talent allowance of £76,230
This role sits outside of the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
There are a range of pension options available which can be found through the Civil Service website.
This post requires Security Clearance (SC) as a minimum, and a willingness to undergo Developed Vetting (DV) if required. This is a UK Nationals only post, as it is a reserved position. More detail on Security Clearances can be found on the UK Government website.
Additional Information
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. Participating government organisations provide these details to the Cabinet Office, and the individuals concerned are banned from further employment in the civil service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.