About the AI Security Institute
The AI Security Institute is the world's largest team in a government dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications, including the potential for AI to assist in the development of chemical and biological weapons, its use in cyber-attacks and crimes such as fraud, and the possibility of loss of control.
The risks from AI are not science fiction; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we're building a unique and innovative organisation to prevent AI's harms from impeding its potential.
Strategic Awareness
We're focused on keeping AISI and the broader government abreast of developments at the frontier of AI, with an emphasis on enabling appropriate preparedness for AGI and other advanced AI systems. To achieve this, we perform scenario-based foresight work and track key developments. We couple technical research with communications activities to engage government decision makers on key developments.
Role Summary
As Technical Engagement Lead, you will be responsible for engaging key government decision makers to enable appropriate preparedness for AGI and other advanced AI systems. You will plan and deliver communications on key technological developments and risks in a way that is salient to senior decision makers, and you will create engagement content such as presentations and demonstrations. Depending on your skills and interests, you may build a team for technical engagement work. You will collaborate with team members who specialise in technical research that foresees and tracks possible trajectories towards AGI and their impacts. You will also work alongside experienced civil servants who can help you navigate key government stakeholders.
This role involves leading engagements with key decision makers on advanced AI systems and associated risks. This may involve strategic planning of what to communicate to whom in order to enable appropriate preparedness efforts. It may also involve preparing and presenting technical material in a way that is salient to non-experts, such as through engaging presentations, interactive demonstrations, or technical reports. You will be able to draw on AISI's world-leading technical staff and research findings when developing engagement products.
Person Specification
The ideal candidate will demonstrate technical insight into AI and its impacts, alongside exceptional communication abilities. They will have a track record of creating technical content on AI for non-specialist audiences and a strong foundation of knowledge about AI and its impacts. They will have a strong strategic sense for communicating about key AI developments and risks, tailoring engagements to their audience, and iterating on messaging based on feedback.
Particular areas of expertise may include AI governance, AI demonstrations, or AI evaluations. Particularly relevant types of communications experience include policy briefings, demonstrations of AI capabilities for non-technical audiences, or writing engagingly for a non-expert audience.
Required Experience
We select candidates based on skills and experience in the following areas:
- Track record in relevant research
- Strong knowledge of frontier AI, capability trajectories, the supply chain, and impacts
- Experience in communicating about AI and its impact to non-specialist audiences
- Interpersonal skills
Desired Experience
You may be a good fit if you have some of the following skills, experience, and attitudes:
- Experience in the creation and delivery of demonstrations of AI capabilities
- Research experience in AI Safety
- Experience conducting model evaluations
- Master's or Bachelor's degree in an AI/ML discipline (Ph.D. not necessary), or equivalent professional experience
Salary & Benefits
We are hiring individuals at all levels of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; each salary comprises a base salary and a technical allowance, plus additional benefits as detailed on this page.
- Level 3 - Total Package £65,000 - £75,000 inclusive of a base salary £35,720 plus additional technical talent allowance of between £29,280 - £39,280
- Level 4 - Total Package £85,000 - £95,000 inclusive of a base salary £42,495 plus additional technical talent allowance of between £42,505 - £52,505
- Level 5 - Total Package £105,000 - £115,000 inclusive of a base salary £55,805 plus additional technical talent allowance of between £49,195 - £59,195
- Level 6 - Total Package £125,000 - £135,000 inclusive of a base salary £68,770 plus additional technical talent allowance of between £56,230 - £66,230
- Level 7 - Total Package £145,000 inclusive of a base salary £68,770 plus additional technical talent allowance of £76,230
This role sits outside of the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
There are a range of pension options available which can be found through the Civil Service website.
Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
Whilst AISI often encourages applications from individuals of any nationality, due to the unique nature of this role and the elevated security clearances that would be required once in post, we can only accept individuals capable of meeting DV criteria, which are largely restricted to UK nationals. This statement therefore supersedes our general nationality policy as described in the footnote of this post. For further vetting clarification, please visit the GOV.UK site listed below:
National security vetting: clearance levels - GOV.UK
Additional Information
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. In such instances, civil servants are banned for 5 years from further employment in the civil service. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and obtain Baseline Personnel Security Standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for Counter-Terrorist Check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).