About the AI Safety Institute

The AI Safety Institute (AISI), launched at the 2023 Bletchley Park AI Safety Summit, is the world's first state-backed organization dedicated to advancing AI safety for the public interest. Our mission is to assess and mitigate risks from frontier AI systems, including cyber attacks on critical infrastructure, AI-enhanced chemical and biological threats, large-scale societal disruptions, and potential loss of control over increasingly powerful AI. In just one year, we've assembled one of the largest and most respected model evaluation teams, featuring renowned scientists and senior researchers from leading AI labs such as Anthropic, DeepMind, and OpenAI.

At AISI, we're building the premier institution for impacting both technical AI safety and AI governance. We conduct cutting-edge research, develop novel evaluation tools, and provide crucial insights to governments, companies, and international partners. By joining us, you'll collaborate with the brightest minds in the field, directly shape global AI policies, and tackle complex challenges at the forefront of technology and ethics. Whether you're a researcher, engineer, or policy expert, at AISI, you're not just advancing your career – you're positioned to have significant impact in the age of artificial intelligence.

The deadline for applications to this role is 23:00 BST on Sunday 20th October.

We are starting a new team whose objective is to develop technically informed best practice that helps companies build frontier AI safely.

Our products will detail best practice and recommendations for safety across the full AI lifecycle, from pretraining to testing, release and deployment, and post-deployment monitoring.

We will do this in a manner that is responsive to the changing nature of the field, and that is interoperable internationally.

We currently have two key focuses:
1. Setting out best practice in writing safety frameworks, as described in the Frontier AI Safety Commitments
2. Looking at proxies for prioritising certain frontier AI systems ahead of development – those defined as highly capable and general purpose, and which could pose severe risk

However, this is just some of the guidance team’s work. We will be developing more detailed best-practice products in several areas, ranging from how best to evaluate the robustness of safeguards to how to be transparent about deployments (e.g. what a good system card looks like).

1/ Best practice in writing frontier AI safety frameworks

As part of the AI Seoul Summit in May, 16 AI companies from the US, Europe, Middle East and China signed up to the Frontier AI Safety Commitments. Companies committed to 1) effectively identify, assess and manage risks when developing and deploying their frontier AI models and systems; 2) be accountable for safely developing and deploying their systems; and 3) make their approaches to frontier AI safety appropriately transparent to external actors, including governments.

This role may involve leading on the technical advice for producing best practice guidance for companies operationalising these commitments in the form of AI safety frameworks, working closely with the safety cases team.

2/ Best practice for prioritising highly capable, general purpose systems
 
There are many reasons a company or an AI Safety Institute may want to prioritise certain frontier AI systems: for instance, to direct resources towards more intensive evaluations, or to ensure a deeper level of security or safeguards is in place.

This project will draw on the latest trends and model scenarios for different possible technological futures for AI, so that we are focused on the models we expect to be the riskiest. We’re asking questions like: how might scaling laws evolve over time? To what extent can capability proxies such as compute be relied on as criteria for prioritising certain systems? What other criteria might add robustness, given the uncertain future of AI development?

You will work closely with researchers in predictive evals, and in the horizon scanning team. But you will take this further to make suggestions on how best to proxy risk of a system ahead of development.
 
Responsibilities:

  • Leading technical advice on one or more of the products above, or on future best practice for AI safety.
  • Collaborating closely with the AISI Research Unit, drawing on their insights and the latest research as much as possible, and productising this research.
  • Working with companies to improve their safety frameworks.
  • Collaborating with key international counterparts (including the US and EU) on their respective guidance and codes of practice, to promote the adoption of mutually compatible approaches.
  • Working with developers, policymakers, external researchers and civil society to ensure our guidance makes contact with the real world, and gets feedback and input from appropriate experts as it develops.

The individual will be joining a blended policy and technical team, and will work very closely with the AI Safety Institute’s Research Unit, in particular the safety cases research team.

Person specification

We are interested in hiring individuals at a range of seniority and experience within this team. Calibration on final title, seniority and pay will take place as part of the recruitment process. We encourage all candidates who would be interested in joining to apply.

You may be a good fit if you have some of the following skills, experience and attitudes:

  • Research experience in industry, open-source collectives, or academia in a field related to machine learning, AI, AI security, or computer security.
  • Broad knowledge of technical safety methods (T-shaped: some deep knowledge, lots of shallow knowledge).
  • Strong writing ability.
  • Motivated to conduct technical research with an emphasis on direct policy impact rather than exploring novel ideas.
  • Ability to work autonomously and in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team, while figuring out the best and most efficient ways to solve a particular problem.
  • A willingness to bring your own voice and experience, an eagerness to support your colleagues, and readiness to do whatever is necessary for the team’s success, including finding new ways of getting things done within government.
  • A sense of mission, urgency, and responsibility for success, demonstrating problem-solving abilities and preparedness to acquire any missing knowledge necessary to get the job done.
  • Comprehensive understanding of large language models (e.g. GPT-4), including a broad understanding of the literature. Hands-on experience with pre-training, fine-tuning, and building evaluations for large language models is a plus.
  • Direct research experience (e.g. PhD in a technical field and/or spotlight papers at NeurIPS/ICML/ICLR).
  • Experience working with world-class multi-disciplinary teams, including both scientists and engineers (e.g. in a top-3 lab).

Salary and benefits

We are hiring individuals at all ranges of seniority and experience, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows:

  • L3: £65,000 - £75,000
  • L4: £85,000 - £95,000
  • L5: £105,000 - £115,000
  • L6: £125,000 - £135,000
  • L7: £145,000

As part of the Department for Science, Innovation and Technology, AISI offers a competitive mix of benefits, including:

  • A culture of flexible working, such as job sharing, homeworking and compressed hours.
  • Automatic enrolment into the Civil Service Pension Scheme, with an average employer contribution of 27%.
  • A minimum of 25 days of paid annual leave, increasing by 1 day per year up to a maximum of 30.
  • An extensive range of learning & professional development opportunities, which all staff are actively encouraged to pursue.
  • Access to a range of retail, travel and lifestyle employee discounts.
  • The Department operates a discretionary hybrid working policy, which provides for a combination of working hours from your place of work and from your home in the UK. The current expectation for staff is to attend the office or non-home based location for 40-60% of the time over the accounting period.

 

You can still apply if you do not meet the civil service nationality requirements, but please flag this early to your interviewer so we can assess whether we have options for working with you (like secondments). This will be a case-by-case decision. 

Salary
£65,000 – £135,000 GBP

Additional Information

Internal Fraud Database 

The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. Civil servants dismissed for internal fraud are banned from further employment in the civil service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.

Security

Successful candidates must undergo a criminal record check and must meet the security requirements before they can be appointed. The level of security needed is counter-terrorist check (opens in a new window). See our vetting charter (opens in a new window). People working with government assets must complete baseline personnel security standard (opens in new window) checks.

Nationality requirements

We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).

Working for the Civil Service

The Civil Service Code (opens in a new window) sets out the standards of behaviour expected of civil servants. We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles (opens in a new window). The Civil Service embraces diversity and promotes equal opportunities. As such, we run a Disability Confident Scheme (DCS) for candidates with disabilities who meet the minimum selection criteria. The Civil Service also offers a Redeployment Interview Scheme to civil servants who are at risk of redundancy, and who meet the minimum requirements for the advertised vacancy.

Diversity and Inclusion

The Civil Service is committed to attracting, retaining and investing in talent wherever it is found. To learn more, please see the Civil Service People Plan (opens in a new window) and the Civil Service Diversity and Inclusion Strategy (opens in a new window).
