
Engineering Manager, Safeguards Infrastructure - Anthropic
- Job Title
- Engineering Manager, Safeguards Infrastructure
- Job Location
- London, UK
- Job Listing URL
- https://job-boards.greenhouse.io/anthropic/jobs/4636299008
- Job Description
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the Role
Anthropic is seeking an experienced Engineering Manager to join our Safeguards (Trust & Safety) team. In this role, you'll lead a team of software engineers building the foundational infrastructure and tooling that powers our safety systems and enables Anthropic to build and maintain safe and responsible AI products.
Working closely with other policy, engineering, and machine learning teams, you'll be responsible for developing scalable systems that support our safety efforts while balancing immediate needs with long-term investments.
This role demands thoughtful technical leadership and the ability to make strategic trade-offs that serve both current customer requirements and a long-term technical vision.
Key Responsibilities:
- Build and lead a team of high-performing software engineers building production infrastructure and tooling for Safeguards systems
- Help scale reliable systems that support the Trust & Safety operating loop of detection, mitigation and measurement
- Work closely with research scientists, ML engineers, and data scientists to build the primitives, tools, and infrastructure that allow them to move faster in building detection and response systems
- Collaborate with Safeguards Product, Policy and Enforcement partners to build tools that streamline enforcement operations, and build self-service tools for both internal teams and external customers
- Coach and mentor team members in their career growth
Requirements:
- 8+ years on an infrastructure, product infrastructure, or trust and safety team, and 4+ years in an engineering management role leading infrastructure or tooling teams that build reliable, scalable systems
- 2+ years of experience managing senior and staff software engineers
- Experience in balancing speed and precision when evaluating technical trade-offs in a fast growth setting
- Excellent leadership and communication skills, with the ability to work effectively across functions and communicate highly technical concepts to various stakeholders
- Strong, proven people management skills in coaching, recruiting, and developing engineers
- Experience designing operational processes around on-call, post-mortems, etc.
- Customer obsession, both for external customers and internal teams using your systems
Strong Candidates May Also Have:
- Experience in adversarial environments such as Trust and Safety, Integrity, or Fraud detection (beneficial but not required)
- Background in building systems at scale with a focus on reliability and performance, and deep knowledge of modern cloud infrastructure (GCP / AWS)
- Experience working directly with ML and Research engineering teams
You'll thrive in this role if you're passionate about building reliable infrastructure that keeps AI systems safe, and believe deeply in the importance of developing responsible AI technologies.
The expected salary range for this position is:
Annual Salary: £325,000–£390,000 GBP
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Anthropic Company Size
Between 2,000 - 5,000 employees
Anthropic Founded Year
2021
Anthropic Total Amount Raised
$9,740,378,112
Anthropic Funding Rounds
Secondary Market
$452,268,300 USD
Secondary Market
$884,109,327 USD
Series D
$750,000,000 USD
Corporate Round
$2,000,000,000 USD
Convertible Note
$4,000,000,000 USD
Corporate Round
$100,000,000 USD
Series C
$450,000,000 USD
Corporate Round
$400,000,000 USD
Series B
$580,000,000 USD
Series A
$124,000,000 USD