Integrity Operations Manager, Sensitive Content
Bumble
Operations
London, UK
GBP 50k-60k / year
Posted on Feb 21, 2026
Every decision in our sensitive content workflow shapes whether people feel safe, respected, and empowered when they show up on Bumble. That responsibility sits at the heart of our mission to build a world where all relationships are healthy and equitable. The Integrity Operations team transforms policy into practice — designing and scaling moderation systems that reduce harm while protecting expression and member trust.
As Integrity Operations Manager, Sensitive Content (IC), you will own the Human-in-the-Loop (HITL) layer of our image classification pipeline, ensuring decisions are consistent, policy-aligned, and grounded in real member impact. Partnering closely with Policy, Product, Engineering, and AI teams, you’ll continuously improve how we detect and respond to sensitive content at scale. This role calls for disciplined ownership, thoughtful AI fluency, and a deep commitment to our values of Respect and Courage — especially when navigating complex or high-risk material.
Please note: this position involves exposure to sensitive and potentially graphic content.
What You'll Do
- Lead day-to-day operations for the Sensitive Content pillar, ensuring accurate, timely, and policy-aligned image classification outcomes that reduce harm and protect member experience.
- Own end-to-end BPO and AI moderation vendor governance, including SLA definition, performance management, quality assurance frameworks, and structured business reviews that drive continuous improvement.
- Translate sensitive content policies and taxonomy updates into clear annotation guidelines, decision trees, and workflow documentation; run calibration sessions and inter-rater alignment exercises to strengthen consistency.
- Design and evolve quality measurement frameworks, including sampling strategies, error trend analysis, reviewer accuracy tracking, and root-cause insights that inform targeted training plans.
- Partner cross-functionally with Policy, Product, Engineering, and Machine Learning teams to improve moderation tooling, classifier performance feedback loops, and pipeline design — demonstrating an agile mindset as systems evolve.
- Coordinate special labeling initiatives (e.g., new harm typologies, taxonomy refinements, model retraining datasets), taking ownership from insight to impact with defined success metrics and clear timelines.
- Build and communicate operational reporting across quality, throughput, backlog health, escalation volumes, and cost efficiency — transforming data into clear narratives and actionable recommendations.
- Model calm, values-led decision-making when managing high-sensitivity escalations, balancing speed, risk, and member impact while upholding Bumble’s values of Respect and Excellence.
About You
- Typically requires 4–6 years of experience, though we welcome candidates with alternative backgrounds that demonstrate equivalent skills.
- Experience leading large-scale vendor or BPO moderation operations, including SLA management, structured QA programs, governance cadences, and distributed team performance oversight.
- Strong working knowledge of Trust & Safety policy taxonomies and demonstrated experience operationalizing them into labeling schemas, annotation standards, and moderation workflows.
- Hands-on experience supporting AI/ML-driven safety systems, including Human-in-the-Loop review design, dataset quality controls, calibration methodologies, and feedback loops for model improvement.
- Comfort with operational data analysis, including building reporting dashboards, conducting trend and variance analysis, identifying error themes, and presenting insights clearly; SQL proficiency is a strong plus.
- Demonstrated ability to collaborate with purpose across Policy, Product, Engineering, QA/Learning & Development, and external vendors — while taking ownership for delivery and outcomes.
- Strong problem-solving judgment under ambiguity, with the ability to see things through from insight to measurable impact and adapt quickly as harm patterns evolve.
- Thoughtful AI fluency: you understand where automation accelerates harm detection, where human judgment is essential, and how to continuously strengthen HITL systems without compromising fairness or member trust.
- A values-driven operator who fosters psychologically safe ways of working, demonstrates Curiosity when evaluating edge cases, and upholds Respect when navigating sensitive subject matter.
About Us
Bumble Inc. is the parent company of Bumble Date, BFF, and Badoo. The Bumble platform enables people to build healthy and equitable relationships through Kind Connections. Founded by Whitney Wolfe Herd in 2014, Bumble was one of the first dating apps built with women at the center and connects people across dating (Bumble Date) and friendship (BFF). BFF is a friendship app where people in all stages of life can meet others nearby and create meaningful platonic connections and community based on shared interests. Badoo, which was founded in 2006, is one of the pioneers of web and mobile dating products.
Inclusion at Bumble Inc.
Bumble Inc. is an equal opportunity employer, and we strongly encourage people of all ages and colours; lesbian, gay, bisexual, transgender, queer, and non-binary people; veterans; parents; people with disabilities; and neurodivergent people to apply. We're happy to make any reasonable adjustments that will help you feel more confident throughout the process; please don't hesitate to let us know how we can help.
In your application, please feel free to note which pronouns you use (for example: she/her, he/him, they/them, etc.).
AI in Bumble Inc. Hiring
At Bumble, we may use AI tools to support parts of our recruitment process — such as helping us record, transcribe, and summarize conversations, and supporting job alignment by comparing resumes and job descriptions to highlight skills and potential roles that may be a good match. These tools help us work more efficiently and stay focused on you during our conversations. Importantly, all hiring decisions are made by people. AI is used only to support our team’s efficiency and improve the candidate experience — not to evaluate or decide on your candidacy. Participation in AI-supported interviews and conversations is completely voluntary and will not impact your candidacy. If you’d prefer to opt out, simply let your recruiter or interviewer know at the start of a call, or anytime during the interview or conversation. Summaries and related data are retained only as long as needed in line with our internal data retention policies. If at any point you’d like a transcription or summary deleted, please contact your recruiter directly.