White Circle

MLE / MLOps

9.0/10


$100,000 – $150,000
Hybrid
Mid-level
3 days ago
AI · dev · TensorZero · vLLM · SGLang · TRT · Kubernetes · CUDA · Grafana · Rust


Description

White Circle seeks an MLE / MLOps engineer to optimize its inference stack, bridging Research and Product to ship fast, production-ready models.

White Circle is an AI Safety company building the safety, reliability, and optimization layer for AI systems.

At the core of our platform are policies – simple natural-language rules that define what an AI model should and shouldn't do.

We automatically test, enforce, and continuously improve these policies at scale.
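The posting doesn't describe how policies are represented or enforced internally; as a minimal sketch of the policy-as-natural-language-rule idea, with entirely hypothetical names and a toy keyword check standing in for real enforcement, it might look like:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    rule: str                 # the natural-language rule itself
    blocked_terms: list[str]  # toy proxy for automated enforcement

def violates(policy: Policy, model_output: str) -> bool:
    """Toy enforcement: flag outputs containing any blocked term."""
    text = model_output.lower()
    return any(term in text for term in policy.blocked_terms)

no_secrets = Policy(
    name="no-secrets",
    rule="The model must never reveal API keys or credentials.",
    blocked_terms=["api key", "secret token"],
)

print(violates(no_secrets, "Here is your API key: sk-123"))  # True
print(violates(no_secrets, "I can't share credentials."))    # False
```

A production system would replace the keyword check with learned classifiers and continuous evaluation, but the rule-plus-enforcer shape is the core idea the posting describes.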

We've raised $11M from top funds, founders, and senior leaders at OpenAI, Anthropic, HuggingFace, Mistral, DeepMind, Datadog, Sentry, and others.

We process over one hundred million API calls every month and fine-tune and train our own LLMs so they run faster and cheaper than any open or proprietary model.

## What you'll do

  • Own inference infrastructure end-to-end: optimize latency, throughput, and cost across our model fleet.
  • Build and scale model serving with TensorZero, vLLM/SGLang/TRT, and Kubernetes.
  • Design and maintain vector search pipelines on top of vector stores.
  • Turn research into product: grab experimental models from the research team, figure out what's production-ready, and ship it.
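The posting doesn't include a reference deployment; as a rough sketch of the serving-on-Kubernetes bullet, a Deployment running vLLM's OpenAI-compatible server might look like the following (image tag, model name, replica count, and resource values are all assumptions, not White Circle's actual setup):

```yaml
# Hypothetical sketch: vLLM OpenAI-compatible server on Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-serving
spec:
  replicas: 2
  selector:
    matchLabels:
      app: vllm-serving
  template:
    metadata:
      labels:
        app: vllm-serving
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest   # assumed image tag
          args: ["--model", "meta-llama/Llama-3.1-8B-Instruct"]  # assumed model
          ports:
            - containerPort: 8000          # vLLM's default HTTP port
          resources:
            limits:
              nvidia.com/gpu: 1            # one GPU per replica
```

In practice a Service, autoscaling, and Grafana-backed latency/throughput dashboards would sit around this, which is the end-to-end ownership the role describes.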

## Conditions

  • Salary of $100,000 to $150,000 + equity.
  • 20 days of paid vacation.
  • Work from Paris (hybrid) + relocation package.
  • Best medical insurance in France.
  • All the hardware, tools, and services you need.
  • Covered subscriptions for AI agents and IDEs.
  • Team off-sites twice a year.

## Requirements

  • 3+ years shipping high-performance ML systems in production.
  • Deep hands-on experience with inference optimization.
  • Comfortable across the stack: from CUDA kernels to Kubernetes manifests to Grafana dashboards.
  • Experience with Rust, custom Triton kernels, and benchmarking is a plus.