Zendesk
Machine Learning Engineering Intern · Oct 2025 – Present · Berlin
zendesk.comZendesk is a global customer experience platform used by over 100,000 companies. I joined the Berlin AI team as an ML engineering intern, working on the agentic AI product that automates customer support at scale.
I designed and built an AI agent security framework that became the pre-deployment security gate for all of Zendesk's AI agents ahead of their public GA release. The framework covers over 1,500 attack scenarios — prompt injection, jailbreaks, policy violations, and adversarial inputs — and is now standardized across all agent releases.
I also designed a LLM-as-a-Judge online scorer that reaches 92% agreement with human raters and deployed it in Braintrust to continuously monitor agent quality in production. This replaced manual QA sampling with a fully automated, always-on quality signal.
Security Framework
4 attack categories, pre-deployment security gate for all AI agent releases. Standardized across the entire Zendesk agent platform.
Prompt Injection
Manipulating agent instructions
Jailbreaks
Bypassing safety guardrails
Policy Violations
Breaking business rules
Adversarial Inputs
Edge-case exploitation
Replaced manual QA sampling with always-on quality monitoring. LLM-as-a-Judge evaluates every agent response in production.