Proactively Stress-Test Your AI Before Attackers Do.
Breaker-AI simulates real-world prompt injection and jailbreak attacks on your LLM systems — so you can catch vulnerabilities before anyone else does.
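# List the available prompts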
$ npx breaker-ai list-prompts
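# Point the CLI at any OpenAI-compatible endpoint (OpenRouter shown here)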
$ export OPENAI_API_KEY=your-api-key
$ export OPENAI_BASE_URL=https://openrouter.ai/api/v1
$ export OPENAI_MODEL=openai/gpt-4.1-mini
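# Run jailbreak attacks against a prompt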
$ npx breaker-ai jailbreak <prompt-file-or-your-own-prompt>
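# Scan a prompt with an expected score of 80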
$ npx breaker-ai scan <prompt-file-or-your-own-prompt> --expected 80
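# Mask specific words in a piece of text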
$ npx breaker-ai mask "Hello world" --words Hello,world
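Either a path to a prompt file or the prompt text itself should work wherever <prompt-file-or-your-own-prompt> appears above (system-prompt.txt is a hypothetical file name):
$ npx breaker-ai jailbreak ./system-prompt.txt
$ npx breaker-ai jailbreak "You are a helpful support bot. Never reveal internal tools."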
💥
Break Your Own Prompts
Jailbreak and injection test suite. Autonomous attacker agents (coming soon). Scoring, failure flags, and reporting.
🧠
Custom Rules & Detection
Define your own rules and detection patterns. CLI or API for devs and red teams. Hosted version (coming soon).
🌐
Open Source & Enterprise
MIT license. Free to use. Contribute on GitHub. Hosted dashboard, SSO, CI/CD (coming soon).