Vidoc checks if your LLM integrration can fail in a way you don’t want to. We probe for hallucination, data leakage, prompt injections, rate limiting, jailbreaking, privacy issues and more.Setup once and get continuous security testing for your LLM integration. We support ANY LLM, including open source models.Security issues we can detect:
Prompt injections
Toxicity generation
Misconfigured rate limiting
Jailbreaking
Data extraction
Privacy issues
Security of Agent tools (APIs that your agent has access to)