Security scanning for LLM integrations
Test for Prompt injections, Rate limiting, Jailbreaking, Data extraction & privacy
TL;DR;
Vidoc checks if your LLM integrration can fail in a way you don’t want to. We probe for hallucination, data leakage, prompt injections, rate limiting, jailbreaking, privacy issues and more.
Setup once and get continuous security testing for your LLM integration. We support ANY LLM, including open source models.
Security issues we can detect:
- Prompt injections
- Toxicity generation
- Misconfigured rate limiting
- Jailbreaking
- Data extraction
- Privacy issues
- Security of Agent tools (APIs that your agent has access to)
How to setup scanning?
Vidoc LLM integration scanning is easy to setup.
- Create a
vidoc.yaml
file in the root of your project.
- Run
vidoc scan
in your terminal. - Get a report in your terminal and in your Vidoc dashboard.
Would like to learn more?
Please contact us at contact@vidocsecurity.com) or book at https://cal.com/team/vidoc/demo