Run Crawdad as a sidecar container alongside your AI agents. All scanning happens inside the Crawdad container — content never leaves your Docker network.
```yaml
services:
  crawdad:
    image: andrewsispoidis/crawdad-sidecar:latest
    environment:
      - CRAWDAD_LICENSE_KEY=${CRAWDAD_LICENSE_KEY}
      - CRAWDAD_BIND=0.0.0.0
    ports:
      - "7749:7749" # Security API
      - "7748:7748" # Anthropic proxy
      - "7747:7747" # OpenAI proxy
      - "7746:7746" # Google Gemini proxy
    restart: always

  your-agent:
    image: your-agent-image
    environment:
      - ANTHROPIC_BASE_URL=http://crawdad:7748
      - OPENAI_BASE_URL=http://crawdad:7747
    depends_on:
      - crawdad
```
Set `ANTHROPIC_BASE_URL=http://crawdad:7748` (and `OPENAI_BASE_URL=http://crawdad:7747`) in your agent container. All traffic then flows through Crawdad's 7-layer detection pipeline automatically. No code changes are needed.
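The "no code changes" claim rests on the provider SDKs reading their base URL from the environment. A minimal sketch of that lookup pattern (plain Python, no SDK dependency; `resolve_base_url` is an illustrative stand-in, not an SDK function):

```python
import os

def resolve_base_url(env_var: str, default: str) -> str:
    """Mimic how provider SDKs pick their endpoint: an environment
    override wins, otherwise the public API URL is used."""
    return os.environ.get(env_var, default)

# With ANTHROPIC_BASE_URL set in the container, the SDK talks to Crawdad.
os.environ["ANTHROPIC_BASE_URL"] = "http://crawdad:7748"
print(resolve_base_url("ANTHROPIC_BASE_URL", "https://api.anthropic.com"))
# → http://crawdad:7748
```

Because the override lives in the container environment, the agent's source code never mentions Crawdad at all.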
Crawdad runs transparent proxy listeners for each AI provider. Your agent's SDK sends requests to Crawdad instead of the real API. Crawdad scans the message, forwards clean requests to the real API, and returns the response. Blocked requests get a provider-native error response that SDKs handle correctly.
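The forward-or-block decision described above can be sketched as follows. This illustrates the pattern only, not Crawdad's implementation: `is_clean` stands in for the 7-layer pipeline, and the error shape is hypothetical (modeled on provider-style error JSON):

```python
def handle_request(message: str, is_clean) -> dict:
    """Scan-then-forward decision: clean messages pass through to the
    real API; blocked ones get a provider-shaped error the SDK can
    surface like any other API error."""
    if is_clean(message):
        return {"action": "forward"}  # relay to the real provider API
    return {
        "action": "block",
        # Hypothetical provider-native error payload
        "error": {
            "type": "invalid_request_error",
            "message": "Request blocked by security policy",
        },
    }
```

Returning a provider-shaped error (rather than dropping the connection) is what lets unmodified SDKs handle blocked requests gracefully.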
| Variable | Default | Description |
|---|---|---|
| `CRAWDAD_LICENSE_KEY` | (required) | Your Crawdad license key |
| `CRAWDAD_BIND` | `0.0.0.0` | Bind address (`0.0.0.0` in Docker, `127.0.0.1` on host) |
| `CRAWDAD_PORT` | `7749` | Port to listen on |
| `ANTHROPIC_BASE_URL` | (unset) | Set to `http://crawdad:7748` in the agent container for the transparent proxy |
| `OPENAI_BASE_URL` | (unset) | Set to `http://crawdad:7747` for the OpenAI proxy |
```shell
# From the repo root:
docker build -t andrewsispoidis/crawdad-sidecar:latest -f crawdad-sidecar/Dockerfile .
```
```shell
curl http://crawdad:7749/v1/health
# {"status":"ok","version":"0.10.0","mode":"zero-knowledge"}
```

Run this from another container on the same Docker network; from the host, use `http://localhost:7749/v1/health` (the port is published in the compose file above).
Ensure both containers are on the same Docker network. With docker-compose, services defined in the same file share a default network automatically.

Inside a Docker container, `localhost` refers to the container itself, not to other containers or the host. Use the service name `crawdad` instead, or set `CRAWDAD_HOST=crawdad`.
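If the agent is defined in a separate compose file, the automatic shared network no longer applies; a named network can bridge the two stacks. A sketch (the network name `crawdad-net` is illustrative):

```yaml
# Crawdad stack: create a named network both stacks can join.
networks:
  crawdad-net:
    name: crawdad-net

services:
  crawdad:
    image: andrewsispoidis/crawdad-sidecar:latest
    networks: [crawdad-net]

# Agent stack (separate file): join the same network as external.
# networks:
#   crawdad-net:
#     external: true
```

With both stacks attached to `crawdad-net`, the agent can reach Crawdad by service name exactly as in the single-file setup.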