AI-Powered Mitigation Suggestions
Optionally connect your own LLM for context-aware remediation guidance. ThreatMitigator never sends your source code to AI providers; only redacted threat metadata is shared.
- Infrastructure-specific fix recommendations with code examples and implementation effort estimates
- Your choice of OpenAI GPT-4o, Anthropic Claude, or a local model via Ollama; bring your own API keys
- Follow-up questions about specific threats, answered with detailed explanations and step-by-step remediation guides
- Sensitive data redacted before reaching any provider: resource names, IPs, and source code are never sent
Supported Providers
| Provider | Models | Setup |
|---|---|---|
| OpenAI | GPT-4o, GPT-4, GPT-3.5-turbo | `OPENAI_API_KEY` env var |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku | `ANTHROPIC_API_KEY` env var |
| Ollama | LLaMA, Mistral, CodeLlama, any local model | `OLLAMA_HOST` env var (default: localhost:11434) |
AI Features
Interactive Queries
Ask follow-up questions about specific threats:
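A sketch of what such a query could look like on the command line. The `ask` subcommand and the threat ID below are hypothetical illustrations, not taken from the tool's documented interface:

```shell
# Hypothetical invocation: the "ask" subcommand and threat ID "TM-0042"
# are illustrative; check the CLI's own help output for the real interface.
threatmitigator ask TM-0042 "Why is this open security group rated high severity?"
```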
AI-Enhanced Reports
Add `--ai-enhance` to any scan for detailed remediation steps with code examples:
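For example, assuming a `scan` subcommand (the flag `--ai-enhance` comes from the text above; the subcommand and path are illustrative):

```shell
# Run a normal rule-based scan, then let the configured LLM expand each
# finding into remediation steps. Subcommand and path are illustrative.
threatmitigator scan ./infrastructure --ai-enhance
```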
AI-Adjusted Severity
LLM analysis can refine severity scoring based on deployment context, providing more accurate risk assessments tailored to your environment.
Fully Optional
All core detection works without AI. The LLM adds depth, not coverage. Rule-based detection is always free and requires no external services.
Privacy-First Architecture
What Gets Sent to AI
| Data Type | Sent to AI? |
|---|---|
| Resource types | Yes (needed for analysis) |
| Security patterns | Yes |
| Resource names | No (redacted to [REDACTED]) |
| IP addresses | No (redacted to [NETWORK_RANGE]) |
| Source code | Never |
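As an illustration of the table above, a redacted threat record sent to a provider might look like the following. The field names are hypothetical; the redaction tokens `[REDACTED]` and `[NETWORK_RANGE]` are the ones documented above:

```json
{
  "resource_type": "aws_security_group",
  "resource_name": "[REDACTED]",
  "pattern": "ingress open to [NETWORK_RANGE] on port 22",
  "severity": "high"
}
```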
AI Security Hardening
- Rate limiting and request budgeting
- Secret management via the `secrecy` crate (keys zeroized on drop)
- Response validation against schemas
- Retry logic with exponential backoff
- Graceful degradation if AI is unavailable
Configuration
Configure your preferred LLM provider in `.threatmitigator.toml`:
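A sketch of what such a config could look like. The section and key names are illustrative, not the tool's documented schema:

```toml
# Hypothetical .threatmitigator.toml layout -- key names are illustrative;
# consult the tool's own documentation for the exact schema.
[ai]
provider = "openai"   # "openai" | "anthropic" | "ollama"
model = "gpt-4o"
# API keys are read from the environment (OPENAI_API_KEY etc.), not stored here.
```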
Provider Setup
OpenAI:
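Per the provider table above, the key is read from the environment. For example, in your shell profile (the key value is a placeholder):

```shell
# ThreatMitigator reads the key from the OPENAI_API_KEY environment variable.
export OPENAI_API_KEY="sk-..."   # placeholder; use your real key
```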
Anthropic Claude:
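Likewise for Anthropic (the key value is a placeholder):

```shell
# ThreatMitigator reads the key from the ANTHROPIC_API_KEY environment variable.
export ANTHROPIC_API_KEY="sk-ant-..."   # placeholder; use your real key
```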
Ollama (Local):
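For a local setup, point the client at your Ollama server (11434 is Ollama's default port) and pull a model once before scanning:

```shell
# OLLAMA_HOST tells clients where the local Ollama server listens.
export OLLAMA_HOST="http://localhost:11434"
# Then download a model once, e.g.:  ollama pull mistral
```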
Cost Considerations
AI features use your own API keys, so costs vary by provider:
- OpenAI GPT-4o: ~$0.005 per query
- Anthropic Claude Sonnet: ~$0.003 per query
- Ollama (local): Free, requires local GPU/CPU resources
ThreatMitigator’s AI integration enhances security analysis while keeping you in complete control of your data and costs.