AI-Powered Analysis
Optional LLM integration for context-aware remediation recommendations. Works with both IaC and source code scanning.
Get infrastructure and code-specific fix recommendations that understand your configuration, cloud provider, and programming language.
Choose from OpenAI GPT-4o, Anthropic Claude Sonnet, or run locally with Ollama. Bring your own API keys.
Ask natural language questions about threats in IaC or source code. Get detailed explanations and step-by-step remediation guides.
AI-enhanced severity assessment based on your infrastructure context, code patterns, and potential business impact.
Intelligent Security Guidance
ThreatMitigator’s AI integration is completely optional but provides powerful capabilities when enabled. Unlike rule-based detection, which runs locally without any external dependencies, AI features connect to your chosen LLM provider using your own API keys.
AI analysis works seamlessly with both Infrastructure as Code scanning and source code connectivity scanning, providing intelligent remediation guidance tailored to your specific threats—whether they’re in Terraform configurations or application code.
Supported LLM Providers
OpenAI
- GPT-4o - Latest and most capable model
- GPT-4 Turbo - Fast and cost-effective
- GPT-3.5 Turbo - Budget-friendly option
Anthropic Claude
- Claude Sonnet 4 - Balanced performance and cost
- Claude Opus 4 - Most capable reasoning
- Claude Haiku - Fastest responses
Ollama (Local)
- LLaMA 3 - Open-source foundation model
- Mistral - Efficient local inference
- CodeLlama - Optimized for code analysis
- Custom models - Bring your own fine-tuned models
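For the local option, pulling a model with Ollama's own CLI is all the setup the model side needs; the model name below is just an example, and `ollama serve` is only required if the server is not already running in the background:

```bash
# Download a model into the local Ollama library and start the server,
# which listens on http://localhost:11434 by default.
ollama pull llama3
ollama serve
```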
AI Capabilities
Remediation Recommendations
Get detailed, step-by-step instructions for fixing detected threats:
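A rule that leaves SSH open to the whole internet, for instance, would typically come back with guidance to restrict the ingress range, along the lines of the Terraform sketch below (an illustration written for this page, not literal ThreatMitigator output; the resource names and CIDR are placeholders):

```hcl
# Remediated rule: the flagged version used cidr_blocks = ["0.0.0.0/0"],
# exposing port 22 to the entire internet. Scoping ingress to a known
# network range is the usual first step in the recommended fix.
resource "aws_security_group_rule" "ssh_ingress" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["10.20.0.0/16"]            # placeholder: your VPN or bastion range
  security_group_id = aws_security_group.app.id   # assumes an existing security group
}
```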
Get intelligent remediation for source code connections too:
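A typical case is a hardcoded database credential, where the recommended fix is to read the secret from the environment (or a secrets manager) and enforce an encrypted connection. The sketch below is illustrative and assumes a PostgreSQL client; it is not actual tool output:

```python
import os

import psycopg2  # example driver; the same pattern applies to any client library

# Flagged pattern: a plaintext password embedded in the connection string, e.g.
#   conn = psycopg2.connect("postgresql://app:S3cret!@db.internal/orders")
#
# Remediated version: credentials come from the environment and TLS is required.
conn = psycopg2.connect(
    host=os.environ["DB_HOST"],
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    sslmode="require",  # refuse unencrypted connections to the database
)
```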
Context-Aware Analysis
The AI understands your specific infrastructure and code configuration:
- Resource relationships and dependencies
- Cloud provider-specific best practices
- Programming language idioms and frameworks
- Database connection security patterns
- API authentication best practices
- Impact of changes on existing resources
- Compliance requirements and standards
Interactive Exploration
Ask follow-up questions to understand threats better, for example why a finding was rated high severity, how a proposed fix affects existing resources, or which compliance requirements it relates to.
Privacy and Control
Your Data, Your Choice
- Bring your own API keys - You control the LLM provider
- No ThreatMitigator servers involved - Direct connection to your chosen provider
- Explicit opt-in - AI features disabled by default
- Local-first core - All rule-based detection runs offline
What Gets Sent
When you use AI features, ThreatMitigator sends:
- The specific threat details you’re querying
- Relevant infrastructure configuration context
- Your question or prompt
What never gets sent:
- Your entire infrastructure configuration
- Secrets or sensitive credentials (automatically redacted)
- Any data when AI features are disabled
Configuration
Configure your preferred LLM provider in .threatmitigator.toml:
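The exact schema is not reproduced on this page, so treat the sketch below as a hypothetical layout; the table and key names are assumptions meant to convey the shape of the file rather than the documented options:

```toml
# .threatmitigator.toml (hypothetical layout; keys are illustrative)
[ai]
enabled  = true              # AI features are opt-in and disabled by default
provider = "anthropic"       # "openai", "anthropic", or "ollama"

[ai.anthropic]
model       = "claude-sonnet-4"
api_key_env = "ANTHROPIC_API_KEY"    # read the key from an environment variable

[ai.ollama]
model    = "llama3"
endpoint = "http://localhost:11434"  # Ollama's default local address
```

Referencing the API key through an environment variable rather than pasting it into the file keeps secrets out of version control.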
Cost Considerations
AI features use your own API keys, so costs vary by provider:
- OpenAI GPT-4o: ~$0.005 per query
- Anthropic Claude Sonnet: ~$0.003 per query
- Ollama (local): Free, requires local GPU/CPU resources
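As a rough worked example at the rates above, 1,000 AI-assisted queries in a month would come to about $5 on GPT-4o or about $3 on Claude Sonnet; actual costs depend on how much context each query carries.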
Rule-based detection is always free and requires no external services.
ThreatMitigator’s AI integration enhances security analysis while keeping you in complete control of your data and costs.
See it in action
Enhance rule-based detection with intelligent remediation guidance from leading LLM providers.
Ready to Secure Your Infrastructure?
Join teams already using ThreatMitigator to identify security threats in their Terraform code.