Intelligence

AI-Powered Mitigation Suggestions

Optionally connect your own LLM for context-aware remediation guidance. ThreatMitigator never sends your source code to AI providers; only redacted threat metadata is shared.

Context-Aware Remediation

Get infrastructure-specific fix recommendations with code examples and implementation effort estimates.

Multiple LLM Providers

Choose from OpenAI GPT-4o, Anthropic Claude, or run locally with Ollama. Bring your own API keys.

Interactive Queries

Ask follow-up questions about specific threats and get detailed explanations with step-by-step remediation guides.

Privacy-First Design

Sensitive data is redacted before reaching any provider. Resource names, IPs, and source code are never sent.

Intelligent Security Guidance

Connect your own LLM to get context-aware remediation guidance. ThreatMitigator never sends your source code to AI providers; only redacted threat metadata ever leaves your machine.

Supported Providers

  • OpenAI: GPT-4o, GPT-4, GPT-3.5-turbo (set the OPENAI_API_KEY env var)
  • Anthropic: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku (set the ANTHROPIC_API_KEY env var)
  • Ollama: LLaMA, Mistral, CodeLlama, or any local model (set the OLLAMA_HOST env var; default: localhost)

AI Features

Interactive Queries

Ask follow-up questions about specific threats:

threatmitigator query THREAT-ID "How do I fix this?"

AI-Enhanced Reports

Add --ai-enhance to any scan for detailed remediation steps with code examples:

threatmitigator scan terraform ./infra --ai-enhance --format pdf

AI-Adjusted Severity

LLM analysis can refine severity scoring based on deployment context, providing more accurate risk assessments tailored to your environment.
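As an illustration of how deployment context might shift a rule-based score, here is a minimal sketch of severity adjustment. The rules used (internet exposure bumps severity up, dev environments tone it down) and the adjust_severity name are assumptions for the example, not ThreatMitigator's actual model.

```python
# Hypothetical sketch: context-aware severity adjustment.
# The adjustment rules below are illustrative assumptions only.

LEVELS = ["low", "medium", "high", "critical"]

def adjust_severity(base: str, context: dict) -> str:
    """Shift a rule-based severity level based on deployment context."""
    idx = LEVELS.index(base)
    if context.get("internet_facing"):
        idx = min(idx + 1, len(LEVELS) - 1)   # exposed resources: bump up
    if context.get("environment") == "dev":
        idx = max(idx - 1, 0)                 # isolated dev sandbox: tone down
    return LEVELS[idx]

print(adjust_severity("medium", {"internet_facing": True}))  # prints "high"
```

In practice the LLM supplies the context signals; the point is that the same rule finding can land at different severities depending on where it is deployed.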

Fully Optional

All core detection works without AI. The LLM adds depth, not coverage. Rule-based detection is always free and requires no external services.


Privacy-First Architecture

What Gets Sent to AI

  • Resource types: sent (needed for analysis)
  • Security patterns: sent
  • Resource names: not sent (redacted to [REDACTED])
  • IP addresses: not sent (redacted to [NETWORK_RANGE])
  • Source code: never sent
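The redaction step described above can be sketched roughly as follows. The field names, metadata shape, and redact function are illustrative assumptions for this example, not ThreatMitigator's actual implementation; only the placeholder tokens ([REDACTED], [NETWORK_RANGE]) come from the table.

```python
import re

# Hypothetical sketch of pre-send redaction: names and IPs are masked,
# source code is dropped, before any threat metadata reaches an LLM.
IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}(?:/\d{1,2})?\b")

def redact(metadata: dict) -> dict:
    """Return a copy of threat metadata safe to send to a provider."""
    safe = dict(metadata)
    safe.pop("source_code", None)              # never sent
    if "resource_name" in safe:
        safe["resource_name"] = "[REDACTED]"   # names are masked
    for key, value in safe.items():
        if isinstance(value, str):
            safe[key] = IP_RE.sub("[NETWORK_RANGE]", value)
    return safe

meta = {
    "resource_type": "aws_security_group",     # kept: needed for analysis
    "resource_name": "prod-db-sg",
    "finding": "ingress open to 0.0.0.0/0",
    "source_code": 'resource "aws_security_group" ...',
}
print(redact(meta))
```

The resource type survives untouched so the LLM can still reason about the finding, while everything identifying (names, CIDRs, code) is masked or omitted.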

AI Security Hardening

  • Rate limiting and request budgeting
  • Secret management via secrecy crate (zeroized on drop)
  • Response validation against schemas
  • Retry logic with exponential backoff
  • Graceful degradation if AI is unavailable
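The retry and graceful-degradation items above can be sketched together in a few lines. The with_backoff helper and its parameters are hypothetical, not ThreatMitigator's API; the pattern (exponential delay with jitter, falling back to None so rule-based results still ship) is the point.

```python
import random
import time

# Hypothetical sketch: retry a provider call with exponential backoff,
# degrading gracefully (returning None) if the AI stays unavailable.
def with_backoff(call, retries=4, base=0.5):
    for attempt in range(retries):
        try:
            return call()
        except ConnectionError:
            if attempt == retries - 1:
                return None                    # graceful degradation
            delay = base * 2 ** attempt        # 0.5s, 1s, 2s, ...
            time.sleep(delay + random.uniform(0, 0.1))  # add jitter
    return None
```

A None result would mean the report falls back to plain rule-based output rather than failing the scan.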

Configuration

Configure your preferred LLM provider in .threatmitigator.toml:

[ai]
provider = "anthropic"  # or "openai" or "ollama"
model = "claude-3-5-sonnet-20241022"

# Ollama-specific settings
[ai.ollama]
host = "http://localhost:11434"
timeout = 60

Provider Setup

OpenAI:

export OPENAI_API_KEY="sk-..."
threatmitigator query T-001 "How do I fix this?"

Anthropic Claude:

export ANTHROPIC_API_KEY="sk-ant-..."
threatmitigator query T-001 --provider anthropic

Ollama (Local):

export OLLAMA_HOST="http://localhost:11434"
threatmitigator query T-001 --provider ollama --model llama3

Cost Considerations

AI features use your own API keys, so costs vary by provider:

  • OpenAI GPT-4o: ~$0.005 per query
  • Anthropic Claude Sonnet: ~$0.003 per query
  • Ollama (local): Free, requires local GPU/CPU resources
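Using the per-query figures above, a back-of-envelope monthly estimate is easy to work out; the numbers are approximate and real costs depend on token counts and current provider pricing.

```python
# Rough monthly cost estimate from the approximate per-query figures above.
PER_QUERY = {"gpt-4o": 0.005, "claude-sonnet": 0.003, "ollama": 0.0}

def monthly_cost(provider: str, queries_per_day: int, days: int = 30) -> float:
    return PER_QUERY[provider] * queries_per_day * days

print(f"${monthly_cost('gpt-4o', 50):.2f}")   # 50 queries/day ~= $7.50/month
```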

ThreatMitigator’s AI integration enhances security analysis while keeping you in complete control of your data and costs.

See it in action

Enhance rule-based detection with intelligent remediation guidance from leading LLM providers.


Ready to Secure Your Infrastructure?

Join teams already using ThreatMitigator to identify security threats in their Terraform, CloudFormation, Docker, and Helm configurations.