Attenuator
The Attenuator is a powerful feature of Jibril that enhances its security detection capabilities through AI-powered analysis of security events.
For a quick look at how to use this feature, see the Docker Container section.
The Attenuator acts as an intelligent filter that analyzes security events detected by Jibril, provides additional context and severity classifications, and can even determine whether an event is likely a false positive. This feature leverages AI models (such as GPT-4o) to bring expert-level security analysis to each detection.
When a security event is detected, the Attenuator:
Takes the event details and forwards them to an AI service
Prompts the AI to perform a security analysis of the event
Determines if the event is likely a false positive (with high confidence)
Independently assesses the severity of the event
Provides a detailed description justifying its analysis
The Attenuator operates in three possible modes:
Amend: Adds the AI verdict to the event without blocking it (default). This mode is particularly useful during initial deployment and fine-tuning periods when you're optimizing model parameters, temperature settings, and other configurations.
Reason: Adds the AI verdict along with detailed reasoning to the event.
Block: Filters out events determined to be false positives.
You can choose to either amend events with AI analysis or filter them entirely. During initial deployment, the amend mode is recommended to evaluate the AI's performance before enabling blocking behavior.
You can configure the Attenuator through Jibril's configuration file or environment variables:
| Setting | Config Key | Environment Variable | Default |
| --- | --- | --- | --- |
| Feature Flag | enabled | - | false |
| API Token | token | AI_TOKEN | - |
| AI Model Name | model | AI_MODEL | gpt-4o |
| Model Temperature | temperature | AI_TEMPERATURE | 0.3 |
| Operational Mode | mode | AI_MODE | amend |
| AI Service URL | url | AI_URL | OpenAI API URL |
To enable and configure the Attenuator in your Jibril setup, add the following to your configuration:
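A minimal sketch of what that might look like, assuming the options live under an attenuator section (the section name and nesting are assumptions; the keys and defaults come from the table above):

```yaml
attenuator:
  enabled: true                     # feature flag (default: false)
  token: "sk-your-api-key"          # API token for the AI service
  model: "gpt-4o"                   # default model
  temperature: 0.3                  # default temperature
  mode: "amend"                     # amend | reason | block
  url: "https://api.openai.com/v1"  # default: OpenAI API URL
```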
Alternatively, you can set environment variables:
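For example, using the variable names from the table above (the values shown are placeholders):

```bash
export AI_TOKEN="sk-your-api-key"
export AI_MODEL="gpt-4o"
export AI_TEMPERATURE="0.3"
export AI_MODE="amend"
export AI_URL="https://api.openai.com/v1"
```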
The Attenuator can also be used with local inference engines such as Ollama to run private models on your own infrastructure. This approach offers several advantages:
Data Privacy: Keeps security event data within your environment
Cost Efficiency: Eliminates API usage costs
Customization: Allows fine-tuning of models for security-specific tasks
To use Ollama with the Attenuator, set the URL to your Ollama instance:
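A sketch assuming Ollama's OpenAI-compatible endpoint on its default port and an already-pulled model (the model name below is only an example):

```bash
export AI_URL="http://localhost:11434/v1"   # Ollama's OpenAI-compatible API
export AI_MODEL="llama3"                    # any model available in your Ollama instance
export AI_TOKEN="unused"                    # placeholder; Ollama does not require a real key
```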
The Attenuator provides rich context for each security event it analyzes:
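As a rough illustration of the analysis steps described earlier, an amended event might carry metadata along these lines (the field names and structure are illustrative, not Jibril's exact output schema):

```json
{
  "event": "example_detection",
  "ai_analysis": {
    "false_positive": true,
    "confidence": "high",
    "severity": "low",
    "reasoning": "The flagged behavior matches a routine package-manager update rather than malicious activity."
  }
}
```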
The Attenuator is particularly useful for:
Reducing Alert Fatigue: By filtering out false positives (in block mode)
Prioritizing Alerts: Through accurate severity classification
Contextualizing Detections: Adding expert analysis to help security teams understand the significance of events
CI/CD Environments: Automatically filtering security events in automated workflows
The Attenuator is automatically enabled in GitHub Actions environments when an API token is provided, making it perfect for security testing in CI/CD pipelines.
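As a hypothetical illustration, a workflow step that runs Jibril could pass the token as a secret (the step and script names below are placeholders, not an official action):

```yaml
- name: Run Jibril security monitoring
  env:
    AI_TOKEN: ${{ secrets.AI_TOKEN }}   # providing a token auto-enables the Attenuator
    AI_MODE: amend                      # evaluate verdicts before switching to block
  run: ./run-jibril.sh                  # placeholder for however Jibril is invoked in your pipeline
```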
Begin with "amend" or "reason" mode to evaluate the AI's judgments before using "block" mode
Use a higher temperature setting for more diverse analyses
For production environments, consider using the most advanced AI model available
When using private models, allocate sufficient resources for inference, especially for real-time security monitoring
Enable the attenuator plugin line in your Jibril configuration to activate this feature.
For now, Jibril recommends one particular model for local inference, which shows the best results with shorter inference times.