Attenuator

Reduce alert fatigue

The Attenuator is a powerful feature of Jibril that enhances its security detection capabilities through AI-powered analysis of security events.

For a quick start, see how to use this feature in the Docker Container section.

Overview

The Attenuator acts as an intelligent filter that can analyze security events detected by Jibril and provide additional context, severity classifications, and even determine if an event is likely a false positive. This feature leverages AI models (like GPT-4o) to bring expert-level security analysis to each detection.

How It Works

When a security event is detected, the Attenuator:

  1. Takes the event details and forwards them to an AI service

  2. Prompts the AI to perform a security analysis of the event

  3. Determines if the event is likely a false positive (with high confidence)

  4. Independently assesses the severity of the event

  5. Provides a detailed description justifying its analysis
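The forwarding step above can be sketched as building an OpenAI-compatible chat-completions request around the event. This is a minimal illustration: the function name, prompt wording, and event fields are assumptions for the example, not Jibril's actual internals.

```python
import json

# Hypothetical sketch of the request the Attenuator could send to an
# OpenAI-compatible chat-completions endpoint. Names and the prompt
# text are illustrative only.
def build_analysis_request(event: dict, model: str = "gpt-4o",
                           temperature: float = 0.3) -> dict:
    """Wrap a detected security event in a chat-completions payload."""
    system_prompt = (
        "You are a security analyst. Given a detection event, decide "
        "whether it is a false positive, assess its severity, and "
        "justify your verdict. Reply as JSON with the keys "
        "false_positive, severity, description."
    )
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_prompt},
            # The raw event details travel as the user message.
            {"role": "user", "content": json.dumps(event)},
        ],
    }

event = {"kind": "detection", "name": "crypto_miner_files", "comm": "xmrig"}
payload = build_analysis_request(event)
print(payload["model"])          # gpt-4o
print(len(payload["messages"]))  # 2
```

The model and temperature defaults mirror the configuration defaults documented below.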

The Attenuator operates in three possible modes:

  • Amend: adds the AI verdict to the event without blocking it (default). This mode is particularly useful during initial deployment and fine-tuning periods, when you're still optimizing model parameters, temperature settings, and other configurations.

  • Reason: adds the AI verdict along with detailed reasoning to the event.

  • Block: filters out events determined to be false positives.
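The effect of the three modes on an event can be sketched as follows. The logic is an assumption drawn from the mode descriptions above, not Jibril's source code, and the field names are illustrative.

```python
# Illustrative sketch: how "amend", "reason", and "block" could affect
# an event once the AI verdict is available (assumed logic, not Jibril's).
def apply_verdict(event: dict, verdict: dict, mode: str = "amend"):
    """Return the event to emit, or None if it should be filtered out."""
    out = dict(event)
    out["ai"] = {
        "false_positive": verdict["false_positive"],
        "severity": verdict["severity"],
        "description": verdict["description"],
    }
    if mode == "reason":
        # "reason" mode additionally attaches the detailed reasoning.
        out["ai"]["reasoning"] = verdict.get("reasoning", "")
    if mode == "block" and verdict["false_positive"]:
        # "block" mode drops likely false positives entirely.
        return None
    return out

verdict = {"false_positive": True, "severity": "low",
           "description": "Benign CI tooling", "reasoning": "Known runner."}
event = {"name": "exec_from_tmp"}
print(apply_verdict(event, verdict, "amend") is None)  # False
print(apply_verdict(event, verdict, "block") is None)  # True
```

In "amend" and "reason" modes the event always reaches the printers; only "block" suppresses it.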

Configuration

You can configure the Attenuator through Jibril's configuration file or environment variables:

Configuration Options

| Description       | Config Option | Env Variable   | Default Value  |
| ----------------- | ------------- | -------------- | -------------- |
| Feature flag      | enabled       | -              | false          |
| API token         | token         | AI_TOKEN       | -              |
| AI model name     | model         | AI_MODEL       | gpt-4o         |
| Model temperature | temperature   | AI_TEMPERATURE | 0.3            |
| Operational mode  | mode          | AI_MODE        | amend          |
| AI service URL    | url           | AI_URL         | OpenAI API URL |

Example Configuration

To enable and configure the Attenuator in your Jibril setup, add the following to your configuration:

plugin:
  - jibril:hold
  - jibril:procfs
  - jibril:printers
  - jibril:attenuator:enabled=true:mode=reason
  - jibril:detect
  # - jibril:netpolicy:file=/home/rafaeldtinoco/netpolicy.yaml

The options listed above can be appended as key=value pairs to the attenuator plugin line in the configuration file, as shown.

Alternatively, you can set environment variables:

export AI_TOKEN=your-ai-token
export AI_MODEL=gpt-4o
export AI_TEMPERATURE=0.3
export AI_MODE=reason

Local and Private Models

The Attenuator can be used with local inference engines like Ollama to run private models on your own infrastructure. This approach offers several advantages:

  • Data Privacy: Keeps security event data within your environment

  • Cost Efficiency: Eliminates API usage costs

  • Customization: Allows fine-tuning of models for security-specific tasks

To use Ollama with the Attenuator, set the URL to your Ollama instance:

extensions:
  jibril:
    plugins:
      attenuator:
        enabled: "true"
        url: "http://localhost:11434/v1/chat/completions"
        model: "deepseek-coder:latest"

Response Format

The Attenuator provides rich context for each security event it analyzes:

{
  "false_positive": false,
  "severity": "high",
  "description": "Detailed explanation of why this event is or isn't a false positive",
  "reasoning": "Additional context and analysis (only in 'reason' mode)"
}
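A consumer of this format might want to validate a verdict before acting on it. The sketch below is illustrative: the field names come from the response format above, but the validation logic and the severity scale are assumptions (the document does not enumerate the possible severity values).

```python
import json

# Hedged sketch: validate a verdict matching the response format above.
# The set of severities is an assumption for the example.
VALID_SEVERITIES = {"low", "medium", "high", "critical"}

def parse_verdict(raw: str) -> dict:
    """Parse and sanity-check an AI verdict before acting on it."""
    verdict = json.loads(raw)
    if not isinstance(verdict.get("false_positive"), bool):
        raise ValueError("false_positive must be a boolean")
    if verdict.get("severity") not in VALID_SEVERITIES:
        raise ValueError(f"unexpected severity: {verdict.get('severity')}")
    return verdict

raw = '{"false_positive": false, "severity": "high", "description": "..."}'
v = parse_verdict(raw)
print(v["severity"])  # high
```

Rejecting malformed verdicts matters most in "block" mode, where a bad parse could otherwise silently drop a real detection.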

Use Cases

The Attenuator is particularly useful for:

  1. Reducing alert fatigue: filtering out false positives (in block mode)

  2. Prioritizing alerts: classifying severity accurately

  3. Contextualizing detections: adding expert analysis to help security teams understand the significance of events

  4. CI/CD environments: automatically filtering security events in automated workflows

Integration with GitHub Actions

The Attenuator is automatically enabled in GitHub Actions environments when an API token is provided, making it perfect for security testing in CI/CD pipelines.

Best Practices

  • Begin with "amend" or "reason" mode to evaluate the AI's judgments before using "block" mode

  • Tune the temperature setting to your needs: lower values produce more consistent verdicts, while higher values produce more diverse analyses

  • For production environments, consider using the most advanced AI model available

  • When using private models, allocate sufficient resources for inference, especially for real-time security monitoring
