Attenuator

AI-Powered Analysis

Overview

The Attenuator acts as an intelligent filter that analyzes security events detected by Jibril, provides additional context, re-classifies severity, and determines whether an event is likely a false positive. This feature leverages AI models to bring expert-level security analysis to each detection.

Think of the Attenuator as a security analyst who can examine an event and immediately provide a detailed explanation of whether or not it is a false positive. This analysis is not based solely on the event's details, but also takes into account the broader context and the environment in which the event occurred.

Operational Modes

Amend

In amend mode, the attenuator enriches events with an AI-generated verdict: it adds fields indicating false-positive likelihood and severity adjustments, explains its reasoning, and suggests a new importance level where applicable. No events are filtered out in this mode.

Reason

In reason mode, the attenuator builds on amend mode by including detailed analytical reasoning for its verdicts (explaining the model's thought process). This aids in understanding the AI's decision-making.

Block

In block mode, the attenuator automatically drops events it deems false positives, without further processing. This is suitable for reducing noise in critical environments, but should be used with caution, since genuine detections misjudged as false positives are discarded. A minimal mode-selection sketch is shown below.
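
The mode is selected with the mode option (or the AI_MODE environment variable) documented in the configuration section below; a minimal sketch using only options that appear in that section:

yaml
feature_options:
  attenuator:
    enabled: true
    mode: block # one of: amend, reason, block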

Configuration

Configure the Attenuator through Jibril's configuration file or environment variables.

Configuration Options

Description         Config Option   Env Variable      Default Value
Feature Flag        enabled         -                 false
API Token           token           AI_TOKEN          -
AI Model Name       model           AI_MODEL          gpt-4o
Model Temperature   temperature     AI_TEMPERATURE    0.3
Operational Mode    mode            AI_MODE           amend
AI Service URL      url             AI_URL            OpenAI API URL

Example Configuration

yaml
features:
  - attenuator # enable AI-powered analysis of detected events.

feature_options:
  # the feature must be enabled for the option to be used.
  attenuator:
    enabled: true
    url: https://api.openai.com/v1/chat/completions
    port: 443
    model: gpt-5
    temperature: 1
    mode: reason

Environment Variables as Alternative

bash
export AI_TOKEN=your-ai-token
export AI_MODEL=gpt-4o
export AI_TEMPERATURE=0.3
export AI_MODE=reason
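
Before enabling the feature, you can sanity-check the token and endpoint with a manual request. The curl call below is only an illustration against the OpenAI chat completions API and is not part of Jibril itself:

bash
# Quick connectivity/token check against the configured endpoint.
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $AI_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "ping"}]}'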

Local and Private Models

The Attenuator can be used with local inference engines such as Ollama to run private models on your own infrastructure.

Advantages
  • Data Privacy - Keeps security event data within your environment
  • Cost Efficiency - Eliminates API usage costs
  • Customization - Allows fine-tuning of models for security-specific tasks

To use Ollama with the Attenuator, set the URL to your Ollama instance:

yaml
features:
  - attenuator

feature_options:
  attenuator:
    enabled: true
    url: "http://localhost:11434/v1/chat/completions"
    model: "deepseek-coder:latest"

Note: Ensure your local model is capable of handling the security analysis tasks required by the Attenuator. Deepseek-coder is an example model; choose one that fits your needs.
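
As a rough sketch (assuming a local Ollama installation), pulling the model and making sure the server is running looks like this; Ollama exposes an OpenAI-compatible chat completions endpoint on port 11434 by default:

bash
# Download the model referenced in the example configuration above.
ollama pull deepseek-coder:latest

# Start the server if it is not already running (listens on http://localhost:11434).
ollama serve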

Response Format

The Attenuator provides rich context for each security event it analyzes:

Example Response 01
json
{
  ...
  "attenuator": {
    "is_false_positive": true,
    "new_importance": "low",
    "interpretation": "The event involves the use of curl to access a URL on pastebin.com over HTTPS, which is a common and legitimate action for users retrieving data from pastebin. The command was executed by a user with UID 1000, indicating a non-root user, and there is no evidence of malicious intent or abnormal behavior in the process ancestry or file access patterns. The network flow shows a standard HTTPS connection to pastebin.com, which is not inherently suspicious. Therefore, this event is likely a false positive.",
    "attenuated_by": "gpt-4o"
  }
  ...
}

Example Response 02
json
{
  ...
  "attenuator": {
    "is_false_positive": false,
    "new_importance": "low",
    "interpretation": "User rafaeldtinoco ran '/usr/bin/curl -q https://xvideos.com' from an interactive sshd/bash session. Egress TLS flows to xvideos.com resolved IPs 185.88.181.9 and .10 on 443 match the command. Typical TLS/CA files were read; no anomalous file writes or process injection observed. The activity is deliberate and accurately attributed to curl, confirming adult site access and not a sensor misfire.",
    "attenuated_by": "gpt-5"
  },
  ...
}
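
If your pipeline exports events as JSON (an assumption about your setup, not a Jibril requirement), the added fields can be post-processed with standard tools. For example, keeping only events the model did not flag as false positives:

bash
# Keep events whose attenuator verdict says they are NOT false positives.
# Assumes one event object per input (a JSON stream).
jq 'select(.attenuator.is_false_positive == false)' events.json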

Use Cases

  • Reduce alert fatigue by filtering false positives.
  • Prioritize alerts based on accurate severity classification.
  • Provide contextualized detections with expert analysis.
  • Filter out noise in high-volume environments (such as CI/CD).

Best Practices

  • Start with "amend" or "reason" mode to evaluate the AI's judgments before enabling "block" mode.
  • Use higher temperature settings if you want more varied analyses; keep it low (the default is 0.3) for consistent verdicts.
  • Choose a model that balances accuracy against cost for your event volume.
  • Allocate sufficient compute resources when running private model inference locally.