Assure1 Event Pipe Aggregator

Overview

The Assure1 Event Pipe Aggregator is a generic application that runs a command on the local system, parses each line of the command's output with customizable rules, and creates de-duplicated events within Assure1.
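
Conceptually, the aggregator opens a pipe to the configured command and runs the rules against each line of output. The following Perl sketch only illustrates that flow; it is not the application's actual implementation, and the command, host name, and field values are placeholders.

    # Simplified illustration of the pipe-and-parse flow (not the real Piped code).
    use strict;
    use warnings;

    my $command = 'tail -f /var/log/messages';   # same as the default Command setting
    open(my $pipe, '-|', $command) or die "Cannot run '$command': $!";

    while (my $line = <$pipe>) {
        chomp $line;
        # In the real application, the Base Rules run here and populate $Event;
        # at minimum 'Node' and 'Summary' must be set for an event to be inserted.
        my %Event = (
            Node    => 'example-host',
            Summary => $line,
        );
        # ... the event would then be de-duplicated and inserted ...
    }
    close($pipe);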

Pipe Aggregator Setup

  1. Review the command that is executed in the configuration to see what will be executed for processing. Update the command as needed.

  2. Review the logic in the rules files referenced in the configuration to see the processing that will be done on the data that is returned:

    • "LoadRules" will be executed during application startup to load data that might be needed during processing.
    • "IncludeRules" will be read during application startup to load additional files that might be called during processing.
    • "BaseRules" will be executed for each event that is selected from the query above.

    Update the logic as needed.

  3. Enable the default Service, unless a specific configuration option is needed:

    Configuration -> Broker Control -> Services

Default Service

Field | Value
Package Name | coreCollection-app
Service Name | Event Pipe Aggregator
Service Program | bin/core/collection/Piped
Service Arguments |
Service Description | Pipe (command) Aggregator that reads event lines from output of a command
Failover Type | Standalone (Supported: Standalone, Primary/Backup)
Status | Disabled
Privileged | (Checked)

Default Configuration

Name | Value | Possible Values | Notes
BaseRules | collection/event/pipe/base.rules | Text, 255 characters | Relative path to Base Rules.
BranchDir | core/default | Text, 255 characters | Relative path to Rules directory.
Command | tail -f /var/log/messages | Text, 255 characters | Command run by the aggregator, excluding the pipe - NO RELOAD CONFIG SUPPORT.
IncludeRules | collection/event/pipe/base.includes | Text, 255 characters | Relative path to Include Rules.
LoadRules | collection/event/pipe/base.load | Text, 255 characters | Relative path to Load Rules.
LogFile | logs/EventPipe.log | Text, 255 characters | Relative path to Log File.
LogLevel | ERROR | OFF, FATAL, ERROR, WARN, INFO, DEBUG | Logging level used by the application.
ShardID | 1 | Integer | Database shard to be used.
Threads | 3 | Integer | Number of process threads created. The aggregator takes a third of this value (rounded up) for database threads unless overridden by the "DBThreads" application configuration.
Capture | Disabled | Enabled/Disabled | Optional - If enabled, saves the raw message in the Log.
DBThreads | | Integer | Optional - Number of database threads to be created. If not specified, defaults to a third (rounded up) of the "Threads" application configuration.
FailoverBufferLimit | 0 | Integer | Optional - Enables the Failover Standby buffer, which keeps N seconds worth of data and replays it when the application becomes Failover Active (0 = off, N = seconds to keep). See the $buffer and $received tokens.
FieldSetFile | | Text, 255 characters | Optional - Path to a CSV file containing a custom list of fields that will be used when inserting data. (Requires InsertSQLFile.)
InsertSQLFile | | Text, 255 characters | Optional - Path to a file containing a custom SQL INSERT statement for handling event inserts. (Requires FieldSetFile.)

Best Practices

The following list shows the best practices for working with this application:

  • The PIPE Aggregator can run any executable that is accessible to the local system. An executable that is located on a remote system can be run by mounting the remote file system to the server running the PIPE Aggregator service.

Rules

This aggregator uses the Assure1 standard rules architecture. The rules are written in Perl syntax. Refer to the Assure1 rules documentation for details on rules creation.
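
As a shape reference, a Base Rules file for this aggregator typically tests the incoming line and either populates the event or discards it. The sketch below is an assumed example, not the shipped base.rules; the match pattern, host name, and severity are placeholders, and the tokens it uses are described under Tokens below.

    # Hypothetical Base Rules sketch - runs once for every line read from the command.
    if ($line =~ /error/i) {
        $Event->{'Node'}     = 'example-host';   # required
        $Event->{'Summary'}  = $line;            # required
        $Event->{'Severity'} = 4;
        $Event->{'Method'}   = 'PIPE';
    }
    else {
        # Nothing of interest on this line; do not create an event.
        $discard_flag = 1;
    }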

Tokens

The aggregator exposes the following tokens for rules processing.

Token | Description
$Event | Reference to the hash that is used to create and insert the event data into the database. Keys map to the fields within the target table, and assigned values are inserted into the database for those fields (e.g. $Event->{'IPAddress'} = '192.0.2.1' assigns the event IP address '192.0.2.1'). At least the 'Node' and 'Summary' fields must be set, or no event is inserted.
$received | Epoch time the line was received by the aggregator.
$buffer | Flag indicating whether the line was buffered during standby and replayed (0 = No, 1 = Yes).
$line | The message line read from the command output, delimited by carriage return.
$discard_flag | Flag for discard (0 = No, 1 = Yes).
$count | Message counter.
$AppConfig | Hash reference to the application configuration name-value pairs that were configured (e.g. use $AppConfig->{'Host'} to retrieve the set value for 'Host').
$CustomHash | Custom key-value cache available across all rules. Contents are commonly defined in Load Rules and then used in Base or other rules. NOTE: This variable is a shared object, and any additional sub-hashes or arrays must be shared before use or the error "Invalid value for shared scalar" is raised. Instantiate the sub-hash/array using '&share({})', e.g. $CustomHash->{SubObject} = &share({});
$StorageHash | Internal cache used as the StorageHash option when calling rules functions such as FindDeviceID(). NOTE: The structure of this cache is subject to change! It is not recommended for custom global storage or manual manipulation; use $CustomHash instead.
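
For instance, Load Rules commonly prime $CustomHash with lookup data that the Base Rules then read for each line. The snippet below is a hedged sketch built from the tokens above; the 'Severities' key and its values are invented for illustration.

    # Hypothetical Load Rules sketch - runs once at application startup.
    # Sub-hashes of $CustomHash must be shared before use.
    $CustomHash->{'Severities'} = &share({});
    $CustomHash->{'Severities'}->{'error'}   = 4;
    $CustomHash->{'Severities'}->{'warning'} = 3;

    # Later, in the Base Rules, the cached values and the application
    # configuration can be read for each line, for example:
    #   $Event->{'Severity'} = $CustomHash->{'Severities'}->{'error'};
    #   my $logfile          = $AppConfig->{'LogFile'};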

Example Integration

In this example, the Pipe Aggregator will be used to tail the "/var/log/secure" file and create an event when an unauthorized user tries to "sudo". When the file is tailed and the specified line is logged, an event will be created with Severity 5 (Critical) and the other values set below. The data will be parsed out so useful information can be shown directly in the event.

  1. Go to the Rules UI:

    Configuration -> Rules

  2. Expand the folder path: core -> default -> collection -> event -> pipe

  3. Select the "base.rules" file, then add the following logic before the "else" block of code. This will watch the file for the line user NOT in sudoers and create an event when found:

    • Logic

      # Matches sudo failures logged to /var/log/secure, typically of the form:
      #   jsmith : user NOT in sudoers ; TTY=pts/0 ; PWD=/home/jsmith ; USER=root ; COMMAND=/bin/ls
      elsif ($line =~ /user NOT in sudoers/){
          $Event->{'Method'}    = "PIPE";
          $Event->{'SubMethod'} = "sudo Pipe";
          $Event->{'Severity'}  = 5;            # Critical
          $Event->{'Node'}      = hostfqdn();   # FQDN of the local host
      
          # Capture the TTY, PWD, USER, and COMMAND fields from the log line
          $line                 =~ /TTY=(.*) ; PWD=(.*) ; USER=(.*) ; COMMAND=(.*)/;
          $Event->{'Summary'}   = "User: $3 On Terminal: $1 ran command $4 in Directory $2";
      }
      
    • Click "Submit", then enter a commit message, then click "OK".

  4. Go to the Services UI:

    Configuration -> Broker Control -> Services

  5. Select the "Event Pipe Aggregator", then click the "Clone" button. Set the following:

    • Command => tail -f /var/log/secure
    • Click "Submit".
  6. Verify the service starts and events are received.

Administration Details

The following list shows the technical details needed for advanced administration of the application:

  • Package - coreCollection-app

  • Synopsis - ./Piped [OPTIONS]

  • Options:

     -c, --AppConfigID N   Application Config ID (Service, Job, or Request ID)
     -?, -h, --Help        Print usage and exit
    
  • Threaded - Multi-Threaded