SYNOPSIS
tenshi [ -c <conf file> ] [ -C ] [ -d <debug level> ] [ -f ] [ -h ] [ -p ] [ -P <pid file> ]
DESCRIPTION
tenshi is a log monitoring program, designed to watch one or more log files for lines matching user-defined regular expressions and to report on the matches. The regular expressions are assigned to queues, each of which has an alert interval and a list of mail recipients.
Queues can be set to send a notification as soon as there is a log line assigned to it, or to send periodic reports.
Additionally, uninteresting fields in the log lines (such as PID numbers) can be masked with the standard regular expression grouping operators ( ). This allows cleaner and more readable reports. All reports are separated by hostname and all messages are condensed when possible.
The program reads a configuration file (tenshi.conf) and then forks a daemon for monitoring the specified log files.
OPTIONS
-c <conf file>
Read configuration from the specified file. The default file is /etc/tenshi/tenshi.conf.
All directives are shown with their standard default value where applicable; if a directive is omitted, the default value will be used.
EXTERNAL CONFIGURATION FILES
All configuration directives can be optionally split into different configuration files and then read with the two following statements.
- include <configuration file>
- Parse the specified configuration file.
- includedir <directory>
- Parse all files inside <directory>. The files will be parsed in alphabetical order; keep in mind that regexp order matters, so includedir should be used carefully. See REGEXP DEFINITIONS for details.
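As a sketch, a split configuration (all file names here are hypothetical) could look like this:

```
# /etc/tenshi/tenshi.conf
include /etc/tenshi/options.conf

# Files in rules.d are parsed in alphabetical order, so numeric
# prefixes can be used to keep trash rules before catch-all rules:
#   00-trash.conf, 50-mail.conf, 99-catchall.conf
includedir /etc/tenshi/rules.d
```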
STATIC OPTIONS
These options will be read the first time tenshi reads its config file. They cannot be changed by re-reading the config file. If you change one of these options and HUP tenshi, it will die. You have been warned.
- set uid tenshi
- Specify the effective user ID of the process when in daemon mode. The user must be able to read the selected log files and the configuration file, and to write the specified pid file. Never use privileged users here, since it's usually not necessary (log file permissions can be set accordingly with most syslog implementations).
- set gid tenshi
- Specify the effective group ID of the process when in daemon mode.
- set pidfile /var/run/tenshi.pid
- The file containing the PID of the process, useful for start/stop scripts.
- set logfile <log file path>
- A log file to monitor. This may be specified multiple times to watch more than one log file. Depending on your tail implementation, you might need to use the tail_multiple setting for multiple files to work. This mode can be used along with the fifo and listen settings.
- set tail /usr/bin/tail -q --follow=name --retry -n 0
- Specify the path and arguments for the tail binary used for reading the log files. The invocation must be tuned to your current 'tail' implementation; the default values are configured for standard GNU coreutils tail. The --follow=name and --retry flags should deal properly with log rotation. If they are missing from your implementation, we suggest using something like 'cp /dev/null logfile' as a safe way to clear the log file upon rotation.
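For instance, the invocation could be adjusted like this (the BSD-style line is only a suggestion; always check your local tail(1) man page):

```
# GNU coreutils tail (the default):
set tail /usr/bin/tail -q --follow=name --retry -n 0
# A possible BSD-style alternative; -F follows across log rotation
# on most BSD implementations:
set tail /usr/bin/tail -F -n 0
```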
- set tail_multiple <on|off>
- Some tail implementations do not handle more than one log file. When this option is enabled multiple tail commands will be forked, instead of a single command with multiple arguments. This option is disabled by default.
- set fifo <fifo path>
- A FIFO file to monitor. This option allows you to use a syslog-ng pipe() destination (or any other syslog implementation that allows FIFO usage). It may be specified multiple times to watch more than one FIFO. This option is meant to be used only when the installed 'tail' binary doesn't behave properly with FIFOs; please test your tail implementation before using this. This mode can be used along with the logfile and listen settings.
- set listen 0.0.0.0:514
- Enables syslog server mode. With this option tenshi will bind to the specified address:port pair and read messages, acting like a syslog server. We recommend filtering the port accordingly and possibly using something like stunnel to encrypt the traffic. This mode can be used along with the logfile and fifo settings.
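Putting the static options together, a minimal daemon setup (the log paths and values are illustrative, not required defaults) might look like:

```
set uid tenshi
set gid tenshi
set pidfile /var/run/tenshi.pid
set tail /usr/bin/tail -q --follow=name --retry -n 0
set logfile /var/log/messages
set logfile /var/log/maillog
```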
DYNAMIC OPTIONS
These options are set each time the config file is read. tenshi reads its config file once on start-up and whenever it receives a HUP.
- set sleep 5
- The loop sleep time for the notification process. The value must be <= 60 seconds.
- set limit <number of lines>
- The maximum number of messages per hostname allowed in a report; any lines past the maximum will be omitted and a warning included. If this option is omitted, no limit is applied.
- set pager_limit <number of lines>
- The maximum number of messages per hostname allowed in pager-friendly reports; any lines past the maximum will be omitted. If this option is omitted, no limit is applied.
- set logprefix <regexp>
- All valid syslog messages are parsed by default, while non-syslog ones are discarded unless the special noprefix queue is set. This option allows one to define an additional valid prefix for watching other types of logs. If the regexp is matched, the prefix is removed from the log line and the first grouped string is used for the hostname field. This may be specified multiple times to watch many different non-syslog logs.
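As a sketch, for a hypothetical log format like '2011-01-01 12:00:00 host1 some message', a prefix regexp could capture the hostname in its first group:

```
# The first ( ) group becomes the hostname field; the matched
# prefix is stripped before the regexp rules are applied:
set logprefix ^\S+ \S+ (\S+)
```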
- set mask ______
- The mask for strings enclosed by the grouping operators ( ). See the REGEXP DEFINITIONS section. 'set mask' on its own will set the mask to an empty string.
- set mailserver localhost
- The mail server to be contacted for sending out reports.
- set mailtimeout 10
- The timeout in seconds for mail server reply.
- set subject tenshi report
- The subject of report emails, the queue name is always automatically appended.
- set hidepid <on|off>
- This option turns on automatic stripping of 'foo[1234]:' style PID strings from the start of log lines, i.e. 'foo[1234]:' becomes 'foo:'. This allows you to write regexps without worrying about masking the PID. Bear in mind that any time you change this option you will need to re-write your regexp rules or they will not work. This option is disabled by default.
- set filter <queue> <filter path> <arguments>
- When this option is enabled, all reports matching the specified queue will be passed on STDIN to the specified filter, and the resulting output is sent via SMTP instead of the original report. The full path of the filter application must be specified.
- set csv <cron_spec> <filter path> <arguments>
- This feature allows periodic reporting, using a five-field cron-style specification like the set queue option, to the specified filter. The output is pre-formatted as CSV (Comma Separated Values) in hostname,log,hits format. This feature was coded to use AfterGlow (http://afterglow.sf.net) as a filter for graphing tenshi output. Check the FAQ for sample usage.
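A sketch of the idea (the cron spec, filter path and arguments are hypothetical):

```
# Every Sunday at 01:00, pipe hostname,log,hits CSV lines to a filter:
set csv [0 1 * * 0] /usr/local/bin/afterglow.pl -t
# A line reaching the filter might look like:
#   host1,sshd: Accepted rsa for ______ from ______ port ______,42
```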
- set sort_order <descending|ascending>
- The sorting order for reports. It can be either descending or ascending; the number of messages is used as the key for sorting the log messages. The default order is ascending.
- set resolve <on|off>
- This option turns on resolution of the fully qualified domain name for the hostname passed along with log messages and, if found, reports it along with the original one. This only affects reports, not pager messages. The name resolution is cached in order to avoid re-resolving addresses that have already been seen; you have to restart or HUP tenshi in order to flush the cache. This option is disabled by default.
- set threshold <queue> <count> <regex>
- This option can be used to discard lines from a report that have a count below the given threshold. If a line matches the regex in the given queue but has fewer hits than count, it is discarded and omitted from the report. Note that this matches on the content of the lines that will actually appear in the report, in contrast to queue escalation which uses a count based on the regex that is matched.
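For example, a threshold rule (the queue name, count and regexp are illustrative) could read:

```
# Omit 'sshd: Accepted' lines from the 'report' queue unless the
# same report line occurs at least 10 times:
set threshold report 10 ^sshd: Accepted
```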
QUEUE DEFINITIONS
All messages are assigned to queues. Every queue is processed periodically according to its notification interval. There are four default builtin queues: trash, to which unwanted messages can be assigned (think /dev/null); repeat, which is used for smart handling of repeated messages; and group and group_host, see REGEXP DEFINITIONS for details. There's also a special noprefix queue; read further for details about it.
All queues are automatically flushed before shutdown when a SIGTERM is received. Please see section SIGNALS for additional information.
The syntax is the following:
- set queue <queue_name> <mail_from> [pager:]<mail_to> <cron_spec> [<subject>]
- <queue_name>
- The queue name. It can be any alphanumeric character string except for the builtin queue names.
- <mail_from>
- The mail sender for reports related to the queue.
- [pager:]<mail_to>
- The mail recipient(s) for reports related to the queue. Multiple addresses can be specified, separated by commas. Using the pager: prefix enables a pager-friendly report.
- <cron_spec>
- This is a five-field cron-style specification for when the reports should be emailed. Ranges and skip values are supported as per the de facto crontab syntax, with a few exceptions. Please see the crontab man page for an explanation of the crontab syntax. The supported day names are: Mon, Tue, Wed, Thu, Fri, Sat, Sun. Monday is 1, Sunday 0 or 7. Supported month names are: Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec. Day and month names are not case sensitive. Additionally, 'now' can be specified for immediate notifications.
- <subject>
- This is the subject to use for email reports regarding this queue. If this isn't specified, the default subject will be used.
The special noprefix queue can be used and defined like any other queue with the difference that it will get all messages that don't match any configured prefix.
set queue report [email protected] [email protected] [0 9-17 * * *]
set queue report [email protected] [email protected] [30 18 * * *]
set queue report [email protected] [email protected] [*/10 * * * *]
set queue critical [email protected] [email protected],[email protected] [now] CRITICAL WARNING -
set queue pager [email protected] pager:[email protected],pager:[email protected] [now] ALERT
REGEXP DEFINITIONS
All valid syslog messages are matched against standard Perl regexps. All regexps are defined with the following syntax:
- <queue_name>[,<queue_name>[:<escalation_number>]..] <regexp>
The regexps are evaluated in order, so a matched message is not checked against the subsequent regexps. Keep this in mind when assembling the configuration file. It's advisable to catch all messages by placing an all-matching regexp at the end of the configuration file. It's also good for performance to place trash rules that are not logically connected with other matching rules at the beginning of the section. Multiple queues can be defined with a comma-separated list; builtin queues cannot be used in this syntax.
If an escalation number is provided for a queue, the matched message will only be placed into the queue when <escalation_number> messages have matched the regexp. The queue will receive the message that matched the regexp at the time of escalation, with a count equal to the escalation number. The count of messages matching the regexp is reset when the leftmost queue mentioned in the queue list is mailed. The leftmost queue cannot have an escalation number unless it is the only queue listed. When the number of messages matching the regexp reaches the greatest escalation number mentioned, escalation begins again into the escalation queues, modulo the greatest escalation number. For example, using the queues `a,b:10,c:50', when 10 messages match the regexp, a message will go into b; when 50 match, one will go into c. At 60, another will go into b; at 100, another into c; 110 to b, 150 to c, and so on. Escalation numbers must be positive integers greater than zero and must be listed in increasing order from left to right. All queues without escalation numbers must be listed to the left of the queues with escalation numbers.
The standard grouping operators ( ) can be used for string masking; literal "(" and ")" can be protected with the standard quotation operator "\". There's a lot of documentation about regular expressions; a good start could be the perlre and perlretut manual pages.
You can also use the (?: ) operators to use groups without masking. This allows you to match, for example, output from several programs in a similar format. There is an example of this below (the sudo/su line).
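The masking behaviour can be modeled with a short sketch (this is illustrative Python, not tenshi's actual code): each ( ) group's captured text is replaced with the mask string, while (?: ) groups match without masking.

```python
import re

MASK = "______"  # the default 'set mask' value

def mask_line(pattern, line, mask=MASK):
    """Replace the text captured by each ( ) group with the mask string."""
    m = re.search(pattern, line)
    if not m:
        return None
    masked = line
    # Work right to left so earlier group offsets stay valid.
    for i in range(m.lastindex or 0, 0, -1):
        start, end = m.span(i)
        if start == -1:  # group did not participate in the match
            continue
        masked = masked[:start] + mask + masked[end:]
    return masked

# A capturing group masks the variable field:
print(mask_line(r"^ipop3d: Login user=(.+)", "ipop3d: Login user=alice"))
# prints: ipop3d: Login user=______

# A non-capturing (?: ) group matches without masking:
print(mask_line(r"^(?:sshd|login): Accepted (.+)", "sshd: Accepted publickey"))
# prints: sshd: Accepted ______
```

Lines that differ only in their masked fields can then be condensed together in reports.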
The builtin queue repeat can be used for special handling of "last message repeated x times" style log lines. When the assigned regexps are matched the line count for the last line received from the same host is incremented by the first grouped string. Keep in mind that it is possible for syslog lines to be received from remote hosts out of order. If this happens you should not use this feature because tenshi will mis-report line counts.
The builtin queue group can be used to group sets of regex together to speed up line matching. If a line fails to match a regex assigned to the group queue then tenshi will skip all the regex up until the next group_end statement. Nested groups are allowed. An example of this is included below.
The builtin group_host queue can be used for selective hostname matching. Like the group queue it is also terminated with the group_end statement. All regex definitions within that group will only apply if the hostname associated to the log entries matches the regex passed to the group_host definition.
The regexps below assume hidepid is turned on. If you have it turned off, you will need to add \[(.+)\] to the regexp after the program name to get them to work.
For example: mail ^sendmail: (.+): to=(.+),(.+)delay=(.+) becomes: mail ^sendmail\[(.+)\]: (.+): to=(.+),(.+)delay=(.+)
repeat ^(?:last message repeated|above message repeats) (\d+) time
mail ^sendmail: (.+): to=(.+),(.+)delay=(.+)
mail ^sendmail: (.+): to=(.+),(.+)relay=(.+),(.+)stat=Sent
mail ^ipop3d: Login user=(.+)
critical,report ^sshd: Illegal user
general,urgent:200,critical:1000 ^sshd: Illegal user
root ^sshd\(pam_unix\): session opened for user root by root\(uid=0\)
report ^sshd: Accepted rsa for (.+) from (.+) port (.+)
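A sketch of the group and group_host constructs described above (the program names, hostnames and regexps are illustrative):

```
# Skip the whole block quickly for lines that don't start with 'postfix':
group ^postfix
trash ^postfix/smtpd: disconnect
mail ^postfix/smtpd: connect from (.+)
group_end

# Apply these rules only to hosts whose name matches ^www:
group_host ^www
report ^sshd: Accepted rsa for (.+) from (.+) port (.+)
group_end
```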
SIGNALS
tenshi can handle different signals sent to the process; here's the list of supported ones:
- TERM
- Flush all queues and then exit.
- INT
- Flush all queues and then exit.
- ALRM
- Flush any queues which have reached their notification interval.
- USR1
- Force all queues to be flushed, even if they have not reached their notification interval.
- HUP
- Force all queues to be flushed, even if they have not reached their notification interval, then re-read the config file and continue as normal.
WARNING: If you change a STATIC OPTION in the config file and send tenshi a HUP it will die. You will need to restart tenshi for changes to STATIC OPTIONs to take effect.
EXAMPLES
See the included tenshi.conf.
REQUIREMENTS
tenshi needs a working 'tail' implementation when not using FIFO mode.
It also requires the Net::SMTP module for mailing reports, which should be included in your Perl installation, and IO::BufferedSelect. If any of them are missing, you can grab them from CPAN (http://www.cpan.org) or via the CPAN shell (`perl -e shell -MCPAN`).
BUGS
Double quotation characters present in your logs might break CSV output (depending on how you pipe/process it in the filter) since there's no escape code (yet).
Please report any bugs you find at <[email protected]>
LICENSE
tenshi is distributed under the terms of the following ISC-style license:
Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
The tenshi project page is http://www.inversepath.com/tenshi.html
tenshi was once known as wasabi, but the name was changed as we were informed that wasabi is a registered trademark relating to another piece of software.
Copyright 2004-2011 Andrea Barisani <[email protected]>