I did some more troubleshooting last night and I found that I have two 
separate issues, possibly outside of the scope of my original issue, but I 
have the sinking feeling it's all related due to the scale of the 
environment:

1.) ossec-analysisd is pegging one CPU on restart (it's a single-threaded app), 
documented well here:  
https://groups.google.com/forum/?fromgroups#!topic/ossec-list/_f8g6eEIhn0 
Given the size of my /var/ossec directory, I expect this to continue for 
a few more *days*.  Is there anything I can do to mitigate it?  Can 
I purge the syscheck queue without causing other issues or degrading 
security?  

$ sudo du -max /var/ossec | sort -rn | head
6247    /var/ossec
5994    /var/ossec/queue
5834    /var/ossec/queue/syscheck
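
If purging is safe, I'm guessing the cleanup would look something like the 
sketch below (untested; my assumption is that the agents simply re-send 
their syscheck databases on the next scan, so the server-side copies get 
rebuilt):

$ sudo /var/ossec/bin/ossec-control stop
$ sudo rm /var/ossec/queue/syscheck/*     # clear the accumulated syscheck databases on the server
$ sudo /var/ossec/bin/ossec-control start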


2a.) I currently have 77 newer hosts showing 'Never connected' according 
to agent_control, which I'll be investigating along with the other 
issues (possibly related to changes in our kickstart scripts).
2b.) Many more are listed as 'Disconnected'.  Should active agents be 
listed that way?  I have tested with tcpdump, and the agents listed as 
'Disconnected' are indeed sending data to the OSSEC server over port 
1514/UDP.  Or does this status only reflect the current state of syscheck? 
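
For reference, this is roughly how I'm checking agent status and traffic 
(the interface name is just an example from one of my servers, and 
<agent-ip> is a placeholder):

$ sudo /var/ossec/bin/agent_control -l | grep -c 'Never connected'   # agents that never checked in
$ sudo /var/ossec/bin/agent_control -l | grep -c 'Disconnected'      # agents marked disconnected
$ sudo tcpdump -ni eth0 udp port 1514 and host <agent-ip>            # traffic still arriving from a 'Disconnected' agent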

*Now, back to my original rule issue:*
Can someone provide me with more troubleshooting steps to determine 
whether this is an issue with analysisd dropping the alert or a problem 
with my rule?  The fact that I need to restart OSSEC whenever I change 
the rule is a bit of a problem because of issue 1 above (unless it's not 
actually causing alerts to be dropped?).
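
One thing I'm considering (not sure it's the right approach) is 
temporarily enabling <logall> in the <global> section of ossec.conf, so 
that every received event lands in logs/archives/archives.log and I can 
compare that against alerts.log whenever an alert goes missing:

  <global>
    <logall>yes</logall>  <!-- log every received event to logs/archives/archives.log -->
  </global>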

 
On Friday, June 1, 2012 6:17:57 PM UTC-5, mcrane0 wrote:
>
> Very strange issue - OSSEC will intermittently fail to generate an alarm 
> for a specific decoder/rule.  All systems are RHEL, iptables is disabled.
>
> OSSEC HIDS v2.6 - Trend Micro Inc.
>
> /etc/ossec-init.conf:
> DIRECTORY="/var/ossec"
> VERSION="v2.6"
> DATE="Thu Nov 10 18:57:58 CST 2011"
> TYPE="server"
>
> Here is the decoder:
> <decoder name="silvertail">
>    <program_name>^mitigator|^reportbuilder</program_name>
> </decoder>
>
> <decoder name="silvertail-alert">
>    <parent>silvertail</parent>
>    <prematch>[SILVERTAIL_ALERT] </prematch>
>    <regex offset="after_prematch">^ip=(\d+.\d+.\d+.\d+)\|\|action=(\.+)\|\|rule=(\w+)</regex>
>    <order>srcip,extra_data,action</order>
> </decoder>
>
> Here is the rule:
>   <rule id="700100" level="5">
>     <decoded_as>silvertail</decoded_as>
>     <match>[alert]</match>
>     <description>Silvertail Alert</description>
>   </rule>
>
>
> Log test (scrubbed) - note that there was no alert for this log, but it 
> says that one should be generated:
>
> ossec]# /var/ossec/bin/ossec-logtest -D . -c etc/ossec.conf
> 2012/06/01 18:09:45 ossec-testrule: INFO: Reading local decoder file.
> 2012/06/01 18:09:46 ossec-testrule: INFO: Started (pid: 17241).
> ossec-testrule: Type one log per line.
>
> Jun  1 17:55:09 <host> mitigator[1709]: [alert] <host> <host> [listener 
> 1.11] [SILVERTAIL_ALERT] 
> ip=<ip>||action=flag||rule=Security_Alert_DDoS_Targeted_Feature||duration=86400||request=<site>
>
> **Phase 1: Completed pre-decoding.
>        full event: 'Jun  1 17:55:09 <host> mitigator[1709]: [alert] <host> 
> <host> [listener 1.11] [SILVERTAIL_ALERT] 
> ip=<ip>||action=flag||rule=Security_Alert_DDoS_Targeted_Feature||duration=86400||request=<site>'
>        hostname: '<host>'
>        program_name: 'mitigator'
>        log: '[alert] <host> <host> [listener 1.11] [SILVERTAIL_ALERT] 
> ip=<ip>||action=flag||rule=Security_Alert_DDoS_Targeted_Feature||duration=86400||request=<host>'
>
> **Phase 2: Completed decoding.
>        decoder: 'silvertail'
>        srcip: '<host>'
>        extra_data: 'flag'
>        action: 'Security_Alert_DDoS_Targeted_Feature'
>
> **Phase 3: Completed filtering (rules).
>        Rule id: '700100'
>        Level: '5'
>        Description: 'Silvertail Alert'
> **Alert to be generated.
>
> Is it possible that the server becomes overloaded with alerts and misses a 
> few?  I cannot figure out why some alerts fire, and others don't, at 
> random.  Is there any other testing I can do to nail down the cause of this 
> issue?
>
>
