Thanks Brent,

I thought that since Phase 1 picked up the status as the hostname I was 
out of luck collecting it with regex, but I was wrong. Here is my 
local_decoder for it:

<decoder name="atlassian">
  <prematch>[\.+]\s+[\.+]</prematch>
  <regex>(\.*)</regex>
  <order>extra_data</order>
</decoder>

<decoder name="atlassian">
  <prematch>NotificationException: com.sun.mail.smtp.SMTPSendFailedException: </prematch>
  <regex offset="after_prematch">(\.*)</regex>
  <order>extra_data</order>
</decoder>

<decoder name="atlassian-event">
 <parent>atlassian</parent>
 <regex>\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d,\d\d\d\s+(INFO)|(WARN)|(ERROR)</regex>
 <order>status</order>
</decoder>
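
To sanity-check the decoders I pipe a sample line straight into 
ossec-logtest on the manager (path assumes the default /var/ossec 
install):

echo "2019-05-23 12:56:11,812 WARN [scheduler_Worker-3] [RemoteAgentManagerImpl] Remote agent 'build-dev1.domain.local' was unresponsive and has gone offline." | /var/ossec/bin/ossec-logtest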

local_rules:
<group name="atlassian,">

  <rule id="100150" level="1">
    <decoded_as>atlassian</decoded_as>
    <hostname>INFO</hostname>
    <description>Atlassian Info Event</description>
  </rule>

  <rule id="100151" level="8">
    <decoded_as>atlassian</decoded_as>
    <hostname>WARN</hostname>
    <description>Atlassian Warn Event</description>
  </rule>

  <rule id="100152" level="10">
    <decoded_as>atlassian</decoded_as>
    <hostname>ERROR</hostname>
    <description>Atlassian Error Event</description>
  </rule>

  <rule id="100153" level="1">
    <decoded_as>atlassian</decoded_as>
    <if_sid>100151</if_sid>
    <match>ROLE_ANONYMOUS</match>
    <description>Atlassian Ignore Event</description>
  </rule>

  <rule id="100154" level="10">
    <decoded_as>atlassian</decoded_as>
    <if_sid>100151</if_sid>
    <hostname>WARN</hostname>
    <regex>Remote agent\s'\.+'\swas unresponsive and has gone offline.</regex>
    <description>Atlassian Bamboo Agent Disconnected Event</description>
  </rule>

</group>
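
After touching local_decoder.xml or local_rules.xml I restart the 
manager so analysisd reloads them (again assuming the default install 
layout):

/var/ossec/bin/ossec-control restart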


It works for about 99% of my events; however, OSSEC will randomly report 
them with rule id 1002 even though ossec-logtest reports that my custom 
rule matches:

2019-06-04 13:42:34,673 ERROR [Caesium-1-2] [atlassian.core.task.AbstractErrorQueuedTaskQueue] handleException com.atlassian.mail.MailException: com.sun.mail.smtp.SMTPSendFailedException: 550 5.1.10 RESOLVER.ADR.RecipientNotFound; Recipient not found by SMTP address lookup


**Phase 1: Completed pre-decoding.
       full event: '2019-06-04 13:42:34,673 ERROR [Caesium-1-2] [atlassian.core.task.AbstractErrorQueuedTaskQueue] handleException com.atlassian.mail.MailException: com.sun.mail.smtp.SMTPSendFailedException: 550 5.1.10 RESOLVER.ADR.RecipientNotFound; Recipient not found by SMTP address lookup'
       hostname: 'ERROR'
       program_name: '(null)'
       log: '[Caesium-1-2] [atlassian.core.task.AbstractErrorQueuedTaskQueue] handleException com.atlassian.mail.MailException: com.sun.mail.smtp.SMTPSendFailedException: 550 5.1.10 RESOLVER.ADR.RecipientNotFound; Recipient not found by SMTP address lookup'

**Phase 2: Completed decoding.
       decoder: 'atlassian'
       status: 'Error'

**Phase 3: Completed filtering (rules).
       Rule id: '100152'
       Level: '10'
       Description: 'Atlassian Error Event'
**Alert to be generated.

Email received:

OSSEC HIDS Notification.

2019 Jun 04 13:42:36

Received From: (confluence1) IP->/var/atlassian/application-data/confluence/logs/atlassian-confluence.log

Rule: 1002 fired (level 2) -> "Unknown problem somewhere in the system."

Portion of the log(s):

2019-06-04 13:42:34,673 ERROR [Caesium-1-2] [atlassian.core.task.AbstractErrorQueuedTaskQueue] handleException com.atlassian.mail.MailException: com.sun.mail.smtp.SMTPSendFailedException: 550 5.1.10 RESOLVER.ADR.RecipientNotFound; Recipient not found by SMTP address lookup


Is this a bug? ossec-logtest matches my custom rule correctly, but the 
live service isn't following it. Or is it because the status ends up in 
the hostname field that rule matching breaks down?
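
In case it helps, one stopgap I'm considering while I dig into this 
(untested; 100155 is just the next free id in my local range) is to hang 
a level-0 child off the generic rule, so these lines at least stop 
alerting as "unknown problem". The trade-off is that it would also hide 
any Atlassian events my own rules miss:

  <rule id="100155" level="0">
    <if_sid>1002</if_sid>
    <match>com.atlassian</match>
    <description>Suppress generic 1002 for Atlassian app logs (see 100150-100154)</description>
  </rule>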




On Monday, June 3, 2019 at 7:03:24 PM UTC-4, Brent wrote:
>
> Creating custom decoders isn't too terribly difficult to do; and I bet you 
> could pay someone else if you wanted to farm that out (I'm thinking of the 
> companies that specialize in OSSEC you may already know of).
>
> But doing it yourself probably wouldn't be as difficult as it sounds... 
> and once you get a feel for the regex implementation in OSSEC, you'll have 
> this done in an afternoon.  It looks like you have three levels (INFO, 
> WARN, ERROR).  Should be easy enough to create alerts on and all the rest...
>
>
>
> On Friday, May 24, 2019 at 7:59:38 AM UTC-7, Nate wrote:
>>
>> Hi Everyone -
>>
>> Does anyone have a custom decoder for Atlassian products, or can someone 
>> point me in the right direction to properly identify them?
>>
>> Here is a sample of what I am dealing with:
>>
>> Bamboo
>> 2019-05-23 12:56:11,870 WARN [scheduler_Worker-3] [RemoteAgentManagerImpl] 
>> Remote agent 'WINDOWSBUILD.domain.local' was unresponsive and has gone 
>> offline.
>> 2019-05-23 12:56:11,870 INFO [scheduler_Worker-3] [AgentManagerImpl] No 
>> deployments running on agent WINDOWSBUILD.domain.local
>> 2019-05-23 12:56:11,871 INFO [scheduler_Worker-3] [AgentManagerImpl] No 
>> builds running on agent WINDOWSBUILD.domain.local
>> 2019-05-23 12:56:11,902 INFO 
>> [AtlassianEvent::0-BAM::EVENTS:pool-3-thread-3] [ChainExecutionManagerImpl] 
>> Plan C334-141: - feature-dual-club has finished
>> 2019-05-23 12:56:11,812 WARN [scheduler_Worker-3] 
>> [RemoteAgentManagerImpl] Remote agent 'build-dev1.domain.local' was 
>> unresponsive and has gone offline.
>>
>>
>> Confluence
>> 2019-05-23 12:56:08,254 INFO [buildTailMessageListenerConnector-124] 
>> [FingerprintMatchingMessageListenerContainer] Successfully refreshed JMS 
>> Connection
>> 2019-05-23 12:56:11,812 WARN [scheduler_Worker-3] 
>> [RemoteAgentManagerImpl] Detected that remote agent 'build1.domain.local' 
>> has been inactive since Thu May 23 12:45:50 EDT 2019
>> 2019-05-23 12:56:11,812 WARN [scheduler_Worker-3] 
>> [RemoteAgentManagerImpl] Marking remote agent 'build1.domain.local' as 
>> unresponsive
>> 2018-08-22 12:11:50,828 INFO [Caesium-1-1] 
>> [directory.ldap.cache.AbstractCacheRefresher]
>> 2018-08-22 12:39:03,722 INFO [http-nio-8443-exec-24] 
>> [plugins.synchrony.service.SynchronyExternalChangesManager] 
>> performExternalChange Started external change for ContentId{id=37322926}
>> 2019-05-02 16:30:00,315 ERROR [NotificationSender:thread-2] 
>> [plugin.notifications.dispatcher.NotificationErrorRegistryImpl] addError 
>> Error sending notification to server '<Unknown>'(-1) for INDIVIDUAL task 
>> (resent 0 times): Error sending to individual 
>> 'ff8080815bd4b40a015c7dcb00e80009' on server 'System Mail'
>>
>> Sample decoder output:
>> 2019/05/24 09:19:49 ossec-testrule: INFO: Reading local decoder file.
>> 2019/05/24 09:19:49 ossec-testrule: INFO: Started (pid: 18995).
>> ossec-testrule: Type one log per line.
>>
>> 2019-05-23 12:56:11,812 WARN [scheduler_Worker-3] 
>> [RemoteAgentManagerImpl] Remote agent 'build-dev1.domain.local' 
>> was unresponsive and has gone offline.
>>
>>
>> **Phase 1: Completed pre-decoding.
>>        full event: '2019-05-23 12:56:11,812 WARN [scheduler_Worker-3] 
>> [RemoteAgentManagerImpl] Remote agent 'build-dev1.domain.local' 
>> was unresponsive and has gone offline.'
>>        hostname: 'WARN'
>>        program_name: '(null)'
>>        log: '[scheduler_Worker-3] [RemoteAgentManagerImpl] Remote agent 
>> 'build-dev1.domain.local' was unresponsive and has gone offline.'
>>
>> **Phase 2: Completed decoding.
>>        No decoder matched.
>>
>> The logs are interpreted as syslog, so the status is being pulled into 
>> the hostname field. Is the log: section the only log data I can work 
>> with in Phase 2? If so, will I never be able to get the status of the 
>> log?
>>
>>
>>
