[ossec-list] Re: OSSEC & Logstash

2018-12-20 Thread Patrick Rogne
Thank you for your work on this awesome conf file.  I have been working 
with it lately, but I noticed today that the new version of Logstash (6.6) looks 
like it will no longer support the multiline codec?  I hope I am 
wrong; can you confirm this?
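
Whether or not the codec is actually going away, newer Beats-based setups usually do 
this grouping at the shipper instead. As a hedged sketch (Filebeat 6.x syntax, an 
alternative that is not taken from this thread) mirroring the multiline codec settings 
quoted below:

filebeat.inputs:
- type: log
  paths:
    - /var/ossec/logs/alerts/alerts.log
  # join everything up to the next "** Alert" header into one event,
  # the equivalent of the codec's pattern/negate/what settings
  multiline.pattern: '^\*\*'
  multiline.negate: true
  multiline.match: after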


On Saturday, March 8, 2014 at 4:02:35 PM UTC-6, Joshua Garnett wrote:
>
> All,
>
> I'll probably write a blog post on this, but I wanted to share some work 
> I've done today.  
> http://vichargrave.com/ossec-log-management-with-elasticsearch/ shows how 
> to use OSSEC's syslog output to route messages to Elasticsearch.  The 
> problem with this method is it uses UDP.  Even when sending packets to a 
> local process UDP by definition is unreliable.  Garbage collections and 
> other system events can cause packets to be lost.  I've found it tends to 
> cap out at around 1,500 messages per minute. 
>
> To address this issue I've put together a logstash config that will read 
> the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving the 
> reliability issue, it also fixes issues with multi-lines being lost, and 
> adds geoip lookups for the src_ip.  I tested it against approximately 1GB 
> of alerts (3M events).
>
> input {
>   file {
> type => "ossec"
> path => "/var/ossec/logs/alerts/alerts.log"
> sincedb_path => "/opt/logstash/"
> codec => multiline {
>   pattern => "^\*\*"
>   negate => true
>   what => "previous"
> }
>   }
> }
>
> filter {
>   if [type] == "ossec" {
> # Parse the header of the alert
> grok {
>   # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
>   # (?m) fixes issues with multi-lines see 
> https://logstash.jira.com/browse/LOGSTASH-509
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> \(%{DATA:reporting_host}\) 
> %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>   
>   # Matches  2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
> }
>
> # Attempt to parse additional data from the alert
> grok {
>   match => ["remaining_message", "(?m)(Src IP: 
> %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP: 
> %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User: 
> %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
> }
>
> geoip {
>   source => "src_ip"
> }
>
> mutate {
>   convert  => [ "severity", "integer"]
>   replace  => [ "@message", "%{real_message}" ]
>   replace  => [ "@fields.hostname", "%{reporting_host}"]
>   add_field=> [ "@fields.product", "ossec"]
>   add_field=> [ "raw_message", "%{message}"]
>   add_field=> [ "ossec_server", "%{host}"]
>   remove_field => [ "type", "syslog_program", "syslog_timestamp", 
> "reporting_host", "message", "timestamp_seconds", "real_message", 
> "remaining_message", "path", "host", "tags"]
> }
>   }
> }
>
> output {
>elasticsearch {
>  host => "10.0.0.1"
>  cluster => "mycluster"
>}
> }
>
> Here are a few examples of the output this generates.
>
> {
>"@timestamp":"2014-03-08T20:34:08.847Z",
>"@version":"1",
>"ossec_group":"syslog,sshd,invalid_login,authentication_failed,",
>"reporting_ip":"10.1.2.3",
>"reporting_source":"/var/log/auth.log",
>"rule_number":"5710",
>"severity":5,
>"signature":"Attempt to login using a non-existent user",
>"src_ip":"112.65.211.164",
>"geoip":{
>   "ip":"112.65.211.164",
>   "country_code2":"CN",
>   "country_code3":"CHN",
>   "country_name":"China",
>   "continent_code":"AS",
>   "region_name":"23",
>   "city_name":"Shanghai",
>   "latitude":31.0456007,
>   "longitude":121.3997,
>   "timezone":"Asia/Shanghai",
>   "real_region_name":"Shanghai",
>   "location":[
>  121.3997,
>  31.0456007
>   ]
>},
>"@message":"Mar  8 01:00:59 someserver sshd[22874]: Invalid user oracle 
> from 112.65.211.164\n",
>"@fields.hostname":"someserver.somedomain.com",
>"@fields.product":"ossec",
>"raw_message":"** Alert 1394240459.2305861: - 
> syslog,sshd,invalid_login,authentication_failed,\n2014 Mar 08 01:00:59 (
> someserver.somedomain.com) 10.1.2.3->/var/log/auth.log\nRule: 5710 (level 
> 5) -> 'Attempt to login using a non-existent user'\nSrc IP: 
> 112.65.211.164\nMar  8 01:00:59 someserver sshd[22874]: Invalid user oracle

[ossec-list] Re: OSSEC & Logstash

2016-09-22 Thread mangasof . manga
Hi JP1, did you find a pattern for the archives.log file?

On Wednesday, February 18, 2015 at 17:12:45 UTC-3, jp1...@gmail.com wrote:
>
> So, this works OK for me on alerts.log - does anyone have a logstash conf 
> that works on the archives.log if you have ossec saving all logs to that?
>
> On Saturday, March 8, 2014 at 5:02:35 PM UTC-5, Joshua Garnett wrote:
>>
>> All,
>>
>> I'll probably write a blog post on this, but I wanted to share some work 
>> I've done today.  
>> http://vichargrave.com/ossec-log-management-with-elasticsearch/ shows 
>> how to use OSSEC's syslog output to route messages to Elasticsearch.  The 
>> problem with this method is it uses UDP.  Even when sending packets to a 
>> local process UDP by definition is unreliable.  Garbage collections and 
>> other system events can cause packets to be lost.  I've found it tends to 
>> cap out at around 1,500 messages per minute. 
>>
>> To address this issue I've put together a logstash config that will read 
>> the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving the 
>> reliability issue, it also fixes issues with multi-lines being lost, and 
>> adds geoip lookups for the src_ip.  I tested it against approximately 1GB 
>> of alerts (3M events).
>>
>> input {
>>   file {
>> type => "ossec"
>> path => "/var/ossec/logs/alerts/alerts.log"
>> sincedb_path => "/opt/logstash/"
>> codec => multiline {
>>   pattern => "^\*\*"
>>   negate => true
>>   what => "previous"
>> }
>>   }
>> }
>>
>> filter {
>>   if [type] == "ossec" {
>> # Parse the header of the alert
>> grok {
>>   # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
>>   # (?m) fixes issues with multi-lines see 
>> https://logstash.jira.com/browse/LOGSTASH-509
>>   match => ["message", "(?m)\*\* Alert 
>> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
>> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
>> \(%{DATA:reporting_host}\) 
>> %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule: 
>> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
>> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>>   
>>   # Matches  2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
>>   match => ["message", "(?m)\*\* Alert 
>> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
>> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
>> %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule: 
>> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
>> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>> }
>>
>> # Attempt to parse additional data from the alert
>> grok {
>>   match => ["remaining_message", "(?m)(Src IP: 
>> %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP: 
>> %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User: 
>> %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
>> }
>>
>> geoip {
>>   source => "src_ip"
>> }
>>
>> mutate {
>>   convert  => [ "severity", "integer"]
>>   replace  => [ "@message", "%{real_message}" ]
>>   replace  => [ "@fields.hostname", "%{reporting_host}"]
>>   add_field=> [ "@fields.product", "ossec"]
>>   add_field=> [ "raw_message", "%{message}"]
>>   add_field=> [ "ossec_server", "%{host}"]
>>   remove_field => [ "type", "syslog_program", "syslog_timestamp", 
>> "reporting_host", "message", "timestamp_seconds", "real_message", 
>> "remaining_message", "path", "host", "tags"]
>> }
>>   }
>> }
>>
>> output {
>>elasticsearch {
>>  host => "10.0.0.1"
>>  cluster => "mycluster"
>>}
>> }
>>
>> Here are a few examples of the output this generates.
>>
>> {
>>"@timestamp":"2014-03-08T20:34:08.847Z",
>>"@version":"1",
>>"ossec_group":"syslog,sshd,invalid_login,authentication_failed,",
>>"reporting_ip":"10.1.2.3",
>>"reporting_source":"/var/log/auth.log",
>>"rule_number":"5710",
>>"severity":5,
>>"signature":"Attempt to login using a non-existent user",
>>"src_ip":"112.65.211.164",
>>"geoip":{
>>   "ip":"112.65.211.164",
>>   "country_code2":"CN",
>>   "country_code3":"CHN",
>>   "country_name":"China",
>>   "continent_code":"AS",
>>   "region_name":"23",
>>   "city_name":"Shanghai",
>>   "latitude":31.0456007,
>>   "longitude":121.3997,
>>   "timezone":"Asia/Shanghai",
>>   "real_region_name":"Shanghai",
>>   "location":[
>>  121.3997,
>>  31.0456007
>>   ]
>>},
>>"@message":"Mar  8 01:00:59 someserver sshd[22874]: Invalid user 
>> oracle from 112.65.211.164\n",
>>"@fields.hostname":"someserver.somedomain.com",
>>"@fields.product":"ossec",
>>"raw_message":"** Alert 1394240459.2305861: - 
>> syslog,sshd,invalid_login,authentication_failed,\n2014 Mar 08 01:00:59 (
>> someserver.somedomain.com) 10.1.2.3->/var/log/auth.

[ossec-list] Re: OSSEC & Logstash

2015-02-18 Thread jp10558
So, this works OK for me on alerts.log - does anyone have a logstash conf 
that works on the archives.log if you have ossec saving all logs to that?

On Saturday, March 8, 2014 at 5:02:35 PM UTC-5, Joshua Garnett wrote:
>
> All,
>
> I'll probably write a blog post on this, but I wanted to share some work 
> I've done today.  
> http://vichargrave.com/ossec-log-management-with-elasticsearch/ shows how 
> to use OSSEC's syslog output to route messages to Elasticsearch.  The 
> problem with this method is it uses UDP.  Even when sending packets to a 
> local process UDP by definition is unreliable.  Garbage collections and 
> other system events can cause packets to be lost.  I've found it tends to 
> cap out at around 1,500 messages per minute. 
>
> To address this issue I've put together a logstash config that will read 
> the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving the 
> reliability issue, it also fixes issues with multi-lines being lost, and 
> adds geoip lookups for the src_ip.  I tested it against approximately 1GB 
> of alerts (3M events).
>
> input {
>   file {
> type => "ossec"
> path => "/var/ossec/logs/alerts/alerts.log"
> sincedb_path => "/opt/logstash/"
> codec => multiline {
>   pattern => "^\*\*"
>   negate => true
>   what => "previous"
> }
>   }
> }
>
> filter {
>   if [type] == "ossec" {
> # Parse the header of the alert
> grok {
>   # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
>   # (?m) fixes issues with multi-lines see 
> https://logstash.jira.com/browse/LOGSTASH-509
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> \(%{DATA:reporting_host}\) 
> %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>   
>   # Matches  2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
> }
>
> # Attempt to parse additional data from the alert
> grok {
>   match => ["remaining_message", "(?m)(Src IP: 
> %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP: 
> %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User: 
> %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
> }
>
> geoip {
>   source => "src_ip"
> }
>
> mutate {
>   convert  => [ "severity", "integer"]
>   replace  => [ "@message", "%{real_message}" ]
>   replace  => [ "@fields.hostname", "%{reporting_host}"]
>   add_field=> [ "@fields.product", "ossec"]
>   add_field=> [ "raw_message", "%{message}"]
>   add_field=> [ "ossec_server", "%{host}"]
>   remove_field => [ "type", "syslog_program", "syslog_timestamp", 
> "reporting_host", "message", "timestamp_seconds", "real_message", 
> "remaining_message", "path", "host", "tags"]
> }
>   }
> }
>
> output {
>elasticsearch {
>  host => "10.0.0.1"
>  cluster => "mycluster"
>}
> }
>
> Here are a few examples of the output this generates.
>
> {
>"@timestamp":"2014-03-08T20:34:08.847Z",
>"@version":"1",
>"ossec_group":"syslog,sshd,invalid_login,authentication_failed,",
>"reporting_ip":"10.1.2.3",
>"reporting_source":"/var/log/auth.log",
>"rule_number":"5710",
>"severity":5,
>"signature":"Attempt to login using a non-existent user",
>"src_ip":"112.65.211.164",
>"geoip":{
>   "ip":"112.65.211.164",
>   "country_code2":"CN",
>   "country_code3":"CHN",
>   "country_name":"China",
>   "continent_code":"AS",
>   "region_name":"23",
>   "city_name":"Shanghai",
>   "latitude":31.0456007,
>   "longitude":121.3997,
>   "timezone":"Asia/Shanghai",
>   "real_region_name":"Shanghai",
>   "location":[
>  121.3997,
>  31.0456007
>   ]
>},
>"@message":"Mar  8 01:00:59 someserver sshd[22874]: Invalid user oracle 
> from 112.65.211.164\n",
>"@fields.hostname":"someserver.somedomain.com",
>"@fields.product":"ossec",
>"raw_message":"** Alert 1394240459.2305861: - 
> syslog,sshd,invalid_login,authentication_failed,\n2014 Mar 08 01:00:59 (
> someserver.somedomain.com) 10.1.2.3->/var/log/auth.log\nRule: 5710 (level 
> 5) -> 'Attempt to login using a non-existent user'\nSrc IP: 
> 112.65.211.164\nMar  8 01:00:59 someserver sshd[22874]: Invalid user oracle 
> from 112.65.211.164\n",
>"ossec_server":"ossec-server.somedomain.com"
> }
>
> and 
>
> {
>"@t

Re: [ossec-list] Re: OSSEC & Logstash

2015-01-22 Thread Slobodan Aleksić
I managed it by putting the logstash user in the ossec group. Not nice, but
it works.
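
For anyone trying the same thing, a rough sketch of what that looks like (hedged: it 
assumes a default /var/ossec layout, an existing "ossec" group, and a sysvinit-style 
logstash service):

usermod -a -G ossec logstash        # add the logstash user to the ossec group
service logstash restart            # group membership is only picked up when the process restarts
sudo -u logstash stat /var/ossec/logs/alerts/alerts.log   # verify the file is now reachable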

On 12/30/2014 03:27 PM, Glenn Ford wrote:
> How did you securely configure to get around the fact OSSEC permissions
> don't allow access to that file?
> 
> I believe the reason this isn't working for me is because the file is
> not accessible (logstash shows no errors running, aggravating).
> 
> I temporarily modified logstash to allow login and tried this:
> 
> ]# su - logstash
> -bash-4.1$ pwd
> /opt/logstash
> -bash-4.1$ stat /var/ossec/logs/alerts/alerts.log
> stat: cannot stat `/var/ossec/logs/alerts/alerts.log': Permission denied
> 
> 
> 
> On Saturday, March 8, 2014 5:02:35 PM UTC-5, Joshua Garnett wrote:
> 
> To address this issue I've put together a logstash config that will
> read the alerts from /var/ossec/logs/alerts/alerts.log.  On top of
> solving the reliability issue, it also fixes issues with multi-lines
> being lost, and adds geoip lookups for the src_ip.  I tested it
> against approximately 1GB of alerts (3M events).
> 



Re: [ossec-list] Re: OSSEC & Logstash

2014-12-31 Thread dan (ddp)
On Mon, Dec 29, 2014 at 3:13 PM, Glenn Ford  wrote:
> Hi Joshua,
>
> When I do this I get this error:
>
> ./logstash agent -f ./logstash.conf
> Using milestone 2 input plugin 'file'. This plugin should be stable, but if
> you see strange behavior, please let us know! For more information on plugin
> milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones
> {:level=>:warn}
> log4j, [2014-12-29T15:10:20.039]  WARN: org.elasticsearch.discovery:
> [logstash-xxx-xxx.xxx-5946-4022] waited for 30s and no initial state was
> set by the discovery
>
> Exception in thread ">output"
> org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
> at
> org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
> at
> org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(java/lang/Thread.java:745)
>
> Any ideas whats wrong here?
>

Something's wrong in your output section? Elasticsearch isn't running?
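
A couple of quick things to check (hedged; adjust the host and port to whatever your 
output section points at):

curl http://10.0.0.1:9200/                          # is Elasticsearch reachable at all?
curl http://10.0.0.1:9200/_cluster/health?pretty    # does "cluster_name" match the cluster in logstash.conf?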

>
>
> On Saturday, March 8, 2014 5:02:35 PM UTC-5, Joshua Garnett wrote:
>>
>> All,
>>
>> I'll probably write a blog post on this, but I wanted to share some work
>> I've done today.
>> http://vichargrave.com/ossec-log-management-with-elasticsearch/ shows how to
>> use OSSEC's syslog output to route messages to Elasticsearch.  The problem
>> with this method is it uses UDP.  Even when sending packets to a local
>> process UDP by definition is unreliable.  Garbage collections and other
>> system events can cause packets to be lost.  I've found it tends to cap out
>> at around 1,500 messages per minute.
>>
>> To address this issue I've put together a logstash config that will read
>> the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving the
>> reliability issue, it also fixes issues with multi-lines being lost, and
>> adds geoip lookups for the src_ip.  I tested it against approximately 1GB of
>> alerts (3M events).
>>
>> input {
>>   file {
>> type => "ossec"
>> path => "/var/ossec/logs/alerts/alerts.log"
>> sincedb_path => "/opt/logstash/"
>> codec => multiline {
>>   pattern => "^\*\*"
>>   negate => true
>>   what => "previous"
>> }
>>   }
>> }
>>
>> filter {
>>   if [type] == "ossec" {
>> # Parse the header of the alert
>> grok {
>>   # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
>>   # (?m) fixes issues with multi-lines see
>> https://logstash.jira.com/browse/LOGSTASH-509
>>   match => ["message", "(?m)\*\* Alert
>> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\-
>> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp}
>> \(%{DATA:reporting_host}\)
>> %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule:
>> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\>
>> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>>
>>   # Matches  2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
>>   match => ["message", "(?m)\*\* Alert
>> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\-
>> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp}
>> %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule:
>> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\>
>> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>> }
>>
>> # Attempt to parse additional data from the alert
>> grok {
>>   match => ["remaining_message", "(?m)(Src IP:
>> %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP:
>> %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User:
>> %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
>> }
>>
>> geoip {
>>   source => "src_ip"
>> }
>>
>> mutate {
>>   convert  => [ "severity", "integer"]
>>   replace  => [ "@message", "%{real_message}" ]
>>   replace  => [ "@fields.hostname", "%{reporting_host}"]
>>   add_field=> [ "@fields.product", "ossec"]
>>   add_field=> [ "raw_message", "%{message}"]
>>   add_field=> [ "ossec_server", "%{host}"]
>>   remove_field => [ "type", "syslog_program", "syslog_timestamp",
>> "reporting_host", "message", "timestamp_seconds", "real_message",
>> "remaining_message", "path", "host", "tags"]
>> }
>>   }
>> }
>>
>> output {
>>elasticsearch {
>>  host => "10.0.0.1"
>>  cluster => "mycluster"
>>}
>> }
>>
>> Here are a few examples of the output this generates.
>>
>> {
>>"@timestamp":"2014-03-08T20:34:08.847Z",
>>"@version":"1",
>>"ossec_group":"syslog,sshd,invalid_login,authenticatio

[ossec-list] Re: OSSEC & Logstash

2014-12-30 Thread Glenn Ford
How did you configure this securely to get around the fact that OSSEC's permissions 
don't allow access to that file?

I believe the reason this isn't working for me is that the file is not 
accessible (logstash shows no errors while running, which is aggravating).

I temporarily modified the logstash account to allow login and tried this:

]# su - logstash
-bash-4.1$ pwd
/opt/logstash
-bash-4.1$ stat /var/ossec/logs/alerts/alerts.log
stat: cannot stat `/var/ossec/logs/alerts/alerts.log': Permission denied



On Saturday, March 8, 2014 5:02:35 PM UTC-5, Joshua Garnett wrote:
>
> To address this issue I've put together a logstash config that will read 
> the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving the 
> reliability issue, it also fixes issues with multi-lines being lost, and 
> adds geoip lookups for the src_ip.  I tested it against approximately 1GB 
> of alerts (3M events).
>



[ossec-list] Re: OSSEC & Logstash

2014-12-30 Thread Glenn Ford
That was my bad on the setup of the output parameters, please ignore. Not up and 
running yet, but closer.

On Monday, December 29, 2014 3:13:17 PM UTC-5, Glenn Ford wrote:
>
> Hi Joshua,
>
> When I do this I get this error:
>
> ./logstash agent -f ./logstash.conf
> Using milestone 2 input plugin 'file'. This plugin should be stable, but 
> if you see strange behavior, please let us know! For more information on 
> plugin milestones, see 
> http://logstash.net/docs/1.4.2-modified/plugin-milestones {:level=>:warn}
> log4j, [2014-12-29T15:10:20.039]  WARN: org.elasticsearch.discovery: 
> [logstash-xxx-xxx.xxx-5946-4022] waited for 30s and no initial state 
> was set by the discovery
>
> Exception in thread ">output" 
> org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
> at 
> org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
> at 
> org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(java/lang/Thread.java:745)
>
> Any ideas whats wrong here?
>
>
> On Saturday, March 8, 2014 5:02:35 PM UTC-5, Joshua Garnett wrote:
>>
>> All,
>>
>> I'll probably write a blog post on this, but I wanted to share some work 
>> I've done today.  
>> http://vichargrave.com/ossec-log-management-with-elasticsearch/ shows 
>> how to use OSSEC's syslog output to route messages to Elasticsearch.  The 
>> problem with this method is it uses UDP.  Even when sending packets to a 
>> local process UDP by definition is unreliable.  Garbage collections and 
>> other system events can cause packets to be lost.  I've found it tends to 
>> cap out at around 1,500 messages per minute. 
>>
>> To address this issue I've put together a logstash config that will read 
>> the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving the 
>> reliability issue, it also fixes issues with multi-lines being lost, and 
>> adds geoip lookups for the src_ip.  I tested it against approximately 1GB 
>> of alerts (3M events).
>>
>> input {
>>   file {
>> type => "ossec"
>> path => "/var/ossec/logs/alerts/alerts.log"
>> sincedb_path => "/opt/logstash/"
>> codec => multiline {
>>   pattern => "^\*\*"
>>   negate => true
>>   what => "previous"
>> }
>>   }
>> }
>>
>> filter {
>>   if [type] == "ossec" {
>> # Parse the header of the alert
>> grok {
>>   # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
>>   # (?m) fixes issues with multi-lines see 
>> https://logstash.jira.com/browse/LOGSTASH-509
>>   match => ["message", "(?m)\*\* Alert 
>> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
>> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
>> \(%{DATA:reporting_host}\) 
>> %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule: 
>> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
>> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>>   
>>   # Matches  2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
>>   match => ["message", "(?m)\*\* Alert 
>> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
>> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
>> %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule: 
>> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
>> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>> }
>>
>> # Attempt to parse additional data from the alert
>> grok {
>>   match => ["remaining_message", "(?m)(Src IP: 
>> %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP: 
>> %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User: 
>> %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
>> }
>>
>> geoip {
>>   source => "src_ip"
>> }
>>
>> mutate {
>>   convert  => [ "severity", "integer"]
>>   replace  => [ "@message", "%{real_message}" ]
>>   replace  => [ "@fields.hostname", "%{reporting_host}"]
>>   add_field=> [ "@fields.product", "ossec"]
>>   add_field=> [ "raw_message", "%{message}"]
>>   add_field=> [ "ossec_server", "%{host}"]
>>   remove_field => [ "type", "syslog_program", "syslog_timestamp", 
>> "reporting_host", "message", "timestamp_seconds", "real_message", 
>> "remaining_message", "path", "host", "tags"]
>> }
>>   }
>> }
>>
>> output {
>>elasticsearch {
>>  host => "10.0.0.1"
>>  cluster => "mycluster"
>>}
>> }
>>
>> Here are a few examples of the output this generates.
>>
>> {
>>"@timestamp":"2014-03-08T20:34:08.847

[ossec-list] Re: OSSEC & Logstash

2014-12-29 Thread Glenn Ford
Hi Joshua,

When I do this I get this error:

./logstash agent -f ./logstash.conf
Using milestone 2 input plugin 'file'. This plugin should be stable, but if 
you see strange behavior, please let us know! For more information on 
plugin milestones, see 
http://logstash.net/docs/1.4.2-modified/plugin-milestones {:level=>:warn}
log4j, [2014-12-29T15:10:20.039]  WARN: org.elasticsearch.discovery: 
[logstash-xxx-xxx.xxx-5946-4022] waited for 30s and no initial state 
was set by the discovery

Exception in thread ">output" 
org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at 
org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at 
org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:745)

Any ideas what's wrong here?


On Saturday, March 8, 2014 5:02:35 PM UTC-5, Joshua Garnett wrote:
>
> All,
>
> I'll probably write a blog post on this, but I wanted to share some work 
> I've done today.  
> http://vichargrave.com/ossec-log-management-with-elasticsearch/ shows how 
> to use OSSEC's syslog output to route messages to Elasticsearch.  The 
> problem with this method is it uses UDP.  Even when sending packets to a 
> local process UDP by definition is unreliable.  Garbage collections and 
> other system events can cause packets to be lost.  I've found it tends to 
> cap out at around 1,500 messages per minute. 
>
> To address this issue I've put together a logstash config that will read 
> the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving the 
> reliability issue, it also fixes issues with multi-lines being lost, and 
> adds geoip lookups for the src_ip.  I tested it against approximately 1GB 
> of alerts (3M events).
>
> input {
>   file {
> type => "ossec"
> path => "/var/ossec/logs/alerts/alerts.log"
> sincedb_path => "/opt/logstash/"
> codec => multiline {
>   pattern => "^\*\*"
>   negate => true
>   what => "previous"
> }
>   }
> }
>
> filter {
>   if [type] == "ossec" {
> # Parse the header of the alert
> grok {
>   # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
>   # (?m) fixes issues with multi-lines see 
> https://logstash.jira.com/browse/LOGSTASH-509
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> \(%{DATA:reporting_host}\) 
> %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>   
>   # Matches  2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
> }
>
> # Attempt to parse additional data from the alert
> grok {
>   match => ["remaining_message", "(?m)(Src IP: 
> %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP: 
> %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User: 
> %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
> }
>
> geoip {
>   source => "src_ip"
> }
>
> mutate {
>   convert  => [ "severity", "integer"]
>   replace  => [ "@message", "%{real_message}" ]
>   replace  => [ "@fields.hostname", "%{reporting_host}"]
>   add_field=> [ "@fields.product", "ossec"]
>   add_field=> [ "raw_message", "%{message}"]
>   add_field=> [ "ossec_server", "%{host}"]
>   remove_field => [ "type", "syslog_program", "syslog_timestamp", 
> "reporting_host", "message", "timestamp_seconds", "real_message", 
> "remaining_message", "path", "host", "tags"]
> }
>   }
> }
>
> output {
>elasticsearch {
>  host => "10.0.0.1"
>  cluster => "mycluster"
>}
> }
>
> Here are a few examples of the output this generates.
>
> {
>"@timestamp":"2014-03-08T20:34:08.847Z",
>"@version":"1",
>"ossec_group":"syslog,sshd,invalid_login,authentication_failed,",
>"reporting_ip":"10.1.2.3",
>"reporting_source":"/var/log/auth.log",
>"rule_number":"5710",
>"severity":5,
>"signature":"Attempt to login using a non-existent user",
>"src_ip":"112.65.211.164

[ossec-list] Re: OSSEC & Logstash

2014-08-14 Thread Villiers Tientcheu Ngandjeuu
Hi Josh,
Everything is OK now! In fact, I had to remove the if [type] == "ossec" condition 
from logstash's config file. However, I have a question: is there any problem with 
having that condition in logstash's output?
output {
  if [type] == "ossec" {
    elasticsearch {
      host => "127.0.0.1"
      cluster => "ossec"
      index => "logstash-ossec-%{+YYYY.MM.dd}"
      index_type => "ossec"
      template_name => "template-ossec"
      template => "/usr/local/share/logstash/elasticsearch_template.json"
      template_overwrite => true
    }
  }
}
On Saturday, March 8, 2014 at 23:02:35 UTC+1, Joshua Garnett wrote:
>
> All,
>
> I'll probably write a blog post on this, but I wanted to share some work 
> I've done today.  
> http://vichargrave.com/ossec-log-management-with-elasticsearch/ shows how 
> to use OSSEC's syslog output to route messages to Elasticsearch.  The 
> problem with this method is it uses UDP.  Even when sending packets to a 
> local process UDP by definition is unreliable.  Garbage collections and 
> other system events can cause packets to be lost.  I've found it tends to 
> cap out at around 1,500 messages per minute. 
>
> To address this issue I've put together a logstash config that will read 
> the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving the 
> reliability issue, it also fixes issues with multi-lines being lost, and 
> adds geoip lookups for the src_ip.  I tested it against approximately 1GB 
> of alerts (3M events).
>
> input {
>   file {
> type => "ossec"
> path => "/var/ossec/logs/alerts/alerts.log"
> sincedb_path => "/opt/logstash/"
> codec => multiline {
>   pattern => "^\*\*"
>   negate => true
>   what => "previous"
> }
>   }
> }
>
> filter {
>   if [type] == "ossec" {
> # Parse the header of the alert
> grok {
>   # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
>   # (?m) fixes issues with multi-lines see 
> https://logstash.jira.com/browse/LOGSTASH-509
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> \(%{DATA:reporting_host}\) 
> %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>   
>   # Matches  2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
> }
>
> # Attempt to parse additional data from the alert
> grok {
>   match => ["remaining_message", "(?m)(Src IP: 
> %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP: 
> %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User: 
> %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
> }
>
> geoip {
>   source => "src_ip"
> }
>
> mutate {
>   convert  => [ "severity", "integer"]
>   replace  => [ "@message", "%{real_message}" ]
>   replace  => [ "@fields.hostname", "%{reporting_host}"]
>   add_field=> [ "@fields.product", "ossec"]
>   add_field=> [ "raw_message", "%{message}"]
>   add_field=> [ "ossec_server", "%{host}"]
>   remove_field => [ "type", "syslog_program", "syslog_timestamp", 
> "reporting_host", "message", "timestamp_seconds", "real_message", 
> "remaining_message", "path", "host", "tags"]
> }
>   }
> }
>
> output {
>elasticsearch {
>  host => "10.0.0.1"
>  cluster => "mycluster"
>}
> }
>
> Here are a few examples of the output this generates.
>
> {
>"@timestamp":"2014-03-08T20:34:08.847Z",
>"@version":"1",
>"ossec_group":"syslog,sshd,invalid_login,authentication_failed,",
>"reporting_ip":"10.1.2.3",
>"reporting_source":"/var/log/auth.log",
>"rule_number":"5710",
>"severity":5,
>"signature":"Attempt to login using a non-existent user",
>"src_ip":"112.65.211.164",
>"geoip":{
>   "ip":"112.65.211.164",
>   "country_code2":"CN",
>   "country_code3":"CHN",
>   "country_name":"China",
>   "continent_code":"AS",
>   "region_name":"23",
>   "city_name":"Shanghai",
>   "latitude":31.0456007,
>   "longitude":121.3997,
>   "timezone":"Asia/Shanghai",
>   "real_region_name":"Shanghai",
>   "location":[
>  121.3997,
>  31.0456007
>   ]
>},
>"@message":"Mar  8 01:00:59 someserver sshd[22874]: Invalid user oracle 
> from 112.65.211

Re: [ossec-list] Re: OSSEC & Logstash

2014-08-13 Thread Joshua Garnett
We just did the upgrade to logstash 1.4 and Elasticsearch 1.2 a few weeks
ago.  Everything appears to still be working.

My updated output config:

output {
  elasticsearch {
    node_name => "ossec-server"
    host => "10.0.0.1"
    cluster => "mycluster"
    protocol => "transport"
    index => "logstash-ossec-%{+YYYY.MM.dd}"
    index_type => "ossec"
    template_name => "template-ossec"
    template => "/etc/logstash/elasticsearch_template.json"
    template_overwrite => true
  }
}

You should make sure that host has been changed to the IP of your
Elasticsearch instance.  Also, cluster should match the name you've
specified in the Elasticsearch config.

Example /etc/elasticsearch/elasticsearch.yml:

---
cluster:
  name: mycluster
  routing:
allocation:
  concurrent_streams: 6
  node_concurrent_recoveries: 6

... (more config) ...


--Josh
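
The elasticsearch_template.json referenced above is never reproduced in full in this 
thread. As a hedged sketch of its general shape, pieced together from the fragments 
quoted later (the logstash-ossec-* pattern, the stopwords setting, and the 
not_analyzed signature field) rather than Joshua's actual file:

{
  "template": "logstash-ossec-*",
  "settings": {
    "index.analysis.analyzer.default.stopwords": "_none_"
  },
  "mappings": {
    "ossec": {
      "properties": {
        "signature": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}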


On Tue, Aug 12, 2014 at 9:18 AM, Villiers Tientcheu Ngandjeuu <
tientcheuvilli...@gmail.com> wrote:

>
> Hi Joshua,
> Thank you for your post. I'm also concerned with OSSEC and Logstash for a
> business.
> I used your configuration in my test environment with some changes to the
> name of the cluster and the IP address.
> This is my test environment: I have two virtual hosts on the same network
> that can ping each other; one, let's say A, runs Logstash and the other,
> let's say B, runs Elasticsearch.
> On host A, I have copied some OSSEC logs and aggregated them into a single
> file in order to get what you have with "alerts.log".
> The other parameters in the configuration file remain the same as what
> you mentioned.
> But this is the issue I have: logstash doesn't create any index in the
> Elasticsearch cluster and I don't know why. Have you run into this issue?
> However, the Elasticsearch instance detects the Logstash instance, and when I
> configure logstash's config file to send the output to stdout, I get
> something - the result you have.
> So, why can't logstash send the result of the parsing to Elasticsearch?
> I use logstash-1.4.0 and Elasticsearch-1.3.0
> Thank you for any help!
>
> On Saturday, March 8, 2014 at 23:02:35 UTC+1, Joshua Garnett wrote:
>
>> All,
>>
>> I'll probably write a blog post on this, but I wanted to share some work
>> I've done today.  http://vichargrave.com/ossec-log-management-with-
>> elasticsearch/ shows how to use OSSEC's syslog output to route messages
>> to Elasticsearch.  The problem with this method is it uses UDP.  Even when
>> sending packets to a local process UDP by definition is unreliable.
>>  Garbage collections and other system events can cause packets to be lost.
>>  I've found it tends to cap out at around 1,500 messages per minute.
>>
>> To address this issue I've put together a logstash config that will read
>> the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving
>> the reliability issue, it also fixes issues with multi-lines being lost,
>> and adds geoip lookups for the src_ip.  I tested it against approximately
>> 1GB of alerts (3M events).
>>
>> input {
>>   file {
>> type => "ossec"
>> path => "/var/ossec/logs/alerts/alerts.log"
>> sincedb_path => "/opt/logstash/"
>> codec => multiline {
>>   pattern => "^\*\*"
>>   negate => true
>>   what => "previous"
>> }
>>   }
>> }
>>
>> filter {
>>   if [type] == "ossec" {
>> # Parse the header of the alert
>> grok {
>>   # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
>>   # (?m) fixes issues with multi-lines see https://logstash.jira.com/
>> browse/LOGSTASH-509
>>   match => ["message", "(?m)\*\* Alert 
>> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\-
>> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp}
>> \(%{DATA:reporting_host}\) 
>> %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule:
>> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\>
>> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>>
>>   # Matches  2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
>>   match => ["message", "(?m)\*\* Alert 
>> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\-
>> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp}
>> %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule:
>> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\>
>> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>> }
>>
>> # Attempt to parse additional data from the alert
>> grok {
>>   match => ["remaining_message", "(?m)(Src IP:
>> %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP:
>> %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User:
>> %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
>> }
>>
>> geoip {
>>   source => "src_ip"
>> }
>>
>> mutate {
>>   convert  => [ "severity", "integer"]
>>   replace  => [ "@message", "%{real_message}" ]
>>   replace  => [ "@fields.hostname", "%{reporting_host}"]
>>   add_field=> [ "@fields.product", "ossec"]
>>   ad

[ossec-list] Re: OSSEC & Logstash

2014-08-12 Thread Villiers Tientcheu Ngandjeuu

Hi Joshua,
Thank you for your post. I'm also concerned with OSSEC and Logstash for a 
business.
I used your configuration in my test environment with some changes to the 
name of the cluster and the IP address.
This is my test environment: I have two virtual hosts on the same network 
that can ping each other; one, let's say A, runs Logstash and the other, 
let's say B, runs Elasticsearch.
On host A, I have copied some OSSEC logs and aggregated them into a single 
file in order to get what you have with "alerts.log".
The other parameters in the configuration file remain the same as what you 
mentioned.
But this is the issue I have: logstash doesn't create any index in the 
Elasticsearch cluster and I don't know why. Have you run into this issue? 
However, the Elasticsearch instance detects the Logstash instance, and when I 
configure logstash's config file to send the output to stdout, I get 
something - the result you have.
So, why can't logstash send the result of the parsing to Elasticsearch?
I use logstash-1.4.0 and Elasticsearch-1.3.0
Thank you for any help!
 
On Saturday, March 8, 2014 at 23:02:35 UTC+1, Joshua Garnett wrote:
>
> All,
>
> I'll probably write a blog post on this, but I wanted to share some work 
> I've done today.  
> http://vichargrave.com/ossec-log-management-with-elasticsearch/ shows how 
> to use OSSEC's syslog output to route messages to Elasticsearch.  The 
> problem with this method is it uses UDP.  Even when sending packets to a 
> local process UDP by definition is unreliable.  Garbage collections and 
> other system events can cause packets to be lost.  I've found it tends to 
> cap out at around 1,500 messages per minute. 
>
> To address this issue I've put together a logstash config that will read 
> the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving the 
> reliability issue, it also fixes issues with multi-lines being lost, and 
> adds geoip lookups for the src_ip.  I tested it against approximately 1GB 
> of alerts (3M events).
>
> input {
>   file {
> type => "ossec"
> path => "/var/ossec/logs/alerts/alerts.log"
> sincedb_path => "/opt/logstash/"
> codec => multiline {
>   pattern => "^\*\*"
>   negate => true
>   what => "previous"
> }
>   }
> }
>
> filter {
>   if [type] == "ossec" {
> # Parse the header of the alert
> grok {
>   # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
>   # (?m) fixes issues with multi-lines see 
> https://logstash.jira.com/browse/LOGSTASH-509
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> \(%{DATA:reporting_host}\) 
> %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>   
>   # Matches  2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
> }
>
> # Attempt to parse additional data from the alert
> grok {
>   match => ["remaining_message", "(?m)(Src IP: 
> %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP: 
> %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User: 
> %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
> }
>
> geoip {
>   source => "src_ip"
> }
>
> mutate {
>   convert  => [ "severity", "integer"]
>   replace  => [ "@message", "%{real_message}" ]
>   replace  => [ "@fields.hostname", "%{reporting_host}"]
>   add_field=> [ "@fields.product", "ossec"]
>   add_field=> [ "raw_message", "%{message}"]
>   add_field=> [ "ossec_server", "%{host}"]
>   remove_field => [ "type", "syslog_program", "syslog_timestamp", 
> "reporting_host", "message", "timestamp_seconds", "real_message", 
> "remaining_message", "path", "host", "tags"]
> }
>   }
> }
>
> output {
>elasticsearch {
>  host => "10.0.0.1"
>  cluster => "mycluster"
>}
> }
>
> Here are a few examples of the output this generates.
>
> {
>"@timestamp":"2014-03-08T20:34:08.847Z",
>"@version":"1",
>"ossec_group":"syslog,sshd,invalid_login,authentication_failed,",
>"reporting_ip":"10.1.2.3",
>"reporting_source":"/var/log/auth.log",
>"rule_number":"5710",
>"severity":5,
>"signature":"Attempt to login using a non-existent user",
>"src_ip":"112.65.211.164",
>"geoip":{
>   "ip":"112.65.211.164",
>   "country_code2":"CN",
>   "country_code3":"CHN",
>   "country_name":"China",
>   "continent_code":"AS",
>   "region_name":"23",
>   "city_name"

Re: [ossec-list] Re: OSSEC & Logstash

2014-05-12 Thread Joshua Garnett
Sercan,

There are a few ways you can handle this.  2GB a day seems a little on the
high side for 200+ clients, so you may want to look at creating rules that
reclassify noisy, non-security-related messages as severity 0, which essentially
/dev/nulls them.  The other option is the log_alert_level setting in the
<alerts> section of ossec.conf, which controls which severity levels are
written to the alerts file.
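
As a rough sketch of both options (hedged: the level threshold, rule id, and if_sid
below are placeholders to adapt, not values from this thread):

In ossec.conf, only write alerts at level 5 or above to alerts.log:

<alerts>
  <log_alert_level>5</log_alert_level>
</alerts>

In local_rules.xml, silence one specific noisy rule by overriding it to level 0:

<group name="local,">
  <rule id="100100" level="0">
    <if_sid>12345</if_sid>  <!-- placeholder: id of the noisy, non-security rule -->
    <description>Noisy non-security event, ignored</description>
  </rule>
</group>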

All of that said, be very careful about throwing away even low severity log
messages.  You never know what will be useful after a security incident.

--Josh



On Fri, May 9, 2014 at 5:26 AM, sercan acar  wrote:

> Hi,
>
> Is there a way to control the alert level which is stored by
> elasticsearch? I know you can do this through rsyslog, but is it possible
> through logstash.conf?
>
> With 200+ clients and they are generating around 2GB of data a day!
>
> Regards,
>



[ossec-list] Re: OSSEC & Logstash

2014-05-09 Thread sercan acar
Hi,

Is there a way to control which alert levels get stored in Elasticsearch? 
I know you can do this through rsyslog, but is it possible through 
logstash.conf?

With 200+ clients, they are generating around 2GB of data a day!

Regards,
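
One logstash.conf-side option, as a hedged sketch rather than anything confirmed in 
this thread: drop low-severity events in the filter stage so they never reach the 
elasticsearch output. This assumes it sits after the grok/mutate block from Joshua's 
config (so "severity" is already an integer), and the threshold of 5 is only an example:

filter {
  # discard anything below alert level 5 before it is indexed
  if [severity] and [severity] < 5 {
    drop { }
  }
}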



Re: [ossec-list] Re: OSSEC & Logstash

2014-05-07 Thread Denis
OK, the problem is that OSSEC rotates its logs every midnight and the chmod is 
740, so I have to deal with that.
Cheers

On Tuesday, May 6, 2014 12:01:09 PM UTC+1, Denis wrote:
>
> I was trying to configure everything "Joshua" way, and i see all data is 
> coming into stdin{}, but when i switch to elasticsearch index,  index is 
> empty.
> how to debug why data is not coming into index? is there any more debug 
> keys available?
>
> thank you
>



Re: [ossec-list] Re: OSSEC & Logstash

2014-05-06 Thread Denis
I was trying to configure everything "Joshua's" way, and I see all the data 
coming into stdin{}, but when I switch to the elasticsearch index, the index is 
empty.
How do I debug why data is not making it into the index? Are there any more debug 
flags available?

thank you
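
A hedged debugging sketch, assuming a logstash 1.4-era setup: add a temporary stdout 
output next to elasticsearch so you can see exactly what reaches the output stage, and 
start logstash with debug logging (the exact flag may vary by version):

output {
  stdout { codec => rubydebug }   # temporary: prints each event as it leaves the pipeline
  elasticsearch {
    host => "10.0.0.1"
    cluster => "mycluster"
  }
}

./logstash agent -f ./logstash.conf --debug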



Re: [ossec-list] Re: OSSEC & Logstash

2014-04-11 Thread Joshua Garnett
Sercan,

The BetterMaps map provider has been cranky lately. I've seen issues over
the past few days with loading the actual map.

--Josh


On Thu, Apr 10, 2014 at 4:33 PM, sercan acar  wrote:

> Thank you Josh. Not sure why I though filtering would be more complicated,
> lucene syntax is simple enough and it is very easy to add the timestamp
> field back in.
>
> I'm having deficilties with the Bettermap. The panel loads with values in
> different colour codes and number of alerts (so far so good) however the
> background is blank and loading bar is in a loop. What have I done wrong?
>
> Sercan
>
> On Tuesday, 8 April 2014 05:24:36 UTC+1, Joshua Garnett wrote:
>
>> Hi Sercan,
>>
>>- Kibana/Elasticsearch uses lucene syntax by default.  To filter
>>Alert Level 5 or above use:  severity:[5 TO *]
>>- geoip.location is the correct field for Bettermap
>>- @timestamp is the standard field used for the DateTime.  I didn't
>>see the need to have the extra field.  It'd be easy to add in if you 
>> prefer
>>it.
>>
>> --Josh
>>
>>
>>
>>
>> On Mon, Apr 7, 2014 at 1:31 PM, sercan acar  wrote:
>>
>>> Thank you Joshua Garnett. I've switched from syslog to localhost to
>>> reading the log file directly.
>>>
>>> Few questions:
>>>
>>>- Is there are way to filter with "Alert Level X or above"? (This is
>>>more generic Kibana question)
>>>- Which field did you use for the Bettermap Panel? I've added the
>>>panel with geoip.lattitude however the panel fails to load without any
>>>errors
>>>- Is there a reason why you choose to remove fields? for me
>>>syslog_timestamp is much cleaner than @timestamp
>>>
>>> Cheers
>>>
>>>
>>> On Thursday, 20 March 2014 17:02:52 UTC, vic hargrave wrote:
>>>
 Since writing my blog on using Elasticseach for OSSEC log management,
 I've upgraded to Elasticsearch 1.0.1 which does not seem to be able get
 logs data from Logstash 1.3.2 or 1.3.3.  The solution is to use
 "elasticsearch_http" in the "output" section of the logstash configuration
 file.  When you do that all is well.

 For more information on better log ingestion rates, check out Brad
 Lhotsky's article - http://edgeofsanity.net/article/2012/12/26/
 elasticsearch-for-logging.html.


 On Thu, Mar 20, 2014 at 3:43 AM, Chris H  wrote:

> Thanks, I'll have a look.  For me the default template created each
> field as a multi-field, with the regular, analysed field and an additional
> "raw" un-analysed field.  I'm extracting quite a lot of fields from the
> different log types, which is something I was doing in Splunk before 
> trying
> elasticsearch.
>
> "Alert_Level" : {
>   "type" : "multi_field",
>   "fields" : {
> "Alert_Level" : {
>   "type" : "string",
>   "omit_norms" : true
> },
> "raw" : {
>
>   "type" : "string",
>   "index" : "not_analyzed",
>   "omit_norms" : true,
>   "index_options" : "docs",
>   "include_in_all" : false,
>   "ignore_above" : 256
> }
>   }
> },
>
> I created a new default template in elasticsearch:
>
> curl -XPUT 'http://localhost:9200/_template/template_logstash/' -d '{
>   "template": "logstash-*",
>   "settings": {
> "index.store.compress.stored": true
>   },
>   "mappings": {
> "_default_": {
>   "_source": { "compress": "true" },
>   "_all" : {
> "enabled" : false
>   }
> }
>   }
> }'
>
> This has applied, but the compression doesn't seem to do much.  I'm at
> the point where I might only be able to store a limited amount of data in
> elasticsearch :(
>
> Chris
>
>
>
> On Wednesday, March 19, 2014 7:37:41 PM UTC, Joshua Garnett wrote:
>
>> Chris,
>>
>> Yeah digging into the templates was another big win for me.  For
>> instance, if you try to do a topN query on signature with the default
>> template, you end up with words like the and and as your top hits.  
>> Setting
>> signature to not_analyzed ensures the field isn't tokenized.  Below is my
>> template.
>>
>> --Josh
>>
>> Logstash settings:
>>
>> output {
>>elasticsearch {
>>  host => "10.0.0.1"
>>  cluster => "mycluster"
>>  index => "logstash-ossec-%{+YYYY.MM.dd}"
>>  index_type => "ossec"
>>  template_name => "template-ossec"
>>  template => "/etc/logstash/elasticsearch_template.json"
>>  template_overwrite => true
>>}
>> }
>>
>> elasticsearch_template.json
>>
>> {
>>   "template":"logstash-ossec-*",
>>   "settings":{
>> "index.analysis.analyzer.default.stopwords":"_n

Re: [ossec-list] Re: OSSEC & Logstash

2014-04-10 Thread sercan acar
Thank you Josh. Not sure why I thought filtering would be more complicated, 
lucene syntax is simple enough and it is very easy to add the timestamp 
field back in.

I'm having difficulties with the Bettermap. The panel loads with values in 
different colour codes and the number of alerts (so far so good), however the 
background is blank and the loading bar is stuck in a loop. What have I done wrong?

Sercan
On Tuesday, 8 April 2014 05:24:36 UTC+1, Joshua Garnett wrote:
>
> Hi Sercan,
>
>- Kibana/Elasticsearch uses lucene syntax by default.  To filter Alert 
>Level 5 or above use:  severity:[5 TO *]
>- geoip.location is the correct field for Bettermap
>- @timestamp is the standard field used for the DateTime.  I didn't 
>see the need to have the extra field.  It'd be easy to add in if you 
> prefer 
>it. 
>
> --Josh
>
>
>
>
> On Mon, Apr 7, 2014 at 1:31 PM, sercan acar 
> > wrote:
>
>> Thank you Joshua Garnett. I've switched from syslog to localhost to 
>> reading the log file directly.
>>
>> Few questions:
>>
>>- Is there a way to filter with "Alert Level X or above"? (This is 
>>more generic Kibana question)
>>- Which field did you use for the Bettermap Panel? I've added the 
>>panel with geoip.latitude however the panel fails to load without any 
>>errors 
>>- Is there a reason why you chose to remove fields? For me 
>>syslog_timestamp is much cleaner than @timestamp
>>
>> Cheers
>>
>>
>> On Thursday, 20 March 2014 17:02:52 UTC, vic hargrave wrote:
>>
>>> Since writing my blog on using Elasticsearch for OSSEC log management, 
>>> I've upgraded to Elasticsearch 1.0.1, which does not seem to be able to get 
>>> log data from Logstash 1.3.2 or 1.3.3.  The solution is to use 
>>> "elasticsearch_http" in the "output" section of the logstash configuration 
>>> file.  When you do that all is well.  
>>>
>>> For more information on better log ingestion rates, check out Brad 
>>> Lhotsky's article - http://edgeofsanity.net/article/2012/12/26/
>>> elasticsearch-for-logging.html.
>>>
>>>
>>> On Thu, Mar 20, 2014 at 3:43 AM, Chris H  wrote:
>>>
 Thanks, I'll have a look.  For me the default template created each 
 field as a multi-field, with the regular, analysed field and an additional 
 "raw" un-analysed field.  I'm extracting quite a lot of fields from the 
 different log types, which is something I was doing in Splunk before 
 trying 
 elasticsearch.

 "Alert_Level" : {
   "type" : "multi_field",
   "fields" : {
 "Alert_Level" : {
   "type" : "string",
   "omit_norms" : true
 },
 "raw" : {

   "type" : "string",
   "index" : "not_analyzed",
   "omit_norms" : true,
   "index_options" : "docs",
   "include_in_all" : false,
   "ignore_above" : 256
 }
   }
 },

 I created a new default template in elasticsearch:

 curl -XPUT 'http://localhost:9200/_template/template_logstash/' -d '{
   "template": "logstash-*",
   "settings": {
 "index.store.compress.stored": true
   },
   "mappings": {
 "_default_": {
   "_source": { "compress": "true" },
   "_all" : {
 "enabled" : false
   }
 }
   }
 }'

 This has applied, but the compression doesn't seem to do much.  I'm at 
 the point where I might only be able to store a limited amount of data in 
 elasticsearch :(

 Chris



 On Wednesday, March 19, 2014 7:37:41 PM UTC, Joshua Garnett wrote:

> Chris,
>
> Yeah digging into the templates was another big win for me.  For 
> instance, if you try to do a topN query on signature with the default 
> template, you end up with words like the and and as your top hits.  
> Setting 
> signature to not_analyzed ensures the field isn't tokenized.  Below is my 
> template.
>
> --Josh
>
> Logstash settings:
>
> output {
>elasticsearch {
>  host => "10.0.0.1"
>  cluster => "mycluster"
> >  index => "logstash-ossec-%{+YYYY.MM.dd}"
>  index_type => "ossec"
>  template_name => "template-ossec"
>  template => "/etc/logstash/elasticsearch_template.json"
>  template_overwrite => true
>}
> }
>
> elasticsearch_template.json
>
> {
>   "template":"logstash-ossec-*",
>   "settings":{
> "index.analysis.analyzer.default.stopwords":"_none_",
> "index.refresh_interval":"5s",
> "index.analysis.analyzer.default.type":"standard"
>   },
>   "mappings":{
> "ossec":{
>   "properties":{
> "@fields.hostname":{
>   "type":"string",
>   "index":"not_analyzed"
> },

Re: [ossec-list] Re: OSSEC & Logstash

2014-04-07 Thread Joshua Garnett
Hi Sercan,

   - Kibana/Elasticsearch uses lucene syntax by default.  To filter Alert
   Level 5 or above use:  severity:[5 TO *]
   - geoip.location is the correct field for Bettermap
   - @timestamp is the standard field used for the DateTime.  I didn't see
   the need to have the extra field.  It'd be easy to add in if you prefer it.

--Josh
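
The same Lucene range syntax can also be used when querying Elasticsearch
directly, for example through a query_string query. A quick sketch (the index
pattern is just an example):

curl -XGET 'http://localhost:9200/logstash-ossec-*/_search?pretty' -d '{
  "query": { "query_string": { "query": "severity:[5 TO *]" } },
  "size": 5
}'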




On Mon, Apr 7, 2014 at 1:31 PM, sercan acar  wrote:

> Thank you Joshua Garnett. I've switched from syslog to localhost to
> reading the log file directly.
>
> Few questions:
>
>- Is there a way to filter with "Alert Level X or above"? (This is
>more generic Kibana question)
>- Which field did you use for the Bettermap Panel? I've added the
>panel with geoip.latitude however the panel fails to load without any
>errors
>- Is there a reason why you chose to remove fields? For me
>syslog_timestamp is much cleaner than @timestamp
>
> Cheers
>
>
> On Thursday, 20 March 2014 17:02:52 UTC, vic hargrave wrote:
>
>> Since writing my blog on using Elasticsearch for OSSEC log management,
>> I've upgraded to Elasticsearch 1.0.1, which does not seem to be able to get
>> log data from Logstash 1.3.2 or 1.3.3.  The solution is to use
>> "elasticsearch_http" in the "output" section of the logstash configuration
>> file.  When you do that all is well.
>>
>> For more information on better log ingestion rates, check out Brad
>> Lhotsky's article - http://edgeofsanity.net/article/2012/12/26/
>> elasticsearch-for-logging.html.
>>
>>
>> On Thu, Mar 20, 2014 at 3:43 AM, Chris H  wrote:
>>
>>> Thanks, I'll have a look.  For me the default template created each
>>> field as a multi-field, with the regular, analysed field and an additional
>>> "raw" un-analysed field.  I'm extracting quite a lot of fields from the
>>> different log types, which is something I was doing in Splunk before trying
>>> elasticsearch.
>>>
>>> "Alert_Level" : {
>>>   "type" : "multi_field",
>>>   "fields" : {
>>> "Alert_Level" : {
>>>   "type" : "string",
>>>   "omit_norms" : true
>>> },
>>> "raw" : {
>>>
>>>   "type" : "string",
>>>   "index" : "not_analyzed",
>>>   "omit_norms" : true,
>>>   "index_options" : "docs",
>>>   "include_in_all" : false,
>>>   "ignore_above" : 256
>>> }
>>>   }
>>> },
>>>
>>> I created a new default template in elasticsearch:
>>>
>>> curl -XPUT 'http://localhost:9200/_template/template_logstash/' -d '{
>>>   "template": "logstash-*",
>>>   "settings": {
>>> "index.store.compress.stored": true
>>>   },
>>>   "mappings": {
>>> "_default_": {
>>>   "_source": { "compress": "true" },
>>>   "_all" : {
>>> "enabled" : false
>>>   }
>>> }
>>>   }
>>> }'
>>>
>>> This has applied, but the compression doesn't seem to do much.  I'm at
>>> the point where I might only be able to store a limited amount of data in
>>> elasticsearch :(
>>>
>>> Chris
>>>
>>>
>>>
>>> On Wednesday, March 19, 2014 7:37:41 PM UTC, Joshua Garnett wrote:
>>>
 Chris,

 Yeah digging into the templates was another big win for me.  For
 instance, if you try to do a topN query on signature with the default
 template, you end up with words like the and and as your top hits.  Setting
 signature to not_analyzed ensures the field isn't tokenized.  Below is my
 template.

 --Josh

 Logstash settings:

 output {
elasticsearch {
  host => "10.0.0.1"
  cluster => "mycluster"
  index => "logstash-ossec-%{+YYYY.MM.dd}"
  index_type => "ossec"
  template_name => "template-ossec"
  template => "/etc/logstash/elasticsearch_template.json"
  template_overwrite => true
}
 }

 elasticsearch_template.json

 {
   "template":"logstash-ossec-*",
   "settings":{
 "index.analysis.analyzer.default.stopwords":"_none_",
 "index.refresh_interval":"5s",
 "index.analysis.analyzer.default.type":"standard"
   },
   "mappings":{
 "ossec":{
   "properties":{
 "@fields.hostname":{
   "type":"string",
   "index":"not_analyzed"
 },
 "@fields.product":{
   "type":"string",
   "index":"not_analyzed"
 },
 "@message":{
   "type":"string",
   "index":"not_analyzed"
 },
 "@timestamp":{
   "type":"date"
 },
 "@version":{
   "type":"string",
   "index":"not_analyzed"
 },
 "acct":{
   "type":"string",
   "index":"not_analyzed"
 },
 "ossec_group":{
   "type":"string",
   "index":"not_analyzed"
 },
 "ossec_

Re: [ossec-list] Re: OSSEC & Logstash

2014-04-07 Thread sercan acar
Thank you Joshua Garnett. I've switched from syslog over localhost to reading 
the log file directly.

Few questions:

   - Is there a way to filter with "Alert Level X or above"? (This is 
   more generic Kibana question)
   - Which field did you use for the Bettermap Panel? I've added the panel 
   with geoip.latitude however the panel fails to load without any errors
   - Is there a reason why you chose to remove fields? For me 
   syslog_timestamp is much cleaner than @timestamp

Cheers


On Thursday, 20 March 2014 17:02:52 UTC, vic hargrave wrote:
>
> Since writing my blog on using Elasticsearch for OSSEC log management, I've 
> upgraded to Elasticsearch 1.0.1, which does not seem to be able to get log 
> data from Logstash 1.3.2 or 1.3.3.  The solution is to use 
> "elasticsearch_http" in the "output" section of the logstash configuration 
> file.  When you do that all is well.  
>
> For more information on better log ingestion rates, check out Brad 
> Lhotsky's article - 
> http://edgeofsanity.net/article/2012/12/26/elasticsearch-for-logging.html.
>
>
> On Thu, Mar 20, 2014 at 3:43 AM, Chris H 
> > wrote:
>
>> Thanks, I'll have a look.  For me the default template created each field 
>> as a multi-field, with the regular, analysed field and an additional "raw" 
>> un-analysed field.  I'm extracting quite a lot of fields from the different 
>> log types, which is something I was doing in Splunk before trying 
>> elasticsearch.
>>
>> "Alert_Level" : {
>>   "type" : "multi_field",
>>   "fields" : {
>> "Alert_Level" : {
>>   "type" : "string",
>>   "omit_norms" : true
>> },
>> "raw" : {
>>
>>   "type" : "string",
>>   "index" : "not_analyzed",
>>   "omit_norms" : true,
>>   "index_options" : "docs",
>>   "include_in_all" : false,
>>   "ignore_above" : 256
>> }
>>   }
>> },
>>
>> I created a new default template in elasticsearch:
>>
>> curl -XPUT 'http://localhost:9200/_template/template_logstash/' -d '{
>>   "template": "logstash-*",
>>   "settings": {
>> "index.store.compress.stored": true
>>   },
>>   "mappings": {
>> "_default_": {
>>   "_source": { "compress": "true" },
>>   "_all" : {
>> "enabled" : false
>>   }
>> }
>>   }
>> }'
>>
>> This has applied, but the compression doesn't seem to do much.  I'm at 
>> the point where I might only be able to store a limited amount of data in 
>> elasticsearch :(
>>
>> Chris
>>
>>
>>
>> On Wednesday, March 19, 2014 7:37:41 PM UTC, Joshua Garnett wrote:
>>
>>> Chris,
>>>
>>> Yeah digging into the templates was another big win for me.  For 
>>> instance, if you try to do a topN query on signature with the default 
>>> template, you end up with words like the and and as your top hits.  Setting 
>>> signature to not_analyzed ensures the field isn't tokenized.  Below is my 
>>> template.
>>>
>>> --Josh
>>>
>>> Logstash settings:
>>>
>>> output {
>>>elasticsearch {
>>>  host => "10.0.0.1"
>>>  cluster => "mycluster"
>>>  index => "logstash-ossec-%{+YYYY.MM.dd}"
>>>  index_type => "ossec"
>>>  template_name => "template-ossec"
>>>  template => "/etc/logstash/elasticsearch_template.json"
>>>  template_overwrite => true
>>>}
>>> }
>>>
>>> elasticsearch_template.json
>>>
>>> {
>>>   "template":"logstash-ossec-*",
>>>   "settings":{
>>> "index.analysis.analyzer.default.stopwords":"_none_",
>>> "index.refresh_interval":"5s",
>>> "index.analysis.analyzer.default.type":"standard"
>>>   },
>>>   "mappings":{
>>> "ossec":{
>>>   "properties":{
>>> "@fields.hostname":{
>>>   "type":"string",
>>>   "index":"not_analyzed"
>>> },
>>> "@fields.product":{
>>>   "type":"string",
>>>   "index":"not_analyzed"
>>> },
>>> "@message":{
>>>   "type":"string",
>>>   "index":"not_analyzed"
>>> },
>>> "@timestamp":{
>>>   "type":"date"
>>> },
>>> "@version":{
>>>   "type":"string",
>>>   "index":"not_analyzed"
>>> },
>>> "acct":{
>>>   "type":"string",
>>>   "index":"not_analyzed"
>>> },
>>> "ossec_group":{
>>>   "type":"string",
>>>   "index":"not_analyzed"
>>> },
>>> "ossec_server":{
>>>   "type":"string",
>>>   "index":"not_analyzed"
>>> },
>>> "raw_message":{
>>>   "type":"string",
>>>   "index":"analyzed"
>>> },
>>> "reporting_ip":{
>>>   "type":"string",
>>>   "index":"not_analyzed"
>>> },
>>> "reporting_source":{
>>>   "type":"string",
>>>   "index":"analyzed"
>>> },
>>> "rule_number":{
>>>   "type":"integer"
>>> },
>>> "severity":{
>>>   "ty

Re: [ossec-list] Re: OSSEC & Logstash

2014-03-20 Thread Vic Hargrave
Since writing my blog on using Elasticsearch for OSSEC log management, I've
upgraded to Elasticsearch 1.0.1, which does not seem to be able to get log
data from Logstash 1.3.2 or 1.3.3.  The solution is to use
"elasticsearch_http" in the "output" section of the logstash configuration
file.  When you do that all is well.
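
For reference, a minimal sketch of what that output section can look like on
Logstash 1.3.x (the host, port and index name below are placeholders, not taken
from a real setup):

output {
  # elasticsearch_http talks to Elasticsearch over its REST API on port 9200,
  # so it avoids the version coupling of the node-protocol elasticsearch output
  elasticsearch_http {
    host  => "localhost"
    port  => 9200
    index => "logstash-%{+YYYY.MM.dd}"
  }
}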

For more information on better log ingestion rates, check out Brad
Lhotsky's article -
http://edgeofsanity.net/article/2012/12/26/elasticsearch-for-logging.html.


On Thu, Mar 20, 2014 at 3:43 AM, Chris H  wrote:

> Thanks, I'll have a look.  For me the default template created each field
> as a multi-field, with the regular, analysed field and an additional "raw"
> un-analysed field.  I'm extracting quite a lot of fields from the different
> log types, which is something I was doing in Splunk before trying
> elasticsearch.
>
> "Alert_Level" : {
>   "type" : "multi_field",
>   "fields" : {
> "Alert_Level" : {
>   "type" : "string",
>   "omit_norms" : true
> },
> "raw" : {
>
>   "type" : "string",
>   "index" : "not_analyzed",
>   "omit_norms" : true,
>   "index_options" : "docs",
>   "include_in_all" : false,
>   "ignore_above" : 256
> }
>   }
> },
>
> I created a new default template in elasticsearch:
>
> curl -XPUT 'http://localhost:9200/_template/template_logstash/' -d '{
>   "template": "logstash-*",
>   "settings": {
> "index.store.compress.stored": true
>   },
>   "mappings": {
> "_default_": {
>   "_source": { "compress": "true" },
>   "_all" : {
> "enabled" : false
>   }
> }
>   }
> }'
>
> This has applied, but the compression doesn't seem to do much.  I'm at the
> point where I might only be able to store a limited amount of data in
> elasticsearch :(
>
> Chris
>
>
>
> On Wednesday, March 19, 2014 7:37:41 PM UTC, Joshua Garnett wrote:
>
>> Chris,
>>
>> Yeah digging into the templates was another big win for me.  For
>> instance, if you try to do a topN query on signature with the default
>> template, you end up with words like the and and as your top hits.  Setting
>> signature to not_analyzed ensures the field isn't tokenized.  Below is my
>> template.
>>
>> --Josh
>>
>> Logstash settings:
>>
>> output {
>>elasticsearch {
>>  host => "10.0.0.1"
>>  cluster => "mycluster"
>>  index => "logstash-ossec-%{+YYYY.MM.dd}"
>>  index_type => "ossec"
>>  template_name => "template-ossec"
>>  template => "/etc/logstash/elasticsearch_template.json"
>>  template_overwrite => true
>>}
>> }
>>
>> elasticsearch_template.json
>>
>> {
>>   "template":"logstash-ossec-*",
>>   "settings":{
>> "index.analysis.analyzer.default.stopwords":"_none_",
>> "index.refresh_interval":"5s",
>> "index.analysis.analyzer.default.type":"standard"
>>   },
>>   "mappings":{
>> "ossec":{
>>   "properties":{
>> "@fields.hostname":{
>>   "type":"string",
>>   "index":"not_analyzed"
>> },
>> "@fields.product":{
>>   "type":"string",
>>   "index":"not_analyzed"
>> },
>> "@message":{
>>   "type":"string",
>>   "index":"not_analyzed"
>> },
>> "@timestamp":{
>>   "type":"date"
>> },
>> "@version":{
>>   "type":"string",
>>   "index":"not_analyzed"
>> },
>> "acct":{
>>   "type":"string",
>>   "index":"not_analyzed"
>> },
>> "ossec_group":{
>>   "type":"string",
>>   "index":"not_analyzed"
>> },
>> "ossec_server":{
>>   "type":"string",
>>   "index":"not_analyzed"
>> },
>> "raw_message":{
>>   "type":"string",
>>   "index":"analyzed"
>> },
>> "reporting_ip":{
>>   "type":"string",
>>   "index":"not_analyzed"
>> },
>> "reporting_source":{
>>   "type":"string",
>>   "index":"analyzed"
>> },
>> "rule_number":{
>>   "type":"integer"
>> },
>> "severity":{
>>   "type":"integer"
>> },
>> "signature":{
>>   "type":"string",
>>   "index":"not_analyzed"
>> },
>> "src_ip":{
>>   "type":"string",
>>   "index":"not_analyzed"
>> },
>> "geoip":{
>>   "type" : "object",
>>   "dynamic": true,
>>   "path": "full",
>>   "properties" : {
>> "location" : { "type" : "geo_point" }
>>   }
>> }
>>   },
>>   "_all":{
>> "enabled":true
>>   }
>> }
>>   }
>> }
>>
>>
>> On Wed, Mar 19, 2014 at 10:54 AM, Chris H  wrote:
>>
>>> Hi, Joshua.
>>>
>>> I'm using a very similar technique.  Are you applying a mapping
>>> template, or using the default?  I'm using 

Re: [ossec-list] Re: OSSEC & Logstash

2014-03-20 Thread Chris H
Thanks, I'll have a look.  For me the default template created each field 
as a multi-field, with the regular, analysed field and an additional "raw" 
un-analysed field.  I'm extracting quite a lot of fields from the different 
log types, which is something I was doing in Splunk before trying 
elasticsearch.

"Alert_Level" : {
  "type" : "multi_field",
  "fields" : {
"Alert_Level" : {
  "type" : "string",
  "omit_norms" : true
},
"raw" : {
  "type" : "string",
  "index" : "not_analyzed",
  "omit_norms" : true,
  "index_options" : "docs",
  "include_in_all" : false,
  "ignore_above" : 256
}
  }
},

I created a new default template in elasticsearch:

curl -XPUT 'http://localhost:9200/_template/template_logstash/' -d '{
  "template": "logstash-*",
  "settings": {
"index.store.compress.stored": true
  },
  "mappings": {
"_default_": {
  "_source": { "compress": "true" },
  "_all" : {
"enabled" : false
  }
}
  }
}'

This has been applied, but the compression doesn't seem to do much.  I'm at the 
point where I might only be able to store a limited amount of data in 
elasticsearch :(

Chris
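
For anyone in the same position, one blunt but common way to stay inside a disk
budget when Logstash writes one index per day is to delete (or close) the
oldest daily indices. A sketch, with the index name purely as an example:

# remove an old daily index entirely
curl -XDELETE 'http://localhost:9200/logstash-2014.03.01'

# or keep it on disk but unload it until it is needed again
curl -XPOST 'http://localhost:9200/logstash-2014.03.01/_close'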


On Wednesday, March 19, 2014 7:37:41 PM UTC, Joshua Garnett wrote:
>
> Chris,
>
> Yeah digging into the templates was another big win for me.  For instance, 
> if you try to do a topN query on signature with the default template, you 
> end up with words like the and and as your top hits.  Setting signature 
> to not_analyzed ensures the field isn't tokenized.  Below is my template.
>
> --Josh
>
> Logstash settings:
>
> output {
>elasticsearch {
>  host => "10.0.0.1"
>  cluster => "mycluster"
>  index => "logstash-ossec-%{+YYYY.MM.dd}"
>  index_type => "ossec"
>  template_name => "template-ossec"
>  template => "/etc/logstash/elasticsearch_template.json"
>  template_overwrite => true
>}
> }
>
> elasticsearch_template.json
>
> {
>   "template":"logstash-ossec-*",
>   "settings":{
> "index.analysis.analyzer.default.stopwords":"_none_",
> "index.refresh_interval":"5s",
> "index.analysis.analyzer.default.type":"standard"
>   },
>   "mappings":{
> "ossec":{
>   "properties":{
> "@fields.hostname":{
>   "type":"string",
>   "index":"not_analyzed"
> },
> "@fields.product":{
>   "type":"string",
>   "index":"not_analyzed"
> },
> "@message":{
>   "type":"string",
>   "index":"not_analyzed"
> },
> "@timestamp":{
>   "type":"date"
> },
> "@version":{
>   "type":"string",
>   "index":"not_analyzed"
> },
> "acct":{
>   "type":"string",
>   "index":"not_analyzed"
> },
> "ossec_group":{
>   "type":"string",
>   "index":"not_analyzed"
> },
> "ossec_server":{
>   "type":"string",
>   "index":"not_analyzed"
> },
> "raw_message":{
>   "type":"string",
>   "index":"analyzed"
> },
> "reporting_ip":{
>   "type":"string",
>   "index":"not_analyzed"
> },
> "reporting_source":{
>   "type":"string",
>   "index":"analyzed"
> },
> "rule_number":{
>   "type":"integer"
> },
> "severity":{
>   "type":"integer"
> },
> "signature":{
>   "type":"string",
>   "index":"not_analyzed"
> },
> "src_ip":{
>   "type":"string",
>   "index":"not_analyzed"
> },
> "geoip":{
>   "type" : "object",
>   "dynamic": true,
>   "path": "full",
>   "properties" : {
> "location" : { "type" : "geo_point" }
>   }
> }
>   },
>   "_all":{
> "enabled":true
>   }
> }
>   }
> }
>
>
> On Wed, Mar 19, 2014 at 10:54 AM, Chris H 
> > wrote:
>
>> Hi, Joshua.  
>>
>> I'm using a very similar technique.  Are you applying a mapping template, 
>> or using the default?  I'm using the default automatic templates, because 
>> frankly I don't fully understand templates.  What this means though is that 
>> my daily indexes are larger than the uncompressed alerts.log, between 2-4GB 
>> per day, and I'm rapidly running out of disk space.  I gather that this can 
>> be optimised by enabling compression and excluding the _source and _all 
>> fields through the mapping template, but I'm not sure exactly how this 
>> works.  Just wondered if you've come across the same problem.
>>
>> Thanks.
>>
>>
>> On Saturday, March 8, 2014 10:02:35 PM UTC, Joshua Garnett wrote:
>>>
>>> All,
>>>
>>> I'll probably write a blog post on this, but I wanted to share some work 
>>> I've done today.  http://vichargrave.com/oss

Re: [ossec-list] Re: OSSEC & Logstash

2014-03-19 Thread Joshua Garnett
Chris,

Yeah, digging into the templates was another big win for me.  For instance,
if you try to do a topN query on signature with the default template, you
end up with words like "the" and "and" as your top hits.  Setting signature
to not_analyzed ensures the field isn't tokenized.  Below is my template.

--Josh

Logstash settings:

output {
   elasticsearch {
 host => "10.0.0.1"
 cluster => "mycluster"
 index => "logstash-ossec-%{+YYYY.MM.dd}"
 index_type => "ossec"
 template_name => "template-ossec"
 template => "/etc/logstash/elasticsearch_template.json"
 template_overwrite => true
   }
}

elasticsearch_template.json

{
  "template":"logstash-ossec-*",
  "settings":{
"index.analysis.analyzer.default.stopwords":"_none_",
"index.refresh_interval":"5s",
"index.analysis.analyzer.default.type":"standard"
  },
  "mappings":{
"ossec":{
  "properties":{
"@fields.hostname":{
  "type":"string",
  "index":"not_analyzed"
},
"@fields.product":{
  "type":"string",
  "index":"not_analyzed"
},
"@message":{
  "type":"string",
  "index":"not_analyzed"
},
"@timestamp":{
  "type":"date"
},
"@version":{
  "type":"string",
  "index":"not_analyzed"
},
"acct":{
  "type":"string",
  "index":"not_analyzed"
},
"ossec_group":{
  "type":"string",
  "index":"not_analyzed"
},
"ossec_server":{
  "type":"string",
  "index":"not_analyzed"
},
"raw_message":{
  "type":"string",
  "index":"analyzed"
},
"reporting_ip":{
  "type":"string",
  "index":"not_analyzed"
},
"reporting_source":{
  "type":"string",
  "index":"analyzed"
},
"rule_number":{
  "type":"integer"
},
"severity":{
  "type":"integer"
},
"signature":{
  "type":"string",
  "index":"not_analyzed"
},
"src_ip":{
  "type":"string",
  "index":"not_analyzed"
},
"geoip":{
  "type" : "object",
  "dynamic": true,
  "path": "full",
  "properties" : {
"location" : { "type" : "geo_point" }
  }
}
  },
  "_all":{
"enabled":true
  }
}
  }
}
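
To see the difference this makes, a topN on signature can be run as a terms
aggregation once the field is not_analyzed. A sketch against Elasticsearch 1.x
(the index pattern and size are just examples):

curl -XGET 'http://localhost:9200/logstash-ossec-*/_search?search_type=count&pretty' -d '{
  "aggs": {
    "top_signatures": {
      "terms": { "field": "signature", "size": 10 }
    }
  }
}'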


On Wed, Mar 19, 2014 at 10:54 AM, Chris H  wrote:

> Hi, Joshua.
>
> I'm using a very similar technique.  Are you applying a mapping template,
> or using the default?  I'm using the default automatic templates, because
> frankly I don't fully understand templates.  What this means though is that
> my daily indexes are larger than the uncompressed alerts.log, between 2-4GB
> per day, and I'm rapidly running out of disk space.  I gather that this can
> be optimised by enabling compression and excluding the _source and _all
> fields through the mapping template, but I'm not sure exactly how this
> works.  Just wondered if you've come across the same problem.
>
> Thanks.
>
>
> On Saturday, March 8, 2014 10:02:35 PM UTC, Joshua Garnett wrote:
>>
>> All,
>>
>> I'll probably write a blog post on this, but I wanted to share some work
>> I've done today.  http://vichargrave.com/ossec-log-management-with-
>> elasticsearch/ shows how to use OSSEC's syslog output to route messages
>> to Elasticsearch.  The problem with this method is it uses UDP.  Even when
>> sending packets to a local process UDP by definition is unreliable.
>>  Garbage collections and other system events can cause packets to be lost.
>>  I've found it tends to cap out at around 1,500 messages per minute.
>>
>> To address this issue I've put together a logstash config that will read
>> the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving
>> the reliability issue, it also fixes issues with multi-lines being lost,
>> and adds geoip lookups for the src_ip.  I tested it against approximately
>> 1GB of alerts (3M events).
>>
>> input {
>>   file {
>> type => "ossec"
>> path => "/var/ossec/logs/alerts/alerts.log"
>> sincedb_path => "/opt/logstash/"
>> codec => multiline {
>>   pattern => "^\*\*"
>>   negate => true
>>   what => "previous"
>> }
>>   }
>> }
>>
>> filter {
>>   if [type] == "ossec" {
>> # Parse the header of the alert
>> grok {
>>   # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
>>   # (?m) fixes issues with multi-lines see https://logstash.jira.com/
>> browse/LOGSTASH-509
>>   match => ["message", "(?m)\*\* Alert 
>> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\-
>> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp}
>> \(%{DATA:reporting_host}\) 
>> %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule:
>> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\>
>> '%{DATA:signature}'\n%{GREEDYDATA

[ossec-list] Re: OSSEC & Logstash

2014-03-19 Thread Chris H
Hi, Joshua.  

I'm using a very similar technique.  Are you applying a mapping template, 
or using the default?  I'm using the default automatic templates, because 
frankly I don't fully understand templates.  What this means though is that 
my daily indexes are larger than the uncompressed alerts.log, between 2-4GB 
per day, and I'm rapidly running out of disk space.  I gather that this can 
be optimised by enabling compression and excluding the _source and _all 
fields through the mapping template, but I'm not sure exactly how this 
works.  Just wondered if you've come across the same problem.

Thanks.
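
For anyone in the same boat, a quick way to see where the space actually goes
(assuming Elasticsearch 1.0 or newer) is the cat-indices API, which lists the
document count and on-disk size of each index:

curl 'http://localhost:9200/_cat/indices?v'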

On Saturday, March 8, 2014 10:02:35 PM UTC, Joshua Garnett wrote:
>
> All,
>
> I'll probably write a blog post on this, but I wanted to share some work 
> I've done today.  
> http://vichargrave.com/ossec-log-management-with-elasticsearch/ shows how 
> to use OSSEC's syslog output to route messages to Elasticsearch.  The 
> problem with this method is it uses UDP.  Even when sending packets to a 
> local process UDP by definition is unreliable.  Garbage collections and 
> other system events can cause packets to be lost.  I've found it tends to 
> cap out at around 1,500 messages per minute. 
>
> To address this issue I've put together a logstash config that will read 
> the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving the 
> reliability issue, it also fixes issues with multi-lines being lost, and 
> adds geoip lookups for the src_ip.  I tested it against approximately 1GB 
> of alerts (3M events).
>
> input {
>   file {
> type => "ossec"
> path => "/var/ossec/logs/alerts/alerts.log"
> sincedb_path => "/opt/logstash/"
> codec => multiline {
>   pattern => "^\*\*"
>   negate => true
>   what => "previous"
> }
>   }
> }
>
> filter {
>   if [type] == "ossec" {
> # Parse the header of the alert
> grok {
>   # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
>   # (?m) fixes issues with multi-lines see 
> https://logstash.jira.com/browse/LOGSTASH-509
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> \(%{DATA:reporting_host}\) 
> %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>   
>   # Matches  2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
> }
>
> # Attempt to parse additional data from the alert
> grok {
>   match => ["remaining_message", "(?m)(Src IP: 
> %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP: 
> %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User: 
> %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
> }
>
> geoip {
>   source => "src_ip"
> }
>
> mutate {
>   convert  => [ "severity", "integer"]
>   replace  => [ "@message", "%{real_message}" ]
>   replace  => [ "@fields.hostname", "%{reporting_host}"]
>   add_field=> [ "@fields.product", "ossec"]
>   add_field=> [ "raw_message", "%{message}"]
>   add_field=> [ "ossec_server", "%{host}"]
>   remove_field => [ "type", "syslog_program", "syslog_timestamp", 
> "reporting_host", "message", "timestamp_seconds", "real_message", 
> "remaining_message", "path", "host", "tags"]
> }
>   }
> }
>
> output {
>elasticsearch {
>  host => "10.0.0.1"
>  cluster => "mycluster"
>}
> }
>
> Here are a few examples of the output this generates.
>
> {
>"@timestamp":"2014-03-08T20:34:08.847Z",
>"@version":"1",
>"ossec_group":"syslog,sshd,invalid_login,authentication_failed,",
>"reporting_ip":"10.1.2.3",
>"reporting_source":"/var/log/auth.log",
>"rule_number":"5710",
>"severity":5,
>"signature":"Attempt to login using a non-existent user",
>"src_ip":"112.65.211.164",
>"geoip":{
>   "ip":"112.65.211.164",
>   "country_code2":"CN",
>   "country_code3":"CHN",
>   "country_name":"China",
>   "continent_code":"AS",
>   "region_name":"23",
>   "city_name":"Shanghai",
>   "latitude":31.0456007,
>   "longitude":121.3997,
>   "timezone":"Asia/Shanghai",
>   "real_region_name":"Shanghai",
>   "location":[
>  121.3997,
>  31.0456007
>   ]
>},
>"@message":"Mar  8 01:00:59 someserver sshd[22874]: Invalid user oracle 
> from 112.65.211.164\n",
>"@fields.hostname":"someserver.somedomain.com",
>"@fields.product"

Re: [ossec-list] Re: OSSEC & Logstash

2014-03-09 Thread Michael Starks

On 03/09/2014 12:50 AM, Nick Turley wrote:

This is awesome. Thanks for posting. I recently updated our OSSEC
environment to utilize ElasticSearch/Logstash/Kibana. Everything has
been working great, but the one annoyance has been multi-line messages
being lost. I've considered switching over to monitoring alerts.log
directly, but haven't had time. I'll have to try out your config. :)

Nick


Joshua's work is very nice. Also, don't forget that alerts.log can be 
set to write in a non-multiline way: 
http://ossec-docs.readthedocs.org/en/latest/syntax/head_ossec_config.global.html




Re: [ossec-list] Re: OSSEC & Logstash

2014-03-09 Thread Jeremy Rossi
This is great.  We have started to add JSON and ZeroMQ output in git to make 
things like this even simpler.  I don't think the JSON format is perfect for 
logstash, but it might be worth checking out.  Also, please 
let us know if there are ways to make this even better.

Zeromq output:
http://ossec-docs.readthedocs.org/en/latest/syntax/head_ossec_config.global.html?highlight=zeromq#element-zeromq_output
 

Json format:
http://ossec-docs.readthedocs.org/en/latest/formats/json.html?highlight=json
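
If the JSON output lands in a file, the Logstash input side could shrink to
something like the sketch below. The path, and the assumption that alerts are
written as one JSON document per line, are based on the docs above rather than
a tested setup:

input {
  file {
    type  => "ossec-json"
    # assumed location of JSON-formatted alerts; adjust to wherever your
    # build of OSSEC actually writes them
    path  => "/var/ossec/logs/alerts/alerts.json"
    codec => "json"
  }
}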


Sent from my iPhone

> On Mar 9, 2014, at 7:33 AM, "Nick Turley"  wrote:
> 
> This is awesome. Thanks for posting. I recently updated our OSSEC environment 
> to utilize ElasticSearch/Logstash/Kibana. Everything has been working great, 
> but the one annoyance has been multi-line messages being lost. I've 
> considered switching over to monitoring alerts.log directly, but haven't had 
> time. I'll have to try out your config. :)
> 
> Nick
> 
>> On Saturday, March 8, 2014 2:02:35 PM UTC-8, Joshua Garnett wrote:
>> All,
>> 
>> I'll probably write a blog post on this, but I wanted to share some work 
>> I've done today.  
>> http://vichargrave.com/ossec-log-management-with-elasticsearch/ shows how to 
>> use OSSEC's syslog output to route messages to Elasticsearch.  The problem 
>> with this method is it uses UDP.  Even when sending packets to a local 
>> process UDP by definition is unreliable.  Garbage collections and other 
>> system events can cause packets to be lost.  I've found it tends to cap out 
>> at around 1,500 messages per minute. 
>> 
>> To address this issue I've put together a logstash config that will read the 
>> alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving the 
>> reliability issue, it also fixes issues with multi-lines being lost, and 
>> adds geoip lookups for the src_ip.  I tested it against approximately 1GB of 
>> alerts (3M events).
>> 
>> input {
>>   file {
>> type => "ossec"
>> path => "/var/ossec/logs/alerts/alerts.log"
>> sincedb_path => "/opt/logstash/"
>> codec => multiline {
>>   pattern => "^\*\*"
>>   negate => true
>>   what => "previous"
>> }
>>   }
>> }
>> 
>> filter {
>>   if [type] == "ossec" {
>> # Parse the header of the alert
>> grok {
>>   # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
>>   # (?m) fixes issues with multi-lines see 
>> https://logstash.jira.com/browse/LOGSTASH-509
>>   match => ["message", "(?m)\*\* Alert 
>> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
>> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
>> \(%{DATA:reporting_host}\) 
>> %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule: 
>> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
>> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>>   
>>   # Matches  2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
>>   match => ["message", "(?m)\*\* Alert 
>> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
>> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
>> %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule: 
>> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
>> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>> }
>> 
>> # Attempt to parse additional data from the alert
>> grok {
>>   match => ["remaining_message", "(?m)(Src IP: 
>> %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP: 
>> %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User: 
>> %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
>> }
>> 
>> geoip {
>>   source => "src_ip"
>> }
>> 
>> mutate {
>>   convert  => [ "severity", "integer"]
>>   replace  => [ "@message", "%{real_message}" ]
>>   replace  => [ "@fields.hostname", "%{reporting_host}"]
>>   add_field=> [ "@fields.product", "ossec"]
>>   add_field=> [ "raw_message", "%{message}"]
>>   add_field=> [ "ossec_server", "%{host}"]
>>   remove_field => [ "type", "syslog_program", "syslog_timestamp", 
>> "reporting_host", "message", "timestamp_seconds", "real_message", 
>> "remaining_message", "path", "host", "tags"]
>> }
>>   }
>> }
>> 
>> output {
>>elasticsearch {
>>  host => "10.0.0.1"
>>  cluster => "mycluster"
>>}
>> }
>> 
>> Here are a few examples of the output this generates.
>> 
>> {
>>"@timestamp":"2014-03-08T20:34:08.847Z",
>>"@version":"1",
>>"ossec_group":"syslog,sshd,invalid_login,authentication_failed,",
>>"reporting_ip":"10.1.2.3",
>>"reporting_source":"/var/log/auth.log",
>>"rule_number":"5710",
>>"severity":5,
>>"signature":"Attempt to login using a non-existent user",
>>"src_ip":"112.65.211.164",
>>"geoip":{
>>   "ip":"112.65.211.164",
>>   "country_code2":"CN",
>>   "country_code3":"CHN",
>>   "country_name":"China",
>>   "continent_code":

[ossec-list] Re: OSSEC & Logstash

2014-03-09 Thread Nick Turley
This is awesome. Thanks for posting. I recently updated our OSSEC 
environment to utilize ElasticSearch/Logstash/Kibana. Everything has been 
working great, but the one annoyance has been multi-line messages being 
lost. I've considered switching over to monitoring alerts.log directly, but 
haven't had time. I'll have to try out your config. :)

Nick

On Saturday, March 8, 2014 2:02:35 PM UTC-8, Joshua Garnett wrote:
>
> All,
>
> I'll probably write a blog post on this, but I wanted to share some work 
> I've done today.  
> http://vichargrave.com/ossec-log-management-with-elasticsearch/ shows how 
> to use OSSEC's syslog output to route messages to Elasticsearch.  The 
> problem with this method is it uses UDP.  Even when sending packets to a 
> local process UDP by definition is unreliable.  Garbage collections and 
> other system events can cause packets to be lost.  I've found it tends to 
> cap out at around 1,500 messages per minute. 
>
> To address this issue I've put together a logstash config that will read 
> the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving the 
> reliability issue, it also fixes issues with multi-lines being lost, and 
> adds geoip lookups for the src_ip.  I tested it against approximately 1GB 
> of alerts (3M events).
>
> input {
>   file {
> type => "ossec"
> path => "/var/ossec/logs/alerts/alerts.log"
> sincedb_path => "/opt/logstash/"
> codec => multiline {
>   pattern => "^\*\*"
>   negate => true
>   what => "previous"
> }
>   }
> }
>
> filter {
>   if [type] == "ossec" {
> # Parse the header of the alert
> grok {
>   # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
>   # (?m) fixes issues with multi-lines see 
> https://logstash.jira.com/browse/LOGSTASH-509
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> \(%{DATA:reporting_host}\) 
> %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
>   
>   # Matches  2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
>   match => ["message", "(?m)\*\* Alert 
> %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- 
> %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} 
> %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule: 
> %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> 
> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
> }
>
> # Attempt to parse additional data from the alert
> grok {
>   match => ["remaining_message", "(?m)(Src IP: 
> %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP: 
> %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User: 
> %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
> }
>
> geoip {
>   source => "src_ip"
> }
>
> mutate {
>   convert  => [ "severity", "integer"]
>   replace  => [ "@message", "%{real_message}" ]
>   replace  => [ "@fields.hostname", "%{reporting_host}"]
>   add_field=> [ "@fields.product", "ossec"]
>   add_field=> [ "raw_message", "%{message}"]
>   add_field=> [ "ossec_server", "%{host}"]
>   remove_field => [ "type", "syslog_program", "syslog_timestamp", 
> "reporting_host", "message", "timestamp_seconds", "real_message", 
> "remaining_message", "path", "host", "tags"]
> }
>   }
> }
>
> output {
>elasticsearch {
>  host => "10.0.0.1"
>  cluster => "mycluster"
>}
> }
>
> Here are a few examples of the output this generates.
>
> {
>"@timestamp":"2014-03-08T20:34:08.847Z",
>"@version":"1",
>"ossec_group":"syslog,sshd,invalid_login,authentication_failed,",
>"reporting_ip":"10.1.2.3",
>"reporting_source":"/var/log/auth.log",
>"rule_number":"5710",
>"severity":5,
>"signature":"Attempt to login using a non-existent user",
>"src_ip":"112.65.211.164",
>"geoip":{
>   "ip":"112.65.211.164",
>   "country_code2":"CN",
>   "country_code3":"CHN",
>   "country_name":"China",
>   "continent_code":"AS",
>   "region_name":"23",
>   "city_name":"Shanghai",
>   "latitude":31.0456007,
>   "longitude":121.3997,
>   "timezone":"Asia/Shanghai",
>   "real_region_name":"Shanghai",
>   "location":[
>  121.3997,
>  31.0456007
>   ]
>},
>"@message":"Mar  8 01:00:59 someserver sshd[22874]: Invalid user oracle 
> from 112.65.211.164\n",
>"@fields.hostname":"someserver.somedomain.com",
>"@fields.product":"ossec",
>"raw_message":"** Alert 1394240459.2305861: - 
> syslog,sshd,invalid_login,authentication_failed,\n2014 Mar 08 01:00:59 (
> someserver.somedomain.com) 10.1.2.3->/var/log/auth.log\nRule: 5710 (level 
> 5) -> 'Attempt to login using a non-existe