Syslog (via syslog-ng, which also writes it to a flat file) generates this 
line and sends it over UDP to Graylog, where I want to route it into a 
stream matching 'Completed rules on':

Feb  9 02:14:15 ip-10-100-2-136 
|vnn|INFO|quartzScheduler_Worker-11|services.com.viakoo.vnns.dbms.SiteStateQueueService|
 
Completed rules on site LaAvenida/77765889 company Acme Widget Co/50799808 
in 2.107sec. totaltime=5.382 sec
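For what it's worth, the stream rule and the perf_* field extraction can be sanity-checked offline. Here's a rough Python sketch -- the regex below is my own approximation of what the extractors do, not the actual Graylog extractor config:

```python
import re

# Sample message body as received by Graylog (syslog header stripped)
msg = ("Completed rules on site LaAvenida/77765889 company "
       "Acme Widget Co/50799808 in 2.107sec. totaltime=5.382 sec.")

# Stream rule: a simple substring match on 'Completed rules on'
assert re.search(r"Completed rules on", msg)

# Rough equivalent of the extractors that produce the perf_* fields
pattern = re.compile(
    r"site (?P<perf_sitename>\S+)/(?P<perf_siteid>\d+) "
    r"company (?P<perf_companyname>.+?)/(?P<perf_companyid>\d+) "
    r"in (?P<perf_ruletime>[\d.]+)sec\. "
    r"totaltime=(?P<perf_totaltime>[\d.]+) sec"
)
fields = pattern.search(msg).groupdict()
print(fields["perf_sitename"], fields["perf_ruletime"])  # LaAvenida 2.107
```

Both the rule and the extraction match this line, which is consistent with the routed message shown below.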


Okay, so here's the event from the Graylog log that's supposed to be going 
into a stream (I set an output to stdout to log the events). As you can 
see, the stream regexp matched the above:

2016-02-08T18:14:11.912-08:00 INFO  [LoggingOutput] Writing message:  
source: ip-10-2-4-21 | message: ip-10-2-4-21 
|vnn|INFO|quartzScheduler_Worker-1|services.com.viakoo.vnns.dbms.SiteStateQueueService|
 
Completed rules on site LaAvenida/77765889 company Acme Widget Co/50799808 
in 1.485sec. totaltime=4.965 sec. { api_thread: quartzScheduler_Worker-1 | 
perf_siteid: 77765889 | perf_companyname: Acme Widget Co | perf_totaltime: 
4.965 | perf_ruletime: 1.485 | gl2_source_node: 
7b0ccf7a-2d69-4217-b536-fcf19d574a42 | perf_sitename: LaAvenida | 
api_loglevel: INFO | gl2_remote_port: 37839 | gl2_remote_ip: 12.8.XXX.XXX | 
api_class: services.com.viakoo.vnns.dbms.SiteStateQueueService | timestamp: 
2016-02-09T02:13:32.000-08:00 | api_message: Completed rules on site 
LaAvenida/77765889 company Acme Widget Co/50799808 in 1.485sec. 
totaltime=4.965 sec. | api_timestamp: vnn | level: 6 | facility: user-level 
| _id: b6fb3a80-ced2-11e5-895e-525400e68097 | gl2_source_input: 
56afc86099323257c052444a | perf_companyid: 50799808 }


I look for the string "Completed Rules on site LaAvenida" in Kibana against 
the Elasticsearch cluster: "No results found".

Hmm. It says to widen the search. So I do -- widening it to the 
corresponding UTC time -- and I get:


Then I check with the Mongo database and validate that this is the correct 
stream ID that corresponds to the stream I set up in Graylog. Yes, it is. 

So I go into Graylog into the Streams view:

"Nothing found in stream SitesProcessedProduction"

Hmm, that's not so useful. 

I look at the Query, and specifically at the timestamps within it (I asked 
for two hours of data):

              "range": {
                "timestamp": {
                  "from": "2016-02-09 00:38:09.164",
                  "to": "2016-02-09 02:38:09.164",
                  "include_lower": true,
                  "include_upper": true
                }
              }


Huh. That should have matched. Still nothing. Then I go to the advanced 
settings and set it to search in the future:

              "range": {
                "timestamp": {
                  "from": "2016-02-08 20:40:32.818",
                  "to": "2016-02-09 11:00:00.000",
                  "include_lower": true,
                  "include_upper": true
                }
              }
            },
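That behavior is consistent with the timestamps being stored eight hours in the future. A quick sketch of the window arithmetic, using the bounds from the two queries above -- note the stored value of 10:14:15 is my assumption (the raw stored timestamp for this particular message isn't shown; the raw Elasticsearch query later in this post showed a different message stored at 10:24:32, eight hours ahead):

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
# Assumed stored timestamp: arrival at 02:14 UTC plus a spurious +8h shift
stored = datetime.strptime("2016-02-09 10:14:15.000", fmt)

# Original two-hour window from the first query
lo1 = datetime.strptime("2016-02-09 00:38:09.164", fmt)
hi1 = datetime.strptime("2016-02-09 02:38:09.164", fmt)

# Widened "search in the future" window from the second query
lo2 = datetime.strptime("2016-02-08 20:40:32.818", fmt)
hi2 = datetime.strptime("2016-02-09 11:00:00.000", fmt)

print(lo1 <= stored <= hi1)  # False: stored value is past the 2h window
print(lo2 <= stored <= hi2)  # True: only the future-reaching window hits
```

So a message that actually arrived inside the two-hour window is invisible to it, because its stored timestamp is eight hours ahead.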

And I get results:
February 9th 2016, 02:14:15.000
message: ip-10-100-2-136 |vnn|INFO|quartzScheduler_Worker-11|services.com.viakoo.vnns.dbms.SiteStateQueueService| Completed rules on site LaAvenida/77765889 company Acme Widget Co/50799808 in 2.107sec. totaltime=5.382 sec.
api_thread: quartzScheduler_Worker-11
perf_siteid: 77,765,889
perf_companyname: Acme Widget Co
perf_totaltime: 5.382
perf_ruletime: 2.107
gl2_source_node: 7b0ccf7a-2d69-4217-b536-fcf19d574a42
perf_sitename: LaAvenida
api_loglevel: INFO
gl2_remote_port: 37,839
gl2_remote_ip: 12.8.XXX.XXX
api_class: services.com.viakoo.vnns.dbms.SiteStateQueueService
timestamp: February 9th 2016, 02:14:15.000
api_message: Completed rules on site LaAvenida/77765889 company Acme Widget Co/50799808 in 2.107sec. totaltime=5.382 sec.
api_timestamp: vnn
level: 6
facility: user-level
_id: d0c87401-ced2-11e5-895e-525400e68097
source: ip-10-100-2-136
gl2_source_input: 56afc86099323257c052444a
perf_companyid: 50,799,808
streams: 56b8226199320b2eb5538266
_type: message
_index: graylog2_28

Yep, that's my message alright -- logged in UTC, like Elasticsearch is 
supposed to do. I expand the JSON view and pull the stream ID out of it:

"streams": [ "56b8226199320b2eb5538266" ]

And here's the message detail view in Graylog:

2016-02-09 02:14:12.000  ip-10-2-4-67
|vnn|INFO|quartzScheduler_Worker-12|services.com.viakoo.vnns.dbms.SiteStateQueueService| Completed rules on site LaAvenida/77765889 company Acme Widget Co/50799808 in 1.71sec. totaltime=2.156 sec.

ID: ced90010-ced2-11e5-895e-525400e68097
<https://graylog.engr.videonetworknavigator.com/messages/graylog2_28/ced90010-ced2-11e5-895e-525400e68097>
Received by: Syslog UDP on 7b0ccf7a / 12.8.243.244
<https://graylog.engr.videonetworknavigator.com/system/nodes/7b0ccf7a-2d69-4217-b536-fcf19d574a42>
Stored in index: graylog2_28
Routed into streams:
   - SitesProcessedProduction
<https://graylog.engr.videonetworknavigator.com/streams/56b81ff799320b2eb5537fc5/messages?q=*&rangetype=relative&relative=300>

api_class: services.com.viakoo.vnns.dbms.SiteStateQueueService
api_loglevel: INFO
api_message: Completed rules on site LaAvenida/77765889 company Acme Widget Co/50799808 in 1.71sec. totaltime=2.156 sec.
api_thread
And yep, there's my message, set way into the future.

Okay, so clearly something is *not* adjusting to localtime. I'm running 
three CentOS 6.7 servers with Elasticsearch; MongoDB and Graylog are set up 
on one of those servers. All three use NTP and are set to PST:

[root@elasticsearch0 elasticsearch]# date
Mon Feb  8 18:45:27 PST 2016
...
[root@elasticsearch1 elasticsearch]# date
Mon Feb  8 18:45:37 PST 2016
...
[root@elasticsearch2 elasticsearch]# date
Mon Feb  8 18:45:47 PST 2016



So the dates are all set correctly. Elasticsearch works in UTC. Let's do a 
raw Elasticsearch query via wget and see whether the data in Elasticsearch 
is properly stored in UTC:

   "timestamp": "2016-02-09 10:24:32.000",

Err, no. A message arriving around 18:24 PST should be stored as 02:24:32 
UTC; instead it's stamped 10:24:32 -- another eight hours ahead.
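The arithmetic points at a double conversion: the eight-hour PST-to-UTC offset appears to be applied twice. A minimal sketch of that hypothesis:

```python
from datetime import datetime, timedelta

PST_OFFSET = timedelta(hours=8)  # PST is UTC-8 (no DST in February)

local_pst = datetime(2016, 2, 8, 18, 24, 32)  # wall-clock arrival time
correct_utc = local_pst + PST_OFFSET          # what should be stored
double_utc = correct_utc + PST_OFFSET         # offset applied a second time

print(correct_utc)  # 2016-02-09 02:24:32
print(double_utc)   # 2016-02-09 10:24:32 -- matches the stored timestamp
```

One +8h shift lands on the correct UTC value; a second one lands exactly on the bogus stored timestamp.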

Okay, so clearly timezones are completely FUBAR -- I'm getting a *double* 
UTC correction. So now I'm baffled... am I simply going to have to set the 
local timezones of all these servers to UTC to make things work right?! So 
I reset localtime to UTC and restart everything -- elasticsearch, 
graylog-server, graylog-web, and syslog-ng (which is what's feeding data 
into graylog). I watch another message pop out, 'curl' it back out of 
Elasticsearch, and...

"timestamp":"2016-02-09 11:11:46.000"

*STILL* 16 hours ahead of PST wall-clock time, when UTC should only be 8 
hours ahead! (PST8PDT, yo).

Maybe it's my Java? Hmm...

# java -version
java version "1.7.0_95"
OpenJDK Runtime Environment (rhel-2.6.4.0.el6_7-x86_64 u95-b00)
OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)



Same one on all three servers.

I am completely bumfuddled. 

# rpm -q graylog-server
graylog-server-1.3.3-1.noarch
# rpm -q elasticsearch
elasticsearch-1.7.5-1.noarch



Looks like the latest release of both. Yet everything is still being 
logged 8 hours in the future, even though all three servers are now set to 
UTC. So WTF is going on? I've run out of things to try. Any ideas?

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/9100eb49-ed4e-44dc-9e9e-5a1b934e1487%40googlegroups.com.