Hi Manisha,

Graylog doesn't support Logstash's index naming scheme; it only reads from 
its own index set (graylog2_0 in your listing). If you want to query log 
messages via the Graylog web interface, you'll also have to ingest those 
messages with Graylog.

FWIW, Logstash comes with a GELF output plugin which you can use to send 
messages to Graylog: 
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-gelf.html
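
As a rough sketch (the host and port below are placeholders; point them at 
the GELF UDP input you already have running), the Logstash side would look 
something like this:

    output {
      gelf {
        host => "127.0.0.1"   # address of your Graylog node (placeholder)
        port => 12201         # port of your GELF UDP input (12201 is the usual default)
      }
    }

With that in place, messages flowing through Logstash are delivered to 
Graylog's GELF input and end up in Graylog's own indices, where the web 
interface can find them.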

Cheers,
Jochen


On Monday, 4 April 2016 13:11:21 UTC+2, Manisha Sharma wrote:
>
> I have installed Graylog2, Elasticsearch, Logstash and MongoDB. All of 
> them are running on the same machine. Log collection is working, which I 
> can verify by querying Elasticsearch, but the Graylog2 web interface 
> doesn't show any messages.
>
> Here is the web-interface log:
> 2016-04-04T00:49:49.931+05:30 - [INFO] - from play in Thread-4
> Shutdown application default Akka system.
>
> 2016-04-04T00:49:51.930+05:30 - [INFO] - from play in main
> Application started (Prod)
>
> 2016-04-04T00:49:51.972+05:30 - [INFO] - from play in main
> Listening for HTTP on /0:0:0:0:0:0:0:0:9000
>
> 2016-04-04T00:49:52.250+05:30 - [INFO] - from play in New I/O worker #10
> Starting application default Akka system
>
> Nothing unusual there. Here is the graylog2-server log:
>
> 2016-04-04T00:59:57.905+05:30 INFO [Periodicals] Starting 
> [org.graylog2.periodical.IndexRangesCleanupPeriodical] periodical in [15s], 
> polling every [3600s].
> 2016-04-04T00:59:57.974+05:30 INFO [PeriodicalsService] Not starting 
> [org.graylog2.periodical.UserPermissionMigrationPeriodical] periodical. Not 
> configured to run on this node.
> 2016-04-04T00:59:57.974+05:30 INFO [Periodicals] Starting 
> [org.graylog2.periodical.AlarmCallbacksMigrationPeriodical] periodical, 
> running forever.
> 2016-04-04T00:59:57.976+05:30 INFO [Periodicals] Starting 
> [org.graylog.plugins.usagestatistics.UsageStatsNodePeriodical] periodical 
> in [300s], polling every [21600s].
> 2016-04-04T00:59:57.976+05:30 INFO [Periodicals] Starting 
> [org.graylog.plugins.usagestatistics.UsageStatsClusterPeriodical] 
> periodical in [300s], polling every [21600s].
> 2016-04-04T00:59:58.064+05:30 INFO [transport] [graylog2-server] 
> bound_address {inet[/0:0:0:0:0:0:0:0:9350]}, publish_address {inet[/
> 135.249.20.115:9350]}
> 2016-04-04T00:59:58.080+05:30 INFO [discovery] [graylog2-server] 
> graylog2/1s3nMRl2Ra2KzP8XfXwNzQ
> 2016-04-04T01:00:01.081+05:30 WARN [discovery] [graylog2-server] waited 
> for 3s and no initial state was set by the discovery
> 2016-04-04T01:00:01.082+05:30 INFO [node] [graylog2-server] started
> 2016-04-04T01:00:01.143+05:30 INFO [service] [graylog2-server] 
> detected_master [Wonder 
> Man][FLmkoZ_VRB2IeOEdPORRtQ][localhost.localdomain][inet[/135.249.20.115:9300]],
>  
> added {[Wonder 
> Man][FLmkoZ_VRB2IeOEdPORRtQ][localhost.localdomain][inet[/135.249.20.115:9300]],},
>  
> reason: zen-disco-receive(from master [[Wonder 
> Man][FLmkoZ_VRB2IeOEdPORRtQ][localhost.localdomain][inet[/135.249.20.115:9300]]])
> 2016-04-04T01:00:02.160+05:30 INFO [RestApiService] Adding security 
> context factory:org.graylog2.security.ShiroSecurityContextFactory@670aab4b
> 2016-04-04T01:00:02.181+05:30 INFO [RestApiService] Started REST API at 
> http://127.0.0.1:12900/
> 2016-04-04T01:00:02.183+05:30 INFO [ServiceManagerListener] Services are 
> healthy
> 2016-04-04T01:00:02.183+05:30 INFO [InputSetupService] Triggering 
> launching persisted inputs, node transitioned from Uninitialized [LB:DEAD] 
> to Running [LB:ALIVE]
> 2016-04-04T01:00:02.184+05:30 INFO [ServerBootstrap] Services started, 
> startup times in ms: {OutputSetupService [RUNNING]=15, 
> MetricsReporterService [RUNNING]=17, BufferSynchronizerService 
> [RUNNING]=17, InputSetupService [RUNNING]=61, DashboardRegistryService 
> [RUNNING]=66, KafkaJournal [RUNNING]=72, JournalReader [RUNNING]=83, 
> PeriodicalsService [RUNNING]=229, IndexerSetupService [RUNNING]=3348, 
> RestApiService [RUNNING]=4429}
> 2016-04-04T01:00:02.188+05:30 INFO [ServerBootstrap] Graylog server up and 
> running.
> 2016-04-04T01:00:02.192+05:30 *ERROR [KafkaJournal] Read offset 26 before 
> start of log at 25039, starting to read from the beginning of the journal.*
> 2016-04-04T01:00:02.205+05:30 INFO [InputStateListener] Input [GELF 
> UDP/57014f05327c569aeaf85512] is now STARTING
> 2016-04-04T01:00:02.229+05:30 INFO [InputStateListener] Input [GELF 
> UDP/57014f05327c569aeaf85512] is now RUNNING
>
> Following are the indices:
> health status index pri rep docs.count docs.deleted store.size 
> pri.store.size
> green open logstash-2016.03.17 5 0 27 0 37.4kb 37.4kb
> green open graylog2_0 1 0 26 0 24.7kb 24.7kb
> green open logstash-2016.04.03 5 0 39375 0 8.7mb 8.7mb
>
> From Graylog I can only see the data present in graylog2_0; Graylog is 
> not able to show the logstash-2016.03.17 or logstash-2016.04.03 data. Why?
>
> How do I fix the ERROR in the graylog-server log? Is it the cause of the 
> problem, and what does it mean?
>
> Graylog 1.3
> Elasticsearch 1.7.5
> Logstash 2.2
>

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/e2920383-ca22-432f-8413-8c94dc617251%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
