Also possible; your choice, really. We preferred filebeat because it can be
configured to be a reliable source of log output (the consequence is that during
ELK unavailability (network issues, ELK problems, etc.) delivery can be delayed,
but you'll get everything). IIRC the logstash emitter is best-effort and will drop
messages in the above availability scenarios, though maybe that's configurable
now; I just haven't kept up with it.
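
For reference, a minimal filebeat.yml along these lines is sketched below
(filebeat 5.x syntax; the worker log path and the Logstash host are placeholders
you would adjust for your storm.log.dir layout and your ELK setup):

    filebeat.prospectors:
    - input_type: log
      paths:
        # Storm 1.x worker logs; adjust to your storm.log.dir layout
        - /var/log/storm/workers-artifacts/*/*/worker.log

    output.logstash:
      # placeholder host; point at your Logstash beats input
      # (a Graylog Beats input works the same way)
      hosts: ["logstash.example.com:5044"]

Filebeat records its read position in a local registry file, so when the ELK side
is unreachable it backs off, retries, and resumes where it left off; that's where
the "delayed but complete" behaviour comes from.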

Cody

From: Antoine Tran <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Friday, March 31, 2017 at 9:58 AM
To: "[email protected]" <[email protected]>
Subject: Re: Centralized logging for storm


How about modifying the worker.xml configuration so that we add an appender to
logstash/elasticsearch? No need to add filebeat if this is handled by Storm
itself.
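
If you go that route, the general shape in worker.xml (which is a log4j2 config)
would be something like the untested sketch below. The host and port are
placeholders for a Logstash TCP input, and JsonLayout assumes Jackson is on the
worker classpath (otherwise a PatternLayout plus grok on the Logstash side works
too):

    <Appenders>
        <!-- sketch: ship worker logs to a Logstash TCP input (placeholder host/port) -->
        <Socket name="logstash" host="logstash.example.com" port="5000" protocol="TCP">
            <!-- JSON events are easy for Logstash/Elasticsearch to index as fields -->
            <JsonLayout compact="true" eventEol="true"/>
        </Socket>
    </Appenders>
    <Loggers>
        <Root level="info">
            <!-- keep the existing file appender refs and add this one -->
            <AppenderRef ref="logstash"/>
        </Root>
    </Loggers>

Note this is the in-process approach Cody describes above: if the socket endpoint
is down, events can be dropped unless you put async buffering or a failover
appender in front of it.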

On 31/03/2017 16:44, Cody Lee wrote:
Ditto, filebeat + ELK works very well. You can even tokenize these logs
appropriately to get richer search/filtering.
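
On the tokenizing side, here is a hedged Logstash filter sketch, assuming the
stock worker.log line layout of timestamp, logger, thread, [LEVEL], message
(adjust the grok to whatever PatternLayout your worker.xml actually uses):

    filter {
      grok {
        # timestamp, abbreviated logger, thread, [LEVEL], message
        match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{NOTSPACE:logger} %{DATA:thread} \[%{LOGLEVEL:level}\] %{GREEDYDATA:msg}" }
      }
      date {
        # use the log's own timestamp rather than ingest time
        match => ["ts", "yyyy-MM-dd HH:mm:ss.SSS"]
      }
    }

With level, logger, and thread as separate fields you can filter a topology's
workers down to, say, only the ERRORs from one bolt in Kibana.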

Cody

From: Harsh Choudhary <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Friday, March 31, 2017 at 4:38 AM
To: "[email protected]" <[email protected]>
Subject: Re: Centralized logging for storm

Hi Shashank

What we do is install Filebeat on our Storm clusters; it ships the log file data
to our central log server, Graylog. This tool is great: you can see your logs as
one stream of messages, sorted by timestamp. One thing that really helps is that
you can also look up all the other logs near a given timestamp.


Cheers!

Harsh Choudhary

On Fri, Mar 31, 2017 at 1:16 PM, Shashank Prasad
<[email protected]> wrote:
Hi folks,

Storm is a great tool, but the logs are all over the place. As you increase your
workers, the number of log files increases as well, and there is no single file it
logs to.

This makes it very hard to troubleshoot since you have to tail multiple logs.

Ideally, I would like to ship all the logs for a topology to a centralized log
server, where I could use something like Kibana and filter the logs for whatever I
am searching for.

Does anyone have suggestions on how to achieve this, or a use case showing how you
are currently doing it?

Thanks a lot for your time!

-shashank




--

My THALES email is [email protected].

+33 (0)5 62 88 84 40

Thales Services, Toulouse, France
