How about modifying the worker.xml configuration, so that we add an appender to Logstash/Elasticsearch? No need to add Filebeat if this is handled by Storm itself.
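A minimal sketch of what that could look like in Storm's log4j2/worker.xml. This is an untested assumption, not a verified config: the Logstash host/port are placeholders, and the Socket appender and JsonLayout come from stock Log4j2.

```xml
<!-- Hypothetical addition to log4j2/worker.xml: send worker logs to Logstash over TCP. -->
<!-- Host, port, and layout are placeholders to adapt to your environment. -->
<Appenders>
    <Socket name="logstash" host="logstash.example.com" port="5000" protocol="TCP">
        <!-- JSON layout so Logstash/Elasticsearch can index fields directly -->
        <JsonLayout compact="true" eventEol="true"/>
    </Socket>
</Appenders>
<Loggers>
    <Root level="info">
        <AppenderRef ref="logstash"/>
    </Root>
</Loggers>
```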

On 31/03/2017 16:44, Cody Lee wrote:

Ditto, Filebeat + ELK works very well. You can even tokenize these logs appropriately for richer search and filtering.

Cody

*From: *Harsh Choudhary <[email protected]>
*Reply-To: *"[email protected]" <[email protected]>
*Date: *Friday, March 31, 2017 at 4:38 AM
*To: *"[email protected]" <[email protected]>
*Subject: *Re: Centralized logging for storm

Hi Shashank

What we do is, we have Filebeat installed on our Storm clusters, and it sends the log file data to our central log server, Graylog. This tool is great, and you can see your logs as one stream of messages, sorted by timestamp. One thing that really helps is that you can also look up all the other logs near a given timestamp.
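A minimal filebeat.yml sketch of that setup, under stated assumptions: the worker log path follows Storm's default workers-artifacts layout, and the Graylog host/port are placeholders (Graylog accepts Filebeat through its Beats input, which speaks the Logstash protocol).

```yaml
# Hypothetical filebeat.yml: ship Storm worker logs to a central Graylog server.
# The paths and the graylog.example.com host/port are placeholders to adapt.
filebeat.inputs:
  - type: log
    paths:
      - /opt/storm/logs/workers-artifacts/*/*/worker.log
    fields:
      service: storm

# Graylog's Beats input speaks the beats/Logstash protocol, so the
# logstash output works here as well.
output.logstash:
  hosts: ["graylog.example.com:5044"]
```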


/Cheers!/

*Harsh Choudhary*

On Fri, Mar 31, 2017 at 1:16 PM, Shashank Prasad <[email protected] <mailto:[email protected]>> wrote:

    Hi folks,

    Storm is a great tool, but the logs are all over the place. As you
    increase your workers, your log files increase as well, and there is
    no single file it logs to.

    This makes it very hard to troubleshoot, since you have to tail
    multiple logs.

    Ideally, I would like to ship all the logs for a topology to a
    centralized log server, where I could use something like Kibana and
    filter the logs for what I am searching for.

    Does anyone have any suggestions on how to achieve this, or a use
    case for how you are currently doing it?

    Thanks a lot for your time!

    -shashank


--
My THALES email is [email protected].
+33 (0)5 62 88 84 40
Thales Services, Toulouse, France
