This is a good question. Redundant log entries are a problem for us as well. I'm looking for a way to get rid of them.
I hope someone answers...

On Tuesday, March 15, 2016 at 11:32:20 AM UTC-4, [email protected] wrote:

> Hi!
>
> We are currently planning a large-scale Graylog setup, consisting of
> syslog-based shipping to dedicated Logstash forwarders (for preprocessing)
> and then transferring into Graylog. We now have some issues regarding the
> overall architecture for which I would appreciate your support:
>
> Due to high-availability requirements, each individual component is
> required to be redundant. Bottom up, this seems to be achievable with
> Elasticsearch clustering, a MongoDB replica set and multiple Graylog nodes.
> However, if we implement redundant syslog shipping, i.e. each log source
> sends its events to two distinct forwarders (via different network paths)
> and then into Graylog, we will most likely end up with duplicate log
> entries. What is an approach to avoid this? Is it possible to solve this
> in the message queue component?
>
> Thank you in advance for feedback.
>
> Best regards
> tokred

--
You received this message because you are subscribed to the Google Groups "Graylog Users" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/graylog2/2d805358-7ea2-4e20-b4fd-3666f5d183db%40googlegroups.com.
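One approach I've been considering (just a sketch, not something the thread confirms) is content-based deduplication: compute a stable fingerprint over the fields that identify an event, and drop any event whose fingerprint was already seen within a short time window. Logstash's `fingerprint` filter implements the hashing half of this idea; the toy version below shows the whole mechanism in Python. The field names (`host`, `timestamp`, `message`) and the TTL are illustrative assumptions, not a fixed schema:

```python
import hashlib
import time


def fingerprint(event: dict) -> str:
    """Stable hash over the fields that identify a log event.
    (The field names here are illustrative, not a fixed schema.)"""
    key = "|".join(str(event.get(f, "")) for f in ("host", "timestamp", "message"))
    return hashlib.sha1(key.encode("utf-8")).hexdigest()


class Deduplicator:
    """Drop events whose fingerprint was already seen within `ttl` seconds."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self.seen = {}  # fingerprint -> time it was last seen

    def accept(self, event: dict, now=None) -> bool:
        """Return True if the event is new, False if it is a duplicate
        (e.g. the same message arriving via the redundant network path)."""
        now = time.time() if now is None else now
        # Evict fingerprints older than the TTL so memory stays bounded.
        self.seen = {fp: t for fp, t in self.seen.items() if now - t < self.ttl}
        fp = fingerprint(event)
        if fp in self.seen:
            return False  # duplicate from the second forwarder
        self.seen[fp] = now
        return True
```

In a real deployment the same effect can be had without a separate dedup service: hash the event into an ID and write with that ID into Elasticsearch, so the second copy overwrites the first instead of creating a new document. Whether that is acceptable depends on how Graylog controls the document ID, which I don't know for sure.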
