Hi!

We are currently planning a large-scale Graylog setup, consisting of 
syslog-based shipping to dedicated Logstash forwarders (for preprocessing), 
which then forward into Graylog. We now have some questions regarding the 
overall architecture for which I would appreciate your support:

Due to high-availability requirements, each individual component must be 
redundant. From the bottom up, this seems achievable with Elasticsearch 
clustering, a MongoDB replica set, and multiple Graylog nodes. However, if 
we implement redundant syslog shipping, i.e. each log source sends its 
events to two distinct forwarders (via different network paths) and from 
there into Graylog, we will most likely end up with duplicate log entries. 
What is an approach to avoid this? Is it possible to solve this in the 
message queue component?
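For illustration, the rough idea we had in mind is to derive a deterministic 
fingerprint from each event, so that the same event arriving via both 
forwarders maps to the same ID and can be dropped (or idempotently 
overwritten) on the second arrival. A minimal sketch in Python, assuming 
events carry host/timestamp/message fields (field names are hypothetical):

```python
import hashlib

def event_fingerprint(event):
    # Build a deterministic ID from fields that both forwarders
    # see identically for the same source event.
    key = "|".join([event["host"], event["timestamp"], event["message"]])
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

def deduplicate(events, seen=None):
    # Drop any event whose fingerprint has already been observed.
    # 'seen' would need a TTL/eviction policy in a real deployment.
    if seen is None:
        seen = set()
    unique = []
    for event in events:
        fp = event_fingerprint(event)
        if fp not in seen:
            seen.add(fp)
            unique.append(event)
    return unique
```

Whether something like this belongs in the forwarders, the message queue, 
or the storage layer (e.g. using the fingerprint as the document ID so a 
duplicate write simply overwrites the first) is exactly what we are unsure 
about.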

Thank you in advance for feedback.

Best regards
tokred

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/38f3e106-78f6-4f17-b655-eb92cc69fce0%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.