>
> Due to high availability requirements, each individual component is 
> required to be redundant. From the bottom up, this seems achievable with 
> Elasticsearch clustering, a MongoDB replica set, and multiple Graylog nodes. 
> However, if we implement redundant syslog shipping, i.e. each log source 
> sends its events to two distinct forwarders (via different network paths) 
> and then into Graylog, we will most likely end up with duplicate log 
> entries. What is an approach to avoid this? Is it possible to solve this in 
> the message queue component?
>
A suggestion, based on what I did in my case: 
1. syslog messages go to a central log host via a load-balancer VIP (for 
archive purposes). 
2. from the central log host, Filebeat ships (via multiple inputs) to 
different Graylog servers.
3. Elasticsearch uses allocation zones set according to the 2 data centers 
(1 replica), so one DC can be shut down and Elasticsearch goes yellow but 
still accepts data.
4. MongoDB 2x2 (2 nodes in each DC)
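For step 2, a minimal Filebeat sketch might look like the following. Graylog's Beats input speaks the Logstash/Lumberjack protocol, so `output.logstash` is used; the hostnames, ports, and log paths are placeholders for your own setup:

```yaml
# filebeat.yml on the central log host (hostnames/paths are assumptions)
filebeat.inputs:
  - type: filestream
    id: syslog-archive
    paths:
      - /var/log/central/*.log

# Ship to Beats inputs on two Graylog nodes. With loadbalance enabled,
# events are distributed across both hosts and delivery fails over to
# the remaining node if one goes down.
output.logstash:
  hosts: ["graylog1.example.com:5044", "graylog2.example.com:5044"]
  loadbalance: true
```

Because a single Filebeat instance ships each event exactly once (with at-least-once delivery semantics), this also sidesteps the duplicate-entry problem from the original question: redundancy lives in the VIP and the multiple Graylog outputs, not in sending every event twice.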
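For step 3, the Elasticsearch zone setup can be sketched with shard allocation awareness. The attribute name `zone` and the values `dc1`/`dc2` are arbitrary labels, not anything Graylog requires:

```yaml
# elasticsearch.yml on each node in data center 1
node.attr.zone: dc1
# (nodes in data center 2 would set: node.attr.zone: dc2)

# On all master-eligible nodes: allocate primaries and replicas
# in different zones, so each DC holds a full copy of the data
cluster.routing.allocation.awareness.attributes: zone
```

With 1 replica per index, losing an entire DC leaves the cluster yellow (replicas unassigned) but with all primaries available, so it continues to accept writes, as described above.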

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/817b67a4-890d-4e63-af86-984b0c968db9%40googlegroups.com.