Hi, 
I have the following problem: our application publishes content to an 
Elasticsearch cluster. We use local data-less (client) nodes for querying 
Elasticsearch, so we don't go through the HTTP REST API, and the local 
nodes act as the load balancer. Now we have a new requirement: the cluster 
must be replicated to another data center as well (and maybe more in the 
future...) for resilience.
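For reference, the local client nodes we mention are just nodes started with 
data and master roles disabled; a minimal sketch of the relevant 
elasticsearch.yml settings (setting names as in the 1.x docs):

```yaml
# elasticsearch.yml for a local data-less client node: it joins the
# cluster and routes / scatter-gathers requests, but holds no shards
# and can never be elected master.
node.data: false
node.master: false
```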

At the very beginning we thought of having one large cluster that spans 
both data centers (crazy). This solution has the following problems:

- The cluster is exposed to the split-brain problem (!)
- The client data-less nodes will route requests across data 
centers (is there a solution to this???). I can't find a way to avoid this. 
We don't want this to happen because of a) latency and b) firewalling 
issues.
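On the split-brain point, for what it's worth, the usual mitigation I've seen 
is to require a quorum of master-eligible nodes via zen discovery; a sketch, 
assuming three master-eligible nodes (quorum = masterEligible / 2 + 1):

```yaml
# With 3 master-eligible nodes, require a quorum of 2 before a master
# can be elected; a minority partition then refuses to form a cluster
# instead of splitting the brain. Note this does not help with a
# symmetric 50/50 split between two data centers.
discovery.zen.minimum_master_nodes: 2
```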

So we started to think that this solution is not really viable, and we 
considered one cluster per data center instead, which seems more sensible. 
But then we have the problem that we must publish data to all clusters 
and, if one publish fails, we have no means of rolling back (unless we set 
up a complicated version-based rollback system). I find this very 
complicated and hard to maintain, although it may be somewhat doable. 
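To make the "publish to all clusters, roll back on failure" idea concrete, 
here is a minimal sketch with the clusters stubbed as in-memory dicts (a 
real version would issue index/delete requests through an Elasticsearch 
client; the names and the simulated outage flag are just illustrations):

```python
# Dual-write with rollback: write a document to every cluster, and if
# any write fails, restore the previous value (or absence) on the
# clusters that were already written, so all data centers stay in the
# same state.

class PublishError(Exception):
    pass

def publish_all(clusters, doc_id, doc, fail_on=None):
    """Write doc to every cluster; roll back on any failure."""
    previous = []  # (cluster, old value or None) for rollback
    for name, store in clusters.items():
        if fail_on == name:  # simulated outage of one data center
            # undo the writes on the clusters we already touched
            for touched, old in previous:
                if old is None:
                    touched.pop(doc_id, None)
                else:
                    touched[doc_id] = old
            raise PublishError("publish to %s failed, rolled back" % name)
        previous.append((store, store.get(doc_id)))
        store[doc_id] = doc

clusters = {"dc1": {}, "dc2": {}}
publish_all(clusters, "article-1", {"title": "v1"})
try:
    publish_all(clusters, "article-1", {"title": "v2"}, fail_on="dc2")
except PublishError:
    pass
# both data centers are back in the same state (v1)
assert clusters["dc1"] == clusters["dc2"] == {"article-1": {"title": "v1"}}
```

As far as I understand, in real Elasticsearch this maps reasonably well to 
indexing with external version numbers (version_type=external), so that 
replays and rollbacks stay idempotent.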

My biggest problem is that we have to keep the data centers in the same 
state at all times, so that if one goes down, we can readily switch to the 
other.

Any ideas, or can you recommend some support to help us deal with this?

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/5424a274-3f6b-4c12-9fe6-621e04f87a8d%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.