Robert,

Sorry for the confusion. I jumped to the conclusion that you'd want to treat 
the four nodes as a single logical node, which is what those projects provide 
beyond just sharding and partitioning.

On Aug 19, 2011, at 1:21 PM, Robert Elliot wrote:

> Thanks; we're aware of both solutions, but as we do not currently need to 
> partition/shard, we are interested in exploring what CouchDB can do by 
> itself, since it supports bi-directional replication.
> 
> Is the answer "the CouchDB developers recommend not running CouchDB in a 
> multi-master replicated topology"?
> 
> On 19 Aug 2011, at 18:26, Paul Davis wrote:
> 
>> Robert,
>> 
>> There are two projects for running CouchDB in a cluster:
>> CouchDB-Lounge [1] and BigCouch [2]. CouchDB-Lounge is a combination of an
>> nginx module and a Python proxy that manages multiple backend
>> CouchDB nodes. BigCouch is a pure Erlang implementation that works
>> more like Dynamo.
>> 
>> [1] http://tilgovi.github.com/couchdb-lounge/
>> [2] http://github.com/cloudant/bigcouch
>> 
>> On Fri, Aug 19, 2011 at 8:32 AM, Robert Elliot <[email protected]> wrote:
>>> Hi,
>>> 
>>> We are considering setting up a cluster with more than two nodes. There is 
>>> a lot of documentation about two-node setups, but we couldn't find a 
>>> definitive answer for a larger cluster of, say, four nodes.
>>> 
>>> Would you recommend a multi-master setup where all nodes receive writes?  
>>> This would be simpler to set up and administer, and would also be the most 
>>> fault tolerant (any combination of nodes can be shut down so long as one is 
>>> still active).
>>> 
>>> If so, should we use only push replication?  Or only pull replication?  Or 
>>> a combination of both?
>>> 
>>> Assuming we use pull replication across four nodes A, B, C and D: should 
>>> we set up A to pull changes from B, C and D; B to pull from A, C and D; 
>>> C to pull from A, B and D; and D to pull from A, B and C? Is this the 
>>> recommended approach?
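>>> 
>>> To make that concrete, here is a rough sketch of what we have in mind: a 
>>> small script that posts a continuous pull replication document to each 
>>> node's _replicator database for every other node (the hostnames and the 
>>> database name below are made up for illustration):
>>> 
>>> import json
>>> import urllib.request
>>> 
>>> # Hypothetical node URLs; in practice these would be our four servers.
>>> NODES = {
>>>     "a": "http://couch-a.example.com:5984",
>>>     "b": "http://couch-b.example.com:5984",
>>>     "c": "http://couch-c.example.com:5984",
>>>     "d": "http://couch-d.example.com:5984",
>>> }
>>> DATABASES = ["mydb"]  # hypothetical database name
>>> 
>>> def create_pull(local, remote, db):
>>>     # Persistent continuous pull replication: the local node pulls
>>>     # changes for `db` from the remote node.
>>>     doc = {
>>>         "source": "%s/%s" % (remote, db),
>>>         "target": db,
>>>         "continuous": True,
>>>     }
>>>     req = urllib.request.Request(
>>>         "%s/_replicator" % local,
>>>         data=json.dumps(doc).encode("utf-8"),
>>>         headers={"Content-Type": "application/json"},
>>>         method="POST",
>>>     )
>>>     with urllib.request.urlopen(req) as resp:
>>>         print(resp.status, resp.read().decode())
>>> 
>>> for name, local in NODES.items():
>>>     for other, remote in NODES.items():
>>>         if other != name:
>>>             for db in DATABASES:
>>>                 create_pull(local, remote, db)
>>> 
>>> With four nodes that works out to twelve replication documents per 
>>> database (three pulls on each node).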
>>> 
>>> Thanks for any guidance,
>>> 
>>> Rob
>>> 
> 
