On Oct 26, 2009, at 10:45 AM, Miles Fidelman wrote:
Chris Anderson wrote:
If you do a hub-and-spoke or ring topology you can simplify the
replication problem a bit. But a gossip protocol is more resistant to
down nodes. I'd like to see a replication bus in the open source
project.
However, continuous replication will pretty much work.
The environment we're looking at is more of a mesh where
connectivity is coming up and down - think mobile ad hoc networks.
I like the idea of a replication bus, perhaps using something like
Spread (http://www.spread.org/) or Spines (www.spines.org) as a
multicast fabric.
I'm thinking of something like continuous replication, but where
updates are pushed to a multicast port rather than to a specific
node, with each node subscribing to update feeds.
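The per-update payload such a push might carry can be sketched as a small JSON datagram holding a sequence number, document id, and revision. This is a toy illustration, not CouchDB's actual wire format: the field names and the multicast address in the comment are assumptions, loosely modeled on entries in CouchDB's _changes feed.

```python
import json

# Hypothetical wire format for one replication-bus update: the
# publisher's update sequence number, the document id, and the
# current revision, serialized as a compact JSON datagram.
def encode_change(seq, doc_id, rev):
    return json.dumps({"seq": seq, "id": doc_id, "rev": rev}).encode("utf-8")

def decode_change(datagram):
    return json.loads(datagram.decode("utf-8"))

# In a real deployment the bytes would go to Spread/Spines or an IP
# multicast socket, e.g. sock.sendto(msg, ("239.0.0.1", 4321)).
# Here we just round-trip one update locally.
msg = encode_change(42, "doc-abc", "2-9f3a")
print(decode_change(msg))
```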
Anybody have any thoughts on how that would play with the current
replication and conflict resolution schemes?
Miles Fidelman
Hi Miles, this sounds like really cool stuff. Caveat: I have no
experience using Spread/Spines and very little experience with IP
multicasting, which I guess is what those tools try to reproduce in
internet-like environments. So bear with me if I ask stupid questions.
1) Would the CouchDB servers be responsible for error detection and
correction? I imagine that complicates matters considerably, but it
wouldn't be impossible.
2) When these CouchDB servers drop off for an extended period and then
rejoin, how do they subscribe to the update feed from the replication
bus at a particular sequence? This is really the key element of the
setup. When I think of multicasting I think of video feeds and such,
where if you drop off and rejoin you don't care about the old stuff
you missed. That's not the case here. Does the bus store all this
old feed data?
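One way the bus could answer this: keep a replay buffer indexed by sequence number, so a rejoining node asks for everything since the last sequence it saw, mirroring CouchDB's own _changes?since=N semantics. A toy in-memory sketch (the class and method names are made up for illustration):

```python
class ReplayBus:
    """Toy replication bus that buffers updates by sequence number
    so late joiners can catch up, like CouchDB's _changes?since=N."""
    def __init__(self):
        self.seq = 0
        self.log = []  # list of (seq, change) pairs, oldest first

    def publish(self, change):
        self.seq += 1
        self.log.append((self.seq, change))
        return self.seq

    def since(self, last_seen):
        # Everything a subscriber missed while it was offline.
        return [(s, c) for (s, c) in self.log if s > last_seen]

bus = ReplayBus()
bus.publish({"id": "a", "rev": "1-x"})
bus.publish({"id": "b", "rev": "1-y"})
bus.publish({"id": "a", "rev": "2-z"})
print(bus.since(1))  # a node that last saw seq 1 gets seqs 2 and 3
```

An unbounded log obviously doesn't scale; a real bus would truncate it and fall back to a full point-to-point replication for nodes that have been away too long.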
3) Which steps of the replication do you envision using the
replication bus? Just the _changes feed (essentially a list of
docid:rev pairs) or the actual documents themselves?
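To illustrate the distinction: if only docid:rev pairs travel over the bus, each subscriber still has to work out which revisions it lacks and fetch those documents separately. A rough sketch of that filtering step (a simplified stand-in, not CouchDB's actual revision-tree or _revs_diff logic):

```python
def missing_revs(local_revs, changes):
    """Given a map of docid -> set of locally known revs and a list of
    (docid, rev) pairs from the bus, return the pairs still to fetch."""
    return [(doc_id, rev) for (doc_id, rev) in changes
            if rev not in local_revs.get(doc_id, set())]

local = {"a": {"1-x"}, "b": {"1-y"}}
feed = [("a", "1-x"), ("a", "2-z"), ("c", "1-q")]
print(missing_revs(local, feed))  # -> [('a', '2-z'), ('c', '1-q')]
```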
The conflict resolution model shouldn't care about whether replication
is p2p or uses this bus.
Best,
Adam