One common confusion is that ZK is NOT a distributed hash table.  It is a
replicated hash table with ordered updates.  All ZK servers have all of the
data in memory, and a majority of servers will have written any confirmed
update to disk.
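
For concreteness, here is a minimal client-side sketch (the host names and
the /demo path are made up for illustration, not from this thread): the
client can connect to any server in the ensemble and see the same
replicated data tree.

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class ReplicatedRead {
        public static void main(String[] args) throws Exception {
            // Connect to any server in the ensemble; every server holds
            // the full data tree in memory, so reads are served locally.
            ZooKeeper zk = new ZooKeeper(
                "serverA:2181,serverB:2181,serverC:2181", 15000, event -> {});

            zk.create("/demo", "value".getBytes(),
                      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

            // Whichever server this session happens to be connected to
            // returns the value once the update has been committed.
            System.out.println(new String(zk.getData("/demo", false, null)));
            zk.close();
        }
    }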

The ordering of updates means that all servers are severely bounded as to
the number of possible states that they can be in.  You cannot have a
situation where two updates A and then B have been committed and one server
has A but not B while another has B but not A.  The only possible states
are no updates, A only, or both A and B.  Eventually, all live servers will
get all updates in exactly the correct order.
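
Here is a toy sketch of that invariant (illustrative Java, not ZooKeeper
internals): every replica applies the committed log strictly in order, so
a replica's state is always some prefix of the log.

    import java.util.AbstractMap.SimpleEntry;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Toy model: replicas apply a totally ordered log of key/value puts.
    class Replica {
        private final Map<String, String> table = new HashMap<>();
        private int applied = 0; // number of log entries applied so far

        // Apply committed entries strictly in log order.  A replica may
        // lag behind (a shorter prefix), but it can never hold B without A.
        void catchUp(List<SimpleEntry<String, String>> log) {
            while (applied < log.size()) {
                SimpleEntry<String, String> op = log.get(applied++);
                table.put(op.getKey(), op.getValue());
            }
        }
    }

    class Demo {
        public static void main(String[] args) {
            List<SimpleEntry<String, String>> log = List.of(
                new SimpleEntry<>("A", "1"),   // update A committed first
                new SimpleEntry<>("B", "2"));  // then update B
            Replica r = new Replica();
            r.catchUp(log.subList(0, 1));  // legal intermediate state: A only
            r.catchUp(log);                // final state: both A and B
        }
    }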

On Mon, Mar 15, 2010 at 6:19 PM, Maxime Caron <maxime.ca...@gmail.com> wrote:

> Thanks a lot, it's much clearer now.
>
> When I say "more replicas" I don't mean the number of nodes but the
> number of copies of an item value.
> This was my misunderstanding, because in Scalaris the item value is
> replicated when nodes join and leave the DHT.
>
> So this is all about the "operation log": if a node is in the minority
> but has a more recent committed value, this node has a veto over the
> other nodes.  This is where ZooKeeper differs from Scalaris, because
> Scalaris doesn't have an "operation log".
>
> So if I understood well, ZooKeeper has a better consistency model at the
> price of not being built on a DHT.  I wonder if the two can be mixed to
> get the advantages of both.
>
>
> On 15 March 2010 20:56, Henry Robinson <he...@cloudera.com> wrote:
>
> > Hi Maxime -
> >
> > I'm not very familiar with Scalaris, but I can answer for the ZooKeeper
> > side of things.
> >
> > ZooKeeper servers log each operation to a persistent store before they
> > vote on the outcome of that operation. So if a vote passes, we know
> > that a majority of servers has written that operation to disk. Then, if
> > a node fails and restarts, it can read all the committed operations
> > from disk. As long as a majority of nodes is still working, at least
> > one of them will have seen all the committed operations.
> >
> > If we didn't do this, the loss of a majority of servers (even if they
> > restarted) could mean that updates are lost. But ZooKeeper is meant to
> > be durable - once a write is made, it will persist for the lifetime of
> > the system if it is not overwritten later. So in order to properly
> > tolerate crash failures and not lose any updates, you have to make sure
> > a majority of servers write to disk.
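
A minimal sketch of the "log before you acknowledge" rule Henry describes
(the class and method names here are illustrative, not ZooKeeper's actual
internals): a follower forces each proposal to disk before acknowledging
it, so any operation the leader commits is durable on a majority of
servers.

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    // Illustrative follower: append each proposal to a transaction log
    // and fsync BEFORE acknowledging, so a committed operation survives
    // a crash-and-restart of this server.
    class Follower {
        private final FileOutputStream txnLog;

        Follower(String logPath) throws IOException {
            this.txnLog = new FileOutputStream(logPath, true); // append mode
        }

        // Called when the leader proposes an operation.  Returns true
        // once the proposal is safely on disk; only then may we ACK.
        boolean onProposal(long zxid, String op) throws IOException {
            txnLog.write((zxid + " " + op + "\n")
                    .getBytes(StandardCharsets.UTF_8));
            txnLog.getFD().sync(); // force to disk before the ACK
            return true;
        }
    }

On restart, the server replays this log, which is why a recovered
ZooKeeper node does not come back empty the way a fresh Scalaris node
does.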
> >
> > There is no possibility of more replicas being in the system than are
> > allowed - you start off with a fixed number, and never go above it.
> >
> > Hope this helps - let me know if you have any further questions!
> >
> > Henry
> >
> > --
> > Henry Robinson
> > Software Engineer
> > Cloudera
> > 415-994-6679
> >
> > On 15 March 2010 16:47, Maxime Caron <maxime.ca...@gmail.com> wrote:
> >
> > > Hi everybody,
> > >
> > > From what I understand, ZooKeeper's consistency model works the same
> > > way as Scalaris's does, which is to keep the majority of the replicas
> > > for an item up.
> > >
> > > In Scalaris, if a single failed node does crash and recover, it
> > > simply starts like a fresh new node and all data is lost.
> > >
> > > This is the case because it may otherwise introduce inconsistencies,
> > > as another node has already taken over.
> > >
> > > For a short timeframe there might be more replicas in the system than
> > > allowed, which destroys the proper functioning of our majority-based
> > > algorithms.
> > >
> > > So my question is: how does ZooKeeper use persistent storage during
> > > node recovery, and how are its majority-based algorithms different so
> > > that consistency is preserved?
> > >
> > >
> > > Thanks a lot
> > >
> > > Maxime Caron
> > >
> >
>
