one more point on this .. if i only start a cluster with 2 nodes, using the same config setup (RF=2, etc), it works fine. it's only when i start with 3 nodes and remove 1. in fact, i remove the node before i do any reads or writes at all, on a completely fresh database.
i guess i don't really understand why it shouldn't work. thx for humoring me

On Thu, 2009-11-19 at 19:19 -0600, Jonathan Ellis wrote:
> Oh, sorry -- I didn't read your description closely at first. Yes,
> with RF=2, quorum (RF/2 + 1) is still 2. That is why 3 is really the
> minimum RF for quorum to be useful.
>
> On Thu, Nov 19, 2009 at 6:46 PM, B. Todd Burruss <[email protected]> wrote:
> > still happens. i just downloaded 882359 from subversion.
> >
> > thx!
> >
> > On Thu, 2009-11-19 at 16:58 -0600, Jonathan Ellis wrote:
> >> This sounds like a bug that was fixed in trunk. Could you retest with
> >> that code?
> >>
> >> -Jonathan
> >>
> >> On Thu, Nov 19, 2009 at 4:49 PM, B. Todd Burruss <[email protected]> wrote:
> >> > I'm doing some testing with build 881977. I have setup a 3 node cluster
> >> > with ReplicationFactor = 2, reads using ConsistencyLevel.ONE, and writes
> >> > using ConsistencyLevel.QUORUM. If I take one node out of the cluster,
> >> > then do some writes I would expect that all the writes would succeed
> >> > because I still have 2 nodes in the cluster, so I have a quorum. My
> >> > guess is that since one of the nodes the quorum write "should" use is
> >> > down, the write fails.
> >> >
> >> > Is this the expected behavior? Is hinted handoff in the future?
> >> >
> >> > If not, this seems to make quorum kinda useless since some writes will
> >> > fail if ANY node becomes unavailable for whatever reason.
> >> >
> >> > thx!
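
[Editor's note: for readers skimming the thread, here is a minimal standalone sketch (plain Java, not Cassandra code) of the quorum arithmetic Jonathan cites: quorum = RF/2 + 1, so with RF=2 a quorum write still needs both replicas, which is why removing either replica node makes the write fail, while RF=3 tolerates one down replica.]

    // Illustrative sketch of quorum = RF/2 + 1 and how many replicas can be
    // down while quorum is still reachable. Not part of Cassandra itself.
    public class QuorumMath {
        static int quorum(int replicationFactor) {
            // integer division, matching "RF/2 + 1" as stated in the thread
            return replicationFactor / 2 + 1;
        }

        public static void main(String[] args) {
            for (int rf = 1; rf <= 5; rf++) {
                int q = quorum(rf);
                int tolerated = rf - q; // replicas that may be unavailable
                System.out.printf("RF=%d -> quorum=%d, tolerates %d down replica(s)%n",
                                  rf, q, tolerated);
            }
        }
    }

[Running this prints RF=2 -> quorum=2, tolerates 0 down replica(s), and RF=3 -> quorum=2, tolerates 1 down replica(s), which matches the advice that RF=3 is the practical minimum for QUORUM to survive a single node outage.]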
