So would it make more sense to set N=2? I'd like to be on the safe side:
after reading "There are no guarantees that the three replicas will go to
three separate physical nodes" in the replication docs, I started to worry
about the reliability of a two-node N=2 setup.
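
For completeness, here is the minimal sketch I'd use to pin the bucket's N
down to 2 — assuming Riak's default HTTP port 8098, the requests library,
and a hypothetical bucket name:

    # Sketch: set n_val=2 through Riak's HTTP bucket-properties API.
    # "mybucket" and localhost:8098 are assumptions, not our real setup.
    import json
    import requests

    resp = requests.put(
        "http://localhost:8098/buckets/mybucket/props",
        data=json.dumps({"props": {"n_val": 2}}),
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()  # expect 204 No Content on success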

On Fri, Jun 28, 2013 at 11:06 AM, Justin Sheehy <[email protected]> wrote:

> Hi, Louis-Philippe.
>
> With a 2-node cluster and N=3, each value will be written to disk a total
> of three times: twice on one node, once on the other. (The W setting has no
> effect on the number of copies made or hosts used.) That behavior might
> seem a bit strange, but it's a strange configuration to run Riak on only
> two machines while asking it to store data on three of them.
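>
> To make the placement concrete, here is a toy model (my hypothetical
> sketch, not Riak's actual claim algorithm): replicas go to n_val
> consecutive partitions on the ring, and with only two nodes claiming
> partitions alternately, one node necessarily appears twice in any
> 3-entry preference list.
>
>     # Toy ring: 8 partitions claimed alternately by two physical nodes.
>     ring = ["node1", "node2"] * 4
>     n_val = 3
>
>     # A key hashes to a starting partition; its replicas land on the
>     # next n_val partitions around the ring.
>     start = 5  # pretend the key hashed here
>     preflist = [ring[(start + i) % len(ring)] for i in range(n_val)]
>     print(preflist)  # ['node2', 'node1', 'node2'] -> two copies on node2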
>
> The standard settings and behavior of Riak are generally optimized for
> non-tiny clusters, and make much more sense when there are at least five
> machines.
>
> I hope this helps with your understanding.
>
> -Justin
>
>
>
> On Jun 28, 2013, at 10:54 AM, Louis-Philippe Perron wrote:
>
> > So if I get you right and extrapolate from the replication documentation
> > page, can I say that on a 2-node cluster, with a bucket set to N=3 and
> > W=ALL, my writes would be written three times to disk (with no guarantee
> > of landing on different nodes)?
> >
> > thanks!
> >
> > On Wed, Jun 26, 2013 at 8:17 PM, Mark Phillips <[email protected]> wrote:
> > Hi Louis-Philippe
> >
> > There are no dumb questions. :)
> >
> > On Wednesday, June 26, 2013, Louis-Philippe Perron wrote:
> > Hi Riak people!
> > Here is a dumb question, but I want to clear this doubt up anyway:
> >
> > What happens when a bucket has a W quorum value higher than the number
> > of nodes, N? Are writes to disk multiplied?
> >
> >
> > Precisely. For example, if you run a one-node Riak cluster on your dev
> > machine, you'll be writing with an N val of 3 and a W of 2 by default. In
> > other words, Riak will always attempt to satisfy the W value regardless
> > of physical node count.
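> >
> > A quick sketch of that accounting (hypothetical names, not Riak's
> > internals): the coordinator simply counts vnode acks toward W and never
> > checks whether the acking vnodes live on distinct hosts.
> >
> >     def write_ok(preflist, w):
> >         """preflist: [(partition, host), ...] of length N."""
> >         acks = 0
> >         for partition, host in preflist:
> >             acks += 1  # each vnode ack is one copy written to disk
> >             if acks >= w:
> >                 return True  # W met, even if every host is the same
> >         return False
> >
> >     # One-node dev cluster, N=3: all three vnodes share one host.
> >     print(write_ok([(0, "dev1"), (1, "dev1"), (2, "dev1")], w=2))  # True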
> >
> > Hope that helps.
> >
> > Mark
> > twitter.com/pharkmillups
> >
> > thanks!
> >
>
>
_______________________________________________
riak-users mailing list
[email protected]
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
