"Lazy writing of the data is not really an option for what we need."

Then I would say dw should equal n (dw=3 in your case), which would ensure that 
all data is on disk before the write is acknowledged. Try rerunning your tests 
with that configuration. Expect it to cost some performance, though.
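As a rough sketch of the arithmetic involved (assuming n_val=3 and Riak's 
"quorum" defaults; the helper name here is mine, not a Riak API):

```python
# Sketch of the replica-count arithmetic discussed above.
# n_val is Riak's per-bucket replication factor (3 in this case).
def quorum(n_val):
    # Riak's "quorum" default for r, w, and dw: a majority of replicas.
    return n_val // 2 + 1

n_val = 3
print(quorum(n_val))  # 2 -> the quorum default when n_val is 3
print(n_val)          # 3 -> setting dw = n_val requires every replica on disk
```

If I recall correctly, the HTTP interface lets you pass these per request as 
query parameters on a PUT (e.g. ?w=3&dw=3), so you can test without changing 
bucket defaults.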

-Alexander Sicular

@siculars

On Sep 14, 2012, at 3:07 PM, Pat Lynch wrote:

> Hey Alexander,
> thanks for the response.  Yup - I'm a Riak newbie for sure :).  I have 
> replication set to 3 on the cluster.  So, if dw=2, is it durable or not, 
> even if the durability is only across vnodes?  And if it is durable across 
> vnodes, then it should be queryable, right? 
> 
> One of the big considerations I have to look into is if the client writes to 
> the server, and that client gets told that the key has been written, is it 
> actually on disk or not?  Eventual consistency is fine, but many of the other 
> datastores have a WAL (much like leveldb), so at least if I pull the plug or 
> the app dies we know that on restart the key is actually available.  Do any 
> of the Riak settings give me this? (I thought that I was getting this with 
> dw=2/r=2/w=2 and replication).  Lazy writing of the data is not really an 
> option for what we need.
> 
> thanks again,
> Pat
> 
> 
> 
> Subject: Re: missing keys?
> From: [email protected]
> Date: Fri, 14 Sep 2012 12:51:28 -0400
> CC: [email protected]; [email protected]
> To: [email protected]
> 
> Hi Pat,
> 
> "thanks for that.  Is the default on dw not also quorum though??  so, in my 
> case it should be 2 (since I have 2 nodes)??"
> 
> Consider the two quite different meanings of the commonly used word "node."
> 
> 1. node: a virtual node (vnode). The number of vnodes in the ring is set by 
> ring_creation_size, which defaults to 64 and can be changed in app.config.
> 
> 2. node: a physical node, i.e. one of the physical machines in the cluster.
> 
> Both historically and at the moment, virtual nodes are divided evenly among 
> physical nodes.
> 
> The term "node" has been conflated and continues to be a pain point for 
> newcomers to Riak (not that you are a newcomer, but in general).
> 
> In all cases, the values r, w, dw and considerations for quorum are entirely 
> dependent on the replication value, known as n or n_val, which is the number 
> of replicas stored, one per virtual node.
> 
> Hope that clears some stuff up,
> 
> -Alexander Sicular
> 
> @siculars
> 
> On Sep 14, 2012, at 12:17 PM, Pat Lynch wrote:
> 
> Hi,
> thanks for that.  Is the default on dw not also quorum though??  so, in my 
> case it should be 2 (since I have 2 nodes)??
> 
> thanks,
> Pat.
> 
> Date: Fri, 14 Sep 2012 12:11:25 -0400
> Subject: Re: missing keys?
> From: [email protected]
> To: [email protected]
> CC: [email protected]; [email protected]
> 
> Pat,
> 
> On Fri, Sep 14, 2012 at 11:08 AM, Pat Lynch <[email protected]> wrote:
> 
> So, if I write a key to a cluster and then try to query that key, it may not 
> find it unless I retry?  Even if r=2 & w=2 ?  I thought that at least 2 
> systems had to have that key (either in memory or disk) for the write to have 
> succeeded with those settings?  If I had w=3, would you expect any missed 
> keys ??  (note that I didn't try that because I expected performance issues).
> 
> It's important to note that `w` is considered complete once the value is 
> queued in memory waiting to be written to disk [1].  A read does not check 
> the queue, only on-disk data.  The `dw` flag is what you want if you need to 
> know that the value has made it to disk and can be seen by a read.
> 
> -Z
> 
> [1]: https://github.com/basho/riak_kv/blob/master/src/riak_kv_vnode.erl#L284
> _______________________________________________
> riak-users mailing list
> [email protected]
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
