Re: Configure multiple riak clients in a cluster

2016-02-22 Thread Chathuri Gunawardhana
I think it should use the distributed basho_bench configuration. But even
with that, I get the same throughput results as with one basho_bench
client. I'm not sure what I'm doing wrong. Let's say I have 2 nodes, A and B.

In A's config file I have
{remote_nodes, [{'172.31.0.174', 'nodeB'}]}.
{distribute_work, true}.

Then on A I run the following command,

./basho_bench -N basho_bench@172.31.0.117 -C basho_bench examples/riakclient.config

Here, 172.31.0.117 is A's IP.
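
For clarity, here is how I understand the pieces fitting together (my
understanding may be wrong, which could be part of the problem):

%% on A, the coordinating bench node:
{remote_nodes, [{'172.31.0.174', 'nodeB'}]}.  %% drive a bench worker named nodeB@172.31.0.174
{distribute_work, true}.                      %% spread generated load across A and the remote node(s)

and on the command line, -N sets A's own Erlang node name while -C sets the
distribution cookie that all participating bench nodes must share.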

Can you please tell me what I'm doing wrong?

Thank you very much!

On Tue, Feb 23, 2016 at 2:14 AM, Chathuri Gunawardhana <
lanch.gunawardh...@gmail.com> wrote:

> I didn't get it clearly. Can you please provide me an example?
>
> Thank you very much!
>
> On Tue, Feb 23, 2016 at 2:07 AM, Christopher Meiklejohn <
> christopher.meiklej...@gmail.com> wrote:
>
>> Your client is registering with the name in the config file, and that
>> name can only be used once.
>>
>> You need to have each client use a different name.
>>
>> {riakclient_mynode, ['riak_bench@172.31.0.117', longnames]}.
>>
>>
>> Christopher
>>
>> Sent from my iPhone
>>
>> On Feb 22, 2016, at 15:46, Chathuri Gunawardhana <
>> lanch.gunawardh...@gmail.com> wrote:
>>
>> Hi All,
>>
>> I'm using the distributed version of the riak client.
>> I could configure one riak client for the cluster. But when I try to start
>> 2, one of them crashes (the error suggests that there is a global name
>> conflict). Can you please suggest how I can run multiple riak clients in
>> my cluster?
>>
>> My configurations are shown below. (They are the same on both nodes, apart
>> from the riakclient_mynode parameter.)
>>
>> {mode, max}.
>>
>> {duration, 5}.
>>
>> {concurrent,30}.
>>
>> {operations, [{put,1},{update,1},{get,1}]}.
>>
>> {driver, basho_bench_driver_riakclient}.
>>
>> {code_paths, ["/root/Riak/riak/rel/riak/lib/riak_kv-2.1.1-36-g5f58f01",
>>   "/root/Riak/riak/rel/riak/lib/riak_core-2.1.5"]}.
>>
>> {key_generator, {int_to_bin_bigendian, {uniform_int, 350}}}.
>>
>> {value_generator, {fixed_bin, 100}}.
>>
>> {riakclient_nodes, ['riak@172.31.0.106']}.
>>
>> {riakclient_mynode, ['riak_bench@172.31.0.117', longnames]}.
>>
>>
>> {riakclient_replies, 1}.
>>
>> Thank you very much!
>>
>> --
>> Chathuri Gunawardhana
>>
>>
>>
>
>
> --
> Chathuri Gunawardhana
>
>


-- 
Chathuri Gunawardhana


Re: Configure multiple riak clients in a cluster

2016-02-22 Thread Chathuri Gunawardhana
I didn't quite get it. Can you please provide an example?

Thank you very much!

On Tue, Feb 23, 2016 at 2:07 AM, Christopher Meiklejohn <
christopher.meiklej...@gmail.com> wrote:

> Your client is registering with the name in the config file, and that name
> can only be used once.
>
> You need to have each client use a different name.
>
> {riakclient_mynode, ['riak_bench@172.31.0.117', longnames]}.
>
>
> Christopher
>
> Sent from my iPhone
>
> On Feb 22, 2016, at 15:46, Chathuri Gunawardhana <
> lanch.gunawardh...@gmail.com> wrote:
>
> Hi All,
>
> I'm using the distributed version of the riak client.
> I could configure one riak client for the cluster. But when I try to start
> 2, one of them crashes (the error suggests that there is a global name
> conflict). Can you please suggest how I can run multiple riak clients in
> my cluster?
>
> My configurations are shown below. (They are the same on both nodes, apart
> from the riakclient_mynode parameter.)
>
> {mode, max}.
>
> {duration, 5}.
>
> {concurrent,30}.
>
> {operations, [{put,1},{update,1},{get,1}]}.
>
> {driver, basho_bench_driver_riakclient}.
>
> {code_paths, ["/root/Riak/riak/rel/riak/lib/riak_kv-2.1.1-36-g5f58f01",
>   "/root/Riak/riak/rel/riak/lib/riak_core-2.1.5"]}.
>
> {key_generator, {int_to_bin_bigendian, {uniform_int, 350}}}.
>
> {value_generator, {fixed_bin, 100}}.
>
> {riakclient_nodes, ['riak@172.31.0.106']}.
>
> {riakclient_mynode, ['riak_bench@172.31.0.117', longnames]}.
>
>
> {riakclient_replies, 1}.
>
> Thank you very much!
>
> --
> Chathuri Gunawardhana
>
>
>


-- 
Chathuri Gunawardhana


Re: Configure multiple riak clients in a cluster

2016-02-22 Thread Christopher Meiklejohn
Your client is registering with the name in the config file, and that name can 
only be used once.

You need to have each client use a different name.

> {riakclient_mynode, ['riak_bench@172.31.0.117', longnames]}.
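
For instance (the exact names are illustrative), node A's config could read

{riakclient_mynode, ['riak_bench_a@172.31.0.117', longnames]}.

while node B's config uses a different node name (and B's own IP):

{riakclient_mynode, ['riak_bench_b@172.31.0.174', longnames]}.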

Christopher 

Sent from my iPhone

> On Feb 22, 2016, at 15:46, Chathuri Gunawardhana 
>  wrote:
> 
> Hi All,
> 
> I'm using the distributed version of the riak client. I could configure one 
> riak client for the cluster. But when I try to start 2, one of them crashes 
> (the error suggests that there is a global name conflict). Can you please 
> suggest how I can run multiple riak clients in my cluster?
> 
> My configurations are shown below. (They are the same on both nodes, apart 
> from the riakclient_mynode parameter.)
> 
> {mode, max}.
> 
> {duration, 5}.
> 
> {concurrent,30}.
> 
> {operations, [{put,1},{update,1},{get,1}]}.
> 
> {driver, basho_bench_driver_riakclient}.
> 
> {code_paths, ["/root/Riak/riak/rel/riak/lib/riak_kv-2.1.1-36-g5f58f01",
>   "/root/Riak/riak/rel/riak/lib/riak_core-2.1.5"]}.
> 
> {key_generator, {int_to_bin_bigendian, {uniform_int, 350}}}.
> 
> {value_generator, {fixed_bin, 100}}.
> 
> {riakclient_nodes, ['riak@172.31.0.106']}.
> 
> {riakclient_mynode, ['riak_bench@172.31.0.117', longnames]}.
> 
> {riakclient_replies, 1}.
> 
> Thank you very much!
> 
> -- 
> Chathuri Gunawardhana
> 


Configure multiple riak clients in a cluster

2016-02-22 Thread Chathuri Gunawardhana
Hi All,

I'm using the distributed version of the riak client.
I could configure one riak client for the cluster. But when I try to start
2, one of them crashes (the error suggests that there is a global name
conflict). Can you please suggest how I can run multiple riak clients in
my cluster?

My configurations are shown below. (They are the same on both nodes, apart
from the riakclient_mynode parameter.)

{mode, max}.

{duration, 5}.

{concurrent,30}.

{operations, [{put,1},{update,1},{get,1}]}.

{driver, basho_bench_driver_riakclient}.

{code_paths, ["/root/Riak/riak/rel/riak/lib/riak_kv-2.1.1-36-g5f58f01",
  "/root/Riak/riak/rel/riak/lib/riak_core-2.1.5"]}.

{key_generator, {int_to_bin_bigendian, {uniform_int, 350}}}.

{value_generator, {fixed_bin, 100}}.

{riakclient_nodes, ['riak@172.31.0.106']}.

{riakclient_mynode, ['riak_bench@172.31.0.117', longnames]}.

{riakclient_replies, 1}.

Thank you very much!

-- 
Chathuri Gunawardhana


Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Vanessa Williams
Thanks very much for the advice. I'll give it a good test and then write
something. Somewhere. Cheers.

On Mon, Feb 22, 2016 at 3:42 PM, Alex Moore  wrote:

> If the contract is "Return true iff the object existed", then the second
> fetch is superfluous + so is the async example I posted.  You can use the
> code you had as-is.
>
> Thanks,
> Alex
>
> On Mon, Feb 22, 2016 at 1:23 PM, Vanessa Williams <
> vanessa.willi...@thoughtwire.ca> wrote:
>
>> Hi Alex, would a second fetch just indicate that the object is *still*
>> deleted? Or that this delete operation succeeded? In other words, perhaps
>> what my contract really is is: return true if there was already a value
>> there. In which case would the second fetch be superfluous?
>>
>> Thanks for your help.
>>
>> Vanessa
>>
>> On Mon, Feb 22, 2016 at 11:15 AM, Alex Moore  wrote:
>>
>>> That's the correct behaviour: it should return true iff a value was
 actually deleted.
>>>
>>>
>>> Ok, if that's the case you should do another FetchValue after the
>>> deletion (to update the response.hasValues()) field, or use the async
>>> version of the delete function. I also noticed that we weren't passing the
>>> vclock to the Delete function, so I added that here as well:
>>>
>>> public boolean delete(String key) throws ExecutionException, 
>>> InterruptedException {
>>>
>>> // fetch in order to get the causal context
>>> FetchValue.Response response = fetchValue(key);
>>>
>>> if(response.isNotFound())
>>> {
>>> return ???; // what do we return if it doesn't exist?
>>> }
>>>
>>> DeleteValue deleteValue = new DeleteValue.Builder(new 
>>> Location(namespace, key))
>>>  
>>> .withVClock(response.getVectorClock())
>>>  .build();
>>>
>>> final RiakFuture<Void, Location> deleteFuture = 
>>> client.executeAsync(deleteValue);
>>>
>>> deleteFuture.await();
>>>
>>> if(deleteFuture.isSuccess())
>>> {
>>> return true;
>>> }
>>> else
>>> {
>>> deleteFuture.cause(); // Cause of failure
>>> return false;
>>> }
>>> }
>>>
>>>
>>> Thanks,
>>> Alex
>>>
>>> On Mon, Feb 22, 2016 at 10:48 AM, Vanessa Williams <
>>> vanessa.willi...@thoughtwire.ca> wrote:
>>>
 See inline:

 On Mon, Feb 22, 2016 at 10:31 AM, Alex Moore  wrote:

> Hi Vanessa,
>
> You might have a problem with your delete function (depending on it's
> return value).
> What does the return value of the delete() function indicate?  Right
> now if an object existed, and was deleted, the function will return true,
> and will only return false if the object didn't exist or only consisted of
> tombstones.
>


 That's the correct behaviour: it should return true iff a value was
 actually deleted.


> If you never look at the object value returned by your fetchValue(key) 
> function, another potential optimization you could make is to only return 
> the HEAD / metadata:
>
> FetchValue fv = new FetchValue.Builder(new Location(new Namespace(
> "some_bucket"), key))
>
>   .withOption(FetchValue.Option.HEAD, true)
>   .build();
>
> This would be more efficient because Riak won't have to send you the
> values over the wire, if you only need the metadata.
>
>
 Thanks, I'll clean that up.


> If you do write this up somewhere, share the link! :)
>

 Will do!

 Regards,
 Vanessa


>
> Thanks,
> Alex
>
>
> On Mon, Feb 22, 2016 at 6:23 AM, Vanessa Williams <
> vanessa.willi...@thoughtwire.ca> wrote:
>
>> Hi Dmitri, this thread is old, but I read this part of your answer
>> carefully:
>>
>> You can use the following strategies to prevent stale values, in
>>> increasing order of security/preference:
>>> 1) Use timestamps (and not pass in vector clocks/causal context).
>>> This is ok if you're not editing objects, or you're ok with a bit of 
>>> risk
>>> of stale values.
>>> 2) Use causal context correctly (which means, read-before-you-write
>>> -- in fact, the Update operation in the java client does this for you, I
>>> think). And if Riak can't determine which version is correct, it will 
>>> fall
>>> back on timestamps.
>>> 3) Turn on siblings, for that bucket or bucket type.  That way, Riak
>>> will still try to use causal context to decide the right value. But if 
>>> it
>>> can't decide, it will store BOTH values, and give them back to you on 
>>> the
>>> next read, so that your application can decide which is the correct one.
>>
>>
>> I decided on strategy #2. What I am hoping for is some validation
>> that the code we use to "get", "put", and "delete" is correct in that
>> context, or if it could be simplified in some 

Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Alex Moore
If the contract is "Return true iff the object existed", then the second
fetch is superfluous + so is the async example I posted.  You can use the
code you had as-is.

Thanks,
Alex

On Mon, Feb 22, 2016 at 1:23 PM, Vanessa Williams <
vanessa.willi...@thoughtwire.ca> wrote:

> Hi Alex, would a second fetch just indicate that the object is *still*
> deleted? Or that this delete operation succeeded? In other words, perhaps
> what my contract really is is: return true if there was already a value
> there. In which case would the second fetch be superfluous?
>
> Thanks for your help.
>
> Vanessa
>
> On Mon, Feb 22, 2016 at 11:15 AM, Alex Moore  wrote:
>
>> That's the correct behaviour: it should return true iff a value was
>>> actually deleted.
>>
>>
>> Ok, if that's the case you should do another FetchValue after the
>> deletion (to update the response.hasValues()) field, or use the async
>> version of the delete function. I also noticed that we weren't passing the
>> vclock to the Delete function, so I added that here as well:
>>
>> public boolean delete(String key) throws ExecutionException, 
>> InterruptedException {
>>
>> // fetch in order to get the causal context
>> FetchValue.Response response = fetchValue(key);
>>
>> if(response.isNotFound())
>> {
>> return ???; // what do we return if it doesn't exist?
>> }
>>
>> DeleteValue deleteValue = new DeleteValue.Builder(new 
>> Location(namespace, key))
>>  
>> .withVClock(response.getVectorClock())
>>  .build();
>>
>> final RiakFuture<Void, Location> deleteFuture = 
>> client.executeAsync(deleteValue);
>>
>> deleteFuture.await();
>>
>> if(deleteFuture.isSuccess())
>> {
>> return true;
>> }
>> else
>> {
>> deleteFuture.cause(); // Cause of failure
>> return false;
>> }
>> }
>>
>>
>> Thanks,
>> Alex
>>
>> On Mon, Feb 22, 2016 at 10:48 AM, Vanessa Williams <
>> vanessa.willi...@thoughtwire.ca> wrote:
>>
>>> See inline:
>>>
>>> On Mon, Feb 22, 2016 at 10:31 AM, Alex Moore  wrote:
>>>
 Hi Vanessa,

 You might have a problem with your delete function (depending on it's
 return value).
 What does the return value of the delete() function indicate?  Right
 now if an object existed, and was deleted, the function will return true,
 and will only return false if the object didn't exist or only consisted of
 tombstones.

>>>
>>>
>>> That's the correct behaviour: it should return true iff a value was
>>> actually deleted.
>>>
>>>
 If you never look at the object value returned by your fetchValue(key) 
 function, another potential optimization you could make is to only return 
 the HEAD / metadata:

 FetchValue fv = new FetchValue.Builder(new Location(new Namespace(
 "some_bucket"), key))

   .withOption(FetchValue.Option.HEAD, true)
   .build();

 This would be more efficient because Riak won't have to send you the
 values over the wire, if you only need the metadata.


>>> Thanks, I'll clean that up.
>>>
>>>
 If you do write this up somewhere, share the link! :)

>>>
>>> Will do!
>>>
>>> Regards,
>>> Vanessa
>>>
>>>

 Thanks,
 Alex


 On Mon, Feb 22, 2016 at 6:23 AM, Vanessa Williams <
 vanessa.willi...@thoughtwire.ca> wrote:

> Hi Dmitri, this thread is old, but I read this part of your answer
> carefully:
>
> You can use the following strategies to prevent stale values, in
>> increasing order of security/preference:
>> 1) Use timestamps (and not pass in vector clocks/causal context).
>> This is ok if you're not editing objects, or you're ok with a bit of risk
>> of stale values.
>> 2) Use causal context correctly (which means, read-before-you-write
>> -- in fact, the Update operation in the java client does this for you, I
>> think). And if Riak can't determine which version is correct, it will 
>> fall
>> back on timestamps.
>> 3) Turn on siblings, for that bucket or bucket type.  That way, Riak
>> will still try to use causal context to decide the right value. But if it
>> can't decide, it will store BOTH values, and give them back to you on the
>> next read, so that your application can decide which is the correct one.
>
>
> I decided on strategy #2. What I am hoping for is some validation that
> the code we use to "get", "put", and "delete" is correct in that context,
> or if it could be simplified in some cases. Not we are using delete-mode
> "immediate" and no duplicates.
>
> In their shortest possible forms, here are the three methods I'd like
> some feedback on (note, they're being used in production and haven't 
> caused
> any problems yet, however we have very few writes in production so the 
> lack
>>>

Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Vanessa Williams
Hi Alex, would a second fetch just indicate that the object is *still*
deleted? Or that this delete operation succeeded? In other words, perhaps
my contract really is: return true if there was already a value there. In
which case, would the second fetch be superfluous?

Thanks for your help.

Vanessa

On Mon, Feb 22, 2016 at 11:15 AM, Alex Moore  wrote:

> That's the correct behaviour: it should return true iff a value was
>> actually deleted.
>
>
> Ok, if that's the case you should do another FetchValue after the deletion
> (to update the response.hasValues()) field, or use the async version of
> the delete function. I also noticed that we weren't passing the vclock to
> the Delete function, so I added that here as well:
>
> public boolean delete(String key) throws ExecutionException, 
> InterruptedException {
>
> // fetch in order to get the causal context
> FetchValue.Response response = fetchValue(key);
>
> if(response.isNotFound())
> {
> return ???; // what do we return if it doesn't exist?
> }
>
> DeleteValue deleteValue = new DeleteValue.Builder(new Location(namespace, 
> key))
>  
> .withVClock(response.getVectorClock())
>  .build();
>
> final RiakFuture<Void, Location> deleteFuture = 
> client.executeAsync(deleteValue);
>
> deleteFuture.await();
>
> if(deleteFuture.isSuccess())
> {
> return true;
> }
> else
> {
> deleteFuture.cause(); // Cause of failure
> return false;
> }
> }
>
>
> Thanks,
> Alex
>
> On Mon, Feb 22, 2016 at 10:48 AM, Vanessa Williams <
> vanessa.willi...@thoughtwire.ca> wrote:
>
>> See inline:
>>
>> On Mon, Feb 22, 2016 at 10:31 AM, Alex Moore  wrote:
>>
>>> Hi Vanessa,
>>>
>>> You might have a problem with your delete function (depending on it's
>>> return value).
>>> What does the return value of the delete() function indicate?  Right now
>>> if an object existed, and was deleted, the function will return true, and
>>> will only return false if the object didn't exist or only consisted of
>>> tombstones.
>>>
>>
>>
>> That's the correct behaviour: it should return true iff a value was
>> actually deleted.
>>
>>
>>> If you never look at the object value returned by your fetchValue(key) 
>>> function, another potential optimization you could make is to only return 
>>> the HEAD / metadata:
>>>
>>> FetchValue fv = new FetchValue.Builder(new Location(new Namespace(
>>> "some_bucket"), key))
>>>
>>>   .withOption(FetchValue.Option.HEAD, true)
>>>   .build();
>>>
>>> This would be more efficient because Riak won't have to send you the
>>> values over the wire, if you only need the metadata.
>>>
>>>
>> Thanks, I'll clean that up.
>>
>>
>>> If you do write this up somewhere, share the link! :)
>>>
>>
>> Will do!
>>
>> Regards,
>> Vanessa
>>
>>
>>>
>>> Thanks,
>>> Alex
>>>
>>>
>>> On Mon, Feb 22, 2016 at 6:23 AM, Vanessa Williams <
>>> vanessa.willi...@thoughtwire.ca> wrote:
>>>
 Hi Dmitri, this thread is old, but I read this part of your answer
 carefully:

 You can use the following strategies to prevent stale values, in
> increasing order of security/preference:
> 1) Use timestamps (and not pass in vector clocks/causal context). This
> is ok if you're not editing objects, or you're ok with a bit of risk of
> stale values.
> 2) Use causal context correctly (which means, read-before-you-write --
> in fact, the Update operation in the java client does this for you, I
> think). And if Riak can't determine which version is correct, it will fall
> back on timestamps.
> 3) Turn on siblings, for that bucket or bucket type.  That way, Riak
> will still try to use causal context to decide the right value. But if it
> can't decide, it will store BOTH values, and give them back to you on the
> next read, so that your application can decide which is the correct one.


 I decided on strategy #2. What I am hoping for is some validation that
 the code we use to "get", "put", and "delete" is correct in that context,
 or if it could be simplified in some cases. Not we are using delete-mode
 "immediate" and no duplicates.

 In their shortest possible forms, here are the three methods I'd like
 some feedback on (note, they're being used in production and haven't caused
 any problems yet, however we have very few writes in production so the lack
 of problems doesn't support the conclusion that the implementation is
 correct.) Note all argument-checking, exception-handling, and logging
 removed for clarity. *I'm mostly concerned about correct use of causal
 context and response.isNotFound and response.hasValues. *Is there
 anything I could/should have left out?

 public RiakClient(String name,
 com.basho.riak.client.api.RiakClient client)
 {

Re: Increase number of partitions above 1024

2016-02-22 Thread Chathuri Gunawardhana
I'm using the riak master version from the riak GitHub repository
(riak_kv_version: <<"2.1.1-38-ga8bc9e0">>). I don't use coverage queries.

When I try to set the partition count above 1024, it suggests doing it via
advanced config (the cuttlefish schema for riak_core validates whether the
value is above 1024 and, if so, gives this suggestion). But I don't know
how to add this parameter to advanced.config.
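
My best guess would be something like the following in advanced.config
(assuming riak.conf's ring_size corresponds to riak_core's
ring_creation_size setting; the value is just an example), but I couldn't
confirm it:

[
 {riak_core, [
   {ring_creation_size, 2048}
 ]}
].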

Thank you very much!

On Mon, Feb 22, 2016 at 5:05 PM, Alex Moore  wrote:

> Ok, what does `riak-admin status | grep riak_kv_version` return?  The
> config files are different for Riak 1.x and 2.x.
>
> Also for your tests, are you using any "coverage query" features like
> MapReduce or 2i queries?
>
> Thanks,
> Alex
>
>
>
>
> On Mon, Feb 22, 2016 at 10:43 AM, Chathuri Gunawardhana <
> lanch.gunawardh...@gmail.com> wrote:
>
>> For my experiment I will be using 100 nodes.
>>
>> Thank you!
>>
>> On Mon, Feb 22, 2016 at 4:40 PM, Alex Moore  wrote:
>>
>>> Hi Chathuri,
>>>
>>> Larger ring sizes are not usually recommended, you can overload disk I/O
>>> if the number of vnodes to nodes is too high.
>>> Similarly you can underload other system resources if the vnode/node
>>> ratio is too low.
>>>
>>> How many nodes are you planning on running?
>>>
>>> Thanks,
>>> Alex
>>>
>>> On Mon, Feb 22, 2016 at 5:42 AM, Chathuri Gunawardhana <
>>> lanch.gunawardh...@gmail.com> wrote:
>>>
 Hi,

 It is not possible to increase the number of partitions above 1024 and
 has been disabled via cuttlefish in riak.config. When I try to increase
 ring_size via riak.config, the error suggest that I should configure
 partition size>1024 via advanced config file. But I couldn't find a way of
 how I can specify this in advanced.config file. Can you please suggest me
 how I can do this?

 Thank you very much!

 --
 Chathuri Gunawardhana




>>>
>>
>>
>> --
>> Chathuri Gunawardhana
>>
>>
>


-- 
Chathuri Gunawardhana


Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Alex Moore
>
> That's the correct behaviour: it should return true iff a value was
> actually deleted.


OK, if that's the case you should do another FetchValue after the deletion
(to update the response.hasValues() field), or use the async version of the
delete function. I also noticed that we weren't passing the vclock to the
Delete function, so I added that here as well:

public boolean delete(String key) throws ExecutionException, InterruptedException {

    // fetch in order to get the causal context
    FetchValue.Response response = fetchValue(key);

    if (response.isNotFound())
    {
        return ???; // what do we return if it doesn't exist?
    }

    DeleteValue deleteValue = new DeleteValue.Builder(new Location(namespace, key))
                                    .withVClock(response.getVectorClock())
                                    .build();

    final RiakFuture<Void, Location> deleteFuture = client.executeAsync(deleteValue);

    deleteFuture.await();

    if (deleteFuture.isSuccess())
    {
        return true;
    }
    else
    {
        deleteFuture.cause(); // Cause of failure
        return false;
    }
}


Thanks,
Alex

On Mon, Feb 22, 2016 at 10:48 AM, Vanessa Williams <
vanessa.willi...@thoughtwire.ca> wrote:

> See inline:
>
> On Mon, Feb 22, 2016 at 10:31 AM, Alex Moore  wrote:
>
>> Hi Vanessa,
>>
>> You might have a problem with your delete function (depending on it's
>> return value).
>> What does the return value of the delete() function indicate?  Right now
>> if an object existed, and was deleted, the function will return true, and
>> will only return false if the object didn't exist or only consisted of
>> tombstones.
>>
>
>
> That's the correct behaviour: it should return true iff a value was
> actually deleted.
>
>
>> If you never look at the object value returned by your fetchValue(key) 
>> function, another potential optimization you could make is to only return 
>> the HEAD / metadata:
>>
>> FetchValue fv = new FetchValue.Builder(new Location(new Namespace(
>> "some_bucket"), key))
>>
>>   .withOption(FetchValue.Option.HEAD, true)
>>   .build();
>>
>> This would be more efficient because Riak won't have to send you the
>> values over the wire, if you only need the metadata.
>>
>>
> Thanks, I'll clean that up.
>
>
>> If you do write this up somewhere, share the link! :)
>>
>
> Will do!
>
> Regards,
> Vanessa
>
>
>>
>> Thanks,
>> Alex
>>
>>
>> On Mon, Feb 22, 2016 at 6:23 AM, Vanessa Williams <
>> vanessa.willi...@thoughtwire.ca> wrote:
>>
>>> Hi Dmitri, this thread is old, but I read this part of your answer
>>> carefully:
>>>
>>> You can use the following strategies to prevent stale values, in
 increasing order of security/preference:
 1) Use timestamps (and not pass in vector clocks/causal context). This
 is ok if you're not editing objects, or you're ok with a bit of risk of
 stale values.
 2) Use causal context correctly (which means, read-before-you-write --
 in fact, the Update operation in the java client does this for you, I
 think). And if Riak can't determine which version is correct, it will fall
 back on timestamps.
 3) Turn on siblings, for that bucket or bucket type.  That way, Riak
 will still try to use causal context to decide the right value. But if it
 can't decide, it will store BOTH values, and give them back to you on the
 next read, so that your application can decide which is the correct one.
>>>
>>>
>>> I decided on strategy #2. What I am hoping for is some validation that
>>> the code we use to "get", "put", and "delete" is correct in that context,
>>> or if it could be simplified in some cases. Not we are using delete-mode
>>> "immediate" and no duplicates.
>>>
>>> In their shortest possible forms, here are the three methods I'd like
>>> some feedback on (note, they're being used in production and haven't caused
>>> any problems yet, however we have very few writes in production so the lack
>>> of problems doesn't support the conclusion that the implementation is
>>> correct.) Note all argument-checking, exception-handling, and logging
>>> removed for clarity. *I'm mostly concerned about correct use of causal
>>> context and response.isNotFound and response.hasValues. *Is there
>>> anything I could/should have left out?
>>>
>>> public RiakClient(String name, com.basho.riak.client.api.RiakClient
>>> client)
>>> {
>>> this.name = name;
>>> this.namespace = new Namespace(name);
>>> this.client = client;
>>> }
>>>
>>> public byte[] get(String key) throws ExecutionException,
>>> InterruptedException {
>>>
>>> FetchValue.Response response = fetchValue(key);
>>> if (!response.isNotFound())
>>> {
>>> RiakObject riakObject = response.getValue(RiakObject.class);
>>> return riakObject.getValue().getValue();
>>> }
>>> return null;
>>> }
>>>
>>> public void put(String key, byte[] value) thro

Re: Increase number of partitions above 1024

2016-02-22 Thread Alex Moore
Ok, what does `riak-admin status | grep riak_kv_version` return?  The
config files are different for Riak 1.x and 2.x.

Also for your tests, are you using any "coverage query" features like
MapReduce or 2i queries?

Thanks,
Alex




On Mon, Feb 22, 2016 at 10:43 AM, Chathuri Gunawardhana <
lanch.gunawardh...@gmail.com> wrote:

> For my experiment I will be using 100 nodes.
>
> Thank you!
>
> On Mon, Feb 22, 2016 at 4:40 PM, Alex Moore  wrote:
>
>> Hi Chathuri,
>>
>> Larger ring sizes are not usually recommended, you can overload disk I/O
>> if the number of vnodes to nodes is too high.
>> Similarly you can underload other system resources if the vnode/node
>> ratio is too low.
>>
>> How many nodes are you planning on running?
>>
>> Thanks,
>> Alex
>>
>> On Mon, Feb 22, 2016 at 5:42 AM, Chathuri Gunawardhana <
>> lanch.gunawardh...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> It is not possible to increase the number of partitions above 1024 and
>>> has been disabled via cuttlefish in riak.config. When I try to increase
>>> ring_size via riak.config, the error suggest that I should configure
>>> partition size>1024 via advanced config file. But I couldn't find a way of
>>> how I can specify this in advanced.config file. Can you please suggest me
>>> how I can do this?
>>>
>>> Thank you very much!
>>>
>>> --
>>> Chathuri Gunawardhana
>>>
>>>
>>>
>>>
>>
>
>
> --
> Chathuri Gunawardhana
>
>


Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Vanessa Williams
See inline:

On Mon, Feb 22, 2016 at 10:31 AM, Alex Moore  wrote:

> Hi Vanessa,
>
> You might have a problem with your delete function (depending on it's
> return value).
> What does the return value of the delete() function indicate?  Right now
> if an object existed, and was deleted, the function will return true, and
> will only return false if the object didn't exist or only consisted of
> tombstones.
>


That's the correct behaviour: it should return true iff a value was
actually deleted.


> If you never look at the object value returned by your fetchValue(key) 
> function, another potential optimization you could make is to only return the 
> HEAD / metadata:
>
> FetchValue fv = new FetchValue.Builder(new Location(new Namespace(
> "some_bucket"), key))
>
>   .withOption(FetchValue.Option.HEAD, true)
>   .build();
>
> This would be more efficient because Riak won't have to send you the
> values over the wire, if you only need the metadata.
>
>
Thanks, I'll clean that up.


> If you do write this up somewhere, share the link! :)
>

Will do!

Regards,
Vanessa


>
> Thanks,
> Alex
>
>
> On Mon, Feb 22, 2016 at 6:23 AM, Vanessa Williams <
> vanessa.willi...@thoughtwire.ca> wrote:
>
>> Hi Dmitri, this thread is old, but I read this part of your answer
>> carefully:
>>
>> You can use the following strategies to prevent stale values, in
>>> increasing order of security/preference:
>>> 1) Use timestamps (and not pass in vector clocks/causal context). This
>>> is ok if you're not editing objects, or you're ok with a bit of risk of
>>> stale values.
>>> 2) Use causal context correctly (which means, read-before-you-write --
>>> in fact, the Update operation in the java client does this for you, I
>>> think). And if Riak can't determine which version is correct, it will fall
>>> back on timestamps.
>>> 3) Turn on siblings, for that bucket or bucket type.  That way, Riak
>>> will still try to use causal context to decide the right value. But if it
>>> can't decide, it will store BOTH values, and give them back to you on the
>>> next read, so that your application can decide which is the correct one.
>>
>>
>> I decided on strategy #2. What I am hoping for is some validation that
>> the code we use to "get", "put", and "delete" is correct in that context,
>> or if it could be simplified in some cases. Not we are using delete-mode
>> "immediate" and no duplicates.
>>
>> In their shortest possible forms, here are the three methods I'd like
>> some feedback on (note, they're being used in production and haven't caused
>> any problems yet, however we have very few writes in production so the lack
>> of problems doesn't support the conclusion that the implementation is
>> correct.) Note all argument-checking, exception-handling, and logging
>> removed for clarity. *I'm mostly concerned about correct use of causal
>> context and response.isNotFound and response.hasValues. *Is there
>> anything I could/should have left out?
>>
>> public RiakClient(String name, com.basho.riak.client.api.RiakClient
>> client)
>> {
>> this.name = name;
>> this.namespace = new Namespace(name);
>> this.client = client;
>> }
>>
>> public byte[] get(String key) throws ExecutionException,
>> InterruptedException {
>>
>> FetchValue.Response response = fetchValue(key);
>> if (!response.isNotFound())
>> {
>> RiakObject riakObject = response.getValue(RiakObject.class);
>> return riakObject.getValue().getValue();
>> }
>> return null;
>> }
>>
>> public void put(String key, byte[] value) throws ExecutionException,
>> InterruptedException {
>>
>> // fetch in order to get the causal context
>> FetchValue.Response response = fetchValue(key);
>> RiakObject storeObject = new
>>
>> RiakObject().setValue(BinaryValue.create(value)).setContentType("binary/octet-stream");
>> StoreValue.Builder builder =
>> new StoreValue.Builder(storeObject).withLocation(new
>> Location(namespace, key));
>> if (response.getVectorClock() != null) {
>> builder = builder.withVectorClock(response.getVectorClock());
>> }
>> StoreValue storeValue = builder.build();
>> client.execute(storeValue);
>> }
>>
>> public boolean delete(String key) throws ExecutionException,
>> InterruptedException {
>>
>> // fetch in order to get the causal context
>> FetchValue.Response response = fetchValue(key);
>> if (!response.isNotFound())
>> {
>> DeleteValue deleteValue = new DeleteValue.Builder(new
>> Location(namespace, key)).build();
>> client.execute(deleteValue);
>> }
>> return !response.isNotFound() || !response.hasValues();
>> }
>>
>>
>> Any comments much appreciated. I want to provide a minimally correct
>> example of simple client code somewhere (GitHub, blog 

Re: Increase number of partitions above 1024

2016-02-22 Thread Chathuri Gunawardhana
For my experiment I will be using 100 nodes.

Thank you!

On Mon, Feb 22, 2016 at 4:40 PM, Alex Moore  wrote:

> Hi Chathuri,
>
> Larger ring sizes are not usually recommended, you can overload disk I/O
> if the number of vnodes to nodes is too high.
> Similarly you can underload other system resources if the vnode/node ratio
> is too low.
>
> How many nodes are you planning on running?
>
> Thanks,
> Alex
>
> On Mon, Feb 22, 2016 at 5:42 AM, Chathuri Gunawardhana <
> lanch.gunawardh...@gmail.com> wrote:
>
>> Hi,
>>
>> It is not possible to increase the number of partitions above 1024 and
>> has been disabled via cuttlefish in riak.config. When I try to increase
>> ring_size via riak.config, the error suggest that I should configure
>> partition size>1024 via advanced config file. But I couldn't find a way of
>> how I can specify this in advanced.config file. Can you please suggest me
>> how I can do this?
>>
>> Thank you very much!
>>
>> --
>> Chathuri Gunawardhana
>>
>>
>>
>>
>


-- 
Chathuri Gunawardhana


Re: riak crash

2016-02-22 Thread Matthew Von-Maszewski
Raviraj,

Please run 'riak-debug'.  This is in the bin directory along with 'riak start' 
and 'riak-admin'.

riak-debug will produce a file named something like 
/home/user/r...@10.0.0.15-riak-debug.tar.gz 
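
For example (adjust the path to wherever Riak is installed on your system):

cd /path/to/riak/bin
./riak-debug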


You should email that file to me directly, or post it to dropbox or similar and 
send me a link.  You do not want to send that file to the entire mailing list.

I will review the file and suggest next steps.

Matthew

> On Feb 22, 2016, at 5:13 AM, Raviraj Vaishampayan  
> wrote:
> 
> Hi,
> 
> We have been using riak to gather our test data and analyze results after 
> test completes.
> Recently we have observed riak crash in riak console logs.
> This causes our tests failing to record data to riak and bailing out :-(
> 
> The crash logs are as follow:
> 2016-02-19 16:25:26.255 [error] <0.2160.0> gen_fsm <0.2160.0> in state active 
> terminated with reason: no function clause matching 
> riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
> {state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
>  line 1195
> 2016-02-19 16:25:26.260 [error] <0.2160.0> CRASH REPORT Process <0.2160.0> 
> with 2 neighbours exited with reason: no function clause matching 
> riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
> {state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
>  line 1195 in gen_fsm:terminate/7 line 622
> 2016-02-19 16:25:26.260 [error] <0.172.0> Supervisor riak_core_vnode_sup had 
> child undefined started with {riak_core_vnode,start_link,undefined} at 
> <0.2160.0> exit with reason no function clause matching 
> riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
> {state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
>  line 1195 in context child_terminated
> 2016-02-19 16:25:26.261 [error] <0.4319.0> gen_fsm <0.4319.0> in state ready 
> terminated with reason: no function clause matching 
> riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
> {state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
>  line 1195
> 2016-02-19 16:25:26.275 [error] <0.4319.0> CRASH REPORT Process <0.4319.0> 
> with 10 neighbours exited with reason: no function clause matching 
> riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
> {state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
>  line 1195 in gen_fsm:terminate/7 line 622
> 2016-02-19 16:25:26.278 [error] <0.4320.0> Supervisor 
> {<0.4320.0>,poolboy_sup} had child riak_core_vnode_worker started with 
> riak_core_vnode_worker:start_link([{worker_module,riak_core_vnode_worker},{worker_args,[268322566228720457638957762256505085639956365312,...]},...])
>  at undefined exit with reason no function clause matching 
> riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
> {state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
>  line 1195 in context shutdown_error
> 2016-02-19 16:25:26.278 [error] <0.4320.0> gen_server <0.4320.0> terminated 
> with reason: no function clause matching 
> riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
> {state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
>  line 1195
> 2016-02-19 16:25:26.278 [error] <0.4320.0> CRASH REPORT Process <0.4320.0> 
> with 0 neighbours exited with reason: no function clause matching 
> riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
> {state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
>  line 1195 in gen_server:terminate/6 line 744
> 2016-02-19 16:25:26.806 [error] <0.2157.0> gen_fsm <0.2157.0> in state active 
> terminated with reason: {timeout,{gen_server,call,[<0.5141.0>,stop]}}
> 2016-02-19 16:25:26.808 [error] <0.2157.0> CRASH REPORT Process <0.2157.0> 
> with 2 neighbours exited with reason: 
> {timeout,{gen_server,call,[<0.5141.0>,stop]}} in gen_fsm:terminate/7 line 600
> 2016-02-19 16:25:26.809 [error] <0.5450.0> gen_fsm <0.5450.0> in state ready 
> terminated with reason: {timeout,{gen_server,call,[<0.5141.0>,stop]}}
> 2016-02-19 16:25:26.809 [error] <0.172.0> Supervisor riak_core_vnode_sup had 
> child undefined started with {riak_core_vnode,start_link,undefined} at 
> <0.2157.0> exit with reason {timeout,{gen_server,call,[<0.5141.0>,stop]}} in 
> context child_terminated
> 2016-02-19 16:25:26.809 [error] <0.5450.0> CRASH REPORT Process <0.5450.0> 
> with 10 neighbours exited with reason: 
> {timeout,{gen_server,call,[<0.5141.0>,stop]}} in gen_fsm:terminate/7 line 622
> 2016-02-19 16:25:26.809 [error] <0.5451.0> Supervisor 
> {<0.5451.0>,poolboy_sup} had child riak_core_v

Re: Increase number of partitions above 1024

2016-02-22 Thread Alex Moore
Hi Chathuri,

Larger ring sizes are not usually recommended: you can overload disk I/O if
the ratio of vnodes to nodes is too high.
Similarly, you can underutilize other system resources if the vnode/node
ratio is too low.
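
For example, as a quick sanity check on the ratio:

512 partitions / 10 nodes = ~51 vnodes per node

versus the default 64-partition ring on a typical 5-node cluster, which
works out to ~13 vnodes per node.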

How many nodes are you planning on running?

Thanks,
Alex

On Mon, Feb 22, 2016 at 5:42 AM, Chathuri Gunawardhana <
lanch.gunawardh...@gmail.com> wrote:

> Hi,
>
> It is not possible to increase the number of partitions above 1024 and has
> been disabled via cuttlefish in riak.config. When I try to increase
> ring_size via riak.config, the error suggest that I should configure
> partition size>1024 via advanced config file. But I couldn't find a way of
> how I can specify this in advanced.config file. Can you please suggest me
> how I can do this?
>
> Thank you very much!
>
> --
> Chathuri Gunawardhana
>
>
>
>


Riak 2.1.3 hooks not invoked

2016-02-22 Thread Adam Kovari
Hello

I am trying to enable a post-commit hook for my bucket type, and I can see it is 
configured in the bucket type:

➜  idvt-riak git:(master) ✗ riak-admin bucket-type status change_log
change_log is active

active: true
allow_mult: true
basic_quorum: false
big_vclock: 50
chash_keyfun: {riak_core_util,chash_std_keyfun}
claimant: 'riak@127.0.0.1'
dvv_enabled: true
dw: quorum
last_write_wins: false
linkfun: {modfun,riak_kv_wm_link_walker,mapreduce_linkfun}
n_val: 3
notfound_ok: true
old_vclock: 86400
postcommit: [{struct,[{<<"mod">>,<<"idvt_hooks">>},
                      {<<"fun">>,<<"postcommit_batch">>}]}]
pr: 0
precommit: []
pw: 0
r: quorum
rw: quorum
small_vclock: 50
w: quorum
young_vclock: 20
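
(For reference, I set the postcommit property with something along these
lines; the exact update command I ran may have differed slightly:

riak-admin bucket-type update change_log '{"props":{"postcommit":[{"mod":"idvt_hooks","fun":"postcommit_batch"}]}}'
)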



The problem is that this hook never gets invoked.

The module with the hook looks like:

-module(idvt_hooks).

-export([postcommit_batch/1]).

postcommit_batch(Object) ->
    Data = binary_to_term(riak_object:get_value(Object)),
    file:write_file("/tmp/riak.tmp", "test"),
    io:format(standard_error, "~p~n", [Data]),
    Object.


And whenever I create an object in a bucket with this bucket type, no file is 
created and no log message is written (not in console.log, nor in error.log, not 
even in the Erlang shell when running riak attach). I am using riak-erlang-client 
to create the object remotely, with something like this:

%% Pid comes from a riakc_pb_socket connection, e.g.:
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
Object = riakc_obj:new({BucketType, Bucket}, Id, Payload, ContentType),
riakc_pb_socket:put(Pid, Object),


I am wondering if anyone has experienced a similar issue before, or if someone 
might have any tips on how to debug this problem.


This behaves the same way on OS X 10.11.3 (latest Homebrew build of riak) and 
on Debian Linux (latest riak).


Thanks


-- 
Adam Kovari



Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Alex Moore
Hi Vanessa,

You might have a problem with your delete function (depending on its
intended return value).
What does the return value of the delete() function indicate? Right now, if
an object existed and was deleted, the function will return true, and it
will only return false if the object didn't exist or consisted only of
tombstones.

If you never look at the object value returned by your fetchValue(key)
function, another potential optimization you could make is to only
return the HEAD / metadata:

FetchValue fv = new FetchValue.Builder(new Location(new Namespace("some_bucket"), key))
        .withOption(FetchValue.Option.HEAD, true)
        .build();

This would be more efficient because Riak won't have to send you the values
over the wire, if you only need the metadata.
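
For example, you can still read the causal context off a HEAD-only response
(a sketch; "client" is the same RiakClient instance as in your code):

FetchValue.Response response = client.execute(fv);
VClock vclock = response.getVectorClock();  // metadata still available
boolean present = !response.isNotFound();   // existence check, no value transferred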

If you do write this up somewhere, share the link! :)

Thanks,
Alex


On Mon, Feb 22, 2016 at 6:23 AM, Vanessa Williams <
vanessa.willi...@thoughtwire.ca> wrote:

> Hi Dmitri, this thread is old, but I read this part of your answer
> carefully:
>
> You can use the following strategies to prevent stale values, in
>> increasing order of security/preference:
>> 1) Use timestamps (and not pass in vector clocks/causal context). This is
>> ok if you're not editing objects, or you're ok with a bit of risk of stale
>> values.
>> 2) Use causal context correctly (which means, read-before-you-write -- in
>> fact, the Update operation in the java client does this for you, I think).
>> And if Riak can't determine which version is correct, it will fall back on
>> timestamps.
>> 3) Turn on siblings, for that bucket or bucket type.  That way, Riak will
>> still try to use causal context to decide the right value. But if it can't
>> decide, it will store BOTH values, and give them back to you on the next
>> read, so that your application can decide which is the correct one.
>
>
> I decided on strategy #2. What I am hoping for is some validation that the
> code we use to "get", "put", and "delete" is correct in that context, or if
> it could be simplified in some cases. Not we are using delete-mode
> "immediate" and no duplicates.
>
> In their shortest possible forms, here are the three methods I'd like some
> feedback on (note, they're being used in production and haven't caused any
> problems yet, however we have very few writes in production so the lack of
> problems doesn't support the conclusion that the implementation is
> correct.) Note all argument-checking, exception-handling, and logging
> removed for clarity. *I'm mostly concerned about correct use of causal
> context and response.isNotFound and response.hasValues. *Is there
> anything I could/should have left out?
>
> public RiakClient(String name, com.basho.riak.client.api.RiakClient
> client)
> {
> this.name = name;
> this.namespace = new Namespace(name);
> this.client = client;
> }
>
> public byte[] get(String key) throws ExecutionException,
> InterruptedException {
>
> FetchValue.Response response = fetchValue(key);
> if (!response.isNotFound())
> {
> RiakObject riakObject = response.getValue(RiakObject.class);
> return riakObject.getValue().getValue();
> }
> return null;
> }
>
> public void put(String key, byte[] value) throws ExecutionException,
> InterruptedException {
>
> // fetch in order to get the causal context
> FetchValue.Response response = fetchValue(key);
> RiakObject storeObject = new
>
> RiakObject().setValue(BinaryValue.create(value)).setContentType("binary/octet-stream");
> StoreValue.Builder builder =
> new StoreValue.Builder(storeObject).withLocation(new
> Location(namespace, key));
> if (response.getVectorClock() != null) {
> builder = builder.withVectorClock(response.getVectorClock());
> }
> StoreValue storeValue = builder.build();
> client.execute(storeValue);
> }
>
> public boolean delete(String key) throws ExecutionException,
> InterruptedException {
>
> // fetch in order to get the causal context
> FetchValue.Response response = fetchValue(key);
> if (!response.isNotFound())
> {
> DeleteValue deleteValue = new DeleteValue.Builder(new
> Location(namespace, key)).build();
> client.execute(deleteValue);
> }
> return !response.isNotFound() || !response.hasValues();
> }
>
>
> Any comments much appreciated. I want to provide a minimally correct
> example of simple client code somewhere (GitHub, blog post, something...)
> so I don't want to post this without review.
>
> Thanks,
> Vanessa
>
> ThoughtWire Corporation
> http://www.thoughtwire.com
>
>
>
>
> On Thu, Oct 8, 2015 at 8:45 AM, Dmitri Zagidulin 
> wrote:
>
>> Hi Vanessa,
>>
>> The thing to keep in mind about read repair is -- it happens
>> asynchronously on every GET, but /after/ the results are ret

Re: Custom Object Mapper settings in Java Client

2016-02-22 Thread Cosmin Marginean
Thank you, Vitaly, will give that a go.

On Mon, Feb 22, 2016 at 9:54 AM, Vitaly E <13vitam...@gmail.com> wrote:

> Hi Cosmin,
>
> Have a look at com.basho.riak.client.api.convert.ConverterFactory. It's a
> singleton, you can register a custom converter there (the default for
> classes other than String and RiakObject is
> com.basho.riak.client.api.convert.JSONConverter).
>
> It's also possible to pass a custom converter to the FetchValue API, for
> instance via
> com.basho.riak.client.api.commands.kv.KvResponseBase.getValues(com.basho.riak.client.api.convert.Converter).
>
> I think this is the right way to add custom serialization.
>
> Regards,
> Vitaly
>
> On Mon, Feb 22, 2016 at 11:29 AM, Cosmin Marginean <
> cos.margin...@gmail.com> wrote:
>
>> Hi
>>
>> I presume that Riak Java client is using Jackson for JSON-to-POJO and
>> vice versa.
>>
>> Is there a way to easily inject a custom object mapper there? Or at least
>> to get a reference to it in order to add custom serializers?
>>
>> Thank you
>> Cosmin
>>
>>
>>
>


Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Vanessa Williams
Hi Dmitri, this thread is old, but I read this part of your answer
carefully:

You can use the following strategies to prevent stale values, in increasing
> order of security/preference:
> 1) Use timestamps (and not pass in vector clocks/causal context). This is
> ok if you're not editing objects, or you're ok with a bit of risk of stale
> values.
> 2) Use causal context correctly (which means, read-before-you-write -- in
> fact, the Update operation in the java client does this for you, I think).
> And if Riak can't determine which version is correct, it will fall back on
> timestamps.
> 3) Turn on siblings, for that bucket or bucket type.  That way, Riak will
> still try to use causal context to decide the right value. But if it can't
> decide, it will store BOTH values, and give them back to you on the next
> read, so that your application can decide which is the correct one.


I decided on strategy #2. What I am hoping for is some validation that the
code we use to "get", "put", and "delete" is correct in that context, or
advice on whether it could be simplified in some cases. Note: we are using
delete-mode "immediate" and no duplicates.

In their shortest possible forms, here are the three methods I'd like some
feedback on. (They're being used in production and haven't caused any
problems yet; however, we have very few writes in production, so the lack
of problems doesn't support the conclusion that the implementation is
correct.) All argument-checking, exception-handling, and logging has been
removed for clarity. *I'm mostly concerned about correct use of causal
context and response.isNotFound and response.hasValues.* Is there anything
I could/should have left out?

public RiakClient(String name, com.basho.riak.client.api.RiakClient client)
{
    this.name = name;
    this.namespace = new Namespace(name);
    this.client = client;
}

public byte[] get(String key) throws ExecutionException, InterruptedException {

    FetchValue.Response response = fetchValue(key);
    if (!response.isNotFound())
    {
        RiakObject riakObject = response.getValue(RiakObject.class);
        return riakObject.getValue().getValue();
    }
    return null;
}

public void put(String key, byte[] value) throws ExecutionException, InterruptedException {

    // fetch in order to get the causal context
    FetchValue.Response response = fetchValue(key);
    RiakObject storeObject = new RiakObject()
            .setValue(BinaryValue.create(value))
            .setContentType("binary/octet-stream");
    StoreValue.Builder builder =
            new StoreValue.Builder(storeObject).withLocation(new Location(namespace, key));
    if (response.getVectorClock() != null) {
        builder = builder.withVectorClock(response.getVectorClock());
    }
    StoreValue storeValue = builder.build();
    client.execute(storeValue);
}

public boolean delete(String key) throws ExecutionException, InterruptedException {

    // fetch in order to get the causal context
    FetchValue.Response response = fetchValue(key);
    if (!response.isNotFound())
    {
        DeleteValue deleteValue = new DeleteValue.Builder(new Location(namespace, key)).build();
        client.execute(deleteValue);
    }
    return !response.isNotFound() || !response.hasValues();
}
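
For context, this is roughly how the class gets used (host and bucket name
are illustrative; newClient can also throw UnknownHostException):

com.basho.riak.client.api.RiakClient raw =
        com.basho.riak.client.api.RiakClient.newClient("127.0.0.1");
RiakClient store = new RiakClient("some_bucket", raw);
store.put("k1", "hello".getBytes());
byte[] v = store.get("k1");           // null when the key is absent
boolean existed = store.delete("k1"); // true iff a value was actually there
raw.shutdown();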


Any comments much appreciated. I want to provide a minimally correct
example of simple client code somewhere (GitHub, blog post, something...)
so I don't want to post this without review.

Thanks,
Vanessa

ThoughtWire Corporation
http://www.thoughtwire.com




On Thu, Oct 8, 2015 at 8:45 AM, Dmitri Zagidulin 
wrote:

> Hi Vanessa,
>
> The thing to keep in mind about read repair is -- it happens
> asynchronously on every GET, but /after/ the results are returned to the
> client.
>
> So, when you issue a GET with r=1, the coordinating node only waits for 1
> of the replicas before responding to the client with a success, and only
> afterwards triggers read-repair.
>
> It's true that with notfound_ok=false, it'll wait for the first
> non-missing replica before responding. But if you edit or update your
> objects at all, an R=1 still gives you a risk of stale values being
> returned.
>
> For example, say you write an object with value A.  And let's say your 3
> replicas now look like this:
>
> replica 1: A,  replica 2: A, replica 3: notfound/missing
>
> A read with an R=1 and notfound_ok=false is just fine, here. (Chances are,
> the notfound replica will arrive first, but the notfound_ok setting will
> force the coordinator to wait for the first non-empty value, A, and return
> it to the client. And then trigger read-repair).
>
> But what happens if you edit that same object, and give it a new value,
> B?  So, now, there's a chance that your replicas will look like this:
>
> replica 1: A, replica 2: B, replica 3: B.
>
> So now if you do a read with an R=1, there's a chance that replica 1, with
>

Increase number of partitions above 1024

2016-02-22 Thread Chathuri Gunawardhana
Hi,

It is not possible to increase the number of partitions above 1024; this
has been disabled via cuttlefish in riak.conf. When I try to increase
ring_size via riak.conf, the error suggests that I should configure a
partition count > 1024 via the advanced config file. But I couldn't find a
way to specify this in advanced.config. Can you please suggest how I can
do this?

Thank you very much!

-- 
Chathuri Gunawardhana


Re: Regarding the number of partitions in riak cluser

2016-02-22 Thread Chathuri Gunawardhana
I'm running from the master version.
I'm running on Ubuntu precise, and each instance has 10 GB RAM, 2 vCPUs,
and a 100 GB hard disk. Each instance is running a single riak node.
Altogether I have 10 instances (for 512 partitions) and 20 for 1024
partitions. The cluster works fine with 256 partitions or fewer, but not
with more. And yes, I have enough bandwidth.

Thank you very much!

On Mon, Feb 22, 2016 at 7:47 AM, Vitaly E <13vitam...@gmail.com> wrote:

> Hi Chathuri,
>
> Is it Riak KV? Which version?
>
> What hardware are you running your cluster on?
>
> How are the Riak nodes distributed over your physical machines?
>
> Do you have enough network bandwidth?
>
> Regards,
> Vitaly
>
> On Mon, Feb 22, 2016 at 12:35 AM, Chathuri Gunawardhana <
> lanch.gunawardh...@gmail.com> wrote:
>
>> Hi,
>>
>> For an experiment I need to run riak (around 100 instances) with a very
>> high number of partitions. But when I have 512 partitions, even with 10 or
>> 20 instances, there are a lot of timeouts on reads and writes, and it
>> takes a long time to start a node. Can you please tell me why this happens?
>>
>> Thank you very much!
>>
>> --
>> Chathuri Gunawardhana
>>
>>
>>
>>
>


-- 
Chathuri Gunawardhana


riak crash

2016-02-22 Thread Raviraj Vaishampayan
Hi,

We have been using riak to gather our test data and analyze results after 
each test completes.
Recently we have observed a riak crash in the riak console logs.
This causes our tests to fail to record data to riak and bail out :-(

The crash logs are as follow:
2016-02-19 16:25:26.255 [error] <0.2160.0> gen_fsm <0.2160.0> in state active 
terminated with reason: no function clause matching 
riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
{state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
 line 1195

2016-02-19 16:25:26.260 [error] <0.2160.0> CRASH REPORT Process <0.2160.0> with 
2 neighbours exited with reason: no function clause matching 
riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
{state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
 line 1195 in gen_fsm:terminate/7 line 622

2016-02-19 16:25:26.260 [error] <0.172.0> Supervisor riak_core_vnode_sup had 
child undefined started with {riak_core_vnode,start_link,undefined} at 
<0.2160.0> exit with reason no function clause matching 
riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
{state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
 line 1195 in context child_terminated

2016-02-19 16:25:26.261 [error] <0.4319.0> gen_fsm <0.4319.0> in state ready 
terminated with reason: no function clause matching 
riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
{state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
 line 1195

2016-02-19 16:25:26.275 [error] <0.4319.0> CRASH REPORT Process <0.4319.0> with 
10 neighbours exited with reason: no function clause matching 
riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
{state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
 line 1195 in gen_fsm:terminate/7 line 622

2016-02-19 16:25:26.278 [error] <0.4320.0> Supervisor {<0.4320.0>,poolboy_sup} 
had child riak_core_vnode_worker started with 
riak_core_vnode_worker:start_link([{worker_module,riak_core_vnode_worker},{worker_args,[268322566228720457638957762256505085639956365312,...]},...])
 at undefined exit with reason no function clause matching 
riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
{state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
 line 1195 in context shutdown_error

2016-02-19 16:25:26.278 [error] <0.4320.0> gen_server <0.4320.0> terminated 
with reason: no function clause matching 
riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
{state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
 line 1195

2016-02-19 16:25:26.278 [error] <0.4320.0> CRASH REPORT Process <0.4320.0> with 
0 neighbours exited with reason: no function clause matching 
riak_kv_vnode:handle_info({#Ref<0.0.482.161540>,{ok,<0.11042.842>}}, 
{state,268322566228720457638957762256505085639956365312,riak_kv_eleveldb_backend,true,{state,<<>>,...},...})
 line 1195 in gen_server:terminate/6 line 744

2016-02-19 16:25:26.806 [error] <0.2157.0> gen_fsm <0.2157.0> in state active 
terminated with reason: {timeout,{gen_server,call,[<0.5141.0>,stop]}}

2016-02-19 16:25:26.808 [error] <0.2157.0> CRASH REPORT Process <0.2157.0> with 
2 neighbours exited with reason: {timeout,{gen_server,call,[<0.5141.0>,stop]}} 
in gen_fsm:terminate/7 line 600

2016-02-19 16:25:26.809 [error] <0.5450.0> gen_fsm <0.5450.0> in state ready 
terminated with reason: {timeout,{gen_server,call,[<0.5141.0>,stop]}}

2016-02-19 16:25:26.809 [error] <0.172.0> Supervisor riak_core_vnode_sup had 
child undefined started with {riak_core_vnode,start_link,undefined} at 
<0.2157.0> exit with reason {timeout,{gen_server,call,[<0.5141.0>,stop]}} in 
context child_terminated

2016-02-19 16:25:26.809 [error] <0.5450.0> CRASH REPORT Process <0.5450.0> with 
10 neighbours exited with reason: {timeout,{gen_server,call,[<0.5141.0>,stop]}} 
in gen_fsm:terminate/7 line 622

2016-02-19 16:25:26.809 [error] <0.5451.0> Supervisor {<0.5451.0>,poolboy_sup} 
had child riak_core_vnode_worker started with 
riak_core_vnode_worker:start_link([{worker_module,riak_core_vnode_worker},{worker_args,[211232658520482062396626323478525280184646500352,...]},...])
 at undefined exit with reason {timeout,{gen_server,call,[<0.5141.0>,stop]}} in 
context shutdown_error

2016-02-19 16:25:26.809 [error] <0.5451.0> gen_server <0.5451.0> terminated 
with reason: {timeout,{gen_server,call,[<0.5141.0>,stop]}}

2016-02-19 16:25:26.809 [error] <0.5451.0> CRASH REPORT Process <0.5451.0> with 
0 neighbours exited with reason: {timeout,{gen_server,call,[<0.5141.0>,stop]}} 
in gen_server:terminate/6 line 744

Our setup is as follows:
We have a riak cluster 

Re: Custom Object Mapper settings in Java Client

2016-02-22 Thread Vitaly E
Hi Cosmin,

Have a look at com.basho.riak.client.api.convert.ConverterFactory. It's a
singleton; you can register a custom converter there (the default for
classes other than String and RiakObject is
com.basho.riak.client.api.convert.JSONConverter).

It's also possible to pass a custom converter to the FetchValue API, for
instance via
com.basho.riak.client.api.commands.kv.KvResponseBase.getValues(com.basho.riak.client.api.convert.Converter).

I think this is the right way to add custom serialization.
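
For example (MyPojo and MyPojoConverter stand in for your own types):

ConverterFactory.getInstance().registerConverterForClass(MyPojo.class, new MyPojoConverter());

After that, fetching and storing MyPojo values goes through your converter
instead of the default JSONConverter.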

Regards,
Vitaly

On Mon, Feb 22, 2016 at 11:29 AM, Cosmin Marginean 
wrote:

> Hi
>
> I presume that Riak Java client is using Jackson for JSON-to-POJO and vice
> versa.
>
> Is there a way to easily inject a custom object mapper there? Or at least
> to get a reference to it in order to add custom serializers?
>
> Thank you
> Cosmin
>
>
>


Custom Object Mapper settings in Java Client

2016-02-22 Thread Cosmin Marginean
Hi

I presume that Riak Java client is using Jackson for JSON-to-POJO and vice
versa.

Is there a way to easily inject a custom object mapper there? Or at least
to get a reference to it in order to add custom serializers?

Thank you
Cosmin