Re:

2013-09-09 Thread Alex Moore
Hi David,

Have you ever started up this node prior to changing the IP address?

Can you also grep your config files for "127.0.0.1"? If there aren't any settings with that IP, can you reply with your config files so we can take a look?

Thanks,
Alex Moore

On September 9, 2013 at 7:40:10 AM, David Montgomery (davidmontgom...@gmail.com) wrote:

Why do I get errors restarting riak when I use AWS EIPs? I changed the IP address per this doc: http://docs.basho.com/riak/1.2.1/cookbooks/Basic-Cluster-Setup/ I am using the latest version of riak for ubuntu 12.04.

I changed the 127.0.0.1 to the EIP. It should work, yet riak will not start. Is there an issue I am missing? How do I resolve it?

console.log:

2013-09-09 11:28:28.211 [info] <0.7.0> Application webmachine started on node 'riak@54.247.68.179'
2013-09-09 11:28:28.211 [info] <0.7.0> Application basho_stats started on node 'riak@54.247.68.179'
2013-09-09 11:28:28.229 [info] <0.7.0> Application bitcask started on node 'riak@54.247.68.179'
2013-09-09 11:28:29.385 [error] <0.172.0> CRASH REPORT Process <0.172.0> with 0 neighbours exited with reason: eaddrnotavail in gen_server:init_it/6 line 320
2013-09-09 11:28:29.385 [error] <0.138.0> Supervisor riak_core_sup had child "http_54.247.68.179:8098" started with webmachine_mochiweb:start([{name,"http_54.247.68.179:8098"},{ip,"54.247.68.179"},{p$
2013-09-09 11:28:29.387 [info] <0.7.0> Application riak_core exited with reason: {shutdown,{riak_core_app,start,[normal,[]]}}

error.log:

2013-09-09 11:08:13.109 [error] <0.138.0> Supervisor riak_core_sup had child riak_core_capability started with riak_core_capability:start_link() at <0.156.0> exit with reason no function clause match$
2013-09-09 11:08:13.110 [error] <0.136.0> CRASH REPORT Process <0.136.0> with 0 neighbours exited with reason: {{function_clause,[{orddict,fetch,['riak@10.239.130.225',[{'riak@127.0.0.1',[{{riak_cont$
2013-09-09 11:08:34.956 [error] <0.156.0> gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@10.239.130.225', [{'riak@127.0.0.1',[{{riak_control,m$
2013-09-09 11:08:34.957 [error] <0.156.0> CRASH REPORT Process riak_core_capability with 0 neighbours exited with reason: no function clause matching orddict:fetch('riak@10.239.130.225', [{'riak@127.$
2013-09-09 11:08:34.957 [error] <0.140.0> Supervisor riak_core_sup had child riak_core_capability started with riak_core_capability:start_link() at <0.156.0> exit with reason no function clause match$
2013-09-09 11:08:34.958 [error] <0.138.0> CRASH REPORT Process <0.138.0> with 0 neighbours exited with reason: {{function_clause,[{orddict,fetch,['riak@10.239.130.225',[{'riak@127.0.0.1',[{{riak_cont$
2013-09-09 11:10:56.863 [error] <0.154.0> gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@54.247.68.179', []) line 72
2013-09-09 11:10:56.864 [error] <0.154.0> CRASH REPORT Process riak_core_capability with 0 neighbours exited with reason: no function clause matching orddict:fetch('riak@54.247.68.179', []) line 72 i$
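For reference, a minimal sketch of the places the node address shows up in a Riak 1.2-era install (values hedged, adjust to your setup). Note that on EC2 an elastic IP is NAT'd rather than bound to the instance's interface, so binding the EIP itself produces the eaddrnotavail crash above -- bind the private IP or 0.0.0.0 instead:

    %% app.config -- HTTP listener (riak_core) and PB listener (riak_kv)
    {http, [{"0.0.0.0", 8098}]},
    {pb_ip, "0.0.0.0"},

    ## vm.args -- the Erlang node name may use the routable address
    -name riak@54.247.68.179

The orddict:fetch('riak@...', [{'riak@127.0.0.1', ...}]) errors suggest the ring was created while the node was still named riak@127.0.0.1; a hedged fix is to rewrite it with `riak-admin reip riak@127.0.0.1 riak@54.247.68.179` (or remove the ring directory on a node that holds no data) before starting the node again.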
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Deleting data from bitcask backend

2013-09-16 Thread Alex Moore
Hi Charl,

> The problem is that even though documents seem to no longer be
> available (doing a GET on a deleted document returns an expected 404)
> the disk usage does not seem to be reducing much and has currently been at
> ~80% utilisation across all nodes for almost a week.

When you delete a document, a tombstone record is written to bitcask, and the
reference to the key is removed from memory (which is why you get 404's).  The
old entry isn't actually removed until the next bitcask merge.

> At first I thought the large amount of deletes being performed might be
> causing fragmentation of the merge index, so I've been regularly
> running forced compaction as documented here:
> https://gist.github.com/rzezeski/3996286.

That merge index is for Riak Search, not bitcask.

There are ways of forcing a merge, but let's double-check your settings/logs
first. Can you send me your app.config and a console.log from one of your nodes?

Thanks,
Alex 

 -- 
Alex Moore
Sent with Airmail

On September 16, 2013 at 4:43:07 AM, Charl Matthee (ch...@ntrippy.net) wrote:

Hi, 

We have an 8-node riak v1.4.0 cluster writing data to bitcask backends. 

We've recently started running out of disk across all nodes and so 
implemented a 30-day sliding window data retention policy. This policy 
is enforced by a go app that concurrently deletes documents outside 
the window. 

The problem is that even though documents seem to no longer be 
available (doing a GET on a deleted document returns an expected 404), 
the disk usage does not seem to be reducing much and has currently been at 
~80% utilisation across all nodes for almost a week. 

At first I thought the large amount of deletes being performed might be 
causing fragmentation of the merge index, so I've been regularly 
running forced compaction as documented here: 
https://gist.github.com/rzezeski/3996286. 

This has helped somewhat, but I suspect it has reached the limits of 
what can be done, so I wonder if there is further fragmentation 
elsewhere that is not being compacted. 

Could this be an issue? How can I tell whether merge indexes or 
something else needs compaction/attention? 

Our nodes were initially configured to run with the default settings 
for the bitcask backend but when this all started I switched to the 
following to try and see if I can trigger compaction more frequently: 

{bitcask, [
    %% Configure how Bitcask writes data to disk.
    %%   erlang: Erlang's built-in file API
    %%   nif: Direct calls to the POSIX C API
    %%
    %% The NIF mode provides higher throughput for certain
    %% workloads, but has the potential to negatively impact
    %% the Erlang VM, leading to higher worst-case latencies
    %% and possible throughput collapse.
    {io_mode, erlang},

    {data_root, "/var/lib/riak/bitcask"},

    {frag_merge_trigger, 40},             %% trigger merge if fragmentation
                                          %% is > 40%; default is 60%
    {dead_bytes_merge_trigger, 67108864}, %% trigger if dead bytes for
                                          %% keys > 64MB; default is 512MB
    {frag_threshold, 20},                 %% fragmentation >= 20%; default is 40
    {dead_bytes_threshold, 67108864}      %% trigger if dead bytes for
                                          %% data > 64MB; default is 128MB
]},

From my observations this change did not make much of a difference. 

The data we're inserting is hierarchical JSON data that roughly falls 
into the following size (in bytes) profile: 

Max: 10320 
Min: 1981 
Avg: 3707 
Med: 2905 

-- 
Ciao 

Charl 

I will either find a way, or make one. -- Hannibal 

___ 
riak-users mailing list 
riak-users@lists.basho.com 
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com ___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Deleting data from bitcask backend

2013-09-16 Thread Alex Moore
Hey Charl,

Thanks for the logs and the config. I do see bitcask merges occurring in your console.log, so it's not stuck.

I would recommend running "riak-admin vnode-status" during an off-peak time like Evan mentioned; that way we can see what your dead bytes/fragmentation levels look like. This will also tell us whether you are hitting the triggers or whether they need to be adjusted more.

One thing I did notice is that your dead_bytes_merge_trigger and dead_bytes_threshold are both set to 64MB. This means that a merge will be triggered when a bitcask file has more than 64MB of dead objects in it (dead_bytes_merge_trigger), and only files with more than 64MB of dead objects will be merged (dead_bytes_threshold). If you want more than that single file to be merged, you can reduce dead_bytes_threshold further so it can include files that are nearing the limit.

Thanks,
Alex Moore

On September 16, 2013 at 1:26:05 PM, Evan Vigil-McClanahan (emcclana...@basho.com) wrote:

riak-admin vnode-status can be used to get information about the
number of bitcask files, their fragmentation and dead bytes, but since
it uses a lot of blocking vnode commands, it can spike latencies, so
should only be used off-peak.
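For reference, a hedged sketch of the threshold change described above, in the bitcask section of app.config (the 32MB figure is purely illustrative):

    {dead_bytes_merge_trigger, 67108864},  %% 64MB -- unchanged, starts the merge
    {dead_bytes_threshold,     33554432},  %% 32MB -- lower, so files nearing the
                                           %% limit join the merge as well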

On Mon, Sep 16, 2013 at 7:36 AM, Alex Moore amo...@basho.com wrote:
 Hi Charl,

 The problem is that even though documents seem to no longer be
 available (doing a GET on a deleted document returns an expected 404)
 the disk usage is not seeming reducing much and has currently been at
 ~80% utilisation across all nodes for almost a week.

 When you delete a document, a tombstone record is written to bitcask, and
 the reference to the key is removed from memory (which is why you get
 404's).  The old entry isn't actually removed until the next bitcask merge.

 At first I though the large amount of deletes being performed might be
 causing fragmentation of the merge index so I've been regularly
 running forced compaction as documented here:
 https://gist.github.com/rzezeski/3996286.

 That merge index is for Riak Search, not bitcask.

 There are ways of forcing a merge, but let's double check your settings/logs
 first. Can you send me your app.config and a console.log from one of your
 nodes?

 Thanks,
 Alex

  --
 Alex Moore
 Sent with Airmail

 On September 16, 2013 at 4:43:07 AM, Charl Matthee (ch...@ntrippy.net)
 wrote:

 Hi,

 We have a 8-node riak v1.4.0 cluster writing data to bitcask backends.

 We've recently started running out of disk across all nodes and so
 implemented a 30-day sliding window data retention policy. This policy
 is enforced by a go app that concurrently deletes documents outside
 the window.

 The problem is that even though documents seem to no longer be
 available (doing a GET on a deleted document returns an expected 404)
 the disk usage is not seeming reducing much and has currently been at
 ~80% utilisation across all nodes for almost a week.

 At first I though the large amount of deletes being performed might be
 causing fragmentation of the merge index so I've been regularly
 running forced compaction as documented here:
 https://gist.github.com/rzezeski/3996286.

 This has helped somewhat but I suspect it has reached the limits of
 what can be done so I wonder if there is not further fragmentation
 elsewhere that is not being compacted.

 Could this be an issue? How can I tell whether merge indexes or
 something else needs compaction/attention?

 Our nodes were initially configured to run with the default settings
 for the bitcask backend but when this all started I switched to the
 following to try and see if I can trigger compaction more frequently:

 {bitcask, [
 %% Configure how Bitcask writes data to disk.
 %% erlang: Erlang's built-in file API
 %% nif: Direct calls to the POSIX C API
 %%
 %% The NIF mode provides higher throughput for certain
 %% workloads, but has the potential to negatively impact
 %% the Erlang VM, leading to higher worst-case latencies
 %% and possible throughput collapse.
 {io_mode, erlang},

 {data_root, "/var/lib/riak/bitcask"},

 {frag_merge_trigger, 40}, %% trigger merge if
 fragmentation is > 40%; default is 60%
 {dead_bytes_merge_trigger, 67108864}, %% trigger if dead
 bytes for keys > 64MB; default is 512MB
 {frag_threshold, 20}, %% fragmentation >= 20%; default is 40
 {dead_bytes_threshold, 67108864} %% trigger if dead bytes
 for data > 64MB; default is 128MB
 ]},

 From my observations this change did not make much of a difference.

 The data we're inserting is hierarchical JSON data that roughly falls
 into the following size (in bytes) profile:

 Max: 10320
 Min: 1981
 Avg: 3707
 Med: 2905

 --
 Ciao

 Charl

 "I will either find a way, or make one." -- Hannibal

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



Re: member-status pending section

2013-09-17 Thread Alex Moore
Hi Thomas,

> host / # riak-admin member-status
> = Membership ==
> Status     Ring     Pending    Node
> ---
> valid      37.5%    34.4%      'ri...@10.xxx'
> valid      32.8%    32.8%      'ri...@10.xxx'
> valid      29.7%    32.8%      'ri...@10.xxx'
> ---
> Valid:3 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
>
> Is the pending section not supposed to be 0%?

Pending is what the ring will look like when transfers are complete. Once the cluster is done transferring partitions it will display "--" instead of a percentage.

> riak-admin transfers looks suspicious too.
>
> 'ri...@10.xxx' waiting to handoff 3 partitions
> 'ri...@10.xxx' waiting to handoff 3 partitions
> 'ri...@10.xxx' waiting to handoff 5 partitions
>
> Active Transfers:
>
> transfer type: ownership_transfer
> vnode type: riak_kv_vnode
> partition: 1210306043414653979137426502093171875652569137152
> started: 2013-09-17 21:14:55 [20.78 s ago]
> last update: no updates seen
> total size: unknown
> objects transferred: unknown
>
> unknown
> ri...@10.xxx ===> ri...@10.xxx
>     |                         |   0%
> unknown
>
> transfer type: ownership_transfer
> vnode type: riak_kv_vnode
> partition: 936274486415109681974235595958868809467081785344
> started: 2013-09-17 21:15:05 [10.78 s ago]
> last update: no updates seen
> total size: unknown
> objects transferred: unknown
>
> unknown
> ri...@10.xxx ===> ri...@10.xxx
>     |                         |   0%
> unknown
>
> Is there a way to repair this?

Is it stuck at 0%? Can you provide the output of `riak-admin ring-status` and `riak-admin transfer-limit`?

Thanks,
Alex
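For reference, a hedged sketch of the commands in question (the node name is hypothetical):

    riak-admin ring-status                       # claimant state and any pending ownership changes
    riak-admin transfer-limit                    # current per-node handoff concurrency
    riak-admin transfer-limit riak@10.0.0.1 4    # hypothetical: raise one node's limit to 4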
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re:

2013-09-18 Thread Alex Moore
Hi Markus,

With Riak 1.4, 2i results are sorted in ascending order, so a range query with max_results=1 will always return the lowest index value no matter which way round you give the endpoints. In your case you may want to store the largest key # somewhere, or if you only need to find it infrequently you may also try a map reduce job. How often will you need to know this max key?

Thanks,
Alex Moore

On September 18, 2013 at 5:56:22 AM, Markus Doppelbauer (doppelba...@gmx.net) wrote:

Hello,

Is there a chance to get the biggest key of a bucket or secondary index?
E.g.: A bucket (or 2i) contains the keys: 10, 20, 30, 40, 50
The result should be: 50

Could this problem be solved by querying a secondary index from the back, e.g.:
curl http://localhost:8098/buckets/mybucket/index/field_int/999/0?max_results=1

Thanks a lot
Markus
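A hedged sketch of the map reduce route with the Erlang PB client (bucket and index names are taken from the curl example above, and this assumes your client supports 2i inputs to mapred, which shipped in the 1.4-era clients). riak_kv_mapreduce:reduce_identity passes the matching bucket/key pairs back, and the maximum is taken client-side:

    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    %% Feed the 2i range straight into MapReduce and pass the keys through.
    {ok, [{_, BKeys}]} = riakc_pb_socket:mapred(
        Pid,
        {index, <<"mybucket">>, <<"field_int">>, 0, 999},
        [{reduce, {modfun, riak_kv_mapreduce, reduce_identity}, none, true}]),
    %% Keys come back as [Bucket, Key] pairs; compare them numerically.
    MaxKey = lists:max([list_to_integer(binary_to_list(K)) || [_B, K] <- BKeys]).

Like any MapReduce job, this scans the whole range, so it is best kept off hot paths.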


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using /etc/hosts entry and not ip in configuration files ..

2013-09-19 Thread Alex Moore
Hi Mike,

> Just to make sure: Riak only supports IP addresses and not DNS names in the config files. Though in your docs it still says: "Riak identifies other machines in the ring using Erlang identifiers (hostname or IP, ex: riak@10.9.8.7)."

Riak only accepts IP addresses in the app.config file, but the vm.args "-name" parameter can use either full hostnames or IP addresses. You can read about the -name parameter at http://www.erlang.org/doc/reference_manual/distributed.html.

> The error logged when changing our config from something like 172.31.41.138 to riaknode2-ext is:
> 2013-09-19 21:49:53.383 [error] <0.113.0> Supervisor riak_core_sup had child "http_riaknode2-ext:8098" started with webmachine_mochiweb:start([{name,"http_riaknode2-ext:8098"},{ip,"riaknode2-ext"},{port,8098},{nodelay,true},{log_dir,"log"},...]) at undefined exit with reason {'EXIT',{{badmatch,{error,einval}},[{mochiweb_socket_server,parse_options,2},{mochiweb_socket_server,start,1},{supervisor,do_start_child,2},{supervisor,start_children,3},{supervisor,init_children,2},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}} in context start_error

If you've previously started the node with another "-name" parameter, you will have to do the renaming procedure instead of just changing the "-name" value: http://docs.basho.com/riak/latest/ops/running/nodes/renaming/.

Thanks,
Alex

On September 19, 2013 at 6:52:29 PM, Mike Nathe (mna...@fathom-i.com) wrote:

Hi.

We are trying to run Riak in the Amazon cloud (using OpsWorks). With every restart of the servers the IP addresses change, so using an entry in /etc/hosts instead of an IP looks like a great idea.

Google's answer is: http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-December/006912.html

Just to make sure: Riak only supports IP addresses and not DNS names in the config files. Though in your docs it still says: "Riak identifies other machines in the ring using Erlang identifiers (hostname or IP, ex: riak@10.9.8.7)."

The error logged when changing our config from something like 172.31.41.138 to riaknode2-ext is:

2013-09-19 21:49:53.383 [error] <0.113.0> Supervisor riak_core_sup had child "http_riaknode2-ext:8098" started with webmachine_mochiweb:start([{name,"http_riaknode2-ext:8098"},{ip,"riaknode2-ext"},{port,8098},{nodelay,true},{log_dir,"log"},...]) at undefined exit with reason {'EXIT',{{badmatch,{error,einval}},[{mochiweb_socket_server,parse_options,2},{mochiweb_socket_server,start,1},{supervisor,do_start_child,2},{supervisor,start_children,3},{supervisor,init_children,2},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}} in context start_error

Thanks a bunch.
Have a great day.

Michael
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using /etc/hosts entry and not ip in configuration files ..

2013-09-19 Thread Alex Moore
Hey Mike,

Another option for your app.config would be to bind everything to "0.0.0.0", which will make Riak listen on all interfaces. This, combined with using a domain name in the vm.args, should keep you from having to do renames or change IPs in the config files. If you do this, please consider using a VPC to limit access to the machines.

A good resource to read over is Amazon's Riak whitepaper (http://media.amazonwebservices.com/AWS_NoSQL_Riak.pdf), as it goes through some operational considerations and tuning points that might be useful to you.

Thanks,
Alex

On September 19, 2013 at 6:52:29 PM, Mike Nathe (mna...@fathom-i.com) wrote:

Hi.

We are trying to run Riak in the Amazon cloud (using OpsWorks). With every restart of the servers the IP addresses change, so using an entry in /etc/hosts instead of an IP looks like a great idea.

Google's answer is: http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-December/006912.html

Just to make sure: Riak only supports IP addresses and not DNS names in the config files. Though in your docs it still says: "Riak identifies other machines in the ring using Erlang identifiers (hostname or IP, ex: riak@10.9.8.7)."

The error logged when changing our config from something like 172.31.41.138 to riaknode2-ext is:

2013-09-19 21:49:53.383 [error] <0.113.0> Supervisor riak_core_sup had child "http_riaknode2-ext:8098" started with webmachine_mochiweb:start([{name,"http_riaknode2-ext:8098"},{ip,"riaknode2-ext"},{port,8098},{nodelay,true},{log_dir,"log"},...]) at undefined exit with reason {'EXIT',{{badmatch,{error,einval}},[{mochiweb_socket_server,parse_options,2},{mochiweb_socket_server,start,1},{supervisor,do_start_child,2},{supervisor,start_children,3},{supervisor,init_children,2},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}} in context start_error

Thanks a bunch.
Have a great day.

Michael
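A minimal sketch of that binding, assuming a Riak 1.4-style app.config:

    %% riak_core section -- HTTP listener on all interfaces
    {http, [{"0.0.0.0", 8098}]},

    %% riak_api section -- protocol buffers listener on all interfaces
    {pb, [{"0.0.0.0", 8087}]},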
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Memory Backend TTL doesn't work

2013-10-14 Thread Alex Moore
Hi Chan,

Instead of:
{storage_backend, riak_kv_memory_backend},
{memory_backend, [
    {max_memory, 8192},
    {ttl, 1}
]},


Try:

{storage_backend, riak_kv_multi_backend}, %% needed for the multi_backend settings below to take effect
{multi_backend_default, expiring_memory_backend},
{multi_backend, [
    {expiring_memory_backend, riak_kv_memory_backend, [
        {max_memory, 8192}, %% 8GB
        {ttl, 1}
    ]}
]},
Also, for future reference, the max_memory field is in MB per vnode, so your 
current setting is 8GB per vnode.  Unless you have a ridiculous amount of RAM, 
you might want to reduce that setting a bit :)

Thanks, 

Alex

-- 
Alex Moore
Sent with Airmail

On October 14, 2013 at 7:32:03 AM, 성동찬_Chan (c...@kakao.com) wrote:

Hi~! 

I'm checking riak to use as a cache like memcached. 
But I found some strange situation. 
I set ttl like this to expire data, but failed. 
- 
{riak_kv, [ 
... 
{storage_backend, riak_kv_memory_backend}, 
{memory_backend, [ 
{max_memory, 8192}, 
{ttl, 1} 
]}, 
... 
]}, 

- 

TTL isn't supported on riak_kv_memory_backend? 
I changed riak_kv_memory_backend to riak_kv_bitcask_backend, that's fine. 

Thanks. 
Chan. 
___ 
riak-users mailing list 
riak-users@lists.basho.com 
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com ___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak secondary indices with Java Api

2013-11-25 Thread Alex Moore
Hi Shimon,

The `fetchIndex` method is only available on the RawClient interface, I’ll see 
if we can get that post updated to reflect that.  

If you’re using the IRiakClient interface (likely), try fetching the index from 
a bucket object instead:

```
List<String> adminUserKeys = userBucket.fetchIndex(BinIndex.named("user_group"))
                                       .withValue("administrator").execute();
```

Also, what version of the Java client are you using?

Thanks,
Alex Moore

On November 25, 2013 at 3:34:45 AM, Shimon Benattar (shim...@gmail.com) wrote:

Hi, Riak users,

 

I am trying to index my data with the Java API

 

I saw on the basho site (http://basho.com/index-for-fun-and-for-profit/) the 
following code, The first part of creating the index works fine.

My problem here is that under riakClient I do not have any mothod called 
fetchIndex.

 

Can anyone assist here (I tried both http and pbc clients)?

 

Bucket userBucket = riakClient.fetchBucket("users").execute();

IRiakObject userObject = userBucket.fetch("thevegan3000").execute();

userObject.addIndex("user_group_bin", "administrator");

userBucket.store(userObject).execute();

BinIndex binIndex = BinIndex.named("user_group_bin");

BinValueQuery indexQuery = new BinValueQuery(binIndex, "users", "administrator");

List<String> adminUserKeys = riakClient.fetchIndex(indexQuery);

 

Thanks,

 

Shimon

___  
riak-users mailing list  
riak-users@lists.basho.com  
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com  
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak nagios script

2013-12-10 Thread Alex Moore
Hi Kathleen,

If you'd like to run riak_nagios from the erl command line, you'll need to 
compile everything in src and include it in the path along with the getopt 
library.

You can compile everything with a simple call to make, and then include it in 
the path with "erl -pa deps/*/ebin ebin".
Once everything is loaded, you can call check_node:main(["--node", 
"dev1@127.0.0.1", "riak_kv_up"]). or something similar to run it.  The last 
parameter in the Args array will be the check to make.

Is there a reason you're running it this way instead of compiling it to an 
escript and running it from bash?

Thanks,
Alex Moore

On December 10, 2013 at 1:26:20 PM, kzhang (kzh...@wayfair.com) wrote:

Thanks Hector.  

Here is how I executed the script.  

I downloaded and installed the erlang shell from  
http://www.erlang.org/documentation/doc-5.3/doc/getting_started/getting_started.html
  

started erlang OTP:  

[root@MYRIAKNODE otp_src_R16B02]# erl -s toolbar  
Erlang R16B02 (erts-5.10.3) [source] [64-bit] [async-threads:10] [hipe]  
[kernel-poll:false]  

Eshell V5.10.3 (abort with ^G)  

grabbed the source code  
(https://github.com/basho/riak_nagios/blob/master/src/check_node.erl),  
compiled it:  
c(check_node).  

ran it:  
check_node:main([{node, 'xx.xx.xx.xx'}]).   

then got:  

** exception error: undefined function getopt:parse/2  
in function check_node:main/2 (check_node.erl, line 15)  

Here is where I am. I found this:  

https://github.com/jcomellas/getopt  

I grabbed the source code, compiled it under otp_src_R16B02.  

ran it again:  
2 check_node:main([{node, 'xx.xx.xx.xx'}]).  
UNKNOWN: invalid_option_arg {check,{node,'xx.xx.xx.xx'}}  

Am I on the right path?  

Thanks,  

Kathleen  












--  
View this message in context: 
http://riak-users.197444.n3.nabble.com/riak-nagios-script-tp4030025p4030037.html
  
Sent from the Riak Users mailing list archive at Nabble.com.  

___  
riak-users mailing list  
riak-users@lists.basho.com  
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com  
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: What will happen when I remove a node from the cluster?

2014-03-11 Thread Alex Moore
Hi Yaochitc,
Answers below:

On Mar 11, 2014, at 2:47 AM, yaochitc yaoch...@gmail.com wrote:

 Hello, I'm trying to do some tests with a riak cluster consisting of 8 nodes. I tried to 
 have a node leave the cluster, and I can see the ownership handoff through 
 ring-status. But when I tried to remove a node, no such process occurred. I 
 have several questions about node-leaving and node-removing. Please tell me 
 the answers if you know them, thanks a lot!
 1. After I have a node leave the cluster, it takes several hours before the 
 ownership handoff finishes, and more hours before the node repartitioning 
 finishes. During this time, is the cluster available for reading and writing?

Yes, the cluster is available for reading and writing while a node is being 
removed or added.

 2. I saw nothing like repartition happened between nodes still in the cluster 
 through ring-status command after a node removed, what will happen to make 
 the ring works well (I mean, operations like filling the lost copies on the 
 removed node) , if there are such processes, how can I observe them?  Is the 
 cluster available for reading and writing before they finish?

What process did you use to "leave" a node and "remove" a node?  
The correct process should be to run "riak-admin cluster leave <node>" for the 
node you'd like to take out of the cluster; this will hand off the node's 
partitions and shut it down afterwards. You can find more info about the 
command here: 
http://docs.basho.com/riak/latest/ops/running/nodes/adding-removing/#Removing-a-Node-From-a-Cluster.
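For reference, a hedged sketch of the staged workflow (node name hypothetical):

    riak-admin cluster leave riak@10.0.0.5   # stage the leave
    riak-admin cluster plan                  # review the resulting transition plan
    riak-admin cluster commit                # kick off ownership handoff
    riak-admin transfers                     # watch progress until handoff completes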

Thanks,
Alex

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak CS : 405 (Method Not Allowed) when creating a bucket

2014-03-14 Thread Alex Moore
Hi Mikhail,

 Look like a single point of failure.
 In current version situation is same?

Yes, this is a possible single point of failure, but Stanchion is only needed 
for the creation of buckets and user accounts. Object access is unaffected if 
Stanchion isn’t running.
If stanchion did die, the system would run at partial feature availability 
until you get it running again.  If the node irrecoverably died, you could just 
point all the RiakCS instances to a new stanchion node.
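For reference, a hedged sketch of that repoint in the riak_cs section of each CS node's app.config (address hypothetical):

    {stanchion_ip, "10.0.0.99"},   %% hypothetical replacement Stanchion node
    {stanchion_port, 8085},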

One of Riak 2.0’s features is strong consistency, so we plan on using that in 
the future instead of stanchion. For Riak 1.4.8 (current version), stanchion is 
required. 

Thanks,
Alex


On Mar 14, 2014, at 12:44 AM, ten ten@gmail.com wrote:

 Hello,
 
 Bucket creation is one of the operations handled by 'stanchion'.  So,
 make sure you've install stanchion installed on only one node and that
 all riak-cs nodes are pointing at that one stanchion node.
 
 Look like a single point of failure.
 In current version situation is same?
 
 Regards,
 Mikhail.
 
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Question about link-walk results returned by erlang_pb_client

2014-03-14 Thread Alex Moore
Hi fxmy,

 The return value is just Bucket/Key/link-tag pairs, without ObjectValue or 
 other metadata.
 {ok,[{0,[[<<"people">>,<<"timoreilly">>,<<"friend">>]]},
  {1,[[<<"people">>,<<"dhh">>,<<"friend">>]]}]}
 
 Is this intended or not?

This is intended.  You are running a map reduce job with one “link” stage, 
which does the link walking for you.   
Link stages only return a list of key/value/relation tuples.

 If so, what's the best way to get these ObjectValues through one single pass 
 of link-walking?

If you want to grab everything in one shot you would have to feed the link 
stage’s output into a map stage to grab the actual objects:

{ok, RiakObj} = riakc_pb_socket:mapred(Pid, [{<<"people">>, <<"timoreilly">>}],
    [{link, <<"people">>, <<"friend">>, false},
     {link, <<"people">>, <<"friend">>, false},
     {map, {modfun, riak_kv_mapreduce, map_identity}, none, true}]).

This should give you the entire object for the results of the last link phase, 
namely Dave Thomas’s.

I should warn you though that while link walking and map reduce let you do 
things like this in one shot, you should be cautious about using them in 
production, since a bad query can kill performance.  

Thanks,
Alex

On Mar 14, 2014, at 6:49 AM, fxmy wang fxm...@gmail.com wrote:

 
 Hi, list,
 
 This should be a trivial question and I think I'm definitely missing 
 something( and feeling stupid :\).
 
 So when I am doing a chained link-walking through HTTP interface like 
 this(copied from link walking docs):
 curl -v localhost:8091/riak/people/davethomas/_,friend,1/_,friend,_/
  
 The output is quite verbose, including Bucket/Key/Value etc.etc.
 --JCgqdOHsL4BdXPCb0cuQDnLTxOH
 Content-Type: multipart/mixed; boundary=LpfqXc9urbAJJNFH7aGGPBiAtnX
 
 --LpfqXc9urbAJJNFH7aGGPBiAtnX
 X-Riak-Vclock: a85hYGBgzGDKBVIcc+04TgWFOj/NYEpkzGNlyNCadoYvCwA=
 Location: /riak/people/timoreilly
 Content-Type: text/plain
 Link: </riak/people/dhh>; riaktag="friend", </riak/people>; rel="up"
 Etag: 3DmGNeyDj2hUlLR2UhJvMr
 Last-Modified: Thu, 13 Mar 2014 13:11:04 GMT
 
 I am an excellent public speaker.
 --LpfqXc9urbAJJNFH7aGGPBiAtnX--
 
 --JCgqdOHsL4BdXPCb0cuQDnLTxOH
 Content-Type: multipart/mixed; boundary=IcBLyeIFObvJlJGyXuhTty5cRSs
 
 --IcBLyeIFObvJlJGyXuhTty5cRSs
 X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cgeFOkdkMCUy5rEyzNSadoYvCwA=
 Location: /riak/people/dhh
 Content-Type: text/plain
 Link: </riak/people>; rel="up"
 Etag: 4qbA2ZufXNgzFRb8PlSLUO
 Last-Modified: Thu, 13 Mar 2014 13:11:53 GMT
 
 I drive a Zonda.
 --IcBLyeIFObvJlJGyXuhTty5cRSs--
 
 --JCgqdOHsL4BdXPCb0cuQDnLTxOH--
 
 
 But when I retried it through the erlang_pb_client
 riakc_pb_socket:mapred(Pid, [{<<"people">>, <<"davethomas">>}], [{link, 
 <<"people">>, <<"friend">>, true}, {link, <<"people">>, <<"friend">>, true}]).
  
 The return value is just Bucket/Key/link-tag pairs, without ObjectValue or 
 other metadata.
 {ok,[{0,[[<<"people">>,<<"timoreilly">>,<<"friend">>]]},
 {1,[[<<"people">>,<<"dhh">>,<<"friend">>]]}]}
 
 Is this intended or not?
 If so, what's the best way to get these ObjectValues through one single pass 
 of link-walking?
 
 
 Cheers,
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Choose/Wants claim functions.

2014-04-10 Thread Alex Moore
Hi Guido,

 What's the latest non-standard version of this function? v3 right? If Basho 
 adds more versions to this, is this somewhere documented?

 For our nodes standard choose/wants claim functions were doing a weird 
 distribution so the numbers even out a bit better (just a bit better) by 
 using v3, so it would be nice to know if improvements are done in this area 
 and where they are being documented.

v3 would be the “latest non-standard” version of this function. It works better 
than v2 for balancing nodes but it has a performance caveat with larger ring 
sizes, which is why we still default to v2.  I will address the documentation 
issue of this, but for now the source code is the best documentation (see below 
for links).

 As of the latest 
 http://docs.basho.com/riak/latest/ops/advanced/configs/configuration-files/ 
 both parameters have no default where my understanding is that the default 
 for both is v2.

So in the typical case, the default will be v2, via the `default_wants_claim` 
and `default_choose_claim` functions in  `riak_core_claim.erl`.  If you’re 
running a legacy ring, it will default to v1 instead.
https://github.com/basho/riak_core/blob/1.4.4/src/riak_core_claim.erl#L119-L125
https://github.com/basho/riak_core/blob/1.4.4/src/riak_core_claim.erl#L140-L146

I’ve put in a docs issue to get the documentation clarified.
https://github.com/basho/basho_docs/issues/1017

Thanks,
Alex

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Recommended riak configuration options for better performance

2014-06-05 Thread Alex Moore
Hi Naveen,

You are running out of MR workers, you’ll either have to:
a) Increase the worker limits on the current nodes (particularly 
map_js_vm_count and reduce_js_vm_count)
b) Add more nodes (and thereby more workers)
c) Do less MR work.
d) Implement your MapReduce functions in Erlang to avoid the JS VMs altogether (a sketch follows the link below)

Bryan Fink has a nice writeup on how to estimate your MR worker needs here: 
http://riak-users.197444.n3.nabble.com/Follow-up-Riak-Map-Reduce-error-preflist-exhausted-tp4024330p4024380.html
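A hedged sketch of option (d) with the Erlang PB client, using two of the stock riak_kv_mapreduce phases (bucket and key are hypothetical; Pid comes from riakc_pb_socket:start_link):

    %% Fetches each input object's value, then counts the results --
    %% both phases are built-in Erlang functions, so no JS VM is used.
    riakc_pb_socket:mapred(Pid,
        [{<<"mybucket">>, <<"mykey">>}],
        [{map, {modfun, riak_kv_mapreduce, map_object_value}, none, false},
         {reduce, {modfun, riak_kv_mapreduce, reduce_count_inputs}, none, true}]).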

Thanks,
Alex

On Jun 4, 2014, at 7:58 AM, Naveen Tamanam naveen32in...@gmail.com wrote:

 Hi Guys, 
 
 
 I have a 5-node riak cluster in use. Each machine has 16GB of RAM, and all 
 5 machines are dedicated to riak only; no other application is there to eat 
 resources. I do a lot of work with map reduce queries, and have many map 
 reduce queries with both map and reduce phases. 
 I keep running into the following error and log messages: 
   error:[preflist_exhausted]
  RiakError: 'could not get a response'   
  All VMs are busy
 
 I know  above errors can be avoided with fine tuned riak configuration 
 options. I am looking for recommended values
 Here  are  few riak configuration options currently I have on each node, 
 
   { kernel, [
     {inet_dist_listen_min, 6000},
     {inet_dist_listen_max, 7999}
   ]},

   {map_js_vm_count, 48},
   {reduce_js_vm_count, 26},
   {hook_js_vm_count, 12},
   {js_max_vm_mem, 32},
   {js_thread_stack, 16}

 -- 
 Thanks & Regards,
 Naveen Tamanam
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using the 'fold' keys threshold

2014-06-05 Thread Alex Moore
Hi Venkat,

You can find those settings in our docs: 
http://docs.basho.com/riak/1.4.9/ops/advanced/backends/bitcask/#Configuring-Bitcask
 (search for “Fold Keys Threshold”).   

In Bitcask when we do range operations like “List Keys” or other operations 
that require us to fold over all the data, we take a snapshot of the “Keydir” 
to get a consistent read.  The Keydir is the hash table that holds the 
`key -> latest object` mapping.  When we do this snapshot, we also start a 
delta of any changes since the snapshot.

We use the two “Fold Keys Threshold” options `max_fold_age` and `max_fold_puts` 
only when Bitcask is processing one fold operation, and gets a request for a 
second one.  These two options let the user choose whether to reuse the 
snapshot, or to block and get a new snapshot before starting the second fold.  
This lets you tradeoff between a potential performance boost and consistency.

By default we have Bitcask side toward consistency; it will get a new snapshot, 
by setting `max_fold_puts` to `0`.  If any new puts come in, we must grab a new 
snapshot before folding again. 

- Increasing `max_fold_puts` to `n` will let Bitcask reuse the snapshot 
if there are fewer than `n` changes in the delta.
- Increasing `max_fold_age` to `s` will let Bitcask reuse the snapshot 
if the snapshot is younger than `s` microseconds. 

Setting either of these to positive values can let folds ignore recent changes, 
so you can run into stale data. Because of that, we recommend that you don’t 
change them.
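For reference, a hedged sketch of where these live in the bitcask section of app.config (the values shown are the defaults just described):

    {max_fold_age, -1},   %% default: age check disabled, only puts matter
    {max_fold_puts, 0}    %% default: any put since the snapshot forces a new one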
I hope this helps.

Thanks,
Alex


On May 13, 2014, at 3:02 PM, Venkatachalam Subramanian 
venkatsubb...@gmail.com wrote:

 Hi All,
 
It was very helpful to get my first few questions about Riak/Bitcask  
 answered pretty quickly.
 
 I just have a another question on the same lines,
 
 I ran across the 'fold keys threshold' option in riak/bitcask.
 I could not find enough information about the 'fold keys' option to 
 understand it completely.
 
 Could someone tell me what 'fold keys' option is? what does it do? when could 
 we use it? 
 Does it help when you want to get the list of all keys available?
 
 I greatly appreciate your help.
 Thank You.
 
 -- 
 Regards,
 Venkat Subramanian
 
 
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 1.4.8 to 1.4.9

2014-06-19 Thread Alex Moore
Hi Lukas,

The 1.4.9 release notes are here:  
https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md

Thanks,
Alex

On Jun 19, 2014, at 1:05 PM, Lukas J. Dickie lukas.dic...@gimigo.com wrote:

 Hello all,
 
 I couldn't find release notes as to what changed from Riak 1.4.8 to 1.4.9.  
 Does anyone know what changes took place?
 
 Thanks,
 Lukas
 
 
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Exception in Tutorial

2014-06-28 Thread Alex Moore
Try port 10017 instead. 10018 is usually the HTTP port for the dev1 devrel node.
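A quick hedged check from a shell: the HTTP port answers /stats, while the PB port won't speak HTTP at all.

    curl -s http://127.0.0.1:10018/stats   # returns JSON stats if this is the HTTP port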

Thanks,
Alex

 On Jun 28, 2014, at 5:03 AM, darkchanter omsec...@hotmail.com wrote:
 
 Hello there
 sorry to bother you guys with this totally noob question...
 
 I am trying to run the "Taste of Riak" tutorial in Eclipse. I have a cluster
 running made of 3 joined dev-nodes, from which I can get the stats via
 browser. The all in one client jar is referenced by the java project.
 
 However I receive a java.io.EOFException  when trying to fetch any bucket:
 
 Bucket myBucket = client.fetchBucket("test").execute();
 
 (as far I know they should be created/managed automatically when not
 existing)
 
 Since the exception occures at this line, I assume that the connection
 itself was successfully established:
 
 IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 10018);
 
 I set up (and started) Riak under root, while I am working in KDE with
 another user. Could this cause the problem?
 
 Thanks
 Roger
 
 
 
 --
 View this message in context: 
 http://riak-users.197444.n3.nabble.com/Exception-in-Tutorial-tp4031300.html
 Sent from the Riak Users mailing list archive at Nabble.com.
 
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java Client

2015-04-27 Thread Alex Moore
Hi Cos,

We will continue to support and develop the Java client.

With Roach's departure I'll be taking over development for it.  I've been
his understudy of sorts for about a year now, although my focus has been
more towards the .NET and PHP clients as of late.

 Consequently, if there's no one on this right now, I'm happy to help out
 with current and future issues as my time allows it.
We're happy for any of your help, if you've got any issues / bugs / great
ideas just submit an issue on GitHub
https://github.com/basho/riak-java-client/blob/develop/CONTRIBUTING.md and
we'll get the ball rolling from there.

Thanks,
Alex


On Mon, Apr 27, 2015 at 7:04 PM, Cosmin Marginean cosmin...@gmail.com
wrote:

 One quick question on the Riak Java Client.

 Brian Roach seemed to have been the only active contributor to this.
 Recently he mentioned that he's leaving Basho though, so I was wondering if
 he'll be maintaining this moving forward.

 Since the Riak Java client is now a fundamental part of our ecosystem, I'm
 very interested in its destiny (as probably many others are)
 Consequently, if there's no one on this right now, I'm happy to help out
 with current and future issues as my time allows it.

 Looking forward to hearing from you

 Cheers
 Cos
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Does riak dotnet client work with Mono?

2015-05-04 Thread Alex Moore
Hi Shabeer,




The .NET client does work with Mono under a .Net 4.0+ runtime, and you can 
download it through Nuget (http://www.nuget.org/Packages/RiakClient/).  




If you don’t have NuGet, you can install an extension to MonoDevelop 
(https://github.com/mrward/monodevelop-nuget-addin), or use the NuGet.exe 
command line application.

See http://docs.nuget.org/consume/installing-nuget and 
https://docs.nuget.org/consume/nuget-faq for more information.



Thanks,
Alex




On Monday, May 4, 2015 at 9:35 AM, syed shabeer shabeer.s...@gmail.com, wrote:


Hi,
Does riak dotnet client 2.0 work with Mono? if yes, then please share the steps 
to add and configure in mono.


Thanks,

Shabeer___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to use Search.Options in Riak Java client for Solr faceting

2015-04-28 Thread Alex Moore
Hi Santi,

Riak's Protocol Buffers interface doesn't currently support the full range
of Solr queries, this was done to keep it compatible with the old Riak 1.x
search interface.

If you need to use any search properties beyond those provided by Riak
clients, you'll need to query with a Solr driver over http for the time
being.
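For example, a hedged sketch of a facet query against Riak 2.0's Solr HTTP endpoint (index name and facet field are hypothetical):

    curl 'http://localhost:8098/search/query/my_index?q=*:*&facet=true&facet.field=category_s&wt=json'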

I'll let our PM's know that we're seeing a need for other Solr query
options, that way we can hopefully schedule it in soon.

Thanks,
Alex

On Tue, Apr 21, 2015 at 3:00 AM, Santi Kumar sa...@veradocs.com wrote:

 Hi,
 I'm need to use faceting for some of the data in Riak, I know we can use
 it with Search.Options, but I couldn't find a decent way to use that. I was
 looking at any test cases, but couldn't find any. If some body used it, can
 you please post a gist or sample here.

 I just need to use enable facet and pass facet fields.

 Thanks
 Santi

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Clarifying withoutFetch() with LevelDB and

2015-05-13 Thread Alex Moore
Hey Daniel,

I wanted to know a little bit more about using withoutFetch() option when
 used with levelDB.
 I'm trying to write to a single key as fast as I can with n=3.
 I deliberately create siblings by writing with stale vclock.
 ...
 During the test I see activity on the on disk via iostat and it's between
 20-30 MB/s on each node.
 Even taking into account multiple copies and overhead of Riak (vclocks etc)
 this seems to be pretty high rate.
 I don't see any read activity which suggest withoutFetch() works as
 expected.
 After 2 mins of tests leveldb on each node is 250MB is size, before test
 (11MB)
 Am I using it incorrectly?

Is writing in this way to a single key a good idea or will I be bitten by
 something?
 How to explain high number of MB written to disks?


We call this problem a hot key.

When you write with a stale vclock, it will generate a new sibling every
time.
For example the first time you store your object it's just {v1}, the next
time it will get a sibling: {v1, v2}, eventually it's {v1,...v1000}
since the siblings are never resolved.  That data is read, updated, old
version tombstoned, and the new data written with every PUT. Based on your
info I would see about 250MB raw data there if LevelDB hasn't compacted the
tombstones away.

RiakObject.withoutFetch() tells your java client to store data without
fetching the most current value first.  During that fetch, it would resolve
siblings before writing the value back.  You may get better throughput by
resolving your siblings (less writes overall), or by rethinking your data
model so you're not always writing to the same key repeatedly.   Is this
just a benchmark or are you modeling something in your application?

Thanks,
Alex


On Wed, May 13, 2015 at 11:03 AM, Daniel Iwan iwan.dan...@gmail.com wrote:

 We are using Java client 1.1.4.
 We haven't moved to newer version of Riak as as for the moment we don't
 need
 any new features.
 Also roll out of the new version may be complicated since we have multiple
 clusters.

 As with regards to object size its ~250-300 bytes per write. We store
 simple
 JSON structures.

 Is there anything in new versions that would limit size of data going to
 the
 disk?
 And more importantly is there a way of determining why levelDB grows so
 big?

 I'm using ring size 128 which is probably too high at the moment, but after
 switching to 64 not much has  changed.
 I also disabled 2i indexes that I thought may matter (four 16 bytes fields)
 and that did not made any difference, still 25-38MB/s write to level db per
 node.

 D.



 --
 View this message in context:
 http://riak-users.197444.n3.nabble.com/Clarifying-withoutFetch-with-LevelDB-and-tp4033051p4033053.html
 Sent from the Riak Users mailing list archive at Nabble.com.

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client 1.1.4 and headOnly() in domain buckets

2015-05-13 Thread Alex Moore
Hey Daniel,

It appears that the domain buckets api does not support headOnly().  That
api was written to be a higher-level abstraction around a common usage, so
it abstracted that idea of head vs object data away.  If you need that
support, I would use the regular bucket methods instead.

Thanks,
Alex

On Tue, May 12, 2015 at 9:15 AM, Daniel Iwan iwan.dan...@gmail.com wrote:

 We are using official 1.1.4 which is the latest recommended with Riak 1.3
 we
 have installed.
 Upgrade to Riak 1.4 is not possible at the moment.

 D.



 --
 View this message in context:
 http://riak-users.197444.n3.nabble.com/Java-client-1-1-4-and-headOnly-in-domain-buckets-tp4033042p4033048.html
 Sent from the Riak Users mailing list archive at Nabble.com.

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


[Announcement] Official Riak .NET client released.

2015-04-02 Thread Alex Moore
Greetings Riak Users!

We've released the official .NET client for Riak.
It is a 2.0 compatible evolution of the Corrugated Iron .NET client.

It's available via nuget:

https://www.nuget.org/packages/RiakClient/

The github repo can be found here:

https://github.com/basho/riak-dotnet-client

API docs are published here:

http://basho.github.io/riak-dotnet-client-api/

We would like to give special thanks to Jeremiah Peschka and OJ Reeves for
their hard work on CorrugatedIron, and for passing on CI's stewardship to
Basho so we could continue its story.

Thanks!
Alex Moore
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java Client 2.0.2 Released

2015-07-28 Thread Alex Moore
Hi All,

I just pushed the blue button and released Riak Java Client 2.0.2.
Get it here: http://repo1.maven.org/maven2/com/basho/riak/riak-client/2.0.2/
It should be synced to Maven Central within the hour. Just update your pom 
dependency to 2.0.2.

Special thanks to Justin Plock, Cesar Alvernaz, Zack Manning, and Cosmin 
Marginean for their bug reports and pull requests.

Features:

Option to temporarily queue commands when maxConnections is reached - 
https://github.com/basho/riak-java-client/issues/510
Made mocking easier with K/V commands - 
https://github.com/basho/riak-java-client/pull/528
Improved JavaDocs for Search and MapReduce - 
https://github.com/basho/riak-java-client/pull/524
Added Reset Bucket Properties Command - 
https://github.com/basho/riak-java-client/pull/522
Upgraded Animal Sniffer Maven Plugin - 
https://github.com/basho/riak-java-client/pull/514

Bugfixes:

Fixed Java 6 Runtime Compatibility - 
https://github.com/basho/riak-java-client/pull/530
Partially fixed missing No UnknownHostException, added logging - 
https://github.com/basho/riak-java-client/pull/529
Fixed Java 8 Build Support - https://github.com/basho/riak-java-client/pull/517
Removed dependency on Sun Tools jar - 
https://github.com/basho/riak-java-client/pull/517
Fixed an inconsistency of visibility at secondary index query.response.get 
entries - https://github.com/basho/riak-java-client/pull/515
Fixed an inconsistency in StringBinIndex.named()returning type - 
https://github.com/basho/riak-java-client/pull/511

Cheers,
Alex Moore


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak shuts down itself after couple of minutes

2015-08-07 Thread Alex Moore
Hi Hao,

Looks like you might be running into an EMFILE error:
http://docs.basho.com/riak/latest/community/faqs/logs/#riak-logs-contain-error-emfile-in-the-message

What’s the open files limit on your nodes?

Thanks,
Alex

 On Aug 7, 2015, at 11:55 AM, 王昊 jusf...@163.com wrote:
 
 I am running it on a single node.
 
 Using Erlang riak client. I added a map bucket type, create a new index, set 
 the index on a bucket, saved a few map data into the bucket. Then I did some 
 search. All good. Then I can remember I tried a invalid search query like 
 title_s_register:[New movie one] which shuts down Riak. But it may or may 
 not be the first time Riak starts to shut down. I can't remember.
 
 Now Riak itself shut down after running a few minutes.  I must have done 
 something really wrong. Any idea what I can do? I have tired to restore 
 bitcask data folder from a backup from days ago. It didn't help. It still 
 crashes.
 
 The console.log is here: http://www.pastebin.ca/3092510
 The error begins at Line 172
 
 Anyone can help? Much appreciated.
 
 -Hao
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Go Client Beta is Available! .Net Client Update!

2015-08-10 Thread Alex Moore
Hi All,

The clients team has been hard at work on an official Riak Go client, and we’re 
ready to share our progress.

To install it in your go project, just run `go get github.com/basho/riak-go-client`.

You can find more information on the project page 
(https://github.com/basho/riak-go-client), and in the API docs 
(https://godoc.org/github.com/basho/riak-go-client).

Special thanks to Luke Bakken & Chris Mancini for their work on the client, and 
Timo Gatsonides for his inspiring work on goriakpbc.


.NET Client 2.1.1

Available at: https://www.nuget.org/packages/RiakClient/2.1.1

This release fixes a bug in the RiakPbcSocket class where connections were not 
being reclaimed after a forcible disconnect, which could result in running out 
of connections.

Special thanks to Joseph Jeganathan for reporting the issue & the PR.


Thanks,
Alex


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Erlang Driver Error

2015-10-23 Thread Alex Moore
Hi Satish,

Could you post the entire console.log somewhere accessible?

Thanks,
Alex
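In the meantime, that "largest allowed fd=1023" ceiling usually means the Erlang VM was started under the default 1024-descriptor limit; a hedged check/bump from the shell that launches Riak:

    ulimit -n           # likely prints 1024, matching the fd=1023 ceiling above
    ulimit -n 65536     # raise it for this shell before starting Riak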

On Fri, Oct 23, 2015 at 2:59 PM satish bhatti  wrote:

> I am getting this error in my console.log running Riak 1.4.6 on OSX
> Yosemite:
>
> 2015-10-23 11:55:13.724 [error] emulator driver_select(0x2a1d,
> 1039, ERL_DRV_USE, 0) by tcp_inet driver #Port<0.10781> failed: fd=1039 is
> larger than the largest allowed fd=1023
>
> It eventually does give the correct results on the query, but taking much
> longer than it should. Is there some system config file I need to modify
> for this?
>
> Satish
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: simpler implementation of RiakMap

2015-11-16 Thread Alex Moore
Hey David,

So with the CRDT Map implementation, you can have a register, a counter, an
inner map, a set, and a flag embedded within a map, all with the same name
(oh my!).

In the underlying API the map entry identifier is actually made of both the
name and the type, so you can pull off that bit of trickery.  You can see
this bleed through a little bit in the HTTP JSON API, where map field ids
are represented as <name>_<type>:

curl -XPOST http://localhost:8098/types/maps/buckets/customers/datatypes/ahmed_info \
  -H "Content-Type: application/json" \
  -d '{
        "update": {
          "annika_info_map": {
            "update": {
              "first_name_register": "Annika",
              "last_name_register": "Weiss",
              "phone_number_register": "5559876543"
            }
          }
        }
      }'

Also since the API's been out for awhile now, doing the changes you listed
could break compatibility with other clients / users :-/

I'll take a look at getRegister and friends, and see if there's a better
way to implement them though.

Thanks,
Alex

On Fri, Nov 13, 2015 at 7:49 PM, David Byron  wrote:

> As I stare at the code for RiakMap in the riak-java-client (
> https://github.com/basho/riak-java-client/blob/develop/src/main/java/com/basho/riak/client/core/query/crdt/types/RiakMap.java),
> I have the urge for a simpler implementation.
>
> If I'm reading the code right, it looks like it handles a value of each
> RiakDatatype for each key.  As in, it's possible for the same key in a map
> to have all of a counter, map, flag, register and set value.
>
> If it turns out that the underlying data has only one of those, or perhaps
> that I only care about one of those, it feels like
>
> private final Map<BinaryValue, List<RiakDatatype>> entries =
>     new HashMap<BinaryValue, List<RiakDatatype>>();
>
> could become
>
> private final Map<BinaryValue, RiakDatatype> entries =
>     new HashMap<BinaryValue, RiakDatatype>();
>
> What I'm getting at is that the code I'm writing to iterate over the
> elements in a map is doing way more than I'd like.  For a map whose values
> are all registers, I've got something like:
>
>   RiakMap myMap;
>   for (BinaryValue key : myMap.view().keySet()) {
> RiakRegister register = myMap.getRegister(key);
>   }
>
> which looks clean enough, but is busy under the covers.  I'm really
> whining about the implementation of getRegister and friends:
>
> public RiakRegister getRegister(BinaryValue key)
> {
>   if (entries.containsKey(key)) {
> for (RiakDatatype dt : entries.get(key)) {
>   if (dt.isRegister()) {
> return dt.getAsRegister();
>   }
> }
>   }
>   return null;
> }
>
> Because I'm iterating I already know the key is one of the entries, so
> would you consider a patch with an unsafe* (or any other name) set of
> accessors that assumes the key is present?  I could of course call view and
> implement my own method, but I can't see how to take out the loop without
> changing the underlying data structure for entries.
>
> If I'm iterating over lots and lots of keys (i.e. myMap.view().keySet()
> contains lots and lots of elements), it might actually be enough savings to
> notice.  In addition though, I think it's simpler enough to be helpful when
> trying to read/understand the code.
>
> It is of course possible that the underlying data isn't what I expect, and
> contains other data types too.  So a full implementation might need hints
> from the caller about whether to use the more general/existing data
> structure, or the simplified one...and then whether to fail in the face of
> data that doesn't fit in the simplified one, or silently ignore it...or
> ignore it but notify the caller that it's present...and probably other
> stuff I haven't considered yet.
>
> I'd love to hear what you think about all of this.
>
> Thanks much for your help.
>
> -DB
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: handling failure in RiakCommand#execute

2015-11-16 Thread Alex Moore
Hey David,

If you use the synchronous RiakCommand.execute method and it errors out,
then the method will throw a checked ExecutionException wrapping the
original exception (
https://github.com/basho/riak-java-client/blob/develop/src/main/java/com/basho/riak/client/api/RiakCommand.java#L85),
which matches the olde style of Java programming.

The async/futures style only surfaces that exception if you get an error and
then call get() without checking the isSuccess() method / cause field first.
(
https://github.com/basho/riak-java-client/blob/develop/src/main/java/com/basho/riak/client/core/RiakFuture.java#L86-L87
)

So we could maybe use some better docs on the RiakCommand#execute method,
but it all works as intended :-)
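
For reference, the two patterns side by side - a quick sketch, assuming a
built FetchValue command named fetch:

// Synchronous: failures surface as a checked ExecutionException.
try {
    FetchValue.Response response = client.execute(fetch);
} catch (ExecutionException ex) {
    Throwable original = ex.getCause(); // the underlying Riak error
} catch (InterruptedException ex) {
    Thread.currentThread().interrupt();
}

// Async: await, then check isSuccess() before touching the result.
RiakFuture<FetchValue.Response, Location> future = client.executeAsync(fetch);
future.await();
if (future.isSuccess()) {
    FetchValue.Response response = future.getNow();
} else {
    Throwable cause = future.cause();
}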

Thanks,
Alex

On Fri, Nov 13, 2015 at 8:05 PM, David Byron  wrote:

> From my reading of RiakCommand#execute (
> https://github.com/basho/riak-java-client/blob/develop/src/main/java/com/basho/riak/client/api/RiakCommand.java#L85),
> I'm curious why there's no call to
>
> future.isSuccess()
>
> I can imagine that future.await() throws an exception for all possible
> failures, but then the docs for RiakFuture (
> https://github.com/basho/riak-java-client/blob/develop/src/main/java/com/basho/riak/client/core/RiakFuture.java#L60)
> say:
>
> The typical use pattern is to call await(), check isSuccess(), then call
> getNow() or cause().
>
> Maybe all I'm looking for is a comment in RiakCommand#execute explaining
> why it's not the typical use pattern...but my paranoid self is nervous at
> the moment.
>
> Thanks for helping me understand.
>
> -DB
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using Bucket Data Types slowed insert performance

2015-10-20 Thread Alex Moore
Hi Dennis & Mark,

I noticed some timing code in your snippets:

 long beforeProcessing = DateTime.Now.Ticks;

Do you have any numbers on what an individual operation for KV vs CRDTs
looks like on your system? (Mean, percentiles if possible)
Also, how big are your KV objects?

CRDTs will take extra processing on Riak's side, so I'm wondering if you're
getting limited by a longer RTT + your 20 threads.  One easy thing to try
would be to double the thread pool (and connections) and see if that shaves
off any overall time by overlapping the time we're waiting for Riak to
respond.

If it doesn't, then we can look in other directions :)

Thanks,
Alex


On Tue, Oct 20, 2015 at 3:25 PM, Dennis Nicolay 
wrote:

>
> ResultObject cdr;
> while (queued.TryDequeue(out cdr))
> {
>     long beforeProcessing = DateTime.Now.Ticks;
>     UpdateMap.Builder builder = BuildMapObject(bucket, cdr);
>     UpdateMap cmd = builder.Build();
>     RiakResult rslt = client.Execute(cmd);
>
> private static UpdateMap.Builder BuildMapObject(string bucketname, ResultObject cdr)
> {
>     var builder = new UpdateMap.Builder()
>         .WithBucketType("maps")
>         .WithBucket(bucketname)
>         .WithKey(cdr.CdrKey);
>
>     var mapOperation = new UpdateMap.MapOperation();
>     mapOperation.SetRegister("FileTimeStamp", cdr.CdrValue.FileTimeStamp.ToString());
>     mapOperation.SetRegister("AuditId", cdr.CdrValue.AuditId.ToString());
>     mapOperation.SetRegister("CdrId", cdr.CdrValue.CdrId.ToString());
>     mapOperation.SetRegister("IsBillable", cdr.CdrValue.IsBillable.ToString());
>     mapOperation.SetRegister("SwitchId", cdr.CdrValue.SwitchId.ToString());
>     mapOperation.SetRegister("SwitchDescription", cdr.CdrValue.SwitchDescription.ToString());
>     mapOperation.SetRegister("SequenceNumber", cdr.CdrValue.SequenceNumber.ToString());
>     mapOperation.SetRegister("CallDirection", cdr.CdrValue.CallDirection.ToString());
>     mapOperation.SetRegister("CallTypeId", cdr.CdrValue.CallTypeId.ToString());
>     mapOperation.SetRegister("Partition", cdr.CdrValue.Partition.ToString());
>     mapOperation.SetRegister("CustomerTrunkId", cdr.CdrValue.CustomerTrunkId.ToString());
>     mapOperation.SetRegister("OrigIpAddress", cdr.CdrValue.OrigIpAddress.ToString());
>     mapOperation.SetRegister("OrigPort", cdr.CdrValue.OrigPort.ToString());
>     mapOperation.SetRegister("SupplierTrunkId", cdr.CdrValue.SupplierTrunkId.ToString());
>     mapOperation.SetRegister("TermIpAddress", cdr.CdrValue.TermIpAddress.ToString());
>     mapOperation.SetRegister("TermPort", cdr.CdrValue.TermPort.ToString());
>     mapOperation.SetRegister("Ani", cdr.CdrValue.Ani.ToString());
>     mapOperation.SetRegister("OutpulseNumber", cdr.CdrValue.OutpulseNumber.ToString());
>     mapOperation.SetRegister("SubscriberNumber", cdr.CdrValue.SupplierTrunkId.ToString());
>     mapOperation.SetRegister("CallingNoa", cdr.CdrValue.CallingNoa.ToString());
>     mapOperation.SetRegister("DialedNoa", cdr.CdrValue.DialedNoa.ToString());
>     mapOperation.SetRegister("OutpulseNoa", cdr.CdrValue.OutpulseNumber.ToString());
>     mapOperation.SetRegister("TreatmentCode", cdr.CdrValue.TreatmentCode.ToString());
>     mapOperation.SetRegister("CompletionCode", cdr.CdrValue.CompletionCode.ToString());
>     mapOperation.SetRegister("CustomerName", cdr.CdrValue.CustomerName.ToString());
>     mapOperation.SetRegister("CustId", cdr.CdrValue.CustId.ToString());
>     mapOperation.SetRegister("CustContractId", cdr.CdrValue.CustContractId.ToString());
>     mapOperation.SetRegister("CustCountryCode", cdr.CdrValue.CustCountryCode.ToString());
>     mapOperation.SetRegister("CustDuration", cdr.CdrValue.CustDuration.ToString());
>     mapOperation.SetRegister("Price", cdr.CdrValue.Price.ToString());
>     mapOperation.SetRegister("BasePrice", cdr.CdrValue.BasePrice.ToString());
>     mapOperation.SetRegister("BillingDestinationName", cdr.CdrValue.BillingDestinationName.ToString());
>     mapOperation.SetRegister("BillingGroupId", cdr.CdrValue.BillingGroupId.ToString());
>     mapOperation.SetRegister("SupplierName", cdr.CdrValue.SupplierName.ToString());
>     mapOperation.SetRegister("SuppId", cdr.CdrValue.SuppId.ToString());

Riak Go Client 1.1 is Available!

2015-08-27 Thread Alex Moore
Hi All,

The Go client for Riak is out of beta, so Go get it!

To install it in your go project, just run `go get github.com/basho/riak-go-client`.

You can find more information on the project page
(https://github.com/basho/riak-go-client), and in the API docs
(https://godoc.org/github.com/basho/riak-go-client).

Special thanks to Luke Bakken & Chris Mancini for their work on the client, 
Timo Gatsonides for his inspiring work on goriakpbc, and Sergio Arteaga for 
submitting GitHub issues.

Thanks,
Alex


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Fastest Method for Importing Into Riak.

2015-08-31 Thread Alex Moore
Hi Dennis,

The fastest way would be to chunk your file, and use a Parallel.ForEach
loop to do parallel puts.  Just make sure you have your connection pool size
turned up, and that you override the default MaxDegreeOfParallelism limit
too (these two should match, or the pool should be bigger).

Thanks,
Alex

On Tue, Aug 18, 2015 at 5:03 PM, Dennis Nicolay 
wrote:

> Hi,
>
>
>
> What is the fastest way to import data from a delimited file into Riak
> using the .net RiakClient?
>
>
>
> Is there a bulk insert using the other Riak clients?
>
>
>
> Thanks in advance,
>
> Dennis
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: What's the maximum seconds to set index on bucket after creating the index

2015-09-02 Thread Alex Moore
Hao,

What version of Riak are you using?

Thanks,
Alex

> On Sep 2, 2015, at 11:26 AM, Fred Dushin  wrote:
> 
> I apologize, I was wrong about the timeouts -- they are configurable, either 
> through the client, or in the advanced config on the Riak server(s).
> 
> The timeout gets set in the server here:
> 
> https://github.com/basho/yokozuna/blob/2.1.1/src/yz_pb_admin.erl#L114 
> 
> 
> This means you can set the timeout in the PB client, as in
> 
> riakc_pb_socket:create_search_index(Pid, Index, Schema, [{timeout, Timeout}, 
> ...])
> 
> where timeout is in milliseconds (or the atom 'infinity').
> 
> cf. http://basho.github.io/riak-erlang-client/ 
> 
> 
> The order of precedence is:
> 
> 1. client-defined
> 2. riak config
> 3. default (45 seconds)
> 
> -Fred
> 
>> On Sep 2, 2015, at 8:13 AM, Fred Dushin > > wrote:
>> 
>> What is the return value you are getting from 
>> riakc_pb_socket:create_search_index?  If it's ok, then the Solr cores should 
>> have been created on all nodes.  Otherwise, you should check the logs for 
>> timeout messages, e.g.,
>> 
>> https://github.com/basho/yokozuna/blob/2.1.1/src/yz_index.erl#L443 
>> 
>> 
>> If you are getting timeouts, instead of sleeping, you should probably query 
>> your cluster for the search index, along the lines of what is done in one of 
>> the riak tests, e.g.,
>> 
>> https://github.com/basho/yokozuna/blob/2.1.1/riak_test/yz_pb.erl#L100 
>> 
>> 
>> If necessary, you might want to fold over all nodes in your cluster, to 
>> ensure the index has been propagated to all nodes, and possibly use the 
>> wait_for patterns used in the tests.
>> 
>> Unfortunately, it looks like the internal timeout used to wait for 
>> propagation of indexes to all nodes is not configurable -- it defaults to 45 
>> seconds:
>> 
>> https://github.com/basho/yokozuna/blob/2.1.1/include/yokozuna.hrl#L134 
>> 
>> 
>> I hope that helps,
>> 
>> -Fred
>> 
>>> On Sep 2, 2015, at 6:27 AM, Hao > 
>>> wrote:
>>> 
>>> Hi,
>>> 
>>> What's the maximum seconds to wait after creating an search index and 
>>> before setting it on the bucket?
>>> 
>>> On my local machine, I only need to wait 1 second, sometimes I feel I don't 
>>> need to wait at all, but on a production server which is basically zero 
>>> traffic, I have to wait about 10 seconds (definitely over 5s) before I can 
>>> set the index on a bucket.
>>> 
>>> I am using riakc_pb_socket client. At first I thought something wrong with 
>>> my function to "create" and "set" the index but then when I split the 
>>> process, it's fine. So seems it's the interval in between that matters.
>>> 
>>> I need to know how long is the maximum because I need to restore a lot of 
>>> buckets and set index on them via a script. I don't care how long it takes 
>>> but I don't want it to miss any index not being set on the bucket.
>>> 
>>> The exact error on the console when I set the index on a bucket is
>>> 
>>> <<"Invalid bucket properties: [{search_index,\n 
>>> <<\"application_test_player_idx does not exist\">>}]">>
>>> 
>>> 
>>> 
>>> 
>>> Thanks,
>>> 
>>> 
>>> 
>>> --
>>> Hao
>>> 
>>> 
>>> 
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com 
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
>>> 
>> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 2 Netty BlockingOperationException

2015-12-14 Thread Alex Moore
Hi Riak4me,

What are your min / max connection settings for the RiakNode objects you
have set up?

It looks like during your Update command, the Riak client attempted to get
another connection for the store, but you were out of available connections
and the thread blocked.  Netty doesn't like blocked threads, so it threw
the io.netty.util.concurrent.BlockingOperationException.
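
For reference, those limits are set when the node is built - a sketch (the
numbers here are placeholders, not a recommendation):

RiakNode node = new RiakNode.Builder()
        .withRemoteAddress("127.0.0.1")
        .withMinConnections(10)
        .withMaxConnections(50) // pool ceiling; operations block once it's exhausted
        .build();
RiakCluster cluster = new RiakCluster.Builder(node).build();
cluster.start();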

Thanks,
Alex

On Mon, Dec 14, 2015 at 12:28 PM, Riak4me 
wrote:

> I am trying to use the Riak 2 java client. After some testing, I found that
> during a put request to Riak, I consistently get a
> BlockingOperationException from Netty. I've tried different connection
> configs, operation queue size..etc. Could this be a bug in usage of
> ChannelFuture.await? Thanks. Here's most of the stacktrace.
>
> io.netty.util.concurrent.BlockingOperationException:
> DefaultChannelPromise@5da44ece(incomplete)
> at
>
> io.netty.util.concurrent.DefaultPromise.checkDeadLock(DefaultPromise.java:396)
> ~[netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
>
> io.netty.channel.DefaultChannelPromise.checkDeadLock(DefaultChannelPromise.java:157)
> ~[netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
> io.netty.util.concurrent.DefaultPromise.await(DefaultPromise.java:257)
> ~[netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
>
> io.netty.channel.DefaultChannelPromise.await(DefaultChannelPromise.java:129)
> ~[netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
> io.netty.channel.DefaultChannelPromise.await(DefaultChannelPromise.java:28)
> ~[netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
> com.basho.riak.client.core.RiakNode.doGetConnection(RiakNode.java:671)
> ~[riak-client-2.0.2.jar:na]
> at
> com.basho.riak.client.core.RiakNode.getConnection(RiakNode.java:640)
> ~[riak-client-2.0.2.jar:na]
> at com.basho.riak.client.core.RiakNode.execute(RiakNode.java:574)
> ~[riak-client-2.0.2.jar:na]
> at
>
> com.basho.riak.client.core.DefaultNodeManager.executeOnNode(DefaultNodeManager.java:90)
> ~[riak-client-2.0.2.jar:na]
> at
> com.basho.riak.client.core.RiakCluster.execute(RiakCluster.java:321)
> ~[riak-client-2.0.2.jar:na]
> at
> com.basho.riak.client.core.RiakCluster.execute(RiakCluster.java:239)
> ~[riak-client-2.0.2.jar:na]
> at
>
> com.basho.riak.client.api.commands.kv.StoreValue.executeAsync(StoreValue.java:118)
> ~[riak-client-2.0.2.jar:na]
> at
>
> com.basho.riak.client.api.commands.kv.UpdateValue$1.handle(UpdateValue.java:183)
> ~[riak-client-2.0.2.jar:na]
> at
>
> com.basho.riak.client.api.commands.ListenableFuture.notifyListeners(ListenableFuture.java:78)
> ~[riak-client-2.0.2.jar:na]
> at
>
> com.basho.riak.client.api.commands.CoreFutureAdapter.handle(CoreFutureAdapter.java:120)
> ~[riak-client-2.0.2.jar:na]
> at
>
> com.basho.riak.client.core.FutureOperation.fireListeners(FutureOperation.java:131)
> ~[riak-client-2.0.2.jar:na]
> at
>
> com.basho.riak.client.core.FutureOperation.setResponse(FutureOperation.java:170)
> ~[riak-client-2.0.2.jar:na]
> at com.basho.riak.client.core.RiakNode.onSuccess(RiakNode.java:836)
> ~[riak-client-2.0.2.jar:na]
> at
>
> com.basho.riak.client.core.netty.RiakResponseHandler.channelRead(RiakResponseHandler.java:58)
> ~[riak-client-2.0.2.jar:na]
> at
>
> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:340)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
>
> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:326)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
>
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:155)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
>
> io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:108)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
>
> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:340)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
>
> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:326)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
>
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:785)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
>
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:116)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:494)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
>
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:461)
> [netty-all-4.0.17.Final.jar:4.0.17.Final]
> at
>
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378)
> 

Re: Forming inputs to MR job

2015-12-30 Thread Alex Moore
Hi Timur,

For your inputs line, try:

"inputs":["mybucket_type", "mybucket"]

Thanks,
Alex

On Wed, Dec 30, 2015 at 3:01 PM Timur Fayruzov 
wrote:

> Hello,
>
> I'm trying to write a simple MR job using Javascript and hit a wall right
> at start. I can't figure out how to specify "inputs". Here's the code:
> curl -XPOST "my_riak_server:8098/mapred" -H "Content-Type:
> application/json" -d @- <<EOF
> {
>   "input": "my_bucket",
>   "query":[{
> "map":{
> "language":"javascript",
> "source":"function(riakObject, keydata, arg) {
> var m = riakObject.values[0].data;
> return [m];
> }"
> }
>   }]
> }
> EOF
>
> this returns empty array.
>
> Aside: I know that listing all keys is slow but for now I can live with
> this.
>
> Note, that I'm using non-default bucket type, so the actual location of my
> keys is my_riak_server/types/my_bucket_type/buckets/my_bucket/my_key, but I
> can't figure out how to communicate this location properly in the "input"
> field. I have found this "documentation":
> https://github.com/basho/riak_kv/blob/2.1/src/riak_kv_mapred_json.erl#L101,
> but it does not explain how to specify bucket type and I'm not proficient
> enough in Erlang to follow the code easily. I did not find any other
> documentation on this field.
>
> Following returns all keys successfully, so data is there:
> curl 'http://my_riak_cluster:8098/types/my_bucket_type/buckets/my_bucket/keys?keys=true'
>
> Any pointers are highly appreciated.
>
> Thanks,
> Timur
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: using riak to store user sessions

2016-06-03 Thread Alex Moore
Hi Norman,

It's still the case. Riak KV was designed to be a highly available,
highly scalable key/value store with predictable low latency.  This lets
you scale to a few or as many machines as needed, while removing the single
points of failure that a single-machine database or cache might impose.

We unfortunately don't have connectors for every language / platform /
framework combination; supporting that exponential matrix would be
exhausting.  We do keep well-maintained client libraries, and
community libraries and plugins are always growing in number. I think
Luke's suggestion would provide a good way forward, and we'd be happy to
help with any questions you have along the way.

Thanks,
Alex

On Thu, Jun 2, 2016 at 4:51 PM Norman Khine  wrote:

> hi luke, thanks for the reply, in the riak document "Riak KV is uniquely
> architected to handle user and session data..." or is this no longer the
> case?
>
>
> On 2 June 2016 at 17:40, Luke Bakken  wrote:
> >
> > Hi Norman,
> >
> > A quick search turns up this Node.js module:
> > https://www.npmjs.com/package/express-session
> >
> > There is currently not a session store for Riak
> > (https://www.npmjs.com/package/express-session#compatible-session-stores
> )
> >
> > Since memcached operates as a key/value store, forking the connector
> > for that session store as a starting point would be your best bet.
> >
> > https://github.com/balor/connect-memcached
> >
> > You would then use the Riak Node.js client to fetch and store data:
> > https://github.com/basho/riak-nodejs-client
> >
> > --
> > Luke Bakken
> > Engineer
> > lbak...@basho.com
> >
> >
> > On Thu, Jun 2, 2016 at 6:19 AM, Norman Khine  wrote:
> > > hello, i am trying to setup a node.js/express application to use riak
> to
> > > store the logged in users' sessions as detailed
> > > http://basho.com/use-cases/session-data/
> > >
> > > i have setup the dev cluster on my machine and everything is running
> fine.
> > >
> > > what is the correct way to set this up?
> > >
> > > any advise much appreciated.
> > >
> > >
> > >
> > > --
> > > %>>> "".join( [ {'*':'@','^':'.'}.get(c,None) or
> chr(97+(ord(c)-83)%26) for
> > > c in ",adym,*)^zqf" ] )
> > >
> > > ___
> > > riak-users mailing list
> > > riak-users@lists.basho.com
> > > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> > >
>
>
>
>
> --
> %>>> "".join( [ {'*':'@','^':'.'}.get(c,None) or chr(97+(ord(c)-83)%26)
> for c in ",adym,*)^zqf" ] )
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to cold (re)boot a cluster with already existing node data

2016-06-06 Thread Alex Moore
Hi Jan,

When you update the Kubernates nodes, do you have to do them all at once or
can they be done in a rolling fashion (one after another)?

If you can do them rolling-wise, you should be able to:

For each node, one at a time:
1. Shut down Riak
2. Shutdown/restart/upgrade Kubernates
3. Start Riak
4. Use `riak-admin force-replace` to rename the old node name to the new
node name
5. Repeat on remaining nodes.

This is covered in the "Renaming Multi-node Clusters" doc.

As for your current predicament,  have you created any new buckets/changed
bucket props in the default namespace since you restarted? Or have you only
done regular operations since?

Thanks,
Alex


On Mon, Jun 6, 2016 at 5:25 AM Jan-Philip Loos  wrote:

> Hi,
>
> we are using riak in a kuberentes cluster (on GKE). Sometimes it's
> necessary to reboot the complete cluster to update the kubernetes-nodes.
> This results in a complete shutdown of the riak cluster and the riak-nodes
> are rescheduled with a new IP. So how can I handle this situation? How can
> I form a new riak cluster out of the old nodes with new names?
>
> The /var/lib/riak directory is persisted. I had to delete the
> /var/lib/riak/ring folder otherwise "riak start" crashed with this message
> (but saved the old ring state in a tar):
>
> {"Kernel pid
>> terminated",application_controller,"{application_start_failure,riak_core,{{shutdown,{failed_to_start_child,riak_core_broadcast,{'EXIT',{function_clause,[{orddict,fetch,['
>> riak@10.44.2.8
>> ',[]],[{file,\"orddict.erl\"},{line,72}]},{riak_core_broadcast,init_peers,1,[{file,\"src/riak_core_broadcast.erl\"},{line,616}]},{riak_core_broadcast,start_link,0,[{file,\"src/riak_core_broadcast.erl\"},{line,116}]},{supervisor,do_start_child,2,[{file,\"supervisor.erl\"},{line,310}]},{supervisor,start_children,3,[{file,\"supervisor.erl\"},{line,293}]},{supervisor,init_children,2,[{file,\"supervisor.erl\"},{line,259}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,239}]}],{riak_core_app,start,[normal,[]]}}}"}
>> Crash dump was written to: /var/log/riak/erl_crash.dump
>> Kernel pid terminated (application_controller)
>> ({application_start_failure,riak_core,{{shutdown,{failed_to_start_child,riak_core_broadcast,{'EXIT',{function_clause,[{orddict,fetch,['
>> riak@10.44.2.8',
>
>
> The I formed a new cluster via join & plan & commit.
>
> But now, I discovered a problems with incomplete and inconsistent
> partitions:
>
> $ curl -Ss "http://riak.default.svc.cluster.local:8098/buckets/users/keys?keys=true" | jq '.[] | length'
>
> 3064
>
> $ curl -Ss "http://riak.default.svc.cluster.local:8098/buckets/users/keys?keys=true" | jq '.[] | length'
>
> 2987
>
> $ curl -Ss "http://riak.default.svc.cluster.local:8098/buckets/users/keys?keys=true" | jq '.[] | length'
>
> 705
>
> $ curl -Ss "http://riak.default.svc.cluster.local:8098/buckets/users/keys?keys=true" | jq '.[] | length'
> 3064
>
> Is there a way to fix this? I guess this is caused by the missing old
> ring-state?
>
> Greetings
>
> Jan
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: accessor for FetchDatatype's location?

2016-01-14 Thread Alex Moore
Hi David,

It doesn't look like we expose that property anywhere, but it can probably
be chalked up to YAGNI when it was written.   Go forth and PR :)
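
Something like this is probably all the PR needs - a sketch of the
hypothetical accessor (FetchDatatype's location field is private today):

public Location getLocation()
{
    return location;
}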

Thanks,
Alex

On Tue, Jan 12, 2016 at 6:01 PM, David Byron  wrote:

> I'm looking for access to the location member of FetchDatatype (
> https://github.com/basho/riak-java-client/blob/develop/src/main/java/com/basho/riak/client/api/commands/datatypes/FetchDatatype.java#L37)
> to verify in a test that I'm using the right location.
>
> I'm happy to make a pull request if this seems like a reasonable way
> forward.
>
> -DB
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: map reduce on multiple buckets

2016-02-08 Thread Alex Moore
One possibility would be to get the intermediate results from each bucket,
and then compute the final results on the client. How much data would be
involved in the initial MR, and at the point where you would have to
combine the results?
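
As a sketch of the client-side combine (Java shown here, but the same shape
works from riakc_pb_socket; the bucket names and mapFnSource are placeholders):

List<BinaryValue> combined = new ArrayList<BinaryValue>();
for (String bucket : Arrays.asList("daily_stats", "weekly_stats")) {
    BucketMapReduce mr = new BucketMapReduce.Builder()
            .withNamespace(new Namespace("reports", bucket))
            .withMapPhase(Function.newAnonymousJsFunction(mapFnSource), true)
            .build();
    // merge each bucket's phase results locally
    combined.addAll(client.execute(mr).getResultsFromAllPhases());
}
// compute the monthly report from combined in application code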

--Alex

On Thu, Jan 28, 2016 at 12:10 PM, Eugene Shubin <ev.sh...@gmail.com> wrote:

> I want to keep daily and weekly data in different buckets, and generate
> monthly report using map reduce on these two.
> so 2 buckets and MR operations are custom.
>
> Best,
> Eugene
>
> 2016-01-28 17:29 GMT+01:00 Alex Moore <amo...@basho.com>:
>
>> Hi Eugene,
>>
>> MR is limited to one bucket for inputs, and the Solr inputs to a map
>> phase have this restriction too.
>>
>> How many buckets are you trying to MR across, and also what type of MR
>> operation are you trying to do? There may be another way to get around this
>> restriction.
>>
>> Thanks,
>> Alex
>>
>> On Thu, Jan 28, 2016 at 8:10 AM, Eugene Shubin <ev.sh...@gmail.com>
>> wrote:
>>
>>> Is it possible to run mapreduce job on two or more buckets?
>>> I see from documentation that it might be possible if I specify inputs
>>> as list of {bucket, key} pairs,
>>> although list of secondary index inputs causes an error:
>>> riakc_pb_socket:mapred(P, [
>>> {index, Bucket1, Index1, From, To},
>>> {index, Bucket2, Index2, From, To}
>>>   ], ...
>>>  {error,<<"{inputs,{\"Inputs target tuples must be {B,K} or
>>> {{B,K},KeyData}:\",\n
>>>
>>> Is it possible using Solr (riak search) indexes?
>>>
>>> Evgenii Shubin
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: map reduce on multiple buckets

2016-01-28 Thread Alex Moore
Hi Eugene,

MR is limited to one bucket for inputs, and the Solr inputs to a map phase
have this restriction too.

How many buckets are you trying to MR across, and also what type of MR
operation are you trying to do? There may be another way to get around this
restriction.

Thanks,
Alex

On Thu, Jan 28, 2016 at 8:10 AM, Eugene Shubin  wrote:

> Is it possible to run mapreduce job on two or more buckets?
> I see from documentation that it might be possible if I specify inputs as
> list of {bucket, key} pairs,
> although list of secondary index inputs causes an error:
> riakc_pb_socket:mapred(P, [
> {index, Bucket1, Index1, From, To},
> {index, Bucket2, Index2, From, To}
>   ], ...
>  {error,<<"{inputs,{\"Inputs target tuples must be {B,K} or
> {{B,K},KeyData}:\",\n
>
> Is it possible using Solr (riak search) indexes?
>
> Evgenii Shubin
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Alex Moore
If the contract is "Return true iff the object existed", then the second
fetch is superfluous, and so is the async example I posted.  You can use the
code you had as-is.

Thanks,
Alex

On Mon, Feb 22, 2016 at 1:23 PM, Vanessa Williams <
vanessa.willi...@thoughtwire.ca> wrote:

> Hi Alex, would a second fetch just indicate that the object is *still*
> deleted? Or that this delete operation succeeded? In other words, perhaps
> what my contract really is is: return true if there was already a value
> there. In which case would the second fetch be superfluous?
>
> Thanks for your help.
>
> Vanessa
>
> On Mon, Feb 22, 2016 at 11:15 AM, Alex Moore <amo...@basho.com> wrote:
>
>> That's the correct behaviour: it should return true iff a value was
>>> actually deleted.
>>
>>
>> Ok, if that's the case you should do another FetchValue after the
>> deletion (to update the response.hasValues()) field, or use the async
>> version of the delete function. I also noticed that we weren't passing the
>> vclock to the Delete function, so I added that here as well:
>>
>> public boolean delete(String key) throws ExecutionException, 
>> InterruptedException {
>>
>> // fetch in order to get the causal context
>> FetchValue.Response response = fetchValue(key);
>>
>> if(response.isNotFound())
>> {
>> return ???; // what do we return if it doesn't exist?
>> }
>>
>> DeleteValue deleteValue = new DeleteValue.Builder(new Location(namespace, key))
>>  .withVClock(response.getVectorClock())
>>  .build();
>>
>> final RiakFuture<Void, Location> deleteFuture = client.executeAsync(deleteValue);
>>
>> deleteFuture.await();
>>
>> if(deleteFuture.isSuccess())
>> {
>> return true;
>> }
>> else
>> {
>> deleteFuture.cause(); // Cause of failure
>> return false;
>> }
>> }
>>
>>
>> Thanks,
>> Alex
>>
>> On Mon, Feb 22, 2016 at 10:48 AM, Vanessa Williams <
>> vanessa.willi...@thoughtwire.ca> wrote:
>>
>>> See inline:
>>>
>>> On Mon, Feb 22, 2016 at 10:31 AM, Alex Moore <amo...@basho.com> wrote:
>>>
>>>> Hi Vanessa,
>>>>
>>>> You might have a problem with your delete function (depending on its
>>>> return value).
>>>> What does the return value of the delete() function indicate?  Right
>>>> now if an object existed, and was deleted, the function will return true,
>>>> and will only return false if the object didn't exist or only consisted of
>>>> tombstones.
>>>>
>>>
>>>
>>> That's the correct behaviour: it should return true iff a value was
>>> actually deleted.
>>>
>>>
>>>> If you never look at the object value returned by your fetchValue(key) 
>>>> function, another potential optimization you could make is to only return 
>>>> the HEAD / metadata:
>>>>
>>>> FetchValue fv = new FetchValue.Builder(new Location(new Namespace(
>>>> "some_bucket"), key))
>>>>
>>>>   .withOption(FetchValue.Option.HEAD, true)
>>>>   .build();
>>>>
>>>> This would be more efficient because Riak won't have to send you the
>>>> values over the wire, if you only need the metadata.
>>>>
>>>>
>>> Thanks, I'll clean that up.
>>>
>>>
>>>> If you do write this up somewhere, share the link! :)
>>>>
>>>
>>> Will do!
>>>
>>> Regards,
>>> Vanessa
>>>
>>>
>>>>
>>>> Thanks,
>>>> Alex
>>>>
>>>>
>>>> On Mon, Feb 22, 2016 at 6:23 AM, Vanessa Williams <
>>>> vanessa.willi...@thoughtwire.ca> wrote:
>>>>
>>>>> Hi Dmitri, this thread is old, but I read this part of your answer
>>>>> carefully:
>>>>>
>>>>> You can use the following strategies to prevent stale values, in
>>>>>> increasing order of security/preference:
>>>>>> 1) Use timestamps (and not pass in vector clocks/causal context).
>>>>>> This is ok if you're not editing objects, or you're ok with a bit of risk
>>>>>> of stale

Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Alex Moore
Hi Vanessa,

You might have a problem with your delete function (depending on its
return value).
What does the return value of the delete() function indicate?  Right now if
an object existed, and was deleted, the function will return true, and will
only return false if the object didn't exist or only consisted of
tombstones.

If you never look at the object value returned by your fetchValue(key)
function, another potential optimization you could make is to only
return the HEAD / metadata:

FetchValue fv = new FetchValue.Builder(new Location(new Namespace(
"some_bucket"), key))

  .withOption(FetchValue.Option.HEAD, true)
  .build();

This would be more efficient because Riak won't have to send you the values
over the wire, if you only need the metadata.

If you do write this up somewhere, share the link! :)

Thanks,
Alex


On Mon, Feb 22, 2016 at 6:23 AM, Vanessa Williams <
vanessa.willi...@thoughtwire.ca> wrote:

> Hi Dmitri, this thread is old, but I read this part of your answer
> carefully:
>
> You can use the following strategies to prevent stale values, in
>> increasing order of security/preference:
>> 1) Use timestamps (and not pass in vector clocks/causal context). This is
>> ok if you're not editing objects, or you're ok with a bit of risk of stale
>> values.
>> 2) Use causal context correctly (which means, read-before-you-write -- in
>> fact, the Update operation in the java client does this for you, I think).
>> And if Riak can't determine which version is correct, it will fall back on
>> timestamps.
>> 3) Turn on siblings, for that bucket or bucket type.  That way, Riak will
>> still try to use causal context to decide the right value. But if it can't
>> decide, it will store BOTH values, and give them back to you on the next
>> read, so that your application can decide which is the correct one.
>
>
> I decided on strategy #2. What I am hoping for is some validation that the
> code we use to "get", "put", and "delete" is correct in that context, or if
> it could be simplified in some cases. Note we are using delete-mode
> "immediate" and no duplicates.
>
> In their shortest possible forms, here are the three methods I'd like some
> feedback on (note, they're being used in production and haven't caused any
> problems yet, however we have very few writes in production so the lack of
> problems doesn't support the conclusion that the implementation is
> correct.) Note all argument-checking, exception-handling, and logging
> removed for clarity. *I'm mostly concerned about correct use of causal
> context and response.isNotFound and response.hasValues. *Is there
> anything I could/should have left out?
>
> public RiakClient(String name, com.basho.riak.client.api.RiakClient
> client)
> {
> this.name = name;
> this.namespace = new Namespace(name);
> this.client = client;
> }
>
> public byte[] get(String key) throws ExecutionException,
> InterruptedException {
>
> FetchValue.Response response = fetchValue(key);
> if (!response.isNotFound())
> {
> RiakObject riakObject = response.getValue(RiakObject.class);
> return riakObject.getValue().getValue();
> }
> return null;
> }
>
> public void put(String key, byte[] value) throws ExecutionException,
> InterruptedException {
>
> // fetch in order to get the causal context
> FetchValue.Response response = fetchValue(key);
> RiakObject storeObject = new
>
> RiakObject().setValue(BinaryValue.create(value)).setContentType("binary/octet-stream");
> StoreValue.Builder builder =
> new StoreValue.Builder(storeObject).withLocation(new
> Location(namespace, key));
> if (response.getVectorClock() != null) {
> builder = builder.withVectorClock(response.getVectorClock());
> }
> StoreValue storeValue = builder.build();
> client.execute(storeValue);
> }
>
> public boolean delete(String key) throws ExecutionException,
> InterruptedException {
>
> // fetch in order to get the causal context
> FetchValue.Response response = fetchValue(key);
> if (!response.isNotFound())
> {
> DeleteValue deleteValue = new DeleteValue.Builder(new
> Location(namespace, key)).build();
> client.execute(deleteValue);
> }
> return !response.isNotFound() || !response.hasValues();
> }
>
>
> Any comments much appreciated. I want to provide a minimally correct
> example of simple client code somewhere (GitHub, blog post, something...)
> so I don't want to post this without review.
>
> Thanks,
> Vanessa
>
> ThoughtWire Corporation
> http://www.thoughtwire.com
>
>
>
>
> On Thu, Oct 8, 2015 at 8:45 AM, Dmitri Zagidulin 
> wrote:
>
>> Hi Vanessa,
>>
>> The thing to keep in mind about read repair is -- it happens
>> asynchronously on every GET, but 

Re: Increase number of partitions above 1024

2016-02-22 Thread Alex Moore
Ok, what does `riak-admin status | grep riak_kv_version` return?  The
config files are different for Riak 1.x and 2.x.

Also for your tests, are you using any "coverage query" features like
MapReduce or 2i queries?

Thanks,
Alex




On Mon, Feb 22, 2016 at 10:43 AM, Chathuri Gunawardhana <
lanch.gunawardh...@gmail.com> wrote:

> For my experiment I will be using 100 nodes.
>
> Thank you!
>
> On Mon, Feb 22, 2016 at 4:40 PM, Alex Moore <amo...@basho.com> wrote:
>
>> Hi Chathuri,
>>
Larger ring sizes are not usually recommended: you can overload disk I/O
if the ratio of vnodes to nodes is too high, and you can underutilize other
system resources if the vnode/node ratio is too low.
>>
>> How many nodes are you planning on running?
>>
>> Thanks,
>> Alex
>>
>> On Mon, Feb 22, 2016 at 5:42 AM, Chathuri Gunawardhana <
>> lanch.gunawardh...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> It is not possible to increase the number of partitions above 1024 and
>>> has been disabled via cuttlefish in riak.config. When I try to increase
>>> ring_size via riak.config, the error suggest that I should configure
>>> partition size>1024 via advanced config file. But I couldn't find a way of
>>> how I can specify this in advanced.config file. Can you please suggest me
>>> how I can do this?
>>>
>>> Thank you very much!
>>>
>>> --
>>> Chathuri Gunawardhana
>>>
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>>
>>
>
>
> --
> Chathuri Gunawardhana
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Riak client can't handle a Riak node failure?

2016-02-22 Thread Alex Moore
>
> That's the correct behaviour: it should return true iff a value was
> actually deleted.


Ok, if that's the case you should do another FetchValue after the deletion
(to update the response.hasValues()) field, or use the async version of the
delete function. I also noticed that we weren't passing the vclock to the
Delete function, so I added that here as well:

public boolean delete(String key) throws ExecutionException, InterruptedException {

    // fetch in order to get the causal context
    FetchValue.Response response = fetchValue(key);

    if (response.isNotFound())
    {
        return ???; // what do we return if it doesn't exist?
    }

    DeleteValue deleteValue = new DeleteValue.Builder(new Location(namespace, key))
            .withVClock(response.getVectorClock())
            .build();

    final RiakFuture<Void, Location> deleteFuture = client.executeAsync(deleteValue);

    deleteFuture.await();

    if (deleteFuture.isSuccess())
    {
        return true;
    }
    else
    {
        deleteFuture.cause(); // Cause of failure
        return false;
    }
}


Thanks,
Alex

On Mon, Feb 22, 2016 at 10:48 AM, Vanessa Williams <
vanessa.willi...@thoughtwire.ca> wrote:

> See inline:
>
> On Mon, Feb 22, 2016 at 10:31 AM, Alex Moore <amo...@basho.com> wrote:
>
>> Hi Vanessa,
>>
>> You might have a problem with your delete function (depending on its
>> return value).
>> What does the return value of the delete() function indicate?  Right now
>> if an object existed, and was deleted, the function will return true, and
>> will only return false if the object didn't exist or only consisted of
>> tombstones.
>>
>
>
> That's the correct behaviour: it should return true iff a value was
> actually deleted.
>
>
>> If you never look at the object value returned by your fetchValue(key) 
>> function, another potential optimization you could make is to only return 
>> the HEAD / metadata:
>>
>> FetchValue fv = new FetchValue.Builder(new Location(new Namespace(
>> "some_bucket"), key))
>>
>>   .withOption(FetchValue.Option.HEAD, true)
>>   .build();
>>
>> This would be more efficient because Riak won't have to send you the
>> values over the wire, if you only need the metadata.
>>
>>
> Thanks, I'll clean that up.
>
>
>> If you do write this up somewhere, share the link! :)
>>
>
> Will do!
>
> Regards,
> Vanessa
>
>
>>
>> Thanks,
>> Alex
>>
>>
>> On Mon, Feb 22, 2016 at 6:23 AM, Vanessa Williams <
>> vanessa.willi...@thoughtwire.ca> wrote:
>>
>>> Hi Dmitri, this thread is old, but I read this part of your answer
>>> carefully:
>>>
>>> You can use the following strategies to prevent stale values, in
>>>> increasing order of security/preference:
>>>> 1) Use timestamps (and not pass in vector clocks/causal context). This
>>>> is ok if you're not editing objects, or you're ok with a bit of risk of
>>>> stale values.
>>>> 2) Use causal context correctly (which means, read-before-you-write --
>>>> in fact, the Update operation in the java client does this for you, I
>>>> think). And if Riak can't determine which version is correct, it will fall
>>>> back on timestamps.
>>>> 3) Turn on siblings, for that bucket or bucket type.  That way, Riak
>>>> will still try to use causal context to decide the right value. But if it
>>>> can't decide, it will store BOTH values, and give them back to you on the
>>>> next read, so that your application can decide which is the correct one.
>>>
>>>
>>> I decided on strategy #2. What I am hoping for is some validation that
>>> the code we use to "get", "put", and "delete" is correct in that context,
>>> or if it could be simplified in some cases. Note we are using delete-mode
>>> "immediate" and no duplicates.
>>>
>>> In their shortest possible forms, here are the three methods I'd like
>>> some feedback on (note, they're being used in production and haven't caused
>>> any problems yet, however we have very few writes in production so the lack
>>> of problems doesn't support the conclusion that the implementation is
>>> correct.) Note all argument-checking, exception-handling, and logging
>>> removed for clarity. I'm mostly concerned about correct use of causal
>>> context and response.isNotFound and response.hasValues. Is there
>>> anything I could/should have 

Re: Solr Error Handling

2016-02-26 Thread Alex Moore
Hey Colin,

Do you see any errors in your solr log that would give you the info on the
bad entries?

Thanks,
Alex

On Fri, Feb 26, 2016 at 10:40 AM, Colin Walker  wrote:

> Hey again everyone,
>
> Due to bad planning on my part, Solr is having trouble indexing some of
> the fields I am sending to it, specifically, I ended up with some string
> fields in a numerical field. Is there a way to retrieve the records from
> Riak that have thrown errors in solr?
>
> Cheers,
>
> Colin
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: accessor for FetchDatatype's location?

2016-01-20 Thread Alex Moore
Hey David,

We include/shade that jar in the final riak-client jar so we stopped
shipping it separately.
You'll want to clone riak_pb, and build this tag to install it locally for
dev:  https://github.com/basho/riak_pb/tree/java-2.1.1.0

You'll need protocol buffers 2.5.0
<https://github.com/google/protobuf/releases/tag/v2.5.0> to build, and you
should be able to build/install the riak_pb lib with `mvn install`.  If you
use homebrew you can use this <http://stackoverflow.com/a/23760535> procedure
to install the older protobuf lib (just swap protobuf241 for protobuf250).

Thanks,
Alex

On Thu, Jan 14, 2016 at 2:22 PM, David Byron <dby...@dbyron.com> wrote:

> On 1/14/16 7:40 AM, Alex Moore wrote:
> > Hi David,
> >
> > It doesn't look like we expose that property anywhere, but it can
> > probably be chalked up to YAGNI when it was written.   Go forth and
> > PR :)
>
> Excellent...except for this at the HEAD of develop (24e1404).
>
> $ mvn clean install
> [INFO] Scanning for projects...
> [INFO]
> [INFO]
> 
> [INFO] Building Riak Client for Java 2.0.5-SNAPSHOT
> [INFO]
> 
> [WARNING] The POM for com.basho.riak.protobuf:riak-pb:jar:2.1.1.0 is
> missing, no dependency information available
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time: 0.285 s
> [INFO] Finished at: 2016-01-14T11:17:22-08:00
> [INFO] Final Memory: 10M/309M
> [INFO]
> 
> [ERROR] Failed to execute goal on project riak-client: Could not resolve
> dependencies for project com.basho.riak:riak-client:jar:2.0.5-SNAPSHOT:
> Failure to find com.basho.riak.protobuf:riak-pb:jar:2.1.1.0 in
> https://repo.maven.apache.org/maven2 was cached in the local repository,
> resolution will not be reattempted until the update interval of central has
> elapsed or updates are forced -> [Help 1]
>
> The latest version I see at
> http://mvnrepository.com/artifact/com.basho.riak.protobuf/riak-pb is
> 2.0.0.16.  When I change pom.xml to use that version I get truckloads of
> errors.
>
> -DB
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Can't delete objects from Riak KV

2016-04-14 Thread Alex Moore
Hi Alex,

It looks like Riak couldn't find the object to delete (404), but ran into
an issue generating the 404 response.

Could you send the output from `riak-debug` to either Luke or myself?

Thanks,
Alex

On Thu, Apr 14, 2016 at 10:07 AM, Alex De la rosa 
wrote:

> I seem to be having this error messages on the log, any ideas?
>
> 2016-04-14 16:03:00.460 [error] <0.5460.8143> CRASH REPORT Process
> <0.5460.8143> with 0 neighbours crashed with reason: call to undefined
> function webmachine_error_handler:render_error(404,
> {webmachine_request,{wm_reqstate,#Port<0.147587697>,[],undefined,undefined,"xx.xx.xx.xx",{wm_reqdata,...},...}},
> {none,none,[]})
> Thanks,
> Alex
>
> On Thu, Apr 14, 2016 at 6:04 PM, Luke Bakken  wrote:
>
>> Hi Alex,
>>
>> Thanks for running that. This proves that it is not a Python client
>> issue. You can see the transcript of storing, fetching and deleting an
>> object successfully here:
>> https://gist.github.com/lukebakken/f1f3cbc96c2762eabb2f124b42797fda
>>
>> At this point, I suggest checking the error.log files on each Riak
>> node for information. Or, if you run "riak-debug" on your cluster and
>> provide the archives somewhere (private access), I could take a look.
>>
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>>
>> On Thu, Apr 14, 2016 at 6:57 AM, Alex De la rosa
>>  wrote:
>> > Hi Luke, I tried and get this and didn't work:
>> >
>> > ~ # curl -4vvv -XDELETE
>> http://xx.xx.xx.xx:8098/buckets/test/keys/something
>> > * Hostname was NOT found in DNS cache
>> > *   Trying xx.xx.xx.xx...
>> > * Connected to xx.xx.xx.xx (xx.xx.xx.xx) port 8098 (#0)
>> >> DELETE /buckets/test/keys/something HTTP/1.1
>> >> User-Agent: curl/7.35.0
>> >> Host: xx.xx.xx.xx:8098
>> >> Accept: */*
>> >>
>> > * Empty reply from server
>> > * Connection #0 to host xx.xx.xx.xx left intact
>> > curl: (52) Empty reply from server
>> >
>> > Thanks,
>> > Alex
>> >
>> > On Thu, Apr 14, 2016 at 5:50 PM, Alex De la rosa <
>> alex.rosa@gmail.com>
>> > wrote:
>> >>
>> >> I can try that, but I would like to do it via the python client
>> itself...
>> >>
>> >> Thanks,
>> >> Rohman
>> >>
>> >> On Thu, Apr 14, 2016 at 5:47 PM, Luke Bakken 
>> wrote:
>> >>>
>> >>> Hi Alex,
>> >>>
>> >>> Can you use the HTTP API to delete an object? Something like:
>> >>>
>> >>> curl -4vvv -XDELETE riak-host:8098/buckets/test/keys/something
>> >>>
>> >>> --
>> >>> Luke Bakken
>> >>> Engineer
>> >>> lbak...@basho.com
>> >>>
>> >>>
>> >>> On Thu, Apr 14, 2016 at 2:05 AM, Alex De la rosa
>> >>>  wrote:
>> >>> > I upgraded the Python library to the latest and is still failing...
>> I'm
>> >>> > unable to delete any objects at all.
>> >>> >
>> >>> > ~ # pip show riak
>> >>> > ---
>> >>> > Name: riak
>> >>> > Version: 2.4.2
>> >>> > Location: /usr/local/lib/python2.7/dist-packages
>> >>> > Requires: six, pyOpenSSL, protobuf
>> >>> >
>> >>> > Everything else seems fine, just timeouts when deleting :(
>> >>> >
>> >>> > Thanks,
>> >>> > Alex
>> >>> >
>> >>> > On Thu, Apr 14, 2016 at 8:53 AM, Alex De la rosa
>> >>> > 
>> >>> > wrote:
>> >>> >>
>> >>> >> Hi there,
>> >>> >>
>> >>> >> I'm trying to delete objects from riak with the python library and
>> is
>> >>> >> timing out, any ideas? (this example is from a simple object, but
>> also
>> >>> >> have
>> >>> >> issues with bucket types with map objects, etc...)... Just I seem
>> to
>> >>> >> unable
>> >>> >> to delete anything, just times out.
>> >>> >>
>> >>> >> >>> import riak
>> >>> >> >>> RIAK = riak.RiakClient(protocol = 'pbc', nodes = [{'host':
>> >>> >> >>> '',
>> >>> >> >>> 'http_port': 8098, 'pb_port': 8087}])
>> >>> >> >>> x = RIAK.bucket('test').get('something')
>> >>> >> >>> print x.data
>> >>> >> {"something":"here"}
>> >>> >> >>> x.delete()
>> >>> >> Traceback (most recent call last):
>> >>> >>   File "", line 1, in 
>> >>> >>   File
>> "/usr/local/lib/python2.7/dist-packages/riak/riak_object.py",
>> >>> >> line
>> >>> >> 329, in delete
>> >>> >> timeout=timeout)
>> >>> >>   File
>> >>> >> "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py",
>> >>> >> line 196, in wrapper
>> >>> >> return self._with_retries(pool, thunk)
>> >>> >>   File
>> >>> >> "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py",
>> >>> >> line 138, in _with_retries
>> >>> >> return fn(transport)
>> >>> >>   File
>> >>> >> "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py",
>> >>> >> line 194, in thunk
>> >>> >> return fn(self, transport, *args, **kwargs)
>> >>> >>   File
>> >>> >> "/usr/local/lib/python2.7/dist-packages/riak/client/operations.py",
>> >>> >> line 744, in delete
>> >>> >> pw=pw, timeout=timeout)
>> >>> >>   File
>> >>> >>
>> >>> >>
>> "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/transport.py",
>> >>> >> line 283, in delete
>> >>> >> 

Request for Comments - Discontinuing Java Client Support for Java 7

2016-07-13 Thread Alex Moore
Hi All,

It's been more than a year since the End of Public Updates for Java 7, and
we're interested in dropping support for it as well. The only issue is that
since it was so widely used, some of you may need to use it for new
projects.

We'd like anybody that is stuck on Java 7 to leave a thumbs down on the
Github issue, and if you'd like to cast down Java 7, leave a thumbs up on
the issue.

https://github.com/basho/riak-java-client/issues/635

Thanks,
Alex Moore
The Benevolent Java Dictator
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to best store arbitrarily large Java objects

2016-07-21 Thread Alex Moore
Hi Henning,

Responses inline:

...

> However, depending on the size of the `TreeMap`, the serialization
> output can become rather large, and this limits the usefulness of my
> object. In our tests, dealing with Riak-objects >2MB proved to be
> significantly slower than dealing with objects <200kB.


Yes. We usually recommend keeping objects < 100kB for the best
performance; Riak can usually withstand objects up to 1MB, with the
understanding that everything will be a little slower as those larger
objects move around the system.


> My idea was to use a converter that splits the serialized JSON into
> chunks during _write_, and uses links to point from one chunk to the
> next. During _fetch_ the links would be traversed, the JSON string
> concatenated from chunks, deserialized and the object would be
> returned. Looking at `com.basho.riak.client.api.convert.Converter`, it
> seems this is not going to work.


Linkwalking was deprecated in Riak 2.0 so I wouldn't do it that way.

I'm beginning to think that I'll need to remodel my data and use CRDTs
> for individual fields such as the `TreeMap`. Would that be a better
> way?


This sounds like a plausible idea.  If you do a lot of possibly conflicting
updates to the Tree, then a CRDT map would be the way to go.  You could
reuse the key from the main object, and just put it in the new
bucket type/bucket.

If you don't need to update the tree much, you could also just serialize
the tree into its own object - split up the static data and the
often-updated data, and put them in different buckets that share the same key.
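
As a rough sketch of that last option (the bucket names and the two JSON
strings are made up for illustration) - same key, two buckets:

String key = "user-42";

RiakObject hot = new RiakObject()
        .setContentType("application/json")
        .setValue(BinaryValue.create(treeJson)); // the large, often-updated TreeMap
client.execute(new StoreValue.Builder(hot)
        .withLocation(new Location(new Namespace("profiles_tree"), key))
        .build());

RiakObject cold = new RiakObject()
        .setContentType("application/json")
        .setValue(BinaryValue.create(staticJson)); // the small, static fields
client.execute(new StoreValue.Builder(cold)
        .withLocation(new Location(new Namespace("profiles_static"), key))
        .build());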

Thanks,
Alex


On Thu, Jul 21, 2016 at 9:36 AM, Henning Verbeek 
wrote:

> I have a Java class, which is being stored in Riak. The class contains
> a `TreeMap` field, amongst other fields. Out of the box, Riak is
> converting the object to/from JSON. Everything works fine.
>
> However, depending on the size of the `TreeMap`, the serialization
> output can become rather large, and this limits the usefulness of my
> object. In our tests, dealing with Riak-objects >2MB proved to be
> significantly slower than dealing with objects <200kB.
>
> So, in order to store/fetch instances of my class with arbitrary
> sizes, but with reliable performance, I believe I need to split the
> output into separate Riak-objects after serialization, and reassemble
> before deserialization.
>
> My idea was to use a converter that splits the serialized JSON into
> chunks during _write_, and uses links to point from one chunk to the
> next. During _fetch_ the links would be traversed, the JSON string
> concatenated from chunks, deserialized and the object would be
> returned. Looking at `com.basho.riak.client.api.convert.Converter`, it
> seems this is not going to work.
>
> I'm beginning to think that I'll need to remodel my data and use CRDTs
> for individual fields such as the `TreeMap`. Would that be a better
> way?
>
> Any other recommendations would be much appreciated.
>
> Thanks,
> Henning
> --
> My other signature is a regular expression.
> http://www.pray4snow.de
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Issues with the Go client

2016-08-16 Thread Alex Moore
Hi Kasper,

Looks like you might be trying to connect to Riak's HTTP port with the
Go client, which only speaks protocol buffers. Can you try switching to
the PB port (8087 by default)?

Thanks,
Alex

On Tue, Aug 16, 2016 at 4:11 PM, Kasper Tidemann  wrote:

> Hi everybody,
>
> I'm using the Go client for Riak, trying to make things work. I have a
> Riak node running locally on 127.0.0.1:26420 (HTTP).
>
> The node responds fine to *$ riak ping*. GET'ing keys, searching for
> records via */search/query/data?wt=json=some:thing* works as well. It
> all seems to be working, except for the fact that I can't get the Go client
> to talk to the node - it encounters a timeout.
>
> I'm using the code found here, having replaced the IP and port:
>
> https://github.com/basho/riak-go-client/blob/master/
> examples/dev/using/search/main.go
>
> I have set *riak.EnableDebugLogging = true* to figure out what happens.
> Here is the output from running the code:
>
> --
>
> 2016/08/16 22:06:06 [DEBUG] [Cluster] starting
> 2016/08/16 22:06:06 [DEBUG] [Node] (127.0.0.1:26420|0|0) starting
> 2016/08/16 22:06:06 [DEBUG] [Connection] connected to: 127.0.0.1:26420
> 2016/08/16 22:06:06 [DEBUG] [Node] (127.0.0.1:26420|1|1) started
> 2016/08/16 22:06:06 [DEBUG] [Cluster] cluster started
> 2016/08/16 22:06:06 [DEBUG] [Node] (127.0.0.1:26420|1|0) - executing
> command 'Ping-1'
> 2016/08/16 22:06:06 [DEBUG] [connectionManager] connection expiration
> routine is starting
> 2016/08/16 22:06:11 [DEBUG] [connectionManager] (127.0.0.1:26420)
> expiring connections at 2016-08-16 22:06:11.858769974 +0200 CEST
> 2016/08/16 22:06:11 [DEBUG] [connectionManager] (127.0.0.1:26420) expired
> 0 connections.
> 2016/08/16 22:06:16 [DEBUG] [connectionManager] (127.0.0.1:26420)
> expiring connections at 2016-08-16 22:06:16.857905418 +0200 CEST
> 2016/08/16 22:06:16 [DEBUG] [connectionManager] (127.0.0.1:26420) expired
> 0 connections.
> 2016/08/16 22:06:21 [DEBUG] [connectionManager] (127.0.0.1:26420)
> expiring connections at 2016-08-16 22:06:21.858036918 +0200 CEST
> 2016/08/16 22:06:21 [DEBUG] [connectionManager] (127.0.0.1:26420) expired
> 0 connections.
> 2016/08/16 22:06:26 [DEBUG] [connectionManager] (127.0.0.1:26420)
> expiring connections at 2016-08-16 22:06:26.853930918 +0200 CEST
> 2016/08/16 22:06:26 [DEBUG] [connectionManager] (127.0.0.1:26420) expired
> 0 connections.
> 2016/08/16 22:06:31 [DEBUG] [connectionManager] (127.0.0.1:26420)
> expiring connections at 2016-08-16 22:06:31.858905057 +0200 CEST
> 2016/08/16 22:06:31 [DEBUG] [connectionManager] (127.0.0.1:26420) expired
> 0 connections.
> 2016/08/16 22:06:36 [DEBUG] [connectionManager] (127.0.0.1:26420)
> expiring connections at 2016-08-16 22:06:36.85518287 +0200 CEST
> 2016/08/16 22:06:36 [DEBUG] [connectionManager] (127.0.0.1:26420) expired
> 0 connections.
> 2016/08/16 22:06:36 [DEBUG] [DefaultNodeManager] executed 'Ping-1' on node
> '127.0.0.1:26420|0|0', err 'read tcp 127.0.0.1:49998->127.0.0.1:26420:
> i/o timeout'
> 2016/08/16 22:06:36 [DEBUG] [Cluster] executed cmd 'Ping-1': re-try due to
> error 'read tcp 127.0.0.1:49998->127.0.0.1:26420: i/o timeout'
> 2016/08/16 22:06:36 [DEBUG] [Cluster] cmd Ping-1 tries: 2
> 2016/08/16 22:06:36 [DEBUG] [Async] onRetry cmd: Ping-1 sleep: 100ms
> 2016/08/16 22:06:36 [DEBUG] [Connection] connected to: 127.0.0.1:26420
> 2016/08/16 22:06:36 [DEBUG] [Node] (127.0.0.1:26420|1|0) - executing
> command 'Ping-1'
> 2016/08/16 22:06:41 [DEBUG] [connectionManager] (127.0.0.1:26420)
> expiring connections at 2016-08-16 22:06:41.858466787 +0200 CEST
> 2016/08/16 22:06:41 [DEBUG] [connectionManager] (127.0.0.1:26420) expired
> 0 connections.
> 2016/08/16 22:06:46 [DEBUG] [connectionManager] (127.0.0.1:26420)
> expiring connections at 2016-08-16 22:06:46.857634755 +0200 CEST
> 2016/08/16 22:06:46 [DEBUG] [connectionManager] (127.0.0.1:26420) expired
> 0 connections.
> 2016/08/16 22:06:51 [DEBUG] [connectionManager] (127.0.0.1:26420)
> expiring connections at 2016-08-16 22:06:51.857639326 +0200 CEST
> 2016/08/16 22:06:51 [DEBUG] [connectionManager] (127.0.0.1:26420) expired
> 0 connections.
> 2016/08/16 22:06:56 [DEBUG] [connectionManager] (127.0.0.1:26420)
> expiring connections at 2016-08-16 22:06:56.854398145 +0200 CEST
> 2016/08/16 22:06:56 [DEBUG] [connectionManager] (127.0.0.1:26420) expired
> 0 connections.
> 2016/08/16 22:07:01 [DEBUG] [connectionManager] (127.0.0.1:26420)
> expiring connections at 2016-08-16 22:07:01.85134 +0200 CEST
> 2016/08/16 22:07:01 [DEBUG] [connectionManager] (127.0.0.1:26420) expired
> 0 connections.
> 2016/08/16 22:07:06 [DEBUG] [connectionManager] (127.0.0.1:26420)
> expiring connections at 2016-08-16 22:07:06.858879614 +0200 CEST
> 2016/08/16 22:07:06 [DEBUG] [connectionManager] (127.0.0.1:26420) expired
> 0 connections.
> 2016/08/16 22:07:06 [DEBUG] [DefaultNodeManager] executed 'Ping-1' on node
> '127.0.0.1:26420|0|0', err 'read tcp 127.0.0.1:50009->127.0.0.1:26420:
> i/o timeout'
> 2016/08/16 

Re: Start up problem talking to Riak

2017-02-13 Thread Alex Moore
Hi David,

In your riak.conf files, what do the "listener.protobuf.internal" and
"listener.http.internal" lines look like?  Are they bound to "127.0.0.1",
"0.0.0.0", or the external IP address?

Thanks,
Alex

On Mon, Feb 13, 2017 at 5:00 AM, AWS  wrote:

>  I know that this isn't directly a Riak issue but I am sure that some of
> you have met this before and can maybe help me. I am used to Macs and
> Windows but have now set up an Ubuntu 14.04LTS server on my home network. I
> have 5 fixed IP addresses so the server has its own external address. I
> have opened port 8098 on my router to point at the server and checked that
> ufw isn't running. I have also tested with ufw running and with 'allow 8098'
> applied. I still cannot connect to Riak. On the same computer I get a pong
> back to a ping so Riak seems to be OK.
>
> I have a Riak server running on AWS and had trouble setting that up until
> I, eventually, opened all ports.
>
> Can anyone please suggest some steps that I might take? I need this
> running for an Open University course that I am studying. My AWS free
> server runs out before the course finishes so I have to get this up and
> running soon.
> Thanks  in advance.
> David
>
> --
>
> Message sent using Winmail Mail Server
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Start up problem talking to Riak

2017-02-13 Thread Alex Moore
Yeah, what Alexander said.  You can't see it from your application because
it's currently bound to the localhost loopback address, but it's still a bad
idea to just expose everything publicly.

1. Where is this cluster running? (AWS or local dev cluster?)
2. What are you trying to connect to Riak with? Is it one of our clients or
just raw HTTP requests?

Thanks,
Alex

On Mon, Feb 13, 2017 at 10:33 AM, Alexander Sicular 
wrote:

> Please don't do that. Don't point the internet at your database. Have them
> communicate amongst each other on internal ips and route the public through
> a proxy / middleware.
>
> -Alexander
>
> @siculars
> http://siculars.posthaven.com
>
> Sent from my iRotaryPhone
>
> > On Feb 13, 2017, at 04:00, AWS  wrote:
> >
> >  I know that this isn't directly a Riak issue but I am sure that some of
> you have met this before and can maybe help me. I am used to Macs and
> Windows but have now set up an Ubuntu 14.04LTS server on my home network. I
> have 5 fixed IP addresses so the server has its own external address. I
> have opened port 8098 on my router to point at the server and checked that
> ufw isn't running. I have also tested with ufw running and with 'allow 8098'
> applied. I still cannot connect to Riak. On the same computer I get a pong
> back to a ping so Riak seems to be OK.
> >
> > I have a Riak server running on AWS and had trouble setting that up
> until I, eventually, opened all ports.
> >
> > Can anyone please suggest some steps that I might take? I need this
> running for an Open University course that I am studying. My AWS free
> server runs out before the course finishes so I have to get this up and
> running soon.
> > Thanks  in advance.
> > David
> >
> > Message sent using Winmail Mail Server
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RIAK TS installed nodes not connecting

2016-09-13 Thread Alex Moore
Joris,

One thing to check - since you are using a downloaded jar, are you using
the Uber jar that contains all the dependencies?
http://search.maven.org/remotecontent?filepath=com/basho/riak/spark-riak-connector_2.10/1.6.0/spark-riak-connector_2.10-1.6.0-uber.jar
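
If you're loading it the same way as before, that just means pointing your
%dep block at the uber jar instead - for example, assuming you download it
to the same directory:

%dep
z.reset()
z.load("/home/hadoop/spark-riak-connector_2.10-1.6.0-uber.jar")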

Thanks,
Alex

On Tue, Sep 13, 2016 at 8:44 AM, Stephen Etheridge 
wrote:

> Hi Joris,
>
> I have looked at the tutorial you have been following, but I confess I am
> confused.  In the example you are following I do not see where the Spark
> and SQL contexts are created.  I use PySpark through the Jupyter notebook,
> and I have to specify a path to the connector when invoking the Jupyter
> notebook. Is it possible for you to share all your code (and how you are
> invoking Zeppelin) with me so I can trace everything through?
>
> regards
> Stephen
>
> On Mon, Sep 12, 2016 at 3:27 PM, Agtmaal, Joris van <
> joris.vanagtm...@wartsila.com> wrote:
>
>> Hi
>>
>>
>>
>> I’m new to Riak and followed the installation instructions to get it
>> working on an AWS cluster (3 nodes).
>>
>>
>>
>> So far I've been able to use Riak in PySpark (Zeppelin) to
>> create/read/write tables, but I would like to use the dataframes directly
>> from Spark, using the Spark-Riak Connector.
>>
>> I am following the example found here:
>> http://docs.basho.com/riak/ts/1.4.0/add-ons/spark-riak-connector/quick-start/#python
>> But I run into trouble on this last part:
>> But i run into trouble on this last part:
>>
>>
>>
>> host= my_ip_adress_of_riak_node
>>
>> pb_port = '8087'
>>
>> hostAndPort = ":".join([host, pb_port])
>>
>> client = riak.RiakClient(host=host, pb_port=pb_port)
>>
>>
>>
>> df.write \
>>
>> .format('org.apache.spark.sql.riak') \
>>
>> .option('spark.riak.connection.host', hostAndPort) \
>>
>> .mode('Append') \
>>
>> .save('test')
>>
>>
>>
>> It is important to note that I'm using a local download of the jar file,
>> which is loaded into the PySpark interpreter in Zeppelin through:
>>
>> %dep
>>
>> z.reset()
>>
>> z.load("/home/hadoop/spark-riak-connector_2.10-1.6.0.jar")
>>
>>
>>
>> Here is the error message I get back:
>>
>> Py4JJavaError: An error occurred while calling o569.save. :
>> java.lang.NoClassDefFoundError: com/basho/riak/client/core/util/HostAndPort
>> at 
>> com.basho.riak.spark.rdd.connector.RiakConnectorConf$.apply(RiakConnectorConf.scala:76)
>> at 
>> com.basho.riak.spark.rdd.connector.RiakConnectorConf$.apply(RiakConnectorConf.scala:89)
>> at org.apache.spark.sql.riak.RiakRelation$.apply(RiakRelation.scala:115)
>> at 
>> org.apache.spark.sql.riak.DefaultSource.createRelation(DefaultSource.scala:51)
>> at org.apache.spark.sql.execution.datasources.ResolvedDataSourc
>> e$.apply(ResolvedDataSource.scala:222) at org.apache.spark.sql.DataFrame
>> Writer.save(DataFrameWriter.scala:148) at org.apache.spark.sql.DataFrame
>> Writer.save(DataFrameWriter.scala:139) at 
>> sun.reflect.NativeMethodAccessorImpl.invoke0(Native
>> Method) at 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606) at
>> py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) at
>> py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) at
>> py4j.Gateway.invoke(Gateway.java:259) at py4j.commands.AbstractCommand.
>> invokeMethod(AbstractCommand.java:133) at 
>> py4j.commands.CallCommand.execute(CallCommand.java:79)
>> at py4j.GatewayConnection.run(GatewayConnection.java:209) at
>> java.lang.Thread.run(Thread.java:745) (<class 'py4j.protocol.Py4JJavaError'>, Py4JJavaError(u'An error occurred while
>> calling o569.save.\n', JavaObject id=o570), <traceback object at 0x7f7021bb0200>)
>>
>>
>>
>> Hope somebody can help out.
>>
>> thanks, joris
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
>
> --
> { "name" : "Stephen Etheridge",
>"title" : "Solution Architect, EMEA",
>"Organisation" : "Basho Technologies, Inc",
>"Telephone" : "07814 406662",
>"email" : "mailto:setheri...@basho.com;,
>"github" : "http://github.com/datalemming;,
>"twitter" : "@datalemming"}
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Java Client 2.0.7 Released

2016-08-26 Thread Alex Moore
Hi All,

We've just released Java Client 2.0.7. This version includes numerous bug
fixes, community requests, and adds support for Riak TS 1.4.

Maven: http://search.maven.org/#artifactdetails%7Ccom.basho.riak%7Criak-client%7C2.0.7%7Cjar
All-in-one Jar: http://riak-java-client.s3.amazonaws.com/index.html
API docs: http://basho.github.io/riak-java-client/
Tag: https://github.com/basho/riak-java-client/releases/tag/riak-client-2.0.7

Notes

   - This will be the last planned version of Riak Java Client that
   supports Java 7.
   - Some of the changes are binary-incompatible with RJC 2.0.6, so you
   will need to recompile your project with this new version.

*Issues / PRs addressed:*

   - Fixed - Disallow 0 as a timeout value for TimeSeries operations
   - Fixed - In RiakUserMetadata#containsKey(), use the charset method
     parameter when encoding the key [1], [2]
   - Fixed - Don't return success to update future after fetch future error
     [1], [2]
   - Fixed - Demoted "channel close" log messages to info level
   - Fixed - Made domain name more invalid for UnknownHostException test
   - Fixed - Separate Content-type and charset in RiakObject
   - Fixed - BinaryValue JSON encoding for MapReduce inputs
   - Fixed - Catch & handle BlockingOperationException in RiakNode#execute
   - Added Batch Delete Command [1], [2]
   - Added equals(), hashCode(), toString() to RiakObject and associated
     files [1], [2]
   - Added getLocation() to KvResponseBase [1], [2]
   - Added creation of RiakClient from a collection of HostAndPort objects
   - Added overload of RiakClient#execute that accepts a timeout [1], [2]
   - Added shortcut commands for $bucket and $key 2i indices
   - Added isNotFound() field to data type responses
   - Added - DataPlatform / Riak Spark Connector changes merged back into
     main client [1], [2], [3], [4], [5]
   - Updated plugins and dependencies
   - Updated TS objects and Commands for TS 1.4
   - Enhanced - Made Integration Tests Great Again
   - Removed Antlr dependency

Special thanks to github users @bwittwer, @stela, @gerardstannard,
@christopherfrieler, @guidomedina, @Tolsi, @hankipanky, @gfbett,
@TimurFayruzov, @urzhumskov, @srgg, @aleksey-suprun, @jbrisbin,
@christophermancini, and @lukebakken for all the PRs, reported issues, and
reviews.

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak-users Digest, Vol 87, Issue 16

2016-10-30 Thread Alex Moore
Hi Pratik,

You should try our All-In-One / Uber jar: 
http://riak-java-client.s3.amazonaws.com/index.html

It contains all the dependencies that the Riak Java Client needs to operate.

Thanks,
Alex

> On Oct 30, 2016, at 1:19 PM, Pratik Kulkarni  wrote:
> 
> I added the Joda-Time jar. Then it complains about some other missing jar, and 
> keeps on doing so. The problem with Maven is that I am using Ant to build my project. 
> 
> Thanks!
> 
> 
>> On Oct 30, 2016 9:00 AM,  wrote:
>> Send riak-users mailing list submissions to
>> riak-users@lists.basho.com
>> 
>> To subscribe or unsubscribe via the World Wide Web, visit
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> or, via email, send a message with subject or body 'help' to
>> riak-users-requ...@lists.basho.com
>> 
>> You can reach the person managing the list at
>> riak-users-ow...@lists.basho.com
>> 
>> When replying, please edit your Subject line so it is more specific
>> than "Re: Contents of riak-users digest..."
>> 
>> 
>> Today's Topics:
>> 
>>1. Riak Java client API (Pratik Kulkarni)
>>2. Re: Riak Java client API (AJAX DoneBy Jack)
>> 
>> 
>> --
>> 
>> Message: 1
>> Date: Fri, 28 Oct 2016 11:47:00 -0700
>> From: Pratik Kulkarni 
>> To: riak-users@lists.basho.com
>> Subject: Riak Java client API
>> Message-ID: <437945ad-b9e1-4e4f-9e8e-5aac04894...@icloud.com>
>> Content-Type: text/plain; charset="us-ascii"
>> 
>> Hi All,
>> 
>> I am working on a distributed file storage system using the Java Netty
>> framework. For this purpose I have Riak KV as an in-memory storage solution.
>> The following jar dependencies are present in my build path:
>> 
>> jackson-all-1.8.5.jar
>> netty-all-4.0.15.Final.jar
>> slf4j-api-1.7.2.jar
>> slf4j-simple-1.7.2.jar
>> protobuf-java-2.6.1.jar
>> json-20160212.jar
>> riak-client-2.0.5.jar
>> 
>> When I try to initiate a connection with the Riak node, the connection
>> attempt is successful, but when I try to store the object in Riak KV,
>> I keep getting the following NoClassDefFoundError. I am not sure why
>> these errors occur, since I have included all the jars. Do we require
>> any more dependencies apart from the riak-client X.X jar? As per the
>> terminal output, I tried to add the dependencies by downloading the
>> jars, but it just keeps reporting a new missing dependency every time.
>> Kindly help?
>> 
>> Please see the Riak client code in Java that stores the file object:
>> 
>> 
>> package gash.router.inmemory;
>> 
>> import com.basho.riak.client.api.RiakClient;
>> import com.basho.riak.client.api.commands.kv.DeleteValue;
>> import com.basho.riak.client.api.commands.kv.FetchValue;
>> import com.basho.riak.client.api.commands.kv.StoreValue;
>> import com.basho.riak.client.core.RiakCluster;
>> import com.basho.riak.client.core.RiakNode;
>> import com.basho.riak.client.core.query.Location;
>> import com.basho.riak.client.core.query.Namespace;
>> import com.basho.riak.client.core.query.RiakObject;
>> import com.basho.riak.client.core.util.BinaryValue;
>> 
>> import java.net.UnknownHostException;
>> 
>> public class RiakClientHandler {
>> 
>> private static RiakCluster setUpCluster() throws UnknownHostException {
>>     // This example will use only one node listening on
>>     // localhost:8098 -- default config
>>     RiakNode node = new RiakNode.Builder()
>>             .withRemoteAddress("127.0.0.1")
>>             .withRemotePort(8098)
>>             .build();
>>
>>     // This cluster object takes our one node as an argument
>>     RiakCluster cluster = new RiakCluster.Builder(node).build();
>>
>>     // The cluster must be started to work, otherwise you will see errors
>>     cluster.start();
>>
>>     return cluster;
>> }
>>
>> private static class RiakFile {
>>     public String filename;
>>     public byte[] byteData;
>> }
>>
>> public static void saveFile(String filename, byte[] byteData) {
>>     try {
>>         System.out.println("Inside Riak handler");
>>         RiakCluster cluster = setUpCluster();
>>         RiakClient client = new RiakClient(cluster);
>>         RiakFile newFile = createRiakFile(filename, byteData);
>>         System.out.println("Riak file created");
>>         Namespace fileBucket = new Namespace("files");
>>         Location fileLocation = new Location(fileBucket, filename);
>>         StoreValue storeFile = new StoreValue.Builder(newFile)
>>                 .withLocation(fileLocation)
>>                 .build();
>>         client.execute(storeFile);
>>         System.out.println("File saved to riak ");
>>         cluster.shutdown();
>>     }
>>     catch(Exception e){
>> 

Re: How to specify dismax related parameters like qf

2016-10-17 Thread Alex Moore
Hey Ajax,

Have you tried adding those parameters to the LocalParams {!dismax} block?

e.g.: {!type=dismax qf='myfield yourfield'}solr rocks

http://wiki.apache.org/solr/LocalParams#Basic_Syntax
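
From the Java client, the LocalParams prefix just travels along with the
query string.  A minimal sketch - the index name and field names here are
made up:

import com.basho.riak.client.api.RiakClient;
import com.basho.riak.client.api.commands.search.Search;
import com.basho.riak.client.core.operations.SearchOperation;

public class DismaxSearchSketch {
    public static void main(String[] args) throws Exception {
        RiakClient client = RiakClient.newClient("127.0.0.1");

        // qf and friends ride along as LocalParams at the front of the query.
        String query = "{!type=dismax qf='myfield yourfield'}solr rocks";
        Search search = new Search.Builder("my_index", query).build();

        SearchOperation.Response response = client.execute(search);
        System.out.println(response.getAllResults());
        client.shutdown();
    }
}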

Thanks,
Alex

On Fri, Oct 14, 2016 at 3:18 PM, AJAX DoneBy Jack 
wrote:

> Hello Basho,
>
> I am very new to Riak Search. I know I can add {!dismax} before the query
> string to use it, but I don't know how to specify qf or other dismax-related
> parameters in the Riak Java Client. Could you advise?
>
> Thanks,
> Ajax
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Situations where a fetch can retun no causal context

2017-01-10 Thread Alex Moore
Hi Michael,

For the Set, Map, and Counter data types, the only other situation I can
think of is if the user explicitly set the "INCLUDE_CONTEXT" option to
false.  That option defaults to true, so a context should always be returned
as long as the data type you fetched isn't a bottom (initial) value.  If it
is the bottom value, the context will be NULL.
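
As an illustration, fetching a map with the option set explicitly and then
checking for the context might look like this sketch (bucket type, bucket,
and key are made up):

import com.basho.riak.client.api.RiakClient;
import com.basho.riak.client.api.commands.datatypes.FetchDatatype;
import com.basho.riak.client.api.commands.datatypes.FetchMap;
import com.basho.riak.client.core.query.Location;
import com.basho.riak.client.core.query.Namespace;

public class ContextCheckSketch {
    public static void main(String[] args) throws Exception {
        RiakClient client = RiakClient.newClient("127.0.0.1");
        Location loc = new Location(new Namespace("maps", "my-bucket"), "my-key");

        // INCLUDE_CONTEXT already defaults to true; set explicitly for clarity.
        FetchMap fetch = new FetchMap.Builder(loc)
                .withOption(FetchDatatype.Option.INCLUDE_CONTEXT, true)
                .build();

        FetchMap.Response response = client.execute(fetch);
        // false only if the option was disabled or the fetched value is bottom.
        System.out.println(response.hasContext());
        client.shutdown();
    }
}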

Thanks,
Alex

On Tue, Jan 10, 2017 at 11:50 AM, Edwards, Michael R (Contractor) <
medwa...@cas.org> wrote:

> Hi everyone,
> I saw in the documentation for the Java client that the
> FetchDatatype.Response object has a method called hasContext() that
> checks to see if the causal context was present on the fetch response. Are
> there any situations where a causal context won’t be returned besides if
> the object being fetched doesn’t exist?
>
> Thanks,
> Michael
>
> *Confidentiality Notice*: This electronic message transmission, including
> any attachment(s), may contain confidential, proprietary, or privileged
> information from Chemical Abstracts Service (“CAS”), a division of the
> American Chemical Society (“ACS”). If you have received this transmission
> in error, be advised that any disclosure, copying, distribution, or use of
> the contents of this information is strictly prohibited. Please destroy all
> copies of the message and contact the sender immediately by either replying
> to this message or calling 614-447-3600.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Question about RiakCluster - Java client - 2.x

2016-12-01 Thread Alex Moore
Hi Konstantin,

The RiakClient class is reentrant and thread-safe, so you should be able to
share it among the different workers. You may have to adjust the min / max
connection settings to get the most performance, but that's relatively
easy.

One other thing to note is RiakClient's cleanup() method,
since you are working in a servlet / container environment.
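
A minimal sketch of the shared-instance approach - the host names are
placeholders, and the shutdown hook would typically live in a
ServletContextListener:

import com.basho.riak.client.api.RiakClient;

public final class SharedRiak {
    private static final RiakClient CLIENT;

    static {
        try {
            // One client, shared by every business activity in the JVM.
            CLIENT = RiakClient.newClient("riak-node-1", "riak-node-2");
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private SharedRiak() {}

    public static RiakClient client() {
        return CLIENT;
    }

    // Call from the container's shutdown hook (e.g. contextDestroyed()).
    public static void shutdown() {
        CLIENT.shutdown();
    }
}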

Thanks,
Alex

On Mon, Nov 21, 2016 at 3:49 PM, Konstantin Kalin <
konstantin.ka...@gmail.com> wrote:

> I'm currently migrating Java client from 1.4 to 2.1 and I have a question
> about RiakCluster class.
>
> We hide Riak Java API by our interface since we use multiple backends and
> Riak is one of them.
> Let's say I have two independent business activities that makes calls to
> Riak cluster. Both activities are executed within same Tomcat instance.
> Currently we use two RiakClient instances (1.4). Each activity initializes
> its own RiakClient.
> Since I do the migration I cannot decide. Would it have more sense to
> create a "singleton" object that will own RiakCluster instead of creating
> two instances? What should I consider as recommended approach?
>
> Thank you,
> Konstantin.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Unsolved problem using Riak TS Java Client

2017-04-12 Thread Alex Moore
Hi Allexandre,

A few things to start:

1. Could you share how you are setting up the connections in the Java
client?
2. Are you able to connect to your Riak TS cluster using the HTTP
interface, or riak_shell?
3. Are there any log files from your Java application?
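
For comparison, a bare-bones setup that should reach a TS node on the
default protocol buffers port looks roughly like this (the address is a
placeholder):

import com.basho.riak.client.api.RiakClient;
import com.basho.riak.client.core.RiakCluster;
import com.basho.riak.client.core.RiakNode;

public class TsConnectSketch {
    public static void main(String[] args) throws Exception {
        // 8087 is the default protocol buffers port; check your riak.conf.
        RiakNode node = new RiakNode.Builder()
                .withRemoteAddress("10.0.0.1") // placeholder address
                .withRemotePort(8087)
                .build();
        RiakCluster cluster = new RiakCluster.Builder(node).build();
        cluster.start();

        RiakClient client = new RiakClient(cluster);
        // ... execute TS commands here ...
        client.shutdown();
    }
}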

Thanks,
Alex Moore
Team Lead, Clients Team

On Wed, Apr 12, 2017 at 8:11 AM, Allexandre Sampaio <allexandre...@gmail.com
> wrote:

> Hi, I'm new to Riak and I'm using the TS version to perform some tests in
> my Java application, running on an Ubuntu 16.04 server, for a college
> project.
> The library I'm using is riak-client-2.1.1, with all of its dependencies.
> But, when I try to connect to the DB to save data, the connection doesn't
> happen, and after a few minutes, the Java process is closed.
> What could it be? Maybe a problem with the DB settings?
>
> Att.,
>
> *Allexandre Sampaio*
> *Vitória da Conquista - BA, Brazil*
> *+55 (77) 99964 3521*
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Problem with multithreadeding in Java

2017-04-14 Thread Alex Moore
Hi Allexandre,

Could you share your code for setting up the
RiakNode/RiakCluster/RiakClient objects on the Java side, and how you are
sharing them amongst your worker threads?
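
If it does turn out to be connection-pool pressure once we see the code,
the per-node pool can be widened when the node is built.  A sketch, with
purely illustrative numbers:

import com.basho.riak.client.api.RiakClient;
import com.basho.riak.client.core.RiakCluster;
import com.basho.riak.client.core.RiakNode;

public class PoolSizingSketch {
    public static void main(String[] args) throws Exception {
        // Values are illustrative; tune them against your thread count.
        RiakNode node = new RiakNode.Builder()
                .withRemoteAddress("127.0.0.1")
                .withRemotePort(8087)
                .withMinConnections(10)  // connections kept warm
                .withMaxConnections(50)  // hard cap per node
                .build();
        RiakCluster cluster = new RiakCluster.Builder(node).build();
        cluster.start();

        // Share this one client across all worker threads.
        RiakClient client = new RiakClient(cluster);
        client.shutdown();
    }
}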

Thanks,
Alex
Clients Team Lead

On Thu, Apr 13, 2017 at 6:09 PM, Allexandre Sampaio  wrote:

> Hi, I'm using Riak TS in a multithreaded application, running on an Ubuntu
> 16.04 server, for a college project. The library I'm using is
> riak-client-2.1.1, with all of its dependencies.
> The problem starts when I use more than a few dozen threads: the
> Riak service starts to refuse connections and throw errors.
> I've checked and all the threads are using the same client (that is thread
> safe, according to its docs).
> I also tried to change the riak.conf file to allow the maximum number of
> threads (1024), but it didn't change anything...
>
> Need help.
> Thanks!
>
> Att.,
> *Allexandre Sampaio*
> *Sistemas de Informação - IFBA*
>
> *Vitória da Conquista - BA, Brasil*
> *(77) 99964 3521*
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Unable to compile Riak on Raspberry Pi 3

2017-04-01 Thread Alex Moore
Hi cocos,

I recently did some testing around this and journaled the steps needed to
build KV and TS on Raspbian:

https://gist.github.com/alexmoore/7bbdece19223cea1e144e4d23a5ed7ec

This includes building Basho's flavor of Erlang, and the fix that Matthew
mentioned.
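
If you'd rather patch an existing source tree than start over from the
gist, one untested sketch (paths assume the stock riak-2.2.1 checkout after
"make locked-deps") is to move the bundled leveldb to the develop branch
and rebuild:

cd riak-2.2.1/deps/eleveldb/c_src/leveldb
git fetch origin && git checkout develop
cd ../../../..   # back to the riak-2.2.1 source root
make rel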

Thanks,
Alex

On Sat, Apr 1, 2017 at 7:41 AM, Matthew Von-Maszewski 
wrote:

> A fix for that problem in leveldb/util/perf_count.cc already exists on the
> "develop" branch of the github repository basho/leveldb.  Download it and
> rebuild.
>
> Sent from my iPad
>
> On Apr 1, 2017, at 4:08 AM, cocos  wrote:
>
> Hello,
>
> Currently I'm trying to install Riak on the Raspberry Pi 3 for testing
> purposes. I used the following instructions from Basho:
>
> http://docs.basho.com/riak/kv/2.2.2/setup/installing/source/
>
> I'm having problems compiling it from source. I tried to compile it on
> Raspbian Jessie and then switched to Ubuntu Server 16.04, both times with
> the same result: it does not compile and aborts at a certain point. I don't
> know what causes the problem, since it only says: `recipe for target
> 'util/perf_count.o' failed`. Searching Google and the Basho mailing list
> wasn't successful.
>
> The version of `gcc` is `gcc (Raspbian 4.9.2-10) 4.9.2`. The version of
> `Erlang` is `Erlang R16B02_basho8 (erts-5.10.3)`
>
> The commands i used are the following:
>
> *Installing Erlang:*
>
> wget http://s3.amazonaws.com/downloads.basho.com/erlang/otp_src_R16B02basho10.tar.gz
>
> tar zxvf otp_src_R16B02-basho10.tar.gz
>
> cd OTP_R16B02_basho10
> ./otp_build autoconf
> ./configure && make && sudo make install
>
> *Installing Riak:*
>
> wget http://s3.amazonaws.com/downloads.basho.com/riak/2.2/2.2.1/riak-2.2.1.tar.gz
>
> tar zxvf riak-2.2.1.tar.gz
>
> cd riak-2.2.1
> make locked-deps
> make rel
>
> Any suggestions are welcome.
>
> ## *Output:* ##
>
> `./include/leveldb/atomics.h:155:15: note: template argument
> deduction/substitution failed
> util/perf_count.cc:439:40: note: deduced conflicting types for parameter
> ‘ValueT’ (‘unsigned int’ and ‘int’) add_and_fetch(ptr_32, 1);`
>
>
> `Makefile:190: recipe for target 'util/perf_count.o' failed
> make[1]: *** [util/perf_count.o] Error 1
> make[1]: *** Waiting for unfinished jobs
> make[1]: Leaving directory '/home/pi/Riak/riak/deps/
> eleveldb/c_src/leveldb'
> ERROR: Command [compile] failed!
> Makefile:23: recipe for target 'compile' failed
> make: *** [compile] Error 1`
>
Sent from my Samsung Galaxy smartphone.
>
> 
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com