Re: Recommended riak configuration options for better performance

2014-06-05 Thread Alex Moore
Hi Naveen,

You are running out of MR workers; you’ll have to do one of the following:
a) Increase the worker limits on the current nodes (particularly 
map_js_vm_count and reduce_js_vm_count)
b) Add more nodes (and thereby more workers)
c) Do less MR work.
d) Implement your MapReduce functions in Erlang to avoid the JS VMs altogether (see the sketch below)

Bryan Fink has a nice writeup on how to estimate your MR worker needs here: 
http://riak-users.197444.n3.nabble.com/Follow-up-Riak-Map-Reduce-error-preflist-exhausted-tp4024330p4024380.html
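
For option (d), here is a minimal sketch of a JS-free query using the Erlang PB
client and the built-in riak_kv_mapreduce phase functions (the bucket and key
names are illustrative, and this assumes the stock 1.4-era client API):

    %% Map over explicit inputs and return each object's value; no JS VM is used.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    riakc_pb_socket:mapred(Pid,
        [{<<"mybucket">>, <<"key1">>}, {<<"mybucket">>, <<"key2">>}],
        [{map, {modfun, riak_kv_mapreduce, map_object_value}, none, true}]).

Custom phases work the same way: compile your module on each node and reference
it with {modfun, Module, Function}.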

Thanks,
Alex

On Jun 4, 2014, at 7:58 AM, Naveen Tamanam naveen32in...@gmail.com wrote:

 Hi Guys, 
 
 I have a 5-node Riak cluster in use. Each machine has 16 GB of RAM, and all 
 five machines are dedicated to Riak only; no other application is there to 
 eat resources. I do a lot of work with MapReduce queries, many of them having 
 both map and reduce phases. 
 In many cases I get the following error and log messages: 
 
   error: [preflist_exhausted]
   RiakError: 'could not get a response'
   All VMs are busy
 
 I know the above errors can be avoided with fine-tuned Riak configuration 
 options, so I am looking for recommended values. 
 Here are the few Riak configuration options I currently have on each node: 
 
   { kernel, [
     {inet_dist_listen_min, 6000},
     {inet_dist_listen_max, 7999}
   ]},

   {map_js_vm_count, 48},
   {reduce_js_vm_count, 26},
   {hook_js_vm_count, 12},
   {js_max_vm_mem, 32},
   {js_thread_stack, 16}

 -- 
 Thanks & Regards,
 Naveen Tamanam
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Partition distribution between nodes

2014-06-05 Thread Luke Bakken
Hi Manu,

Partition distribution is determined by the claim algorithm. The current
algorithm distributes partitions more evenly when a cluster is built from
scratch than when nodes are added to an existing ring. There has been work to
improve the algorithm, which you can find here:
https://github.com/basho/riak_core/pull/183

--
Luke Bakken
CSE
lbak...@basho.com


On Mon, Jun 2, 2014 at 11:51 PM, Manu Mäki - Compare Group 
m.m...@comparegroup.eu wrote:

  Hi Luke,

  Do you have any idea why creating the cluster from scratch creates a “more
 balanced” cluster? Is this because the partitions themselves are not all of
 equal size?


  Manu

   From: Luke Bakken lbak...@basho.com
 Date: Monday 2 June 2014 19:34
 To: Manu Maki m.m...@comparegroup.eu
 Cc: riak-users@lists.basho.com riak-users@lists.basho.com
 Subject: Re: Partition distribution between nodes

   Hi Manu,

  I see similar vnode distribution in my local dev cluster. This is due to
 64 not being evenly divisible by 5.
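 (The arithmetic: 64/5 = 12.8, so each node must own either 12 or 13 of the 64
 partitions; 13/64 ≈ 20.3% and 12/64 ≈ 18.8%. When a fifth node joins an
 existing four-node ring, the claim algorithm can leave one node holding
 16/64 = 25.0%, as the outputs below show.)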

  4 nodes:

  $ dev1/bin/riak-admin member-status
 ================================= Membership ==================================
 Status     Ring    Pending    Node
 -------------------------------------------------------------------------------
 valid      25.0%      --      'dev1@127.0.0.1'
 valid      25.0%      --      'dev2@127.0.0.1'
 valid      25.0%      --      'dev3@127.0.0.1'
 valid      25.0%      --      'dev4@127.0.0.1'
 -------------------------------------------------------------------------------

  5th node added:

  $ dev1/bin/riak-admin member-status
 ================================= Membership ==================================
 Status     Ring    Pending    Node
 -------------------------------------------------------------------------------
 valid      18.8%      --      'dev1@127.0.0.1'
 valid      18.8%      --      'dev2@127.0.0.1'
 valid      18.8%      --      'dev3@127.0.0.1'
 valid      25.0%      --      'dev4@127.0.0.1'
 valid      18.8%      --      'dev5@127.0.0.1'
 -------------------------------------------------------------------------------

  Cluster *from scratch* with 5 nodes:

  $ dev1/bin/riak-admin member-status
 ================================= Membership ==================================
 Status     Ring    Pending    Node
 -------------------------------------------------------------------------------
 valid      20.3%      --      'dev1@127.0.0.1'
 valid      20.3%      --      'dev2@127.0.0.1'
 valid      20.3%      --      'dev3@127.0.0.1'
 valid      20.3%      --      'dev4@127.0.0.1'
 valid      18.8%      --      'dev5@127.0.0.1'
 -------------------------------------------------------------------------------

  --
 Luke Bakken
 CSE
 lbak...@basho.com


 On Mon, Jun 2, 2014 at 6:52 AM, Manu Mäki - Compare Group 
 m.m...@comparegroup.eu wrote:

  Hi all,

  In the beginning we were running four nodes with an n-value of 2. The
 partitions were distributed 25% to each node. Now that we have added a fifth
 node (still with an n-value of 2), the partitions are distributed in the
 following way: 25%, 19%, 19%, 19% and 19%. The ring size in use is 64. Is this
 normal behavior? The cluster seems to be working correctly; however, I was
 expecting each node to have 20% of the partitions.


  Best regards,
 Manu Mäki

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Upgraded riak 1.4.9 is pegging the CPU

2014-06-05 Thread Alain Rodriguez
Hi all,

I upgraded 1 of 9 riak nodes in a cluster last night from 1.4.0 to 1.4.9.
The rest are running 1.4.0.

Ever since, I have been seeing the upgraded node, riak01, consuming a
significantly larger percent of CPU, and the PUT times on it have gotten
worse. htop indicates one particular process pegging the CPU, and many many
more processes running than I was used to seeing before.

Has anyone seen this before? Do I have to retune something for 1.4.9?

I am attaching htop, cpu and put graphs, and my app.config used across all
servers.

Thanks!

htop:
https://s3.amazonaws.com/uploads.hipchat.com/17604/95038/vEznS9gh6BRRNMR/htop.png
cpu:
https://s3.amazonaws.com/uploads.hipchat.com/17604/95038/21jilAfIwn8L5zC/cpu.png
put:
https://s3.amazonaws.com/uploads.hipchat.com/17604/95038/wX36crPiMeRg8kb/put.png


app.config.erb
Description: Binary data


vm.args.erb
Description: Binary data
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using the 'fold' keys threshold

2014-06-05 Thread Alex Moore
Hi Venkat,

You can find those settings in our docs: 
http://docs.basho.com/riak/1.4.9/ops/advanced/backends/bitcask/#Configuring-Bitcask
 (search for “Fold Keys Threshold”).   

In Bitcask, when we do operations like “List Keys” or anything else that 
requires folding over all the data, we take a snapshot of the “Keydir” to get a 
consistent read.  The Keydir is the hash table that holds the `key -> latest 
object` mapping.  When we take this snapshot, we also start recording a delta 
of any changes made after the snapshot.

We use the two “Fold Keys Threshold” options, `max_fold_age` and 
`max_fold_puts`, only when Bitcask is already processing one fold operation and 
gets a request for a second one.  These two options let the user choose whether 
to reuse the existing snapshot, or to block and take a new snapshot before 
starting the second fold.  This lets you trade off consistency against a 
potential performance boost.

By default, Bitcask sides with consistency by setting `max_fold_puts` to `0`: 
if any new puts have come in, we must grab a new snapshot before folding again. 

- Increasing `max_fold_puts` to `n` will let Bitcask reuse the snapshot 
if there are fewer than `n` changes in the delta.
- Increasing `max_fold_age` to `s` will let Bitcask reuse the snapshot 
if the snapshot is younger than `s` microseconds. 

Setting either of these to positive values can let folds ignore recent changes, 
so you can run into stale data. Because of that, we recommend that you don’t 
change them.
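
For reference, here is a sketch of where these knobs live in the bitcask
section of app.config (the values shown are, as I understand them, the
consistency-favoring defaults; treat them as illustrative):

    {bitcask, [
        {data_root, "/var/lib/riak/bitcask"},
        %% Reuse an in-progress fold's snapshot only if at most this many
        %% puts have arrived since it was taken (0 = always snapshot afresh).
        {max_fold_puts, 0},
        %% Reuse the snapshot only if it is younger than this many
        %% microseconds (-1 = no age limit).
        {max_fold_age, -1}
    ]},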
I hope this helps.

Thanks,
Alex


On May 13, 2014, at 3:02 PM, Venkatachalam Subramanian 
venkatsubb...@gmail.com wrote:

 Hi All,
 
 It was very helpful to get my first few questions about Riak/Bitcask 
 answered pretty quickly.
 
 I just have another question along the same lines.
 
 I ran across the 'fold keys threshold' option in Riak/Bitcask, but I could 
 not find enough information about it to understand it completely.
 
 Could someone tell me what the 'fold keys' option is? What does it do? When 
 would we use it? 
 Does it help when you want to get the list of all keys available?
 
 I greatly appreciate your help.
 Thank You.
 
 -- 
 Regards,
 Venkat Subramanian
 
 
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re:

2014-06-05 Thread Luke Bakken
Hi Gaurav,

I believe you are running into this issue:
https://github.com/basho/bitcask/issues/136

To resolve it, shut down the node and remove all 0-byte data files and 18-byte
hint files. I would also recommend upgrading Riak at the same time, since 1.4.8
and later include the fix for this issue; 1.4.9 is the current stable version.
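
A sketch of the cleanup for one node, assuming the default bitcask data root of
/var/lib/riak/bitcask (verify your configured data_root, and take a backup
before deleting anything):

    riak stop
    # Remove empty data files and header-only (18-byte) hint files.
    find /var/lib/riak/bitcask -type f -name '*.data' -size 0 -print -delete
    find /var/lib/riak/bitcask -type f -name '*.hint' -size 18c -print -delete
    # Upgrade the Riak package here, then:
    riak start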

--
Luke Bakken
CSE
lbak...@basho.com


On Fri, May 23, 2014 at 12:21 AM, Gaurav Sood 
gaurav.s...@mediologysoftware.com wrote:

 Hi All

 I am getting the below error while writing anything to the Riak database.
 This is a random issue: I am able to write data in 3-4 attempts, which creates
 null-value information in the bucket. Please help me resolve this problem.

 I have restored the live database of a 5-node cluster onto a single-node server.
 Riak Version : 1.4.7
 Installed on ubuntu 12.04

 Error -
 2014-05-23 12:28:51.701 [error] <0.19748.55> CRASH REPORT Process
 <0.19748.55> with 10 neighbours exited with reason: no match of right hand
 value {error,{badmatch,{error,eexist}}} in bitcask:do_put/5 line 1232 in
 gen_fsm:terminate/7 line 611
 2014-05-23 12:28:51.702 [error] <0.19749.55> Supervisor
 {<0.19749.55>,poolboy_sup} had child riak_core_vnode_worker started with
 riak_core_vnode_worker:start_link([{worker_module,riak_core_vnode_worker},{worker_args,[890602560248518965780370444936484965102833893376,...]},...])
 at undefined exit with reason no match of right hand value
 {error,{badmatch,{error,eexist}}} in bitcask:do_put/5 line 1232 in context
 shutdown_error
 shutdown_error


 Thanks & Regards
 Gaurav

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Upgraded riak 1.4.9 is pegging the CPU

2014-06-05 Thread Shane McEwan
On 05/06/14 16:20, Alain Rodriguez wrote:
 Hi all,
 
 I upgraded 1 of 9 riak nodes in a cluster last night from 1.4.0 to
 1.4.9. The rest are running 1.4.0.
 
 Ever since I am seeing the upgraded node, riak01 consuming a
 significantly larger percent of CPU and the PUT times on it have gotten
 worse. htop indicates one particular process pegging the CPU, and many
 many more processes running than I was used to seeing before.

G'day!

Did you turn off and remove the Active Anti-Entropy (AAE) files before upgrading?

From the 1.4.8 release notes:

IMPORTANT We recommend removing current AAE trees before upgrading. That
is, all files under the anti_entropy sub-directory. This will avoid
potentially large amounts of repair activity once correct hashes start
being added. The data in the current trees can only be fixed by a full
rebuild, so this repair activity is wasteful. Trees will start to build
once AAE is re-enabled. To minimize the impact of this, we recommend
upgrading during a period of low activity.
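
A sketch of that procedure for one node, assuming the default data directory of
/var/lib/riak (adjust for your platform_data_dir):

    riak stop
    rm -rf /var/lib/riak/anti_entropy/*
    # Upgrade the Riak package here, then:
    riak start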

Shane.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Upgraded riak 1.4.9 is pegging the CPU

2014-06-05 Thread Alain Rodriguez
Thanks for the quick reply, and no, I did not. Is this something I should
still be able to do now (stop, remove files, start again), or is it too late?
How could I verify this is the issue?


On Thu, Jun 5, 2014 at 8:42 AM, Shane McEwan sh...@mcewan.id.au wrote:

 On 05/06/14 16:20, Alain Rodriguez wrote:
  Hi all,
 
  I upgraded 1 of 9 riak nodes in a cluster last night from 1.4.0 to
  1.4.9. The rest are running 1.4.0.
 
  Ever since I am seeing the upgraded node, riak01 consuming a
  significantly larger percent of CPU and the PUT times on it have gotten
  worse. htop indicates one particular process pegging the CPU, and many
  many more processes running than I was used to seeing before.

 G'day!

 Did you turn off and remove the Active Anti Entropy files before upgrading?

 From the 1.4.8 release notes:

 IMPORTANT We recommend removing current AAE trees before upgrading. That
 is, all files under the anti_entropy sub-directory. This will avoid
 potentially large amounts of repair activity once correct hashes start
 being added. The data in the current trees can only be fixed by a full
 rebuild, so this repair activity is wasteful. Trees will start to build
 once AAE is re-enabled. To minimize the impact of this, we recommend
 upgrading during a period of low activity.

 Shane.

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Upgraded riak 1.4.9 is pegging the CPU

2014-06-05 Thread Alain Rodriguez
Actually I just noticed it is likely the AAE issue:

2014-06-05 14:53:47.587 [error] <0.16054.31> CRASH REPORT Process
<0.16054.31> with 0 neighbours exited with reason: no match of right hand
value {error,{db_open,"IO error: lock
/var/lib/riak/anti_entropy/1061872283373234151507364761270424381468763488256/LOCK:
already held by process"}} in hashtree:new_segment_store/2 line 505 in
gen_server:init_it/6 line 328
2014-06-05 14:53:47.588 [error] <0.16056.31> CRASH REPORT Process
<0.16056.31> with 0 neighbours exited with reason: no match of right hand
value {error,{db_open,"IO error: lock
/var/lib/riak/anti_entropy/1335903840372778448670555667404727447654250840064/LOCK:
already held by process"}} in hashtree:new_segment_store/2 line 505 in
gen_server:init_it/6 line 328
2014-06-05 14:53:47.588 [error] <0.16055.31> CRASH REPORT Process
<0.16055.31> with 0 neighbours exited with reason: no match of right hand
value {error,{db_open,"IO error: lock
/var/lib/riak/anti_entropy/1267395951122892374379757940871151681107879002112/LOCK:
already held by process"}} in hashtree:new_segment_store/2 line 505 in
gen_server:init_it/6 line 328

Bollocks!


On Thu, Jun 5, 2014 at 8:49 AM, Alain Rodriguez al...@uber.com wrote:

 Thanks for the quick reply and no I did not. Is this something I should be
 able to do now (stop, remove files, start again) or is it too late? How
 could I verify this is the issue?


 On Thu, Jun 5, 2014 at 8:42 AM, Shane McEwan sh...@mcewan.id.au wrote:

 On 05/06/14 16:20, Alain Rodriguez wrote:
  Hi all,
 
  I upgraded 1 of 9 riak nodes in a cluster last night from 1.4.0 to
  1.4.9. The rest are running 1.4.0.
 
  Ever since I am seeing the upgraded node, riak01 consuming a
  significantly larger percent of CPU and the PUT times on it have gotten
  worse. htop indicicates one particular process pegging the CPU, and many
  many more processes running than I was used to seeing before.

 G'day!

 Did you turn off and remove the Active Anti Entropy files before
 upgrading?

 From the 1.4.8 release notes:

 IMPORTANT We recommend removing current AAE trees before upgrading. That
 is, all files under the anti_entropy sub-directory. This will avoid
 potentially large amounts of repair activity once correct hashes start
 being added. The data in the current trees can only be fixed by a full
 rebuild, so this repair activity is wasteful. Trees will start to build
 once AAE is re-enabled. To minimize the impact of this, we recommend
 upgrading during a period of low activity.

 Shane.

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Upgraded riak 1.4.9 is pegging the CPU

2014-06-05 Thread Engel Sanchez
Hi Alain. I don't think you are seeing the AAE issue. The problem with
upgrading from 1.4.4-1.4.7 to 1.4.8 was a broken hash function in those
versions, which made the AAE trees incompatible; you should not have that
problem coming from 1.4.0. It seems that Erlang processes are repeatedly
crashing and restarting. It would be good to grab all your logs before they
rotate so we can take a look at exactly what crashes first and causes this
snowball effect.


On Thu, Jun 5, 2014 at 11:58 AM, Alain Rodriguez al...@uber.com wrote:

 Actually I just noticed it is likely the AAE issue:

 2014-06-05 14:53:47.587 [error] <0.16054.31> CRASH REPORT Process
 <0.16054.31> with 0 neighbours exited with reason: no match of right hand
 value {error,{db_open,"IO error: lock
 /var/lib/riak/anti_entropy/1061872283373234151507364761270424381468763488256/LOCK:
 already held by process"}} in hashtree:new_segment_store/2 line 505 in
 gen_server:init_it/6 line 328
 2014-06-05 14:53:47.588 [error] <0.16056.31> CRASH REPORT Process
 <0.16056.31> with 0 neighbours exited with reason: no match of right hand
 value {error,{db_open,"IO error: lock
 /var/lib/riak/anti_entropy/1335903840372778448670555667404727447654250840064/LOCK:
 already held by process"}} in hashtree:new_segment_store/2 line 505 in
 gen_server:init_it/6 line 328
 2014-06-05 14:53:47.588 [error] <0.16055.31> CRASH REPORT Process
 <0.16055.31> with 0 neighbours exited with reason: no match of right hand
 value {error,{db_open,"IO error: lock
 /var/lib/riak/anti_entropy/1267395951122892374379757940871151681107879002112/LOCK:
 already held by process"}} in hashtree:new_segment_store/2 line 505 in
 gen_server:init_it/6 line 328

 Bollocks!


 On Thu, Jun 5, 2014 at 8:49 AM, Alain Rodriguez al...@uber.com wrote:

 Thanks for the quick reply and no I did not. Is this something I should
 be able to do now (stop, remove files, start again) or is it too late? How
 could I verify this is the issue?


 On Thu, Jun 5, 2014 at 8:42 AM, Shane McEwan sh...@mcewan.id.au wrote:

 On 05/06/14 16:20, Alain Rodriguez wrote:
  Hi all,
 
  I upgraded 1 of 9 riak nodes in a cluster last night from 1.4.0 to
  1.4.9. The rest are running 1.4.0.
 
  Ever since I am seeing the upgraded node, riak01 consuming a
  significantly larger percent of CPU and the PUT times on it have gotten
  worse. htop indicates one particular process pegging the CPU, and
 many
  many more processes running than I was used to seeing before.

 G'day!

 Did you turn off and remove the Active Anti Entropy files before
 upgrading?

 From the 1.4.8 release notes:

 IMPORTANT We recommend removing current AAE trees before upgrading. That
 is, all files under the anti_entropy sub-directory. This will avoid
 potentially large amounts of repair activity once correct hashes start
 being added. The data in the current trees can only be fixed by a full
 rebuild, so this repair activity is wasteful. Trees will start to build
 once AAE is re-enabled. To minimize the impact of this, we recommend
 upgrading during a period of low activity.

 Shane.

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Upgraded riak 1.4.9 is pegging the CPU

2014-06-05 Thread Engel Sanchez
Alain, thanks for the logs you sent me on the side.  I'm not yet sure what
the root cause is, but I saw a lot of handoff activity and busy distributed
port messages, which indicate that the single TCP connection between two
Erlang nodes is completely saturated.  Since there is so much going on,
turning off AAE and examining your cluster with less activity might still be
a good idea.  Check the output of riak-admin transfers until it is quiet.  I
also noticed you have a file limit of 8192; that is not low, but newer Riaks
use more file handles, so it would be a good idea to double it.  Let us know
what the stats and the logs look like after AAE is off, and we'll see what
else we can do.
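
If it helps, here is a sketch of turning AAE off in the riak_kv section of
app.config (a restart is required; flip it back to {on, []} once things are
quiet):

    {riak_kv, [
        %% Disable active anti-entropy while diagnosing.
        {anti_entropy, {off, []}}
        %% ... other riak_kv settings unchanged ...
    ]},

The file limit can be raised in the environment that starts Riak, e.g. with
ulimit -n or /etc/security/limits.conf.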


On Thu, Jun 5, 2014 at 1:05 PM, Engel Sanchez en...@basho.com wrote:

 Hi Alain. I don't think you are seeing the AAE issue. The problem with
 upgrading from 1.4.4-1.4.7 to 1.4.8 was a broken hash function in those,
 which made the AAE trees incompatible. You should not have the same problem
 in 1.4.0.  It seems that Erlang processes are repeatedly crashing and
 restarting. It would be good to grab all your logs before they rotate so we
 can take a look at exactly what is the first thing crashing and causing
 this snowball effect.


 On Thu, Jun 5, 2014 at 11:58 AM, Alain Rodriguez al...@uber.com wrote:

 Actually I just noticed it is likely the AAE issue:

 2014-06-05 14:53:47.587 [error] <0.16054.31> CRASH REPORT Process
 <0.16054.31> with 0 neighbours exited with reason: no match of right hand
 value {error,{db_open,"IO error: lock
 /var/lib/riak/anti_entropy/1061872283373234151507364761270424381468763488256/LOCK:
 already held by process"}} in hashtree:new_segment_store/2 line 505 in
 gen_server:init_it/6 line 328
 2014-06-05 14:53:47.588 [error] <0.16056.31> CRASH REPORT Process
 <0.16056.31> with 0 neighbours exited with reason: no match of right hand
 value {error,{db_open,"IO error: lock
 /var/lib/riak/anti_entropy/1335903840372778448670555667404727447654250840064/LOCK:
 already held by process"}} in hashtree:new_segment_store/2 line 505 in
 gen_server:init_it/6 line 328
 2014-06-05 14:53:47.588 [error] <0.16055.31> CRASH REPORT Process
 <0.16055.31> with 0 neighbours exited with reason: no match of right hand
 value {error,{db_open,"IO error: lock
 /var/lib/riak/anti_entropy/1267395951122892374379757940871151681107879002112/LOCK:
 already held by process"}} in hashtree:new_segment_store/2 line 505 in
 gen_server:init_it/6 line 328

 Bollocks!


 On Thu, Jun 5, 2014 at 8:49 AM, Alain Rodriguez al...@uber.com wrote:

 Thanks for the quick reply and no I did not. Is this something I should
 be able to do now (stop, remove files, start again) or is it too late? How
 could I verify this is the issue?


 On Thu, Jun 5, 2014 at 8:42 AM, Shane McEwan sh...@mcewan.id.au wrote:

 On 05/06/14 16:20, Alain Rodriguez wrote:
  Hi all,
 
  I upgraded 1 of 9 riak nodes in a cluster last night from 1.4.0 to
  1.4.9. The rest are running 1.4.0.
 
  Ever since I am seeing the upgraded node, riak01 consuming a
  significantly larger percent of CPU and the PUT times on it have
 gotten
  worse. htop indicates one particular process pegging the CPU, and
 many
  many more processes running than I was used to seeing before.

 G'day!

 Did you turn off and remove the Active Anti Entropy files before
 upgrading?

 From the 1.4.8 release notes:

 IMPORTANT We recommend removing current AAE trees before upgrading. That
 is, all files under the anti_entropy sub-directory. This will avoid
 potentially large amounts of repair activity once correct hashes start
 being added. The data in the current trees can only be fixed by a full
 rebuild, so this repair activity is wasteful. Trees will start to build
 once AAE is re-enabled. To minimize the impact of this, we recommend
 upgrading during a period of low activity.

 Shane.

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Cost of many PBC connections held open?

2014-06-05 Thread Toby Corkindale
Hi,
Just looking for a bit of guidance on good practice when using Riak
via the PBC interface.

To keep request latency low, it's good to keep the connection open
between requests, rather than making a new connection for just a few
requests and then dropping it again, right?

In the JVM client, pools of connections are shared between large numbers of
threads. However, it's harder to do that in Perl, in which some of our
codebase is written.
It'd be a lot easier to have one connection per process, but that's
potentially quite a lot of connections, albeit ones that are idle most of
the time.

I'd like to get a feel for how expensive it is for the Erlang VM to hold
those connections open. Does it consume a lot of resources, or is it
negligible?

Thanks,
Toby

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com