},{delete, 1}] which
means that out of every 9 operations, 'get' will be called four times, 'put'
will be called four times, and 'delete' will be called once, on average.
% Run 80% gets, 20% puts
{operations, [{get, 4}, {put, 1}]}.
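As a quick sanity check, the relative weights can be turned into probabilities with a plain-Python sketch (illustrative only; the weight values are the ones from the example above):

```python
# basho_bench operation weights are relative: probability = weight / total.
weights = {"get": 4, "put": 4, "delete": 1}
total = sum(weights.values())          # 9 operations per "round" on average
probs = {op: w / total for op, w in weights.items()}
print(probs)
```

With {get, 4}, {put, 1} the same arithmetic gives 80% gets and 20% puts, which is why the two spellings of the config are equivalent.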
===
Denis
I mean, this is equivalent to...
:)
Denis
--
View this message in context:
http://riak-users.197444.n3.nabble.com/Efficiency-of-RIAK-tp4025765p4025783.html
Sent from the Riak Users mailing list archive at Nabble.com.
___
riak-users mailing list
It does not make sense to compare Riak with eLevelDB; the objective was to assess
the cost of organizing the cluster.
I was surprised by the difference in performance. I expected on the order of 20,000
puts/sec. Perhaps Riak is configured incorrectly?
With these results, you need at least 4 servers with Riak to
Hello
I'm trying to understand how Solr indexes interact with the Riak KV store.
Let me explain a bit... Riak uses sharding per vnode. Each physical
node contains several vnodes, and the data stored there is indexed by Solr. As
far as I understood, Solr is not a clustered solution, i.e. Solr
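The message above breaks off, but the idea it describes can be modeled with a toy sketch (plain Python, not actual Riak/Yokozuna code): each node's Solr instance only indexes the data held by its local vnodes, so a coordinator has to scatter the query to every node and gather the partial results.

```python
# Toy model of per-node local indexes (node and doc names are made up).
local_indexes = {
    "node1": {"doc1": "riak is a kv store", "doc3": "solr indexes text"},
    "node2": {"doc2": "riak search uses solr"},
}

def search(term):
    """Scatter the query to every node's local index and gather the matches."""
    hits = []
    for node, docs in local_indexes.items():
        hits.extend(key for key, text in docs.items() if term in text)
    return sorted(hits)

print(search("solr"))  # doc2 and doc3 match
```

This is only a model of the coordination problem, not of how Riak actually plans coverage queries.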
> This is just a quick reply since this is somewhat a current topic on the
> ML.
>
> On 24 May 2017, at 12:57, Denis Gudtsov <dg.65...@gmail.com> wrote:
>
> > Hello
> >
> > We have 6-nodes cluster with ring size 128 configured. The problem is
> that
>
Hello Fred
Thank you very much for your explanation. I'm trying to understand how this
scheme can work, but I need some time... Let me come back to you after a
while.
Also, you're referring to Ryan Zezeski's slides, but I can't find them. Could
you please share the link if you have it? Thank you.
I'm using the latest version of riak (0.14.2-1) with the
riak-python-client (v. 1.3.0).
I want to run the Riak Search example shown in the documentation
(https://bitbucket.org/basho/riak-python-client), but the JSON docs
I've added to my bucket are more complex than the example.
I.e., instead of
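The message is truncated here, so purely for illustration: one common workaround for indexing nested JSON is flattening it into flat field names before storing. The helper and the underscore naming scheme below are hypothetical, not part of the riak-python-client API:

```python
def flatten(doc, prefix=""):
    """Flatten nested dicts into flat, underscore-joined field names
    (illustrative helper; the naming convention is an assumption)."""
    flat = {}
    for key, value in doc.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "_"))
        else:
            flat[name] = value
    return flat

doc = {"name": "alice", "address": {"city": "Berlin", "zip": "10117"}}
print(flatten(doc))
```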
Hello!
I've run into a problem recently. There are strange processes like
...start_clean that use 100% of the CPU. My Riak version is 0.14.2.
Do you know what they are and what can be done about them?
Thank you!
riak_kv_vnode_master started with
riak_core_vnode_master:start_link(riak_kv_vnode, riak_kv_legacy_vnode,
riak_kv) at 0.1475.0 exit with reason reached_max_restart_intensity in
context shutdown
PS. I tuned up all the limits and open ports values...
Waiting for your help, and
thank you!
Denis
:23.108 [info] Merged
[/var/lib/riak/kv_bitcask/exp_1day/1233142006497949337234359077604363797834693083136,[],[/var/lib/riak/kv_bitcask/exp_1day/1233142006497949337234359077604363797834693083136/1341246813.bitcask.data]]
in 0.06311 seconds.
Regards,
Denis
Hi All,
Could someone tell me whether basho_bench's basho_bench_driver_http_raw driver
is currently working?
I'm trying to benchmark my Riak node using basho_bench,
using the basho_bench_driver_http_raw driver. But when I try to run it, all
instances of the basho_bench_driver_http_raw clients instantly
2013/3/9 Steve Vinoski st...@basho.com
Hi Denis,
It looks like the raw HTTP driver currently lacks support for delete
operations. I've added it on this branch if you want to try it:
https://github.com/basho/basho_bench/tree/sbv-add-raw-http-delete
--steve
Hi Steve,
Oh, thank you very
Hi Stefan,
Yes, we faced same issue after upgrade to 1.3.1.
I did a rolling restart today and... it's magic: PUT time dropped
from 150 ms to 2 ms! - http://s15.postimg.org/3lrl036yz/put.png
Thanks for pointing me in the right direction. I hope this will be fixed soon -
it's not very
Backing up the individual bitcask/ring directories and restoring with reip
works like a charm, but it requires a complete stop of the test cluster's
operations.
With best regards,
Denis.
2013/8/8 Guillermo guille...@cientifico.net
Having the same problem. Still running a restore of 2.7 GB. In my
Hello
We have a 6-node cluster configured with ring size 128. The problem is that
two partitions have replicas on only two nodes rather than the three required
(n_val=3). We have tried several times to clean the leveldb and ring directories
and then rebuild the cluster, but the issue is still present.
How
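The symptom can be reproduced with a toy ring model (illustrative Python; this is a naive round-robin claim, not Riak's actual claim algorithm, which tries to avoid this via its target n_val): when the ring size is not a multiple of the node count, the preference lists that wrap around the end of the ring repeat a node. With 6 nodes and ring size 128 there are 128 mod 6 = 2 leftover partitions, which is consistent with seeing exactly two under-replicated preference lists in this model.

```python
# Toy sketch of a Riak-style ring with naive round-robin ownership.
ring_size = 8
nodes = ["n1", "n2", "n3"]
owners = [nodes[i % len(nodes)] for i in range(ring_size)]

def preflist(partition, n_val=3):
    """Owners of the next n_val partitions clockwise from `partition`."""
    return [owners[(partition + i) % ring_size] for i in range(n_val)]

for p in range(ring_size):
    replicas = preflist(p)
    print(p, replicas, "distinct nodes:", len(set(replicas)))
```

Here exactly the two preference lists that wrap past the end of the ring (partitions 6 and 7) land on only two distinct nodes; the rest get three.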