Hi Daniel,

What version of the Java client are you using?

Any reason you're on such an old version of Riak?

What is the size of each object written?
--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, May 13, 2015 at 4:48 AM, Daniel Iwan <iwan.dan...@gmail.com> wrote:
> Hi
>
> I'm using a 4-node Riak cluster, v1.3.1.
>
> I wanted to know a little bit more about using the withoutFetch()
> option with LevelDB.
> I'm trying to write to a single key as fast as I can with n=3.
> I deliberately create siblings by writing with a stale vclock. I'm
> limiting the number of writes to 1,000 per key to keep the size of the
> Riak object under control, and then I switch to another key. The
> siblings will probably never be resolved (or will be resolved in real
> time during sporadic reads).
>
> A single write operation is about 250 bytes, at a rate of 10-80 events
> per second, which gives 3-20 kB per second per node, so roughly
> 100 kB/s for the cluster.
>
> During the test I see disk activity via iostat of 20-30 MB/s on each
> node. Even taking into account multiple copies and Riak's overhead
> (vclocks etc.) this seems like a pretty high rate.
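>
> Back-of-envelope, using the numbers above:
>
>     logical write rate:     ~100 kB/s for the cluster
>     x3 replicas (n=3):      ~300 kB/s expected raw writes
>     observed via iostat:    20-30 MB/s x 4 nodes = 80-120 MB/s
>
> That's an apparent amplification of roughly 300-400x.
>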
> I don't see any read activity, which suggests withoutFetch() works as
> expected.
> After 2 minutes of testing, LevelDB on each node is 250 MB in size,
> versus 11 MB before the test.
>
> Am I using it incorrectly?
> Is writing to a single key in this way a good idea, or will I be
> bitten by something?
> How do I explain the high volume of data written to disk?
>
> Regards
> Daniel

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
