If you are testing on a single node, change either the default n_val or the
specific bucket's n_val to 1. Otherwise every write is stored three times.
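
For example, one way to set n_val on a specific bucket is to update its
bucket properties through Riak's HTTP interface. A minimal sketch in
Python (the host, port, and bucket name below are placeholders for your
setup, and the third-party 'requests' library is assumed):

    import requests  # assumes the 'requests' library is installed

    # Set n_val to 1 for a single bucket via Riak's HTTP interface.
    # 127.0.0.1:8098 is the default HTTP listener; "mybucket" is a placeholder.
    resp = requests.put(
        "http://127.0.0.1:8098/riak/mybucket",
        json={"props": {"n_val": 1}},
    )
    resp.raise_for_status()
    print("n_val set to 1 for mybucket")

If I recall correctly, the cluster-wide default can also be changed via
default_bucket_props in the riak_core section of app.config. Either way,
do it before loading data, since changing n_val on a bucket that already
holds data is not recommended.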
@siculars on twitter
http://siculars.posterous.com
Sent from my iPhone
On Mar 15, 2011, at 18:53, Luke Monahan <[email protected]> wrote:
Hi All,
I am just poking around Riak and Riak Search at the moment, and have
found that disk space usage is about 100-to-1 relative to the actual
data being stored. That is, for every 1MB of data I save in Riak,
about 100MB of disk space is used.
My data has keys of variable length, but only up to about 16
characters. The data itself is at most about 100 bytes per entry
(I'm saving many small entries). I'm only counting the size of the
bitcask directory, starting from a clean install (i.e. no overwriting
of old data, which I know would simply append until the cleanup
(merge) process runs).
The IO levels I'm seeing when loading data do not seem to be
anywhere near enough to account for the amount of disk usage. Is there
some thick allocation of files going on, or am I mistaken here?
In any case, is the disk usage I'm seeing normal, or is something
going wrong? With disk usage this high I can't actually work with the
data set I want on my laptop, so before I go and buy an external SATA
disk I thought I'd check whether this is expected.
Ubuntu 10.10 64-bit, Erlang and Riak built from source (latest release) as
per the wiki, ext4 filesystem, 4GB RAM. Using riak-js and Node.js,
both at the latest released versions.
Thanks in advance,
Luke.
_______________________________________________
riak-users mailing list
[email protected]
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com