Hello,

I was experimenting with Riak yesterday when it died during a PDF
upload. I think it crashed due to an out-of-memory error.

This (virtual) server is running the upcoming Debian Wheezy release.
Since there's no Wheezy package yet, I installed the package that the
installation wiki refers to
(riak/1.2/1.2.1/debian/6/riak_1.2.1-1_amd64.deb). As far as I know,
Erlang is R15B01.

I ran "riak-admin test" successfully. Using curl, I inserted a simple
key/value string and a 25MB PDF file. Then I inserted a 266MB PDF
file, which failed (the connection was closed), and Riak was no longer
running afterwards.
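For reference, the inserts looked roughly like this (the bucket name
"docs", the key names, and the file names below are placeholders, not
my exact commands; adjust RIAK_URL to your node's HTTP port):

```shell
# Hypothetical reconstruction of the test sequence against Riak's HTTP API.
RIAK_URL="${RIAK_URL:-http://127.0.0.1:8098}"

put() {  # put <key> <file> <content-type> -- prints the HTTP status code
  curl -s -o /dev/null -w '%{http_code}\n' \
       -X PUT -H "Content-Type: $3" \
       --data-binary @"$2" "$RIAK_URL/riak/docs/$1"
}

# Only attempt the uploads if the node actually answers a ping.
if curl -s -o /dev/null "$RIAK_URL/ping"; then
  echo 'hello' > hello.txt
  put hello hello.txt text/plain            # small string value: OK
  put small.pdf small.pdf application/pdf   # ~25MB PDF: OK
  put large.pdf large.pdf application/pdf   # ~266MB PDF: connection closed
else
  echo "Riak not reachable at $RIAK_URL; skipping"
fi
```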

An excerpt from erl_crash.dump (the whole file is 1GB):

=erl_crash_dump:0.1
Wed Mar 13 14:40:15 2013
Slogan: temp_alloc: Cannot allocate 266223543 bytes of memory (of type "tmp").
System version: Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:2:2]
[async-threads:64] [kernel-poll:true]
Compiled: Tue Aug 28 14:29:23 2012
Taints: crypto,bitcask_nifs,dyntrace
Atoms: 15906
=memory
total: 1189373880
processes: 7325906
processes_used: 7320944
system: 1182047974
atom: 463441
atom_used: 456851
binary: 1165653920
code: 10067657
ets: 1071312
[...]

The large PDF that failed is 266222972 bytes, just a few hundred bytes
under the 266223543 bytes that temp_alloc could not allocate, so the
failed allocation is clearly the file itself plus a small overhead.

I understand that such large binary files might not be optimal for
Riak, but a select few documents in my use-case will be such large PDF
files, which is why I tested files of that size.

Does Riak keep the whole binary value of a key in memory, instead of
reading/writing chunks? And why does it crash instead of just closing
the connection and moving on?


Thanks for any pointers,

– Wouter

_______________________________________________
riak-users mailing list
[email protected]
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com