The default leveldb config (when built using `make devrel`) is:

%% eLevelDB Config
{eleveldb, [
    {data_root, "./data/leveldb"}
]},
Drop everything but the data_root. Also, simplify your setup: you're
running into timeout errors in a networked environment. Test your
functionality on a local cluster (either a single node or a devrel setup
with 4 nodes) and validate your logic and functionality there.
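If tuning gets reintroduced later, note that the cache key mentioned further down this thread has to be spelled exactly cache_size; a misspelled key is typically just ignored rather than rejected. A minimal sketch (the value shown is illustrative only, not a recommendation):

```erlang
%% app.config -- eleveldb section with an explicit block cache size.
%% NOTE: the key must be the atom cache_size; an unrecognized key is
%% silently ignored, so a typo here means the setting never takes effect.
{eleveldb, [
    {data_root, "./data/leveldb"},
    {cache_size, 8388608}   %% 8 MB, illustrative value only
]},
```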
I wish I could promise I'll be able to help, but I just got back from
12 days out of the country and I'm booked solid for the next two weeks. In
short, my responses will be hit or miss at best.
Good luck and godspeed.
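For reference, the {riak_search_kv_hook,precommit,...} tuple in the quoted error below names the precommit hook that riak_search installs per bucket. A sketch of toggling it from an attached console while debugging (bucket name hypothetical; install/uninstall assumed as exposed by riak_search 1.x):

```erlang
%% From `riak attach` on a node running riak_search (1.x API assumed):
riak_search_kv_hook:uninstall(<<"mybucket">>).  %% remove the search precommit hook
riak_search_kv_hook:install(<<"mybucket">>).    %% re-add it once writes succeed again
```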
---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop
On Tue, Mar 12, 2013 at 6:16 PM, Kevin Burton <[email protected]> wrote:
> The eleveldb app.config settings are the defaults. I don't know enough to
> modify them. What are the defaults?
>
> From: riak-users [mailto:[email protected]] On Behalf Of Jeremiah Peschka
> Sent: Tuesday, March 12, 2013 8:06 PM
> To: riak-users
> Subject: Re: Error interpretation
>
> Your eleveldb app.config settings are decidedly non-default. Is there any
> reason for that? Also, you spelled "cache_size" incorrectly in your
> eleveldb config setting. What happens when you set those back to defaults?
> Is there a reason that you're changing these settings?
>
> This looks eerily similar to an unanswered Stack Overflow question [1] and
> marginally similar to this mailing list thread about bad disks [2]. Since
> it's after hours Pacific time, I'd wait for tomorrow and hope that some of
> the folks living in "the future" have an idea.
>
> [1]: http://stackoverflow.com/questions/12748154/riak-riak-kv-vnode-worker-pool-crashed
> [2]: http://www.mail-archive.com/[email protected]/msg09242.html
>
> ---
> Jeremiah Peschka - Founder, Brent Ozar Unlimited
> MCITP: SQL Server 2008, MVP
> Cloudera Certified Developer for Apache Hadoop
>
> On Tue, Mar 12, 2013 at 5:14 PM, Kevin Burton <[email protected]> wrote:
>
> On another node the console.log looks like:
>
> 2013-03-12 18:41:19.687 [info] <0.7.0> Application riak_kv started on node '[email protected]'
> 2013-03-12 18:41:19.709 [info] <0.7.0> Application merge_index started on node '[email protected]'
> 2013-03-12 18:41:19.719 [info] <0.7.0> Application riak_search started on node '[email protected]'
> 2013-03-12 18:41:19.724 [info] <0.473.0>@riak_core:wait_for_application:419 Wait complete for application riak_kv (0 seconds)
> 2013-03-12 18:41:19.782 [info] <0.7.0> Application riak_api started on node '[email protected]'
> 2013-03-12 18:41:19.846 [info] <0.7.0> Application cluster_info started on node '[email protected]'
> 2013-03-12 18:41:19.867 [info] <0.7.0> Application riak_control started on node '[email protected]'
> 2013-03-12 18:41:19.868 [info] <0.7.0> Application erlydtl started on node '[email protected]'
> 2013-03-12 18:41:20.471 [info] <0.296.0>@riak_core:wait_for_service:439 Wait complete for service riak_kv (1 seconds)
>
>
> And in erlang.log I see:
>
> 18:41:19.513 [info] New capability: {riak_kv,vnode_vclocks} = true
> 18:41:19.526 [info] New capability: {riak_kv,legacy_keylisting} = false
> 18:41:19.541 [info] New capability: {riak_kv,listkeys_backpressure} = true
> 18:41:19.558 [info] New capability: {riak_kv,mapred_system} = pipe
> 18:41:19.582 [info] New capability: {riak_kv,mapred_2i_pipe} = true
> 18:41:19.623 [info] Waiting for application riak_kv to start (0 seconds).
> 18:41:19.687 [info] Application riak_kv started on node '[email protected]'
> 18:41:19.709 [info] Application merge_index started on node '[email protected]'
> 18:41:19.719 [info] Application riak_search started on node '[email protected]'
> 18:41:19.724 [info] Wait complete for application riak_kv (0 seconds)
> 18:41:19.782 [info] Application riak_api started on node '[email protected]'
> 18:41:19.846 [info] Application cluster_info started on node '[email protected]'
> 18:41:19.867 [info] Application riak_control started on node '[email protected]'
> 18:41:19.868 [info] Application erlydtl started on node '[email protected]'
> Eshell V5.9.1 (abort with ^G)
> ([email protected])1> 18:41:20.471 [info] Wait complete for service riak_kv (1 seconds)
>
> ===== ALIVE Tue Mar 12 19:09:45 CDT 2013
>
> I don't see any errors there.
>
> There is an empty crash.log with a recent timestamp.
>
>
> From: riak-users [mailto:[email protected]] On Behalf Of Jeremiah Peschka
> Sent: Tuesday, March 12, 2013 6:34 PM
> To: [email protected]
> Subject: Re: Error interpretation
>
>
> Your Riak search hook crashed. There is a bad argument being sent to
> Erlang. Check your Riak logs for more detail.
>
> --
> Jeremiah Peschka - Founder, Brent Ozar Unlimited
> MCITP: SQL Server 2008, MVP
> Cloudera Certified Developer for Apache Hadoop
>
> On Mar 12, 2013, at 4:23 PM, "Kevin Burton" <[email protected]> wrote:
>
> I am writing a value to a 4-node Riak cluster with a CorrugatedIron client
> and am getting the following error:
>
> Riak returned an error. Code '0'. Message:
> {precommit_fail,{hook_crashed,{riak_search_kv_hook,precommit,error,badarg}}}
>
> I don't know how to interpret this error. Any help would be appreciated.
>
> Thank you.
>
> _______________________________________________
> riak-users mailing list
> [email protected]
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>