I'm trying to delete keys across multiple buckets in a Riak cluster made of
two machines (n_val = 2). I'm using the Erlang map and reduce scripts that I
found at:

Map: http://contrib.basho.com/get_keys.html
Reduce: http://contrib.basho.com/delete_keys.html

Because I don't want to delete every key, I'm running the map/reduce job on
top of a Secondary Index query. The 2i query selects the subset of keys I
want to delete. When I execute the map/reduce job, I get many "not_found"
errors, like:

{"phase":0,"error":"function_clause","input":"{{error,notfound},{<<\"my_bucket\">>,<<\"item_key\">>},undefined}","type":"error","stack":"[{riak_object,bucket,[{error,notfound}],[{file,\"src/riak_object.erl\"},{line,251}]},{delete_map_function,get_keys,3,[{file,\"delete_map_function.erl\"},{line,7}]},{riak_kv_mrc_map,map,3,[{file,\"src/riak_kv_mrc_map.erl\"},{line,164}]},{riak_kv_mrc_map,process,3,[{file,\"src/riak_kv_mrc_map.erl\"},{line,140}]},{riak_pipe_vnode_worker,process_input,3,[{file,\"src/riak_pipe_vnode_worker.erl\"},{line,444}]},{riak_pipe_vnode_worker,wait_for_input,2,[{file,\"src/riak_pipe_vnode_worker.erl\"},{line,376}]},{gen_fsm,...},...]"}
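For context, I'm submitting the job over the index query roughly like this, via the Erlang PB client (the index name, match value, and the reduce module/function are placeholders for my real ones; only delete_map_function:get_keys appears in the trace above):

```erlang
%% Sketch of the job submission; index/value and the reduce phase
%% names below are placeholders, not the actual contrib module names.
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
Input = {index, <<"my_bucket">>, <<"status_bin">>, <<"to_delete">>},
Query = [{map,    {modfun, delete_map_function, get_keys}, none, false},
         {reduce, {modfun, delete_reduce_function, delete}, none, true}],
{ok, Results} = riakc_pb_socket:mapred(Pid, Input, Query).
```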

This is really weird to me. If the map function finds an object and properly
returns it to the reduce function, why can't the reduce function find the
object again? It makes no sense to me.
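For what it's worth, the stack trace points at delete_map_function:get_keys/3 being called with {error, notfound} as the object (the input tuple is {{error,notfound},{Bucket,Key},undefined}), so I'm wondering whether the map function just needs a clause to skip those inputs. Something like this (a sketch of my guess, not the actual contrib code; the return shape of the real get_keys may differ):

```erlang
%% Sketch: with a 2i input, a key whose object can no longer be read
%% reaches the map phase as {{error, notfound}, {Bucket, Key}, KeyData},
%% i.e. the map function's first argument is {error, notfound}.
get_keys({error, notfound}, _KeyData, _Arg) ->
    [];  %% skip keys whose objects were not found
get_keys(RiakObject, _KeyData, _Arg) ->
    %% emit a [Bucket, Key] pair for the delete reduce phase
    [[riak_object:bucket(RiakObject), riak_object:key(RiakObject)]].
```

That would explain the function_clause error, but not why the objects are missing in the first place.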

Thanks




