Depending on the client you're using, you can stream results back and process 
them in chunks rather than waiting for the entire result set to buffer. 

It's easy enough to write something like this using Ripple or CorrugatedIron. 
I'm guessing it's possible with other clients. 
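For example, with the Ruby riak-client (which Ripple builds on), a streaming key listing can drive the deletes. This is a minimal sketch, not tested against a live cluster; `delete_all_keys`, the connection details, and the bucket name are my own illustrative choices:

```ruby
# Sketch assuming the Ruby riak-client gem, where Bucket#keys called with a
# block streams key lists back in chunks (instead of buffering the full set),
# and Bucket#delete removes a single key. delete_all_keys only relies on that
# interface, so it works with any bucket-like object exposing it.
def delete_all_keys(bucket)
  deleted = 0
  bucket.keys do |chunk|        # each yielded chunk is an Array of key names
    chunk.each do |key|
      bucket.delete(key)
      deleted += 1
    end
  end
  deleted                       # total number of keys deleted
end

# Against a real cluster it would look something like (hypothetical details):
#   client = Riak::Client.new(:host => '127.0.0.1', :http_port => 8098)
#   delete_all_keys(client.bucket('my_bucket'))
```

Because the keys arrive in chunks as the listing streams, you never hold the whole key list in memory, which sidesteps the oversized-response problem below.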
---
Jeremiah Peschka
Founder, Brent Ozar PLF, LLC

On Jul 28, 2011, at 1:40 PM, Jonathan Langevin wrote:

> I've read on the wiki that to delete a bucket, the only method is to manually 
> delete all keys within the bucket.
> So then what is the recommended process for deleting all keys within a 
> bucket, manually?
> 
> I was initially just listing all keys within a bucket and then iterating over 
> the keys to send delete requests, but I hit a wall when there were too many 
> keys to return in a single list request (I received "header too large" errors).
> 
> So I assume the alternative would be to run a mapreduce that pulls keys from 
> the bucket with a specified limit, then executes the deletes?
> While that's fine for an "active record" style environment (where there may 
> be cleanup actions that must occur per object being deleted), is there 
> another method for deleting all keys within a bucket en masse? (Maybe via a 
> map call?)
> 
> 
> Jonathan Langevin
> Systems Administrator
> Loom Inc.
> Wilmington, NC: (910) 241-0433 - [email protected] - 
> www.loomlearning.com - Skype: intel352
> 
> _______________________________________________
> riak-users mailing list
> [email protected]
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

