I have a cron job that prunes old records using a secondary index range
search on created_at (milliseconds since epoch). The code looks like this:
    Riak::SecondaryIndex.new(bucket, "created_at_bin", start..finish,
                             { max_results: per_page })
The script then iterates over the returned keys and calls bucket.delete on
each one.
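For context, the delete loop is roughly the following sketch. In the real
script the key list comes from Riak::SecondaryIndex; here `prune_page` and the
in-memory `FakeBucket` are illustrative stand-ins so the logic is runnable on
its own:

```ruby
# Stand-in for Riak::Bucket so the prune logic can run without a cluster.
# In the real script, `bucket` is a Riak::Bucket and the keys come from the
# secondary index query shown above.
class FakeBucket
  def initialize(data)
    @data = data
  end

  # Mirrors Riak::Bucket#delete: remove the object stored under +key+.
  def delete(key)
    @data.delete(key)
  end

  def keys
    @data.keys
  end
end

# Delete every key returned by one page of the index query; returns the
# number of keys processed.
def prune_page(bucket, keys)
  keys.each { |key| bucket.delete(key) }
  keys.length
end

bucket  = FakeBucket.new("a" => 1, "b" => 2, "c" => 3)
deleted = prune_page(bucket, %w[a b])
# deleted == 2; only "c" remains in the bucket
```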
The problem I'm having is that previously deleted keys show up again when the
script is re-run, but attempts to fetch the corresponding objects fail because
they have already been deleted. This happens several hours after the original
deletes, so I don't think it is a transient replication issue.
My suspicion is that the secondary index entries have somehow gotten out of
sync with (disassociated from) the objects. Is there a way to repair them? Or
am I attempting something that just isn't going to work with Riak?
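To make the symptom concrete, here is a runnable sketch of what I am seeing.
`StaleIndex` and `MemBucket` are made-up stand-ins for the suspected state
(in the real cluster the keys come from the 2i query and the existence check
would be a fetch through the client):

```ruby
# Stand-in for a secondary index that was not updated when objects were
# deleted -- the failure mode I suspect.
class StaleIndex
  def initialize(keys)
    @keys = keys
  end

  def keys
    @keys
  end
end

# Stand-in for the bucket: an object is fetchable only if it still exists.
class MemBucket
  def initialize(data)
    @data = data
  end

  def exists?(key)
    @data.key?(key)
  end
end

# Keys the index returns that no longer resolve to an object.
def dangling_keys(index, bucket)
  index.keys.reject { |k| bucket.exists?(k) }
end

index  = StaleIndex.new(%w[k1 k2 k3])
bucket = MemBucket.new("k2" => "still here")
dangling_keys(index, bucket)
# => ["k1", "k3"]
```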
Other details:
riak version: 1.4.8
ring_size: 256
cluster nodes: 6
client: https://github.com/basho/riak-ruby-client (1.4.4.1)
connection type: protocol buffers
bucket delete_mode: 3000
_______________________________________________
riak-users mailing list
[email protected]
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com