Internally we have a tool that performs get_range_slices on the source cluster and replicates the results to the destination. Remember that writes are idempotent. Our tool can optionally replicate only data written between two timestamps, which allows incremental transfers. So if you have your application writing new data to both clusters, you can run a range-scanning program to copy all the existing data.
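A minimal sketch of that idea in pycassa (our internal tool is not public, so the node, keyspace, and column family names below are made up; ts_start/ts_end are microsecond write timestamps bounding the incremental window):

```python
# Hypothetical sketch of the range-scan replication described above.
from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

SRC_NODES = ['random-node1:9160']   # RandomPartitioner cluster
DST_NODES = ['murmur-node1:9160']   # Murmur3Partitioner cluster

src_cf = ColumnFamily(ConnectionPool('MyKeyspace', SRC_NODES), 'MyCF')
dst_cf = ColumnFamily(ConnectionPool('MyKeyspace', DST_NODES), 'MyCF')

def replicate(ts_start, ts_end):
    """Copy every column whose write timestamp (microseconds) falls in
    [ts_start, ts_end). Re-running the same window is safe because the
    writes are idempotent: we reuse the original timestamps."""
    batch = dst_cf.batch(queue_size=200)
    # get_range streams all rows; include_timestamp=True makes each
    # column value a (value, timestamp) pair. This assumes rows fit
    # within column_count; a real tool would also page columns.
    for key, cols in src_cf.get_range(include_timestamp=True,
                                      column_count=10000,
                                      buffer_size=1000):
        for name, (value, ts) in cols.items():
            if ts_start <= ts < ts_end:
                # Preserve the source timestamp so migrated data never
                # shadows newer writes arriving on the destination.
                batch.insert(key, {name: value}, timestamp=ts)
    batch.send()
```

Because every column is rewritten with its original timestamp, re-running a window is harmless, and anything the application has already dual-written to the destination with a newer timestamp wins.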
On Monday, December 23, 2013, horschi <hors...@gmail.com> wrote:
> Interesting that you even dare to do a live migration :-)
>
> Do you do all Murmur writes with the timestamps from the "Random" data, so
> that all migrated data is written with timestamps from the past?
>
>
> On Mon, Dec 23, 2013 at 3:59 PM, Rahul Menon <ra...@apigee.com> wrote:
>>
>> Christian,
>>
>> I have been planning to migrate my cluster from Random to Murmur3 in a
>> similar manner. I intend to use pycassa to read from the old cluster and
>> write to the new one. My only concern is ensuring the consistency of
>> already-migrated data, as the cluster (with Random) would constantly be
>> serving production traffic. I was able to do this on a non-production
>> cluster, but production is a different game.
>>
>> I would also like to hear more about this, especially from anyone who
>> was able to do it successfully.
>>
>> Thanks,
>> Rahul
>>
>>
>> On Mon, Dec 23, 2013 at 6:45 PM, horschi <hors...@gmail.com> wrote:
>>>
>>> Hi list,
>>>
>>> has anyone ever tried to migrate a cluster from Random to Murmur?
>>>
>>> We would like to do so to have a more standardized setup. I wrote a
>>> small (yet untested) utility which should be able to read SSTable files
>>> from disk and write them into a Cassandra cluster using Hector. This
>>> migration would be offline, of course, and would only work for smaller
>>> clusters.
>>>
>>> Any thoughts on the topic?
>>>
>>> kind regards,
>>> Christian
>>>
>>> PS: The reason for doing so is not performance. It is to simplify
>>> operational stuff for the years to come. :-)

--
Sorry, this was sent from mobile. Will do less grammar and spell checking than usual.