You can either use get_range_slices to scan through all your rows and
batch_mutate them into the second cluster, or you can start a test
cluster with the same number of nodes as the live one and just scp
everything over, 1 to 1.
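
As a rough sketch of the first (scan-and-rewrite) approach, here's what the
loop looks like from a Python Thrift client such as pycassa. The keyspace,
column family, and host names below are placeholders, not anything from your
setup:

    # Sketch: copy one column family from a source cluster to a destination
    # cluster by scanning rows with a range query and rewriting them in batches.
    # Assumes pycassa; keyspace, column family, and host names are placeholders.
    import pycassa

    src_pool = pycassa.ConnectionPool('MyKeyspace', ['prod-node1:9160'])
    dst_pool = pycassa.ConnectionPool('MyKeyspace', ['dev-node1:9160'])

    src_cf = pycassa.ColumnFamily(src_pool, 'MyColumnFamily')
    dst_cf = pycassa.ColumnFamily(dst_pool, 'MyColumnFamily')

    batch = {}
    # get_range() pages through all rows via get_range_slices under the hood
    for key, columns in src_cf.get_range():
        batch[key] = columns
        if len(batch) >= 100:
            # batch_insert() sends the accumulated rows as a batch_mutate
            dst_cf.batch_insert(batch)
            batch = {}

    if batch:
        dst_cf.batch_insert(batch)

The same pattern works from any Thrift client (Hector, the raw generated
bindings, etc.); the point is just get_range_slices on the source and
batch_mutate on the destination.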

It's possible, but highly error-prone, to manually slice and dice data
files (raw or as json).

On Tue, Aug 17, 2010 at 12:48 PM, Artie Copeland <yeslinux....@gmail.com> wrote:
> what is the best way to move data between clusters?  we currently have a 4
> node prod cluster with 80G of data and want to move it to a dev env with 3
> nodes.  we have plenty of disk.  we're looking into nodetool snapshot, but it
> looks like that won't work because of the system tables.  sstabletojson doesn't
> look like it would work either, as it would miss the index files.  am i missing
> something?  have others tried to do the same and been successful?
> thanx
> artie
>
> --
> http://yeslinux.org
> http://yestech.org
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com
