Hi,

We were trying to do a similar kind of migration (to a new cluster, no
downtime) in order to remove a legacy OrderedPartitioner limitation.  In
the end we were allowed enough downtime to migrate, but originally we were
proposing a similar solution: deploy an application update that writes to
both clusters simultaneously, and run a background copy of the older data
in some way.
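For what it's worth, a minimal sketch of what we had in mind (client classes here are hypothetical stand-ins, not a real Cassandra driver API):

```python
# Sketch of the dual-write migration idea: the application writes every
# mutation to both the old and new clusters, reads stay on the old
# cluster until cut-over, and a background job backfills older data.

class InMemoryCluster:
    """Hypothetical stand-in for a cluster client; stores rows in a dict."""
    def __init__(self):
        self.rows = {}

    def write(self, key, value):
        self.rows[key] = value

    def read(self, key):
        return self.rows.get(key)


class DualWriter:
    """Sends every write to both clusters; serves reads from the old
    cluster so the new one can be verified before the switchover."""
    def __init__(self, old, new):
        self.old = old
        self.new = new

    def write(self, key, value):
        self.old.write(key, value)
        self.new.write(key, value)

    def read(self, key):
        return self.old.read(key)


def backfill(old, new):
    """Background copy of older data: only copy keys that the
    dual writes have not already placed in the new cluster."""
    for key, value in old.rows.items():
        if new.read(key) is None:
            new.write(key, value)
```

Once the backfill completes and the new cluster checks out, reads can be cut over and the old cluster retired.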

I'd love to hear how the migration went, and whether there were any
(un)expected hurdles along the way!

Thanks,


Conan

On 24 May 2012 23:56, Rob Coli <rc...@palominodb.com> wrote:

> On Thu, May 24, 2012 at 12:44 PM, Steve Neely <sne...@rallydev.com> wrote:
> > It also seems like a dark deployment of your new cluster is a great
> method
> > for testing the Linux-based systems before switching your mission critical
> > traffic over. Monitor them for a while with real traffic and you can have
> > confidence that they'll function correctly when you perform the
> switchover.
>
> FWIW, I would love to see graphs which show their compared performance
> under identical write load and then show the cut-over point for reads
> between the two clusters. My hypothesis is that your linux cluster
> will magically be much more performant/less loaded due to many
> linux-specific optimizations in Cassandra, but I'd dig seeing this
> illustrated in an apples to apples sense with real app traffic.
>
> =Rob
>
> --
> =Robert Coli
> AIM&GTALK - rc...@palominodb.com
> YAHOO - rcoli.palominob
> SKYPE - rcoli_palominodb
>
