I agree.  I am running a 0.6 cluster and would like to upgrade to 0.7.  But
I cannot simply stop my existing nodes.

I need a way to load a new cluster - either on the same machines or new
machines - with the existing data.

I think my overall preference would be to upgrade the cluster to 0.7 running
on a new port (or new set of machines), then have a tiny translation service
on the old port that does whatever translation is required from the 0.6
protocol to the 0.7 protocol.
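
A very rough sketch of the shape such a translation service could take - a
pass-through TCP proxy where the actual 0.6-to-0.7 Thrift translation hooks
would still have to be written (host names and ports here are placeholders):

# Hypothetical skeleton only: a threaded pass-through proxy listening on the
# old 0.6 client port and forwarding to the 0.7 cluster. The two lambdas are
# where any real 0.6 -> 0.7 request/response translation would have to live;
# as written the proxy just copies bytes.
import socket
import threading

OLD_PORT = 9160                                  # port the 0.6 clients point at (assumption)
NEW_CLUSTER = ("cassandra07.example.com", 9170)  # 0.7 cluster endpoint (placeholder)

def pump(src, dst, transform):
    """Copy data from src to dst, applying transform to each chunk."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(transform(data))
    try:
        dst.shutdown(socket.SHUT_WR)             # signal EOF to the other side
    except OSError:
        pass

def handle(client):
    upstream = socket.create_connection(NEW_CLUSTER)
    threading.Thread(target=pump, args=(client, upstream, lambda b: b),
                     daemon=True).start()        # request path (translation hook)
    pump(upstream, client, lambda b: b)          # response path (translation hook)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", OLD_PORT))
server.listen(50)
while True:
    sock, _ = server.accept()
    threading.Thread(target=handle, args=(sock,), daemon=True).start()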

Then I would upgrade my clients once to the 0.7 protocol and also change
their connection parameters to the new 0.7 cluster.

But I'd be open to anything ... I just need a way to upgrade without having
to turn everything off, do the upgrade, then turn everything back on.  I am
not able to do that in my production environment (for business reasons).
Docs on alternatives other than "turn off, upgrade, turn on" would be
fantastic.

Dave Viner


On Fri, Jan 21, 2011 at 1:01 PM, Aaron Morton <aa...@thelastpickle.com> wrote:

> Yup, you can use different ports, and you can give them different cluster
> names and different seed lists.
>
> After you upgrade the second cluster partition, the data should repair
> across, either via read repair (RR) or the hinted handoffs (HHs) that were
> stored while the first partition was down. The easiest thing would be to
> run nodetool repair, then a cleanup to remove any leftover data.
>
> AFAIK the file formats are compatible, but drain the nodes before upgrading
> to clear the commit log.
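>
> Something along these lines, per node (a rough sketch only; host names are
> placeholders, and the stop/install/start step is outside the sketch):
>
> # Rough sketch of that sequence, driving nodetool from a script.
> import subprocess
>
> NODES = ["cass1.example.com", "cass2.example.com", "cass3.example.com"]
>
> def nodetool(host, command):
>     subprocess.check_call(["nodetool", "-h", host, command])
>
> # Before upgrading each node: flush memtables and clear the commit log.
> for host in NODES:
>     nodetool(host, "drain")
>     # ... stop Cassandra on this host, install 0.7, start it again ...
>
> # Once everything is back up on 0.7: repair, then clean up leftover data.
> for host in NODES:
>     nodetool(host, "repair")
>     nodetool(host, "cleanup")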
>
> Can you test this on a non-production system?
>
> Aaron
> (we really need to write some upgrade docs:))
>
> On 21/01/2011, at 10:42 PM, Dave Gardner <dave.gard...@imagini.net> wrote:
>
> What about executing writes against both clusters during the changeover?
> Interested in this topic because we're currently thinking about the same
> thing - how to upgrade to 0.7 without any interruption.
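>
> A sketch of what that could look like on the client side (the client
> objects here are stand-ins for whatever 0.6 and 0.7 libraries are in use):
>
> # Sketch of a dual-write wrapper for the changeover window: writes go to
> # both clusters, reads stay on the old one until cutover.
> class DualWriteClient(object):
>     def __init__(self, old_client, new_client):
>         self.old = old_client
>         self.new = new_client
>
>     def write(self, key, columns):
>         self.old.write(key, columns)          # source of truth until cutover
>         try:
>             self.new.write(key, columns)      # best-effort shadow write
>         except Exception:
>             pass                              # log and reconcile later (e.g. repair)
>
>     def read(self, key):
>         return self.old.read(key)             # flip to self.new at cutover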
>
> Dave
>
> On 21 January 2011 09:20, Daniel Josefsson <jid...@gmail.com> wrote:
>
>> No, what I'm thinking of is having two clusters (0.6 and 0.7) running on
>> different ports so they can't find each other. Or isn't that configurable?
>>
>> Then, when I have the two clusters, I could upgrade all of the clients to
>> run against the new cluster, and finally upgrade the rest of the Cassandra
>> nodes.
>>
>> I don't know how the new cluster would cope with the new data in the old
>> cluster when those nodes are upgraded, though.
>>
>> /Daniel
>>
>> 2011/1/20 Aaron Morton <aa...@thelastpickle.com>
>>
>>> I'm not sure if you're suggesting running a mixed-mode cluster there, but
>>> AFAIK the changes to the internode protocol prohibit this. The nodes will
>>> probably see each other via gossip, but the way the messages define their
>>> purpose (their verb handler) has changed.
>>>
>>> Out of interest, which is more painful: stopping the cluster and upgrading
>>> it, or upgrading your client code?
>>>
>>> Aaron
>>>
>>> On 21/01/2011, at 12:35 AM, Daniel Josefsson <jid...@gmail.com> wrote:
>>>
>>> In our case our replication factor is more than half the number of nodes
>>> in the cluster.
>>>
>>> Would it be possible to do the following:
>>>
>>>    - Upgrade half of them
>>>    - Change the Thrift port and the inter-server port (is this the
>>>    storage_port?)
>>>    - Start them up
>>>    - Upgrade clients one by one
>>>    - Upgrade the rest of the servers
>>>
>>> Or might we get some kind of data collision if we are still writing to the
>>> old cluster while the new storage is being used?
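>>>
>>> A sketch of what changing those two ports (rpc_port for Thrift clients and
>>> storage_port for inter-node traffic in 0.7's cassandra.yaml) plus the
>>> cluster name might look like, assuming PyYAML and the stock config layout:
>>>
>>> # Sketch: bump the ports and cluster name on the upgraded half so it
>>> # cannot join, or be confused with, the 0.6 cluster.
>>> # Path and port numbers below are placeholders.
>>> import yaml
>>>
>>> CONF = "/etc/cassandra/cassandra.yaml"
>>>
>>> with open(CONF) as f:
>>>     conf = yaml.safe_load(f)
>>>
>>> conf["cluster_name"] = "Production-0.7"   # different name from the 0.6 cluster
>>> conf["rpc_port"] = 9170                   # Thrift port the upgraded clients use
>>> conf["storage_port"] = 7010               # inter-node port, away from 0.6's 7000
>>>
>>> with open(CONF, "w") as f:
>>>     yaml.safe_dump(conf, f, default_flow_style=False)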
>>>
>>> /Daniel
>>>
>>>
>>
>
