Avoid doing anything beyond serving normal traffic while the cluster is in
mixed mode. The best approach is to upgrade all nodes as quickly as possible
and only then resume other activities.
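
A quick way to confirm you are actually out of mixed mode before resuming
repairs, schema changes, etc. is to compare release_version across all nodes.
A rough, untested sketch using the DataStax Python driver (the contact point
below is a placeholder for one of your nodes):

# More than one distinct release_version means the cluster is still mixed.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])   # placeholder contact point
session = cluster.connect()

local = session.execute("SELECT release_version FROM system.local").one()
peers = session.execute("SELECT release_version FROM system.peers")

versions = {local.release_version} | {p.release_version for p in peers}
if len(versions) > 1:
    print("Still in mixed mode:", sorted(versions))
else:
    print("All nodes on", versions.pop(), "- safe to resume other activities")

cluster.shutdown()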

Regards


On Thu, May 20, 2021 at 10:44 AM Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:

> Thanks for the suggestions.
>
> A few more questions on mixed mode. Can I perform the operations below on
> my cluster while it is in mixed mode?
>
> 1. Add a new datacenter with the new version
> 2. Remove a datacenter
> 3. Add more nodes to the upgraded or non-upgraded datacenter
> 4. Schema changes
>
> I haven't tested these yet; I'm just trying to gather information about
> the do's and don'ts.
>
>
> On Wed, May 19, 2021 at 7:39 PM Jeff Jirsa <jji...@gmail.com> wrote:
>
>> Incredibly slow and has no real benefits beyond a single canary.
>>
>>
>> On May 19, 2021, at 7:19 PM, rammohan ganapavarapu <
>> rammohanga...@gmail.com> wrote:
>>
>> How about option 1? Are there any issues with that option?
>>
>> On Wed, May 19, 2021, 6:58 PM Kane Wilson <k...@raft.so> wrote:
>>
>>> On Thu, May 20, 2021 at 11:17 AM Jai Bheemsen Rao Dhanwada <
>>> jaibheem...@gmail.com> wrote:
>>>
>>>> Thanks for the response,
>>>>
>>>> Is there a limit on how long I can run in mixed mode? Let's say
>>>> datacenter 1 is upgraded and upgradesstables is run on day 1, and
>>>> datacenter 3 is upgraded and upgradesstables runs on day 10. Is that going
>>>> to be a big concern?
>>>>
>>> There is no "limit". The major caveat is that you cannot run repair while
>>> the versions are mixed, which may or may not be a problem in your scenario.
>>>
>>>
>>>> > 2 might be strictly safer if you trust internode mixed mode AND have
>>>> a way to fail out of a dc and rebuild it without violating consistency.
>>>> I tested mixed mode and it works, but are there any catches, anything
>>>> that won't work?
>>>>
>>>> I am okay to disable repair during this time.
>>>>
>>> I'd still advise limiting the time spent in mixed mode. You probably don't
>>> want to be stuck doing operations in mixed mode, or running without repair,
>>> for too long. An alternative would be to upgrade just one node in each DC
>>> first and monitor that node for any issues. If that node seems stable enough
>>> you can roll out to the whole DC, whereas if it encounters problems you can
>>> downgrade or fix it without having to go through a complex DC failover. You
>>> could even do this in parallel across all your DCs, and thus substantially
>>> limit the time you spend in mixed mode.
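>>>
>>> A rough, untested sketch of that canary check with the DataStax Python
>>> driver (the target version and canary addresses below are placeholders):
>>>
>>> from cassandra.cluster import Cluster
>>>
>>> TARGET_VERSION = "4.0.1"                           # placeholder target
>>> CANARIES = {"dc1": "10.0.1.5", "dc2": "10.0.2.5"}  # placeholder canary IPs
>>>
>>> cluster = Cluster(list(CANARIES.values()))
>>> cluster.connect()  # populates cluster.metadata with per-host state
>>>
>>> for dc, ip in CANARIES.items():
>>>     # Find the canary host and check it is up and on the target version
>>>     # before rolling out to the rest of its DC.
>>>     host = next((h for h in cluster.metadata.all_hosts()
>>>                  if str(h.address) == ip), None)
>>>     if host and host.is_up and host.release_version == TARGET_VERSION:
>>>         print(dc, "canary healthy on", TARGET_VERSION, "- continue rollout")
>>>     else:
>>>         print(dc, "canary not healthy/upgraded yet - hold the rollout")
>>>
>>> cluster.shutdown()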
>>>
>>> --
>>> raft.so - Cassandra consulting, support, and managed services
>>>
>>
