[ https://issues.apache.org/jira/browse/CASSANDRA-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006683#comment-16006683 ]

Jeff Jirsa commented on CASSANDRA-13441:
----------------------------------------

Hi [~juliuszaromskis] - if you're upgrading from 3.0.9 to 3.0.13, it's unlikely 
that this is your issue (it would mostly impact people going from 2.1 -> 3.0 or 
2.2 -> 3.0). Unless you're very confident that the schema version on 
{{10.240.0.6}} is different from, and more desirable than, the one on the other 
two nodes, the most likely solution is to issue a {{nodetool resetlocalschema}} 
on {{10.240.0.6}}, allowing it to re-pull its schema from .7 and .8.
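
For reference, a rough sequence would look something like the following (a 
sketch only; {{describecluster}} is just there to confirm the disagreement 
before and after):

{noformat}
# confirm that 10.240.0.6 is the one node on a divergent schema version
nodetool describecluster

# on 10.240.0.6: drop the local schema and re-pull it from the live nodes
nodetool resetlocalschema

# re-check; all three nodes should now report a single schema version
nodetool describecluster
{noformat}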



> Schema version changes for each upgraded node in a rolling upgrade, causing 
> migration storms
> --------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-13441
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13441
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Schema
>            Reporter: Jeff Jirsa
>            Assignee: Jeff Jirsa
>             Fix For: 3.0.14, 3.11.0, 4.0
>
>
> In versions < 3.0, during a rolling upgrade (say 2.0 -> 2.1), the first node 
> to upgrade to 2.1 would add the new tables, setting the new 2.1 version ID, 
> and subsequently upgraded hosts would settle on that version.
> When a 3.0 node upgrades and writes its own new-in-3.0 system tables, it'll 
> write the same tables that exist in the schema with brand new timestamps. As 
> written, this will cause all nodes in the cluster to change schema (to the 
> version with the newest timestamp). On a sufficiently large cluster with a 
> non-trivial schema, this could cause (literally) millions of migration tasks 
> to needlessly bounce across the cluster.
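
To make the mechanism described above concrete (a toy illustration only, not 
Cassandra's actual digest code): if the schema version is effectively a hash 
over the schema rows including their write timestamps, then rewriting an 
identical table definition with a fresh timestamp yields a new version even 
though nothing has logically changed:

{noformat}
# identical definition, different write timestamp -> different digest,
# i.e. a "new" schema version for a schema that did not logically change
echo "ks.tbl|CREATE TABLE ...|1492000000000" | md5sum
echo "ks.tbl|CREATE TABLE ...|1492099999000" | md5sum
{noformat}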


