Re: Upgrade From 2.0 to 2.1
Yayyy, just saw the release of 3.11.4. :-)

Thanks a lot Jeff for clarifying this. I really hoped the answer would be different. Now I need to nag our R&D teams again :-) Thanks!

On Mon, Feb 11, 2019 at 8:21 PM Michael Shuler wrote:
> On 2/11/19 9:24 AM, shalom sagges wrote:
> > I've successfully upgraded a 2.0 cluster to 2.1 on the way to upgrading
> > to 3.11 (hopefully 3.11.4 if it's released very soon).
>
> Very soon. If not today, it will be up tomorrow. :)
>
> --
> Michael
>
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
Re: Upgrade From 2.0 to 2.1
On 2/11/19 9:24 AM, shalom sagges wrote:
> I've successfully upgraded a 2.0 cluster to 2.1 on the way to upgrading
> to 3.11 (hopefully 3.11.4 if it's released very soon).

Very soon. If not today, it will be up tomorrow. :)

--
Michael
Re: Upgrade From 2.0 to 2.1
On Mon, Feb 11, 2019 at 7:24 AM shalom sagges wrote:
> Hi All,
>
> I've successfully upgraded a 2.0 cluster to 2.1 on the way to upgrading
> to 3.11 (hopefully 3.11.4 if it's released very soon).
>
> I have 2 small questions:
>
> 1. Currently the DataStax clients are enforcing protocol version 2 to
>    prevent mixed-cluster issues. Do I now need to enforce protocol
>    version 3 while upgrading from 2.1 to 3.11, or can I still use
>    protocol version 2?

You'll need to go to v3 for 3.11. Congratulations on being aware enough to do this - advanced upgrade coordination is absolutely the right thing to do, but most people don't know it's possible or useful.

> 2. After the upgrade, I found that the system table NodeIdInfo has not
>    been upgraded, i.e. I still see it in the *-jb-* naming convention.
>    Does this mean that this table is obsolete and can be removed?

It is obsolete and can be removed.
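The *-jb-* naming the question refers to is the sstable format code embedded in the on-disk filename, which identifies the release that wrote the file. As a minimal sketch (the function name and the exact code-to-release mapping here are illustrative; codes for other releases vary), leftover old-generation files can be spotted like this:

```python
# Illustrative helper: infer the Cassandra generation from the sstable
# format code in the filename ("jb" = 2.0, "ka" = 2.1, ...), so files
# that upgradesstables never rewrote are easy to spot after an upgrade.
FORMAT_TO_RELEASE = {"jb": "2.0", "ka": "2.1", "la": "2.2", "mc": "3.11"}

def sstable_release(filename):
    """Return the release that wrote this sstable, or "unknown"."""
    # 2.x names look like: Keyspace-Table-jb-42-Data.db
    for token in filename.split("-"):
        if token in FORMAT_TO_RELEASE:
            return FORMAT_TO_RELEASE[token]
    return "unknown"

# A NodeIdInfo file still in 2.0's "jb" format:
print(sstable_release("system-NodeIdInfo-jb-1-Data.db"))  # -> 2.0
```

Since NodeIdInfo is obsolete (per the answer above), a lingering *-jb-* file under that table is expected and safe to remove rather than a sign of a failed upgrade.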
Upgrade From 2.0 to 2.1
Hi All,

I've successfully upgraded a 2.0 cluster to 2.1 on the way to upgrading to 3.11 (hopefully 3.11.4 if it's released very soon).

I have 2 small questions:

1. Currently the DataStax clients are enforcing protocol version 2 to prevent mixed-cluster issues. Do I now need to enforce protocol version 3 while upgrading from 2.1 to 3.11, or can I still use protocol version 2?
2. After the upgrade, I found that the system table NodeIdInfo has not been upgraded, i.e. I still see it in the *-jb-* naming convention. Does this mean that this table is obsolete and can be removed?

Thanks!
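The reasoning behind pinning the protocol version is that the client must speak the highest version that every node in the mixed cluster still supports. A minimal sketch (function and table names are hypothetical; the mapping reflects that protocol v2 was dropped in the 3.x line, which is why a 2.1 -> 3.11 upgrade must pin v3):

```python
# Illustrative helper: choose the native-protocol version to pin in the
# client while a rolling upgrade leaves the cluster in a mixed state.
# Rule: pin the highest version EVERY node supports, i.e. the minimum
# of each node's maximum.

# Maximum native protocol version each release can speak.
MAX_PROTOCOL = {
    "2.0": 2,
    "2.1": 3,
    "3.11": 4,
}

def protocol_to_pin(node_versions):
    """Return the protocol version safe for a mixed cluster."""
    return min(MAX_PROTOCOL[v] for v in node_versions)

# Mid-upgrade from 2.0 to 2.1: stay on v2.
print(protocol_to_pin(["2.0", "2.1", "2.1"]))   # -> 2

# Mid-upgrade from 2.1 to 3.11: pin v3, matching the answer above.
print(protocol_to_pin(["2.1", "3.11", "3.11"]))  # -> 3
```

In practice this pin is set once in the driver configuration (e.g. a `protocol_version` option in the DataStax drivers) before the first node is upgraded, and lifted only after the whole cluster is on the new version.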
Re: live dsc upgrade from 2.0 to 2.1 behind the scenes
Thank you, Erick, for the advice. It's a great help.

- Park

On Tuesday, August 15, 2017 3:42 AM, Erick Ramirez wrote:
> 1) You should not perform any streaming operations (repair, bootstrap,
> decommission) in the middle of an upgrade. Note that an upgrade is not
> complete until you have completed upgradesstables on all nodes in the
> cluster.
> 2) No streaming is involved with writes, so it's not an issue.
> 3) It doesn't matter whichever way you do it. My personal experience is
> that it's best to do it as you go, but YMMV.
> 4) It depends on a lot of factors, including the type of disks (e.g.
> SSDs vs HDDs), data model, access patterns, cluster load, etc. The only
> way you'll be able to estimate it is by running your own tests.
> 5) There is no "max" time, but it is preferable that you complete the
> upgrades in the shortest amount of time. Until you have completed
> upgradesstables on all nodes, there is a performance hit when reading
> older generations of sstables. I'm sure you're about to ask "how much of
> a perf hit?" and the answer is "test it".
> 6) It is not advisable to perform schema changes in mixed mode -- the
> schema version on upgraded nodes is different and there will be a
> mismatch until all nodes are upgraded.
> Good luck!
Re: live dsc upgrade from 2.0 to 2.1 behind the scenes
1) You should not perform any streaming operations (repair, bootstrap, decommission) in the middle of an upgrade. Note that an upgrade is not complete until you have completed upgradesstables on all nodes in the cluster.

2) No streaming is involved with writes, so it's not an issue.

3) It doesn't matter whichever way you do it. My personal experience is that it's best to do it as you go, but YMMV.

4) It depends on a lot of factors, including the type of disks (e.g. SSDs vs HDDs), data model, access patterns, cluster load, etc. The only way you'll be able to estimate it is by running your own tests.

5) There is no "max" time, but it is preferable that you complete the upgrades in the shortest amount of time. Until you have completed upgradesstables on all nodes, there is a performance hit when reading older generations of sstables. I'm sure you're about to ask "how much of a perf hit?" and the answer is "test it".

6) It is not advisable to perform schema changes in mixed mode -- the schema version on upgraded nodes is different and there will be a mismatch until all nodes are upgraded.

Good luck!

On Mon, Aug 14, 2017 at 12:14 PM, Park Wu wrote:
> Hi, folks: I am planning to upgrade our production from dsc 2.0.16 to
> 2.1.18 for 2 DCs (20 nodes each, 600 GB per node). A few questions:
> 1) What happens during a rolling upgrade? Let's say we upgrade only one
> node to the new version; before upgradesstables, will the data coming in
> stay on that node and not be able to stream to other nodes?
> 2) What if I have very active writes? How much data can it hold until it
> sees other nodes with the new version so it can stream?
> 3) Should I run upgradesstables when all nodes in one DC are upgraded,
> or wait until both DCs are upgraded?
> 4) Any idea or experience how long it will take to run upgradesstables
> on 600 GB of data per node?
> 5) What is the max time I can take for the rolling upgrade in each DC?
> 6) I was testing with a 3-node cluster, one node on 2.1.18, the other
> two on 2.0.16. I got a warning on the newer node when I tried to create
> a keyspace and insert some sample data:
> "Warning: schema version mismatch detected, which might be caused by
> DOWN nodes; if this is not the case, check the schema versions of your
> nodes in system.local and system.peers. OperationTimedOut: errors={},
> last_host=xxx"
> But the data upserted successfully, even though it could not be seen on
> the other nodes. Any suggestions?
> Many thanks for any help or comments!
> - Park
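The constraints above (binaries first on every node, upgradesstables only once the whole cluster is on the new version, and no streaming operations in between) imply an ordering that can be sketched as a dry-run plan. This is one reasonable ordering, not the only valid one; the function, host names, and the package-upgrade placeholder are hypothetical, while `nodetool drain` and `nodetool upgradesstables` are real subcommands:

```python
# Illustrative dry-run planner for a rolling upgrade: phase 1 upgrades
# binaries one node at a time; phase 2 rewrites old-generation sstables
# only after every node runs the new version. No repair/bootstrap/
# decommission appears anywhere in the plan.

def rolling_upgrade_plan(nodes):
    """Return the ordered commands for a rolling upgrade (as strings)."""
    plan = []
    # Phase 1: upgrade binaries one node at a time.
    for n in nodes:
        plan += [
            f"ssh {n} nodetool drain",  # flush memtables, stop accepting traffic
            f"ssh {n} systemctl stop cassandra",
            f"ssh {n} <upgrade the Cassandra package>",
            f"ssh {n} systemctl start cassandra",
        ]
    # Phase 2: only once the whole cluster is upgraded,
    # rewrite sstables to the new format.
    for n in nodes:
        plan.append(f"ssh {n} nodetool upgradesstables")
    return plan

for step in rolling_upgrade_plan(["node1", "node2"]):
    print(step)
```

Per answer 3 above, running upgradesstables per node as you go is also fine; what matters is that answers 1 and 5 hold: no streaming mid-upgrade, and the upgrade is not finished until upgradesstables has run everywhere.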
live dsc upgrade from 2.0 to 2.1 behind the scenes
Hi, folks:

I am planning to upgrade our production from dsc 2.0.16 to 2.1.18 for 2 DCs (20 nodes each, 600 GB per node). A few questions:

1) What happens during a rolling upgrade? Let's say we upgrade only one node to the new version; before upgradesstables, will the data coming in stay on that node and not be able to stream to other nodes?
2) What if I have very active writes? How much data can it hold until it sees other nodes with the new version so it can stream?
3) Should I run upgradesstables when all nodes in one DC are upgraded, or wait until both DCs are upgraded?
4) Any idea or experience how long it will take to run upgradesstables on 600 GB of data per node?
5) What is the max time I can take for the rolling upgrade in each DC?
6) I was testing with a 3-node cluster, one node on 2.1.18, the other two on 2.0.16. I got a warning on the newer node when I tried to create a keyspace and insert some sample data:
"Warning: schema version mismatch detected, which might be caused by DOWN nodes; if this is not the case, check the schema versions of your nodes in system.local and system.peers. OperationTimedOut: errors={}, last_host=xxx"
But the data upserted successfully, even though it could not be seen on the other nodes. Any suggestions?

Many thanks for any help or comments!

- Park