> On Jan 6, 2024, at 10:25 PM, manish khandelwal <
> manishkhandelwa...@gmail.com> wrote:
>
>
> Hi
>
> Is an upgrade from Cassandra 3.11.x to Cassandra 4.1.3 supported?
Upgrading from 3.11.x to 4.1.x is supported, yes. As the documentation you
reference mentions, it is not possible to downgrade from 4.x to 3.x.
Note that running repair during upgrades is not supported; please ensure it is
disabled before beginning the upgrade and re-enable it after.
– Scott
Hi
Is an upgrade from Cassandra 3.11.x to Cassandra 4.1.3 supported?
NEWS.txt has a general guideline that
Snapshotting is fast (especially if you have JNA installed) and takes
effectively zero disk space until you start compacting the live data
files again. Thus, best practice is to snapshot before upgrading.
21:08
To: user@cassandra.apache.org
Subject: Re: Upgrade from C* 3 to C* 4 per datacenter
Just a heads-up, but there have been issues (at least one) reported when
upgrading a multi-DC cluster from 3.x to 4.x when the cluster uses node-to-node
SSL/TLS encryption. This is largely attributed
that you clone, you need exactly
> the same number of nodes in your test cluster that you have in the
> respective data center of your production cluster.
>
> Once the cluster is cloned, you can test whatever you like (e.g. upgrade
> to C* 4, test operations in a mixed-version cluster, etc.).
Our experience with the upgrade from C* 3.11 to C* 4.1 on the test cluster was
quite smooth. The only problem that we saw was that when later adding a second
data center
Hello Jeff et al,
Thanks a lot for your valuable info. Your comment covers all my queries.
BR
MK
From: Jeff Jirsa
Sent: October 26, 2023 15:48
To: user@cassandra.apache.org
Cc: Michalis Kotsiouros (EXT)
Subject: Re: Upgrade from C* 3 to C* 4 per datacenter
> On Oct 26, 2023, at 12:32 AM, Michalis Kotsiouros (EXT) via user
> wrote:
>
>
> Hello Cassandra community,
> We are trying to upgrade our systems from Cassandra 3 to Cassandra 4. We plan
> to do this per data center.
> During the upgrade, a cluster with mixed SW levels is expected.
Hello Scott,
Thanks a lot for the immediate answer.
We use a semi automated procedure to do the upgrade of the SW in our systems
which is done per datacenter.
Our limitation is that if we want to rollback we need to rollback the Cassandra
nodes from the whole datacenter.
May I return
The recommended approach to upgrading is to perform a replica-safe rolling
restart of instances in each datacenter, one datacenter at a time.
> In case of an upgrade failure, would it be possible to remove the data
> center from the cluster, restore the datacenter to C* 3 SW and add it back?
Hello Cassandra community,
We are trying to upgrade our systems from Cassandra 3 to Cassandra 4. We plan
to do this per data center.
During the upgrade, a cluster with mixed SW levels is expected. At this point
is it possible to perform topology changes?
In case of an upgrade failure, would it be possible to remove the data center
from the cluster, restore it to C* 3 SW and add it back?
that updating the
> Java version is not a simple thing but wanted to throw that little
> Interesting bit of info that I had experienced.
>
> Sent from my iPhone
>
> On Aug 16, 2023, at 1:28 PM, vaibhav khedkar wrote:
>
>
> Thanks Patrick,
>
>
> We do have plans
Thanks Patrick,
We do have plans to upgrade to *java 11* eventually but we will go through
internal testing and would also need some time given the size of our
infrastructure.
Is it safe to assume that the issue exists in the combination of upgrades
from 3.11.x to 4.0.x *and* running on Java 8?
I've actually noticed this as well on a few clusters I deal with but after
upgrading Cassandra from 3.11 to 4 we also changed to use Java 11 shortly
after the cluster upgrade. After I moved to Java 11 I have not experienced
a problem.
On Wed, Aug 16, 2023 at 12:12 PM vaibhav khedkar
wrote
> We recently upgraded our fleet of ~2500 Cassandra instances from 3.11.9 to
> 4.0.5.
>
> After the upgrade, we are seeing a unique issue where the compacted
> SSTables's file descriptors are still present and are never cleared. This
> is causing false disk alerts. We have to restart nodes very frequently.
for this for discussion / investigation is
probably a good next step.
Thanks,
– Scott
Hi everyone,
We recently upgraded our fleet of ~2500 Cassandra instances from 3.11.9 to
4.0.5.
After the upgrade, we are seeing a unique issue where the compacted
SSTables's file descriptors are still present and are never cleared. This
is causing false disk alerts. We have to restart nodes very frequently.
You can check in your lower environment.
On Fri, 11 Aug, 2023, 06:25 Surbhi Gupta, wrote:
Thanks,
I am looking to upgrade to 4.1.x.
Please advise.
Thanks
Surbhi
On Thu, Aug 10, 2023 at 5:39 PM MyWorld wrote:
Though it's recommended to upgrade to the latest 3.11.x version and then to
4.x, even upgrading directly won't be a problem. Just check the
release notes.
However, for production I would recommend going with the latest stable
4.0.x version.
Regards
Ashish
On Sat, 8 Jul, 2023, 05:44 Surbhi Gupta
to allow for the time to upgrade DC1, wait, upgrade DC2, and then complete a
repair, or you may end up with resurrected data.
You also must ensure you do not enable any new features on new-version
nodes in a mixed-version cluster. You may enable new features after all
nodes in the cluster have been upgraded.
Assuming "do it in one go" means a rolling upgrade from 3.11.5 to 4.1.2
skipping all version numbers between these two, the answer is yes, you
can "do it in one go".
On 08/07/2023 01:14, Surbhi Gupta wrote:
Hi,
We have to upgrade from 3.11.5 to 4.1.x .
Can we do it in one go?
Yes, repairs are prohibited in a mixed-version cluster. If you want to monitor,
please disable repairs until the complete upgrade is finished.
On Sat, Jul 8, 2023, 01:21 Runtian Liu wrote:
> Hi,
>
> We are upgrading our Cassandra clusters from 3.0.27 to 4.0.6 and we
> observed some errors related to repair.
Hi,
We have to upgrade from 3.11.5 to 4.1.x .
Can we do it in one go ?
Or do we have to go to an intermediate version first?
Thanks
Surbhi
Hi,
We are upgrading our Cassandra clusters from 3.0.27 to 4.0.6 and we
observed some error related to repair: j.l.IllegalArgumentException:
Unknown verb id 32
We have two datacenters for each Cassandra cluster and when we are doing an
upgrade, we want to upgrade 1 datacenter first and monitor
You should take a snapshot before starting the upgrade process. You
cannot achieve a snapshot of "the most current situation" in a live
cluster anyway, as data are constantly written to the cluster even after
a node is stopped for upgrading. So you've got to accept a slightly outdated
snapshot.
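Put concretely, a per-node sequence consistent with this advice might look like the sketch below; the snapshot tag and service name are assumptions, and the snapshot is taken while the node is still up because `nodetool` needs a running node:

```shell
# Per-node upgrade sketch. nodetool snapshot flushes and hard-links sstables,
# so it is cheap; drain then flushes again and stops accepting writes.
nodetool snapshot -t pre-upgrade        # tag name is an arbitrary choice
nodetool drain                          # flush memtables, stop taking writes
sudo systemctl stop cassandra           # service name varies by install
# ... upgrade the Cassandra package/binaries here ...
sudo systemctl start cassandra
```

After a successful upgrade, `nodetool clearsnapshot -t pre-upgrade` reclaims the snapshot space.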
Hi all,
On a test setup I am looking to do an upgrade from 4.0.3 to 4.0.6.
Would one typically snapshot before DRAIN or after?
If DRAIN after snapshot, I would have to restart the service to snapshot and
would this not then be accepting new operations/data?
If DRAIN before snapshot, would
* Set the protocol version explicitly in your application.
* Ensure that the list of initial contact points contains only hosts
with the oldest Cassandra version or protocol version.
Server side:
* Do not enable new features.
* Do not run nodetool repair.
* During the upgrade, do
Hi all,
What (if any) problems could we expect from an upgrade?
Ie., If we have 12 nodes and I upgrade them one-at-a-time, some will be on the
new version and others on the old.
Assuming that daily operations continue during this process, could problems
occur with streaming replica from one
Groovy. Thanks.
From: Erick Ramirez
Sent: Wednesday, October 12, 2022 4:08 PM
To: user@cassandra.apache.org
Subject: Re: Upgrade
That's correct. Cheers!
On every node?
From: Erick Ramirez
Sent: Wednesday, October 12, 2022 3:20 PM
To: user@cassandra.apache.org
Subject: Re: Upgrade
It's just a minor patch upgrade so all you're really upgrading is the
binaries. In any case, switching off replication is not the recommended
approach. The recommended pre-upgrade procedure is to take backups of the
data on your nodes with nodetool snapshot. Cheers!
Hi all,
Looking at upgrading our install from 4.0.3 to 4.0.6.
We have replication from one datacentre to a backup site. Other than modifying
the replication config from dc1 to dc2, is there a simple method or command to
stop replication for a period?
The idea being that, should something go
nodes in the cluster upgraded
>> to 4.0.x?
>>
>> On Tue, Aug 16, 2022 at 2:12 AM Erick Ramirez
>> wrote:
>>
>>> As convenient as it is, there are a few caveats and it isn't a silver
>>> bullet. The automatic feature will only kick in if there are no other
>> so it will take a while to get through all the sstables on dense nodes.
>>
>> In contrast, you'll have a bit more control if you manually upgrade the
>> sstables. For example, you can schedule the upgrade during low traffic
>> periods so reads are not competing with compactions for IO. Cheers!
compactions scheduled. Also, it is going to be single-threaded by default
> so it will take a while to get through all the sstables on dense nodes.
>
> In contrast, you'll have a bit more control if you manually upgrade the
> sstables. For example, you can schedule the upgrade during low traffic
> periods so reads are not competing with compactions for IO. Cheers!
>
Hello,
I am evaluating the upgrade from 3.11.x to 4.0.x and as per CASSANDRA-14197
<https://issues.apache.org/jira/browse/CASSANDRA-14197> we don't need to
run upgradesstables any more. We have tested this in a test environment and
see that setting "-Dcassandra.automatic_sstable_upgrade=true"
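For reference, the flag in question can be set in the JVM options file; a sketch (the file path is assumed from a typical 4.x package install, and the companion concurrency property comes from the same ticket):

```shell
# Enable automatic sstable upgrades introduced by CASSANDRA-14197.
# /etc/cassandra/jvm-server.options is an assumed path; adjust per install.
echo '-Dcassandra.automatic_sstable_upgrade=true' \
  | sudo tee -a /etc/cassandra/jvm-server.options
# Optionally allow more than one automatic upgrade compaction at a time:
echo '-Dcassandra.max_concurrent_automatic_sstable_upgrades=2' \
  | sudo tee -a /etc/cassandra/jvm-server.options
```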
Yeah, we have a fork of Cassandra with custom patches, and a fork of dtest
with some additional custom tests, so we will have to upgrade dtest as
well.
Is there any specific tag of dtest we should use or the latest trunk is
fine to test against 3.0.27?
Jaydeep
On Mon, Jun 13, 2022 at 10:51 PM C
If you have a fork of Cassandra with custom patches and build/execute the dtest
suite as part of qualification, you’d want to upgrade that as well.
Note that in more recent 3.0.x releases, the project also introduced in-JVM
dtests. This is a new suite that serves a similar purpose to the Python
Thanks Jeff and Scott for valuable feedback!
One more question, do we have to upgrade the dTest repo if we go to 3.0.27,
or the one we have currently already working with 3.0.14 should continue to
work fine?
Jaydeep
On Mon, Jun 13, 2022 at 10:25 PM C. Scott Andreas
wrote:
Thank you for reaching out, and for planning the upgrade!
Upgrading from 3.0.14 to 3.0.27 would be best, followed by upgrading to 4.0.4.
3.0.14 contains a number of serious bugs that are resolved in more recent 3.0.x
releases (3.0.19+ are generally good/safe). Upgrading to 3.0.27 will put you
The versions with caveats should all be enumerated in
https://github.com/apache/cassandra/blob/cassandra-3.0/NEWS.txt
The biggest caveat was 3.0.14 (which had the fix for cassandra-13004),
which you're already on.
Personally, I'd qualify exactly one upgrade, and rather than doing 3
different
Hi,
I am running Cassandra version 3.0.14 at scale on thousands of nodes. I am
planning to do a minor version upgrade from 3.0.14 to 3.0.26 in a safe
manner. My eventual goal is to upgrade from 3.0.26 to a major release 4.0.
As you know, there are multiple minor releases between 3.0.14
>
> Thank you for that clarification, Erick. So do I understand correctly
> that because of the upgrade the host ID changed and therefore differs from
> the ones in the sstables, where the old host ID is still sitting until an
> sstable upgrade?
>
Not quite. :) The host ID
re moved/copied to other nodes
> (CASSANDRA-16619). That's why the message is logged at WARN level instead
> of ERROR. Cheers!
>
It's expected and is nothing to worry about. From C* 3.0.25/3.11.11/4.0,
the SSTables now contain the host ID on which they were created to prevent
loss of commitlog data when SSTables are moved/copied to other nodes
(CASSANDRA-16619). That's why the message is logged at WARN level instead
of ERROR. Cheers!
Hi,
I just upgraded my one-node Cassandra from 3.11.6 to 4.0.3. Now every time
it starts, it produces warn messages in the log like:
WARN [main] 2022-01-18 10:55:00,696 CommitLogReplayer.java:305 - Origin of
2 sstables is unknown or doesn't match the local node; commitLogIntervals
The general advice is to always upgrade to 3.11.latest before upgrading to
4.0.latest. It is possible to upgrade from an older 3.11 version but you'll
probably run into known issues already fixed in the latest version.
Also, we recommend you run upgradesstables BEFORE upgrading to 4.0.latest
You can see upgrading instructions here
https://github.com/apache/cassandra/blob/cassandra-4.0.2/NEWS.txt.
On Fri, Feb 11, 2022 at 2:52 AM Abdul Patel wrote:
> Hi
> apart from the standard upgrade process, is there anything specific that
> needs to be handled separately for this upgrade process?
>
Make sure you go through all the instructions in
https://github.com/apache/cassandra/blob/trunk/NEWS.txt. It's also highly
recommended that you upgrade to the latest 3.0.x or 3.11.x version before
upgrading to 4.0.
Generally there are no changes required on the client side apart from
setting
Hi
Apart from the standard upgrade process, is there anything specific that needs
to be handled separately for this upgrade process?
Are any changes needed at the client side w.r.t. drivers?
> we had an awful performance/throughput experience with 3.x coming from 2.1.
> 3.11 is simply a memory hog, if you are using batch statements on the client
> side. If so, you are likely affected by
> https://issues.apache.org/jira/browse/CASSANDRA-16201
>
Confirming what Thomas writes,
From: Leon Zaruvinsky
Sent: Wednesday, October 28, 2020 5:21 AM
To: user@cassandra.apache.org
Subject: Re: GC pauses way up after single node Cassandra 2.2 -> 3.11 binary
upgrade
Our JVM options are unchanged between 2.2 and 3.11
For the sake of clarity, do you mean:
(a) you
-XX:+CMSClassUnloadingEnabled
> The distinction is important because at the moment, you need to go through
> a process of elimination to identify the cause.
>
>
>> Read throughput (rate, bytes read/range scanned, etc.) seems fairly
>> consistent before and after the upgrade across all n
> Read throughput (rate, bytes read/range scanned, etc.) seems fairly
> consistent before and after the upgrade across all nodes.
>
What I was trying to get at is whether the upgraded no
Thanks Erick.
Our JVM options are unchanged between 2.2 and 3.11, and we have disk access
mode set to standard. Generally we’ve maintained all configuration between
the two versions.
Read throughput (rate, bytes read/range scanned, etc.) seems fairly
consistent before and after the upgrade
I haven't seen this specific behaviour in the past but things that I would
look at are:
- JVM options which differ between 3.11 defaults and what you have
configured in 2.2
- review your monitoring and check read throughput on the upgraded node
as compared to 2.2 nodes
- possibly
On Wed, 28 Oct 2020 at 14:41, Rich Hawley wrote:
> unsubscribe
>
You need to email user-unsubscr...@cassandra.apache.org to unsubscribe from
the list. Cheers!
On Tue, Oct 27, 2020 at 11:40 PM Leon Zaruvinsky
wrote:
Hi,
I'm attempting an upgrade of Cassandra 2.2.18 to 3.11.6, but had to abort
because of major performance issues associated with GC pauses.
Details:
3 node cluster, RF 3, 1 DC
~2TB data per node
Heap Size: 12G / New Size: 5G
I didn't even get very far in the upgrade - I just upgraded a binary
Thanks Alex for the reply.
On Sat, Sep 5, 2020 at 3:09 PM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
> On Sat, Sep 5, 2020 at 5:55 AM manish khandelwal <
> manishkhandelwa...@gmail.com> wrote:
>
>
> 3.11.2 to 2.1.16
Hi Manish,
Please provide both versions.
Thanks
Surbhi
On Fri, Sep 4, 2020 at 8:55 PM manish khandelwal <
manishkhandelwa...@gmail.com> wrote:
Hi
We have been forced into rolling back our Cassandra after 1 node upgrade.
The node was upgraded 10 days ago. We have the backup of the old data.
The first strategy we are considering:
1. Roll back to the old binaries and configuration.
2. Restore the old data from backup.
3. Run repair.
Another
Did you run any ALTER command during the upgrade?
There is no need to run drain before running upgradesstables.
Regards
Manish
On Fri, Jun 26, 2020 at 9:48 PM Meenakshi Subramanyam
wrote:
A quick question on the same topic: we are upgrading from 3.11.1 to 3.11.6.
We had a schema mismatch after upgrading one node. RR did not fix it and we
had to remove that node. Has anyone faced this issue?
Also, do we need to do a nodetool drain before running upgradesstables?
Meena
On Wed
Generally speaking, don't run mixed versions longer than you have to, and
don't upgrade that way.
Why?
* We don't support it.
* We don't even test it.
* If you run into trouble and ask for help, the first thing people will
tell you is to get all nodes on the same version.
Anyone that's doing so
That seems like a lot of unnecessary streaming operations to me. I think
someone said that streaming works between these 2 versions. But I would not use
this approach. Why not an in-place upgrade?
Sean Durity
From: Jai Bheemsen Rao Dhanwada
Sent: Wednesday, June 24, 2020 11:36 AM
To: user
Thank you all for the suggestions.
I am not trying to scale up the cluster for capacity; for the upgrade
process, instead of an in-place upgrade, I am planning to add nodes with
3.11.6 and then decommission the nodes with 3.11.3.
On Wednesday, June 24, 2020, Durity, Sean R
wrote:
Streaming operations (repair/bootstrap) with different file versions is usually
a problem. Running a mixed version cluster is fine – for the time you are doing
the upgrade. I would not stay on mixed versions for any longer than that. It
takes more time, but I separate out the admin tasks so
CASSANDRA-14861 [1] and CASSANDRA-14096 [2] should give you the motivation
to want to upgrade the existing nodes first before expanding the cluster.
Cheers!
[1] CASSANDRA-14861
<https://issues.apache.org/jira/browse/CASSANDRA-14861> Sstable
min/max metadata can cause data loss
Rightly said by Surbhi, it is not good to scale with mixed versions as
debugging issues will be very difficult.
Better to upgrade first and then scale.
Regards
On Wed, Jun 24, 2020 at 11:20 AM Surbhi Gupta
wrote:
> In case of any issue, it gets very difficult to debug when we have
> multiple versions in the cluster.
> On 24.06.2020 at 02:56, Surbhi Gupta wrote:
Hi,
We have recently upgraded from 3.11.0 to 3.11.5. There is an sstable format
change from 3.11.4.
We also had to expand the cluster, and we discussed expanding first and then
upgrading. But finally we upgraded and then expanded.
As per our experience, what I can tell you is that it is not advisable to add
nodes while running mixed versions.
Hello,
I am trying to upgrade from 3.11.3 to 3.11.6.
Can I add new nodes with the 3.11.6 version to the cluster running with
3.11.3?
Also, I see the SSTable format changed from mc-* to md-*, does this cause
any issues?
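On the mc-*/md-* question, one way to see which sstable format versions a node currently has on disk is to scan the data directory; a sketch (the filename pattern follows standard "big"-format sstables, and the data directory path is passed in by you):

```shell
# List the sstable format versions (mc, md, ...) present under a data
# directory. Filenames look like "<version>-<generation>-big-Data.db".
# The data directory path varies by install; pass yours as the argument.
sstable_versions() {
  find "$1" -name '*-big-Data.db' \
    | sed -E 's#.*/([a-z]+)-[0-9]+-big-Data.db#\1#' \
    | sort | uniq -c
}

# Example: sstable_versions /var/lib/cassandra/data
```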
When you create a table with a frozen UDT in C* 2.1 or 2.2, the
serialisation header does not get written correctly when you upgrade to C*
3.0. So if you want to replicate it, you'd need to create the table in C*
2.1/2.2.
Consequently, you would have seen the notification from Michael Shuler a
few mi
ey seem exactly the same.
>
> This is the pre upgrade version 3.0.14 ( a snapshotted version )
>
> sstabledump mc-1-big-Data.db
> [
> {
> "partition" : {
> "key" : [ "1" ],
> "position" : 0
> },
Erick,
Thank you for your help.
I am still having problems reproducing this, so I am wondering if I have
created the tables correctly to create this issue.
I have looked at the sstabledumps and they seem exactly the same.
This is the pre-upgrade version 3.0.14 (a snapshotted version):
On Fri, 14 Feb 2020 at 01:43, Paul Chandler wrote:
> Hi all,
>
> I have looked at the release notes for the upcoming release 3.11.6 and
> seen the part about corruption of frozen UDT types during upgrade from 3.0.
Hi all,
I have looked at the release notes for the upcoming release 3.11.6 and seen
the part about corruption of frozen UDT types during upgrade from 3.0.
We have a number of cluster using UDT and have been upgrading to 3.11.4 and
haven’t noticed any problems.
In the ticket ( CASSANDRA-15035
All my upgrades are without downtime for the application. Yes, do the binary
upgrade one node at a time. Then run upgradesstables on as many nodes as your
app load can handle (maybe you can point the app to a different DC, while
another DC is doing upgradesstables). Upgradesstables doesn’t
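The node-by-node upgradesstables pass described here is the stock nodetool command; a sketch (the -j value is illustrative, tune to your hardware):

```shell
# Rewrite this node's sstables to the current format after the binary upgrade.
# -j caps the number of parallel upgrade compactions so reads aren't starved.
nodetool upgradesstables -j 2
# Progress shows up in the compaction queue:
nodetool compactionstats
```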
> On Nov 29, 2019, at 8:58 PM, Shishir Kumar wrote:
Some more background. We are planning (tested) binary upgrade across all
nodes without downtime. As next step running upgradesstables. As C*
file format and version (from format big, version mc to format bti, version
aa (Refer
https://docs.datastax.com/en/dse/6.0/dse-admin/datastax_enterprise
experience it
is purely for optimizing the performance of the database once the software
upgrade is complete. I recommend trying out an upgrade in a test
environment without using upgradesstables, which should bring the 5 hours
per node down to just a few minutes.
If you're running NetworkTopologyStrategy
Hi,
Need input on a Cassandra upgrade strategy for the following:
1. We have datacenters across 4 geographies (multiple isolated deployments in
each DC).
2. The number of Cassandra nodes in each deployment is between 6 and 24.
3. Data volume on each node is between 150 and 400 GB.
4. All production environments have DR.
Thank you Romain
On Sat, Jul 27, 2019 at 1:42 AM Romain Hardouin
wrote:
Hi,
Here are some upgrade options:
- Standard rolling upgrade: node by node.
- Fast rolling upgrade: rack by rack.
If clients use CL=LOCAL_ONE then it's OK as long as one rack is UP. For higher
CL it's possible assuming you have no more than one replica per rack, e.g.
CL=LOCAL_QUORUM
Yes, correct, it doesn't work for the servers. Trying to see if anyone has a
workaround for this issue (maybe changing the protocol version during the
upgrade window?).
On Fri, Jul 26, 2019 at 1:11 PM Durity, Sean R
wrote:
> This would handle client protocol, but not streaming protocol between nodes.
This would handle client protocol, but not streaming protocol between nodes.
Sean Durity – Staff Systems Engineer, Cassandra
From: Alok Dwivedi
Sent: Friday, July 26, 2019 3:21 PM
To: user@cassandra.apache.org
Subject: Re: [EXTERNAL] Apache Cassandra upgrade path
Hi Sean
The recommended practice for upgrade is to explicitly control protocol
version in your application during the upgrade process. Basically, the
protocol version is negotiated on the first connection, and by chance the
driver can talk to an already upgraded node first, which means it will
negotiate a newer protocol version than the not-yet-upgraded nodes can accept.
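Pinning the version as described is normally done in the driver's cluster/session builder; as a quick command-line illustration, cqlsh exposes the same knob (the host is a placeholder, and v4 is the native protocol version spoken by both 3.x and 4.x):

```shell
# Force the native protocol version instead of letting it negotiate upward.
cqlsh --protocol-version=4 cassandra-host.example.com 9042
```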