Hello everyone,
I am adding a new DC to my existing cluster, with an application using a
consistency level of ONE. Will the new nodes of the new DC participate in
serving requests during the bootstrap/rebuild? I tested the scenario of
rebuilding a lost seed node with nodetool rebuild, where the binary
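For context, a minimal sketch of the usual add-a-DC flow this question is
about (keyspace and DC names below are placeholders, not from the thread):

    # 1. Extend replication to the new DC before rebuilding, e.g. in cqlsh:
    ALTER KEYSPACE my_keyspace WITH replication =
        {'class': 'NetworkTopologyStrategy', 'dc_old': 3, 'dc_new': 3};

    # 2. On each node of the new DC (started with auto_bootstrap: false),
    #    stream the existing data from the old DC:
    nodetool rebuild -- dc_old

If the client is pinned to the old DC (DC-aware load balancing with
LOCAL_ONE rather than plain ONE), the rebuilding nodes should not be asked
to serve reads while they are still empty.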
Jonathan's high-level process for in-place conversion looks right.
To answer your original question about versioning, the DSE release notes
list the equivalent Cassandra version as 3.11.0.
DataStax Enterprise 5.1.2 through DataStax Enterprise 5.1.10:
Apache Cassandra™ 3.11.0 (updated)
Kind Regards,
An alternative approach is to build a new cluster and leave the original
cluster alive (often a must, since it needs to be online 24x7). Double-write
to the two clusters and later migrate the existing data to the new one,
either by taking a snapshot and passing those files to the new
cluster or with
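For illustration, a minimal double-write sketch along those lines, using the
Python driver (host addresses, keyspace, table and columns are placeholders,
and error handling is left out):

    from cassandra.cluster import Cluster

    # One session per cluster; the original cluster stays the source of truth.
    old_session = Cluster(['10.0.0.10']).connect('my_keyspace')
    new_session = Cluster(['10.0.1.10']).connect('my_keyspace')

    INSERT_CQL = "INSERT INTO users (id, name) VALUES (%s, %s)"

    def write_both(user_id, name):
        # Write to the original cluster first.
        old_session.execute(INSERT_CQL, (user_id, name))
        # Mirror the write to the new cluster; anything missed here is picked
        # up later by the bulk copy of the historical data.
        new_session.execute(INSERT_CQL, (user_id, name))

Once the historical data has been copied over and both clusters agree, reads
can be switched to the new cluster and the old one retired.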
Hi,
I'd like to upgrade a small cluster running 2.0.17 (on virtual private
servers with ssh access) to a new version of Cassandra and, in the process,
I'd like to make it easier to update/add nodes/run maintenance in the
future. What tools are commonly used to automate such tasks? Right now
On Wed, 5 Dec 2018, 19:34 Riccardo Ferrari wrote:
> Hi Alex,
>
> I saw that behaviour in the past.
>
Riccardo,
Thank you for the reply!
Do you refer to the kswapd issue only, or have you observed more problems
that match the behaviour I have described?
I can tell you the kswapd0 usage is connected to the
On Wed, 5 Dec 2018, 19:53 Jonathan Haddad wrote:
> Seeing high kswapd usage means there's a lot of churn in the page cache.
> It doesn't mean you're using swap, it means the box is spending time
> clearing pages out of the page cache to make room for the stuff you're
> reading now.
>
Jon,
Thanks for
Seeing high kswapd usage means there's a lot of churn in the page cache.
It doesn't mean you're using swap, it means the box is spending time
clearing pages out of the page cache to make room for the stuff you're
reading now. The machines don't have enough memory - they are way
undersized for a
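A quick way to confirm this from the shell (just an illustrative sketch;
the sampling intervals are arbitrary):

    free -m        # swap "used" staying at 0 rules out actual swapping
    vmstat 1 5     # si/so stay at 0 while the cache column stays large
    sar -B 1 5     # pgscank/s and pgsteal/s climb when the kernel reclaims page cache

High reclaim rates with no swap-in/out is exactly the page-cache churn
described above.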
The kswapd issue is interesting; is it possible you're being affected by
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1518457 - although I
don't see a fix for Trusty listed there?
On Wed, Dec 5, 2018 at 11:34 AM Riccardo Ferrari wrote:
> Hi Alex,
>
> I saw that behaviour in the past.
Hi Alex,
I saw that behaviour in the past. I can tell you the kswapd0 usage is
connected to the `disk_access_mode` property. On 64-bit systems it defaults
to mmap. That also explains why your virtual memory is so high (it somehow
matches the node load, right?). I cannot find a good reference
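For reference, `disk_access_mode` does not appear in the stock
cassandra.yaml; a minimal sketch of pinning it explicitly (the value shown
is just one of the accepted options: auto, mmap, mmap_index_only, standard):

    # cassandra.yaml
    disk_access_mode: mmap_index_only

With mmap, SSTables are memory-mapped into the process, which is why VIRT
can grow towards the node's data load while RES stays bounded.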
Hello,
We are running the following setup on AWS EC2:
Host system (AWS AMI): Ubuntu 14.04.4 LTS,
Linux 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5
08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
The Cassandra process runs inside a Docker container; the Docker image is
based on Ubuntu 18.04.1.
In my understanding, there is a balance between getting upgradesstables done
and normal activity. I think the cluster can function fine with a mix of old
and new sstables, but there can be a performance hit when reading the older
version (perhaps). Personally, I don’t restart repairs until upgradesstables is
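For illustration, the rewrite step and a way to watch it (the keyspace and
table arguments are optional placeholders):

    nodetool upgradesstables                      # rewrite SSTables not on the current format
    nodetool upgradesstables my_keyspace users    # or restrict it to one table
    nodetool compactionstats                      # the rewrite shows up here as it progresses

Running it on one node at a time keeps the extra compaction I/O from hitting
the whole cluster at once.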