Hi everyone,
Yesterday, I did a live stream on "GenAI for Cassandra Teams"; you can watch
it on YouTube[1].
I love creating content that helps you work through problems or new things.
GenAI has been hitting Cassandra teams with requests for new app
features, and there are a lot of topics I
Okay, that proves I was wrong about the client-side bottleneck.
On 24/04/2024 17:55, Nathan Marz wrote:
I tried running two client processes in parallel and the numbers were
unchanged. The max throughput is still a single client doing 10
in-flight BatchStatement containing 100 inserts.
On Tue,
I tried running two client processes in parallel and the numbers were
unchanged. The max throughput is still a single client doing 10 in-flight
BatchStatement containing 100 inserts.
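For reference, the load pattern described above (10 in-flight BatchStatements of 100 inserts each) can be sketched with the DataStax Python driver. This is a sketch only: the keyspace, table, and row shape are hypothetical, and the use of the Python driver is an assumption, not something stated in the thread.

```python
def chunk(rows, size=100):
    """Split rows into the 100-insert batches described above."""
    return [rows[i:i + size] for i in range(0, len(rows), size)]

def run_load(session, rows, in_flight=10):
    """Keep at most `in_flight` BatchStatements outstanding at once.

    Sketch only: `demo.events` and the row shape are placeholders.
    """
    from cassandra.query import BatchStatement  # from the cassandra-driver package
    insert = session.prepare(
        "INSERT INTO demo.events (id, payload) VALUES (?, ?)")
    futures = []
    for batch_rows in chunk(rows):
        batch = BatchStatement()
        for row in batch_rows:
            batch.add(insert, row)
        futures.append(session.execute_async(batch))
        if len(futures) >= in_flight:
            futures.pop(0).result()  # wait for the oldest batch before sending more
    for f in futures:
        f.result()  # drain the remaining in-flight batches
```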
On Tue, Apr 23, 2024 at 10:24 PM Bowen Song via user <
user@cassandra.apache.org> wrote:
> You might have run
Hi Paul,
IMO, if they are truly risk-averse, they should follow the tested and
proven best practices instead of doing things in a less-tested way
that is also known to pose a danger to data correctness.
If they must do this over a long period of time, then they may need to
temporarily
Hi Bowen,
Thanks for your quick reply.
Sorry, I used the wrong term there; it is a maintenance window rather than
an outage. This is a key system, and its vital nature means that the
customer is rightly very risk-averse, so we will only ever get permission to
upgrade one DC per
Hi Paul,
You don't need to plan for or introduce an outage for a rolling upgrade,
which is the preferred route. It isn't advisable to take down an entire
DC to do an upgrade.
You should aim to complete upgrading the entire cluster and finish a
full repair within the shortest gc_grace_seconds
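A per-node pass of the rolling upgrade described here usually looks something like the following; the service name and install step are placeholders for whatever your deployment uses, while the nodetool commands are standard:

```shell
nodetool drain                    # flush memtables and stop accepting traffic
sudo systemctl stop cassandra
# install the new Cassandra version here (package name/version are placeholders)
sudo systemctl start cassandra
nodetool upgradesstables          # rewrite SSTables into the new format
```

Once every node in every DC is on the new version, run `nodetool repair --full` so the full repair completes within gc_grace_seconds, as advised above.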
Hi all,
We have some large clusters (1000+ nodes); these are across multiple
datacenters.
When we perform upgrades we would normally upgrade a DC at a time during a
planned outage for one DC. This means that a cluster might be in a mixed mode
with multiple versions for a week or 2.
We
You might have run into the bottleneck of the driver's IO thread. Try
increasing the driver's connections-per-server limit to 2 or 3 if you've
only got 1 server in the cluster. Or alternatively, run two client
processes in parallel.
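If the client is on the DataStax Python driver, the per-host connection limit mentioned here can be raised roughly like this; the contact point and pool size are illustrative, and this sketch assumes driver 3.x:

```python
def make_cluster(contact_points=("127.0.0.1",), pool_size=3):
    """Build a Cluster with a larger per-host connection pool.

    Sketch only: the contact point and pool size are placeholders.
    """
    from cassandra.cluster import Cluster     # from the cassandra-driver package
    from cassandra.policies import HostDistance

    cluster = Cluster(list(contact_points))
    # The default is a single connection per local host with protocol v3+;
    # allowing 2-3 lets more than one IO connection carry in-flight requests.
    cluster.set_core_connections_per_host(HostDistance.LOCAL, pool_size)
    cluster.set_max_connections_per_host(HostDistance.LOCAL, pool_size)
    return cluster
```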
On 24/04/2024 07:19, Nathan Marz wrote:
Tried it again with
Tried it again with one more client thread, and that had no effect on
performance. This is unsurprising, as there are only 2 CPUs on this node and
they were already at 100%. These were good ideas, but I'm still unable to
even match the performance of batch commit mode with group commit mode.
On Tue,
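For reference, the batch vs. group commit modes being compared in this thread are controlled in cassandra.yaml; a minimal fragment might look like the following, where the window value is illustrative and the option name follows Cassandra 4.1 (older versions use the `_in_ms`-suffixed name):

```yaml
# batch mode: every write waits for its own commitlog fsync
commitlog_sync: batch

# group mode: writes are fsynced together, at most this often
# commitlog_sync: group
# commitlog_sync_group_window: 15ms
```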