I just changed these properties to increase the flushed file size (and decrease
the number of compactions):
memtable_allocation_type: from heap_buffers to offheap_objects
memtable_offheap_space_in_mb: from the default (2048) to 8192
I'm using the default values for the other memtable/compaction/commitlog settings.
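For reference, the two changes above would look like this in cassandra.yaml (a sketch; everything else stays at its default):

```yaml
# cassandra.yaml (excerpt)
memtable_allocation_type: offheap_objects   # was: heap_buffers
memtable_offheap_space_in_mb: 8192          # default is 2048
```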
Hi
I have a production 3.11.6 cluster on which I might want to enable
authentication, and I'm trying to understand what the performance impact
will be, if any.
I understand each use case might be different; I'm trying to understand if
there is a typical performance hit (as a %) people usually see, or if
- Set the auth cache to a long validity
- Don't go crazy with the RF of system_auth
- Drop bcrypt rounds if you see massive CPU spikes on reconnect storms
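As a sketch of those knobs: the cache validity settings live in cassandra.yaml, and the bcrypt work factor can be lowered via a JVM system property (the values below are illustrative, not recommendations):

```yaml
# cassandra.yaml (excerpt) - auth cache validity
roles_validity_in_ms: 600000          # 10 minutes
permissions_validity_in_ms: 600000
credentials_validity_in_ms: 600000

# JVM option (e.g. in jvm.options / cassandra-env.sh), not cassandra.yaml:
#   -Dcassandra.auth_bcrypt_gensalt_log2_rounds=8   # default is 10
```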
> On Jun 1, 2020, at 11:26 PM, Gil Ganz wrote:
>
> Hi
> I have a production 3.11.6 cluster on which I might want to enable
> authentication,
As I understand it, Cassandra clusters should be limited to a number of tables
in the low hundreds (under 200), at most. What you are seeing is the carving up
of memtable space for each of those 3,000 tables. I try to limit my clusters to
roughly 100 tables.
Sean Durity
From: Jai Bheemsen Rao Dhanwada
How many total tables in the cluster?
Sean Durity
From: Jai Bheemsen Rao Dhanwada
Sent: Monday, June 1, 2020 8:36 PM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: Cassandra Bootstrap Sequence
Thanks Erick,
I see the tasks below being run most of the time. I didn't quite understand what exactly
3000 tables
On Tuesday, June 2, 2020, Durity, Sean R
wrote:
> How many total tables in the cluster?
>
> Sean Durity
>
> *From:* Jai Bheemsen Rao Dhanwada
> *Sent:* Monday, June 1, 2020 8:36 PM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Re: Cassandra Bootstrap
I’d also take a look at the O/S level. You might be queued up on flushing of
dirty pages, which would also throttle your ability to write out memtables. Once
the I/O gets throttled badly enough, I’ve seen it push back into what you see in C*.
To Aaron’s point, you want a balance in memory between C* and
I would try running it with memtable_offheap_space_in_mb at the default for
sure, but definitely lower than 8GB. With 32GB of RAM, you're already
allocating half of that for your heap, and then halving the remainder for
off-heap memtables. What's left may not be enough for the OS, etc. Giving
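To make that arithmetic concrete (a sketch using the 32 GB RAM / 16 GB heap figures above; the actual heap size comes from your jvm.options):

```python
GB = 1024  # work in MB, matching memtable_offheap_space_in_mb units

total_ram = 32 * GB
heap = total_ram // 2              # half of RAM for the JVM heap
offheap_memtables = 8 * GB         # memtable_offheap_space_in_mb: 8192

# What remains for the OS, page cache, and everything else:
left_over = total_ram - heap - offheap_memtables
print(left_over // GB)  # -> 8 (GB), which may be too little headroom
```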
Thank you,
Does that mean there is no way to improve this delay, and I have to live
with it since I have more tables?
On Tuesday, June 2, 2020, Durity, Sean R
wrote:
> As I understand it, Cassandra clusters should be limited to a number of
> tables in the low hundreds (under 200), at most.
primary key ((partition_key, clustering_key))
Also, this primary key definition does not define a partition key and a
clustering key. It defines a *composite* partition key.
If you want both a partition key and a clustering key, get rid of one set of
parens:
primary key (partition_key, clustering_key)
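A minimal CQL sketch of the two forms (table and value column names are illustrative):

```sql
-- Composite partition key: BOTH columns together form the partition key,
-- so queries must supply both to address a partition.
CREATE TABLE t1 (partition_key int, clustering_key int, v int,
                 PRIMARY KEY ((partition_key, clustering_key)));

-- Partition key plus clustering key: rows are grouped by partition_key
-- and sorted within each partition by clustering_key.
CREATE TABLE t2 (partition_key int, clustering_key int, v int,
                 PRIMARY KEY (partition_key, clustering_key));
```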
To flesh this out a bit, I set roles_validity_in_ms and
permissions_validity_in_ms to 600000 (10 minutes). The default of 2000 is far
too short for my use cases. Usually I set the RF for system_auth to 3 per DC.
On a larger, busier cluster I have set it to 6 per DC. NOTE: if you set the
Would updating disk boundaries be sensitive to disk I/O tuning? I’m
remembering Jon Haddad’s talk about typical throughput problems in disk page
sizing.
From: Jai Bheemsen Rao Dhanwada
Reply-To: "user@cassandra.apache.org"
Date: Tuesday, June 2, 2020 at 10:48 AM
To:
It's marked as a duplicate of
https://issues.apache.org/jira/browse/CASSANDRA-10699 which is not yet fixed
On Tue, Jun 2, 2020 at 9:39 AM Deepak Sharma
wrote:
> Hi There,
>
> I see this (https://issues.apache.org/jira/browse/CASSANDRA-11143) issue
> in the resolved state. Does it mean it has
Also during this time I am losing metrics for all the nodes in the cluster
(the metrics agent times out collecting within 10s), and it recovers once the
node opens the CQL port. Is there any known issue that could cause this?
In my case the delay between Gossip settle and CQL port open is 3
Dor - that looks very useful. Looking forward to trying the CDC Kafka
connector!
On Thu, 28 May 2020 at 02:53, Dor Laor wrote:
> If it's helpful, IMO, the approach Cassandra needs to take isn't
> by tracking the individual node commit log and putting the burden
> on the client. At Scylla, we
Hi There,
I see this (https://issues.apache.org/jira/browse/CASSANDRA-11143) issue in
the resolved state. Does it mean it has been fixed? This question is
specific in the context of 3.0.13 and 3.11.4 versions of Cassandra.
Thanks,
Deepak
Just did some more debugging; it looks like "nodetool compactionstats"
hanging/taking a long time during this period is causing the delay in metrics.
I'm still puzzled why the nodetool compactionstats command takes longer on
all the nodes at the same time when one node is being restarted.
$ time
Thanks! That's what I realized later too.
On Tue, Jun 2, 2020 at 9:57 AM Jeff Jirsa wrote:
> It's marked as a duplicate of
> https://issues.apache.org/jira/browse/CASSANDRA-10699 which is not yet
> fixed
>
> On Tue, Jun 2, 2020 at 9:39 AM Deepak Sharma
> wrote:
>
>> Hi There,
>>
>> I see