Hi Team,
What is the best way to patch the OS of a 1,000-node, multi-DC Cassandra
cluster where we cannot suspend application traffic (we can redirect traffic
to one DC)?
Please share any best practices you may have.
--
Cheers,
Anshu V
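A common approach is a rolling patch: one node at a time within a DC, patching DCs sequentially while client traffic is redirected to the other DC. As a minimal sketch of the per-node sequence (the node names, ssh/systemctl invocations, and the `wait-for-ssh` helper are illustrative assumptions, not a prescribed procedure):

```python
# Sketch of a rolling OS-patch plan for a multi-DC Cassandra cluster.
# Assumption: patching one node at a time keeps replicas available
# (e.g. with RF >= 3 and LOCAL_QUORUM, one node down is tolerable).

def node_patch_steps(node: str) -> list[str]:
    """Ordered steps to patch a single node without losing in-flight writes."""
    return [
        f"nodetool -h {node} drain",              # flush memtables, stop accepting writes
        f"ssh {node} systemctl stop cassandra",
        f"ssh {node} 'yum update -y && reboot'",  # or apt; illustrative
        f"wait-for-ssh {node}",                   # hypothetical helper: wait for reboot
        f"ssh {node} systemctl start cassandra",
        f"nodetool -h {node} status",             # verify the node is back UN (Up/Normal)
    ]

def rolling_plan(dcs: dict[str, list[str]]) -> list[str]:
    """Patch DCs sequentially, nodes within a DC one at a time."""
    plan = []
    for dc, nodes in dcs.items():
        plan.append(f"# redirect client traffic away from {dc}")
        for node in nodes:
            plan.extend(node_patch_steps(node))
        plan.append(f"# restore client traffic to {dc}")
    return plan
```

The key ordering constraint is `nodetool drain` before stopping the service, so the commitlog is flushed and the node rejoins cleanly after the reboot.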
>
> It looks like the number of tables is the problem, with 5,000 - 10,000
> tables, that is way above the recommendations.
> Take a look here:
> https://docs.datastax.com/en/dse-planning/doc/planning/planningAntiPatterns.html#planningAntiPatterns__AntiPatTooManyTables
> This suggests that 5-10 GB of memory is used just to hold information
> about the tables.
> Specifically for the NegativeArraySizeException, what's happening is that
> the keyLength is so huge that it blows past MAX_UNSIGNED_SHORT, so it looks
> like it's a negative value. Someone will correct me if I got that wrong, but
> the "Key length longer than max" error confirms that.
>
Oh, I just saw the example. Never mind. :)
>
> Does anyone perhaps have an idea on what could've gone wrong here?
> Could it be just a calculation error on startup?
>
Specifically for the NegativeArraySizeException, what's happening is that
the keyLength is so huge that it blows past MAX_UNSIGNED_SHORT, so it looks
like it's a negative value.
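The mechanics can be illustrated outside of Cassandra: a 2-byte key length is only valid up to MAX_UNSIGNED_SHORT (65535), and when a corrupted on-disk value is reinterpreted as a signed quantity it surfaces as a negative number, which then fails when sizing an array. A small Python sketch of that reinterpretation (purely illustrative, not Cassandra's actual deserialization code):

```python
import struct

MAX_UNSIGNED_SHORT = 0xFFFF  # 65535, the largest valid 2-byte key length

def read_key_length(raw: bytes) -> int:
    """Interpret 2 bytes as a signed short, mimicking how a corrupt
    length can surface as a negative value."""
    return struct.unpack(">h", raw)[0]  # big-endian signed short

# A sane length round-trips fine:
ok = struct.pack(">H", 1024)            # written as unsigned
assert read_key_length(ok) == 1024

# A corrupt/huge length (> 32767) flips negative when read as signed:
corrupt = struct.pack(">H", 65000)
length = read_key_length(corrupt)
print(length)                           # -536

# Sizing a buffer with that value is the NegativeArraySizeException
# analogue: in Java, `new byte[length]` throws; in Python:
try:
    bytearray(length)
except ValueError:
    print("negative size rejected")
```

In Java the same bit pattern handed to `new byte[keyLength]` is exactly what raises NegativeArraySizeException.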
Hi again,
Does anyone perhaps have an idea on what could've gone wrong here?
Could it be just a calculation error on startup?
Thanks!
On Sun, Jan 26, 2020 at 5:57 PM Shalom Sagges wrote:
> Hi Jeff,
>
> It is happening on multiple servers and even on different DCs.
> The schema contains two
It means that you are using 5-10 GB of memory just to hold information about
tables. Memtables hold data written to the database until it is flushed to
disk, which happens when memory runs low or some other threshold is reached.
Every table has a memtable that takes up memory.
It doesn't seem to be the problem, but I do not have deep knowledge of C*
internals.
When do memtables come into play? Only at startup?
Hi Behroz,
It looks like the number of tables is the problem, with 5,000 - 10,000 tables,
that is way above the recommendations.
Take a look here:
https://docs.datastax.com/en/dse-planning/doc/planning/planningAntiPatterns.html#planningAntiPatterns__AntiPatTooManyTables
IIRC there is an overhead of about 1 MB per table, so with 5,000-10,000
tables that works out to roughly 5 GB - 10 GB of overhead just from having
that many tables. To me it looks like you need to increase the heap size and
later potentially rework the data models to use fewer tables.
Hannu
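The back-of-the-envelope math above can be spelled out (the 1 MB-per-table figure is the estimate quoted in this thread, not a guaranteed Cassandra constant):

```python
# Rough per-table heap overhead estimate; 1 MB/table is the figure
# quoted in the thread, not an exact Cassandra constant.
OVERHEAD_PER_TABLE_MB = 1

def table_overhead_gb(num_tables: int) -> float:
    """Approximate heap consumed just by per-table structures."""
    return num_tables * OVERHEAD_PER_TABLE_MB / 1024

print(table_overhead_gb(5_000))   # ~4.9 GB
print(table_overhead_gb(10_000))  # ~9.8 GB
```

With a typical 8 GB heap, that overhead alone can crowd out the memtable and working space, which is why both raising the heap and consolidating tables are suggested.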
> On 29. Jan 2020, at
>> If it's after the host comes online and it's hint replay from the other
hosts, you probably want to throttle hint replay significantly on the rest
of the cluster. Whatever your hinted handoff throttle is, consider dropping
it by 50-90% to work around whichever of those two problems it is.
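The suggested 50-90% reduction can be worked out per node and applied live; a small sketch (1024 KB/s is the usual `hinted_handoff_throttle_in_kb` default, but check your own cassandra.yaml for the real baseline):

```python
# Compute a reduced hinted-handoff throttle, per the 50-90% advice above.
# 1024 KB/s is the common hinted_handoff_throttle_in_kb default; verify
# against your own cassandra.yaml before applying.
def reduced_throttle_kb(current_kb: int, reduction_pct: int) -> int:
    """New throttle after cutting the current one by reduction_pct percent."""
    return current_kb * (100 - reduction_pct) // 100

current = 1024  # assumed current hinted_handoff_throttle_in_kb
for pct in (50, 90):
    new_kb = reduced_throttle_kb(current, pct)
    # Applied live on each node (no restart needed) with:
    #   nodetool sethintedhandoffthrottlekb <new_kb>
    print(pct, new_kb)
```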
>> Startup would replay commitlog, which would re-materialize all of those
mutations and put them into the memtable. The memtable would flush over
time to disk, and clear the commitlog.
From our observation, the node is already online and it seems to be happening
after the commit log replay.
>> Some environment details like Cassandra version, amount of physical RAM,
JVM configs (heap and others), and any other non-default cassandra.yaml
configs would help. The amount of data and the number of keyspaces & tables,
since you mention "clients", would also be helpful for people to suggest
solutions.