Shouldn't cause GCs.
You can usually think of heap memory separately from the rest. It's
already allocated as far as the OS is concerned, and the OS doesn't know
anything about the GC going on inside of that allocation. You can set
"-XX:+AlwaysPreTouch" to make sure it's physically allocated on startup.
Thanks. I guess some earlier thread got truncated.
I already applied Erick's recommendations and that seems to have worked,
reducing the RAM consumption by around 50%.
Regarding cheap memory and hardware, we are already running 96GB boxes and
getting multiple larger ones might be a little
I think Erick posted https://community.datastax.com/questions/6947/, which
explains it very clearly.
We hit the same issue, but only on a huge table during an upgrade, and we
changed the setting back once the upgrade was done.
My understanding is that which option to choose depends on your use case.
If you are chasing high performance on a big table,
Missed the heap part; not sure why that is happening.
On Tue, Aug 3, 2021 at 8:59 AM manish khandelwal <
manishkhandelwa...@gmail.com> wrote:
> mmap is used for faster reads and, as you rightly guessed, you might see
> read performance degradation. If you are seeing high memory usage after
> repairs
>
mmap is used for faster reads and, as you rightly guessed, you might see read
performance degradation. If you are seeing high memory usage after repairs
due to mmapped files, the only way to reduce the memory usage is to trigger
some other process which requires memory. *mmapped* files use the buffer/cache
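A rough sketch of how to see that from the OS side (standard Linux tools,
nothing Cassandra-specific; run as the Cassandra user or root):

    # Memory held by mmapped SSTables is counted under buff/cache, not "used":
    free -h

    # List which SSTable data files are mapped into the Cassandra process
    # and how much of each mapping is currently resident (RSS column):
    pmap -x $(pgrep -f CassandraDaemon) | grep Data.db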
Can anyone please help with the above questions? To summarise:
1) What is the impact of using mmap only for indices, besides a degradation
in read performance?
2) Why does the off-heap memory consumed during a Cassandra full repair remain
occupied 12+ hours after the repair completes, and is there a
Hi Erick,
Limiting mmap to index only seems to have resolved the issue. The max RAM
usage remained at 60% this time. Could you please point me to the
limitations of setting this param? For starters, I can see read
performance getting reduced by up to 30% (CASSANDRA-8464).
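(If anyone else wants to verify the change took effect: as far as I recall,
the resolved access mode is logged once at startup, so something like the
following should show it; the exact wording and log location may differ on
your install.)

    grep -i 'diskaccessmode' /var/log/cassandra/system.log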
Thanks, Bowen, I don't think that's the issue - but yes, I can try upgrading
to 3.11.5 and limiting the merkle tree size to bring down the memory
utilization.
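For reference, my understanding is that the cap introduced by CASSANDRA-14096
(available from 3.11.5) is exposed as a cassandra.yaml setting roughly like
the below; please double-check the name and default against your version
before relying on it:

    # cassandra.yaml (3.11.5+), setting added by CASSANDRA-14096 (assumed name).
    # A lower depth means smaller merkle trees and less memory per repair
    # session, at the cost of coarser ranges and potentially more streaming.
    repair_session_max_tree_depth: 16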
Thanks, Erick, let me try that.
Can someone please share documentation relating to the internal functioning
of full repairs - if any exists?
Based on the symptoms you described, it's most likely caused by SSTables
being mmap()ed as part of the repairs.
Set `disk_access_mode: mmap_index_only` so only index files get mapped and
not the data files. I've explained it in a bit more detail in this article
--
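A minimal cassandra.yaml sketch of that change (as far as I know the setting
is not present in the default 3.11 yaml, so the line usually has to be added;
it is read at startup, so a restart is needed for it to take effect):

    # cassandra.yaml
    # Map only the *-Index.db components into memory; read the Data.db
    # components through standard buffered I/O instead of mmap.
    disk_access_mode: mmap_index_only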
Could it be related to
https://issues.apache.org/jira/browse/CASSANDRA-14096 ?
On 28/07/2021 13:55, Amandeep Srivastava wrote:
Hi team,
My Cluster configs: DC1 - 9 nodes, DC2 - 4 nodes
Node configs: 12 core x 96GB ram x 1 TB HDD
Repair params: -full -pr -local
Cassandra version: 3.11.4
I'm running a full repair on DC2 nodes - one node and one keyspace at a
time. During the repair, RAM usage on all 4 nodes spikes up to
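For context, the repair invocation corresponding to the params above is
roughly the following (the keyspace name is a placeholder):

    # Full, primary-range, local-DC-only repair of a single keyspace.
    nodetool repair -full -pr -local my_keyspace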