[
https://issues.apache.org/jira/browse/CASSANDRA-8817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18047814#comment-18047814
]
Rishabh Saraswat commented on CASSANDRA-8817:
---------------------------------------------
[~mpdehaan_ds]
I tested current Cassandra with very small JVM heaps (e.g. -Xmx128M). Cassandra
does start and responds to nodetool, but still consumes significant off-heap
memory and may later be killed by the OS under memory pressure with little
diagnostic logging.
The issue therefore seems less about startup failure and more about the lack of
an early warning that the configured heap is too small for Cassandra’s configured
memory expectations.
Since
[DatabaseDescriptor.getMemtableHeapSpace()|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/config/DatabaseDescriptor.java#L4503]
already represents how much heap Cassandra is configured to give to memtables,
would it make
sense to emit a startup warning when the configured JVM max heap (-Xmx) is less
than the configured memtable heap space plus a small safety margin?
This could be implemented as a StartupCheck and would not change startup
behavior, only improve operator visibility.
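Rough sketch of the comparison such a check could make (illustrative only: the exact
StartupCheck signature and the unit returned by getMemtableHeapSpace() would need to be
confirmed against trunk, so the value is passed in as a parameter here, and the 1.5x
safety margin is just a placeholder):
{code:java}
// Sketch only: in Cassandra this logic would live in a StartupCheck and read the
// memtable heap space from DatabaseDescriptor; the value is passed in here so the
// example stays self-contained. Units, margin and wiring are assumptions.
public final class HeapSizeCheckSketch
{
    // Placeholder safety margin: warn unless -Xmx is at least 1.5x the memtable heap space.
    private static final double SAFETY_MARGIN = 1.5;

    public static void checkHeapAgainstMemtableSpace(long memtableHeapSpaceBytes)
    {
        // Effective -Xmx as seen by the running JVM, in bytes.
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        long recommendedBytes = (long) (memtableHeapSpaceBytes * SAFETY_MARGIN);

        if (maxHeapBytes < recommendedBytes)
        {
            // A real check would log via Cassandra's logger (or throw a StartupException
            // if it were ever made fatal); stderr keeps the sketch dependency-free.
            System.err.printf(
                "WARN: JVM max heap is %d MB but memtable heap space is %d MB; " +
                "at least %d MB of heap is recommended. The node may later be killed " +
                "under memory pressure. Adjust -Xmx or the memtable heap space setting.%n",
                maxHeapBytes >> 20, memtableHeapSpaceBytes >> 20, recommendedBytes >> 20);
        }
    }

    public static void main(String[] args)
    {
        // Example: a 2 GiB memtable heap space checked against whatever -Xmx this JVM got.
        checkHeapAgainstMemtableSpace(2L << 30);
    }
}
{code}
The message intentionally mirrors the "Found X MB ... adjust <SETTINGS>" shape suggested
in the issue description, and the check stays advisory, so startup behavior is unchanged.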
> Error handling in Cassandra logs in low memory scenarios could use improvement
> ------------------------------------------------------------------------------
>
> Key: CASSANDRA-8817
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8817
> Project: Apache Cassandra
> Issue Type: Improvement
> Environment: Ubuntu 14.04, VM originally created with 1 GB RAM, DSE
> 4.6.0 installed
> Reporter: Michael DeHaan
> Assignee: Rishabh Saraswat
> Priority: Low
> Labels: lhf
> Fix For: 2.1.x
>
>
> When running Cassandra with a low amount of RAM, in this case, using DataStax
> Enterprise 4.6.0 in a reasonably default configuration, I find that I get an
> error after starting and trying to use nodetool, namely that it cannot
> connect to 127.0.0.1. Originally this sends me up a creek, looking for why
> Cassandra is not listening on 7199. The truth ends up being a bit more
> cryptic - that Cassandra isn't running.
> Upon looking at the Cassandra system logs, I see the last thing that it did
> was print out the (very long) class path. This confused me, as I was seeing
> essentially no errors in the log at all.
> I am proposing that Cassandra should check the amount of available RAM and
> issue a warning in the log, or possibly an error, because in this scenario
> Cassandra is going to be OOM-killed and probably could have predicted this in
> advance.
> Something like:
> "Found X MB of RAM, expecting at least Y MB of RAM, Z MB recommended, may
> crash, adjust <SETTINGS>" or something similar would be a possible solution.