Hi, Paulo
Thank you for your help!
I'll try to use it.
Regards,
Satoshi
On Wed, Apr 20, 2016 at 8:43 PM, Paulo Motta wrote:

> 2.2.6 should be released in the next couple of weeks, but I also attached
> a patch file to the issue if you want to patch 2.2.5 manually.
2016-04-20 0:19 GMT-03:00 Satoshi Hikida:

Hi,

I'm looking forward to a patch (file) for this bug (CASSANDRA-11344) to
apply to C* version 2.2.5. Is there an available patch for that version? I
checked the link (https://issues.apache.org/jira/browse/CASSANDRA-11344) but
couldn't find a patch file or anything like that. Or is there any
workaround to
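For anyone following along: applying a .patch attached to a JIRA issue to a source checkout generally looks like the sketch below. The tag and patch filename here are hypothetical; check the issue itself for the actual attachment name and target branch.

```shell
# Sketch only: repository tag and patch filename are assumptions.
#
#   git clone https://github.com/apache/cassandra.git
#   cd cassandra
#   git checkout cassandra-2.2.5
#   git apply /path/to/the-attached.patch
#   ant jar       # builds the patched cassandra jar under build/
```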
OK, so good news: I'm running with the patched jar file in my cluster and
haven't seen any issues. The bloom filter off-heap memory usage is between
1.5GB and 2GB per node, which is much more in line with what I'm expecting!
(thumbsup)
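As a rough sanity check on those numbers (a sketch, assuming the default bloom_filter_fp_chance of 0.01, i.e. roughly 9.6 bits per key, even token distribution, and the key counts Adam gives further down the thread):

```shell
# Back-of-envelope per-node bloom filter size. Inputs: 2.1B + 230M keys,
# RF=3, 18 nodes (from the thread); 9.6 bits/key assumes the default
# bloom_filter_fp_chance of 0.01.
awk 'BEGIN {
  keys     = 2.1e9 + 230e6       # total partitions across both tables
  replicas = keys * 3            # NetworkTopologyStrategy, us-east: 3
  per_node = replicas / 18       # 18-node cluster, even distribution
  printf "%.0f MB\n", per_node * 9.6 / 8 / 1e6
}'
```

That roughly half-gigabyte figure is a floor for fully compacted data; each SSTable carries its own filter, so overlapping SSTables can push the live total above it, which may be part of why the observed usage is higher.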
On Mon, Mar 14, 2016 at 9:42 AM, Adam Plumb
Thanks for the link! Luckily the cluster I'm running is not yet in
production and running with dummy data so I will throw that jar on the
nodes and I'll let you know how things shake out.
On Sun, Mar 13, 2016 at 11:02 PM, Paulo Motta
wrote:
You could be hitting CASSANDRA-11344
(https://issues.apache.org/jira/browse/CASSANDRA-11344). If that's the
case, you may try to replace your Cassandra jar on an affected node with a
version with this fix in place and force bloom filter regeneration to see
if it fixes your problem. You can
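Concretely, the per-node procedure being described might look like the sketch below. The paths, service name, and jar name are assumptions for a package install, and `upgradesstables -a` rewrites every SSTable (rebuilding its bloom filter), which is I/O intensive.

```shell
# Sketch; all paths and names are assumptions, adjust for your install.
#
#   nodetool drain                 # flush memtables cleanly
#   sudo service cassandra stop
#   sudo cp apache-cassandra-2.2.5-patched.jar /usr/share/cassandra/
#   sudo service cassandra start
#   nodetool upgradesstables -a    # rewrite SSTables, regenerating filters
```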
So it's looking like the bloom filter off heap memory usage is ramping up
and up until the OOM killer kills the java process. I relaunched on
instances with 60GB of memory and the same thing is happening. A node will
start using more and more RAM until the process is killed, then another
node
Here is the creation syntax for the entire schema. The xyz table has about
2.1 billion keys and the def table has about 230 million keys. Max row
size is about 3KB, mean row size is 700B.
> CREATE KEYSPACE abc WITH replication = {'class': 'NetworkTopologyStrategy',
> 'us-east': 3};
>
> CREATE TABLE
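As an aside on the memory angle: per-key bloom filter size is driven by the table's bloom_filter_fp_chance, which can be raised to trade some read I/O for less off-heap memory. The statement below is illustrative (table name taken from the thread; the value is an example), and new filters only take effect as SSTables are rewritten.

```sql
-- Illustrative value; the default is 0.01 for SizeTieredCompactionStrategy.
ALTER TABLE abc.xyz WITH bloom_filter_fp_chance = 0.1;
```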
What is your schema and data like - in particular, how wide are your
partitions (number of rows and typical row size)?
Maybe you just need (a lot) more heap for rows during the repair process.
-- Jack Krupansky
On Fri, Mar 11, 2016 at 11:19 AM, Adam Plumb wrote:
These are brand new boxes only running Cassandra. Yeah, the kernel is what
is killing the JVM, and this does appear to be a memory leak in Cassandra.
And Cassandra is the only thing running, aside from the basic services
needed for Amazon Linux to run.
On Fri, Mar 11, 2016 at 11:17 AM, Sebastian
"Sacrifice child" in dmesg is your OS killing the process with the most RAM.
That means you're actually running out of memory at the Linux level, outside
of the JVM.
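For anyone checking their own nodes, the kernel log lines in question look roughly like this (the sample text below is illustrative, not from Adam's cluster):

```shell
# Illustrative dmesg lines; on a real node you would run:
#   dmesg | grep -i "sacrifice child"
printf '%s\n' \
  'Out of memory: Kill process 12345 (java) score 905 or sacrifice child' \
  'Killed process 12345 (java) total-vm:61231232kB, anon-rss:29123123kB' |
  grep -ci 'sacrifice child'
```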
Are you running anything other than Cassandra on this box?
If so, does it have a memory leak?
all the best,
Sebastián
On Mar 11,
I've got a new cluster of 18 nodes running Cassandra 3.4 that I just
launched and loaded data into yesterday (roughly 2TB of total storage) and
am seeing runaway memory usage. These nodes are EC2 c3.4xlarges with 30GB
RAM and the heap size is set to 8G with a new heap size of 1.6G.
Last night I
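For reference, the heap settings described above correspond to something like this in cassandra-env.sh (file location varies by install; the values are taken from the message):

```shell
# From conf/cassandra-env.sh (path is install-dependent).
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="1600M"   # the 1.6G new-gen size mentioned above
```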