-Original Message-
From: Mia [mailto:yeomii...@gmail.com]
Sent: Wednesday, May 01, 2019 5:47 AM
To: user@cassandra.apache.org
Subject: Re: cassandra node was put down with oom error
First, you have to find out where the memory is going. You can use the
MBeans in jconsole or a similar tool: look at the various caches, and at the
off-heap values under the Cache and Metrics MBean types. Once you've figured
that out, you can start tuning things. Yes, your heap is 32G,
I think 3.11.3 has a bug which can cause OOMs on nodes during full
repairs. Just check whether there is any correlation between the OOMs and the
repair process.
Thanks,
Sandeep
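As a rough sketch of the first step above, you can pull the off-heap figure
out of `nodetool info` output with a few lines of script (the sample output
fragment below is made up for illustration, not from the cluster in this
thread, and the exact label formatting may vary by Cassandra version):

```python
import re

def off_heap_mb(info_output: str) -> float:
    """Pull the 'Off Heap Memory (MB)' figure out of `nodetool info` output."""
    m = re.search(r"Off Heap Memory \(MB\)\s*:\s*([\d.]+)", info_output)
    if m is None:
        raise ValueError("no off-heap line found")
    return float(m.group(1))

# Illustrative fragment of `nodetool info` output (made-up numbers):
sample = """\
Heap Memory (MB)       : 18250.33 / 32768.00
Off Heap Memory (MB)   : 4821.77
"""

print(off_heap_mb(sample))  # 4821.77
```

Tracking this number over time (e.g. from a cron job) shows whether off-heap
usage grows steadily or spikes around particular events.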
On Wed, 1 May 2019 at 11:02 PM, Mia wrote:
Hi Sandeep.
I'm not running any manual repair, and I don't think any full repair is running.
I can't see any repair-related logs in system.log these days.
Does a full repair have anything to do with using a large amount of memory?
Thanks.
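For the correlation check Sandeep suggested, a minimal sketch is to compare
OOM timestamps against repair-entry timestamps from system.log (the
timestamps here are made up for illustration):

```python
from datetime import datetime, timedelta

def ooms_near_repairs(oom_times, repair_times, window_hours=2):
    """Return the OOM timestamps that fall within `window_hours` after any repair entry."""
    window = timedelta(hours=window_hours)
    return [o for o in oom_times
            if any(timedelta(0) <= o - r <= window for r in repair_times)]

# Made-up timestamps for illustration:
ooms = [datetime(2019, 1, 23, 20, 7), datetime(2019, 2, 2, 3, 15)]
repairs = [datetime(2019, 1, 23, 19, 30)]
print(ooms_near_repairs(ooms, repairs))  # only the Jan 23 20:07 OOM is near a repair
```

An empty result over a long enough window is decent evidence that repairs are
not the trigger.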
On 2019/05/01 10:47:50, Sandeep Nethi wrote:
Are you by any chance running a full repair on these nodes?
Thanks,
Sandeep
On Wed, 1 May 2019 at 10:46 PM, Mia wrote:
Hello, Ayub.
I'm using Apache Cassandra, not the DSE edition, so I have never used the DSE
Search feature.
In my case, all the nodes of the cluster have the same problem.
Thanks.
On 2019/05/01 06:13:06, Ayub M wrote:
Do you have Search on the same nodes, or is it only Cassandra? In my case it
was due to a memory-leak bug in DSE Search that consumed more memory,
resulting in an OOM.
On Tue, Apr 30, 2019, 2:58 AM yeomii...@gmail.com
wrote:
Hello,
I'm suffering from a similar problem with OSS Cassandra version 3.11.3.
My Cassandra cluster has been running for more than a year, and there was no
problem until this year.
The cluster is write-intensive, consists of 70 nodes, and all rows have a
2-hour TTL.
The only change is the read
You're running DSE, so the OSS list may not be much help. DataStax may have more
insight.
In open source, the only things off-heap that vary significantly are bloom
filters and compression offsets. Both scale with disk space, and both increase
during compaction. Large STCS compaction can cause
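To get a feel for the bloom-filter part of that off-heap footprint, the
standard bloom filter sizing formula (m = -n·ln p / (ln 2)², with n keys and
false-positive chance p) gives a back-of-envelope estimate; the key count
below is made up:

```python
import math

def bloom_filter_bytes(keys: int, fp_chance: float) -> float:
    """Approximate bloom filter size in bytes: m = -n*ln(p) / (ln 2)^2 bits."""
    bits = -keys * math.log(fp_chance) / (math.log(2) ** 2)
    return bits / 8

# e.g. 100 million partition keys at a 1% false-positive chance:
mb = bloom_filter_bytes(100_000_000, 0.01) / 1024**2
print(round(mb, 1))  # roughly 114 MB
```

Since the filter size scales with the number of partition keys on disk, a
growing dataset (or many small SSTables awaiting compaction) grows this
number too.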
A Cassandra node went down due to OOM, and checking /var/log/messages I
see the lines below.
```
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: java invoked oom-killer:
gfp_mask=0x280da, order=0, oom_score_adj=0
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: java cpuset=/ mems_allowed=0
Jan 23 20:07:17
```
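Note that this is the kernel's OOM killer terminating the process, not a JVM
OutOfMemoryError: the operating system ran out of memory, which points at
total resident usage (heap plus off-heap plus any other processes) rather
than the heap alone. The relevant parameters can be pulled out of such a line
with a small script (a sketch; the regex assumes the syslog format shown
above, and the wrapped log line is rejoined here):

```python
import re

# The wrapped log line from above, rejoined into one string:
LINE = ("Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: java invoked oom-killer: "
        "gfp_mask=0x280da, order=0, oom_score_adj=0")

def parse_oom(line: str) -> dict:
    """Extract the victim process and oom-killer parameters from a kernel log line."""
    m = re.search(r"(\S+) invoked oom-killer: gfp_mask=(0x[0-9a-f]+), "
                  r"order=(\d+), oom_score_adj=(-?\d+)", line)
    if m is None:
        raise ValueError("not an oom-killer line")
    return {"process": m.group(1), "gfp_mask": m.group(2),
            "order": int(m.group(3)), "oom_score_adj": int(m.group(4))}

print(parse_oom(LINE))  # process 'java', order 0, oom_score_adj 0
```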