I’ve lost some context, but there are two direct memory allocations per sstable - compression offsets and the bloom filter. Both of those get built during sstable creation, and the bloom filter’s size is aggressively allocated, so you’ll see a big chunk of memory disappear as compaction kicks in.
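For reference, both of those allocations are reported as off-heap memory per table, so their size can be watched directly. A minimal sketch, assuming nodetool is reachable on the node or inside the container, with a placeholder keyspace/table name:

    # Bloom filter and compression metadata are per-sstable, off-heap allocations;
    # tablestats reports their current size (use 'nodetool cfstats' on 2.x)
    nodetool tablestats my_keyspace.my_table | grep -i 'off heap'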
On Fri, Dec 28, 2018, 4:23 PM Oleksandr Shulgin <oleksandr.shul...@zalando.de> wrote:
> On Fri, Dec 7, 2018 at 12:43 PM Oleksandr Shulgin <oleksandr.shul...@zalando.de> wrote:
>
>> After a fresh JVM start the memory allocation looks roughly like this:
>>
>>              total       used       free     shared    buffers     cached
>> Mem:           14G        14G       173M       1.1M
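In that free output the "used" column still counts the page cache, so it helps to separate the Cassandra process footprint from reclaimable cache. A rough sketch, assuming a single Cassandra JVM on the host:

    # RSS is the real process footprint; VSZ is inflated by mmap'ed sstables
    pid=$(pgrep -f CassandraDaemon)
    ps -o pid,rss,vsz,comm -p "$pid"
    # 'cached' here is reclaimable page cache, not leaked memory
    free -m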
On Thu, Dec 6, 2018 at 3:39 PM Riccardo Ferrari wrote:

Hi,

To be honest I've never seen the OOM in action on those instances. My Xmx was 8GB, just like yours, and that makes me think you have some other process competing for memory - do you? Do you have any cron job, any backup, anything that could trick the OOMKiller?

My unresponsiveness was seconds long.
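If another process is the suspect, the kernel log will show whether the OOM killer has actually fired and what held memory at the time. A quick sketch, assuming standard Linux tooling on the host:

    # Traces left by the OOM killer, if it ever ran
    dmesg -T | grep -iE 'out of memory|killed process'
    # Largest resident-memory consumers right now, besides the Cassandra JVM
    ps -eo pid,rss,comm --sort=-rss | head -n 15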
On Thu, Dec 6, 2018 at 11:14 AM Riccardo Ferrari wrote:

Alex,

I had a few instances in the past that were showing that unresponsiveness behaviour. Back then I saw with iotop/htop/dstat that the system was stuck on a single thread processing (full throttle) for seconds. According to iotop that was the kswapd0 process. That system was an Ubuntu 16.04
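For anyone chasing the same symptom, the kswapd0 spin is easiest to confirm while a stall is in progress. A small sketch, assuming the sysstat package is installed for sar:

    # Is kswapd0 burning CPU right now?
    top -b -n 1 | grep kswapd
    # Page-reclaim pressure over five seconds; pgscank/s and pgsteal/s
    # spike when the kernel is busy evicting page cache
    sar -B 1 5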
Riccardo,

Thank you for the reply!

Do you refer to the kswapd issue only, or have you observed more problems that match the behaviour I have described?
On Wed, 5 Dec 2018, 19:53 Jonathan Haddad wrote:

Seeing high kswapd usage means there's a lot of churn in the page cache. It doesn't mean you're using swap; it means the box is spending time clearing pages out of the page cache to make room for the stuff you're reading now. The machines don't have enough memory - they are way undersized for a
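To tell page-cache churn apart from real swapping on the affected node, a couple of read-only checks (a sketch; vmstat and /proc are available on any Linux box):

    # si/so stay near zero if the box is only churning cache, not swapping
    vmstat 1 5
    # How much memory is currently page cache vs. genuinely free
    grep -E 'MemFree|Cached|SwapFree' /proc/meminfo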
The kswapd issue is interesting: is it possible you're being affected by https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1518457 (although I don't see a fix for Trusty listed on there)?
On Wed, Dec 5, 2018 at 11:34 AM Riccardo Ferrari wrote:

Hi Alex,

I saw that behaviour in the past. I can tell you the kswapd0 usage is connected to the `disk_access_mode` property. On 64-bit systems it defaults to mmap. That also explains why your virtual memory is so high (it somehow matches the node load, right?). I cannot find a good reference
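For completeness: disk_access_mode is not listed in the default cassandra.yaml, but the resolved mode is logged at startup and the property can be set explicitly. A sketch assuming a stock package layout (the paths and the chosen value are illustrative, not a recommendation):

    # Older Cassandra versions log the resolved mode on startup
    grep -i 'DiskAccessMode' /var/log/cassandra/system.log
    # Example of pinning it so data files are read with buffered I/O instead of mmap
    # (append to cassandra.yaml and restart; verify on a test node first)
    echo 'disk_access_mode: mmap_index_only' >> /etc/cassandra/cassandra.yaml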
Hello,

We are running the following setup on AWS EC2:

Host system (AWS AMI): Ubuntu 14.04.4 LTS,
Linux 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

The Cassandra process runs inside a Docker container. The Docker image is based on Ubuntu 18.04.1
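Since the JVM runs inside Docker, it may also be worth confirming whether the container carries its own memory limit that the kernel could enforce. A sketch with a hypothetical container name:

    # Live memory usage and limit as Docker sees them ('cassandra' is a placeholder name)
    docker stats --no-stream cassandra
    # A non-zero value here means a hard memory limit is set on the container
    docker inspect --format '{{.HostConfig.Memory}}' cassandra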