Re: Sporadic high IO bandwidth and Linux OOM killer

2018-12-28 Thread Jeff Jirsa
I’ve lost some context, but there are two direct memory allocations per sstable: compression offsets and the bloom filter. Both of those get built during sstable creation, and the bloom filter’s size is aggressively allocated, so you’ll see a big chunk of memory disappear as compaction kicks
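A rough back-of-the-envelope sketch (not Cassandra's actual implementation) of why the bloom filter allocation is large and happens up front: its size is fixed by the estimated key count and the target false-positive chance at the moment the sstable is written. The 1% false-positive chance below is an assumed value for illustration.

```python
import math

def bloom_filter_bytes(num_keys: int, fp_chance: float) -> int:
    """Estimate bloom filter memory with the standard sizing formula
    m = -n * ln(p) / (ln 2)^2 bits, all allocated at sstable creation."""
    bits = -num_keys * math.log(fp_chance) / (math.log(2) ** 2)
    return int(math.ceil(bits / 8))

# 100M keys at a 1% false-positive chance works out to ~114 MiB,
# allocated in one chunk when compaction starts writing the new sstable.
size = bloom_filter_bytes(100_000_000, 0.01)
print(size // (1024 * 1024), "MiB")  # -> 114 MiB
```

This is why a compaction that writes a large sstable can make a visible step in off-heap memory usage even before the sstable is finished.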

Re: Sporadic high IO bandwidth and Linux OOM killer

2018-12-28 Thread Yuri de Wit
We On Fri, Dec 28, 2018, 4:23 PM Oleksandr Shulgin <oleksandr.shul...@zalando.de> wrote: > On Fri, Dec 7, 2018 at 12:43 PM Oleksandr Shulgin <oleksandr.shul...@zalando.de> wrote: > >> >> After a fresh JVM start the memory allocation looks roughly like this: >> >> total

Re: Sporadic high IO bandwidth and Linux OOM killer

2018-12-28 Thread Oleksandr Shulgin
On Fri, Dec 7, 2018 at 12:43 PM Oleksandr Shulgin <oleksandr.shul...@zalando.de> wrote: > > After a fresh JVM start the memory allocation looks roughly like this: > > total used free shared buffers cached > Mem: 14G 14G 173M 1.1M

Re: Is there any chance the bootstrapping lost data?

2018-12-28 Thread Jeff Jirsa
> On Dec 28, 2018, at 2:17 AM, Jinhua Luo wrote: > > Hi All, > > While the pending node gets streaming of token ranges from other nodes, > all coordinators would send new writes to it so that it would not miss > any new data, correct? > > I have two (maybe silly) questions here: > Given the
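A hypothetical, much-simplified model (not Cassandra's code) of the behavior the question describes: while a bootstrapping node is still streaming, coordinators treat it as a pending replica for the ranges it will own and send it each new mutation in addition to the natural replicas, so writes that arrive mid-stream are not lost. The token and node names below are made up.

```python
# Toy model: a coordinator picks write targets for a token as the union of
# natural replicas (current owners) and pending replicas (bootstrapping nodes
# that will own the range once streaming completes).

def write_targets(token, natural_replicas, pending_replicas):
    """Return every node a mutation for `token` should be sent to."""
    targets = set(natural_replicas.get(token, []))
    targets |= set(pending_replicas.get(token, []))
    return targets

natural = {"t1": ["n1", "n2", "n3"]}   # current owners of range t1
pending = {"t1": ["n4"]}               # n4 is bootstrapping into t1
print(sorted(write_targets("t1", natural, pending)))
# -> ['n1', 'n2', 'n3', 'n4']
```

In this toy model the bootstrapping node n4 receives the write alongside the three natural replicas; the data it missed from before the write started arrives via streaming instead.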

Re: [EXTERNAL] Writes and Reads with high latency

2018-12-28 Thread Marco Gasparini
- How many event_datetime records can you have per pkey? During a day of work I can have fewer than 10 event_datetime records per pkey. Every day I keep at most 3 of them, so each new event_datetime for a pkey results in a delete and an insert into Cassandra. - How many pkeys (roughly) do you
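The retention pattern described above (keep only the newest 3 event_datetime rows per pkey, deleting the oldest on each new insert) can be sketched like this; the in-memory dict stands in for the Cassandra table and the names are illustrative only:

```python
# Hypothetical sketch of "keep the 3 newest event_datetime rows per pkey".
# Each insert past the limit implies a DELETE of the oldest row, which in
# Cassandra produces a tombstone on every such write.
from collections import defaultdict

MAX_PER_KEY = 3
table = defaultdict(list)  # pkey -> sorted list of event_datetime strings

def upsert(pkey, event_dt):
    rows = table[pkey]
    rows.append(event_dt)
    rows.sort()
    while len(rows) > MAX_PER_KEY:
        rows.pop(0)  # oldest row removed: a DELETE in the real table

for dt in ["2018-12-28T09:00", "2018-12-28T10:00",
           "2018-12-28T11:00", "2018-12-28T12:00"]:
    upsert("pkey-1", dt)

print(table["pkey-1"])
# -> ['2018-12-28T10:00', '2018-12-28T11:00', '2018-12-28T12:00']
```

With fewer than 10 events per pkey per day this means up to a handful of delete+insert pairs per pkey daily, so the steady-state tombstone load per partition stays small but is not zero.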