Do you have any detailed benchmark metrics, such as QPS, average read/write
latency, and P95/P99 read/write latency?
On Fri, Jan 5, 2018 at 5:57 PM, Justin Sanciangco wrote:
I am benchmarking with the YCSB tool doing 1k writes.

Below are my server specs:
2 sockets
12-core hyperthreaded processors
64 GB memory

Cassandra settings:
32 GB heap
concurrent_reads: 128
concurrent_writes: 256

From what we are seeing, it looks like the kernel writing to the disk causes
the degradation.
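For reference, the metrics asked about above (throughput plus average and
P95/P99 latencies) come straight out of YCSB's summary output. A minimal
invocation sketch, assuming the cassandra-cql binding; the host, record
counts, and thread count are placeholders:

  # Load the dataset, then run the measured phase:
  bin/ycsb load cassandra-cql -P workloads/workloada \
      -p hosts=10.0.0.1 -p recordcount=100000000 -threads 64
  bin/ycsb run cassandra-cql -P workloads/workloada \
      -p hosts=10.0.0.1 -p operationcount=100000000 -threads 64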
The WARN is a hint you’ve got tombstones; maybe not a big deal, but a hint at
your data model. It’s not causing this.

The log at INFO is Cassandra’s connection to your app getting severed;
Cassandra is saying the reset is on the other side (app side, maybe a firewall
or something in the middle).
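If you want to see how bad the tombstones actually are, a quick check along
these lines (keyspace/table names are placeholders; the command is cfstats on
pre-3.0 versions):

  # Per-table tombstone counts seen by reads:
  nodetool tablestats my_keyspace.my_table | grep -i tombstone
  # Or look at the tombstone warnings themselves:
  grep -i tombstone /var/log/cassandra/system.log | tail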
Update: Still getting the NoHostAvailable periodically in client logs.
Also seeing these INFO and WARN messages in /var/log/cassandra/system.log:

INFO [epollEventLoopGroup-2-5] 2018-01-06 01:39:02,412 Message.java:623 - Unexpected exception during request; channel = [id: 0xae99b597, …
Second the note about compression chunk size in particular.
--
Jeff Jirsa
> On Jan 5, 2018, at 5:48 PM, Jon Haddad wrote:
Oh, I should have added: my compression settings comment only applies to
read-heavy workloads, as reading 64KB off disk in order to return a handful of
bytes is incredibly wasteful, by orders of magnitude, but doesn’t really cause
any problems on write-heavy workloads.
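In case it saves someone a lookup, shrinking the chunk size is a per-table
setting, roughly like this (keyspace/table names are placeholders;
chunk_length_in_kb is the option name in 3.x):

  # Read-heavy tables: drop the 64KB default so small reads
  # only decompress 4KB per chunk:
  cqlsh -e "ALTER TABLE my_ks.my_table WITH compression =
    {'class': 'LZ4Compressor', 'chunk_length_in_kb': 4};"
  # Rewrite existing SSTables so they pick up the new chunk size:
  nodetool upgradesstables -a my_ks my_table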
Generally speaking, disable readahead. After that, it's very likely the issue
isn’t in the disk settings you’re using, but is actually in your Cassandra
config or the data model. How are you measuring things? Are you saturating
your disks? What resource is your bottleneck?
Can you quantify "very bad performance"?
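To make that concrete, the readahead change and the saturation check look
roughly like this, assuming the device is nvme0n1:

  # Readahead is in 512-byte sectors; check it, then disable it:
  blockdev --getra /dev/nvme0n1
  blockdev --setra 0 /dev/nvme0n1
  # Watch utilization and queue depth while the benchmark runs:
  iostat -xm 5 nvme0n1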
--
Jeff Jirsa
> On Jan 5, 2018, at 5:41 PM, Justin Sanciangco wrote:
Hello,

I am currently benchmarking NVMe SSDs with Cassandra and am getting very bad
performance when my workload exceeds the memory size. What mount settings
should be used for NVMe? Right now the SSD is formatted as XFS using the noop
scheduler. Are there any additional mount options that should …
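Not an authoritative answer, but the usual XFS-on-NVMe starting point is
something like the following; the device and mount point are placeholders, and
note that NVMe goes through blk-mq, where the no-op choice is called "none"
rather than "noop":

  # noatime avoids a metadata update on every read; XFS defaults are
  # otherwise reasonable for Cassandra data directories:
  mount -o noatime /dev/nvme0n1p1 /var/lib/cassandra
  # Check/set the scheduler (blk-mq devices list none, not noop):
  cat /sys/block/nvme0n1/queue/scheduler
  echo none > /sys/block/nvme0n1/queue/scheduler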
Hello Oliver,
I don't see this being a particularly good fit for Cassandra, but I hope
someone confirms this.
However, your use case did look interesting for another project I've
interacted with indirectly, Pilosa, which used to have a Cassandra backend
before a complete Golang rewrite:
Hello,
Let's say I have a table that has one column with a unique id as the
primary key, and then hundreds of columns of floats, although a large
fraction of the cells are empty.
I want to create an application that allows users to pick one or more
number columns, specify a condition …
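To make the shape concrete, the table described would look something like this
in CQL; every name here is made up, and the hundreds of columns are elided:

  cqlsh -e "
  CREATE TABLE analytics.wide_floats (
      id uuid PRIMARY KEY,  -- unique row id
      col_001 float,        -- ...hundreds of sparse float columns;
      col_002 float,        -- unset cells cost nothing to store
      col_003 float
  );"

The sparsity itself is fine (unset columns simply aren't stored), but
filtering on arbitrary combinations of those columns is the part Cassandra
handles poorly without secondary indexes or ALLOW FILTERING.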
Hi,
We also noticed an increase in CPU, both system and user, on our c3.4xlarge
fleet. So far it's really visible in max(%user) and especially max(%system),
which has doubled! I graphed the ratio "writes/s / %system"; it's interesting
to see how the value dropped yesterday. You can see it here:
Hello,
Could someone explain to me the difference between the values of the following
two metrics:
*ThreadPool Metrics:CompactionExecutor:CompletedTasks* vs *Compaction
Metrics:CompletedTasks*
I do not get the same value when I query JMX!
Thanks
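One way to read both values side by side, e.g. with jmxterm (the jar path is a
placeholder; the MBean names follow the 3.x metrics naming):

  # ThreadPools view of the CompactionExecutor:
  echo 'get -b org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=CompactionExecutor,name=CompletedTasks Value' | java -jar jmxterm.jar -l localhost:7199 -n
  # Compaction metrics view:
  echo 'get -b org.apache.cassandra.metrics:type=Compaction,name=CompletedTasks Value' | java -jar jmxterm.jar -l localhost:7199 -n

Note the CompactionExecutor pool also runs tasks other than compactions proper
(e.g. cleanup and index builds), so the two counters can legitimately differ.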
Hi Thomas,
No clue about AWS, and it is of course highly dependent on hardware, but on
CentOS 7 on bare metal, the patched kernel
(kernel-3.10.0-693.11.6.el7.x86_64) seems to show roughly a 50% CPU
increase compared to the unpatched kernel
(kernel-3.10.0-693.11.1.el7.x86_64). On a happier note, …
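For anyone comparing boxes: a quick way to confirm whether KPTI is actually
active on a given kernel, assuming it's new enough to expose the sysfs file:

  # Prints e.g. 'Mitigation: PTI' on patched kernels:
  cat /sys/devices/system/cpu/vulnerabilities/meltdown
  # Older patched kernels only log it at boot:
  dmesg | grep -i isolation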
Hello,
does anybody have any experience or results yet on whether a Linux kernel
patched for Meltdown/Spectre affects Cassandra performance negatively?
In production, with all nodes running in AWS on m4.xlarge, we have seen up to
a 50% relative CPU increase (e.g. AVG CPU from 40% => 60%) since Jan 4,