Thanks Chris.
I run a client on a *separate* AWS instance from the Cassandra cluster
servers. On the client side, I create 40 or 50 threads for sending requests
to each Cassandra node, and I create one Thrift client for each of the
threads. And at the beginning, all the created Thrift clients...
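For readers following along, here is a minimal sketch of the pattern Joy
describes: one OS thread per Thrift connection, each issuing blocking
requests in a loop. The CassandraClient stub, host address, thread and
request counts are placeholders (anything not in the original post is an
assumption); port 9160 is Cassandra's default Thrift rpc_port, and newer
libthrift uses std::shared_ptr where older releases used boost::shared_ptr.

#include <thrift/transport/TSocket.h>
#include <thrift/transport/TBufferTransports.h>
#include <thrift/protocol/TBinaryProtocol.h>
#include <atomic>
#include <memory>
#include <thread>
#include <vector>

using apache::thrift::transport::TSocket;
using apache::thrift::transport::TFramedTransport;
using apache::thrift::protocol::TBinaryProtocol;

std::atomic<long> completed{0};

// One worker per connection: a blocking Thrift client can have only a
// single request in flight, so total throughput scales with thread count.
void worker(const std::string& host, int port, int requests) {
    auto socket    = std::make_shared<TSocket>(host, port);
    auto transport = std::make_shared<TFramedTransport>(socket);
    auto protocol  = std::make_shared<TBinaryProtocol>(transport);
    // CassandraClient client(protocol);  // Thrift-generated stub (assumed)
    transport->open();
    for (int i = 0; i < requests; ++i) {
        // client.insert(...);            // one synchronous call at a time
        ++completed;
    }
    transport->close();
}

int main() {
    const int kThreads = 40;              // "40 or 50 threads" per the post
    std::vector<std::thread> pool;
    for (int t = 0; t < kThreads; ++t)
        pool.emplace_back(worker, "10.0.0.1", 9160, 100000);
    for (auto& th : pool) th.join();
}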
So I would *expect* an increase of ~20k qps per node with m3.xlarge, so
there may be something up with your client (I am not a C++ person, but
hopefully someone on the list will take notice).
Latency does not decrease linearly as you add nodes. What you are likely
seeing with latency, since so...
Do you mean that I should clear my table after each run? Indeed, I can
see compaction run several times during my test, but could just a few
compactions affect the performance that much?
It certainly affects performance. Read performance suffers first (reads
have to touch more and more SSTables as compaction falls behind), then
write performance suffers.
Hi Joy,
Are you resetting your data after each test run? I wonder if your tests
are actually causing you to fall behind on data grooming tasks such as
compaction, and so performance suffers for your later tests.
There are *so many* factors which can affect performance; without reviewing
the test...
I'm sorry, I meant to say 6 nodes, RF=3.
Also, look at performance over sustained periods of time, not burst
writing. Run your test for several hours and watch memory and especially
compaction stats. See if you can work out what data volume you can write
while keeping outstanding compactions...
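For example (assuming a standard Cassandra install), the compaction backlog
can be watched with nodetool; if the pending count grows without bound, the
cluster is taking writes faster than it can groom them:

    nodetool compactionstats          # pending tasks and active compactions
    nodetool cfstats <keyspace>       # per-table SSTable counts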
Hi Eric,
Thank you very much for your reply!
Do you mean that I should clear my table after each run? Indeed, I can see
compaction run several times during my test, but could just a few
compactions affect the performance that much? Also, I can see from
OpsCenter that some ParNew GCs happen, but...
I think your client could use improvement. How many threads do you have
running in your test? With a thrift call like that you can only do one
request at a time per connection. For example, assuming C* takes 0ms, a
10ms network latency/driver overhead will mean 20ms RTT and a max
throughput of 50 requests per second per connection.
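Spelling that arithmetic out (a rough upper bound that ignores server-side
time; the 10ms latency is the example above, and the 40 threads come from
earlier in the thread):

    1 request / 0.020 s RTT  =  50 requests/s per connection
    50 requests/s x 40 connections  ~=  2,000 requests/s total

That is well short of the ~20k qps per node mentioned earlier, which is why
more connections per node, or an async/pipelined client, would be the first
thing to look at.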