Those numbers, as I suspected, line up pretty well with your AWS instance
configuration and the network latencies within AWS. It is clear that this is
a WRITE ONLY test; you might want to run a mixed (e.g. 50% read, 50% write)
test for sanity. Note that the test will populate the data BEFORE it begins
doing the read/write tests.
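
For example, a 50/50 mixed run could look something like this (a sketch; the
node address and operation count are placeholders, not values from your test):

cassandra-stress mixed ratio\(write=1,read=1\) n=1000000 -node <node_ip>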

In a dedicated environment at a recent client, with 10gbit links (just
grabbing one casstest run from my archives), I see a bit less than twice the
throughput above. Note that the latency max is the result of a stop-the-world
garbage collection; there were huge pauses in the run below because that
particular run was using a 24GB (Cassandra 2.x) Java heap.

op rate                   : 21567 [WRITE:21567]
partition rate            : 21567 [WRITE:21567]
row rate                  : 21567 [WRITE:21567]
latency mean              : 9.3 [WRITE:9.3]
latency median            : 7.7 [WRITE:7.7]
latency 95th percentile   : 13.2 [WRITE:13.2]
latency 99th percentile   : 32.6 [WRITE:32.6]
latency 99.9th percentile : 97.2 [WRITE:97.2]
latency max               : 14906.1 [WRITE:14906.1]
Total partitions          : 83333333 [WRITE:83333333]
Total errors              : 0 [WRITE:0]
total gc count            : 705
total gc mb               : 1691132
total gc time (s)         : 30
avg gc time(ms)           : 43
stdev gc time(ms)         : 13
Total operation time      : 01:04:23
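
If you hit GC pauses like that, the usual mitigation on Cassandra 2.x is to
cap the CMS heap rather than run it at 24GB. A sketch of the relevant
cassandra-env.sh settings (the sizes are assumptions for a 16GB-RAM node,
not taken from this run):

MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="400M"    # rule of thumb: ~100MB per physical CPU core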


Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8144 9872

On Thu, Jul 7, 2016 at 2:51 PM, Yuan Fang <y...@kryptoncloud.com> wrote:

> Yes, here is my stress test result:
> Results:
> op rate                   : 12200 [WRITE:12200]
> partition rate            : 12200 [WRITE:12200]
> row rate                  : 12200 [WRITE:12200]
> latency mean              : 16.4 [WRITE:16.4]
> latency median            : 7.1 [WRITE:7.1]
> latency 95th percentile   : 38.1 [WRITE:38.1]
> latency 99th percentile   : 204.3 [WRITE:204.3]
> latency 99.9th percentile : 465.9 [WRITE:465.9]
> latency max               : 1408.4 [WRITE:1408.4]
> Total partitions          : 1000000 [WRITE:1000000]
> Total errors              : 0 [WRITE:0]
> total gc count            : 0
> total gc mb               : 0
> total gc time (s)         : 0
> avg gc time(ms)           : NaN
> stdev gc time(ms)         : 0
> Total operation time      : 00:01:21
> END
>
> On Thu, Jul 7, 2016 at 2:49 PM, Ryan Svihla <r...@foundev.pro> wrote:
>
>> Lots of variables you're leaving out.
>>
>> Depends on write size, whether you're using logged batches or not, what
>> consistency level, what RF, whether the writes come in bursts, etc.
>> However, that's all sort of moot for determining "normal"; really, you
>> need a baseline, as all those variables end up mattering a huge amount.
>>
>> I would suggest running cassandra-stress as a baseline and going from
>> there depending on what those numbers say (just pick the defaults).
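>>
>> For example, a default baseline run could look something like this (the
>> node address is a placeholder):
>>
>> cassandra-stress write n=1000000 -node <node_ip>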
>>
>> Sent from my iPhone
>>
>> On Jul 7, 2016, at 4:39 PM, Yuan Fang <y...@kryptoncloud.com> wrote:
>>
>> yes, it is about 8k writes per node.
>>
>>
>>
>> On Thu, Jul 7, 2016 at 2:18 PM, daemeon reiydelle <daeme...@gmail.com>
>> wrote:
>>
>>> Are you saying 7k writes per node, or 30k writes per node?
>>>
>>>
>>> Daemeon C.M. Reiydelle
>>> USA (+1) 415.501.0198
>>> London (+44) (0) 20 8144 9872
>>>
>>> On Thu, Jul 7, 2016 at 2:05 PM, Yuan Fang <y...@kryptoncloud.com> wrote:
>>>
>>>> The 30k writes/second is the main thing.
>>>>
>>>>
>>>> On Thu, Jul 7, 2016 at 1:51 PM, daemeon reiydelle <daeme...@gmail.com>
>>>> wrote:
>>>>
>>>>> Assuming you meant 100k, that is likely for something with 16MB of
>>>>> storage (probably way too small) where the data is more than 64KB and
>>>>> hence will not fit into the row cache.
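>>>>>
>>>>> (For reference, the row cache is sized via row_cache_size_in_mb in
>>>>> cassandra.yaml; e.g. row_cache_size_in_mb: 16 gives a 16MB cache. The
>>>>> 16 here is just the figure implied above, not a recommendation.)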
>>>>>
>>>>>
>>>>> Daemeon C.M. Reiydelle
>>>>> USA (+1) 415.501.0198
>>>>> London (+44) (0) 20 8144 9872
>>>>>
>>>>> On Thu, Jul 7, 2016 at 1:25 PM, Yuan Fang <y...@kryptoncloud.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> I have a cluster of 4 m4.xlarge nodes (4 CPUs, 16GB memory, and 600GB
>>>>>> SSD EBS each).
>>>>>> I can reach cluster-wide write throughput of about 30k requests/second
>>>>>> and read throughput of about 100 requests/second. The OS load on the
>>>>>> cluster is constantly above 10. Are those numbers normal?
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>>
>>>>>> Best,
>>>>>>
>>>>>> Yuan
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
