[ https://issues.apache.org/jira/browse/CASSANDRA-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15003017#comment-15003017 ]

Ariel Weisberg commented on CASSANDRA-7217:
-------------------------------------------

Performance counters

2000 threads
{code}
Results:
op rate                   : 19419 [WRITE:19419]
partition rate            : 19419 [WRITE:19419]
row rate                  : 19419 [WRITE:19419]
latency mean              : 103.0 [WRITE:103.0]
latency median            : 91.3 [WRITE:91.3]
latency 95th percentile   : 179.4 [WRITE:179.4]
latency 99th percentile   : 252.3 [WRITE:252.3]
latency 99.9th percentile : 428.5 [WRITE:428.5]
latency max               : 57651.8 [WRITE:57651.8]
Total partitions          : 19000000 [WRITE:19000000]
Total errors              : 0 [WRITE:0]
total gc count            : 0
total gc mb               : 0
total gc time (s)         : 0
avg gc time(ms)           : NaN
stdev gc time(ms)         : 0
Total operation time      : 00:16:18
END

 Performance counter stats for './cassandra-stress write n=19000000 -rate threads=2000 -mode native cql3 -node 192.168.1.9':

 3,320,451,421,007      cycles                    #    2.192 GHz                     [15.41%]
 2,563,758,232,484      instructions              #    0.77  insns per cycle
                                                  #    0.94  stalled cycles per insn [20.47%]
    69,188,067,241      cache-references          #   45.664 M/sec                   [25.56%]
    13,456,198,724      cache-misses              #   19.449 % of all cache refs     [30.60%]
   131,776,347,830      bus-cycles                #   86.973 M/sec                   [35.65%]
 2,415,412,133,089      idle-cycles-frontend      #   72.74% frontend cycles idle    [40.69%]
 1,750,197,198,741      idle-cycles-backend       #   52.71% backend  cycles idle    [45.75%]
    1514363.238593      cpu-clock (msec)
    1515146.390785      task-clock (msec)         #    1.530 CPUs utilized
           154,815      page-faults               #    0.102 K/sec
        87,357,050      cs                        #    0.058 M/sec
        37,030,093      migrations                #    0.024 M/sec
           154,691      minor-faults              #    0.102 K/sec
                 0      major-faults              #    0.000 K/sec
                 0      alignment-faults          #    0.000 K/sec
                 0      emulation-faults          #    0.000 K/sec
   358,579,878,595      branch-instructions       #  236.664 M/sec                   [45.74%]
     5,088,330,722      branch-misses             #    1.42% of all branches         [45.80%]
    70,350,080,393      L1-dcache-load-misses     #   46.431 M/sec                   [45.92%]
    24,626,765,787      L1-dcache-store-misses    #   16.254 M/sec                   [40.88%]
    19,812,757,638      L1-dcache-prefetch-misses #   13.076 M/sec                   [40.97%]
    59,285,911,291      L1-icache-load-misses     #   39.129 M/sec                   [40.92%]
     4,437,071,985      dTLB-load-misses          #    2.928 M/sec                   [40.90%]
       821,151,709      dTLB-store-misses         #    0.542 M/sec                   [40.80%]
     1,188,402,914      iTLB-load-misses          #    0.784 M/sec                   [40.66%]
     5,274,857,779      branch-load-misses        #    3.481 M/sec                   [40.58%]
    39,293,189,238      LLC-loads                 #   25.934 M/sec                   [40.47%]
    10,625,403,856      LLC-stores                #    7.013 M/sec                   [40.45%]
    16,978,686,645      LLC-prefetches            #   11.206 M/sec                   [10.08%]

     990.019887601 seconds time elapsed
{code}
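The original comment does not show the `perf` command used; a plausible reconstruction, assuming the event list is simply the set of counters reported above (the exact events and any extra flags are an assumption):

```shell
# Hypothetical reconstruction of the measurement command: wrap the stress
# client in perf stat, requesting the hardware/software events that appear
# in the output above. Event multiplexing produces the [NN.NN%] brackets.
perf stat \
  -e cycles,instructions,cache-references,cache-misses,bus-cycles \
  -e idle-cycles-frontend,idle-cycles-backend \
  -e cpu-clock,task-clock,page-faults,cs,migrations \
  -e minor-faults,major-faults,alignment-faults,emulation-faults \
  -e branch-instructions,branch-misses \
  -e L1-dcache-load-misses,L1-dcache-store-misses,L1-dcache-prefetch-misses \
  -e L1-icache-load-misses,dTLB-load-misses,dTLB-store-misses,iTLB-load-misses \
  -e branch-load-misses,LLC-loads,LLC-stores,LLC-prefetches \
  ./cassandra-stress write n=19000000 -rate threads=2000 -mode native cql3 -node 192.168.1.9
```

Because more events are requested than there are hardware counters, perf time-multiplexes them, which is why each line carries a sampling-coverage percentage.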
500 threads
{code}
Results:
op rate                   : 63678 [WRITE:63678]
partition rate            : 63678 [WRITE:63678]
row rate                  : 63678 [WRITE:63678]
latency mean              : 7.8 [WRITE:7.8]
latency median            : 5.6 [WRITE:5.6]
latency 95th percentile   : 16.8 [WRITE:16.8]
latency 99th percentile   : 36.5 [WRITE:36.5]
latency 99.9th percentile : 77.5 [WRITE:77.5]
latency max               : 358.8 [WRITE:358.8]
Total partitions          : 19000000 [WRITE:19000000]
Total errors              : 0 [WRITE:0]
total gc count            : 0
total gc mb               : 0
total gc time (s)         : 0
avg gc time(ms)           : NaN
stdev gc time(ms)         : 0
Total operation time      : 00:04:58
END

 Performance counter stats for './cassandra-stress write n=19000000 -rate threads=500 -mode native cql3 -node 192.168.1.9':

 2,055,138,822,781      cycles                    #    2.519 GHz                     [15.25%]
 1,923,953,212,761      instructions              #    0.94  insns per cycle
                                                  #    0.71  stalled cycles per insn [20.30%]
    31,745,552,527      cache-references          #   38.904 M/sec                   [25.33%]
     6,931,345,766      cache-misses              #   21.834 % of all cache refs     [30.35%]
    79,818,924,716      bus-cycles                #   97.818 M/sec                   [35.35%]
 1,374,763,901,585      idle-cycles-frontend      #   66.89% frontend cycles idle    [40.37%]
   891,429,827,525      idle-cycles-backend       #   43.38% backend  cycles idle    [45.35%]
     815994.442406      cpu-clock (msec)
     815998.411396      task-clock (msec)         #    2.635 CPUs utilized
            84,202      page-faults               #    0.103 K/sec
        34,375,605      cs                        #    0.042 M/sec
         1,661,307      migrations                #    0.002 M/sec
            83,803      minor-faults              #    0.103 K/sec
                 0      major-faults              #    0.000 K/sec
                 0      alignment-faults          #    0.000 K/sec
                 0      emulation-faults          #    0.000 K/sec
   219,082,315,466      branch-instructions       #  268.484 M/sec                   [45.30%]
     2,321,109,537      branch-misses             #    1.06% of all branches         [45.35%]
    37,321,647,256      L1-dcache-load-misses     #   45.737 M/sec                   [45.40%]
    15,702,399,931      L1-dcache-store-misses    #   19.243 M/sec                   [40.39%]
    14,082,194,661      L1-dcache-prefetch-misses #   17.258 M/sec                   [40.47%]
    35,512,444,743      L1-icache-load-misses     #   43.520 M/sec                   [40.47%]
     2,048,574,473      dTLB-load-misses          #    2.511 M/sec                   [40.46%]
       338,040,710      dTLB-store-misses         #    0.414 M/sec                   [40.47%]
       680,218,846      iTLB-load-misses          #    0.834 M/sec                   [40.47%]
     2,316,842,085      branch-load-misses        #    2.839 M/sec                   [40.44%]
    16,883,500,935      LLC-loads                 #   20.691 M/sec                   [40.41%]
     3,542,330,824      LLC-stores                #    4.341 M/sec                   [40.37%]
     9,938,493,897      LLC-prefetches            #   12.180 M/sec                   [10.04%]

     309.643226007 seconds time elapsed
{code}
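The scheduler counters look like the interesting delta between the two runs. A small sketch (all input values copied verbatim from the output above) deriving per-operation costs:

```python
# Derived per-op metrics from the two perf stat runs above.
# Every input number is copied from the dumps; only the ratios are new.
runs = {
    2000: dict(ops=19_000_000, elapsed=990.019887601,
               cs=87_357_050, migrations=37_030_093, ipc=0.77),
    500:  dict(ops=19_000_000, elapsed=309.643226007,
               cs=34_375_605, migrations=1_661_307, ipc=0.94),
}

for threads, r in runs.items():
    print(f"{threads:>4} threads: "
          f"{r['ops'] / r['elapsed']:>6.0f} ops/s (elapsed-based), "
          f"{r['cs'] / r['ops']:.2f} context switches/op, "
          f"{r['migrations'] / r['ops']:.3f} CPU migrations/op, "
          f"IPC {r['ipc']}")
```

At 2000 threads the client does roughly 2.5x the context switches and over 20x the CPU migrations per operation compared to 500 threads, alongside the drop in instructions per cycle, which is consistent with the throughput collapse being scheduler/contention driven rather than cache driven.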

> Native transport performance (with cassandra-stress) drops precipitously past around 1000 threads
> -------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-7217
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7217
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Benedict
>            Assignee: Ariel Weisberg
>              Labels: performance, stress, triaged
>             Fix For: 3.1
>
>
> This is obviously bad. Let's figure out why it's happening and put a stop to it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)