Hi Andrew,

From a PE perspective, I have not found anything wrong so far.

The results are here:
https://docs.google.com/spreadsheets/d/1yo-A-f4tjchdT9R-hkh6CkcXrbBHG_K2Y_ptF9QPT1Q/edit?usp=sharing

There is still a small perf difference on the scan, but I have not found a
diff as big as yours.
I'm also missing 0.98.3 results. Need to re-run them.
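
Out of curiosity I double-checked the Workload E delta from the numbers in
your report. A quick sketch; the two value pairs are copied from the quoted
[OVERALL] and [SCAN] lines below, nothing else is assumed:

```python
# Sanity-check the Workload E regression using the figures from
# Andrew's report (0.98.0 vs 0.98.4 RC0).

def pct_change(before, after):
    """Percent change from `before` to `after`."""
    return (after - before) / before * 100.0

# [OVERALL] Throughput(ops/sec): 6308 -> 4835
# [SCAN] AverageLatency(us): 26636 -> 34620
throughput = pct_change(6308, 4835)
scan_latency = pct_change(26636, 34620)

print(f"scan throughput: {throughput:+.1f}%")    # roughly -23%
print(f"scan avg latency: {scan_latency:+.1f}%")  # roughly +30%
```

So the throughput drop is about 23% and the average scan latency increase is
closer to 30%.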

I will add YCSB to the scope soon...
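
When I do, I will fold the per-client readings together the way you describe
(averages averaged over all clients and runs, min of mins, max of maxes for
max and percentile values). A minimal sketch with made-up sample readings,
just to pin down the aggregation:

```python
# Aggregate per-client YCSB readings the way Andrew describes:
# averages are averaged over all (client, run) pairs, min is the
# minimum reported by any client on any run, and max is the maximum
# reported by any client on any run. The readings below are invented.

def aggregate(readings):
    """readings: one dict per (client, run) with avg/min/max in us."""
    n = len(readings)
    return {
        "avg_us": sum(r["avg_us"] for r in readings) / n,
        "min_us": min(r["min_us"] for r in readings),
        "max_us": max(r["max_us"] for r in readings),
    }

# Hypothetical readings from 2 clients x 2 runs:
sample = [
    {"avg_us": 600, "min_us": 270, "max_us": 700000},
    {"avg_us": 640, "min_us": 268, "max_us": 710000},
    {"avg_us": 620, "min_us": 275, "max_us": 690000},
    {"avg_us": 660, "min_us": 269, "max_us": 713000},
]
print(aggregate(sample))
# {'avg_us': 630.0, 'min_us': 268, 'max_us': 713000}
```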

JM


2014-07-17 20:43 GMT-04:00 Jean-Marc Spaggiari <jean-m...@spaggiari.org>:

> Hi Andrew,
>
> I ran PE on all the releases and will have the results for 0.94.4 tomorrow
> (even with the vote). I have a dedicated 4-node cluster. It's not as big
> as the one you used, but I can also run YCSB on it if you want. Just
> ping me offline with the details of what you run and I will be glad to do
> it for you.
>
> JM
>
>
> 2014-07-17 20:16 GMT-04:00 Enis Söztutar <e...@apache.org>:
>
> Sorry, just saw your vote on RC now.
>>
>>
>> On Thu, Jul 17, 2014 at 5:16 PM, Enis Söztutar <e...@apache.org> wrote:
>>
>> > This indeed looks concerning. It seems that Workload E is 95% scan,
>> > and the other workloads have no scan, so it seems that we have some
>> > regression in scans.
>> >
>> > Should this sink the RC, what do you think?
>> >
>> > Enis
>> >
>> >
>> > On Thu, Jul 17, 2014 at 5:08 PM, Andrew Purtell <apurt...@apache.org>
>> > wrote:
>> >
>> >> Comparing the relative performance of 0.98.4 RC0 and 0.98.0 on Hadoop
>> >> 2.2.0
>> >> using YCSB.
>> >>
>> >> This will be the last report of these from me for a while, as I will be
>> >> losing my current access to EC2 resources tomorrow.
>> >>
>> >> 5 concurrent YCSB clients on 5 servers target 100,000 ops/second in
>> >> aggregate. Reported average values are averages of readings from all
>> >> clients over 3 runs. Min values are the minimum reported by any client
>> >> on any run. Max and percentile values are the maximum reported by any
>> >> client on any run. What is interesting is the relative differences,
>> >> because each EC2 testbed has a varying baseline. The 0.98.0 and 0.98.4
>> >> tests were run on the same instance set.
>> >>
>> >> These tests were run with no security coprocessors installed, using
>> >> HFile V2. The Workload E results are a concern. *It appears we have a
>> >> 23% decline in measured scan throughput and a 30% increase in average
>> >> op time, from 27 ms to 35 ms.* This does not correspond to any active
>> >> security feature (though one could potentially worsen results further;
>> >> untested), so it is something changed in core code. Other workloads are
>> >> not affected, so this is something specific to scanning. Perhaps delete
>> >> tracking.
>> >>
>> >>
>> >> *Hardware and Versions*
>> >>
>> >> Hadoop 2.2.0
>> >> HBase 0.98.0-hadoop2 + HBASE-11277
>> >> HBase 0.98.4-hadoop2 RC0
>> >> YCSB 1.0.4
>> >>
>> >>
>> >> 11x EC2 c3.8xlarge: 1 master, 5 slaves, 5 test clients
>> >>
>> >>     32 cores
>> >>     60 GB RAM
>> >>     2 x 320 GB directly attached SSD
>> >>
>> >>     NameNode: 4 GB heap
>> >>     DataNode: 1 GB heap
>> >>     Master: 1 GB heap
>> >>     RegionServer: 8 GB heap, 24 GB bucket cache offheap engine
>> >>
>> >>
>> >> *Methodology*
>> >>
>> >>
>> >> Setup:
>> >>
>> >>      0. Start cluster
>> >>      1. shell: create "seed", { NAME=>"u", COMPRESSION=>"snappy" }
>> >>      2. YCSB: Preload 100 million rows into table "seed"
>> >>      3. shell: flush "seed" ; compact "seed"
>> >>      4. Wait for compaction to complete
>> >>      5. shell: create_snapshot "seed", "seed_snap"
>> >>      6. shell: disable "seed"
>> >>
>> >>
>> >>  For each test:
>> >>
>> >>      7. shell: clone_snapshot "seed_snap", "test"
>> >>      8. YCSB: On each client (5 clients), run the test with -p
>> >>         operationcount=2000000 -threads 20 -target 20000
>> >>      9. shell: disable "test"
>> >>     10. shell: drop "test"
>> >>
>> >> ​
>> >>
>> >> *Workload A*
>> >>
>> >>                                        0.98.0    0.98.4
>> >> [OVERALL] RunTime(ms)                  100743    100693
>> >> [OVERALL] Throughput(ops/sec)           99263     99312
>> >> [UPDATE] Operations                   4997918   4999620
>> >> [UPDATE] AverageLatency(us)               633       647
>> >> [UPDATE] MinLatency(us)                   269       268
>> >> [UPDATE] MaxLatency(us)               1450432    713191
>> >> [UPDATE] 95thPercentileLatency(ms)          0         0
>> >> [UPDATE] 99thPercentileLatency(ms)          5         4
>> >> [READ] Operations                     5002242   5000540
>> >> [READ] AverageLatency(us)                 151       144
>> >> [READ] MinLatency(us)                       0         0
>> >> [READ] MaxLatency(us)                 1104157    952392
>> >> [READ] 95thPercentileLatency(ms)            0         0
>> >> [READ] 99thPercentileLatency(ms)            0         0
>> >>
>> >>
>> >>
>> >> *Workload B*
>> >>
>> >>                                        0.98.0    0.98.4
>> >> [OVERALL] RunTime(ms)                  100465    100458
>> >> [OVERALL] Throughput(ops/sec)           99537     99544
>> >> [UPDATE] Operations                   9499627   9499891
>> >> [UPDATE] AverageLatency(us)               556       589
>> >> [UPDATE] MinLatency(us)                   268       264
>> >> [UPDATE] MaxLatency(us)                709604    695863
>> >> [UPDATE] 95thPercentileLatency(ms)          0         0
>> >> [UPDATE] 99thPercentileLatency(ms)          1         2
>> >> [READ] Operations                      500533    500269
>> >> [READ] AverageLatency(us)                 147       144
>> >> [READ] MinLatency(us)                       0         0
>> >> [READ] MaxLatency(us)                  571294    495148
>> >> [READ] 95thPercentileLatency(ms)            0         0
>> >> [READ] 99thPercentileLatency(ms)            0         0
>> >>
>> >>
>> >>
>> >> *Workload C*
>> >>
>> >>                                        0.98.0    0.98.4
>> >> [OVERALL] RunTime(ms)                  100091    100022
>> >> [OVERALL] Throughput(ops/sec)           99909     99978
>> >> [READ] Operations                     9916831  10000000
>> >> [READ] AverageLatency(us)                 524       526
>> >> [READ] MinLatency(us)                     273       269
>> >> [READ] MaxLatency(us)                  737108    741634
>> >> [READ] 95thPercentileLatency(ms)            0         0
>> >> [READ] 99thPercentileLatency(ms)            1         2
>> >>
>> >>
>> >>
>> >> *Workload D*
>> >>
>> >>                                        0.98.0    0.98.4
>> >> [OVERALL] RunTime(ms)                  114244    103308
>> >> [OVERALL] Throughput(ops/sec)           89114     96809
>> >> [INSERT] Operations                   9499965   9500306
>> >> [INSERT] AverageLatency(us)              1145       668
>> >> [INSERT] MinLatency(us)                   270       271
>> >> [INSERT] MaxLatency(us)               4598999   3291540
>> >> [INSERT] 95thPercentileLatency(ms)          6         1
>> >> [INSERT] 99thPercentileLatency(ms)         13         3
>> >> [READ] Operations                      500035    499694
>> >> [READ] AverageLatency(us)                  14        15
>> >> [READ] MinLatency(us)                       4         4
>> >> [READ] MaxLatency(us)                  494730    495198
>> >> [READ] 95thPercentileLatency(ms)            0         0
>> >> [READ] 99thPercentileLatency(ms)            0         0
>> >>
>> >>
>> >>
>> >> *Workload E*
>> >>
>> >>                                        0.98.0    0.98.4
>> >> [OVERALL] RunTime(ms)                 1600910   2078826
>> >> [OVERALL] Throughput(ops/sec)            6308      4835
>> >> [INSERT] Operations                    499131    500322
>> >> [INSERT] AverageLatency(us)                14        17
>> >> [INSERT] MinLatency(us)                     5         5
>> >> [INSERT] MaxLatency(us)                506079    564468
>> >> [INSERT] 95thPercentileLatency(ms)          0         0
>> >> [INSERT] 99thPercentileLatency(ms)          0         0
>> >> [SCAN] Operations                     9500869   9499678
>> >> [SCAN] AverageLatency(us)               26636     34620
>> >> [SCAN] MinLatency(us)                     746       755
>> >> [SCAN] MaxLatency(us)                 8067864   4615914
>> >> [SCAN] 95thPercentileLatency(ms)          117       136
>> >> [SCAN] 99thPercentileLatency(ms)          169       187
>> >>
>> >>
>> >>
>> >> *Workload F*
>> >>
>> >>                                               0.98.0    0.98.4
>> >> [OVERALL] RunTime(ms)                         100876    100820
>> >> [OVERALL] Throughput(ops/sec)                  99133     99187
>> >> [UPDATE] Operations                         10000000  10000000
>> >> [UPDATE] AverageLatency(us)                      737       746
>> >> [UPDATE] MinLatency(us)                          273       272
>> >> [UPDATE] MaxLatency(us)                       759812    747124
>> >> [UPDATE] 95thPercentileLatency(ms)                 1         1
>> >> [UPDATE] 99thPercentileLatency(ms)                 5         6
>> >> [READ-MODIFY-WRITE] Operations               5000370   5000082
>> >> [READ-MODIFY-WRITE] AverageLatency(us)           742       750
>> >> [READ-MODIFY-WRITE] MinLatency(us)               280       279
>> >> [READ-MODIFY-WRITE] MaxLatency(us)            756180    747197
>> >> [READ-MODIFY-WRITE] 95thPercentileLatency(ms)      1         1
>> >> [READ-MODIFY-WRITE] 99thPercentileLatency(ms)      5         6
>> >> [READ] Operations                            5000530   5000242
>> >> [READ] AverageLatency(us)                         22        17
>> >> [READ] MinLatency(us)                              0         0
>> >> [READ] MaxLatency(us)                        1551953   1097394
>> >> [READ] 95thPercentileLatency(ms)                   0         0
>> >> [READ] 99thPercentileLatency(ms)                   0         0
>> >>
>> >> --
>> >> Best regards,
>> >>
>> >>    - Andy
>> >>
>> >> Problems worthy of attack prove their worth by hitting back. - Piet
>> Hein
>> >> (via Tom White)
>> >>
>> >
>> >
>>
>
>
