Hello Marton Greber, Zoltan Chovan, Alexey Serbin, Attila Bukor, Kudu Jenkins,
Abhishek Chennaka,
I'd like you to reexamine a change. Please visit
http://gerrit.cloudera.org:8080/22867
to look at the new patch set (#18).
Change subject: KUDU-1973: Send no-op heartbeat operations batched - PART 1
......................................................................
KUDU-1973: Send no-op heartbeat operations batched - PART 1
Because of periodically sent heartbeat messages, a Kudu cluster
with thousands of tablets consumes significant CPU and
network resources even without any user activity. When
multiple messages are sent to the same host within a short
time frame, they can be batched to reduce CPU usage.
Batching results in fewer RPC calls, and some fields can be
shared between the no-op messages, further reducing network
traffic.
Processing heartbeats destined for the same host together also
reduces the CPU load. Only the periodic heartbeats are batched;
the leadership no-op heartbeats are still sent unbatched, in a
separate task. Batching only the periodic no-op heartbeats
allows us to process the batch request on a single thread,
since an empty periodic update does not take much time.
The shared timer submits a single shared task for the
heartbeat senders that are due to beat. However, if it
encounters a consensus peer with actual messages pending, it
leaves that peer out of the batch and submits a separate task
on the RPC threads using the consensus peer's RPC token. This
ensures that the other no-ops in the batch are still sent out
in a timely manner.
Using this method instead of buffering and periodically
flushing the buffer has two significant advantages:
+ If X heartbeats are batched together, instead of X timer
calls and X further task submissions to RPC threads, we will
have 1 timer call and [1, X+1] callbacks on the RPC thread -
usually just 1-2 callbacks.
+ Since there is no buffering, when a write request arrives,
there is no need for flush/discard logic.
Note that if some of the responses in the batch trigger
further updates, ProcessResponse calls DoProcessResponse on a
separate thread. So processing the response on a single thread
is fine as well; there is no need to change that logic.
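The dispatch decision described above can be sketched as follows. This is a
simplified illustration, not the patch's actual code: the names (PeerInfo,
PlanHeartbeatDispatch, has_pending_ops, the overflow fallback) are invented
for this sketch, and a real implementation would submit tasks rather than
return a plan.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical model of a consensus peer that is due to heartbeat.
struct PeerInfo {
  std::string host;       // tablet server the peer lives on
  std::string tablet_id;  // tablet this Raft peer belongs to
  bool has_pending_ops;   // actual (non-no-op) messages are queued
};

struct DispatchPlan {
  // host -> tablets whose empty heartbeats go out in one batched RPC
  std::map<std::string, std::vector<std::string>> batched;
  // peers submitted as separate tasks on their own RPC serial token
  std::vector<std::string> separate;
};

// Partition the peers that should beat: peers with real pending messages
// are left out of the batch and get their own task, so the shared task
// stays cheap and the remaining no-ops are sent in a timely manner.
DispatchPlan PlanHeartbeatDispatch(const std::vector<PeerInfo>& due_peers,
                                   size_t max_batch_size) {
  DispatchPlan plan;
  for (const auto& p : due_peers) {
    if (p.has_pending_ops) {
      plan.separate.push_back(p.tablet_id);
      continue;
    }
    auto& bucket = plan.batched[p.host];
    if (bucket.size() < max_batch_size) {
      bucket.push_back(p.tablet_id);
    } else {
      // Batch for this host is full; in this sketch the overflow simply
      // falls back to a separate task.
      plan.separate.push_back(p.tablet_id);
    }
  }
  return plan;
}
```

With this shape, one timer tick produces one shared task for all batched
no-ops plus at most one extra task per peer that carries real traffic,
matching the [1, X+1] callback bound described above.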
Measurement:
I started a 1 master + 4 TS setup using t3.xlarge instances.
Created 200 tables with 10 hash partitions (RF=3 and 1,000
initial rows). Then performed the following random sampling:
For 1 tablet server (since the tablets are newly created, they
are distributed evenly):
+ Measure metrics for 40 seconds.
+ Also call:
sudo perf stat -e sched:sched_* -p <tserver-pid> sleep 40
before starting the write workload.
+ Right after the start of the 40-second window, start
0/3/6/.../24 separate write tasks (from 5 different VMs),
each writing 1 million rows into a random table.
(Tables are selected randomly, so the short-term history of
load is "the same" for any write count after the first few
measurements.)
I started the cluster with 3 configurations (always starting
with a fresh cluster):
1. Batching off
2. Batching on, max batch size = 10
3. Batching on, max batch size = 30
The results were the following (execution time of the
binary writing the rows using kudu_client.so):
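In the result blocks below, each "Change (10 & 30)" row is the relative
change of the batched configuration's average versus the batching-off
baseline, in percent. A minimal sketch of that computation (the function
name is invented for illustration):

```cpp
#include <cassert>
#include <cmath>

// Relative change of new_avg versus baseline_avg, in percent.
// Negative values mean the batched configuration used less of the metric.
double PercentChange(double baseline_avg, double new_avg) {
  return (new_avg - baseline_avg) / baseline_avg * 100.0;
}
```

For example, in the write count 0 block, cpu_stime drops from an average of
13568.53 (off) to 6807.37 (batch size 10), which is the reported -49.83%.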
========== write count: 0:==========
Runs with batching off: 15, 10: 19, 30: 18
Off client task runtimes avg: n/a, min n/a, max n/a, med n/a
10 client task runtimes avg: n/a, min n/a, max n/a, med n/a
30 client task runtimes avg: n/a, min n/a, max n/a, med n/a
Change (10 & 30): n/a% n/a%
Off cpu_stime avg: 13568.533333333333, min 9419, max 16947, med 13836
10 cpu_stime avg: 6807.368421052632, min 4772, max 8788, med 6899
30 cpu_stime avg: 5833.722222222223, min 3279, max 7128, med 6045.0
Change (10 & 30): -49.829740224544295% -57.005506203896594%
Off cpu_utime avg: 40332.86666666667, min 19340, max 50660, med 46046
10 cpu_utime avg: 28121.105263157893, min 8846, max 41993, med 27630
30 cpu_utime avg: 26349.055555555555, min 9592, max 39603, med 26169.0
Change (10 & 30): -30.277444706406786% -34.67100721275563%
Off rpc_incoming_queue_time avg: 85605.4, min 85252, max 85767, med 85666
10 rpc_incoming_queue_time avg: 8997.263157894737, min 8946, max 9031, med
8997
30 rpc_incoming_queue_time avg: 3497.0555555555557, min 3470, max 3532,
med 3496.5
Change (10 & 30): -89.4898415778739% -95.91491242894075%
Off utime+stime avg: 53901.4, min 32665, max 65281, med 56304
10 utime+stime avg: 34928.47368421053, min 14796, max 49508, med 35016
30 utime+stime avg: 32182.777777777777, min 15019, max 46496, med 31922.0
Change (10 & 30): -35.19932008405993% -40.29324325939998%
Off stat_runtime avg: 54531223159.181816, min 31976380571, max 65955396580,
med 63846401240
10 stat_runtime avg: 32422918329.933334, min 15178573784, max 49774116201,
med 31081119380
30 stat_runtime avg: 31854018849.733334, min 15354712860, max 46880459417,
med 33664261243
Change (10 & 30): -40.5424700720764% -41.58572464668833%
Off switch avg: 642389.0, min 623210, max 653324, med 649967
10 switch avg: 261494.33333333334, min 219716, max 270616, med 264348
30 switch avg: 249353.33333333334, min 247260, max 252089, med 249239
Change (10 & 30): -59.29346029690213% -61.18343661965984%
========== write count: 3:==========
Runs with batching off: 28, 10: 23, 30: 13
Off client task runtimes avg: 6.203379134337108, min 5.632460594177246, max
6.844575881958008, med 6.185922980308533
10 client task runtimes avg: 6.370108393655307, min 5.922394752502441, max
7.963289260864258, med 6.355126142501831
30 client task runtimes avg: 6.1353193796598, min 5.597020149230957, max
7.383537769317627, med 6.066960334777832
Change (10 & 30): 2.6877167380487066% -1.097140013586817%
Off cpu_stime avg: 14144.035714285714, min 9547, max 16631, med 15466.5
10 cpu_stime avg: 7969.0, min 4657, max 10297, med 8943
30 cpu_stime avg: 5950.0, min 3908, max 8000, med 6528
Change (10 & 30): -43.6582305009936% -57.93279852941547%
Off cpu_utime avg: 43743.46428571428, min 28862, max 54178, med 44754.5
10 cpu_utime avg: 42348.434782608696, min 17163, max 47387, med 44822
30 cpu_utime avg: 32516.30769230769, min 14061, max 44001, med 35589
Change (10 & 30): -3.1891152790136323% -25.665906385638394%
Off rpc_incoming_queue_time avg: 89446.75, min 88955, max 89839, med 89493.5
10 rpc_incoming_queue_time avg: 13017.217391304348, min 12267, max 13414,
med 13063
30 rpc_incoming_queue_time avg: 7832.2307692307695, min 7498, max 8057,
med 7876
Change (10 & 30): -85.44696437678915% -91.24369441122145%
Off utime+stime avg: 57887.5, min 40922, max 70702, med 58746.5
10 utime+stime avg: 50317.434782608696, min 24283, max 56899, med 53239
30 utime+stime avg: 38466.307692307695, min 20589, max 50935, med 43039
Change (10 & 30): -13.077201843906373% -33.54988954038834%
Off stat_runtime avg: 57709248396.1579, min 42352963849, max 69618765193,
med 55313027027
10 stat_runtime avg: 49429512557.3125, min 20883553890, max 54484284473,
med 52754845564.0
30 stat_runtime avg: 38284725561.85714, min 20290863097, max 50310539913,
med 41356308570
Change (10 & 30): -14.347329187182133% -33.65928923724118%
Off switch avg: 673106.2631578947, min 661281, max 691078, med 673573
10 switch avg: 291551.25, min 275920, max 297571, med 293681.5
30 switch avg: 272828.85714285716, min 268207, max 278893, med 272300
Change (10 & 30): -56.685702398284036% -59.46719380341614%
========== write count: 6:==========
Runs with batching off: 28, 10: 25, 30: 13
Off client task runtimes avg: 6.88737925745192, min 6.060053825378418, max
8.26358151435852, med 6.81439471244812
10 client task runtimes avg: 6.90557310740153, min 6.297971725463867, max
8.041281938552856, med 6.896089553833008
30 client task runtimes avg: 6.4807085410142555, min 6.009506940841675,
max 6.913873195648193, med 6.4735718965530396
Change (10 & 30): 0.26416216197073794% -5.904578523066817%
Off cpu_stime avg: 13949.535714285714, min 10154, max 18245, med 11736.5
10 cpu_stime avg: 7253.72, min 5144, max 10370, med 6352
30 cpu_stime avg: 6419.0, min 4378, max 9172, med 5325
Change (10 & 30): -48.00027650689859% -53.98413157631974%
Off cpu_utime avg: 50218.892857142855, min 27029, max 59428, med 54704.0
10 cpu_utime avg: 43645.04, min 28473, max 52156, med 47802
30 cpu_utime avg: 38955.230769230766, min 19804, max 46717, med 38318
Change (10 & 30): -13.09039782267487% -22.429132637299887%
Off rpc_incoming_queue_time avg: 93353.53571428571, min 92648, max 93824,
med 93383.5
10 rpc_incoming_queue_time avg: 16530.72, min 10724, max 17280, med 16793
30 rpc_incoming_queue_time avg: 11754.0, min 11185, max 12380, med 11716
Change (10 & 30): -82.29234717944342% -87.40915391145565%
Off utime+stime avg: 64168.42857142857, min 37183, max 77202, med 66883.0
10 utime+stime avg: 50898.76, min 34228, max 60366, med 54281
30 utime+stime avg: 45374.230769230766, min 27342, max 54821, med 45185
Change (10 & 30): -20.679435147235292% -29.288854691645128%
Off stat_runtime avg: 67253597832.07692, min 44608096874, max 72681918723,
med 69457593188
10 stat_runtime avg: 49455203107.55556, min 34066502223, max 57746660961,
med 52328569829
30 stat_runtime avg: 44232893390.8, min 24621447550, max 53333232716, med
46963805079
Change (10 & 30): -26.46459862111992% -34.22969950062223%
Off switch avg: 694087.7692307692, min 663129, max 713254, med 696097
10 switch avg: 311993.22222222225, min 279392, max 331949, med 315909
30 switch avg: 294191.6, min 284201, max 301534, med 298264
Change (10 & 30): -55.04988906979411% -57.61463995741616%
========== write count: 9:==========
Runs with batching off: 12, 10: 28, 30: 11
Off client task runtimes avg: 8.548090826582026, min 7.529751539230347, max
9.48080325126648, med 8.539497256278992
10 client task runtimes avg: 8.2908557114147, min 7.11676287651062, max
11.546950578689575, med 8.205906629562378
30 client task runtimes avg: 7.6591967903837865, min 6.522926092147827,
max 8.579655647277832, med 7.697844862937927
Change (10 & 30): -3.009269793523961% -10.398743464845307%
Off cpu_stime avg: 15005.166666666666, min 11661, max 19840, med 12412.0
10 cpu_stime avg: 8288.464285714286, min 5129, max 12050, med 7104.5
30 cpu_stime avg: 6007.545454545455, min 4793, max 9926, med 5750
Change (10 & 30): -44.76259764493816% -59.96348732406312%
Off cpu_utime avg: 59056.666666666664, min 56031, max 63356, med 59377.0
10 cpu_utime avg: 49433.57142857143, min 26090, max 57382, med 52271.5
30 cpu_utime avg: 44772.818181818184, min 27573, max 51210, med 50317
Change (10 & 30): -16.294680653770786% -24.186682539112404%
Off rpc_incoming_queue_time avg: 96753.83333333333, min 96291, max 97393,
med 96774.5
10 rpc_incoming_queue_time avg: 21073.464285714286, min 20380, max 21682,
med 21082.0
30 rpc_incoming_queue_time avg: 16262.272727272728, min 15920, max 16578,
med 16283
Change (10 & 30): -78.21950453052064% -83.19211532296974%
Off utime+stime avg: 74061.83333333333, min 67692, max 79069, med 73660.0
10 utime+stime avg: 57722.03571428572, min 31219, max 68704, med 61430.5
30 utime+stime avg: 50780.36363636364, min 32366, max 60608, med 56080
Change (10 & 30): -22.06237259278523% -31.435178754198212%
Off stat_runtime avg: 74426347984.6, min 69044550939, max 76319774349, med
75772265736
10 stat_runtime avg: 60146731951.2, min 56539088994, max 66356993395, med
60227311137.5
30 stat_runtime avg: 58533466007.0, min 58533466007, max 58533466007, med
58533466007
Change (10 & 30): -19.18623769683646% -21.353838268254812%
Off switch avg: 735295.8, min 709459, max 746328, med 741901
10 switch avg: 339284.5, min 316234, max 365507, med 333199.5
30 switch avg: 323555.0, min 323555, max 323555, med 323555
Change (10 & 30): -53.857413574237746% -55.99662067973189%
========== write count: 12:==========
Runs with batching off: 28, 10: 26, 30: 19
Off client task runtimes avg: 10.253776160994573, min 9.013848304748535,
max 11.68369174003601, med 10.17331576347351
10 client task runtimes avg: 10.120543375229223, min 8.6819486618042, max
12.503127098083496, med 9.816626071929932
30 client task runtimes avg: 9.125201345535746, min 6.506812334060669, max
10.293994665145874, med 9.27278220653534
Change (10 & 30): -1.2993533667349566% -11.006431169737564%
Off cpu_stime avg: 14449.607142857143, min 11600, max 21255, med 12862.5
10 cpu_stime avg: 8505.76923076923, min 6634, max 12915, med 7722.0
30 cpu_stime avg: 7201.578947368421, min 5122, max 11262, med 6769
Change (10 & 30): -41.13494473118685% -50.16072841171763%
Off cpu_utime avg: 61044.82142857143, min 44149, max 66881, med 62788.5
10 cpu_utime avg: 56311.92307692308, min 44619, max 64298, med 56596.5
30 cpu_utime avg: 50936.84210526316, min 27324, max 58087, med 54226
Change (10 & 30): -7.75315291434887% -16.558291246925204%
Off rpc_incoming_queue_time avg: 100650.96428571429, min 99736, max 101318,
med 100699.5
10 rpc_incoming_queue_time avg: 24776.884615384617, min 22855, max 25639,
med 24807.5
30 rpc_incoming_queue_time avg: 20277.105263157893, min 19619, max 21074,
med 20298
Change (10 & 30): -75.38336091341226% -79.8540377560636%
Off utime+stime avg: 75494.42857142857, min 55942, max 86122, med 76436.0
10 utime+stime avg: 64817.692307692305, min 51253, max 74834, med 64551.0
30 utime+stime avg: 58138.42105263158, min 32446, max 65996, med 61492
Change (10 & 30): -14.14241615675591% -22.989785931521702%
Off stat_runtime avg: 76728076228.42857, min 66549332918, max 82294242231,
med 80703935110
10 stat_runtime avg: 62724548520.2, min 49469140539, max 71325822979, med
64903018497
30 stat_runtime avg: 54956350974.333336, min 45573244914, max 62202256097,
med 57093551912
Change (10 & 30): -18.250852095572444% -28.375174153041748%
Off switch avg: 769651.1428571428, min 741196, max 783936, med 777328
10 switch avg: 364841.8, min 324393, max 403493, med 361469
30 switch avg: 342820.0, min 312424, max 363162, med 352874
Change (10 & 30): -52.5964713512133% -55.45774170783868%
========== write count: 15:==========
Runs with batching off: 29, 10: 25, 30: 11
Off client task runtimes avg: 12.093589447937717, min 9.694162607192993,
max 13.974745273590088, med 11.848691463470459
10 client task runtimes avg: 12.023993889490763, min 9.89328384399414, max
14.774570941925049, med 11.77902603149414
30 client task runtimes avg: 10.500564222624808, min 6.7602620124816895,
max 12.283529281616211, med 10.966847896575928
Change (10 & 30): -0.5754747897351642% -13.172476477482565%
Off cpu_stime avg: 15818.413793103447, min 12330, max 22206, med 13621
10 cpu_stime avg: 10410.12, min 7513, max 14940, med 8576
30 cpu_stime avg: 8565.09090909091, min 7139, max 12624, med 7520
Change (10 & 30): -34.18986166275009% -45.853667623582204%
Off cpu_utime avg: 64650.724137931036, min 42791, max 69649, med 66646
10 cpu_utime avg: 60900.88, min 47295, max 68286, med 61477
30 cpu_utime avg: 57182.27272727273, min 48519, max 60889, med 57494
Change (10 & 30): -5.8001579842026585% -11.551999626059118%
Off rpc_incoming_queue_time avg: 104149.72413793103, min 102866, max
105221, med 104088
10 rpc_incoming_queue_time avg: 28739.68, min 27077, max 30125, med 28748
30 rpc_incoming_queue_time avg: 24547.454545454544, min 23686, max 25125,
med 24814
Change (10 & 30): -72.40541898897541% -76.43061011573585%
Off utime+stime avg: 80469.13793103448, min 61847, max 90811, med 80286
10 utime+stime avg: 71311.0, min 60967, max 82608, med 71005
30 utime+stime avg: 65747.36363636363, min 60394, max 69941, med 66757
Change (10 & 30): -11.380932077193862% -18.294932284832033%
Off stat_runtime avg: 81980607740.77777, min 55653565554, max 87411163108,
med 85442439087
10 stat_runtime avg: 67729727090.3, min 55481286826, max 78981283495, med
67728369751.0
30 stat_runtime avg: 58272593883.666664, min 51761458017, max 64954871741,
med 58101451893
Change (10 & 30): -17.383233722222425% -28.91905111520485%
Off switch avg: 806852.8888888889, min 771841, max 826717, med 809368
10 switch avg: 405952.4, min 369202, max 436256, med 413122.0
30 switch avg: 357860.3333333333, min 340666, max 367513, med 365402
Change (10 & 30): -49.686937285552254% -55.64738773803734%
========== write count: 18:==========
Runs with batching off: 17, 10: 25, 30: 14
Off client task runtimes avg: 14.400622643676458, min 12.48328423500061,
max 17.490095376968384, med 14.047804951667786
10 client task runtimes avg: 13.62668704509735, min 11.661161422729492,
max 15.913615226745605, med 13.4107586145401
30 client task runtimes avg: 12.081949627588665, min 6.308183908462524,
max 13.923750400543213, med 12.532055974006653
Change (10 & 30): -5.374320386896292% -16.101199742956663%
Off cpu_stime avg: 16935.58823529412, min 13577, max 22825, med 14263
10 cpu_stime avg: 11495.4, min 8416, max 16228, med 9082
30 cpu_stime avg: 9291.07142857143, min 7162, max 14057, med 8078.5
Change (10 & 30): -32.12281829075564% -45.13877345453734%
Off cpu_utime avg: 70929.58823529411, min 67674, max 76405, med 70892
10 cpu_utime avg: 64712.68, min 52765, max 72801, med 65688
30 cpu_utime avg: 58736.642857142855, min 45035, max 64122, med 59485.0
Change (10 & 30): -8.764901065928676% -17.190210293768672%
Off rpc_incoming_queue_time avg: 107683.5294117647, min 106186, max 108626,
med 107884
10 rpc_incoming_queue_time avg: 32797.88, min 31176, max 33893, med 32851
30 rpc_incoming_queue_time avg: 28709.35714285714, min 28256, max 29613,
med 28559.5
Change (10 & 30): -69.54234303132272% -73.33913802817781%
Off utime+stime avg: 87865.17647058824, min 81530, max 98711, med 85818
10 utime+stime avg: 76208.08, min 65998, max 83642, med 75503
30 utime+stime avg: 68027.71428571429, min 52197, max 77436, med 69101.0
Change (10 & 30): -13.267026754894529% -22.577160806721064%
Off stat_runtime avg: 89877829083.33333, min 85904071609, max 93775583142,
med 90007821176.0
10 stat_runtime avg: 71741065414.63637, min 60447250324, max 78006781555,
med 73925084586
30 stat_runtime avg: 61287185255.0, min 56450910546, max 71324482809, med
58686673832.5
Change (10 & 30): -20.179352186934597% -31.810563428078055%
Off switch avg: 844333.5, min 833024, max 855182, med 845283.0
10 switch avg: 439501.7272727273, min 410184, max 471723, med 442107
30 switch avg: 387624.75, min 366818, max 405787, med 388947.0
Change (10 & 30): -47.94690400502558% -54.09103748696458%
========== write count: 21:==========
Runs with batching off: 14, 10: 32, 30: 15
Off client task runtimes avg: 16.00026828658824, min 13.41559386253357, max
17.893533945083618, med 15.739280700683594
10 client task runtimes avg: 15.867586015945388, min 10.710279941558838,
max 21.145692586898804, med 15.130183696746826
30 client task runtimes avg: 13.433500899208916, min 6.494636297225952,
max 15.636562824249268, med 14.214406967163086
Change (10 & 30): -0.829250286722194% -16.0420271798245%
Off cpu_stime avg: 17945.428571428572, min 13430, max 24512, med 15011.5
10 cpu_stime avg: 10738.25, min 8612, max 17861, med 9526.5
30 cpu_stime avg: 10200.266666666666, min 7895, max 15310, med 8659
Change (10 & 30): -40.161640847649224% -43.15952597026966%
Off cpu_utime avg: 72935.64285714286, min 65139, max 79458, med 73527.0
10 cpu_utime avg: 71266.0625, min 51727, max 79497, med 70270.5
30 cpu_utime avg: 63485.933333333334, min 50417, max 69125, med 65556
Change (10 & 30): -2.2891144737190006% -12.956229839940425%
Off rpc_incoming_queue_time avg: 111131.85714285714, min 109993, max
112204, med 111220.5
10 rpc_incoming_queue_time avg: 36567.9375, min 34429, max 38382, med
36493.5
30 rpc_incoming_queue_time avg: 32505.6, min 31728, max 33168, med 32553
Change (10 & 30): -67.09500008355582% -70.75042131419175%
Off utime+stime avg: 90881.07142857143, min 78856, max 98983, med 90825.0
10 utime+stime avg: 82004.3125, min 60339, max 96027, med 80178.5
30 utime+stime avg: 73686.2, min 58312, max 83714, med 74202
Change (10 & 30): -9.76744528760115% -18.920190044288653%
Off stat_runtime avg: 94648239082.2, min 92043681683, max 96604701517, med
95079846312
10 stat_runtime avg: 85979736782.5, min 79247833687, max 94078032523, med
84953429029.5
30 stat_runtime avg: 74313071072.0, min 63913748224, max 80499907081, med
76419314491.5
Change (10 & 30): -9.15865142738851% -21.48499349527183%
Off switch avg: 897395.0, min 875049, max 913416, med 901805
10 switch avg: 474644.8333333333, min 453589, max 501224, med 473482.0
30 switch avg: 434748.75, min 420292, max 452761, med 432971.0
Change (10 & 30): -47.10859394878138% -51.55436012012547%
========== write count: 24:==========
Runs with batching off: 18, 10: 21, 30: 12
Off client task runtimes avg: 17.88868013134709, min 14.400426149368286,
max 20.726062059402466, med 17.609864950180054
10 client task runtimes avg: 17.408534667600932, min 9.953590393066406,
max 21.59314775466919, med 17.0849187374115
30 client task runtimes avg: 15.434335373004554, min 7.21691107749939, max
17.90212059020996, med 16.086348295211792
Change (10 & 30): -2.684074287318594% -13.720099752030801%
Off cpu_stime avg: 18045.666666666668, min 14732, max 26013, med 15529.0
10 cpu_stime avg: 11599.238095238095, min 8406, max 18696, med 10275
30 cpu_stime avg: 11924.666666666666, min 8512, max 16717, med 9473.0
Change (10 & 30): -35.72286183993519% -33.91950052644218%
Off cpu_utime avg: 78219.0, min 72128, max 81487, med 78120.0
10 cpu_utime avg: 75045.85714285714, min 62685, max 86617, med 74246
30 cpu_utime avg: 69422.08333333333, min 61531, max 73578, med 70479.5
Change (10 & 30): -4.056741785426632% -11.246521518642105%
Off rpc_incoming_queue_time avg: 114673.38888888889, min 112955, max
116400, med 114882.5
10 rpc_incoming_queue_time avg: 40207.28571428572, min 33425, max 41661,
med 40531
30 rpc_incoming_queue_time avg: 36444.083333333336, min 35255, max 37595,
med 36542.0
Change (10 & 30): -64.93756214596223% -68.21923230275743%
Off utime+stime avg: 96264.66666666667, min 86860, max 106203, med 95218.0
10 utime+stime avg: 86645.09523809524, min 71091, max 96892, med 85338
30 utime+stime avg: 81346.75, min 70043, max 88279, med 82009.5
Change (10 & 30): -9.992837207737804% -15.496772786138225%
Off stat_runtime avg: 97726065525.8, min 92716532372, max 100737581795, med
97800868584
10 stat_runtime avg: 86468693080.0, min 82172604670, max 88534687430, med
87583740110.0
30 stat_runtime avg: 81758720786.6, min 77844194290, max 84349902830, med
82248094233
Change (10 & 30): -11.519314100318523% -16.33888016803927%
Off switch avg: 911795.0, min 899299, max 929683, med 906367
10 switch avg: 511426.5, min 475979, max 529168, med 520279.5
30 switch avg: 472257.4, min 454054, max 495222, med 464972
Change (10 & 30): -43.90992492830077% -48.20574800256636%
Change-Id: Ie92ba4de5eae00d56cd513cb644dce8fb6e14538
---
M src/kudu/consensus/CMakeLists.txt
M src/kudu/consensus/consensus.proto
M src/kudu/consensus/consensus_peers-test.cc
M src/kudu/consensus/consensus_peers.cc
M src/kudu/consensus/consensus_peers.h
M src/kudu/consensus/consensus_queue.cc
M src/kudu/consensus/consensus_queue.h
A src/kudu/consensus/multi_raft_batcher.cc
A src/kudu/consensus/multi_raft_batcher.h
A src/kudu/consensus/multi_raft_consensus_data.h
M src/kudu/consensus/peer_manager.cc
M src/kudu/consensus/peer_manager.h
M src/kudu/consensus/raft_consensus.cc
M src/kudu/consensus/raft_consensus.h
M src/kudu/consensus/raft_consensus_quorum-test.cc
M src/kudu/master/sys_catalog.cc
M src/kudu/tablet/tablet_replica-test-base.cc
M src/kudu/tablet/tablet_replica.cc
M src/kudu/tablet/tablet_replica.h
M src/kudu/tserver/tablet_copy_source_session-test.cc
M src/kudu/tserver/tablet_service.cc
M src/kudu/tserver/tablet_service.h
M src/kudu/tserver/ts_tablet_manager.cc
M src/kudu/tserver/ts_tablet_manager.h
24 files changed, 941 insertions(+), 45 deletions(-)
git pull ssh://gerrit.cloudera.org:29418/kudu refs/changes/67/22867/18
--
To view, visit http://gerrit.cloudera.org:8080/22867
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings
Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: newpatchset
Gerrit-Change-Id: Ie92ba4de5eae00d56cd513cb644dce8fb6e14538
Gerrit-Change-Number: 22867
Gerrit-PatchSet: 18
Gerrit-Owner: Zoltan Martonka <[email protected]>
Gerrit-Reviewer: Abhishek Chennaka <[email protected]>
Gerrit-Reviewer: Alexey Serbin <[email protected]>
Gerrit-Reviewer: Attila Bukor <[email protected]>
Gerrit-Reviewer: Kudu Jenkins (120)
Gerrit-Reviewer: Marton Greber <[email protected]>
Gerrit-Reviewer: Zoltan Chovan <[email protected]>
Gerrit-Reviewer: Zoltan Martonka <[email protected]>