[ 
https://issues.apache.org/jira/browse/CASSANDRA-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519900#comment-16519900
 ] 

Lerh Chuan Low edited comment on CASSANDRA-10540 at 6/22/18 3:16 AM:
---------------------------------------------------------------------

Here is another benchmark run, still using the same stressspec YAML. This time the 
process is to stop one of the nodes in a DC (same topology as before: 3 nodes in one 
DC and 2 in the other) and then run inserts for 10 minutes.
{code:bash}
nohup cassandra-stress user no-warmup profile=stressspec.yaml duration=10m \
  cl=QUORUM ops\(insert=1\) -node file=nodelist.txt -rate threads=50 \
  -log file=insert.log > nohup.txt &
{code}
After that, trigger a mixed stress workload and, at the same time, run a full 
(non-incremental) repair of the stresscql2.typestest table in the DC:
{code:bash}
nohup cassandra-stress user no-warmup profile=stressspec.yaml duration=1h \
  cl=QUORUM ops\(insert=10,simple1=10,range1=1\) -node file=nodelist.txt \
  -rate threads=50 -log file=mixed.log > nohup.txt &

nohup nodetool repair --full stresscql2 typestest > nohup.txt &
{code}
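For context, the stressspec.yaml profile itself is not attached here. A minimal 
cassandra-stress user profile of the same general shape, modelled on the 
cqlstress-example.yaml that ships with Cassandra (so the schema, column specs, 
replication settings and query bodies below are assumptions, not the actual profile 
used), might look roughly like this:
{code:yaml}
# Hypothetical sketch only - not the actual stressspec.yaml used for these runs.
# DC names dc1/dc2 and the 3/2 replication factors are assumed from the topology above.
keyspace: stresscql2
keyspace_definition: |
  CREATE KEYSPACE stresscql2 WITH replication =
    {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 2};

table: typestest
table_definition: |
  CREATE TABLE typestest (
    name text,
    choice boolean,
    date timestamp,
    value blob,
    PRIMARY KEY ((name, choice), date)
  ) WITH compaction = {'class': 'LeveledCompactionStrategy'};

columnspec:
  - name: name
    size: uniform(1..32)
  - name: value
    size: uniform(100..500)

insert:
  partitions: fixed(1)       # one partition touched per insert operation
  batchtype: UNLOGGED

queries:
  simple1:
    cql: select * from typestest where name = ? and choice = ? LIMIT 1
    fields: samerow
  range1:
    cql: select * from typestest where name = ? and choice = ? and date >= ? LIMIT 100
    fields: multirow
{code}
The ops(insert=10,simple1=10,range1=1) weighting in the mixed run above maps onto the 
insert block and the two named queries (simple1, range1) in such a profile.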
Here are the results:
||Stress Result 1||RACS||Non RACS||
|Op rate|244 op/s [insert: 116 op/s, range1: 12 op/s, simple1: 116 op/s]|221 op/s [insert: 105 op/s, range1: 11 op/s, simple1: 105 op/s]|
|Partition rate|243 pk/s [insert: 116 pk/s, range1: 10 pk/s, simple1: 116 pk/s]|220 pk/s [insert: 105 pk/s, range1: 9 pk/s, simple1: 105 pk/s]|
|Row rate|274 row/s [insert: 116 row/s, range1: 41 row/s, simple1: 116 row/s]|248 row/s [insert: 105 row/s, range1: 38 row/s, simple1: 105 row/s]|
|Latency mean|204.6 ms [insert: 2.5 ms, range1: 387.4 ms, simple1: 388.8 ms]|226.2 ms [insert: 2.7 ms, range1: 428.8 ms, simple1: 429.1 ms]|
|Latency median|39.7 ms [insert: 2.0 ms, range1: 378.0 ms, simple1: 377.7 ms]|150.3 ms [insert: 2.0 ms, range1: 385.4 ms, simple1: 383.8 ms]|
|Latency 95th percentile|706.2 ms [insert: 3.2 ms, range1: 802.2 ms, simple1: 805.3 ms]|716.2 ms [insert: 3.0 ms, range1: 837.3 ms, simple1: 841.5 ms]|
|Latency 99th percentile|941.6 ms [insert: 19.7 ms, range1: 1,022.9 ms, simple1: 1,022.4 ms]|1047.5 ms [insert: 14.8 ms, range1: 1,210.1 ms, simple1: 1,230.0 ms]|
|Latency 99.9th percentile|1183.8 ms [insert: 65.5 ms, range1: 1,232.1 ms, simple1: 1,218.4 ms]|1830.8 ms [insert: 57.5 ms, range1: 2,029.0 ms, simple1: 2,063.6 ms]|
|Latency max|7314.9 ms [insert: 550.0 ms, range1: 1,472.2 ms, simple1: 7,314.9 ms]|7457.5 ms [insert: 6,358.6 ms, range1: 7,159.7 ms, simple1: 7,457.5 ms]|
|Total partitions|874,058 [insert: 419,116, range1: 36,428, simple1: 418,514]|790,543 [insert: 378,618, range1: 33,908, simple1: 378,017]|
|Total errors|0 [insert: 0, range1: 0, simple1: 0]|0 [insert: 0, range1: 0, simple1: 0]|
|Total GC count|0|0|
|Total GC memory|0.000 KiB|0.000 KiB|
|Total GC time|0.0 seconds|0.0 seconds|
|Avg GC time|NaN ms|NaN ms|
|StdDev GC time|0.0 ms|0.0 ms|
|Total operation time|01:00:00|01:00:00|

||Stress Result 2||RACS||Non RACS||
|Op rate|247 op/s [insert: 118 op/s, range1: 12 op/s, simple1: 118 op/s]|211 op/s [insert: 100 op/s, range1: 10 op/s, simple1: 101 op/s]|
|Partition rate|246 pk/s [insert: 118 pk/s, range1: 10 pk/s, simple1: 118 pk/s]|210 pk/s [insert: 100 pk/s, range1: 9 pk/s, simple1: 101 pk/s]|
|Row rate|278 row/s [insert: 118 row/s, range1: 42 row/s, simple1: 118 row/s]|236 row/s [insert: 100 row/s, range1: 35 row/s, simple1: 101 row/s]|
|Latency mean|201.9 ms [insert: 3.8 ms, range1: 384.5 ms, simple1: 382.3 ms]|237.0 ms [insert: 2.5 ms, range1: 451.1 ms, simple1: 449.9 ms]|
|Latency median|45.0 ms [insert: 2.0 ms, range1: 374.1 ms, simple1: 372.2 ms]|153.9 ms [insert: 2.0 ms, range1: 416.3 ms, simple1: 414.4 ms]|
|Latency 95th percentile|666.4 ms [insert: 10.0 ms, range1: 761.3 ms, simple1: 759.7 ms]|753.4 ms [insert: 3.0 ms, range1: 867.7 ms, simple1: 864.0 ms]|
|Latency 99th percentile|888.7 ms [insert: 44.0 ms, range1: 973.1 ms, simple1: 968.9 ms]|1016.6 ms [insert: 18.0 ms, range1: 1,118.8 ms, simple1: 1,109.4 ms]|
|Latency 99.9th percentile|1135.6 ms [insert: 68.8 ms, range1: 1,182.8 ms, simple1: 1,181.7 ms]|1274.0 ms [insert: 58.6 ms, range1: 1,355.8 ms, simple1: 1,344.3 ms]|
|Latency max|7101.0 ms [insert: 328.5 ms, range1: 6,970.9 ms, simple1: 7,101.0 ms]|1746.9 ms [insert: 426.2 ms, range1: 1,746.9 ms, simple1: 1,598.0 ms]|
|Total partitions|886,043 [insert: 425,188, range1: 36,972, simple1: 423,883]|754,815 [insert: 361,365, range1: 31,164, simple1: 362,286]|
|Total errors|0 [insert: 0, range1: 0, simple1: 0]|0 [insert: 0, range1: 0, simple1: 0]|
|Total GC count|0|0|
|Total GC memory|0.000 KiB|0.000 KiB|
|Total GC time|0.0 seconds|0.0 seconds|
|Avg GC time|NaN ms|NaN ms|
|StdDev GC time|0.0 ms|0.0 ms|
|Total operation time|01:00:01|01:00:01|

Big thanks to Jason Brown for the repair patch; it works like a charm :)

 


> RangeAwareCompaction
> --------------------
>
>                 Key: CASSANDRA-10540
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10540
>             Project: Cassandra
>          Issue Type: New Feature
>            Reporter: Marcus Eriksson
>            Assignee: Marcus Eriksson
>            Priority: Major
>              Labels: compaction, lcs, vnodes
>             Fix For: 4.x
>
>
> Broken out from CASSANDRA-6696, we should split sstables based on ranges 
> during compaction.
> Requirements:
> * don't create tiny sstables - keep them bunched together until a single vnode 
> is big enough (configurable how big that is)
> * make it possible to run existing compaction strategies on the per-range 
> sstables
> We should probably add a global compaction strategy parameter that states 
> whether this should be enabled or not.


