[jira] [Updated] (CASSANDRA-13863) Speculative retry causes read repair even if read_repair_chance is 0.0.

2017-11-14 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-13863:
------------------------------------------
Component/s: Coordination (was: Core)

[jira] [Updated] (CASSANDRA-13863) Speculative retry causes read repair even if read_repair_chance is 0.0.

2017-11-14 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-13863:
------------------------------------------
Component/s: Core

[jira] [Updated] (CASSANDRA-13863) Speculative retry causes read repair even if read_repair_chance is 0.0.

2017-10-10 Thread Murukesh Mohanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Murukesh Mohanan updated CASSANDRA-13863:
------------------------------------------
Attachment: 0001-Use-read_repair_chance-when-starting-repairs-due-to-.patch

As a quick fix I tried using {{read_repair_chance}} in the exception handler 
for {{DigestMismatchException}}; a rough sketch of the idea is below, followed 
by the benchmark results. I ran benchmarks with YCSB ({{workloada}} with 
default settings, so {{read_repair_chance}} is 0) on 3.0.8, 3.0.9, 3.0.12, and 
3.0.12 with the patch; the results are averaged across ~50 runs.
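A minimal sketch of that gate, as a toy model under names of my own 
({{DigestMismatchGate}}, {{onDigestMismatch}}), not the actual Cassandra 3.0 
read path:
{code}
import java.util.concurrent.ThreadLocalRandom;

// Toy model of the patch idea: on a digest mismatch, fall back to the
// blocking full-data read (which writes repair mutations to stale replicas)
// only with probability read_repair_chance.
public class DigestMismatchGate {
    static class DigestMismatchException extends Exception {}

    private final double readRepairChance; // the table's read_repair_chance

    DigestMismatchGate(double readRepairChance) {
        this.readRepairChance = readRepairChance;
    }

    // Called on the coordinator when replica digests disagree.
    String onDigestMismatch(DigestMismatchException e) {
        if (ThreadLocalRandom.current().nextDouble() < readRepairChance)
            return "blocking full-data read + repair";
        // Otherwise serve the data we already have and leave reconciliation
        // to anti-entropy repair.
        return "serve existing data, skip repair";
    }

    public static void main(String[] args) {
        DigestMismatchGate gate = new DigestMismatchGate(0.0);
        // With read_repair_chance = 0.0 the repair round is always skipped.
        System.out.println(gate.onDigestMismatch(new DigestMismatchException()));
    }
}
{code}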

3.0.8:
{code}
[OVERALL], RunTime(ms), 5287.62
[OVERALL], Throughput(ops/sec), 189.70
[TOTAL_GCS_PS_Scavenge], Count, 1
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 14.47
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.27
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0
[TOTAL_GCs], Count, 1
[TOTAL_GC_TIME], Time(ms), 14.47
[TOTAL_GC_TIME_%], Time(%), 0.27
[READ], Operations, 502.55
[READ], AverageLatency(us), 2701.96
[READ], MinLatency(us), 1144.75
[READ], MaxLatency(us), 21410.62
[READ], 95thPercentileLatency(us), 4606.09
[READ], 99thPercentileLatency(us), 8593.26
[READ], Return=OK, 502.55
[CLEANUP], Operations, 1
[CLEANUP], AverageLatency(us), 2230368.60
[CLEANUP], MinLatency(us), 2229344.60
[CLEANUP], MaxLatency(us), 2231391.60
[CLEANUP], 95thPercentileLatency(us), 2231391.60
[CLEANUP], 99thPercentileLatency(us), 2231391.60
[UPDATE], Operations, 497.45
[UPDATE], AverageLatency(us), 2118.83
[UPDATE], MinLatency(us), 976.21
[UPDATE], MaxLatency(us), 21953.26
[UPDATE], 95thPercentileLatency(us), 3519.23
[UPDATE], 99thPercentileLatency(us), 7775.53
[UPDATE], Return=OK, 497.45
{code}
3.0.9:
{code}
[OVERALL], RunTime(ms), 5269.64
[OVERALL], Throughput(ops/sec), 190.36
[TOTAL_GCS_PS_Scavenge], Count, 1
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 14.26
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.27
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0
[TOTAL_GCs], Count, 1
[TOTAL_GC_TIME], Time(ms), 14.26
[TOTAL_GC_TIME_%], Time(%), 0.27
[READ], Operations, 499.26
[READ], AverageLatency(us), 2673.89
[READ], MinLatency(us), 1141.89
[READ], MaxLatency(us), 21053.04
[READ], 95thPercentileLatency(us), 4392.28
[READ], 99thPercentileLatency(us), 8742.70
[READ], Return=OK, 499.26
[CLEANUP], Operations, 1
[CLEANUP], AverageLatency(us), 2230214.04
[CLEANUP], MinLatency(us), 2229190.04
[CLEANUP], MaxLatency(us), 2231237.04
[CLEANUP], 95thPercentileLatency(us), 2231237.04
[CLEANUP], 99thPercentileLatency(us), 2231237.04
[UPDATE], Operations, 500.74
[UPDATE], AverageLatency(us), 2106.96
[UPDATE], MinLatency(us), 967.11
[UPDATE], MaxLatency(us), 21862.40
[UPDATE], 95thPercentileLatency(us), 3477.83
[UPDATE], 99thPercentileLatency(us), 7677.11
[UPDATE], Return=OK, 500.74
{code}
3.0.12:
{code}
[OVERALL], RunTime(ms), 5425.13
[OVERALL], Throughput(ops/sec), 184.86
[TOTAL_GCS_PS_Scavenge], Count, 1
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 17.42
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.32
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0
[TOTAL_GCs], Count, 1
[TOTAL_GC_TIME], Time(ms), 17.42
[TOTAL_GC_TIME_%], Time(%), 0.32
[READ], Operations, 500.49
[READ], AverageLatency(us), 2805.40
[READ], MinLatency(us), 1158.47
[READ], MaxLatency(us), 24314.62
[READ], 95thPercentileLatency(us), 4903.83
[READ], 99thPercentileLatency(us), 9662.70
[READ], Return=OK, 500.49
[CLEANUP], Operations, 1
[CLEANUP], AverageLatency(us), 2230716.38
[CLEANUP], MinLatency(us), 2229692.38
[CLEANUP], MaxLatency(us), 2231739.38
[CLEANUP], 95thPercentileLatency(us), 2231739.38
[CLEANUP], 99thPercentileLatency(us), 2231739.38
[UPDATE], Operations, 499.51
[UPDATE], AverageLatency(us), 2225.51
[UPDATE], MinLatency(us), 971.92
[UPDATE], MaxLatency(us), 23552.06
[UPDATE], 95thPercentileLatency(us), 3822.02
[UPDATE], 99thPercentileLatency(us), 9153.19
[UPDATE], Return=OK, 499.51
{code}
3.0.12 with patch:
{code}
[OVERALL], RunTime(ms), 5128.40
[OVERALL], Throughput(ops/sec), 195.93
[TOTAL_GCS_PS_Scavenge], Count, 1
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 12.13
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.24
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0
[TOTAL_GCs], Count, 1
[TOTAL_GC_TIME], Time(ms), 12.13
[TOTAL_GC_TIME_%], Time(%), 0.24
[READ], Operations, 500.79
[READ], AverageLatency(us), 2557.40
[READ], MinLatency(us), 1081.06
[READ], MaxLatency(us), 21607.91
[READ], 95thPercentileLatency(us), 4195.49
[READ], 99thPercentileLatency(us), 7990.74
[READ], Return=OK, 500.79
[CLEANUP], Operations, 1
[CLEANUP], AverageLatency(us), 2229325.28
[CLEANUP], MinLatency(us), 2228301.28
[CLEANUP], MaxLatency(us), 2230348.28
[CLEANUP], 95thPercentileLatency(us), 2230348.28
[CLEANUP], 99thPercentileLatency(us), 2230348.28
[UPDATE], 
{code}

[jira] [Updated] (CASSANDRA-13863) Speculative retry causes read repair even if read_repair_chance is 0.0.

2017-10-05 Thread Shogo Hoshii (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shogo Hoshii updated CASSANDRA-13863:
------------------------------------------
Attachment: speculative retries.pdf

The results of a performance test comparing 3.0.9 and 3.0.12 clusters.

[jira] [Updated] (CASSANDRA-13863) Speculative retry causes read repair even if read_repair_chance is 0.0.

2017-09-12 Thread Hiro Wakabayashi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiro Wakabayashi updated CASSANDRA-13863:
------------------------------------------
Description: 
{{read_repair_chance = 0.0}} and {{dclocal_read_repair_chance = 0.0}} should 
cause no read repair, but read repair still happens when speculative retry 
fires. I think {{read_repair_chance = 0.0}} and 
{{dclocal_read_repair_chance = 0.0}} should stop read repair completely, 
because in some cases the user wants no read repair at all. Two such cases 
follow; a toy model of the interaction is sketched after them.

{panel:title=Case 1: TWCS users}
The 
[documentation|http://cassandra.apache.org/doc/latest/operating/compaction.html?highlight=read_repair_chance]
 states how to disable read repair.
{quote}While TWCS tries to minimize the impact of comingled data, users should 
attempt to avoid this behavior. Specifically, users should avoid queries that 
explicitly set the timestamp via CQL USING TIMESTAMP. Additionally, users 
should run frequent repairs (which streams data in such a way that it does not 
become comingled), and disable background read repair by setting the table’s 
read_repair_chance and dclocal_read_repair_chance to 0.
{quote}
{panel}
{panel:title=Case 2: Strict SLA for read latency}
At peak times read latency is critical for us, but read repair makes latency 
higher than it would be without it. We can run anti-entropy repair during 
off-peak hours to maintain consistency.
{panel}
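
To make the interaction concrete, here is a toy model (illustrative code of my 
own, not Cassandra source) of the read path this ticket describes: speculative 
retry widens the set of contacted replicas, and any mismatch among their 
responses triggers a foreground read repair without consulting 
{{read_repair_chance}}.
{code}
import java.util.List;
import java.util.Objects;

// Toy model only: speculative retry contacts an extra replica, and any
// mismatch between replica responses is repaired in the foreground,
// regardless of read_repair_chance.
public class SpeculativeReadModel {
    static String coordinatorRead(List<String> replicaValues, boolean speculate) {
        int contacted = speculate ? 2 : 1; // 'ALWAYS' adds an extra replica read
        String data = replicaValues.get(0);
        for (int i = 1; i < contacted; i++)
            if (!Objects.equals(data, replicaValues.get(i)))
                return "digest mismatch -> foreground read repair";
        return "responses consistent -> no repair";
    }

    public static void main(String[] args) {
        List<String> replicas = List.of("v2", "v1", "v1"); // one replica is ahead
        System.out.println(coordinatorRead(replicas, false)); // plain CL.ONE: no repair
        System.out.println(coordinatorRead(replicas, true));  // with speculation: repair
    }
}
{code}
With {{speculative_retry = 'ALWAYS'}}, as in the table below, the mismatch 
branch can be taken on every read, even at consistency level ONE.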
 
Here is my procedure to reproduce the problem.

h3. 1. Create a cluster and set {{hinted_handoff_enabled}} to false.
{noformat}
$ ccm create -v 3.0.14 -n 3 cluster_3.0.14
$ for h in $(seq 1 3) ; do perl -pi -e 's/hinted_handoff_enabled: 
true/hinted_handoff_enabled: false/' 
~/.ccm/cluster_3.0.14/node$h/conf/cassandra.yaml ; done
$ for h in $(seq 1 3) ; do grep "hinted_handoff_enabled:" 
~/.ccm/cluster_3.0.14/node$h/conf/cassandra.yaml ; done
hinted_handoff_enabled: false
hinted_handoff_enabled: false
hinted_handoff_enabled: false
$ ccm start
{noformat}
h3. 2. Create a keyspace and a table.
{noformat}
$ ccm node1 cqlsh
DROP KEYSPACE IF EXISTS ks1;
CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': '3'}  AND durable_writes = true;
CREATE TABLE ks1.t1 (
key text PRIMARY KEY,
value blob
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = 'ALWAYS';
QUIT;
{noformat}
h3. 3. Stop node2 and node3. Insert a row.
{noformat}
$ ccm node3 stop && ccm node2 stop && ccm status
Cluster: 'cluster_3.0.14'
--
node1: UP
node3: DOWN
node2: DOWN

$ ccm node1 cqlsh -k ks1 -e "consistency; tracing on; insert into ks1.t1 (key, 
value) values ('mmullass', bigintAsBlob(1));"
Current consistency level is ONE.
Now Tracing is enabled

Tracing session: 01d74590-97cb-11e7-8ea7-c1bd4d549501

 activity                                                                                            | timestamp                  | source    | source_elapsed
-----------------------------------------------------------------------------------------------------+----------------------------+-----------+----------------
 Execute CQL3 query                                                                                  | 2017-09-12 23:59:42.316000 | 127.0.0.1 |              0
 Parsing insert into ks1.t1 (key, value) values ('mmullass', bigintAsBlob(1)); [SharedPool-Worker-1] | 2017-09-12 23:59:42.319000 | 127.0.0.1 |           4323
 Preparing statement [SharedPool-Worker-1]                                                           | 2017-09-12 23:59:42.32     | 127.0.0.1 |           5250
 Determining replicas for mutation [SharedPool-Worker-1]                                             | 2017-09-12 23:59:42.327000 | 127.0.0.1 |          11886
 Appending to commitlog [SharedPool-Worker-3]                                                        | 2017-09-12 23:59:42.327000 | 127.0.0.1 |          12195
 Adding to t1 memtable [SharedPool-Worker-3]                                                         | 2017-09-12 23:59:42.327000 | 127.0.0.1 |          12392
 Request complete                                                                                    | 2017-09-12 23:59:42.328680 | 127.0.0.1 |          12680


$ ccm node1 cqlsh -k ks1 -e "consistency; tracing on; select * from ks1.t1