[ https://issues.apache.org/jira/browse/CASSANDRA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15061671#comment-15061671 ]

Simon Ashley commented on CASSANDRA-10887:
------------------------------------------

Ran a repro on 2.0.16 based on the description:

(1)

CREATE KEYSPACE ks1 WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': 3};

(2)

CREATE TABLE ks1.users (
  login varchar,
  email varchar,
  name varchar,
  login_count int,
  PRIMARY KEY (login));

(3)

INSERT INTO ks1.users (login, email, name, login_count) VALUES ('jdoe', '[email protected]', 'Jane Doe', 1) IF NOT EXISTS;

(4)

#Run a simple lwt in a loop

cat lwtloop.sh

#!/bin/bash

#loop $1 times 

for (( a=1; a<=$1; a++ ))
do
 cqlsh -e "UPDATE ks1.users SET email = '[email protected]' WHERE login = 'jdoe' IF email = '[email protected]';"
 echo count: $a
done

./lwtloop.sh 1000

Each iteration outputs the following, along with the iteration count:

 [applied] | email
-----------+-----------------
     False | [email protected]

count: 304

(5) Initiate a nodetool move command

 nodetool -h 1.2.3.4 move -- -634023222112864484
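
Before the kill, the move can be sanity-checked on the node being moved; while it is in flight, the first line of netstats should report Mode: MOVING (assuming nodetool is pointed at the same 1.2.3.4 host as above):

 nodetool -h 1.2.3.4 netstats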

(6) Kill the node the move is running against

sudo kill -9 <cassandra pid>
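
One way to find the pid (this assumes the stock CassandraDaemon main class):

 pgrep -f CassandraDaemon

After the kill, running this from a surviving node should show the killed node as Down/Moving, as in the ring output quoted below:

 nodetool ring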

(7) The lwtloop.sh script reports:

<stdin>:1:Request did not complete within rpc_timeout.
count: 305

followed by

<stdin>:1:Unable to complete request: one or more nodes were unavailable.
count: 322

(8) Bring node back online

sudo cassandra

The lwtloop.sh script now continues:

[applied] | email
-----------+-----------------
     False | [email protected]

count: 587



> Pending range calculator gives wrong pending ranges for moves
> -------------------------------------------------------------
>
>                 Key: CASSANDRA-10887
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10887
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Coordination
>            Reporter: Richard Low
>            Priority: Critical
>
> My understanding is the PendingRangeCalculator is meant to calculate who 
> should receive extra writes during range movements. However, it adds the 
> wrong ranges for moves. An extreme example of this can be seen in the 
> following reproduction. Create a 5 node cluster (I did this on 2.0.16 and 
> 2.2.4) and a keyspace RF=3 and a simple table. Then start moving a node and 
> immediately kill -9 it. Now you see a node as down and moving in the ring. 
> Try a quorum write for a partition that is stored on that node - it will fail 
> with a timeout. Further, all CAS reads or writes fail immediately with 
> unavailable exception because they attempt to include the moving node twice. 
> This is likely to be the cause of CASSANDRA-10423.
> In my example I had this ring:
> 127.0.0.1  rack1       Up     Normal  170.97 KB       20.00%              -9223372036854775808
> 127.0.0.2  rack1       Up     Normal  124.06 KB       20.00%              -5534023222112865485
> 127.0.0.3  rack1       Down   Moving  108.7 KB        40.00%              1844674407370955160
> 127.0.0.4  rack1       Up     Normal  142.58 KB       0.00%               1844674407370955161
> 127.0.0.5  rack1       Up     Normal  118.64 KB       20.00%              5534023222112865484
> Node 3 was moving to -1844674407370955160. I added logging to print the 
> pending and natural endpoints. For ranges owned by node 3, node 3 appeared in 
> pending and natural endpoints. The blockFor is increased to 3 so we’re 
> effectively doing CL.ALL operations. This manifests as write timeouts and CAS 
> unavailables when the node is down.
> The correct pending range for this scenario is that node 1 is gaining the range
> (-1844674407370955160, 1844674407370955160). So node 1 should be added as a
> destination for writes and CAS for this range, not node 3.
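
For reference, the arithmetic behind the CL.ALL effect described above (RF=3 with the moving node wrongly counted again as the pending endpoint for its own range) can be sketched with plain shell arithmetic; this is illustrative only, not Cassandra code:

#!/bin/bash
# Illustrative only: blockFor for a QUORUM write when the moving node is
# double-counted as a pending endpoint for its own range.
RF=3
QUORUM=$(( RF / 2 + 1 ))            # 2
PENDING=1                           # the moving node, counted a second time
BLOCKFOR=$(( QUORUM + PENDING ))    # 3, i.e. every natural replica
LIVE=$(( RF - 1 ))                  # the moving node itself is down
echo "blockFor=$BLOCKFOR but only $LIVE natural replicas are live"

With blockFor equal to RF, a single down replica is enough to time out QUORUM writes and leave CAS operations unavailable, which matches what the repro above shows.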



