[ https://issues.apache.org/jira/browse/CASSANDRA-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13935075#comment-13935075 ]

Cyril Scetbon edited comment on CASSANDRA-6852 at 3/17/14 1:42 PM:
-------------------------------------------------------------------

Great! Can you tell me how it works internally, or point me to a URL where it is 
explained? To check where a key is stored, I sorted all the tokens to find the 
ranges and then looked for the range containing my key.

Here is what I found:
{code}
cqlsh:ks1> select str,token(str) from t1 where str='str46947' limit 10;

 str      | token(str)
----------+--------------------
 str46947 | 936110467609605413

$ nodetool ring ks1 | grep Up | sort -k 8 -n | grep -C 1 936104068755107049
10.0.1.128  r01         Up     Normal  163.72 KB       0.00%               904295202895283495
10.0.1.193  r01         Up     Normal  44 MB           50.75%              914026073010016644
10.0.1.244  r01         Up     Normal  44.05 MB        50.89%              924832200596750447
10.0.1.244  r01         Up     Normal  44.05 MB        50.89%              926119170189943733
10.0.1.128  r01         Up     Normal  163.72 KB       0.00%               936104068755107049   <--
10.0.1.128  r01         Up     Normal  163.72 KB       0.00%               943395402795400988   <--
10.0.1.119  r01         Up     Normal  42.08 MB        48.58%              947638296150677630
10.0.1.128  r01         Up     Normal  163.72 KB       0.00%               948472585151216644
10.0.1.69   r01         Up     Normal  166.82 KB       0.00%               956130981441305365
{code}
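In code terms, the lookup I did by hand boils down to this (a minimal Java sketch 
with the ring hard-coded from the nodetool output above; the class and variable 
names are made up and this is not the actual Cassandra code):
{code}
import java.util.TreeMap;

// Minimal sketch of the manual lookup: a key belongs to the node owning the
// first ring token >= token(key), wrapping around past the largest token.
// The ring below only contains the tokens shown in the nodetool output above.
public class PrimaryOwnerLookup
{
    public static void main(String[] args)
    {
        TreeMap<Long, String> ring = new TreeMap<>();
        ring.put(904295202895283495L, "10.0.1.128");
        ring.put(914026073010016644L, "10.0.1.193");
        ring.put(924832200596750447L, "10.0.1.244");
        ring.put(926119170189943733L, "10.0.1.244");
        ring.put(936104068755107049L, "10.0.1.128");
        ring.put(943395402795400988L, "10.0.1.128");
        ring.put(947638296150677630L, "10.0.1.119");

        long keyToken = 936110467609605413L; // token(str46947) from the cqlsh output

        // First token >= keyToken; if there is none, wrap around to the smallest token.
        Long ownerToken = ring.ceilingKey(keyToken);
        if (ownerToken == null)
            ownerToken = ring.firstKey();

        System.out.println("token " + keyToken + " -> " + ring.get(ownerToken));
        // prints: token 936110467609605413 -> 10.0.1.128
    }
}
{code}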
As the token for key str46947 falls between 936104068755107049 and 943395402795400988, 
I suppose this key should be stored on 10.0.1.128. Does the code check that and, 
since it knows this node can't store data for that keyspace, store it in the 
previous or the next range that is not owned by this node?
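Here is, very roughly, what I imagine happens (just a guess sketched in Java to 
make my question concrete; ReplicaGuess/replicasFor are made-up names, this is 
not the actual org.apache.cassandra.locator code, and it ignores racks and other 
subtleties): walk the ring clockwise from the key's token and keep only endpoints 
whose datacenter still needs replicas for this keyspace, so a DC where the 
keyspace has no replicas is skipped entirely.
{code}
import java.util.*;

// Purely illustrative guess, not the real NetworkTopologyStrategy code:
// walk the ring clockwise from the key's token and keep only endpoints whose
// DC still needs replicas for this keyspace; a DC with RF 0 is never chosen.
public class ReplicaGuess
{
    static List<String> replicasFor(long keyToken,
                                    TreeMap<Long, String> ring,      // token -> endpoint
                                    Map<String, String> endpointDc,  // endpoint -> DC
                                    Map<String, Integer> rfPerDc)    // keyspace RF per DC
    {
        List<String> replicas = new ArrayList<>();
        Map<String, Integer> remaining = new HashMap<>(rfPerDc);

        // Tokens clockwise, starting at the first token >= keyToken, then wrapping around.
        List<Long> tokens = new ArrayList<>(ring.tailMap(keyToken).keySet());
        tokens.addAll(ring.headMap(keyToken).keySet());

        for (long t : tokens)
        {
            String endpoint = ring.get(t);
            String dc = endpointDc.get(endpoint);
            int needed = remaining.getOrDefault(dc, 0);
            if (needed > 0 && !replicas.contains(endpoint))
            {
                replicas.add(endpoint);
                remaining.put(dc, needed - 1);
            }
        }
        return replicas;
    }
}
{code}
If something like that is what the code does, then for a keyspace that is not 
replicated to this DC the key would simply land on the next node(s) along the 
ring that belong to a DC where the keyspace is replicated. Is that right?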

Thanks


> can't repair -pr part of data when not replicating data everywhere (multiDCs)
> -----------------------------------------------------------------------------
>
>                 Key: CASSANDRA-6852
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6852
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Cyril Scetbon
>              Labels: multi-dcs, ranges, repair
>
> Our environment is as follows :
> - 3 DCS : dc1,dc2 and dc3
> - replicate all keyspaces to dc1 and dc2
> - replicate a few keyspaces to dc3 as we have less hardware and use it for 
> computing statistics
> We regularly run repair -pr everywhere. FYI, a full repair takes almost 20 
> hours per node. The problem is that we can no longer use "repair -pr" for 
> the ranges owned by dc3 nodes for keyspaces that are not replicated there. 
> We should have a way to repair those ranges without doing a FULL repair everywhere.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
