[ https://issues.apache.org/jira/browse/CASSANDRA-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859959#comment-15859959 ]

Marcus Eriksson commented on CASSANDRA-13079:
---------------------------------------------

The problem is that if we automatically mark all sstables as unrepaired, all 
repaired sstables will potentially move to L0 in the unrepaired compaction 
strategy. That would cause a lot of compactions across the cluster, which 
would probably be even more surprising to users than the fact that they have 
to run repair -full.
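
For reference, a sketch of the manual remedy this implies, using the keyspace 
from the report below. Both tools ship with stock Cassandra; the data path is 
illustrative and depends on the installation:

    # Force a full (non-incremental) repair so data is streamed
    # regardless of the sstables' repaired status:
    nodetool repair -full rep

    # Alternatively, with the node stopped, clear the repaired flag
    # offline so the next incremental repair reconsiders the sstables
    # (illustrative path; point it at the keyspace's Data.db files):
    sstablerepairedset --really-set --is-unrepaired /var/lib/cassandra/data/rep/data-*/*-Data.db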

> Repair doesn't work after several replication factor changes
> ------------------------------------------------------------
>
>                 Key: CASSANDRA-13079
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13079
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Debian 
>            Reporter: Vladimir Yudovin
>            Assignee: Paulo Motta
>            Priority: Critical
>
> Scenario:
> Start a two-node cluster.
> Create a keyspace with replication factor *one*:
> CREATE KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE rep.data (str text PRIMARY KEY);
> INSERT INTO rep.data (str) VALUES ( 'qwerty');
> Run *nodetool flush* on all nodes. Table files are created on one of them.
> Change replication factor to *two*:
> ALTER KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> Run repair, then *nodetool flush* on all nodes. Table files are created on 
> all nodes.
> Change replication factor to *one*:
> ALTER KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> Then run *nodetool cleanup*; data files remain only on the initial node.
> Change replication factor to *two* again:
> ALTER KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> Run repair, then *nodetool flush* on all nodes. No data files appear on the 
> second node (though they are expected, as after the first repair/flush).


