[ https://issues.apache.org/jira/browse/CASSANDRA-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17309294#comment-17309294 ]

Alexander Dejanovski commented on CASSANDRA-16245:
--------------------------------------------------

[~e.dimitrova], this work hasn't been formally reviewed.

There's some flakiness [in the CI 
runs|https://app.circleci.com/pipelines/github/riptano/cassandra-rtest?branch=trunk], 
caused by transient S3 download failures in Medusa. This is being addressed in 
Medusa itself and should be fixed shortly [with this 
PR|https://github.com/thelastpickle/cassandra-medusa/pull/295].

It would be good to have someone validate that the test scenarios were 
implemented according to the description, so that this ticket can be closed.

Then we'll work on getting this integrated into Cassandra (or into a 
subproject) post-4.0.

> Implement repair quality test scenarios
> ---------------------------------------
>
>                 Key: CASSANDRA-16245
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-16245
>             Project: Cassandra
>          Issue Type: Task
>          Components: Test/dtest/java
>            Reporter: Alexander Dejanovski
>            Assignee: Radovan Zvoncek
>            Priority: Normal
>             Fix For: 4.0-rc
>
>
> Implement the following test scenarios in a new test suite for repair 
> integration testing under significant load:
> Generate/restore a workload of ~100GB per node. Medusa should be considered 
> for creating the initial backup, which could then be restored from an S3 
> bucket to speed up node population.
> The data should be generated so that it deliberately requires repair; one 
> way to do this is sketched below.
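> As an illustration (not prescribed by this ticket), divergence can be forced 
> by stopping one replica, disabling hinted handoff, and writing at consistency 
> level ONE so the surviving replicas drift away from the downed node. A minimal 
> sketch using the DataStax Java driver; the keyspace, table, and row count are 
> placeholders:
> {code:java}
> import com.datastax.oss.driver.api.core.ConsistencyLevel;
> import com.datastax.oss.driver.api.core.CqlSession;
> import com.datastax.oss.driver.api.core.cql.SimpleStatement;
> 
> public class DivergenceWriter {
>     public static void main(String[] args) {
>         // Assumes a local contact point; stop one node and disable hinted
>         // handoff before running so the CL.ONE writes skip that replica.
>         try (CqlSession session = CqlSession.builder().build()) {
>             session.execute("CREATE KEYSPACE IF NOT EXISTS repair_test WITH replication = "
>                     + "{'class': 'NetworkTopologyStrategy', 'replication_factor': 3}");
>             session.execute("CREATE TABLE IF NOT EXISTS repair_test.t (pk int PRIMARY KEY, v text)");
>             for (int i = 0; i < 100_000; i++) {
>                 // Each CL.ONE write lands on the live replicas only, so the downed
>                 // node comes back out of sync and the next repair has ranges to stream.
>                 session.execute(SimpleStatement
>                         .newInstance("INSERT INTO repair_test.t (pk, v) VALUES (?, ?)", i, "v" + i)
>                         .setConsistencyLevel(ConsistencyLevel.ONE));
>             }
>         }
>     }
> }
> {code}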
> Perform repairs on a 3-node cluster with 4 cores and 16GB-32GB of RAM per 
> node (m5d.xlarge instances would be the most cost-efficient type).
> Repaired keyspaces will use RF=3, or RF=2 in some cases (the latter is for 
> subranges with different sets of replicas). A sketch of the repairedAt check 
> used throughout the table follows it.
> ||Mode||Version||Settings||Checks||
> |Full repair|trunk|Sequential + all token ranges|No anticompaction (repairedAt == 0); out-of-sync ranges > 0; a subsequent run must show no out-of-sync ranges|
> |Full repair|trunk|Parallel + primary range|No anticompaction (repairedAt == 0); out-of-sync ranges > 0; a subsequent run must show no out-of-sync ranges|
> |Full repair|trunk|Force terminate the repair shortly after it is triggered|Repair threads must be cleaned up|
> |Subrange repair|trunk|Sequential + a single token range|No anticompaction (repairedAt == 0); out-of-sync ranges > 0; a subsequent run must show no out-of-sync ranges|
> |Subrange repair|trunk|Parallel + 10 token ranges with the same replicas|No anticompaction (repairedAt == 0); out-of-sync ranges > 0; a subsequent run must show no out-of-sync ranges; a single repair session handles all subranges at once|
> |Subrange repair|trunk|Parallel + 10 token ranges with different replicas|No anticompaction (repairedAt == 0); out-of-sync ranges > 0; a subsequent run must show no out-of-sync ranges; more than one repair session is triggered to process all subranges|
> |Incremental repair|trunk|Parallel (mandatory); no compaction during repair|Anticompaction performed (repairedAt != 0) on all SSTables; no pending repair on SSTables after completion (may require a short wait, as this happens asynchronously); out-of-sync ranges > 0; a subsequent run must show no out-of-sync ranges|
> |Incremental repair|trunk|Parallel (mandatory); major compaction triggered during repair|Anticompaction performed (repairedAt != 0) on all SSTables; no pending repair on SSTables after completion (may require a short wait, as this happens asynchronously); out-of-sync ranges > 0; a subsequent run must show no out-of-sync ranges|
> |Incremental repair|trunk|Force terminate the repair shortly after it is triggered|Repair threads must be cleaned up|
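> The repairedAt checks in the table above can be validated by reading the 
> SSTable metadata. A minimal sketch, assuming the sstablemetadata tool is on 
> the PATH and prints its usual "Repaired at: <value>" line; the data directory 
> argument is a placeholder:
> {code:java}
> import java.io.BufferedReader;
> import java.io.InputStreamReader;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.Paths;
> import java.util.List;
> import java.util.stream.Collectors;
> import java.util.stream.Stream;
> 
> public class RepairedAtCheck {
>     public static void main(String[] args) throws Exception {
>         // e.g. /var/lib/cassandra/data/repair_test/t-<tableId>
>         Path dataDir = Paths.get(args[0]);
>         List<Path> sstables;
>         try (Stream<Path> files = Files.list(dataDir)) {
>             sstables = files.filter(f -> f.toString().endsWith("-Data.db"))
>                             .collect(Collectors.toList());
>         }
>         for (Path sstable : sstables) {
>             Process proc = new ProcessBuilder("sstablemetadata", sstable.toString()).start();
>             try (BufferedReader out = new BufferedReader(new InputStreamReader(proc.getInputStream()))) {
>                 String line;
>                 while ((line = out.readLine()) != null) {
>                     if (line.trim().startsWith("Repaired at:")) {
>                         String value = line.substring(line.indexOf(':') + 1).trim();
>                         // Full and subrange repair must leave repairedAt at 0; flip
>                         // this assertion to != 0 for the incremental scenarios.
>                         if (!value.equals("0")) {
>                             throw new AssertionError(
>                                     "Unexpected anticompaction on " + sstable + ": " + line.trim());
>                         }
>                     }
>                 }
>             }
>             proc.waitFor();
>         }
>     }
> }
> {code}
> The out-of-sync assertions could similarly scan the repair logs for the 
> "range(s) out of sync" messages emitted during validation.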


