[
https://issues.apache.org/jira/browse/CASSANDRA-10643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15406930#comment-15406930
]
Vishy Kasar commented on CASSANDRA-10643:
-----------------------------------------
Thanks for the quick review, Marcus. I have taken care of most of your feedback.
* Ongoing compactions are cancelled
* In CompactionManager, submitOnSSTables and submitTask were merged into a
single method. I have kept submitOnSSTables public to keep parity with
submitMaximal.
* In ColumnFamilyStore#sstablesInBounds:
** Used the Set
** Used the View.sstablesInBounds
** I have kept the Range to keep parity with Repair
> Implement compaction for a specific token range
> -----------------------------------------------
>
> Key: CASSANDRA-10643
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10643
> Project: Cassandra
> Issue Type: Improvement
> Components: Compaction
> Reporter: Vishy Kasar
> Assignee: Vishy Kasar
> Labels: lcs
> Attachments: 10643-trunk-REV01.txt, 10643-trunk-REV02.txt,
> 10643-trunk-REV03.txt
>
>
> We see repeated cases in production (using LCS) where a small number of users
> generate a large number of repeated updates or tombstones. Reading such a
> user's data brings large amounts of data into the Java process. Apart from
> the read itself being slow for that user, the excessive GC affects other
> users as well. Our solution so far is to move from LCS to SCS and back. This
> takes long and is overkill if the number of outliers is small. For such
> cases, we can implement point compaction of a token range: make nodetool
> compact take a starting and ending token and compact all the SSTables that
> fall within that range. We can refuse to compact if the number of sstables
> is beyond a max_limit.
> Example:
> nodetool -st 3948291562518219268 -et 3948291562518219269 compact keyspace
> table
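The selection step described above can be sketched roughly as follows. This is a minimal, self-contained illustration of picking sstables whose token span overlaps the requested range and enforcing a max limit; the class and field names here are hypothetical stand-ins, not Cassandra's actual ColumnFamilyStore#sstablesInBounds code, and the sketch assumes non-wrapping token bounds with first <= last.

```java
import java.util.ArrayList;
import java.util.List;

public class TokenRangeCompaction {
    // Hypothetical stand-in for an sstable's token coverage; Cassandra tracks
    // the first/last keys per sstable, which this simplifies to two longs.
    static final class SSTableSpan {
        final long first, last; // inclusive token bounds, assumed first <= last
        SSTableSpan(long first, long last) { this.first = first; this.last = last; }
    }

    // Keep only sstables whose token span intersects [st, et]; refuse the
    // compaction if more than maxLimit sstables fall in the range.
    static List<SSTableSpan> sstablesInBounds(List<SSTableSpan> all,
                                              long st, long et, int maxLimit) {
        List<SSTableSpan> hits = new ArrayList<>();
        for (SSTableSpan s : all) {
            if (s.last >= st && s.first <= et) // standard interval-overlap test
                hits.add(s);
        }
        if (hits.size() > maxLimit)
            throw new IllegalStateException(
                "refusing to compact " + hits.size() + " sstables (limit " + maxLimit + ")");
        return hits;
    }

    public static void main(String[] args) {
        List<SSTableSpan> all = new ArrayList<>();
        all.add(new SSTableSpan(0, 100));
        all.add(new SSTableSpan(200, 300));
        all.add(new SSTableSpan(90, 210));
        // Spans (0,100) and (90,210) overlap [95,150]; (200,300) does not.
        System.out.println(sstablesInBounds(all, 95, 150, 10).size());
    }
}
```

The overlap test (`last >= st && first <= et`) is the usual closed-interval intersection check; a real implementation would additionally handle wrapping ranges, which this sketch deliberately omits.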
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)