[
https://issues.apache.org/jira/browse/CASSANDRA-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13850687#comment-13850687
]
Nikolai Grigoriev edited comment on CASSANDRA-6496 at 12/17/13 5:24 PM:
------------------------------------------------------------------------
Cool!!!!! I got the source tagged 2.0.3, applied the patch, recompiled and
restarted the node. Clearly it now compacts groups of 32 L0 sstables into
large ones. I see that it just did one round and created an 8GB sstable from
32 256MB ones.
Thanks a lot for the patch! I will revert the compaction settings to give it
enough resources and let it complete its job to see the end results before I
restart the test traffic.
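For context, a toy sketch (not Cassandra code, just the arithmetic behind the two behaviours described in this ticket): before the patch, a round over 32 x 256MB L0 sstables came back out as 32 sstables of the same size, so L0 never shrank; with the patch, the same ~8GB of input lands in one large sstable and L0 drops by 31 files per round. The 32-file batch, sstable_size_in_mb=256 and the ~4700-file backlog are taken from this ticket; everything else is illustrative.
{code:java}
// Toy model (not Cassandra code): the numbers 32, 256MB and ~4700 files come
// from this ticket; the rest is purely illustrative.
public class L0CompactionSketch {

    static final int BATCH = 32;            // L0 sstables picked per compaction round
    static final int SSTABLE_SIZE_MB = 256; // sstable_size_in_mb reported for the CF

    // Pre-patch behaviour described in the issue: ~8GB of input is re-split into
    // 256MB outputs, so 32 L0 files are replaced by 32 L0 files and L0 never shrinks.
    static int compactAndResplit(int l0Count) {
        int inputMb = BATCH * SSTABLE_SIZE_MB;   // 32 * 256 = 8192 MB
        int outputs = inputMb / SSTABLE_SIZE_MB; // 8192 / 256 = 32
        return l0Count - BATCH + outputs;        // unchanged
    }

    // Post-patch behaviour described above: the same 32 inputs come out as one
    // large (~8GB) sstable, so each round shrinks L0 by 31 files.
    static int compactIntoOne(int l0Count) {
        return l0Count - BATCH + 1;
    }

    public static void main(String[] args) {
        int l0 = 4700; // approximate sstable count reported for the affected CF
        System.out.println("one round, pre-patch:  " + compactAndResplit(l0)); // 4700
        System.out.println("one round, post-patch: " + compactIntoOne(l0));    // 4669
        System.out.println("input per round: " + (BATCH * SSTABLE_SIZE_MB) + " MB (~8GB)");
    }
}
{code}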
> Endless L0 LCS compactions
> --------------------------
>
> Key: CASSANDRA-6496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6496
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: Cassandra 2.0.3, Linux, 6 nodes, 5 disks per node
> Reporter: Nikolai Grigoriev
> Assignee: Jonathan Ellis
> Labels: compaction
> Fix For: 2.0.4
>
> Attachments: 6496.txt, system.log.1.gz, system.log.gz
>
>
> I first described the problem here:
> http://stackoverflow.com/questions/20589324/cassandra-2-0-3-endless-compactions-with-no-traffic
> I think I really abused my system with the traffic (a mix of reads, heavy
> updates and some deletes). Now, after stopping the traffic, I see compactions
> that have been going on endlessly for over 4 days.
> For a specific CF I have about 4700 sstable data files right now. The
> compaction estimates are logged as "[3312, 4, 0, 0, 0, 0, 0, 0, 0]".
> sstable_size_in_mb=256. 3214 files are about 256MB (+/- a few megs); the
> other files are smaller or much smaller than that. No sstables are larger
> than 256MB. What I observe is that LCS picks 32 sstables from L0 and compacts
> them into 32 sstables of approximately the same size. So, what my system has
> been doing for the last 4 days (with no traffic at all) is compacting groups
> of 32 sstables into groups of 32 sstables without any change. Seems like a
> bug to me regardless of what I did to get the system into this state...
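A quick reading aid for the estimate array quoted above: assuming one entry per LCS level (L0 first), essentially all of the pending work sits in L0. A minimal sketch, not Cassandra code:
{code:java}
// Reading aid (not Cassandra code) for the estimate array quoted above,
// assuming one entry per LCS level with L0 first.
public class EstimateArraySketch {
    public static void main(String[] args) {
        int[] estimates = {3312, 4, 0, 0, 0, 0, 0, 0, 0}; // from the log line in the description
        for (int level = 0; level < estimates.length; level++) {
            System.out.printf("L%d: %d estimated pending compaction(s)%n", level, estimates[level]);
        }
    }
}
{code}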
--
This message was sent by Atlassian JIRA
(v6.1.4#6159)