[ https://issues.apache.org/jira/browse/CASSANDRA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13050908#comment-13050908 ]
Benjamin Coverston commented on CASSANDRA-1608:
-----------------------------------------------
The LDBCompaction task was changed to limit the size of the SSTables that are
output by the compaction itself. Once the size of the rows compacted exceeds
the default size in MB, it creates a new SSTable:
>>
if (position > cfs.metadata.getMemtableThroughputInMb() * 1024 * 1024
    || !nni.hasNext())
{
<<
It feels like a bit of a hack, because an optimal flush size may not always be
an optimal storage size, but my goal was to keep the SSTable size in a
reasonably small range so that compactions into level 1 stay fast.
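For context, here is a minimal sketch of the rollover loop that the quoted
condition lives in. Helper names like openWriter() are assumptions for
illustration, not the patch's actual code:

    // Roll over to a new SSTable whenever the current writer passes the
    // threshold; close out the last writer when the input runs dry.
    long targetSize = cfs.metadata.getMemtableThroughputInMb() * 1024L * 1024L;
    SSTableWriter writer = openWriter(); // hypothetical helper
    while (nni.hasNext())
    {
        AbstractCompactedRow row = nni.next();
        writer.append(row);
        long position = writer.getFilePointer();
        if (position > targetSize || !nni.hasNext())
        {
            sstables.add(writer.closeAndOpenReader());
            if (nni.hasNext())
                writer = openWriter();
        }
    }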
I'll make some more modifications to the manifest such that there is a single
path for getting new SSTables (flushed and streamed) into the manifest. I
found a bug on the plane today where streamed SSTables were getting added to
the manifest, but not to the queue that flushed SSTables go into. I'll get
that fixed in my next revision.
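Roughly, the single path would look something like this (all names here are
hypothetical, not the patch's API):

    // One entry point for both the flush and stream paths, so a new
    // SSTable can never reach the manifest without also being queued.
    public synchronized void add(SSTableReader sstable)
    {
        generations[0].add(sstable);   // new SSTables always enter at level 0
        compactionQueue.add(sstable);  // the queue the flush path already used
    }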
>>
In promote, do we need to check for all the removed ones being on the same
level? I can't think of a scenario where we're not merging from multiple
levels. If so, I'd change that to an assert. (In fact there should be exactly
two levels involved, right?)
<<
I considered this. There are some boundary cases where every SSTable that gets
compacted is in the same level; most of them have to do with L+1 being empty.
Also, sending those SSTables through the same compaction path evicts expired
tombstones before they end up in the next level, where compactions become
increasingly unlikely.
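So rather than asserting exactly two levels, promote needs to tolerate a
single input level. A sketch of the check, using a hypothetical manifest API:

    // Collect the set of levels the compacted SSTables came from.
    Set<Integer> levels = new HashSet<Integer>();
    for (SSTableReader sstable : removed)
        levels.add(manifest.levelOf(sstable));
    // Usually two levels (L and L+1), but when L+1 is empty every input
    // comes from the same level, so a one-level compaction is legal; it
    // still purges expired tombstones on its way through.
    assert levels.size() <= 2 : levels;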
>>
Did some surgery on getCompactionCandidates. Generally renamed things to be
more succinct. Feels like getCompactionCandidates should do lower levels
before doing higher levels?
<<
Let's just say my naming conventions have been shaped by different influences
:) I wouldn't object to any of the new names you chose, however.
RE: the order, it does feel like we should do lower levels before higher
levels. However, one thing we have to do is make sure that level 1 stays at
~10 SSTables. The algorithm dictates that all of the level-0 candidates get
compacted with all of the candidates at level 1, which means you need to
promote out of level 1 until it is back down to ~10 SSTables before you
schedule a compaction for level-0 promotion. Right now, tuning this so that
it performs well is the biggest hurdle. I have made some improvements by
watching the CompactionExecutor, but I have a feeling that making this work
is going to require some subtle manipulation of the way the CompactionExecutor
handles tasks.
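To make the ordering constraint concrete, here is a sketch of the candidate
selection, assuming a hypothetical generations[n] array holding the SSTables
of level n (overlap-based selection within a level is elided):

    private static final int MAX_L1_SSTABLES = 10;

    public Collection<SSTableReader> getCompactionCandidates()
    {
        // Drain level 1 back down to ~10 SSTables before promoting level 0,
        // because an L0 compaction merges with all current L1 tables.
        if (generations[1].size() > MAX_L1_SSTABLES)
            return new ArrayList<SSTableReader>(generations[1]);
        if (!generations[0].isEmpty())
        {
            List<SSTableReader> candidates =
                new ArrayList<SSTableReader>(generations[0]);
            candidates.addAll(generations[1]); // L0 merges with all of L1
            return candidates;
        }
        return higherLevelCandidates(); // hypothetical: fall back to L2+
    }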
> Redesigned Compaction
> ---------------------
>
> Key: CASSANDRA-1608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-1608
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Reporter: Chris Goffinet
> Attachments: 0001-leveldb-style-compaction.patch, 1608-v2.txt
>
>
> After seeing the I/O issues in CASSANDRA-1470, I've been doing some more
> thinking on this subject that I wanted to lay out.
> I propose we redo the concept of how compaction works in Cassandra. At the
> moment, compaction is kicked off based on a write access pattern, not a read
> access pattern. In most cases, you want the opposite. You want to be able to
> track how well each SSTable is performing in the system. If we were to keep
> in-memory statistics for each SSTable and prioritize them based on most
> accessed, and on bloom filter hit/miss ratios, we could intelligently group
> the SSTables that are being read most often and schedule them for compaction.
> We could also schedule lower-priority maintenance on SSTables that are not
> often accessed.
> I also propose we limit each SSTable to a fixed size, which gives us the
> ability to better utilize our bloom filters in a predictable manner. At the
> moment, after a certain size, the bloom filters become less reliable. This
> would also allow us to group the most-accessed data. Currently the size of
> an SSTable can grow to a point where large portions of the data might not
> actually be accessed as often.
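A rough sketch of the per-SSTable bookkeeping this proposal implies (entirely
hypothetical; not code from the attached patches):

    import java.util.concurrent.atomic.AtomicLong;

    // Track read traffic and bloom filter misses per SSTable, and rank
    // hot, poorly-filtered SSTables first when choosing what to compact.
    class SSTableReadStats
    {
        final AtomicLong reads = new AtomicLong();
        final AtomicLong bloomFalsePositives = new AtomicLong();

        double score()
        {
            long r = reads.get();
            double fpRatio = r == 0 ? 0.0
                                    : (double) bloomFalsePositives.get() / r;
            return r * (1.0 + fpRatio); // hot first; bad filters raise priority
        }
    }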