[ https://issues.apache.org/jira/browse/CASSANDRA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13051090#comment-13051090 ]

Jonathan Ellis commented on CASSANDRA-1608:
-------------------------------------------

bq. The LDBCompaction task was changed to limit the size of the SSTables that 
are output by the compaction itself.

Ah, totally makes sense.  Wonder if we can refactor some more to avoid so much 
duplicate code.
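
In sketch form, the size cap amounts to rolling over to a new writer once the 
current output gets big enough. The Writer interface and names below are 
invented for illustration, not the patch's actual code:

{code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SizeCappedCompaction
{
    /** Hypothetical stand-in for an sstable writer. */
    public interface Writer
    {
        void append(String row);
        long length();      // bytes written so far
        String close();     // finish the sstable, return its name
    }

    public interface WriterFactory
    {
        Writer newWriter();
    }

    /** Write merged rows into a series of size-capped sstables. */
    public static List<String> compact(Iterator<String> mergedRows,
                                       WriterFactory factory,
                                       long maxSSTableBytes)
    {
        List<String> outputs = new ArrayList<String>();
        Writer writer = factory.newWriter();
        while (mergedRows.hasNext())
        {
            writer.append(mergedRows.next());
            if (writer.length() >= maxSSTableBytes)
            {
                outputs.add(writer.close());
                writer = factory.newWriter(); // roll over to a new sstable
            }
        }
        outputs.add(writer.close());
        return outputs;
    }
}
{code}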

bq. I'll make some more modifications to the manifest s.t. there is a single 
path for getting new SSTables (flushed and streamed) into the manifest. I found 
a bug on the plane today where they were getting added to the manifest, but 
they weren't being added to the queue

I think I fixed that by getting rid of the queue.  It was basically just L0 
anyway.

I like "Manifest.add()" [to L0] being The Single Path, feels pretty foolproof 
to me.
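
Roughly what I picture for the single path (a sketch only; the method names 
are assumptions, not necessarily what the patch calls them):

{code:java}
import java.util.ArrayList;
import java.util.List;

public class Manifest
{
    private final List<List<String>> levels = new ArrayList<List<String>>();

    public Manifest(int maxLevel)
    {
        for (int i = 0; i <= maxLevel; i++)
            levels.add(new ArrayList<String>());
    }

    /** The Single Path: every new sstable (flushed or streamed) starts in L0. */
    public synchronized void add(String sstable)
    {
        levels.get(0).add(sstable);
    }

    /** After a compaction, replace the inputs with the outputs at the target level. */
    public synchronized void promote(List<String> inputs, List<String> outputs, int targetLevel)
    {
        for (List<String> level : levels)
            level.removeAll(inputs);
        levels.get(targetLevel).addAll(outputs);
    }
}
{code}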

bq. There are some boundary cases where every SSTable that gets compacted will 
be in the same level. Most of them have to do with L+1 being empty.

Also makes sense.

bq. RE: the order, it does feel like we should do lower levels before higher 
levels, however one thing that we have to do is make sure that level-1 stays at 
10 SSTables. The algorithm dictates that all of the level-0 candidates get 
compacted with all of the candidates at level-1.

Well, all the overlapping ones.  That will usually be all of them, but the 
check is cheap enough that we might as well do it, on the off chance we save 
some I/O.
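
The check itself is just a key-range intersection, something like this 
simplified sketch (plain longs standing in for tokens):

{code:java}
import java.util.ArrayList;
import java.util.List;

public class Overlaps
{
    /** Simplified sstable: just its first and last token. */
    public static class Table
    {
        public final long first;
        public final long last;

        public Table(long first, long last)
        {
            this.first = first;
            this.last = last;
        }
    }

    /** Return only the next-level sstables whose range intersects the candidates'. */
    public static List<Table> overlapping(List<Table> candidates, List<Table> nextLevel)
    {
        long min = Long.MAX_VALUE;
        long max = Long.MIN_VALUE;
        for (Table t : candidates)
        {
            min = Math.min(min, t.first);
            max = Math.max(max, t.last);
        }
        List<Table> result = new ArrayList<Table>();
        for (Table t : nextLevel)
            if (t.last >= min && t.first <= max) // ranges intersect
                result.add(t);
        return result;
    }
}
{code}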

bq. This means that you need to promote out of level-1 so that it is ~10 
SSTables before you schedule a compaction for level-0 promotion.

I'm not sure that necessarily follows.  Compacting lower levels first means 
less duplicate recompaction from L+1 later.  L0 is particularly important since 
lots of sstables in L0 means (potentially) lots of merging by readers.

In any case, the comments in gCC talked about prioritizing L1, but the code 
actually prioritized L0, so I went with that. :)
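
That is, selection roughly along these lines (the L0 trigger and level sizes 
are made-up placeholders, not what gCC literally does):

{code:java}
import java.util.Collections;
import java.util.List;

public class CandidateSelection
{
    private static final int L0_TRIGGER = 4; // assumed sstable-count trigger for L0

    /**
     * Pick candidates from the lowest level that needs work: L0 first,
     * since many L0 sstables mean (potentially) many merges per read.
     */
    public static List<String> candidates(List<List<String>> levels,
                                          long[] maxBytesForLevel,
                                          long sstableSizeBytes)
    {
        if (levels.get(0).size() >= L0_TRIGGER)
            return levels.get(0);
        // Then lower levels before higher, to avoid duplicate recompaction of L+1.
        for (int i = 1; i < levels.size(); i++)
        {
            long levelBytes = levels.get(i).size() * sstableSizeBytes;
            if (levelBytes > maxBytesForLevel[i])
                return levels.get(i);
        }
        return Collections.emptyList();
    }
}
{code}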

> Redesigned Compaction
> ---------------------
>
>                 Key: CASSANDRA-1608
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1608
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Chris Goffinet
>         Attachments: 0001-leveldb-style-compaction.patch, 1608-v2.txt
>
>
> After seeing the I/O issues in CASSANDRA-1470, I've been doing some more 
> thinking on this subject that I wanted to lay out.
> I propose we redo the concept of how compaction works in Cassandra. At the 
> moment, compaction is kicked off based on a write access pattern, not a read 
> access pattern. In most cases, you want the opposite. You want to be able to 
> track how well each SSTable is performing in the system. If we were to keep 
> in-memory statistics for each SSTable, prioritize them by how often they are 
> accessed and by bloom filter hit/miss ratios, we could intelligently group 
> the sstables that are read most often and schedule them for compaction. We 
> could also schedule lower-priority maintenance on SSTables that are rarely 
> accessed.
> I also propose we limit each SSTable to a fixed size, which gives us the 
> ability to better utilize our bloom filters in a predictable manner. At the 
> moment, past a certain size, the bloom filters become less reliable. This 
> would also allow us to group the most-accessed data. Currently an SSTable 
> can grow to a point where large portions of its data might not actually be 
> accessed very often.
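
A rough sketch of the per-SSTable bookkeeping the description proposes, with 
read and bloom filter counters ranked for compaction; every name here is an 
illustrative assumption, not an existing API:

{code:java}
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class SSTableReadStats
{
    public final String name;
    public final AtomicLong reads = new AtomicLong();
    public final AtomicLong bloomFalsePositives = new AtomicLong();

    public SSTableReadStats(String name)
    {
        this.name = name;
    }

    public double falsePositiveRatio()
    {
        long r = reads.get();
        return r == 0 ? 0.0 : (double) bloomFalsePositives.get() / r;
    }

    /** Hottest sstables (most reads, worst filters) first. */
    public static void rankForCompaction(List<SSTableReadStats> tables)
    {
        tables.sort(Comparator
                    .comparingLong((SSTableReadStats s) -> s.reads.get())
                    .thenComparingDouble(SSTableReadStats::falsePositiveRatio)
                    .reversed());
    }
}
{code}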

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
