Hi Marcus,

Do you have a stack trace showing which function inside
`getNextBackgroundTask` is the most expensive?
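
If it helps, a quick way to capture one (assuming jstack from the JDK is
available and you can run it against the Cassandra process) is to sample
the compaction threads a few times and see which frames keep showing up:

    # <pid> is the Cassandra process id; repeat a few times
    jstack <pid> | grep -A 25 'CompactionExecutor'

Frames that recur across samples are where the time is going.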

Yeah, I think having 15-20K SSTables in L0 is very bad. In our heavy-write
cluster, I try my best to reduce the impact of repair and keep the number of
SSTables in L0 below 100.
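
For anyone following along, the per-level counts are easy to watch: for an
LCS table, nodetool prints a line like "SSTables in each level: [20/4, 10,
...]", where the first entry is L0 (on older versions the command is
cfstats; <keyspace>.<table> is a placeholder for your table):

    nodetool tablestats <keyspace>.<table> | grep 'SSTables in each level'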

Thanks
Dikang.

On Thu, Nov 24, 2016 at 12:53 PM, Nate McCall <zznat...@gmail.com> wrote:

> > The reason is described here:
> > https://issues.apache.org/jira/browse/CASSANDRA-5371?focusedCommentId=13621679&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13621679
> >
> > /Marcus
>
> "...a lot of the work you've done you will redo when you compact your now
> bigger L0 sstable against L1."
>
> ^ Sylvain's hypothesis (next comment down) is actually something we see
> occasionally in practice: having to rewrite the contents of L1 too often
> when large L0 SSTables are pulled in. Here is an example we captured on a
> system with pending-compaction spikes that was hitting this specific issue
> on four LCS-based tables:
>
> https://gist.github.com/zznate/d22812551fa7a527d4c0d931f107c950
>
> The significant part of this particular workload is a burst of heavy writes
> from long-duration scheduled jobs.
>

-- 
Dikang
