[
https://issues.apache.org/jira/browse/CASSANDRA-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279792#comment-15279792
]
Marcus Eriksson commented on CASSANDRA-11623:
---------------------------------------------
I ran a few small benchmarks:
* write 10M keys with autocompaction disabled, then run a major compaction; in
this case the patched version was about 10% quicker (32MB/s vs 29MB/s).
* write 10M keys with autocompaction disabled, then run nodetool
enableautocompaction; this showed basically no difference.
The big difference when running major compaction is probably because trunk
calls {{getOnDiskFilePointer}} 3 times per appended partition.
With these results I'll commit it to trunk only; if anyone can show data models
with a bigger difference, I would be happy to backport to 2.2+.
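To illustrate why the per-partition pointer reads show up in benchmarks: a minimal sketch, not Cassandra code, contrasting a plain long counter kept in user space with FileChannel.position(), which is backed by an lseek-style syscall much like {{getOnDiskFilePointer}} on trunk. The class and method names (PositionSketch, writeAndTrack) are hypothetical.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PositionSketch {
    // Writes `rows` fixed-size rows and tracks the byte count in a plain
    // long, verifying it against FileChannel.position() -- the syscall-backed
    // query that the patch avoids issuing per appended partition.
    static long writeAndTrack(int rows, int rowSize) throws IOException {
        Path tmp = Files.createTempFile("sstable-sketch", ".db");
        long tracked = 0;
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            ByteBuffer row = ByteBuffer.allocate(rowSize);
            for (int i = 0; i < rows; i++) {
                row.clear();
                tracked += ch.write(row);        // counted in user space, free
                if (ch.position() != tracked)    // one lseek per row, costly
                    throw new IllegalStateException("position drift");
            }
        } finally {
            Files.delete(tmp);
        }
        return tracked;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(writeAndTrack(1000, 64)); // prints 64000
    }
}
```

The two values always agree here; the point is that only one of them costs a syscall per row.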
> Compactions w/ Short Rows Spending Time in getOnDiskFilePointer
> ---------------------------------------------------------------
>
> Key: CASSANDRA-11623
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11623
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Tom Petracca
> Assignee: Tom Petracca
> Priority: Minor
> Fix For: 3.x
>
> Attachments: compactiontask_profile.png
>
>
> I've been doing some performance tuning and profiling of my Cassandra
> cluster and noticed that compactions for tables that I know to have very
> short rows were going particularly slowly. Profiling shows a ton of time
> being spent in BigTableWriter.getOnDiskFilePointer(), and attaching strace
> to a CompactionTask shows that the majority of time is spent in lseek
> (called by getOnDiskFilePointer), not in read or write.
> Going deeper, it looks like we call getOnDiskFilePointer for each row
> (sometimes multiple times per row) to see whether we've reached the expected
> sstable size and should start a new writer. This is pretty unnecessary.
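The pattern the report suggests can be sketched as follows. This is a hypothetical illustration, not the actual patch: instead of reading the on-disk pointer after every row, keep a running estimate of bytes written and only consult the (syscall-backed) pointer when the estimate crosses the target sstable size. The names DeferredSizeCheck, checksNaive, and checksDeferred are made up for the example.

```java
public class DeferredSizeCheck {
    // Naive strategy: one pointer read (one lseek) per appended row.
    static int checksNaive(int rows) {
        return rows;
    }

    // Deferred strategy: track estimated bytes in user space and only pay
    // for a real pointer read once the estimate crosses the size threshold,
    // i.e. roughly once per sstable switch instead of once per row.
    static int checksDeferred(int rows, long rowBytes, long maxBytes) {
        int checks = 0;
        long estimated = 0;
        for (int i = 0; i < rows; i++) {
            estimated += rowBytes;
            if (estimated >= maxBytes) { // only now read the real pointer
                checks++;
                estimated = 0;           // pretend we started a new writer
            }
        }
        return checks;
    }

    public static void main(String[] args) {
        System.out.println(checksNaive(1_000_000));                   // 1000000
        System.out.println(checksDeferred(1_000_000, 64, 64 * 1024)); // 976
    }
}
```

With 64-byte rows and a 64KB threshold, the syscall count drops from one per row to one per ~1024 rows, which is consistent with the lseek-dominated strace profile described above.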
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)