[
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702714#comment-14702714
]
Stefania commented on CASSANDRA-8630:
-------------------------------------
bq. I think Ariel was suggesting a new class that explicitly performs no work.
However, since we use this class more often for reads than we do for
compaction, I would prefer we stick with the more performant option of just
null checking. Certainly using a full-fat RateLimiter is more expensive than
this
He also added _Constructor is private, maybe a rate limiter with a huge rate?_.
Anyway, I will just stick with null checking unless there are any other objections.
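For concreteness, here is a minimal sketch of the null-checking approach I have in
mind (illustrative names only, not the actual patch): the limiter stays null for the
common read path and is only set for compaction, so the hot path pays a single null
check rather than a call into a "no-op" or huge-rate limiter.

{code:java}
import com.google.common.util.concurrent.RateLimiter;

// Illustrative sketch only, not the actual patch.
public class ThrottledReader
{
    private final RateLimiter limiter; // null => no throttling (the common read case)

    public ThrottledReader(RateLimiter limiter)
    {
        this.limiter = limiter;
    }

    // Called before each buffer read; compaction passes a real RateLimiter,
    // reads pass null and skip the acquire() entirely.
    public void acquirePermits(int bytesToRead)
    {
        if (limiter != null)
            limiter.acquire(bytesToRead);
    }
}
{code}

The alternative would presumably be passing something like
{{RateLimiter.create(Double.MAX_VALUE)}} everywhere, which avoids the null but still
pays the {{acquire()}} call on every read.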
Thanks for the clarifications on segment creation; I hadn't realized that we could
get rid of the boundaries as well. One thing is still not clear, however:
bq. At the same time we can eliminate the idea of multiple segments; we should
always have just one segment.
How would we handle files bigger than {{Integer.MAX_VALUE}}? Would we map the new
region on-the-fly when rebuffering (I guess not), or up front when building the
'segmented' file, in which case we still need more than one mmap segment?
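To make the question concrete, here is an illustrative sketch (plain JDK calls, not
the patch) of mapping a whole file up front: since {{FileChannel.map}} caps each
region at {{Integer.MAX_VALUE}} bytes, even a single logical 'segment' over a file
larger than 2GB would still be backed by several mmap regions.

{code:java}
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: map a file up front as a list of regions, each at most
// Integer.MAX_VALUE bytes because that is the limit of a single map() call.
public class MmapRegions
{
    public static List<MappedByteBuffer> mapWholeFile(Path file) throws IOException
    {
        List<MappedByteBuffer> regions = new ArrayList<>();
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ))
        {
            long size = channel.size();
            for (long position = 0; position < size; position += Integer.MAX_VALUE)
            {
                long regionSize = Math.min(Integer.MAX_VALUE, size - position);
                regions.add(channel.map(MapMode.READ_ONLY, position, regionSize));
            }
        }
        return regions;
    }
}
{code}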
We would also need to rename {{SegmentedFile}} and its derived classes, right? Any
preferences?
> Faster sequential IO (on compaction, streaming, etc)
> ----------------------------------------------------
>
> Key: CASSANDRA-8630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
> Project: Cassandra
> Issue Type: Improvement
> Components: Core, Tools
> Reporter: Oleg Anastasyev
> Assignee: Stefania
> Labels: compaction, performance
> Fix For: 3.x
>
> Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png,
> flight_recorder_001_files.tar.gz, flight_recorder_002_files.tar.gz,
> mmaped_uncomp_hotspot.png
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.), a lot
> of CPU is lost in calls to RAF's int read() and DataOutputStream's write(int).
> This is because the default implementations of readShort, readLong, etc., as well
> as their matching write* methods, are implemented with numerous byte-by-byte read
> and write calls.
> This also results in a lot of syscalls.
> A quick microbenchmark shows that just reimplementing these methods gives an 8x
> speed increase.
> The attached patch implements the RandomAccessReader.read<Type> and
> SequentialWriter.write<Type> methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and
> ColumnNameHelper.maxComponents, which were on my profiler's hotspot method
> list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% faster on
> uncompressed sstables and 15% faster on compressed ones.
> A deployment to production shows much lower CPU load for compaction.
> (I attached a CPU load graph from one of our production nodes; orange is niced CPU
> load, i.e. compaction; yellow is user, i.e. tasks not related to compaction.)
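As an illustration of the byte-by-byte pattern described in the report above (plain
JDK classes rather than RandomAccessReader/SequentialWriter, and not the attached
patch): {{RandomAccessFile.readLong()}} bottoms out in eight single-byte {{read()}}
calls, i.e. eight syscalls per value, whereas reading a block into a {{ByteBuffer}}
decodes the value from memory.

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;

// Illustration only, not the attached patch.
public class ReadLongExample
{
    // Slow path: readLong() -> 2 x readInt() -> 8 x read(), one syscall per byte.
    static long byteByByte(RandomAccessFile raf) throws IOException
    {
        return raf.readLong();
    }

    // Faster path: one bulk read into a buffer, then decode in memory
    // (assumes at least 8 bytes remain at the current file position).
    static long buffered(RandomAccessFile raf) throws IOException
    {
        ByteBuffer buffer = ByteBuffer.allocate(8);
        raf.getChannel().read(buffer, raf.getFilePointer());
        buffer.flip();
        return buffer.getLong();
    }
}
{code}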
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)