[
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703606#comment-14703606
]
Benedict edited comment on CASSANDRA-8630 at 8/19/15 7:26 PM:
--------------------------------------------------------------
bq. In what scenario would we not want to map the file with as few 2 gigabyte
buffers as possible?
During early opening we currently remap our buffers every interval, meaning
for a 2Gb buffer by default we will map it 20 times (plus once every 2Gb).
This is not horrible, but I would prefer if - at least during reopening - we
only mapped once, and each time we reopened/extended the size of the file, we
just mapped the bit that wasn't previously mapped. Once we cross a 2Gb boundary
(or we are opening the final copy of the file) we should certainly remap into
contiguous 2Gb chunks.
edit: currently we actually map it much more than this; we map each 2Gb range
every 50Mb, so for a 100Gb file we might map several thousand times. So
whatever we do will be a dramatic improvement, but I generally am on a mission
to sanitise the code base, and while we're here we might as well do it right.
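The incremental approach described above - map only the region that was not previously mapped each time the file grows, rather than remapping everything - can be sketched as follows. This is a hypothetical `IncrementalMapper` helper under assumed semantics, not the actual Cassandra early-open/`MmappedRegions` code; a real implementation would also coalesce regions into contiguous 2Gb chunks at chunk boundaries and on the final open, as the comment suggests.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: extend the mapped view of a growing file by mapping
// only the tail that was not mapped before, instead of remapping the whole file.
public class IncrementalMapper
{
    // A single MappedByteBuffer is limited to ~2GB, so map in chunks of at most this size.
    private static final long CHUNK = Integer.MAX_VALUE;

    private final List<MappedByteBuffer> regions = new ArrayList<>();
    private long mappedUpTo = 0; // exclusive end of the already-mapped prefix

    // Map [mappedUpTo, newLength) in at most CHUNK-sized pieces.
    // Called each time the file is reopened/extended; earlier regions are untouched.
    public void extend(FileChannel channel, long newLength) throws IOException
    {
        while (mappedUpTo < newLength)
        {
            long size = Math.min(CHUNK, newLength - mappedUpTo);
            regions.add(channel.map(FileChannel.MapMode.READ_ONLY, mappedUpTo, size));
            mappedUpTo += size;
        }
    }

    public int regionCount()
    {
        return regions.size();
    }
}
```

Note the trade-off the comment is weighing: this naive version accumulates one small region per extension, which is why remapping into contiguous 2Gb chunks once a boundary is crossed (or on the final open) is still desirable.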
> Faster sequential IO (on compaction, streaming, etc)
> ----------------------------------------------------
>
> Key: CASSANDRA-8630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
> Project: Cassandra
> Issue Type: Improvement
> Components: Core, Tools
> Reporter: Oleg Anastasyev
> Assignee: Stefania
> Labels: compaction, performance
> Fix For: 3.x
>
> Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png,
> flight_recorder_001_files.tar.gz, flight_recorder_002_files.tar.gz,
> mmaped_uncomp_hotspot.png
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.), a lot
> of CPU is lost in calls to RAF's int read() and DataOutputStream's write(int).
> This is because the default implementations of readShort, readLong, etc., as
> well as their matching write* methods, are built from numerous byte-by-byte
> reads and writes.
> This also results in a lot of syscalls.
> A quick microbenchmark shows that just reimplementing these methods either way
> gives an 8x speed increase.
> The attached patch implements the RandomAccessReader.read<Type> and
> SequentialWriter.write<Type> methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and
> ColumnNameHelper.maxComponents, which were on my profiler's hotspot method
> list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30%
> faster on uncompressed sstables and 15% faster on compressed ones.
> A deployment to production shows much lower CPU load for compaction.
> (I attached a CPU load graph from one of our production clusters; orange is
> niced CPU load, i.e. compaction; yellow is user CPU, i.e. tasks unrelated to
> compaction.)
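The byte-by-byte overhead the description refers to can be illustrated with a short sketch. This is an assumed, simplified comparison (not the actual patch): the slow path composes a long from eight single-byte reads, as the default DataInput implementations do, while the fast path decodes all eight bytes in one bulk call.

```java
import java.nio.ByteBuffer;

// Sketch: decoding a long byte-by-byte (as DataInputStream.readLong does
// internally) versus decoding it in one bulk call from a buffer.
public class BulkRead
{
    // Byte-by-byte, big-endian, mirroring the default DataInput approach.
    static long readLongSlow(byte[] buf, int off)
    {
        long v = 0;
        for (int i = 0; i < 8; i++)
            v = (v << 8) | (buf[off + i] & 0xFF);
        return v;
    }

    // Bulk: let ByteBuffer decode the 8 bytes in a single call
    // (ByteBuffer is big-endian by default, so results match).
    static long readLongFast(byte[] buf, int off)
    {
        return ByteBuffer.wrap(buf, off, 8).getLong();
    }
}
```

Both paths return the same value; the difference is that the bulk path makes one call against an in-memory buffer, avoiding per-byte call (and, on unbuffered streams, per-byte syscall) overhead.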
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)