[https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14724898#comment-14724898]
Benedict edited comment on CASSANDRA-8630 at 9/1/15 7:18 AM:
-------------------------------------------------------------
I must admit that, from Ariel's comment
[here|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/io/util/NIODataInputStream.java#L182],
I thought we no longer actually used {{FBO.copy}}, and that it did not work. I
guess there was some other mistake happening there.
However, there is no functional distinction between the two methods, since
both operate on a target {{byte[]}}; the {{FastByteOperations.copy}} methods
support an array as a target, so I've pushed a version with that changed.
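For illustration, a minimal sketch of the two equivalent paths (assuming the
array-target overload {{copy(ByteBuffer, int, byte[], int, int)}}; the exact
signature is not reproduced from the source here):
{code:java}
import java.nio.ByteBuffer;
import org.apache.cassandra.utils.FastByteOperations;

public class CopySketch
{
    // Relative bulk get: copies 'length' bytes and advances src's position.
    static void viaByteBufferGet(ByteBuffer src, byte[] dst, int length)
    {
        src.get(dst, 0, length);
    }

    // The same copy via FastByteOperations; the copy itself is absolute,
    // so the position has to be advanced by hand.
    static void viaFastByteOperations(ByteBuffer src, byte[] dst, int length)
    {
        int pos = src.position();
        FastByteOperations.copy(src, pos, dst, 0, length);
        src.position(pos + length);
    }
}
{code}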
It's not clear how much of the variance is down to cstar's current
inconsistency. I'm reasonably certain that hotspot translates any byte-by-byte
copy to a SIMD-optimised one. However, looking at the C2 compilation output,
it appears that the {{FastByteOperations.copy}} call is fully inlined, whereas
for some reason the {{ByteBuffer.get}} call is left as invokevirtual. This is
odd, since the call site should at most be bimorphic, and I would expect it to
be a prime target for optimisation by the VM. However, I cannot see any of the
{{ByteBuffer.get}} methods in hotspot's intrinsic definitions either (whereas
{{copyMemory}} most certainly is there), which would have explained this.
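For anyone who wants to reproduce the observation, a standalone harness along
these lines (the class and loop are mine, not part of the patch) can be run
with {{-XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining}} to see how C2
treats the {{ByteBuffer.get}} call site:
{code:java}
import java.nio.ByteBuffer;

// Run with: java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineCheck
// Once C2 compiles the loop, the inlining log shows whether the bulk
// ByteBuffer.get(byte[], int, int) call is inlined or left as a virtual call.
public class InlineCheck
{
    public static void main(String[] args)
    {
        ByteBuffer src = ByteBuffer.allocateDirect(4096);
        byte[] dst = new byte[4096];
        long sink = 0;
        for (int i = 0; i < 1_000_000; i++)
        {
            src.clear();                    // reset position to 0 each pass
            src.get(dst, 0, dst.length);    // the call site under scrutiny
            sink += dst[0];                 // keep the loop from being eliminated
        }
        System.out.println(sink);
    }
}
{code}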
Given this, we're probably best off retaining the {{FBO.copy}} version;
however, we may as well port it over to {{read}}.
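A rough sketch of what that port could look like (only the copy step is
shown; the refill and EOF handling of the real {{NIODataInputStream.read}}
are deliberately not reproduced here):
{code:java}
// Sketch only: drain the stream's internal buffer into the caller's array
// via FastByteOperations.copy instead of ByteBuffer.get.
static int drain(ByteBuffer buffer, byte[] dst, int offset, int length)
{
    if (!buffer.hasRemaining())
        return -1;                          // real code would refill from the channel
    int copied = Math.min(length, buffer.remaining());
    FastByteOperations.copy(buffer, buffer.position(), dst, offset, copied);
    buffer.position(buffer.position() + copied);
    return copied;
}
{code}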
> Faster sequential IO (on compaction, streaming, etc)
> ----------------------------------------------------
>
> Key: CASSANDRA-8630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
> Project: Cassandra
> Issue Type: Improvement
> Components: Core, Tools
> Reporter: Oleg Anastasyev
> Assignee: Stefania
> Labels: compaction, performance
> Fix For: 3.x
>
> Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png,
> flight_recorder_001_files.tar.gz, flight_recorder_002_files.tar.gz,
> mmaped_uncomp_hotspot.png
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.), a
> lot of CPU is lost in calls to RAF's int read() and DataOutputStream's
> write(int). This is because the default implementations of readShort,
> readLong, etc., as well as their matching write* methods, are built from
> numerous byte-by-byte read and write calls.
> This also incurs a lot of syscalls.
> A quick microbenchmark shows that just reimplementing these methods gives an
> 8x speed increase.
> The attached patch implements the RandomAccessReader.read<Type> and
> SequentialWriter.write<Type> methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and
> ColumnNameHelper.maxComponents, which were on my profiler's hotspot method
> list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30%
> faster on uncompressed sstables and 15% faster on compressed ones.
> A deployment to production shows much less CPU load for compaction.
> (I attached a CPU load graph from one of our production clusters; orange is
> niced CPU load, i.e. compaction; yellow is user CPU, i.e. tasks not related
> to compaction.)
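To make the byte-by-byte problem in the quoted description concrete:
RandomAccessFile.readLong() is assembled from eight single-byte read() calls
(via two readInt() calls), each of which can reach the OS, while a buffered
reader pays one bulk syscall per refill. A minimal sketch of the buffered
shape (names and refill policy here are illustrative, not the attached patch):
{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class BufferedLongReader
{
    // Reads one big-endian long (matching DataInput) from the channel via a
    // reusable buffer kept in read mode; EOF handling is omitted for brevity.
    static long readLong(FileChannel channel, ByteBuffer buffer) throws IOException
    {
        if (buffer.remaining() < Long.BYTES)
        {
            buffer.compact();       // keep leftover bytes, switch to write mode
            channel.read(buffer);   // one bulk syscall refills the buffer
            buffer.flip();          // back to read mode
        }
        return buffer.getLong();    // pure memory access, no syscall
    }
}
{code}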