[
https://issues.apache.org/jira/browse/HDDS-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17804934#comment-17804934
]
Duong commented on HDDS-9536:
-----------------------------
[~szetszwo] I ran a test on the latest code and it looks like HDDS-7117 has
solved the buffer issues. The cost of allocating buffers when reading chunks
disappears and the GC footprint drops.
See [^datanode-read-ratis-after-mapped-buffer.html]
Overall, read performance increases by more than 10% in my test (47 GB/s ->
52 GB/s in a 16-datanode cluster).
Looking at the flame graph now, the remaining cost is in serializing the
gRPC/Protobuf response, which still involves some buffer copying.
I believe we can do better by getting rid of gRPC for ReadChunk and moving
block reading to a native Netty stream, HDDS-9904.
c.c. [~weichiu] [~ritesh]
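(Illustration only, not Ozone code and not the HDDS-9904 design: one common place a copy hides when building a Protobuf response is ByteString.copyFrom, which materializes a fresh heap array, whereas UnsafeByteOperations.unsafeWrap reuses the existing buffer without copying. The class name below is made up for the sketch.)
{code:java}
import java.nio.ByteBuffer;

import com.google.protobuf.ByteString;
import com.google.protobuf.UnsafeByteOperations;

// Sketch: two ways to turn chunk bytes into a protobuf ByteString.
public final class ByteStringCopyVsWrap {
  public static void main(String[] args) {
    // Pretend this direct buffer holds chunk data just read from disk.
    ByteBuffer chunk = ByteBuffer.allocateDirect(4 * 1024 * 1024);

    // copyFrom allocates a new heap byte[] -- an extra copy plus GC pressure.
    ByteString copied = ByteString.copyFrom(chunk.duplicate());

    // unsafeWrap reuses the same memory with no copy; the caller must not
    // modify the buffer while the ByteString is in use.
    ByteString wrapped = UnsafeByteOperations.unsafeWrap(chunk.duplicate());

    System.out.println(copied.size() + " bytes copied, " + wrapped.size() + " bytes wrapped");
  }
}
{code}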
> Datanode perf: Copying (heap) buffers is costly
> -----------------------------------------------
>
> Key: HDDS-9536
> URL: https://issues.apache.org/jira/browse/HDDS-9536
> Project: Apache Ozone
> Issue Type: Improvement
> Components: Ozone Datanode
> Reporter: Duong
> Assignee: Tsz-wo Sze
> Priority: Major
> Labels: pull-request-available
> Attachments: Screenshot 2023-10-25 at 8.44.16 AM.png,
> datanode-on-write2.html, datanode-read-ratis-after-mapped-buffer.html,
> datanode-read-ratis.html
>
>
> Today, datanodes don't use direct buffers for WriteChunk data. When the
> chunks are written to disk, NIO converts those buffers to direct ones, and
> the conversion seems to be very costly (please see the attached
> [^datanode-on-write2.html]).
> Chunk data protos should be deserialized from the network/Ratis into (pooled)
> direct buffers. That would avoid a lot of extra cost, not only from buffer
> copying but also from GCing the intermediate buffers.
> !Screenshot 2023-10-25 at 8.44.16 AM.png|width=853,height=467!
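(Illustration only, plain JDK NIO rather than Ozone code: writing a heap ByteBuffer through a FileChannel makes the JDK copy it into a temporary direct buffer internally, while a direct buffer is handed to the OS as-is. The class name below is made up for the sketch.)
{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: heap vs direct buffer when writing chunk data to disk.
public final class HeapVsDirectWrite {
  public static void main(String[] args) throws IOException {
    byte[] chunkData = new byte[4 * 1024 * 1024]; // stand-in for WriteChunk payload

    ByteBuffer heap = ByteBuffer.wrap(chunkData);             // heap buffer
    ByteBuffer direct = ByteBuffer.allocateDirect(chunkData.length);
    direct.put(chunkData).flip();                             // direct buffer with the same bytes

    Path file = Files.createTempFile("chunk", ".tmp");
    try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
      ch.write(heap);   // JDK copies the heap buffer into a temporary direct buffer first
      ch.write(direct); // bytes go straight from the direct buffer to the kernel
    } finally {
      Files.deleteIfExists(file);
    }
  }
}
{code}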