[
https://issues.apache.org/jira/browse/HADOOP-1702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12594735#action_12594735
]
Hairong Kuang commented on HADOOP-1702:
---------------------------------------
A few initial comments:
1. Once Packet.getBuffer() is called, no more data can be written to the packet.
This is not obvious to readers of the code; it would be better to document this
restriction in a comment (a sketch follows this list).
2. packetSize/writePacketSize in DFSClient do not include the size of the packet
header. I think it would be clearer to rename them to
packetPayloadSize/writePacketPayloadSize.
3. The packet-size guess calculation in DataNode should match the calculation
in DFSClient.
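For illustration, a minimal sketch of how points 1 and 2 could be made explicit
in the code. The field names, header length, and guard below are hypothetical,
not the actual DFSClient.Packet implementation:
{noformat}
class Packet {
  // Hypothetical header length; the real value depends on the packet
  // header format on the wire.
  static final int PKT_HEADER_LEN = 21;

  private final byte[] buf;
  private int dataPos;              // next write position for payload data
  private boolean sealed = false;   // set once getBuffer() has been called

  // packetPayloadSize excludes the header, matching the renaming proposed
  // in comment 2; the total buffer size (what DataNode should guess) is
  // PKT_HEADER_LEN + packetPayloadSize.
  Packet(int packetPayloadSize) {
    buf = new byte[PKT_HEADER_LEN + packetPayloadSize];
    dataPos = PKT_HEADER_LEN;
  }

  void writeData(byte[] src, int off, int len) {
    // Makes the restriction in comment 1 explicit instead of implicit.
    if (sealed) {
      throw new IllegalStateException(
          "getBuffer() was already called; no more data may be written");
    }
    System.arraycopy(src, off, buf, dataPos, len);
    dataPos += len;
  }

  byte[] getBuffer() {
    // After the caller has the raw buffer, further writes could corrupt
    // the wire layout, so seal the packet here.
    sealed = true;
    return buf;
  }
}
{noformat}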
> Reduce buffer copies when data is written to DFS
> ------------------------------------------------
>
> Key: HADOOP-1702
> URL: https://issues.apache.org/jira/browse/HADOOP-1702
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.14.0
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Fix For: 0.18.0
>
> Attachments: HADOOP-1702.patch, HADOOP-1702.patch, HADOOP-1702.patch,
> HADOOP-1702.patch, HADOOP-1702.patch
>
>
> HADOOP-1649 adds extra buffering to improve write performance. The following
> diagram shows the buffers, labeled (1) through (5). Each extra buffer adds an
> extra copy, since most of our read()/write() calls match io.bytes.per.checksum,
> which is much smaller than the buffer size.
> {noformat}
>      (1)                (2)          (3)                 (5)
>  +---||----[ CLIENT ]---||----<>-----||---[ DATANODE ]---||--<>-> to Mirror
>  |               (buffer)   (socket)                   |  (4)
>  |                                                     +--||--+
> =====                                                         |
> =====                                                       =====
> (disk)                                                      =====
> {noformat}
> Currently, the loops that read and write block data handle one checksum chunk
> at a time. By reading multiple chunks at a time, we can remove buffers (1),
> (2), (3), and (5), as in the sketch below.
> Similarly, some copies can be reduced when clients read data from DFS.
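> For illustration, a simplified sketch of the two loop styles. The stream
> arguments and the chunks-per-packet value are illustrative only, not the
> actual patch:
> {noformat}
> import java.io.IOException;
> import java.io.InputStream;
> import java.io.OutputStream;
>
> class CopyLoops {
>   // Current style: one checksum chunk (io.bytes.per.checksum, e.g. 512
>   // bytes) per read()/write(), so every intermediate buffer costs one
>   // copy per chunk.
>   static void copyPerChunk(InputStream in, OutputStream out,
>                            int bytesPerChecksum) throws IOException {
>     byte[] chunk = new byte[bytesPerChecksum];
>     int n;
>     while ((n = in.read(chunk)) > 0) {
>       out.write(chunk, 0, n);
>     }
>   }
>
>   // Proposed style: move many chunks per call; with transfers this large,
>   // the intermediate buffered streams (1), (2), (3), and (5) add nothing
>   // and can be removed. chunksPerPacket is an illustrative value.
>   static void copyMultiChunk(InputStream in, OutputStream out,
>                              int bytesPerChecksum) throws IOException {
>     final int chunksPerPacket = 128;
>     byte[] buf = new byte[chunksPerPacket * bytesPerChecksum];
>     int n;
>     while ((n = in.read(buf)) > 0) {
>       out.write(buf, 0, n);
>     }
>   }
> }
> {noformat}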
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.