    [ https://issues.apache.org/jira/browse/HADOOP-1702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12586474#action_12586474 ]
dhruba borthakur commented on HADOOP-1702:
------------------------------------------
We could certainly fetch the value from the conf; my point was that we should
not insert this configuration parameter into the hadoop-defaults.xml file. Do you agree?
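For illustration, a minimal sketch of fetching the value from the conf with a
hard-coded fallback, so the key (the name below is hypothetical) never needs an
entry in hadoop-defaults.xml:
{noformat}
import org.apache.hadoop.conf.Configuration;

public class BufferConfExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical key name; the default lives in code, so nothing
    // has to be added to hadoop-defaults.xml.
    int bufSize = conf.getInt("dfs.client.buffer.size", 64 * 1024);
    System.out.println("buffer size = " + bufSize);
  }
}
{noformat}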
> Reduce buffer copies when data is written to DFS
> ------------------------------------------------
>
> Key: HADOOP-1702
> URL: https://issues.apache.org/jira/browse/HADOOP-1702
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.14.0
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Attachments: HADOOP-1702.patch
>
>
> HADOOP-1649 adds extra buffering to improve write performance. The following
> diagram shows the buffers, marked (1) through (5). Each extra buffer adds an
> extra copy, since most of our read()/write() calls match io.bytes.per.checksum,
> which is much smaller than the buffer size.
> {noformat}
>      (1)                (2)          (3)                 (5)
>   +---||----[ CLIENT ]---||----<>-----||---[ DATANODE ]---||--<>-> to Mirror
>   |                   (buffer)     (socket)        |  (4)
>   |                                                +--||--+
> =====                                                     |
> =====                                                   =====
> (disk)                                                  =====
> {noformat}
> Currently, the loops that read and write block data handle one checksum chunk
> at a time. By reading multiple chunks at a time, we can remove buffers (1),
> (2), (3), and (5).
> Similarly, some copies can be reduced when clients read data from DFS.
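> As a rough sketch of the multi-chunk idea (the names bytesPerChecksum and
> CHUNKS_PER_READ below are illustrative, not from the attached patch), the copy
> loop could read a whole batch of chunks per read() call while still
> checksumming chunk by chunk:
> {noformat}
> import java.io.IOException;
> import java.io.InputStream;
> import java.io.OutputStream;
> import java.util.zip.CRC32;
>
> public class MultiChunkCopy {
>   // Read many checksum chunks per read() call instead of one, so no
>   // intermediate per-chunk buffer (and copy) is needed.
>   static void copyBlock(InputStream in, OutputStream out,
>                         int bytesPerChecksum) throws IOException {
>     final int CHUNKS_PER_READ = 128;                      // illustrative
>     byte[] buf = new byte[bytesPerChecksum * CHUNKS_PER_READ];
>     CRC32 crc = new CRC32();
>     int n;
>     while ((n = in.read(buf)) > 0) {
>       // Checksums stay per-chunk; only the I/O is batched.
>       for (int off = 0; off < n; off += bytesPerChecksum) {
>         int len = Math.min(bytesPerChecksum, n - off);
>         crc.reset();
>         crc.update(buf, off, len);
>         // ... the 4-byte CRC would be written/verified here ...
>       }
>       out.write(buf, 0, n);
>     }
>   }
> }
> {noformat}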