[
https://issues.apache.org/jira/browse/HADOOP-3205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12797069#action_12797069
]
Hudson commented on HADOOP-3205:
--------------------------------
Integrated in Hadoop-Common-trunk #210 (See
[http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk/210/])
HADOOP-3205. Read multiple chunks directly from FSInputChecker subclass into user
buffers. Contributed by Todd Lipcon.
> Read multiple chunks directly from FSInputChecker subclass into user buffers
> ----------------------------------------------------------------------------
>
> Key: HADOOP-3205
> URL: https://issues.apache.org/jira/browse/HADOOP-3205
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 0.22.0
> Reporter: Raghu Angadi
> Assignee: Todd Lipcon
> Fix For: 0.22.0
>
> Attachments: hadoop-3205.txt, hadoop-3205.txt, hadoop-3205.txt,
> hadoop-3205.txt, hadoop-3205.txt
>
>
> Implementations of FSInputChecker and FSOutputSummer, such as DFS, do not have
> access to the full user buffer. At any time DFS can access only up to 512 bytes,
> even though the user usually reads with a much larger buffer (often controlled by
> io.file.buffer.size). This forces an implementation to double-buffer data if it
> wants to read or write larger chunks of data from the underlying storage.
> We could split the FSInputChecker and FSOutputSummer changes into two
> separate jiras.
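The double-buffering cost described above can be sketched as follows. This is a minimal illustration, not the real Hadoop API: the class and method names are hypothetical, and per-chunk checksum verification is elided. It contrasts a reader that sees only one 512-byte chunk per call (and so must copy through an intermediate chunk buffer) with one that is handed the whole user buffer and can fill several chunks in one pass.

```java
import java.util.Arrays;

// Hypothetical sketch of the buffering issue in HADOOP-3205; names are
// illustrative and checksumming is omitted.
public class MultiChunkReadSketch {
    static final int CHUNK = 512; // bytes covered by one checksum

    // Old style: the implementation sees at most one chunk per call, so a
    // large user read pays one extra copy per chunk (double buffering).
    static int readViaChunkBuffer(byte[] src, int srcOff,
                                  byte[] user, int userOff, int len) {
        byte[] chunkBuf = new byte[CHUNK];
        int copied = 0;
        while (copied < len && srcOff + copied < src.length) {
            int n = Math.min(CHUNK,
                    Math.min(len - copied, src.length - (srcOff + copied)));
            // storage -> intermediate chunk buffer
            System.arraycopy(src, srcOff + copied, chunkBuf, 0, n);
            // intermediate chunk buffer -> user buffer
            System.arraycopy(chunkBuf, 0, user, userOff + copied, n);
            copied += n;
        }
        return copied;
    }

    // New style: the implementation receives the user buffer directly and
    // reads multiple chunks in one pass, skipping the intermediate copy.
    static int readMultipleChunks(byte[] src, int srcOff,
                                  byte[] user, int userOff, int len) {
        int n = Math.min(len, src.length - srcOff);
        System.arraycopy(src, srcOff, user, userOff, n);
        return n;
    }

    public static void main(String[] args) {
        byte[] storage = new byte[4096];
        for (int i = 0; i < storage.length; i++) storage[i] = (byte) i;
        byte[] a = new byte[2048];
        byte[] b = new byte[2048];
        readViaChunkBuffer(storage, 0, a, 0, 2048);
        readMultipleChunks(storage, 0, b, 0, 2048);
        // Both paths deliver the same bytes; the second avoids one copy per chunk.
        System.out.println(Arrays.equals(a, b));
    }
}
```

The patch's benefit is the second shape: with access to the full user buffer, the checker can verify and deliver many 512-byte chunks per call instead of forcing a per-chunk copy.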
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.