[
https://issues.apache.org/jira/browse/HADOOP-3205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12793123#action_12793123
]
Raghu Angadi commented on HADOOP-3205:
--------------------------------------
+1. The patch looks good.
> Like the new comments and test modifications.
Thanks for the good comments and javadoc.
I find it surprising as well that avoiding a buffer copy does not show any
improvement (so is the 13% overhead down to function calls or other Java voodoo?).
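To make the copy in question concrete, here is a minimal sketch of the two read
paths. The names (readChunk, readChunks, chunkBuf, verify) are illustrative
stand-ins, not the actual FSInputChecker internals:

{code:java}
import java.io.IOException;

// Sketch of the two read paths; names are illustrative, not the actual
// FSInputChecker internals.
abstract class ReadPathSketch {
  static final int CHUNK_SIZE = 512;   // io.bytes.per.checksum default
  static final int MAX_CHUNKS = 32;    // the cap discussed above
  private final byte[] chunkBuf = new byte[CHUNK_SIZE];
  private final byte[] checksumBuf = new byte[4 * MAX_CHUNKS]; // 4-byte CRC32 each
  private long pos;

  // Supplied by the concrete filesystem implementation.
  abstract int readChunk(long pos, byte[] buf, int off, int len,
                         byte[] checksum) throws IOException;
  abstract int readChunks(long pos, byte[] buf, int off, int len,
                          byte[] checksums) throws IOException;
  abstract void verify(byte[] buf, int off, int len, byte[] checksums)
      throws IOException;

  // Old path: read one chunk into an internal buffer, verify, then copy out
  // (carry-over of leftover bytes in chunkBuf is elided for brevity).
  int readWithCopy(byte[] userBuf, int off, int len) throws IOException {
    int n = readChunk(pos, chunkBuf, 0, CHUNK_SIZE, checksumBuf);
    verify(chunkBuf, 0, n, checksumBuf);
    int toCopy = Math.min(len, n);
    System.arraycopy(chunkBuf, 0, userBuf, off, toCopy); // the copy in question
    pos += toCopy;
    return toCopy;
  }

  // New path: the implementation fills the user buffer directly, possibly
  // several chunks at a time, and checksums are verified in place.
  int readDirect(byte[] userBuf, int off, int len) throws IOException {
    int n = readChunks(pos, userBuf, off, len, checksumBuf);
    verify(userBuf, off, n, checksumBuf);
    pos += n;
    return n; // no System.arraycopy on this path
  }
}
{code}

The only work the new path saves here is the System.arraycopy itself.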
Regarding the limit of 32 chunks: it is true that your tests didn't show a benefit
beyond that for LocalFileSystem, but I am not sure that justifies limiting an fs
implementation's access to the user buffer. Is it to reduce the memory allocated
for the checksum buffer?
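If that is the concern, the numbers are small. A back-of-envelope sizing, assuming
the default 512-byte chunk (io.bytes.per.checksum) and 4-byte CRC32 checksums:

{code:java}
// Back-of-envelope sizing for the 32-chunk cap, assuming the default
// 512-byte chunk (io.bytes.per.checksum) and 4-byte CRC32 checksums.
public class ChunkCapSizing {
  public static void main(String[] args) {
    int bytesPerChunk = 512;
    int checksumSize = 4;    // CRC32
    int maxChunks = 32;
    // prints: data per call:   16384 bytes
    System.out.println("data per call:   " + maxChunks * bytesPerChunk + " bytes");
    // prints: checksum buffer: 128 bytes
    System.out.println("checksum buffer: " + maxChunks * checksumSize + " bytes");
  }
}
{code}

128 bytes of checksum buffer seems cheap next to an io.file.buffer.size-sized
user buffer, which is what makes the cap worth questioning.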
I will run the tests on my laptop. The jira need not wait for my results.
> Read multiple chunks directly from FSInputChecker subclass into user buffers
> ----------------------------------------------------------------------------
>
> Key: HADOOP-3205
> URL: https://issues.apache.org/jira/browse/HADOOP-3205
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 0.22.0
> Reporter: Raghu Angadi
> Assignee: Todd Lipcon
> Attachments: hadoop-3205.txt, hadoop-3205.txt, hadoop-3205.txt,
> hadoop-3205.txt, hadoop-3205.txt
>
>
> Implementations of FSInputChecker and FSOutputSummer, such as DFS, do not have
> access to the full user buffer. At any time DFS can access only up to 512 bytes,
> even though the user usually reads with a much larger buffer (often controlled by
> io.file.buffer.size). This forces an implementation to double-buffer data if it
> wants to read or write larger chunks of data from the underlying storage.
> We could separate changes for FSInputChecker and FSOutputSummer into two
> separate jiras.
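For the write side mentioned in the description, a sketch of the analogous
single-chunk vs. multi-chunk shape; the signatures are hypothetical, not the
actual FSOutputSummer API:

{code:java}
import java.io.IOException;

// Write-side counterpart, sketched with hypothetical names rather than the
// real FSOutputSummer API.
abstract class WritePathSketch {
  // One 512-byte chunk plus its checksum per call: an implementation must
  // double-buffer if it wants to batch larger writes to storage.
  protected abstract void writeChunk(byte[] buf, int off, int len,
                                     byte[] checksum) throws IOException;

  // Multi-chunk variant analogous to the read side: the implementation sees
  // a large slice of the user's buffer plus one checksum per contained chunk.
  protected abstract void writeChunks(byte[] buf, int off, int len,
                                      byte[] checksums) throws IOException;
}
{code}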