[ https://issues.apache.org/jira/browse/HDFS-927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12828324#action_12828324 ]
stack commented on HDFS-927:
----------------------------
bq. I actually mean "per read per block", i.e. within a single read, there are
3 retries for each block.
That would be better: with the current patch, on a 3-block file, if we
hiccupped on the first read of each block, we'd trip the failures count,
whereas if we'd been counting on a per-block basis, the read would have gone
through.
That said, I'd be fine with Todd's patch -- it's nice and clean, semantically
and code-wise -- going in, and then working on the improvement Tsz suggested
in a new issue.
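
To make the distinction concrete, here is a minimal sketch of global vs.
per-block failure counting. The names (tryRead, MAX_BLOCK_ACQUIRE_FAILURES,
RetryCountingSketch) are hypothetical and the logic is a simplification, not
the actual DFSInputStream code:

{code:java}
import java.io.IOException;
import java.util.Random;

// Illustrative sketch only: contrasts one failure counter shared across a
// whole read with a counter that resets per block. Not real HDFS code.
public class RetryCountingSketch {

  static final int MAX_BLOCK_ACQUIRE_FAILURES = 3;
  static final Random RNG = new Random();

  // Stand-in for one attempt to read a block from a datanode;
  // fails transiently about a quarter of the time.
  static boolean tryRead(int blockId) {
    return RNG.nextInt(4) != 0;
  }

  // Scheme 1 (current patch): one counter for the whole read. On a
  // 3-block file, a single hiccup on each block exhausts the budget
  // even though every block would succeed on its next attempt.
  static void readGlobalCounter(int numBlocks) throws IOException {
    int failures = 0;
    for (int b = 0; b < numBlocks; b++) {
      while (!tryRead(b)) {
        if (++failures >= MAX_BLOCK_ACQUIRE_FAILURES) {
          throw new IOException("Could not obtain block " + b);
        }
      }
    }
  }

  // Scheme 2 ("per read per block"): the counter resets for each block,
  // so every block gets its own budget of retries.
  static void readPerBlockCounter(int numBlocks) throws IOException {
    for (int b = 0; b < numBlocks; b++) {
      int failures = 0;
      while (!tryRead(b)) {
        if (++failures >= MAX_BLOCK_ACQUIRE_FAILURES) {
          throw new IOException("Could not obtain block " + b);
        }
      }
    }
  }

  public static void main(String[] args) {
    try {
      readPerBlockCounter(3); // tolerates scattered hiccups across blocks
      readGlobalCounter(3);   // can abort after 3 hiccups total
      System.out.println("both reads succeeded");
    } catch (IOException e) {
      System.out.println("read aborted: " + e.getMessage());
    }
  }
}
{code}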
> DFSInputStream retries too many times for new block locations
> -------------------------------------------------------------
>
> Key: HDFS-927
> URL: https://issues.apache.org/jira/browse/HDFS-927
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client
> Affects Versions: 0.21.0, 0.22.0
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Priority: Critical
> Attachments: hdfs-927.txt
>
>
> I think this is a regression caused by HDFS-127 -- DFSInputStream is supposed
> to go back to the NN at most max.block.acquires times, but in trunk it goes
> back twice as many -- the default is 3, but I am counting 7 calls to
> getBlockLocations before an exception is thrown.
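
For what it's worth, one plausible way a cap of 3 turns into 7 namenode calls
is an initial location fetch plus two separate refetch paths inside the retry
loop: 1 + 2*3 = 7. The sketch below is purely illustrative; the structure and
names are assumptions, not the actual trunk code:

{code:java}
import java.io.IOException;

// Hypothetical sketch of how a retry loop capped at 3 can still issue 7
// getBlockLocations calls when two code paths each refetch locations on
// every failure. Not the actual DFSInputStream code.
public class DoubleRefetchSketch {

  static final int MAX_BLOCK_ACQUIRE_FAILURES = 3;
  static int getBlockLocationsCalls = 0;

  // Stand-in for the getBlockLocations RPC to the namenode.
  static void getBlockLocations() {
    getBlockLocationsCalls++;
  }

  // Simulate a block that never becomes readable.
  static boolean readFromDatanode() {
    return false;
  }

  static void read() throws IOException {
    getBlockLocations(); // initial fetch when the stream is opened
    for (int failures = 0; failures < MAX_BLOCK_ACQUIRE_FAILURES; failures++) {
      if (readFromDatanode()) {
        return;
      }
      getBlockLocations(); // refetch while choosing a new datanode
      getBlockLocations(); // a second path refreshes the same locations
    }
    throw new IOException("Could not obtain block after "
        + getBlockLocationsCalls + " getBlockLocations calls");
  }

  public static void main(String[] args) {
    try {
      read();
    } catch (IOException e) {
      System.out.println(e.getMessage()); // reports 7 calls, not 3
    }
  }
}
{code}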