[
https://issues.apache.org/jira/browse/HDFS-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13261902#comment-13261902
]
Aaron T. Myers commented on HDFS-3318:
--------------------------------------
I think the patch largely looks good. I'm confused, however, by the change in
how "filelength" is determined.
It changed from this:
{code}
final String cl = connection.getHeaderField(StreamFile.CONTENT_LENGTH);
filelength = (cl == null) ? -1 : Long.parseLong(cl);
{code}
To this:
{code}
final String cl = connection.getHeaderField(StreamFile.CONTENT_LENGTH);
...
final long streamlength = Long.parseLong(cl);
filelength = startPos + streamlength;
{code}
Why is the filelength now offset by startPos? That change seems unrelated to
this issue. Or am I missing something?
+1 once this question is addressed.
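For reference, here is a minimal, self-contained sketch of the only reading of
the new arithmetic that makes sense to me, assuming (purely my assumption, not
verified against the patch) that the GET carries a "Range: bytes=<startPos>-"
header, so the returned Content-Length covers only the bytes from startPos to
the end of the file:
{code}
// Hypothetical illustration -- not code from the patch.
public class RangedLengthExample {
  public static void main(String[] args) {
    long startPos = 6L * 1024 * 1024 * 1024;           // resume offset: 6 GiB
    long streamlength = Long.parseLong("4294967296");  // Content-Length of the remaining bytes: 4 GiB
    long filelength = startPos + streamlength;         // total length only if the request was ranged
    System.out.println(filelength);                    // 10737418240 (10 GiB)
  }
}
{code}
If that assumption does not hold, adding startPos would overstate the file
length, which is why I'd like to understand the intent here.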
> Hftp hangs on transfers >2GB
> ----------------------------
>
> Key: HDFS-3318
> URL: https://issues.apache.org/jira/browse/HDFS-3318
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client
> Affects Versions: 0.24.0, 0.23.3, 2.0.0
> Reporter: Daryn Sharp
> Assignee: Daryn Sharp
> Priority: Blocker
> Attachments: HDFS-3318-1.patch, HDFS-3318.patch
>
>
> Hftp transfers >2GB hang after the transfer is complete. The problem appears
> to be caused by Java internally using an int for the content length. When the
> length overflows 2GB, the bounds of reads on the input stream are no longer
> checked. The client continues reading after all data has been received and
> blocks until the server times out the connection -- _many_ minutes later. In
> conjunction with hftp timeouts, all transfers >2GB fail with a read timeout.
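A minimal sketch of the int overflow described above (illustrative only; this
is not the JDK's internal HTTP code or the HDFS client code):
{code}
// Hypothetical illustration: a 32-bit int cannot represent a
// Content-Length above 2GiB - 1, so it wraps to a negative value.
public class ContentLengthOverflow {
  public static void main(String[] args) {
    long contentLength = 3L * 1024 * 1024 * 1024;  // a 3 GiB transfer
    int asInt = (int) contentLength;               // wraps negative
    System.out.println(asInt);                     // -1073741824
    // With a negative "remaining" count, a bounds check such as
    // "if (bytesRead >= remaining) close()" never fires, so the client
    // keeps reading until the server times the connection out.
    System.out.println(Long.parseLong("3221225472")); // parsing as long is safe
  }
}
{code}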