[ https://issues.apache.org/jira/browse/HDFS-3794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ravi Prakash updated HDFS-3794:
-------------------------------

    Attachment: HDFS-3794.patch

Attaching a patch that fixes the issue. The fix is too trivial to warrant a unit test (which would have to be pretty complicated :'( ... I tried briefly). Here's the testing I did:
1. Small file with offset. Worked.
2. Big file (multiple blocks) with offset. Worked.
3. Big file with offset greater than the file size. Correctly threw a RemoteException.

> WebHDFS Open used with Offset returns the original (and incorrect) Content Length in the HTTP Header.
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3794
>                 URL: https://issues.apache.org/jira/browse/HDFS-3794
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: webhdfs
>    Affects Versions: 0.23.3, 2.0.0-alpha, 2.1.0-alpha
>            Reporter: Ravi Prakash
>            Assignee: Ravi Prakash
>         Attachments: HDFS-3794.patch
>
>
> When an offset is specified, the HTTP Content-Length header still contains the original file size. E.g. if the original file is 100 bytes and the specified offset is 10, then the HTTP Content-Length ought to be 90. Currently it is still returned as 100.
> This causes curl to fail with error 18, and Java clients to throw a ConnectionClosedException.
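The arithmetic behind the fix is simple: when a read starts at an offset, the Content-Length header must report the bytes remaining after that offset, not the full file size, and an offset past the end of the file should be rejected. A minimal sketch of that calculation (the method and class names here are illustrative, not the actual WebHDFS code):

```java
public class ContentLengthExample {
    // Hypothetical helper: Content-Length for a read starting at `offset`
    // is the remaining byte count, not the total file size.
    static long contentLength(long fileSize, long offset) {
        if (offset < 0 || offset > fileSize) {
            // Mirrors the server-side behavior of rejecting bad offsets
            // (WebHDFS surfaces this to the client as a RemoteException).
            throw new IllegalArgumentException("offset out of range: " + offset);
        }
        return fileSize - offset;
    }

    public static void main(String[] args) {
        // The example from the bug report: a 100-byte file read from
        // offset 10 leaves 90 bytes, so Content-Length should be 90.
        System.out.println(contentLength(100L, 10L)); // prints 90
    }
}
```

Before the patch the header carried `fileSize` (100 in the example above) regardless of the offset, so clients like curl read fewer body bytes than the header promised and reported a premature end of data.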