[ https://issues.apache.org/jira/browse/HDFS-3577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13416895#comment-13416895 ]
Hadoop QA commented on HDFS-3577:
---------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12536946/h3577_20120717.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 1 new or modified test
file.
+1 javac. The applied patch does not increase the total number of javac
compiler warnings.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9)
warnings.
+1 release audit. The applied patch does not increase the total number of
release audit warnings.
-1 core tests. The patch failed these unit tests in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
org.apache.hadoop.hdfs.TestPersistBlocks
+1 contrib tests. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/2851//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2851//console
This message is automatically generated.
> WebHdfsFileSystem can not read files larger than 24KB
> -----------------------------------------------------
>
> Key: HDFS-3577
> URL: https://issues.apache.org/jira/browse/HDFS-3577
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client
> Affects Versions: 0.23.3, 2.0.0-alpha
> Reporter: Alejandro Abdelnur
> Assignee: Tsz Wo (Nicholas), SZE
> Priority: Blocker
> Attachments: h3577_20120705.patch, h3577_20120708.patch,
> h3577_20120714.patch, h3577_20120716.patch, h3577_20120717.patch
>
>
> When reading a file large enough that the HTTP server backing
> webhdfs/httpfs responds with chunked transfer encoding (more than 24K in
> the case of webhdfs), the WebHdfsFileSystem client fails with an
> IOException whose message is *Content-Length header is missing*.
> It looks like WebHdfsFileSystem delegates opening of the input stream to
> the *ByteRangeInputStream.URLOpener* class, which checks for the
> *Content-Length* header; with chunked transfer encoding the
> *Content-Length* header is not present, so the
> *URLOpener.openInputStream()* method throws an exception.
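For illustration only, below is a minimal Java sketch of the kind of Content-Length check described above, together with one possible fallback for chunked responses. This is not the actual ByteRangeInputStream.URLOpener code; the class and method names, the OPEN URL in the usage example, and the fallback on the Transfer-Encoding header are assumptions.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative sketch only: shows the kind of Content-Length check the
// description attributes to ByteRangeInputStream.URLOpener, plus a possible
// fallback for chunked transfer encoding. Names here are hypothetical.
public class ContentLengthCheckSketch {

  static InputStream openInputStream(URL url) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.connect();

    // The reported failure mode: requiring a Content-Length header.
    String contentLength = conn.getHeaderField("Content-Length");
    if (contentLength == null) {
      // Hypothetical fix direction: with chunked transfer encoding the
      // server sends no Content-Length, so accept that case instead of
      // failing outright.
      String transferEncoding = conn.getHeaderField("Transfer-Encoding");
      if (!"chunked".equalsIgnoreCase(transferEncoding)) {
        throw new IOException("Content-Length header is missing");
      }
    }
    return conn.getInputStream();
  }

  public static void main(String[] args) throws IOException {
    // Example usage against a hypothetical WebHDFS OPEN URL.
    URL url = new URL("http://namenode:50070/webhdfs/v1/tmp/file?op=OPEN");
    try (InputStream in = openInputStream(url)) {
      byte[] buf = new byte[8192];
      while (in.read(buf) != -1) {
        // consume the stream
      }
    }
  }
}
{code}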