[ https://issues.apache.org/jira/browse/HDFS-4649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13617656#comment-13617656 ]
Hadoop QA commented on HDFS-4649:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12576119/HDFS-4649.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.

{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.fs.TestFcHdfsSymlink

{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/4168//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4168//console
This message is automatically generated.
> Webhdfs cannot list large directories
> -------------------------------------
>
> Key: HDFS-4649
> URL: https://issues.apache.org/jira/browse/HDFS-4649
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode, security, webhdfs
> Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
> Reporter: Daryn Sharp
> Assignee: Daryn Sharp
> Priority: Blocker
> Attachments: HDFS-4649.branch-23.patch, HDFS-4649.branch-23.patch, HDFS-4649.patch
>
>
> Webhdfs returns malformed JSON for directories whose listing exceeds the
> configured {{dfs.ls.limit}} value. The streaming object returned by
> {{NamenodeWebhdfsMethods#getListingStream}} repeatedly calls
> {{getListing}} for each segment of the directory listing.
> {{getListingStream}} runs within the remote user's ugi and acquires the first
> segment of the directory, then returns a streaming object. That streaming
> object is later executed _outside of the user's ugi_. Luckily it runs as the
> host service principal (i.e. {{host/namenode@REALM}}), so the result is
> permission denied for the "host" user:
> {noformat}
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=host, access=EXECUTE, inode="/path":someuser:group:drwx------
> {noformat}
> The exception causes the streamer to prematurely abort the JSON output,
> leaving it malformed. Meanwhile, the client sees the cryptic:
> {noformat}
> java.lang.IllegalStateException: unexpected end of array
> at org.mortbay.util.ajax.JSON.parseArray(JSON.java:902)
> [...]
> at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.jsonParse(WebHdfsFileSystem.java:242)
> at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:441)
> at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.listStatus(WebHdfsFileSystem.java:717)
> [...]
> {noformat}
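The failure mode described above — a lazily evaluated listing created inside the user's ugi but consumed after that context has been restored — can be sketched without any Hadoop dependencies. This is a minimal illustration, not the actual WebHDFS code; {{LazyDoAsSketch}}, {{listSegment}}, and the string-based "current user" are hypothetical stand-ins for {{UserGroupInformation#doAs}} and {{FSNamesystem#getListing}}:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

/**
 * Minimal sketch of the HDFS-4649 pitfall: the first listing segment is
 * fetched eagerly while impersonating the remote user, but the iterator
 * fetches later segments after the impersonation has been undone, so
 * those calls run as the service principal and are denied.
 * All names here are illustrative, not the real WebHDFS classes.
 */
public class LazyDoAsSketch {
    // Stand-in for "who is the current ugi" (normally UserGroupInformation).
    static String currentUser = "host";          // service principal

    // Stand-in for the namenode's getListing: enforces permissions.
    static List<String> listSegment(int offset) {
        if (!currentUser.equals("someuser")) {
            throw new SecurityException(
                "Permission denied: user=" + currentUser + ", access=EXECUTE");
        }
        List<String> seg = new ArrayList<>();
        seg.add("entry-" + offset);
        return seg;
    }

    // Mimics getListingStream: first segment is fetched inside the user's
    // "ugi"; the returned iterator fetches the rest lazily.
    static Iterator<List<String>> getListingStream() {
        String saved = currentUser;
        currentUser = "someuser";                // enter the user's "ugi"
        try {
            final List<String> first = listSegment(0);
            return new Iterator<List<String>>() {
                int offset = 0;
                boolean firstDone = false;
                public boolean hasNext() { return offset < 2; }
                public List<String> next() {
                    if (!firstDone) { firstDone = true; offset++; return first; }
                    offset++;
                    return listSegment(offset);  // runs OUTSIDE the ugi!
                }
            };
        } finally {
            currentUser = saved;                 // leave the user's "ugi"
        }
    }

    public static void main(String[] args) {
        Iterator<List<String>> it = getListingStream();
        System.out.println("first segment: " + it.next());   // succeeds
        try {
            it.next();                           // later segment, as "host"
        } catch (SecurityException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The first {{next()}} succeeds because its segment was captured inside the impersonation block; the second throws the permission-denied error for "host", mirroring how the streamer aborts mid-output and leaves the JSON malformed.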
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira