[ https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796345#comment-16796345 ]
Hadoop QA commented on HDFS-14234:
----------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 7s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 43s{color} | {color:orange} root: The patch generated 271 new + 462 unchanged - 4 fixed = 733 total (was 466) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 31s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 17 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 73 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 28s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 56s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 43s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}259m 21s{color} | {color:black} {color} |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
| | Nullcheck of query at line 237 of value previously dereferenced in org.apache.hadoop.hdfs.server.common.HostRestrictingAuthorizationFilter.handleInteraction(HostRestrictingAuthorizationFilter$HttpInteraction) At HostRestrictingAuthorizationFilter.java:[line 237] |
| | Should org.apache.hadoop.hdfs.server.common.HostRestrictingAuthorizationFilter$Rule be a _static_ inner class? At HostRestrictingAuthorizationFilter.java:[lines 82-92] |
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
| | hadoop.tools.TestHdfsConfigFields |
| | hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter |
| | hadoop.hdfs.server.datanode.TestBPOfferService |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14234 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12962984/0002-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs compile javac javadoc mvninstall shadedclient findbugs checkstyle |
| uname | Linux b93dac650881 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c9e50c4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| shellcheck | v0.4.6 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/26492/artifact/out/diff-checkstyle-root.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/26492/artifact/out/whitespace-eol.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/26492/artifact/out/whitespace-tabs.txt |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/26492/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/26492/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/26492/testReport/ |
| Max. process+thread count | 2646 (vs. ulimit of 10000) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/26492/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.

> Limit WebHDFS to specific user, host, directory triples
> --------------------------------------------------------
>
>                 Key: HDFS-14234
>                 URL: https://issues.apache.org/jira/browse/HDFS-14234
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: webhdfs
>            Reporter: Clay B.
>            Assignee: Clay B.
>            Priority: Trivial
>        Attachments: 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 0002-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
> For those who have multiple network zones, it is useful to prevent certain zones from downloading data from WebHDFS while still allowing uploads. This enables using HDFS as a dropbox for data: data goes in but cannot be pulled back out. (Motivation is presented further in [StrangeLoop 2018 Of Data Dropboxes and Data Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html].) Ideally, one could stop the datanodes from returning data via an [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File] while still allowing things such as [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum] and [{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File].
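
For readers skimming the quoted description above: the sketch below is only a rough illustration of the behavior being asked for, not the HostRestrictingAuthorizationFilter added by the attached patches. It shows a minimal javax.servlet {{Filter}} that rejects the WebHDFS {{OPEN}} operation for callers in a restricted network zone while letting {{CREATE}} and {{GETFILECHECKSUM}} pass through; the class name, the subnet prefix, and the zone test are assumptions made up for the example.

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: deny WebHDFS OPEN from a restricted network zone,
// while allowing writes (CREATE) and metadata calls (GETFILECHECKSUM).
public class RestrictedZoneWebHdfsFilter implements Filter {

  // Assumed example: the "dropbox" zone that may upload but not download.
  private static final String RESTRICTED_PREFIX = "10.20.";

  @Override
  public void init(FilterConfig conf) { }

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest http = (HttpServletRequest) req;   // assumes an HTTP request
    String op = http.getParameter("op");                  // WebHDFS operation, e.g. OPEN, CREATE
    String remote = http.getRemoteAddr();                 // caller's address

    boolean restrictedZone = remote != null && remote.startsWith(RESTRICTED_PREFIX);
    boolean isRead = "OPEN".equalsIgnoreCase(op);

    if (restrictedZone && isRead) {
      // Data may go into this zone, but cannot be pulled back out.
      ((HttpServletResponse) resp).sendError(HttpServletResponse.SC_FORBIDDEN,
          "OPEN is not permitted from this network zone");
      return;
    }
    chain.doFilter(req, resp);   // CREATE, GETFILECHECKSUM, etc. pass through
  }

  @Override
  public void destroy() { }
}
{code}

In a real deployment the rules would presumably be read from configuration as the user, host, directory triples the issue title names, rather than a hard-coded prefix.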
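
On the two new findbugs items in the report above: both are common patterns, and the snippet below is a generic, hypothetical illustration of how each is usually resolved; it is not code from the patch. The first is the "nullcheck of value previously dereferenced" shape, addressed by deciding on null before any use of the value; the second is the "should be a _static_ inner class" hint, addressed by marking a nested class static when it never touches the enclosing instance.

{code:java}
// Hypothetical illustration of the two findbugs patterns flagged above.
public class FilterSketch {

  // Pattern 1: "Nullcheck of value previously dereferenced".
  // Wrong shape: dereference first, null-check afterwards, e.g.
  //   int n = query.length();
  //   if (query != null) { ... }
  // Fixed shape: handle null before any use of the value.
  static String firstParam(String query) {
    if (query == null || query.isEmpty()) {
      return "";
    }
    int amp = query.indexOf('&');
    return amp < 0 ? query : query.substring(0, amp);
  }

  // Pattern 2: "Should ... be a _static_ inner class?"
  // A nested class that never uses the outer instance should be static,
  // so its instances do not carry a hidden reference to the enclosing object.
  static class Rule {
    final String user;
    final String path;

    Rule(String user, String path) {
      this.user = user;
      this.path = path;
    }
  }
}
{code}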