[ https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16871865#comment-16871865 ]
Hadoop QA commented on HADOOP-16350:
------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 13m 58s | Docker mode activated. |

Prechecks
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |

branch-2.8 Compile Tests
| 0 | mvndep | 1m 47s | Maven dependency ordering for branch |
| +1 | mvninstall | 9m 10s | branch-2.8 passed |
| +1 | compile | 7m 50s | branch-2.8 passed with JDK v1.7.0_95 |
| +1 | compile | 6m 20s | branch-2.8 passed with JDK v1.8.0_212 |
| +1 | checkstyle | 1m 3s | branch-2.8 passed |
| +1 | mvnsite | 2m 27s | branch-2.8 passed |
| -1 | findbugs | 1m 34s | hadoop-common-project/hadoop-common in branch-2.8 has 1 extant Findbugs warning. |
| -1 | findbugs | 1m 32s | hadoop-hdfs-project/hadoop-hdfs-client in branch-2.8 has 1 extant Findbugs warning. |
| +1 | javadoc | 2m 30s | branch-2.8 passed with JDK v1.7.0_95 |
| +1 | javadoc | 1m 50s | branch-2.8 passed with JDK v1.8.0_212 |

Patch Compile Tests
| 0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 51s | the patch passed |
| +1 | compile | 7m 25s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 7m 25s | the patch passed |
| +1 | compile | 6m 39s | the patch passed with JDK v1.8.0_212 |
| +1 | javac | 6m 39s | the patch passed |
| +1 | checkstyle | 1m 4s | the patch passed |
| +1 | mvnsite | 2m 34s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| -1 | findbugs | 1m 46s | hadoop-hdfs-project/hadoop-hdfs-client generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) |
| +1 | javadoc | 2m 32s | the patch passed with JDK v1.7.0_95 |
| +1 | javadoc | 1m 54s | the patch passed with JDK v1.8.0_212 |

Other Tests
| +1 | unit | 8m 41s | hadoop-common in the patch passed. |
| +1 | unit | 1m 14s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 81m 16s | hadoop-hdfs in the patch failed. |
| -1 | asflicense | 0m 42s | The patch generated 1 ASF License warning. |
| | | 176m 52s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
| | Load of known null value in org.apache.hadoop.hdfs.HdfsKMSUtil.getKeyProviderUri(UserGroupInformation, URI, String, Configuration) at HdfsKMSUtil.java [line 177] |
| | Redundant nullcheck of keyProviderUriStr, which is known to be null in org.apache.hadoop.hdfs.HdfsKMSUtil.getKeyProviderUri(UserGroupInformation, URI, String, Configuration) at HdfsKMSUtil.java [line 177] |
| Unreaped Processes | hadoop-hdfs:1 |
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.TestEncryptionZones |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.TestEncryptionZonesWithKMS |
| Timed out junit tests | org.apache.hadoop.hdfs.TestPread |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b93746a |
| JIRA Issue | HADOOP-16350 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12972779/HADOOP-16350-branch-2.8.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux fa94cdb9eaa0 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.8 / f388680 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| Multi-JDK versions | /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 /usr/lib/jvm/java-8-openjdk-amd64:1.8.0_212 |
| findbugs | v3.0.0 |
| findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/16347/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html |
| findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/16347/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html |
| findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/16347/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html |
| Unreaped Processes Log | https://builds.apache.org/job/PreCommit-HADOOP-Build/16347/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-reaper.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/16347/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16347/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/16347/artifact/out/patch-asflicense-problems.txt |
| Max. process+thread count | 1841 (vs. ulimit of 10000) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16347/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.
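The two new FindBugs warnings both point at HdfsKMSUtil.getKeyProviderUri, line 177. The actual patch code is not reproduced here; the following is a minimal self-contained sketch (all names illustrative, not Hadoop APIs) of the shape that typically triggers these warnings: a variable initialized to null is then null-checked on a path where it is provably still null.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch only -- not the actual HdfsKMSUtil code.
 * Shows the shape behind FindBugs' "Redundant nullcheck of value known
 * to be null" and "Load of known null value" warnings.
 */
public class NullCheckExample {

    /** Stand-in for a Hadoop Configuration lookup (hypothetical). */
    static final Map<String, String> CONF = new HashMap<>();

    /**
     * Flagged shape: keyProviderUriStr is provably still null when the
     * check runs, so the null test is dead code and reading the variable
     * is a load of a known null value.
     */
    static String flaggedLookup(String key) {
        String keyProviderUriStr = null;
        if (keyProviderUriStr == null) {  // always true: redundant null check
            keyProviderUriStr = CONF.get(key);
        }
        return keyProviderUriStr;
    }

    /** Equivalent logic with the dead null check removed. */
    static String fixedLookup(String key) {
        return CONF.get(key);
    }

    public static void main(String[] args) {
        CONF.put("dfs.encryption.key.provider.uri", "kms://http@kms-host:9600/kms");
        // Both variants behave identically; only the flagged one upsets FindBugs.
        System.out.println(flaggedLookup("dfs.encryption.key.provider.uri"));
        System.out.println(fixedLookup("dfs.encryption.key.provider.uri"));
    }
}
```

FindBugs treats the two variants as behaviorally identical; the fix is usually just deleting the dead branch rather than suppressing the warning.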
> Ability to tell HDFS client not to request KMS Information from NameNode
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-16350
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16350
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: common, kms
>    Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>            Reporter: Greg Senia
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: HADOOP-16350-branch-2.8.patch, HADOOP-16350.00.patch, HADOOP-16350.01.patch, HADOOP-16350.02.patch, HADOOP-16350.03.patch, HADOOP-16350.04.patch, HADOOP-16350.05.patch
>
>
> Before HADOOP-14104, remote KMS server URIs (and the associated remote KMS delegation tokens) were not requested from the remote NameNode. Many customers relied on this as a security feature to prevent TDE/Encryption Zone data from being distcp'ed to remote clusters, while still being able to distcp data residing in folders that are not encrypted with a KeyProvider/Encryption Zone.
> After upgrading to a version of Hadoop that contains HADOOP-14104, distcp now fails, because we, along with other customers (HDFS-13696), do not allow KMS server endpoints to be exposed outside our cluster network: the data in these TDE zones is critical and must not be distcp'ed between clusters.
> I propose adding a new code block gated by the custom property "hadoop.security.kms.client.allow.remote.kms". It will default to "true", preserving the current HADOOP-14104 behaviour, but when set to "false" it will make this area of the code operate as it did before HADOOP-14104. I can see the value in HADOOP-14104, but the pre-existing behaviour should at least have been kept behind an option, so that the Hadoop/KMS code can skip requesting remote KMS server URIs (and the delegation tokens that follow) when not operating on encrypted zones.
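The description proposes the toggle but does not show how it would be configured. A minimal sketch, assuming the property name and default proposed in this JIRA (the property does not exist in any released Hadoop; everything here is hypothetical until a patch lands):

```xml
<!-- core-site.xml on the cluster submitting distcp.
     Hypothetical: "hadoop.security.kms.client.allow.remote.kms" is the
     property proposed in this JIRA, not an existing Hadoop key.
     "true" (the proposed default) keeps HADOOP-14104 behaviour;
     "false" would stop the client from asking the remote NameNode for a
     KMS provider URI and then fetching a KMS delegation token. -->
<property>
  <name>hadoop.security.kms.client.allow.remote.kms</name>
  <value>false</value>
</property>
```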
> The following error occurs when KMS server traffic is not allowed between cluster networks, per an enterprise security standard that cannot be changed (the request for an exception was denied). The only remaining solution is a feature that avoids requesting the tokens at all.
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions {atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, overwrite=false, append=false, useDiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, numListstatusThreads=0, maxMaps=20, mapBandwidth=100, sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], preserveRawXattrs=false, atomicWorkPath=null, logPath=null, sourceFileListing=null, sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt], targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt, targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 556079 for gss2002 on ha-hdfs:tech
> 19/05/29 14:06:10 ERROR tools.DistCp: Exception encountered
> java.io.IOException: java.net.NoRouteToHostException: No route to host (Host unreachable)
> 	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:1029)
> 	at org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:110)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2407)
> 	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:140)
> 	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
> 	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
> 	at org.apache.hadoop.tools.mapred.CopyOutputFormat.checkOutputSpecs(CopyOutputFormat.java:124)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> 	at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:193)
> 	at org.apache.hadoop.tools.DistCp.execute(DistCp.java:155)
> 	at org.apache.hadoop.tools.DistCp.run(DistCp.java:128)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> 	at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)
> Caused by: java.net.NoRouteToHostException: No route to host (Host unreachable)
> 	at java.net.PlainSocketImpl.socketConnect(Native Method)
> 	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
> 	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> 	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> 	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> 	at java.net.Socket.connect(Socket.java:589)
> 	at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
> 	at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:339)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:357)
> 	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
> 	at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
> 	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
> 	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
> 	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:188)
> 	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:133)
> 	at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
> 	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:299)
> 	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:171)
> 	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
> 	at org.apache.hadoop.crypto.key.kms.KMSClientProvider$4.run(KMSClientProvider.java:1016)
> 	at org.apache.hadoop.crypto.key.kms.KMSClientProvider$4.run(KMSClientProvider.java:1011)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
> 	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:1011)
> 	... 19 more
> {code}

-- This message was sent by Atlassian JIRA (v7.6.3#76005)