[
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16863065#comment-16863065
]
Greg Senia commented on HADOOP-16350:
-------------------------------------
Additional Information showing the working distcp:
*Working run, with the custom property set to false: as seen below, a
delegation token is requested only for the local KMS server, which is how this
operated before HADOOP-14104*
[gss2002@ha21t51en ~]$ hadoop distcp
*-Dhadoop.security.kms.client.allow.remote.kms=false*
-Ddfs.namenode.kerberos.principal.pattern=*
-Dmapreduce.job.hdfs-servers.token-renewal.exclude=unit
hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt
hdfs://unit/processed/public/opendata/samples/distcp_test/distcp_file2.txt
19/06/13 09:15:30 INFO tools.DistCp: Input Options:
DistCpOptions\{atomicCommit=false, syncFolder=false, deleteMissing=false,
ignoreFailures=false, overwrite=false, append=false, useDiff=false,
fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true,
numListstatusThreads=0, maxMaps=20, mapBandwidth=100,
sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[],
preserveRawXattrs=false, atomicWorkPath=null, logPath=null,
sourceFileListing=null,
sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
targetPath=hdfs://unit/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
targetPathExists=true, filtersFile='null', verboseLog=false}
19/06/13 09:15:30 INFO client.AHSProxy: Connecting to Application History
server at ha21t53mn.tech.hdp.example.com/10.70.33.2:10200
19/06/13 09:15:31 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token
561537 for gss2002 on ha-hdfs:tech
19/06/13 09:15:31 INFO security.TokenCache: Got dt for hdfs://tech; Kind:
HDFS_DELEGATION_TOKEN, Service: ha-hdfs:tech, Ident: (HDFS_DELEGATION_TOKEN
token 561537 for gss2002)
19/06/13 09:15:31 INFO security.TokenCache: Got dt for hdfs://tech; Kind:
kms-dt, Service: ha21t53en.tech.hdp.example.com:9292, Ident: (owner=gss2002,
renewer=yarn, realUser=, issueDate=1560431731476, maxDate=1561036531476,
sequenceNumber=7729, masterKeyId=91)
19/06/13 09:15:32 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1;
dirCnt = 0
19/06/13 09:15:32 INFO tools.SimpleCopyListing: Build file listing completed.
19/06/13 09:15:32 INFO tools.DistCp: Number of paths in the copy list: 1
19/06/13 09:15:32 INFO tools.DistCp: Number of paths in the copy list: 1
19/06/13 09:15:32 INFO client.AHSProxy: Connecting to Application History
server at ha21t53mn.tech.hdp.example.com/10.70.33.2:10200
19/06/13 09:15:32 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token
5141150 for gss2002 on ha-hdfs:unit
19/06/13 09:15:32 INFO security.TokenCache: Got dt for hdfs://unit; Kind:
HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN
token 5141150 for gss2002)
19/06/13 09:15:32 INFO client.ConfiguredRMFailoverProxyProvider: Failing over
to rm2
19/06/13 09:15:33 INFO mapreduce.JobSubmitter: number of splits:1
19/06/13 09:15:33 INFO mapreduce.JobSubmitter: Submitting tokens for job:
job_1560183404146_0051
19/06/13 09:15:33 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN,
Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN token 5141150 for gss2002)
19/06/13 09:15:33 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN,
Service: ha-hdfs:tech, Ident: (HDFS_DELEGATION_TOKEN token 561537 for gss2002)
*19/06/13 09:15:33 INFO mapreduce.JobSubmitter: Kind: kms-dt, Service:
ha21t53en.tech.hdp.example.com:9292, Ident: (owner=gss2002, renewer=yarn,
realUser=, issueDate=1560431731476, maxDate=1561036531476, sequenceNumber=7729,
masterKeyId=91)*
19/06/13 09:15:33 INFO impl.TimelineClientImpl: Timeline service address:
http://ha21t53mn.tech.hdp.example.com:8188/ws/v1/timeline/
19/06/13 09:15:34 INFO impl.YarnClientImpl: Submitted application
application_1560183404146_0051
19/06/13 09:15:34 INFO mapreduce.Job: The url to track the job:
http://ha21t53mn.tech.hdp.example.com:8088/proxy/application_1560183404146_0051/
19/06/13 09:15:34 INFO tools.DistCp: DistCp job-id: job_1560183404146_0051
19/06/13 09:15:34 INFO mapreduce.Job: Running job: job_1560183404146_0051
19/06/13 09:15:46 INFO mapreduce.Job: Job job_1560183404146_0051 running in
uber mode : false
19/06/13 09:15:46 INFO mapreduce.Job: map 0% reduce 0%
19/06/13 09:15:55 INFO mapreduce.Job: map 100% reduce 0%
19/06/13 09:15:55 INFO mapreduce.Job: Job job_1560183404146_0051 completed
successfully
19/06/13 09:15:55 INFO mapreduce.Job: Counters: 33
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=177893
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=472
HDFS: Number of bytes written=96
HDFS: Number of read operations=21
HDFS: Number of large read operations=0
HDFS: Number of write operations=5
Job Counters
Launched map tasks=1
Other local map tasks=1
Total time spent by all maps in occupied slots (ms)=6416
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=6416
Total vcore-milliseconds taken by all map tasks=6416
Total megabyte-milliseconds taken by all map tasks=6569984
Map-Reduce Framework
Map input records=1
Map output records=0
Input split bytes=119
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=107
CPU time spent (ms)=2260
Physical memory (bytes) snapshot=358023168
Virtual memory (bytes) snapshot=4394815488
Total committed heap usage (bytes)=460324864
File Input Format Counters
Bytes Read=257
File Output Format Counters
Bytes Written=0
org.apache.hadoop.tools.mapred.CopyMapper$Counter
BYTESCOPIED=96
BYTESEXPECTED=96
COPY=1
[gss2002@ha21t51en ~]$
*Non-working run, without the new custom property:
when -Dhadoop.security.kms.client.allow.remote.kms=false is not set, it
defaults to true and the distcp fails, because the client attempts to get a
remote KMS delegation token; traffic to that server is blocked by firewalls,
so the distcp fails as it currently does with HADOOP-14104.*
[gss2002@ha21t51en ~]$ hadoop distcp
-Ddfs.namenode.kerberos.principal.pattern=*
-Dmapreduce.job.hdfs-servers.token-renewal.exclude=unit
hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt
hdfs://unit/processed/public/opendata/samples/distcp_test/distcp_file2.txt
19/06/13 09:16:17 INFO tools.DistCp: Input Options:
DistCpOptions\{atomicCommit=false, syncFolder=false, deleteMissing=false,
ignoreFailures=false, overwrite=false, append=false, useDiff=false,
fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true,
numListstatusThreads=0, maxMaps=20, mapBandwidth=100,
sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[],
preserveRawXattrs=false, atomicWorkPath=null, logPath=null,
sourceFileListing=null,
sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
targetPath=hdfs://unit/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
targetPathExists=true, filtersFile='null', verboseLog=false}
19/06/13 09:16:17 INFO client.AHSProxy: Connecting to Application History
server at ha21t53mn.tech.hdp.example.com/10.70.33.2:10200
19/06/13 09:16:17 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token
561538 for gss2002 on ha-hdfs:tech
19/06/13 09:16:18 INFO security.TokenCache: Got dt for hdfs://tech; Kind:
HDFS_DELEGATION_TOKEN, Service: ha-hdfs:tech, Ident: (HDFS_DELEGATION_TOKEN
token 561538 for gss2002)
19/06/13 09:16:18 INFO security.TokenCache: Got dt for hdfs://tech; Kind:
kms-dt, Service: ha21t53en.tech.hdp.example.com:9292, Ident: (owner=gss2002,
renewer=yarn, realUser=, issueDate=1560431778041, maxDate=1561036578041,
sequenceNumber=7730, masterKeyId=91)
19/06/13 09:16:18 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1;
dirCnt = 0
19/06/13 09:16:18 INFO tools.SimpleCopyListing: Build file listing completed.
19/06/13 09:16:18 INFO tools.DistCp: Number of paths in the copy list: 1
19/06/13 09:16:18 INFO tools.DistCp: Number of paths in the copy list: 1
19/06/13 09:16:18 INFO client.AHSProxy: Connecting to Application History
server at ha21t53mn.tech.hdp.example.com/10.70.33.2:10200
*19/06/13 09:16:18 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token
5141183 for gss2002 on ha-hdfs:unit*
*19/06/13 09:16:18 ERROR tools.DistCp: Exception encountered*
*java.io.IOException: java.net.NoRouteToHostException: No route to host (Host
unreachable)*
at
org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:1029)
at
org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:110)
at
org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2407)
at
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:140)
at
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at
org.apache.hadoop.tools.mapred.CopyOutputFormat.checkOutputSpecs(CopyOutputFormat.java:124)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
at
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:193)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:155)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:128)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)
Caused by: java.net.NoRouteToHostException: No route to host (Host unreachable)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at
sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
at
sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
at
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
at
sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
at
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:188)
at
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:133)
at
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
at
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:299)
at
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:171)
at
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
at
org.apache.hadoop.crypto.key.kms.KMSClientProvider$4.run(KMSClientProvider.java:1016)
at
org.apache.hadoop.crypto.key.kms.KMSClientProvider$4.run(KMSClientProvider.java:1011)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at
org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:1011)
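To illustrate the proposed opt-out, here is a minimal sketch of the gating
logic. The property name comes from this JIRA; the class and method names
below are hypothetical stand-ins (a plain Map replaces Hadoop's
Configuration), not the committed HADOOP-16350 patch:

```java
import java.util.HashMap;
import java.util.Map;

public class RemoteKmsOptOutSketch {
    // Property name proposed in HADOOP-16350.
    static final String ALLOW_REMOTE_KMS_KEY =
        "hadoop.security.kms.client.allow.remote.kms";

    /**
     * Returns true when the client may contact a remote KMS to request
     * delegation tokens. Defaults to true, preserving the HADOOP-14104
     * behaviour; setting the property to "false" restores the
     * pre-HADOOP-14104 behaviour of not requesting remote KMS tokens.
     */
    static boolean allowRemoteKms(Map<String, String> conf) {
        return Boolean.parseBoolean(
            conf.getOrDefault(ALLOW_REMOTE_KMS_KEY, "true"));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // Unset: defaults to true (current HADOOP-14104 behaviour).
        System.out.println(allowRemoteKms(conf));

        // Explicit opt-out, as in the working distcp run above.
        conf.put(ALLOW_REMOTE_KMS_KEY, "false");
        System.out.println(allowRemoteKms(conf));
    }
}
```

In the real patch the check would guard the code path that asks the remote
NameNode for KMS URIs before calling KMSClientProvider.addDelegationTokens.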
> Ability to tell Hadoop not to request KMS Information from Remote NN
> ---------------------------------------------------------------------
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common, kms
> Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
> Reporter: Greg Senia
> Assignee: Greg Senia
> Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104, remote KMS server URIs, and their associated remote KMS
> delegation tokens, were not requested from the remote NameNode. Many
> customers relied on this as a security feature to prevent TDE/encryption-zone
> data from being distcped to remote clusters, while still allowing distcp of
> data residing in folders that are not encrypted with a
> KMSProvider/encryption zone.
> After upgrading to a version of Hadoop that contains HADOOP-14104, distcp
> now fails, because we, along with other customers (HDFS-13696), DO NOT allow
> KMS server endpoints to be exposed outside our cluster network: the data
> residing in these TDE zones is very critical and cannot be distcped between
> clusters.
> I propose adding a new code block gated by a custom property,
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true",
> keeping the current behaviour of HADOOP-14104, but when set to "false" it
> will allow this area of code to operate as it did before HADOOP-14104. I can
> see the value in HADOOP-14104, but the change should at least have offered an
> option to keep the earlier behaviour of not requesting remote KMS server
> URIs, which otherwise attempt to obtain a delegation token even when not
> operating on encrypted zones.
> Below is the error seen when KMS server traffic is not allowed between
> cluster networks. This is an enterprise security standard that cannot be
> changed (our request for an exception was denied), so the only solution is a
> feature that avoids requesting the tokens.
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=*
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false,
> ignoreFailures=false, overwrite=false, append=false, useDiff=false,
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true,
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100,
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[],
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null,
> sourceFileListing=null,
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
> targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind:
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind:
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002,
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120,
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1;
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token
> 556079 for gss2002 on ha-hdfs:tech
> 19/05/29 14:06:10 ERROR tools.DistCp: Exception encountered
> java.io.IOException: java.net.NoRouteToHostException: No route to host (Host
> unreachable)
> at
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:1029)
> at
> org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:110)
> at
> org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2407)
> at
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:140)
> at
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
> at
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
> at
> org.apache.hadoop.tools.mapred.CopyOutputFormat.checkOutputSpecs(CopyOutputFormat.java:124)
> at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
> at
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:193)
> at org.apache.hadoop.tools.DistCp.execute(DistCp.java:155)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:128)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)
> Caused by: java.net.NoRouteToHostException: No route to host (Host
> unreachable)
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
> at
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:589)
> at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
> at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
> at sun.net.www.http.HttpClient.New(HttpClient.java:339)
> at sun.net.www.http.HttpClient.New(HttpClient.java:357)
> at
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
> at
> sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
> at
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
> at
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
> at
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:188)
> at
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:133)
> at
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
> at
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:299)
> at
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:171)
> at
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
> at
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$4.run(KMSClientProvider.java:1016)
> at
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$4.run(KMSClientProvider.java:1011)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
> at
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:1011)
> ... 19 more
> {code}
>
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]