[ https://issues.apache.org/jira/browse/HBASE-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13815447#comment-13815447 ]

Gary Helmling commented on HBASE-9890:
--------------------------------------

In the case that Francis points out, whether using CopyTable or something 
custom, you would actually have more than one token of type HBASE_AUTH_TOKEN.  
Does Oozie support running CopyTable between two clusters?  If so, it needs to 
fetch the delegation token for each, but this patch wouldn't pass along both, 
only the first that it sees.  Obtaining the token from UGI by type alone does 
not guarantee it is associated with the given cluster; the lookup needs to 
match the token service against the cluster ID.
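
For illustration only (a sketch, not part of the patch): dumping the tokens 
the submitting user already holds shows why a lookup by kind alone is 
ambiguous.  With two secure clusters involved there are two entries of kind 
HBASE_AUTH_TOKEN, and only the service field (the cluster ID) tells them apart.
{code}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public class ListJobTokens {
  // Sketch: list the tokens in the current user's credentials. With CopyTable
  // between two secure clusters there are two HBASE_AUTH_TOKEN entries,
  // distinguished only by their service (the cluster ID).
  public static void main(String[] args) throws IOException {
    for (Token<? extends TokenIdentifier> t :
        UserGroupInformation.getCurrentUser().getTokens()) {
      System.out.println("kind=" + t.getKind() + " service=" + t.getService());
    }
  }
}
{code}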

In fact, I think the change as it is will cause CopyTable between two secure 
HBase clusters to fail.  The change in this section of 
o.a.h.h.mapreduce.TableMapReduceUtil.initCredentials() is the problem:
{code}
      try {
        // init credentials for remote cluster
        String quorumAddress = job.getConfiguration().get(TableOutputFormat.QUORUM_ADDRESS);
        if (quorumAddress != null) {
          Configuration peerConf = HBaseConfiguration.create(job.getConfiguration());
          ZKUtil.applyClusterKeyToConf(peerConf, quorumAddress);
-          userProvider.getCurrent().obtainAuthTokenForJob(peerConf, job);
+          user.obtainAuthTokenForJob(peerConf, job);
         }
-        userProvider.getCurrent().obtainAuthTokenForJob(job.getConfiguration(), job);
+
+        Token<?> authToken = user.getToken(AuthenticationTokenIdentifier.AUTH_TOKEN_TYPE.toString());
+        if (authToken == null) {
{code}

When running between two secure clusters, we'll obtain a token against one 
cluster (using the config value of TableOutputFormat.QUORUM_ADDRESS), and the 
following call to user.getToken("HBASE_AUTH_TOKEN") will then return that 
just-obtained token, so we never fetch the token for the second cluster.

You can use AuthenticationTokenSelector.selectToken() to pull out the correct 
token for a given cluster.  But first you will need the cluster ID for the 
cluster you're connecting to.
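
Something along these lines could do the per-cluster lookup (a sketch only, 
assuming the peer cluster ID has already been read, e.g. from ZooKeeper; not 
the actual patch):
{code}
import java.io.IOException;
import java.util.Collection;
import org.apache.hadoop.hbase.security.token.AuthenticationTokenIdentifier;
import org.apache.hadoop.hbase.security.token.AuthenticationTokenSelector;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public class ClusterTokenLookup {
  // Sketch: return the auth token for the given cluster, or null if the user
  // does not hold one yet (in which case it still has to be obtained). The
  // selector matches both the token kind (HBASE_AUTH_TOKEN) and the service,
  // which for HBase auth tokens is the cluster ID.
  public static Token<AuthenticationTokenIdentifier> findTokenForCluster(
      String clusterId) throws IOException {
    Collection<Token<? extends TokenIdentifier>> tokens =
        UserGroupInformation.getCurrentUser().getTokens();
    return new AuthenticationTokenSelector().selectToken(new Text(clusterId), tokens);
  }
}
{code}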

> MR jobs are not working if started by a delegated user
> ------------------------------------------------------
>
>                 Key: HBASE-9890
>                 URL: https://issues.apache.org/jira/browse/HBASE-9890
>             Project: HBase
>          Issue Type: Bug
>          Components: mapreduce, security
>    Affects Versions: 0.98.0, 0.94.12, 0.96.0
>            Reporter: Matteo Bertozzi
>            Assignee: Matteo Bertozzi
>             Fix For: 0.98.0, 0.94.13, 0.96.1
>
>         Attachments: HBASE-9890-94-v0.patch, HBASE-9890-94-v1.patch, HBASE-9890-v0.patch, HBASE-9890-v1.patch
>
>
> If Map-Reduce jobs are started by a proxy user that already has the 
> delegation tokens, we get an exception on "obtain token" since the proxy 
> user doesn't have the kerberos auth.
> For example:
>  * If we use oozie to execute RowCounter - oozie will get the tokens required 
> (HBASE_AUTH_TOKEN) and it will start the RowCounter. Once the RowCounter 
> tries to obtain the token, it will get an exception.
>  * If we use oozie to execute LoadIncrementalHFiles - oozie will get the 
> tokens required (HDFS_DELEGATION_TOKEN) and it will start the 
> LoadIncrementalHFiles. Once the LoadIncrementalHFiles tries to obtain the 
> token, it will get an exception.
> {code}
>  org.apache.hadoop.hbase.security.AccessDeniedException: Token generation only allowed for Kerberos authenticated clients
>     at org.apache.hadoop.hbase.security.token.TokenProvider.getAuthenticationToken(TokenProvider.java:87)
> {code}
> {code}
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token can be issued only with kerberos or web authentication
>       at org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:783)
>       at org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:868)
>       at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:509)
>       at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:487)
>       at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:130)
>       at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:111)
>       at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:85)
>       at org.apache.hadoop.filecache.TrackerDistributedCacheManager.getDelegationTokens(TrackerDistributedCacheManager.java:949)
>       at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:854)
>       at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:743)
>       at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
>       at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
>       at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596)
>       at org.apache.hadoop.hbase.mapreduce.RowCounter.main(RowCounter.java:173)
> {code}
