Daryn Sharp commented on HADOOP-13081:

Just noticed this due to a conflict. This is very broken and should be reverted.

Neither the hadoop Credentials copy ctor nor the subject's cred sets are 
synchronized during the copy.

The subject's creds cannot be iterated w/o synchronizing on the set. Best case 
you'll get a ConcurrentModificationException (CME); worst case, a copy of the 
set in an inconsistent state. For 
instance, the GSSAPI adds and removes service tickets from the subject. A 
snapshot at the wrong time will have a stale service ticket. Reusing the 
service ticket in the new ugi will cause replay attack exceptions. Or if 
another thread is attempting to relogin, the subject in the ugi being copied 
will not contain any kerberos creds.
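The guarded iteration described above can be sketched with plain JAAS, no Hadoop classes required (`SubjectCredsSnapshot` and `snapshot` are illustrative names, not part of UGI or the patch):

```java
import javax.security.auth.Subject;
import java.util.HashSet;
import java.util.Set;

public class SubjectCredsSnapshot {
    // Take a consistent copy of a Subject's private credentials.
    // Iterating the live set without holding its monitor risks a
    // ConcurrentModificationException or a snapshot taken mid-update
    // (e.g. while GSSAPI is swapping service tickets).
    public static Set<Object> snapshot(Subject subject) {
        Set<Object> creds = subject.getPrivateCredentials();
        synchronized (creds) {
            return new HashSet<>(creds);
        }
    }

    public static void main(String[] args) {
        Subject subject = new Subject();
        subject.getPrivateCredentials().add("service-ticket");
        System.out.println(snapshot(subject).size()); // prints 1
    }
}
```

Note this only fixes the consistency problem; the staleness problem (copying a service ticket that is about to be replaced) is inherent to snapshotting at all.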

The subject's cred sets aren't actually sets. They are backed by a linked list. 
GSSAPI often relies on ordering of tickets. Cloning into a hash set loses the 
implied ordering. Crazy exceptions occur when the client starts requesting 
tickets from the KDC with a TGS instead of a TGT. Combined with other ipc bugs, 
the process can be left unable to authenticate until a restart (e.g. we ran 
into this with oozie). I have an internal patch I need to push out.
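The ordering point can be illustrated without Kerberos at all: copying creds through a `HashSet` scrambles insertion order, while a `LinkedHashSet` preserves it (`copyPreservingOrder` is a hypothetical helper, not Hadoop code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.List;

public class OrderedCredsCopy {
    // Kerberos cred collections behave like ordered lists (GSSAPI can
    // depend on ticket order, e.g. TGT first). Copying through a HashSet
    // discards that order; LinkedHashSet dedupes while keeping it.
    public static List<String> copyPreservingOrder(Collection<String> creds) {
        return new ArrayList<>(new LinkedHashSet<>(creds));
    }

    public static void main(String[] args) {
        List<String> creds = Arrays.asList(
            "krbtgt/REALM", "hdfs/nn@REALM", "hive/hs2@REALM");
        System.out.println(copyPreservingOrder(creds));
    }
}
```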

Relogin of a clone ugi will wipe out the kerberos credentials in the original 
ugi. The hadoop User principal contains the login context which references the 
original subject.

Perhaps I missed it, but what is a concrete use case? The description and the 
javadoc don't make sense to me: "... allowing multiple users with different 
tokens to reuse the UGI without re-authenticating with Kerberos". Using tokens 
makes kerberos irrelevant.

If the intention is mixing a ugi with kerberos creds for user1, and tokens for 
user2 - that's playing with fire esp. if user1 is a privileged user.  The ugi 
should only contain user2 tokens for allowed services, otherwise there's the 
security risk of acting as user1 to some services. Proxy users exist for this 
purpose.

Why aren't UGI.createRemoteUser(username) and ugi.addToken(token) sufficient if 
no further kerberos auth is intended, or a proxy user that contains the 
intended tokens if you need a mix of token and kerberos auth?
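For the token-only case, the suggestion above amounts to something like the following sketch against the Hadoop UGI API (the class, method name, and token acquisition are illustrative; token construction is service-specific and elided):

```java
import java.security.PrivilegedAction;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public class TokenOnlyUgi {
    // A remote-user UGI carries no kerberos credentials; it holds only
    // the tokens explicitly added to it, so it cannot leak user1's
    // kerberos identity to other services.
    public static void runAsTokenUser(String user,
                                      Token<? extends TokenIdentifier> token,
                                      Runnable task) {
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser(user);
        ugi.addToken(token);
        ugi.doAs((PrivilegedAction<Void>) () -> {
            task.run(); // runs with token auth only
            return null;
        });
    }
}
```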

> add the ability to create multiple UGIs/subjects from one kerberos login
> ------------------------------------------------------------------------
>                 Key: HADOOP-13081
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13081
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: security
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>             Fix For: 2.8.0, 3.0.0-alpha1
>         Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
> 
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.

This message was sent by Atlassian JIRA
