Github user shrutig commented on the issue:
https://github.com/apache/spark/pull/21756
@dbtsai @vanzin
What we are trying to achieve is to make Spark work with plain Kerberos
authentication.
We call `loginUserFromKeytab` at startup of the driver and executors and then
use that UserGroupInformation. We do not use the Hadoop token authentication
method, only plain Kerberos auth.
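For context, the keytab login we do at startup looks roughly like the following. This is a minimal sketch, not our exact code: the principal and keytab path are placeholders, and it assumes Hadoop's `UserGroupInformation` API is on the classpath and a reachable KDC exists.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.UserGroupInformation

// Minimal sketch of the login done once at driver/executor startup.
// The principal and keytab path below are placeholders, not real values.
val hadoopConf = new Configuration()
hadoopConf.set("hadoop.security.authentication", "kerberos")
UserGroupInformation.setConfiguration(hadoopConf)
UserGroupInformation.loginUserFromKeytab(
  "spark/[email protected]",            // placeholder principal
  "/etc/security/keytabs/spark.keytab")     // placeholder keytab path

// From here on, UserGroupInformation.getCurrentUser returns a UGI whose
// authentication method is KERBEROS and whose keytab state is set, which
// is exactly what the special-casing below relies on.
```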
Spark's `runAsSparkUser` creates a new UGI based on the current static UGI
and then transfers credentials from that static UGI to the new one.
This works well with other auth methods, but not with Kerberos. For Kerberos,
the UGI must come from `loginUserFromKeytab` and cannot be reconstructed by
simply transferring credentials from the previous UGI to a new one, because
the `isKeytab` and `isKrbTkt` flags in the UGI are needed for it to work properly.
The implementation that has been working for us is below:
```scala
val currentUgi = UserGroupInformation.getCurrentUser
val ugi =
  if (currentUgi.getAuthenticationMethod == AuthenticationMethod.KERBEROS) {
    // Reuse the UGI created by loginUserFromKeytab so that its
    // isKeytab/isKrbTkt state is preserved. This worked for us for
    // the KERBEROS auth method.
    currentUgi
  } else {
    // Non-Kerberos path: keep Spark's existing behavior of building a
    // fresh remote-user UGI and copying credentials into it.
    val user = Utils.getCurrentUserName()
    val newUgi = UserGroupInformation.createRemoteUser(user)
    transferCredentials(UserGroupInformation.getCurrentUser, newUgi)
    newUgi
  }
logDebug("running as user: " + ugi.getUserName)
ugi.doAs(new PrivilegedExceptionAction[Unit] {
  def run(): Unit = func()
})
```