Github user mridulm commented on the issue:

    https://github.com/apache/spark/pull/17723

    There is a distinction between what spark core implements and exposes, and what the actual SPI implementations depend on. My point is that spark core should not depend on hadoop-security, since we may need to support other security models.
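    To make that concrete, here is a minimal sketch (hypothetical names, not the actual SPI in this PR) of what a hadoop-agnostic provider interface could look like - note that nothing in it references hadoop-security; spark core only ever sees opaque serialized credentials:

```scala
import org.apache.spark.SparkConf

// Hypothetical sketch of a hadoop-agnostic SPI: spark core handles only
// opaque bytes and an optional renewal deadline, never hadoop-security types.
trait CredentialProvider {
  /** Service this provider handles, e.g. "hdfs", "hive", "hbase". */
  def serviceName: String

  /** Whether this service needs credentials under the given configuration. */
  def credentialsRequired(conf: SparkConf): Boolean

  /**
   * Acquire credentials, returning them in an opaque serialized form along
   * with an optional timestamp (ms) at which they should be renewed.
   */
  def obtainCredentials(conf: SparkConf): (Array[Byte], Option[Long])
}
```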
    
    The actual implementation that authenticates/authorizes against specific hadoop services (hdfs, hive, hbase, etc) could leverage UGI + a kerberos principal/keytab + the corresponding service token api, etc. That is an implementation detail of the credential provider, and is decoupled from spark core itself. This PR bundles the SPI implementations as part of core as well, but that is a packaging detail.
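    A hadoop-backed implementation could then keep all of the UGI/keytab/delegation-token machinery to itself. Again a hypothetical sketch (class and method names are illustrative), assuming hadoop-security is on the provider's classpath only:

```scala
import java.io.{ByteArrayOutputStream, DataOutputStream}
import java.security.PrivilegedExceptionAction

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.security.{Credentials, UserGroupInformation}

// Hypothetical hadoop-backed provider: the UGI + principal/keytab +
// delegation-token details live entirely here, invisible to spark core.
class HDFSCredentialProvider(principal: String, keytab: String) {
  def obtainSerializedCredentials(hadoopConf: Configuration): Array[Byte] = {
    val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
    val creds = new Credentials()
    ugi.doAs(new PrivilegedExceptionAction[Unit] {
      override def run(): Unit = {
        // Ask the filesystem for delegation tokens renewable by the principal.
        FileSystem.get(hadoopConf).addDelegationTokens(principal, creds)
      }
    })
    // Serialize to an opaque byte array, so distribution by spark core
    // never has to touch hadoop-security types.
    val buffer = new ByteArrayOutputStream()
    val out = new DataOutputStream(buffer)
    creds.writeTokenStorageToStream(out)
    out.close()
    buffer.toByteArray
  }
}
```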
    
    The lifecycle of a credential provider - acquisition of credentials (tokens, delegation tokens, etc), their distribution, and their eventual application on executors - should not depend on hadoop-security from spark core's point of view: specific implementations could be doing ugi.getCurrentUser.addCredentials(creds), just as other implementations might be doing something else entirely.
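    For the hadoop case, the executor-side application could look roughly like this (a hypothetical sketch of the ugi.getCurrentUser.addCredentials(creds) step above; a non-hadoop implementation would plug in something entirely different here):

```scala
import java.io.{ByteArrayInputStream, DataInputStream}

import org.apache.hadoop.security.{Credentials, UserGroupInformation}

// Hypothetical executor-side step for the hadoop-backed implementation:
// deserialize the bytes shipped by the driver and attach the tokens to
// the current UGI.
object HadoopCredentialApplier {
  def applyCredentials(serialized: Array[Byte]): Unit = {
    val creds = new Credentials()
    creds.readTokenStorageStream(
      new DataInputStream(new ByteArrayInputStream(serialized)))
    UserGroupInformation.getCurrentUser.addCredentials(creds)
  }
}
```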
    
    I am not sure whether this distinction came across in my earlier comments.


