pan3793 commented on PR #11:
URL: https://github.com/apache/spark-docker/pull/11#issuecomment-1279787385

   Thanks @Yikun for the explanation; keeping the default login user as 'spark' while allowing it to be extended to a dynamic login user makes sense to me.
   
   And let me explain a little about why we need the dynamic login user ability. In Spark on YARN mode, when we launch a Spark application via `spark-submit --proxy-user jack ...`, YARN launches the containers (usually Linux processes) as the Linux user "jack", and some components/libraries rely on the login user by default. One example is Alluxio:
   
https://github.com/Alluxio/alluxio/blob/da77d688bdbb0cf0c6477bed4d3187897fe2a2e1/core/common/src/main/java/alluxio/conf/PropertyKey.java#L6469-L6476
   ```java
     public static final PropertyKey SECURITY_LOGIN_USERNAME =
         stringBuilder(Name.SECURITY_LOGIN_USERNAME)
             .setDescription("When alluxio.security.authentication.type is set to SIMPLE or "
                 + "CUSTOM, user application uses this property to indicate the user requesting "
                 + "Alluxio service. If it is not set explicitly, the OS login user will be used.")
             .setConsistencyCheckLevel(ConsistencyCheckLevel.ENFORCE)
             .setScope(Scope.CLIENT)
             .build();
   ```
   To reduce the difference between Spark on YARN and Spark on K8s, we hope Spark on K8s keeps the same ability to dynamically change the login user when submitting a Spark application.
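   For illustration, here is a minimal Java sketch of the fallback behavior described above: a client library resolves its effective user from an explicit property if set, otherwise from the OS login user of the JVM process. The class and method names are hypothetical; only the fallback pattern mirrors the Alluxio property documented in the snippet.

   ```java
   // Sketch (illustrative, not Alluxio's actual implementation): resolve the
   // effective user, falling back to the OS login user when no explicit
   // property is configured.
   public class LoginUserSketch {
       static String resolveLoginUser(String configuredUser) {
           if (configuredUser != null && !configuredUser.isEmpty()) {
               return configuredUser; // explicitly configured user wins
           }
           // Fall back to the OS login user of the JVM process. On YARN this is
           // the container's Linux user (e.g. "jack" for --proxy-user jack);
           // in a fixed-user container image it is always the image's user.
           return System.getProperty("user.name");
       }

       public static void main(String[] args) {
           System.out.println(resolveLoginUser("jack"));
           System.out.println(resolveLoginUser(null));
       }
   }
   ```

   With a fixed 'spark' user baked into the image, the fallback branch always yields 'spark', which is why a dynamic login user matters for parity with YARN.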


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
