Github user mccheah commented on the issue:

    https://github.com/apache/spark/pull/23174
  
    The trouble is the API proposed here and how it would have to change to 
accommodate future features. If we later wanted to add the option to support 
authentication via mounted files, what would the API for that look like, and 
how would it change the API for users already relying on this authentication 
mechanism? That's why it's important to lay out the options now, so it's clear 
to us that <X> are our options and this is how we are going to use them.
    
    One proposed scheme is to have 
`spark.authenticate.k8s.secret.provider=autok8ssecret` and document what it 
does; perhaps that's the default mode. Then add another scheme, say 
`spark.authenticate.k8s.secret.provider=files`, along with further options for 
specifying where the secret file is located on both the driver and the executors.
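    
    For illustration only, a rough sketch of how those two provider modes might 
be set on a `SparkConf`. The `driverFile`/`executorFile` keys are hypothetical 
placeholders for the file-location options mentioned above, not part of this 
patch:
    
    ```scala
    import org.apache.spark.SparkConf
    
    // Default mode: Spark auto-generates a Kubernetes secret for SASL auth.
    val autoConf = new SparkConf()
      .set("spark.authenticate", "true")
      .set("spark.authenticate.k8s.secret.provider", "autok8ssecret")
    
    // File-based mode: the secret is read from files already mounted into the pods.
    // The two *File keys below are hypothetical names used only to illustrate the
    // shape of the API; the real option names would be decided when the "files"
    // provider is actually designed.
    val fileConf = new SparkConf()
      .set("spark.authenticate", "true")
      .set("spark.authenticate.k8s.secret.provider", "files")
      .set("spark.authenticate.k8s.secret.driverFile", "/etc/spark-auth/secret")
      .set("spark.authenticate.k8s.secret.executorFile", "/etc/spark-auth/secret")
    ```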
    
    It's helpful to put this patch in the context of where we want to go with 
authentication in general. Otherwise, taken in isolation, this feature will 
make it appear that Spark is opinionated about using Kubernetes secrets and 
environment variables for authentication.
    
    If that isn't introduced in this patch, then at the very least we should 
file JIRA tickets, reference them as future add-ons to this, and have a roadmap 
for what SASL on K8s will look like in the bigger picture for 3.x.

