Github user liyinan926 commented on the issue:

    https://github.com/apache/spark/pull/19717
  
    > I think we should move the headless service creation into the backend 
code - anything essential for the backend to run shouldn't depend on the 
submission client/steps.
    
    I agree that service creation should be done by the scheduler backend code so as to get rid of the dependency on the submission client.
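    
    To illustrate what I mean, a rough sketch of how the backend itself could create the headless driver service with the fabric8 client (the names and port numbers below are made up for illustration, not the actual code in this PR):
    
    ```scala
    import io.fabric8.kubernetes.api.model.ServiceBuilder
    import io.fabric8.kubernetes.client.KubernetesClient
    import scala.collection.JavaConverters._
    
    // Hypothetical helper inside the scheduler backend: create the headless
    // driver service directly instead of relying on a submission step.
    private def createDriverService(
        client: KubernetesClient,
        namespace: String,
        driverPodName: String,
        driverPodLabels: Map[String, String]): Unit = {
      val service = new ServiceBuilder()
        .withNewMetadata()
          .withName(s"$driverPodName-svc")
        .endMetadata()
        .withNewSpec()
          .withClusterIP("None")                  // headless service
          .withSelector(driverPodLabels.asJava)   // selects the driver pod
          .addNewPort()
            .withName("driver-rpc-port")
            .withPort(7078)
          .endPort()
          .addNewPort()
            .withName("blockmanager")
            .withPort(7079)
          .endPort()
        .endSpec()
        .build()
      client.services().inNamespace(namespace).create(service)
    }
    ```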
    
    > Keeping that possible will let us plug Jupyter/spark-shell (running 
in-cluster for now). Disabling it completely will create an unnecessary 
dependency on spark-submit which IMO is undesirable. We do want people to be 
able to programmatically construct a spark context that can point at a k8s 
cluster in their code I think. kubernetes.default.svc is the in-cluster 
kube-dns provisioned DNS address that should point to the API server - its 
availability is a good indicator that we're in a place that can address other 
pods - so, that can be used to detect when we don't want to let users try and 
fail client mode.
    
    Yes, I agree that we should eventually allow client mode, including use 
cases that directly create a `SparkContext`. But until we have a solid, 
well-tested solution, I think we should disable it for now; we can always 
revisit this once we have a good solution. Regarding `kubernetes.default.svc`, 
yes, it's a good indicator. But again, the driver's headless service must 
exist, and unless we change the code so that the backend creates that service, 
this still won't work.
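    
    For the detection part, something along the lines of the following sketch (not proposing this as the exact code) could be used to check whether we are running inside the cluster before allowing client mode:
    
    ```scala
    import java.net.InetAddress
    import scala.util.Try
    
    // Sketch: treat resolvability of the in-cluster kube-dns name as a hint
    // that the driver is running inside the cluster and can reach other pods.
    def runningInsideKubernetesCluster(): Boolean =
      Try(InetAddress.getByName("kubernetes.default.svc")).isSuccess
    ```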

