Github user liyinan926 commented on the issue:
https://github.com/apache/spark/pull/21260
    > There is a fundamental problem with how we pass the options through spark
    > conf to fabric8. For each volume type and all possible volume options we would
    > have to implement some custom code to map config values to fabric8 calls. This
    > will result in big body of code we would have to support and means that Spark
    > will always be somehow out of sync with k8s.
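    To make the concern concrete, here is a minimal sketch of the kind of per-volume-type mapping code that would have to live in Spark. The conf-key scheme (`spark.kubernetes.driver.volumes.<type>.<name>.<option>`) and the class/field names are illustrative assumptions, not the actual implementation in this PR; the point is that every new volume type or option multiplies this parsing and translation code before anything is even handed to fabric8:
    
    ```java
    import java.util.HashMap;
    import java.util.Map;
    
    // Sketch only: flat Spark conf keys must be parsed and regrouped per
    // volume before they can be translated into fabric8 builder calls.
    // The prefix and key layout here are hypothetical, modeled on Spark's
    // spark.kubernetes.* conf style.
    public class VolumeConfMapper {
        static final String PREFIX = "spark.kubernetes.driver.volumes.";
    
        // Parses keys of the form
        //   spark.kubernetes.driver.volumes.<type>.<name>.<option>
        // into a map of "<type>/<name>" -> {option: value}.
        static Map<String, Map<String, String>> parse(Map<String, String> conf) {
            Map<String, Map<String, String>> volumes = new HashMap<>();
            for (Map.Entry<String, String> e : conf.entrySet()) {
                if (!e.getKey().startsWith(PREFIX)) continue;
                // Split into volume type, volume name, and the remaining option key.
                String[] parts = e.getKey().substring(PREFIX.length()).split("\\.", 3);
                if (parts.length < 3) continue;
                String id = parts[0] + "/" + parts[1]; // e.g. "hostPath/data"
                volumes.computeIfAbsent(id, k -> new HashMap<>())
                       .put(parts[2], e.getValue());
            }
            return volumes;
        }
    
        public static void main(String[] args) {
            Map<String, String> conf = new HashMap<>();
            conf.put(PREFIX + "hostPath.data.mount.path", "/mnt/data");
            conf.put(PREFIX + "hostPath.data.options.path", "/var/data");
            System.out.println(parse(conf));
        }
    }
    ```
    
    And this is only the parsing half; a further `switch` on the volume type would still be needed to call the right fabric8 builders, which is exactly the surface that drifts out of sync as k8s adds fields.
    
    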
This is indeed concerning, given that we don't yet support many pod
customization options, e.g., affinity and anti-affinity, security contexts,
etc. Ideally, pod specs would be specified declaratively, as in Deployment
and StatefulSet, but Spark is driven by configuration properties. The [Spark
Operator](https://github.com/GoogleCloudPlatform/spark-on-k8s-operator)
attempted to address this using initializers, but initializers are alpha and
dangerous. Admission webhooks are an option, but again, they pose risks.
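For contrast, this is the declarative shape being alluded to: a pod spec fragment of the kind Deployment and StatefulSet embed, where volumes and security context are stated directly instead of being encoded into flat conf keys (illustrative only; the image name and volume are placeholders, and Spark did not accept such a template at the time):

```yaml
# Illustrative pod spec fragment (as embedded in a Deployment/StatefulSet).
spec:
  securityContext:
    runAsUser: 1000
  containers:
    - name: spark-driver
      image: spark:latest      # placeholder image
      volumeMounts:
        - name: data
          mountPath: /mnt/data
  volumes:
    - name: data
      hostPath:
        path: /var/data
```

Every field k8s adds here is immediately usable; with the conf-property approach, each one needs new mapping code in Spark first.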
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]