pan3793 commented on code in PR #52923:
URL: https://github.com/apache/spark/pull/52923#discussion_r2501639415
##########
core/src/main/scala/org/apache/spark/SparkEnv.scala:
##########
@@ -369,6 +371,11 @@ object SparkEnv extends Logging {
logInfo(log"Registering ${MDC(LogKeys.ENDPOINT_NAME, name)}")
rpcEnv.setupEndpoint(name, endpointCreator)
} else {
+ val useDriverPodIP =
+   conf.get("spark.kubernetes.executor.useDriverPodIP", "false").equalsIgnoreCase("true")
+ if (useDriverPodIP) {
+   conf.set(config.DRIVER_HOST_ADDRESS.key, conf.get(config.DRIVER_BIND_ADDRESS.key))
Review Comment:
Review Comment:
line 376 only checks `spark.kubernetes.executor.useDriverPodIP`. I think we
should also check that `spark.master` starts with `k8s://`, in case a user
sets `spark.kubernetes.executor.useDriverPodIP` while running in YARN mode.
In our case, we use a single Spark client with a shared `spark-defaults.conf`
that contains both YARN and K8s configs, and users override a few configs via
`--conf` (e.g. `spark.master`) to easily switch submission between YARN and
K8s.

Also, I will port this feature to our internal version and run some
workloads; this may take some time, and I will report the results later.
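
For illustration, a minimal sketch of the combined guard I have in mind (a
hypothetical standalone helper, not the actual patch; the literal string keys
are the ones behind `config.DRIVER_HOST_ADDRESS` / `config.DRIVER_BIND_ADDRESS`
used in the diff above):

```scala
import org.apache.spark.SparkConf

// Hypothetical helper showing the suggested guard; not part of the patch.
// "spark.driver.host" / "spark.driver.bindAddress" are the string keys behind
// config.DRIVER_HOST_ADDRESS / config.DRIVER_BIND_ADDRESS in the diff above.
def maybeUseDriverPodIP(conf: SparkConf): Unit = {
  val isK8s = conf.get("spark.master", "").startsWith("k8s://")
  val useDriverPodIP =
    conf.get("spark.kubernetes.executor.useDriverPodIP", "false").equalsIgnoreCase("true")
  if (isK8s && useDriverPodIP) {
    // Only under a k8s:// master do we point spark.driver.host at the driver's
    // bind address (the pod IP); under YARN the flag is simply ignored.
    conf.set("spark.driver.host", conf.get("spark.driver.bindAddress"))
  }
}
```

With a guard like this, the flag becomes a no-op for YARN submissions even
when it is present in the shared `spark-defaults.conf`.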
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]