Github user gerashegalov commented on a diff in the pull request:
https://github.com/apache/spark/pull/20327#discussion_r169257050
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -744,7 +744,9 @@ object SparkSubmit extends CommandLineUtils with Logging {
}
// Ignore invalid spark.driver.host in cluster modes.
- if (deployMode == CLUSTER) {
+ if (isYarnCluster) {
+ sparkConf.set("spark.driver.host", "${env:NM_HOST}")
--- End diff ---
Sorry for the delay. I verified that moving the `${env:NM_HOST}` setting to
`yarn/Client.scala` works. However, the most robust method is to use the YARN
cluster-side configuration, replicating how the NodeManager determines its public
address. I added a unit test that fails before this change.
---