Github user gerashegalov commented on a diff in the pull request:
https://github.com/apache/spark/pull/20327#discussion_r176183580
--- Diff: core/src/main/scala/org/apache/spark/ui/WebUI.scala ---
@@ -126,7 +126,11 @@ private[spark] abstract class WebUI(
def bind(): Unit = {
assert(serverInfo.isEmpty, s"Attempted to bind $className more than once!")
try {
- val host = Option(conf.getenv("SPARK_LOCAL_IP")).getOrElse("0.0.0.0")
+ val host = if (Utils.isClusterMode(conf)) {
--- End diff ---
I am sorry, but it does not make sense to me to treat the RPC and HTTP
endpoints inconsistently, so I am removing the logic borrowed from YARN for
the RPC side. Now we have a simpler PR. We can achieve what I need either
with `appMasterEnv.SPARK_LOCAL_IP` or with a cluster-side config, while
keeping the prior behavior as you requested.
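
For reference, a minimal sketch (not part of the PR) of the first option,
using the documented `spark.yarn.appMasterEnv.*` mechanism to put
`SPARK_LOCAL_IP` into the YARN Application Master's environment; the app
name and address below are placeholders for illustration only:

```scala
import org.apache.spark.SparkConf

// Hedged illustration of the submit-side option mentioned above.
// spark.yarn.appMasterEnv.<NAME> forwards an environment variable to the
// YARN Application Master, so SPARK_LOCAL_IP is visible to the UI/driver
// running there. "192.0.2.10" is a placeholder bind address, not a real one.
val conf = new SparkConf()
  .setAppName("bind-host-example") // placeholder app name
  .set("spark.yarn.appMasterEnv.SPARK_LOCAL_IP", "192.0.2.10")
```

The alternative is a cluster-side config, e.g. exporting `SPARK_LOCAL_IP` in
`spark-env.sh` on the cluster nodes, which keeps the prior
`Option(conf.getenv("SPARK_LOCAL_IP")).getOrElse("0.0.0.0")` resolution shown
in the diff working unchanged.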
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]