Github user gerashegalov commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20327#discussion_r175934086
  
    --- Diff: core/src/main/scala/org/apache/spark/ui/WebUI.scala ---
    @@ -126,7 +126,11 @@ private[spark] abstract class WebUI(
       def bind(): Unit = {
     assert(serverInfo.isEmpty, s"Attempted to bind $className more than once!")
         try {
    -      val host = Option(conf.getenv("SPARK_LOCAL_IP")).getOrElse("0.0.0.0")
    +      val host = if (Utils.isClusterMode(conf)) {
    --- End diff --
    
    At best, the way to do this is to set the following on each worker:
    ```
    <property>
      <name>yarn.nodemanager.admin-env</name>
      <value>MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX,SPARK_LOCAL_IP=$NM_HOST</value>
    </property>
    ```
    which relies on the fact that NM_HOST is defined earlier in the launch script. Alternatively, some other value could be used, for example:
    ```
    <property>
      <name>yarn.nodemanager.admin-env</name>
      <value>MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX,SPARK_LOCAL_IP=${yarn.nodemanager.bind-host}</value>
    </property>
    ```
    As previously mentioned, if I have to break abstractions and intermix YARN worker settings with the Spark environment (which will also be unnecessarily passed to other apps), then the only thing we need from this patch is the fix for setting the hostname in JettyUtils.scala.
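    For context, the pre-patch behavior that the `yarn.nodemanager.admin-env` workaround targets can be sketched as follows. `BindHost` and its `Map`-based environment are hypothetical stand-ins for the real `conf.getenv` lookup in `WebUI.bind()`; this is an illustrative sketch, not the actual patch:
    ```
    // Hypothetical sketch: pick the web UI bind host, preferring the
    // SPARK_LOCAL_IP environment variable and falling back to the wildcard
    // address -- mirroring the line the diff replaces.
    object BindHost {
      def resolve(env: Map[String, String]): String =
        env.get("SPARK_LOCAL_IP").filter(_.nonEmpty).getOrElse("0.0.0.0")
    }
    ```
    With the admin-env settings above, every container on a worker sees SPARK_LOCAL_IP resolved to that NodeManager's host, so the fallback to 0.0.0.0 is never taken.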


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
