[ https://issues.apache.org/jira/browse/SPARK-15941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15339450#comment-15339450 ]

Thomas Graves commented on SPARK-15941:
---------------------------------------

Can you give some more details about your setup?  Are you running on YARN, 
standalone, etc.?  Which exact UI page shows IPs instead of hostnames?  The 
executors page, the stages page, all of them?

I haven't seen any issues with Spark 1.6 showing IPs instead of hostnames.  On 
Spark 2.x I have seen the Tasks table in the Stages UI showing IPs instead of 
hostnames for some reason, but I haven't had time to investigate.

> Netty RPC implementation ignores the executor bind address
> ----------------------------------------------------------
>
>                 Key: SPARK-15941
>                 URL: https://issues.apache.org/jira/browse/SPARK-15941
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.6.1
>            Reporter: Marco Capuccini
>
> When using the Netty RPC implementation, which is the default in Spark 1.6.x, 
> the executor addresses that I see in the Spark application UI (the one on 
> port 4040) are the IP addresses of the machines, even if I start the slaves 
> with the -H option in order to bind each slave to the hostname of the 
> machine.
> This is a big deal when using Spark with HDFS, as the executor addresses need 
> to match the hostnames of the DataNodes to achieve data locality.
> When setting spark.rpc=akka everything works as expected, and the executor 
> addresses in the Spark UI match the hostnames the slaves are bound to.
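
The setup the reporter describes amounts to something like the following sketch. The master URL and hostname are placeholders; the -H/--host flag of the standalone slave start script and the spark.rpc setting are as documented for Spark 1.6.x (the Akka backend was removed in Spark 2.0, so the spark.rpc=akka workaround only applies to 1.6.x):

```
# Bind each standalone worker to its hostname rather than its IP
# (spark://master-host:7077 is a placeholder master URL):
./sbin/start-slave.sh -H $(hostname -f) spark://master-host:7077

# Reporter's workaround, in conf/spark-defaults.conf (Spark 1.6.x only):
spark.rpc    akka
```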



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
