[ https://issues.apache.org/jira/browse/MAPREDUCE-3436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13189351#comment-13189351 ]
Siddharth Seth commented on MAPREDUCE-3436:
-------------------------------------------

Bruno, on the node running the RM - were either of "yarn.resourcemanager.address" or "yarn.web-proxy.address" explicitly set? I assume the history server address (mapreduce.jobhistory.address) was set to point to the second node, and "mapreduce.jobhistory.webapp.address" was not set.

The changes to have the history webapp address pick the host from "mapreduce.jobhistory.address" look good.

Other than that - I still can't see how this patch fixes the link being 0.0.0.0. It looks like YarnConfiguration.getProxyHostAndPort() will return the default (0.0.0.0) unless the RM address / web-proxy address is configured. That call is used to construct the actual proxy URL - which is what causes the link to be http://0.0.0.0:8088/proxy/*.

Setup that I'm using: a single node, with "yarn.resourcemanager.address" and "yarn.web-proxy.address" not set. If that's not the behaviour others are seeing, I can create a separate jira for it. This one is really just making sure the history webapp address host is picked up from "mapreduce.jobhistory.address" instead of "mapreduce.jobhistory.webapp.address".

> jobhistory link may be broken depending on the interface it is listening on
> ---------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-3436
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3436
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv2, webapps
>    Affects Versions: 0.23.0, 0.23.1
>            Reporter: Bruno Mahé
>            Assignee: Ahmed Radwan
>              Labels: bigtop
>         Attachments: MAPREDUCE-3436.patch, MAPREDUCE-3436_rev2.patch
>
>
> On the following page: http://<RESOURCE_MANAGER>:8088/cluster/apps
> there are links to the history for each application. None of them can be reached since they all point to the IP 0.0.0.0. For instance:
> http://0.0.0.0:8088/proxy/application_1321658790349_0002/jobhistory/job/job_1321658790349_2_2
> Am I missing something?
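As a workaround under the behaviour described above, explicitly configuring the addresses should keep getProxyHostAndPort() from falling back to its 0.0.0.0 default. A sketch (the property names appear in the discussion above; the hostnames and ports are placeholders, not values from this issue):

```
<!-- yarn-site.xml: bind the RM and web proxy to a real host so proxy links
     are built from it rather than the 0.0.0.0 default.
     "rm-host.example.com" and the ports are illustrative placeholders. -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>rm-host.example.com:8032</value>
</property>
<property>
  <name>yarn.web-proxy.address</name>
  <value>rm-host.example.com:8088</value>
</property>

<!-- mapred-site.xml: point the history server address at the node that runs it;
     per the patch discussion, the webapp host can then be derived from it. -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>history-host.example.com:10020</value>
</property>
```

This is only a configuration sketch, not a fix for the underlying default-resolution issue the comment describes.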
> [root@bigtop-fedora-15 ~]# jps
> 9968 ResourceManager
> 1495 NameNode
> 1645 DataNode
> 12935 Jps
> 11140 -- process information unavailable
> 5309 JobHistoryServer
> 10237 NodeManager
> [root@bigtop-fedora-15 ~]# netstat -tlpn | grep 8088
> tcp        0      0 :::8088      :::*      LISTEN      9968/java
>
> For reference, here is my configuration:
> [root@bigtop-fedora-15 ~]# cat /etc/yarn/conf/yarn-site.xml
> <?xml version="1.0"?>
> <configuration>
>   <!-- Site specific YARN configuration properties -->
>   <property>
>     <name>yarn.nodemanager.aux-services</name>
>     <value>mapreduce.shuffle</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>   </property>
>   <property>
>     <name>mapreduce.admin.user.env</name>
>     <value>CLASSPATH=/etc/hadoop/conf/*:/usr/lib/hadoop/*:/usr/lib/hadoop/lib/*</value>
>   </property>
> </configuration>
> [root@bigtop-fedora-15 ~]# cat /etc/hadoop/conf/hdfs-site.xml
> <?xml version="1.0"?>
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>dfs.permissions</name>
>     <value>false</value>
>   </property>
>   <property>
>     <!-- specify this so that running 'hadoop namenode -format' formats the right dir -->
>     <name>dfs.name.dir</name>
>     <value>/var/lib/hadoop/cache/hadoop/dfs/name</value>
>   </property>
> </configuration>
> [root@bigtop-fedora-15 ~]# cat /etc/hadoop/conf/core-site.xml
> <?xml version="1.0"?>
> <configuration>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://localhost:8020</value>
>   </property>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/var/lib/hadoop/cache/${user.name}</value>
>   </property>
>   <!-- OOZIE proxy user setting -->
>   <property>
>     <name>hadoop.proxyuser.oozie.hosts</name>
>     <value>*</value>
>   </property>
>   <property>
>     <name>hadoop.proxyuser.oozie.groups</name>
>     <value>*</value>
>   </property>
> </configuration>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira