The problem is that the namenode is only listening on localhost:

[root@master-1 ~]# netstat -pl --numeric-ports --numeric-hosts | grep 10975
tcp        0      0 127.0.0.1:8020              0.0.0.0:*                   LISTEN      10975/java
tcp        0      0 127.0.0.1:50070             0.0.0.0:*                   LISTEN      10975/java
udp        0      0 0.0.0.0:50091               0.0.0.0:*                               10975/java

Likely this is caused by a misconfiguration on our end.  Sorry for the false 
alarm.
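
For anyone who hits the same symptom, here's a sketch of what I'd check first (assuming a standard HDP layout; the hostnames below are placeholders): an /etc/hosts entry that maps the host's own name to the loopback address, or NameNode addresses configured as localhost.

# placeholder hostnames: make sure the host's own FQDN does not resolve to loopback
$ grep master-1 /etc/hosts
127.0.0.1   localhost master-1.example.com master-1    # <-- would force the 127.0.0.1 bind

# and make sure the NameNode addresses use the real hostname, not localhost:
#   core-site.xml:  fs.defaultFS              = hdfs://master-1.example.com:8020
#   hdfs-site.xml:  dfs.namenode.http-address = master-1.example.com:50070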

Greg

From: Greg <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Tuesday, December 23, 2014 2:01 PM
To: "[email protected]" <[email protected]>
Subject: Re: problem with historyserver on secondary namenode

I may have been hasty in my diagnosis.  The history server doesn't appear to 
start even after HDFS is up and running fine.  I'll dig more and see if I can 
figure out the real culprit here.

Greg

From: Greg <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Tuesday, December 23, 2014 1:51 PM
To: "[email protected]" <[email protected]>
Subject: problem with historyserver on secondary namenode

I'm trying to use Ambari 1.7.0 to provision an HDP 2.2 cluster.  The layout I'm 
using has the YARN history server on the same host as the secondary namenode 
(the primary namenode is on another host), but the history server fails to start 
because it tries to interact with HDFS before HDFS is ready.  Here's a gist with 
the error:

https://gist.github.com/jimbobhickville/a25cef3a2355fc273984

Is this a bug in Ambari?  Is there any way for me to control this behavior via 
configuration or in my stack layout?  I imagine this type of scenario has to 
have come up previously.
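
For what it's worth, the manual workaround I'd expect to work (just a sketch using the stock Apache Hadoop scripts, not necessarily how Ambari drives the start) is to wait until HDFS has left safe mode and only then start the history server, since it needs to create its done/intermediate-done directories in HDFS at startup:

# block until the NameNode reports "Safe mode is OFF"
sudo -u hdfs hdfs dfsadmin -safemode wait

# then start the MapReduce job history server; HADOOP_MAPRED_HOME and
# HADOOP_CONF_DIR are assumed to point at the HDP install and config dirs
sudo -u mapred $HADOOP_MAPRED_HOME/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver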

Greg
