Thanks for the help, Idris. I checked all the confs you mentioned, and all is as it should be. jps gives me:

24226 Jps
24073 TaskTracker
23854 JobTracker
23780 DataNode
23921 NameNode
23995 SecondaryNameNode

So that looks good. Most of this stuff is at the defaults set by Cloudera. Any other ideas?
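
In case it's useful, here's roughly what I'm looking at to see whether the
NameNode ever binds 8020 (the log path is just my guess at the default
Cloudera layout; adjust to wherever your logs actually live):

   # NameNode process is up, but is anything listening on 8020?
   sudo netstat -tlnp | grep 8020
   # look for bind/startup errors in the NameNode log
   grep -iE "bind|listen|exception" /var/log/hadoop/*namenode*.log | tail -20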

Eli

On 1/9/12 3:22 PM, Idris Ali wrote:
Hi,

Looks like a problem starting DFS and MR. Can you run 'jps' and see if the NN,
DN, SNN, JT and TT are running?

Also make sure that, for pseudo-distributed mode, the following entries are
present (a quick format/restart sketch follows the configs):

1. In core-site.xml
  <property>
     <name>fs.default.name</name>
     <value>hdfs://localhost:8020</value>
   </property>

   <property>
      <name>hadoop.tmp.dir</name>
      <value><SOME TMP dir with Read/Write access, not the system temp></value>
   </property>

2.  In hdfs-site.xml
<property>
     <name>dfs.replication</name>
     <value>1</value>
   </property>
   <property>
      <name>dfs.permissions</name>
      <value>false</value>
   </property>
   <property>
      <!-- specify this so that running 'hadoop namenode -format'
           formats the right dir -->
      <name>dfs.name.dir</name>
      <value>Local dir with Read/Write access</value>
   </property>

3. In mapred-site.xml
   <property>
     <name>mapred.job.tracker</name>
     <value>localhost:8021</value>
   </property>
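
Once those entries are in place, something like this should pick them up.
(The init script names below assume a Cloudera CDH3 package install; a plain
Apache tarball would use bin/start-dfs.sh and bin/start-mapred.sh instead.
Note that formatting wipes any existing HDFS data.)

   # re-format HDFS so the dfs.name.dir above is used
   hadoop namenode -format

   # restart the daemons (CDH3-style service names assumed)
   for svc in namenode datanode secondarynamenode jobtracker tasktracker; do
       sudo service hadoop-0.20-$svc restart
   done

   # the NameNode should now be listening on 8020
   netstat -an | grep 8020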

Thanks,
-Idris

On Tue, Jan 10, 2012 at 1:07 AM, Eli Finkelshteyn <[email protected]> wrote:

Positive. Like I said before, netstat -a | grep 8020 gives me nothing.
Even if the firewall were the problem, netstat should still show the port
listening; I'd just be unable to hit it from an outside box (I tested this
by blocking port 50070, at which point it still showed up in netstat -a
but was inaccessible over HTTP from a remote machine). This problem is
something else.
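
For reference, this is more or less the comparison I ran:

   netstat -a | grep 50070   # still shows LISTEN even while the firewall blocks remote access
   netstat -a | grep 8020    # shows nothing at all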


On 1/9/12 2:31 PM, zGreenfelder wrote:

On Mon, Jan 9, 2012 at 1:58 PM, Eli Finkelshteyn <[email protected]> wrote:

More info:

In the DataNode log, I'm also seeing:

2012-01-09 13:06:27,751 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:8020. Already tried 9 time(s).

Why would things just not load on port 8020? I feel like all the errors
I'm seeing are caused by this, but I can't see any errors about why this
occurred in the first place.

Are you sure there isn't a firewall in place blocking port 8020,
e.g. iptables on the local machine? If you do 'telnet localhost 8020',
do you make a connection? If you use lsof and/or netstat, can you see
the port open? If you have root access, you can try turning off the
firewall with 'iptables -F' to see if things work without firewall rules.
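
Concretely, something along these lines (standard Linux tools; the iptables
bits need root):

   telnet localhost 8020              # does anything accept the connection?
   sudo lsof -i :8020                 # which process, if any, holds the port?
   sudo netstat -tlnp | grep 8020     # same check via netstat
   sudo iptables -L -n                # list current firewall rules
   sudo iptables -F                   # flush rules temporarily to rule the firewall out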


