More info:

In the DataNode log, I'm also seeing:

2012-01-09 13:06:27,751 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 9 time(s).

Why would nothing come up on port 8020? I feel like all the errors I'm seeing stem from this, but I can't find anything in the logs explaining why it happened in the first place.
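For reference, here's the quick check I've been using to see what's actually bound (just a sketch; it assumes Linux with net-tools' netstat). A line like 127.0.0.1:8020 would mean the daemon bound to loopback only, which would still lock out remote clients:

```shell
#!/bin/sh
# Report whether anything is listening on a given TCP port, and on which
# address. netstat -tln = TCP listeners, numeric, no DNS lookups.
check_port() {
    port=$1
    listeners=$(netstat -tln 2>/dev/null | grep ":${port} ")
    if [ -n "$listeners" ]; then
        echo "$listeners"
    else
        echo "nothing listening on port ${port}"
    fi
}

check_port 8020   # NameNode RPC
check_port 8021   # JobTracker RPC
```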


On 1/9/12 1:14 PM, Eli Finkelshteyn wrote:
Hi,
I've been googling but haven't been able to find an answer. I'm currently trying to set up Hadoop in pseudo-distributed mode as a first step. I'm using the Cloudera distro and installed everything through YUM on CentOS 5.7. I can run everything just fine from the node itself (hadoop fs -ls /, test map-reduce jobs, etc.), but I can't get a remote client to connect to it. I'm pretty sure the cause is that ports 8020 and 8021 don't seem to be listening: when I do a netstat -a, they don't show up, while all the other Hadoop-related ports like 50030 and 50070 do. I verified that the firewall allows TCP connections over 8020 and 8021, and I can connect through my web browser to 50030 and 50070.
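In case it matters, my understanding is that the hostname in fs.default.name (core-site.xml) decides both where the NameNode's RPC server binds and where clients dial in, so a value like hdfs://localhost:8020 works locally but can leave remote clients unable to connect. A hypothetical example (namenode.example.com stands in for the machine's real hostname):

```xml
<!-- core-site.xml: hypothetical example; replace namenode.example.com
     with the node's actual hostname so the RPC server is reachable
     from remote clients, not just over loopback. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>
```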

Looking at the NameNode log, I see the following, which looks suspicious and relevant:

    2012-01-09 12:03:38,000 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
    2012-01-09 12:03:38,009 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 8020: starting
    2012-01-09 12:03:39,187 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 8020: starting
    2012-01-09 12:03:39,188 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 8020: starting
    2012-01-09 12:03:39,188 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 8020: starting
    2012-01-09 12:03:39,188 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020: starting
    2012-01-09 12:03:39,188 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 8020: starting
    2012-01-09 12:03:39,189 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020: starting
    2012-01-09 12:03:39,189 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020: starting
    2012-01-09 12:03:39,189 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 8020: starting
    2012-01-09 12:03:39,246 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 8020: starting
    2012-01-09 12:03:39,258 WARN org.apache.hadoop.util.PluginDispatcher: Unable to load dfs.namenode.plugins plugins
    2012-01-09 12:03:40,254 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 8020, call addBlock(/var/lib/hadoop-0.20/cache/mapred/mapred/system/jobtracker.info, DFSClient_-1779116177, null) from 127.0.0.1:39785: error: java.io.IOException: File /var/lib/hadoop-0.20/cache/mapred/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1

Anyone have any idea what my problem might be?

Cheers,
Eli
