Hi Andrzej,

Thanks for your help (as always).

I'm still getting the same exceptions as before, now running on a standalone
Hadoop cluster. In addition, the datanode log shows:

2009-12-09 12:20:37,805 ERROR datanode.DataNode - java.io.IOException: Call to 10.0.0.2:9000 failed on local exception: java.io.IOException: Connection reset by peer
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:774)
    at org.apache.hadoop.ipc.Client.call(Client.java:742)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy4.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:291)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:269)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
Caused by: java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
    at sun.nio.ch.IOUtil.read(IOUtil.java:206)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:276)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readInt(DataInputStream.java:370)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
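
For what it's worth, the failing call is getProtocolVersion() against the
namenode at 10.0.0.2:9000, i.e. the datanode dies during the very first RPC
handshake. Below is a minimal client I'd use to reproduce just that handshake
outside of Nutch (my own sketch, not Nutch code; the class name is made up and
the address is simply copied from the log above):

    // Minimal Hadoop 0.20 connectivity check (a sketch, not Nutch code).
    // FileSystem.get() performs the same getProtocolVersion() handshake
    // with the namenode that is failing in the datanode trace above.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class NameNodeCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Address copied from the exception above; adjust if yours differs.
        conf.set("fs.default.name", "hdfs://10.0.0.2:9000");
        FileSystem fs = FileSystem.get(conf); // triggers the RPC handshake
        System.out.println("Connected to " + fs.getUri());
        fs.close();
      }
    }

If this also fails with "Connection reset by peer", the namenode is dropping
the connection at the IPC layer, which on 0.20 often points at mismatched
Hadoop jars between client and server rather than at Nutch itself.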

Thanks,
Eran

On Wed, Dec 9, 2009 at 12:12 PM, Andrzej Bialecki <[email protected]> wrote:

> Eran Zinman wrote:
>
>> Hi,
>>
>> Sorry to bother you guys again, but it seems that no matter what I do I
>> can't run the new version of Nutch with Hadoop 0.20.
>>
>> I am getting the following exceptions in my logs when I execute
>> bin/start-all.sh
>>
>
> Do you use the scripts in place, i.e. without deploying the nutch*.job to a
> separate Hadoop cluster? Could you please try it with a standalone Hadoop
> cluster (even if it's a pseudo-distributed, i.e. single node)?
>
>
> --
> Best regards,
> Andrzej Bialecki     <><
>  ___. ___ ___ ___ _ _   __________________________________
> [__ || __|__/|__||\/|  Information Retrieval, Semantic Web
> ___|||__||  \|  ||  |  Embedded Unix, System Integration
> http://www.sigram.com  Contact: info at sigram dot com
>
>
