This is the error message in the TaskTracker log (does anyone have any ideas?):

2009-05-31 09:49:16,165 ERROR org.apache.hadoop.mapred.TaskTracker: Caught exception: java.io.IOException: Call to localhost/127.0.0.1:9001 failed on local exception: An existing connection was forcibly closed by the remote host
    at org.apache.hadoop.ipc.Client.call(Client.java:699)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
    at org.apache.hadoop.mapred.$Proxy4.getBuildVersion(Unknown Source)
    at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:974)
    at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:1678)
    at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:2698)
Caused by: java.io.IOException: An existing connection was forcibly closed by the remote host
    at sun.nio.ch.SocketDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:25)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
    at sun.nio.ch.IOUtil.read(IOUtil.java:206)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:140)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:150)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:123)
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:271)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readInt(DataInputStream.java:370)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:493)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:438)

2009-05-31 09:49:18,118 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 0 time(s).
2009-05-31 09:49:20,040 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 1 time(s).
2009-05-31 09:49:21,946 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 2 time(s).
2009-05-31 09:49:23,853 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 3 time(s).
2009-05-31 09:49:25,774 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 4 time(s).
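The retries above suggest nothing is answering on the JobTracker address. As a rough sketch for a pseudo-distributed 0.19 setup (the exact values here are assumptions; only the 9001 port comes from the log), conf/hadoop-site.xml should point both daemons at localhost, for example:

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- assumed NameNode address; not confirmed by the log -->
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <!-- matches the port the TaskTracker is retrying in the log above -->
    <value>localhost:9001</value>
  </property>
</configuration>
```

After editing, restarting the daemons (bin/stop-all.sh then bin/start-all.sh) and checking that a JobTracker process shows up in the jps output would confirm whether the JobTracker is actually up and listening on 9001.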




On Sun, May 31, 2009 at 10:03 AM, Zhong Wang <wangzhong....@gmail.com> wrote:

> You should read the logs to find out what happened.
>
> On Sun, May 31, 2009 at 9:48 AM, zhang jianfeng <zjf...@gmail.com> wrote:
> > I also find that the tasktracker log keeps growing. It seems the task
> > tracker is working, but it will exhaust my disk space.
> >
> >
> >
> > On Sun, May 31, 2009 at 9:45 AM, zhang jianfeng <zjf...@gmail.com>
> wrote:
> >
> >> Hi all,
> >>
> >> I follow the Hadoop tutorial and run it in local pseudo-distributed
> >> mode. But every time I run
> >>  bin/hadoop jar hadoop-0.19.0-examples.jar grep input output 'dfs[a-z.]+',
> >> the job always stays pending, and I don't know the reason.
> >>
> >> PS: my platform is Windows XP; I run it under Cygwin.
> >>
> >>
> >> Thank you
> >>
> >> Jeff Zhang
> >>
> >
>
>
>
> --
> Zhong Wang
>
