Re: Giraph will fail while using more workers

2011-10-10 Thread Zhiwei Gu
Thank you, Giraphers. I'll try the latest version and report the result
later.

2011/10/10 Avery Ching 

>  Hi Zhiwei,
>
> The (known) issue basically stems from here:
>
> 2011-10-08 09:27:05,236 FATAL org.apache.hadoop.mapred.Child: Error running 
> child : java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:597)
>   at java.lang.UNIXProcess$1.run(UNIXProcess.java:141)
>   at java.security.AccessController.doPrivileged(Native Method)
>
> It has been addressed in GIRAPH-12 (
> https://issues.apache.org/jira/browse/GIRAPH-12).
>
>
> 
> Currently, every worker starts up a thread to communicate with every
> other worker. Hadoop RPC is used for communication. For instance, if there
> are 400 workers, each worker will create 400 threads. This ends up using a
> lot of stack memory per worker, even with the option
>
> -Dmapred.child.java.opts="-Xss64k".
> 
>
>
> It would be good if you could try the latest Apache Giraph instead of the
> older one at Yahoo!; then you can set GiraphJob.MSG_NUM_FLUSH_THREADS
> (giraph.msgNumFlushThreads) to a value that won't cause you to run out of
> stack space.
>
> Avery
>  On 10/10/11 11:08 AM, Zhiwei Gu wrote:
>
> Hi all,
>   In my Giraph job, when I set the number of workers to 200 it is OK, but
> when set to 500 it fails due to an early-stage OOM exception in one (or more)
> workers. Once such a worker fails, the other workers that want to talk to it
> keep waiting; after 5 retries, they fail as well.
>
>  Have you ever faced such an issue?
>
>  Best,
> -z
>
>
>  Here is the exception,
> 2011-10-08 09:26:59,108 INFO org.apache.giraph.comm.RPCCommunications:
> getRPCServer: Added jobToken Ident: 17 6a 6f 62 5f 32 30 31 31 30 38 32 36
> 30 39 31 31 5f 36 36 37 30 39 30, Pass: 12 26 1a f1 d2 51 e1 bf 2d 36 63 11
> 26 18 17 3d 53 b3 15 f6, Kind: mapreduce.job, Service:
> job_201108260911_667090
>
> 2011-10-08 09:26:59,116 INFO org.apache.hadoop.ipc.Server: Starting 
> SocketReader
> 2011-10-08 09:26:59,116 INFO org.apache.hadoop.ipc.Server: Starting 
> SocketReader
> 2011-10-08 09:26:59,117 INFO org.apache.hadoop.ipc.Server: Starting 
> SocketReader
> 2011-10-08 09:26:59,117 INFO org.apache.hadoop.ipc.Server: Starting 
> SocketReader
> 2011-10-08 09:26:59,117 INFO org.apache.hadoop.ipc.Server: Starting 
> SocketReader
> 2011-10-08 09:26:59,120 INFO 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source 
> RpcDetailedActivityForPort31250 registered.
> 2011-10-08 09:26:59,121 INFO 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source 
> RpcActivityForPort31250 registered.
> 2011-10-08 09:26:59,123 INFO org.apache.hadoop.ipc.Server: IPC Server 
> Responder: starting
> 2011-10-08 09:26:59,123 INFO org.apache.hadoop.ipc.Server: IPC Server 
> listener on 31250: starting
> 2011-10-08 09:26:59,127 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 0 on 31250: starting
> 2011-10-08 09:26:59,127 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 1 on 31250: starting
> 2011-10-08 09:26:59,133 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 2 on 31250: starting
> 2011-10-08 09:26:59,133 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 3 on 31250: starting
> 2011-10-08 09:26:59,137 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 4 on 31250: starting
> 2011-10-08 09:26:59,144 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 5 on 31250: starting
> 2011-10-08 09:26:59,144 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 6 on 31250: starting
> 2011-10-08 09:26:59,144 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 31250: starting
> 2011-10-08 09:26:59,144 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 31250: starting
> 2011-10-08 09:26:59,144 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 31250: starting
> 2011-10-08 09:26:59,145 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 10 on 31250: starting
> 2011-10-08 09:26:59,145 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 11 on 31250: starting
> 2011-10-08 09:26:59,145 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 12 on 31250: starting
> 2011-10-08 09:26:59,145 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 13 on 31250: starting
> 2011-10-08 09:26:59,145 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 14 on 31250: starting
> 2011-10-08 09:26:59,145 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 15 on 31250: starting
> 2011-10-08 09:26:59,146 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 16 on 31250: starting
> 2011-10-08 09:26:59,146 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 17 on 31250: starting
> 2011-10-08 09:26:59,146 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 18 on 31250: starting
> 2011-10-08 09:26:59,146 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 19 on 31

Re: Giraph will fail while using more workers

2011-10-10 Thread Avery Ching

Hi Zhiwei,

The (known) issue basically stems from here:

2011-10-08 09:27:05,236 FATAL org.apache.hadoop.mapred.Child: Error running 
child : java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:597)
at java.lang.UNIXProcess$1.run(UNIXProcess.java:141)
at java.security.AccessController.doPrivileged(Native Method)

It has been addressed in GIRAPH-12 
(https://issues.apache.org/jira/browse/GIRAPH-12).



Currently, every worker starts up a thread to communicate with every
other worker. Hadoop RPC is used for communication. For instance, if
there are 400 workers, each worker will create 400 threads. This ends up
using a lot of stack memory per worker, even with the option

-Dmapred.child.java.opts="-Xss64k".
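
(A rough back-of-envelope, assuming a typical default JVM thread stack of
around 1 MB, which varies by JVM and platform:

  400 peer threads x ~1 MB default stack  ~= 400 MB of stack reservations per worker
  400 peer threads x 64 KB with -Xss64k   ~=  25 MB per worker

Even with the smaller stacks, "unable to create new native thread" can still
show up once the node runs out of native memory for thread stacks or hits its
per-user thread/process limit.)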



It would be good if you could try the latest Apache Giraph instead of
the older one at Yahoo!; then you can set
GiraphJob.MSG_NUM_FLUSH_THREADS (giraph.msgNumFlushThreads) to a value
that won't cause you to run out of stack space.
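
For example, something along these lines (a minimal sketch; the GiraphJob
constructor, the package paths, and the value 50 are illustrative assumptions,
not a tested configuration):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.giraph.graph.GiraphJob;

  // Shrink each RPC thread's stack and cap the number of flush threads.
  Configuration conf = new Configuration();
  conf.set("mapred.child.java.opts", "-Xss64k");
  GiraphJob job = new GiraphJob(conf, "MyGiraphJob");
  // GiraphJob.MSG_NUM_FLUSH_THREADS is the "giraph.msgNumFlushThreads" key.
  job.getConfiguration().setInt(GiraphJob.MSG_NUM_FLUSH_THREADS, 50);

The same properties can also be passed with -D on the command line if the job
is launched through ToolRunner.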


Avery

On 10/10/11 11:08 AM, Zhiwei Gu wrote:

Hi all,
  In my Giraph job, when I set the number of workers to 200 it is OK, but
when set to 500 it fails due to an early-stage OOM exception in one
(or more) workers. Once such a worker fails, the other workers that want to
talk to it keep waiting; after 5 retries, they fail as well.


Have you ever faced such an issue?

Best,
-z


Here is the exception,
2011-10-08 09:26:59,108 INFO org.apache.giraph.comm.RPCCommunications: 
getRPCServer: Added jobToken Ident: 17 6a 6f 62 5f 32 30 31 31 30 38 
32 36 30 39 31 31 5f 36 36 37 30 39 30, Pass: 12 26 1a f1 d2 51 e1 bf 
2d 36 63 11 26 18 17 3d 53 b3 15 f6, Kind: mapreduce.job, Service: 
job_201108260911_667090

[The rest of the quoted IPC Server startup log is snipped here; it is identical
to the log quoted in the first message of this thread.]

Re: Giraph will fail while using more workers

2011-10-10 Thread Jakob Homan
Right now, Giraph doesn't scale much past 300 workers due to its
threading model.  I'm almost done with a thrift/finagle version that
I've taken past 1k workers.  The patch should be up in the next couple of
days.
-Jakob


On Mon, Oct 10, 2011 at 11:17 AM, Christian Kunz  wrote:
> Did you try something like
> -Dmapred.child.java.opts="-Xss64k"?
> (see GIRAPH-12)
> Christian
> On Oct 10, 2011, at 11:08 AM, Zhiwei Gu wrote:
>
> Hi all,
>   In my Giraph job, when I set the number of workers to 200 it is OK, but when
> set to 500 it fails due to an early-stage OOM exception in one (or more)
> workers. Once such a worker fails, the other workers that want to talk to it
> keep waiting; after 5 retries, they fail as well.
> Have you ever faced such an issue?
> Best,
> -z
>
> Here is the exception,
> 2011-10-08 09:26:59,108 INFO org.apache.giraph.comm.RPCCommunications:
> getRPCServer: Added jobToken Ident: 17 6a 6f 62 5f 32 30 31 31 30 38 32 36
> 30 39 31 31 5f 36 36 37 30 39 30, Pass: 12 26 1a f1 d2 51 e1 bf 2d 36 63 11
> 26 18 17 3d 53 b3 15 f6, Kind: mapreduce.job, Service:
> job_201108260911_667090
>
> [The rest of the quoted IPC Server startup log is snipped here; it is
> identical to the log quoted in the first message of this thread.]

Re: Giraph will fail while using more workers

2011-10-10 Thread Christian Kunz
Did you try something like
-Dmapred.child.java.opts="-Xss64k"?
(see GIRAPH-12)

Christian

On Oct 10, 2011, at 11:08 AM, Zhiwei Gu wrote:

> Hi all,
>   In my Giraph job, when I set the number of workers to 200 it is OK, but when
> set to 500 it fails due to an early-stage OOM exception in one (or more)
> workers. Once such a worker fails, the other workers that want to talk to it
> keep waiting; after 5 retries, they fail as well.
> 
> Have you ever faced such an issue?
> 
> Best,
> -z
> 
> 
> Here is the exception,
> 2011-10-08 09:26:59,108 INFO org.apache.giraph.comm.RPCCommunications: 
> getRPCServer: Added jobToken Ident: 17 6a 6f 62 5f 32 30 31 31 30 38 32 36 30 
> 39 31 31 5f 36 36 37 30 39 30, Pass: 12 26 1a f1 d2 51 e1 bf 2d 36 63 11 26 
> 18 17 3d 53 b3 15 f6, Kind: mapreduce.job, Service: job_201108260911_667090
> [The rest of the quoted IPC Server startup log is snipped here; it is
> identical to the log quoted in the first message of this thread.]