Giraph currently has a strained relationship with the Hadoop progress mechanism, but there are patches up to deal with this from inside the code. If your cluster allows it, you can set the MapReduce task timeout from the command line (I can send you the specific commands if you want), but as of this week one patch went in, and another is on the way (GIRAPH-274), to deal with the most common moments in a job run where this occurs.
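For reference, the knob in question is the standard Hadoop task timeout, in milliseconds. I haven't tested this exact line, and the property name depends on your Hadoop version (mapred.task.timeout on 0.20/1.x, mapreduce.task.timeout on newer releases), but it should look roughly like this, with the jar name and main class being placeholders for whatever you normally run:

  hadoop jar your-giraph-job.jar your.MainClass \
      -Dmapred.task.timeout=1200000 \
      <rest of your usual arguments>

That would bump the timeout to 20 minutes. The -D form only takes effect if your main class goes through Hadoop's ToolRunner/GenericOptionsParser, which GiraphRunner does as far as I recall.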
Sadly, I think Vishal might have called this one correctly. Given the symptoms of your application failure, it sounds like a worker ran out of memory and died during the computation. That causes the worker to stop sending heartbeats to Hadoop, and it eventually times out. ZooKeeper has a default timeout as well that is much less forgiving, but on the client side it tends to keep operating after the Giraph worker has ceased to function.

You might try using GIRAPH-232 or MemoryUtils to add some memory metrics to your vertex and check them in the Mapper Detail logs on the Hadoop HTML job display. In your vertex implementation, try to reuse Writable value objects when it's feasible, to cut down on constant instantiations/destructions every superstep (a rough, untested sketch of what I mean is at the very bottom, below the quoted thread). Finally, there are also command line options to increase the size of your Netty RPC buffers. In fact, make sure your -Dgiraph.useNetty option is =true as well.

Good luck, let us know how it goes,

Eli

On Wed, Aug 22, 2012 at 9:33 AM, Vishal Patel <[email protected]> wrote:

> After several supersteps, sometimes a worker thread dies (say it ran out
> of memory). ZooKeeper waits for ~10 mins (600 seconds) and then decides
> that the worker is not responsive and fails the entire job. At this point,
> if you have a checkpoint saved it will resume from there; otherwise you
> have to start from scratch.
>
> If you run the job again it should successfully finish (or it might error
> at some other superstep / worker combination).
>
> Vishal
>
>
> On Tue, Aug 21, 2012 at 10:12 PM, Amani Alonazi
> <[email protected]> wrote:
>
> > Hi all,
> >
> > I'm running a minimum spanning tree compute function on a Hadoop cluster
> > (20 machines). After certain supersteps (e.g. superstep 47 for a graph
> > of 4,194,304 vertices and 181,566,970 edges), the execution time
> > increased dramatically. This is not the only problem; the job has been
> > killed with "Task attempt_* failed to report status for 601 seconds.
> > Killing!"
> >
> > I disabled the checkpoint feature by setting
> > "CHECKPOINT_FREQUENCY_DEFAULT = 0" in GiraphJob.java. I don't need to
> > write any data to disk, neither snapshots nor output. I tested the
> > algorithm on a sample graph of 7 vertices and it works well.
> >
> > Is there any way to profile or debug a Giraph job?
> > In the Giraph Stats, is the "Aggregate finished vertices" counter for
> > the vertices which voted to halt? Also, is the "sent messages" counter
> > per superstep or the total number of messages?
> > If a vertex votes to halt, will it be activated upon receiving messages?
> >
> > Thanks a lot!
> >
> > Best,
> > Amani AlOnazi
> > MSc Computer Science
> > King Abdullah University of Science and Technology
> > Kingdom of Saudi Arabia
> >
> > ------------------------------
> > This message and its contents, including attachments, are intended
> > solely for the original recipient. If you are not the intended recipient
> > or have received this message in error, please notify me immediately and
> > delete this message from your computer system. Any unauthorized use or
> > distribution is prohibited. Please consider the environment before
> > printing this email.
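
P.S. Here is roughly what I mean by reusing Writable value objects. This is a toy, untested sketch, and it assumes you are extending the trunk Vertex<I, V, E, M> class with the Iterable-based compute(); the class name, type parameters, and min-propagation logic are just placeholders, so adapt it to whatever your MST vertex actually extends:

  import org.apache.giraph.graph.Vertex;
  import org.apache.hadoop.io.DoubleWritable;
  import org.apache.hadoop.io.FloatWritable;
  import org.apache.hadoop.io.LongWritable;

  // Toy min-propagation vertex showing the reuse pattern: mutate the
  // existing DoubleWritable value in place instead of allocating a new
  // one for every vertex in every superstep.
  public class ReuseExampleVertex extends
      Vertex<LongWritable, DoubleWritable, FloatWritable, DoubleWritable> {

    @Override
    public void compute(Iterable<DoubleWritable> messages) {
      double min = getValue().get();
      for (DoubleWritable msg : messages) {
        min = Math.min(min, msg.get());
      }
      // Instead of setValue(new DoubleWritable(min)), reuse the value
      // object the vertex already owns -- one fewer allocation per
      // vertex per superstep.
      getValue().set(min);
      voteToHalt();
    }
  }

The point is just the last few lines: mutating the value object the vertex already owns, rather than allocating a fresh Writable per vertex per superstep, saves the garbage collector a lot of churn on a 4M-vertex graph.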
