Thank you both for the replies. I have checked out the most recent version of the code and am still having the same problem. (I also tried the previous aggregator code, with no luck either.) Looking at the logs, it appears that it gets through superstep -1 and then blocks on superstep 0. This is the end of the log file for the non-master task:
2012-08-21 14:14:15,620 INFO org.apache.giraph.graph.BspServiceWorker:
finishSuperstep: Completed superstep -1 with global stats
(vtx=1248,finVtx=0,edges=2944,msgCount=0,haltComputation=true)
2012-08-21 14:14:15,621 INFO org.apache.giraph.comm.BasicRPCCommunications:
prepareSuperstep: Superstep 0 totalMem = 81.0625M, maxMem = 197.5M, freeMem =
68.69868M
2012-08-21 14:14:15,627 WARN org.apache.giraph.graph.BspService: process:
Unknown and unprocessed event
(path=/_hadoopBsp/job_201208211408_0002/_applicationAttemptsDir/0/_superstepDir,
type=NodeChildrenChanged, state=SyncConnected)
2012-08-21 14:14:15,629 INFO org.apache.giraph.graph.BspServiceWorker:
registerHealth: Created my health node for attempt=0, superstep=0 with
/_hadoopBsp/job_201208211408_0002/_applicationAttemptsDir/0/_superstepDir/0/_workerHealthyDir/nwest-mac.benchmark.local_1
and workerInfo= Worker(hostname=nwest-mac.benchmark.local, MRpartition=1,
port=30001)
2012-08-21 14:14:15,657 INFO org.apache.giraph.graph.BspServiceWorker:
processEvent: Job state changed, checking to see if it needs to restart
2012-08-21 14:14:15,658 INFO org.apache.giraph.graph.BspService: getJobState:
Job state already exists (/_hadoopBsp/job_201208211408_0002/_masterJobState)
and these are the last lines of the master log:
2012-08-21 14:14:15,617 INFO org.apache.giraph.graph.BspServiceMaster:
aggregateWorkerStats: Aggregation found
(vtx=1248,finVtx=0,edges=2944,msgCount=0,haltComputation=false) on superstep =
-1
2012-08-21 14:14:15,652 INFO org.apache.giraph.graph.MasterThread:
masterThread: Coordination of superstep -1 took 0.945 seconds ended with state
ALL_SUPERSTEPS_DONE and is now on superstep 0
2012-08-21 14:14:15,654 INFO org.apache.giraph.graph.BspServiceMaster:
setJobState:
{"_stateKey":"FINISHED","_applicationAttemptKey":-1,"_superstepKey":-1} on
superstep 0
2012-08-21 14:14:15,662 INFO org.apache.giraph.graph.BspServiceMaster: cleanup:
Notifying master its okay to cleanup with
/_hadoopBsp/job_201208211408_0002/_cleanedUpDir/0_master
2012-08-21 14:14:15,662 INFO org.apache.giraph.graph.BspServiceMaster:
cleanUpZooKeeper: Node /_hadoopBsp/job_201208211408_0002/_cleanedUpDir already
exists, no need to create.
2012-08-21 14:14:15,663 INFO org.apache.giraph.graph.BspServiceMaster:
cleanUpZooKeeper: Got 1 of 2 desired children from
/_hadoopBsp/job_201208211408_0002/_cleanedUpDir
2012-08-21 14:14:15,663 INFO org.apache.giraph.graph.BspServiceMaster:
cleanedUpZooKeeper: Waiting for the children of
/_hadoopBsp/job_201208211408_0002/_cleanedUpDir to change since only got 1
nodes.
Again, the only addition is the use of the aggregator; otherwise the code runs
perfectly fine. Thoughts?
Thanks,
Nick
On Aug 21, 2012, at 4:06 AM, Maja Kabiljo wrote:
Hi Nick,
There were some very recent changes in the way aggregators are used. If your
code below compiles, it means that you are using the version from before the
changes while looking at an example from after them. The code which Kaushik
attached shows how you should do it if you are not using the newest Giraph code.
If you want to use newest code, here is how aggregators work there:
* You have to register aggregators only on the master, just like you are doing
now. You can use registerAggregator or registerPersistentAggregator, depending
on whether or not you want the value to be reset on every superstep.
* You don't have the getAggregator method anymore. During vertex computation
you can only call aggregate(name, value), and in master.compute you have
setAggregatedValue(name, value).
* There is no more registerAggregator and useAggregator on workers, so you
don't have to use a WorkerContext in order to use aggregators (see the sketch
below).
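For example, here is a rough sketch of what your BPMasterComputer and the vertex call could look like against the newest code. I am reusing your aggregator name; the typed getAggregatedValue call and the package names are from memory, so please double-check them against trunk:

import java.io.{DataInput, DataOutput}
import org.apache.hadoop.io.BooleanWritable
// Package names may differ slightly between trunk revisions:
import org.apache.giraph.graph.MasterCompute
import org.apache.giraph.aggregators.BooleanAndAggregator

class BPMasterComputer extends MasterCompute {
  override def initialize() {
    // Register only here; there is no WorkerContext registration anymore.
    registerAggregator("VOTE_TO_STOP_AGG", classOf[BooleanAndAggregator])
  }

  override def compute() {
    // Read what the vertices aggregated during the previous superstep...
    val res = getAggregatedValue[BooleanWritable]("VOTE_TO_STOP_AGG").get
    if (res) haltComputation()
    // ...and seed the value for the next superstep.
    setAggregatedValue("VOTE_TO_STOP_AGG", new BooleanWritable(true))
  }

  override def write(out: DataOutput) {}
  override def readFields(in: DataInput) {}
}

In the vertex's compute method there is no getAggregator anymore; you only call:

  aggregate("VOTE_TO_STOP_AGG", new BooleanWritable(stop))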
Hope this helps,
Maja
From: KAUSHIK SARKAR <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Tuesday, August 21, 2012 7:39 AM
To: "[email protected]" <[email protected]>
Subject: Re: Adding MasterCompute object causes "failed to report status" errors
Hi Nick,
Please refer to the SimpleMasterComputeWorkerContext class in the attached
SimpleMasterComputeVertex.java file. (This is from the snapshot of 0.2 that I am
using, which is roughly a month old. The WorkerContext class seems to be
different from the current svn version; I am not aware whether this change was
made in the current version to reflect some change in the API behaviour, but I
followed the WorkerContext definition from the attached file and my code worked.)
You will see that you need to register the aggregator twice: in the
initialize() method of MasterCompute (which you have done) and in the
preApplication() method of the WorkerContext. Moreover, in the preSuperstep()
method of the WorkerContext, you need to call the useAggregator() method.
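Concretely, the WorkerContext would look something like this (a rough sketch based on my month-old snapshot, reusing your aggregator name; please double-check the exact signatures against the attached file):

// Package names may differ in your snapshot:
import org.apache.giraph.graph.WorkerContext
import org.apache.giraph.aggregators.BooleanAndAggregator

class BPWorkerContext extends WorkerContext {
  override def preApplication() {
    // Register under the same name and class as in BPMasterComputer.initialize()
    registerAggregator("VOTE_TO_STOP_AGG", classOf[BooleanAndAggregator])
  }

  override def preSuperstep() {
    // Tell this worker to use the aggregator on the current superstep
    useAggregator("VOTE_TO_STOP_AGG")
  }

  override def postSuperstep() {}
  override def postApplication() {}
}

You also need to set it on the job, e.g. job.setWorkerContextClass(classOf[BPWorkerContext]) (I believe GiraphJob has such a setter, but check your version).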
I am not sure if this is the problem with your code, but you can give it a try
and see if it solves your issue.
Regards,
Kaushik
On Mon, Aug 20, 2012 at 3:04 PM, Nick West <[email protected]> wrote:
I'm a little confused by the examples in SimpleMasterComputeVertex.java.
To me it looks like this is a simple example with one vertex and one aggregator
with the following behavior:
- The vertex gets the value stored in the aggregator, adds its previous value
to it, and stores the result as the new vertex value; the result is also stored
in the worker context
- The aggregator sets its value to superstep/2 + 1 every iteration and stops on
the 10th superstep
The worker context seems to serve no other purpose but to hold the value of
FINAL_SUM (not related to the aggregator) at each iteration. It also seems
like the aggregator is registered in the initialize method of the MasterCompute
object, much like I have in my code.
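To make my reading concrete, this is roughly how I understand it (just my paraphrase in Scala-ish pseudocode; the aggregator name, the DoubleSumAggregator type, ExampleWorkerContext, and setFinalSum are my guesses, not the actual example source):

// inside the vertex's compute():
val agg = getAggregator("example.aggregator").asInstanceOf[DoubleSumAggregator]
val newValue = getValue.get + agg.getAggregatedValue.get
setValue(new DoubleWritable(newValue))
getWorkerContext.asInstanceOf[ExampleWorkerContext].setFinalSum(newValue) // FINAL_SUM

// inside MasterCompute.compute():
val agg2 = getAggregator("example.aggregator").asInstanceOf[DoubleSumAggregator]
agg2.setAggregatedValue(new DoubleWritable(getSuperstep / 2 + 1))
if (getSuperstep == 10) haltComputation()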
I see one difference between the example and my code:
1) I use the aggregate function in each vertex's compute method. If this is
not the way to have the vertices combine values, what is?
If you can provide insight into either how I'm not following the example or
what else might be wrong, that'd be great.
Thanks,
Nick
On Aug 20, 2012, at 4:52 PM, KAUSHIK SARKAR wrote:
Hi Nick,
Are you using WorkerContext to register the aggregator? You need to override
the preApplication() method in WorkerContext to register the aggregator, and
then override the preSuperstep() method to tell the workers to use the
aggregator (the useAggregator() method). Check the MasterCompute and
WorkerContext examples in Giraph.
Regards,
Kaushik
On Mon, Aug 20, 2012 at 1:26 PM, Nick West <[email protected]> wrote:
Hi,
I have a Giraph application that runs fine; however, when I add a MasterCompute
object (definition below), all of the map tasks time out. I have Hadoop
configured to run with 8 map processes and Giraph to use one worker.
Here's the definition of the MasterCompute object:
class BPMasterComputer extends MasterCompute {
  override def compute() {
    val agg = getAggregator("VOTE_TO_STOP_AGG").asInstanceOf[BooleanAndAggregator]
    val res = agg.getAggregatedValue.get
    if (res) haltComputation
    agg.setAggregatedValue(true)
  }

  override def initialize() {
    registerAggregator("VOTE_TO_STOP_AGG", classOf[BooleanAndAggregator])
    val agg = getAggregator("VOTE_TO_STOP_AGG").asInstanceOf[BooleanAndAggregator]
    agg.setAggregatedValue(true)
  }

  override def write(out: DataOutput) {}
  override def readFields(in: DataInput) {}
}
(As far as I can tell, there is no state that needs to be read or written.) I
then register this class as the MasterCompute class in the Giraph job:
job.setMasterComputeClass(classOf[BPMasterComputer])
and then use the aggregator in the compute method of my vertices:
class BPVertex extends EdgeListVertex[IntWritable, WrappedValue, Text, PackagedMessage] with Loggable {
  override def compute(msgs: java.util.Iterator[PackagedMessage]) {
    ...
    var stop = false
    val agg = getAggregator("VOTE_TO_STOP_AGG").asInstanceOf[BooleanAndAggregator]
    ... code to modify stop and vote to halt ...
    agg.aggregate(stop)
  }
}
Is there some other method that I am not calling that I should? Or some step
that I'm missing? Any suggestions as to why/how these additions are causing
the processes to block would be appreciated!
Thanks,
Nick West
Benchmark Solutions
101 Park Avenue - 7th Floor
New York, NY 10178
Tel +1.212.220.4739 | Mobile +1.646.267.4324
www.benchmarksolutions.com