What about just storing some metadata in a special table?
Then on your second job's startup you can read that metadata and set your
scan/input splits appropriately?
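Something like this sketch (names are made up, and a plain Map stands in for the real metadata table; in HBase you'd write the scope with a Put and read it back with a Get, then pass it to Scan):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the pattern: the first job records the scan scope it computed
// in a "special" metadata table; the second job reads that scope at startup
// and uses it as its scan start/stop rows. The Map below is a stand-in for
// a real HBase table; class, key, and row names are hypothetical.
public class ScopedJobChain {
    // stand-in for the special metadata table, keyed by job name
    static final Map<String, String[]> metadataTable = new HashMap<>();

    // first job: after crunching, persist the scope for the follow-up job
    static void firstJob() {
        String startRow = "row-0100";   // computed by job 1 (hypothetical)
        String stopRow  = "row-0500";
        metadataTable.put("secondJobScope", new String[] { startRow, stopRow });
    }

    // second job startup: read the recorded scope and build the scan range
    static String[] secondJobScanRange() {
        String[] scope = metadataTable.get("secondJobScope");
        // in real HBase code this would configure the Scan's start/stop rows
        return scope;
    }

    public static void main(String[] args) {
        firstJob();
        String[] scan = secondJobScanRange();
        System.out.println(scan[0] + ".." + scan[1]);
    }
}
```

The point is just that the two jobs share nothing except that small piece of persisted state, which keeps them decoupled without pulling in a workflow engine.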
Dave

-----Original Message-----
From: Vishal Kapoor [mailto:[email protected]] 
Sent: Friday, March 25, 2011 11:21 AM
To: [email protected]
Subject: Observer/Observable MapReduce

Can someone give me a direction on how to start a MapReduce based on
the outcome of another MapReduce? (Nothing is common between them apart
from the first deciding the scope of the second.)

I might also want to set the scope of my second MapReduce
(from/after) my first MapReduce (scope as in scan(start, stop)).

Typically data comes in a few tables for us; we start crunching it
and then add some more data to main tables (info etc.) to get rid
of table joins.

A lightweight framework will do better than a typical workflow management tool.

thanks,
Vishal Kapoor
