The simplest way to do this is with a thread that executes the jobs you want to
run synchronously:

        Job job1 = ...                   // configure the first job
        job1.waitForCompletion(true);    // blocks until job 1 finishes

        Job job2 = ...                   // configure the second job using job 1's outcome
        job2.waitForCompletion(true);
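The snippet above can be fleshed out as a runnable sketch. Because a real Hadoop Job needs a cluster and a Configuration, the two run* methods below are illustrative stand-ins (their names and the row keys are hypothetical); the point is the single driver thread that blocks on each job in turn, the same way waitForCompletion(true) does, and feeds the first job's outcome into the second job's scope.

```java
// Sketch of the "driver thread" pattern: run two jobs back to back on one
// thread, using the first job's result to scope the second. The run* methods
// are stand-ins for configuring a Hadoop Job and calling waitForCompletion(true).
public class ChainedJobs {

    // Stand-in for the first MapReduce job: returns the row key where the
    // second job should start scanning (in practice this might come from a
    // counter or a metadata table read after the job completes).
    static String runFirstJob() {
        return "row-1000"; // hypothetical outcome
    }

    // Stand-in for the second job, parameterized by the scan start row.
    static String runSecondJob(String startRow) {
        return "scanned from " + startRow;
    }

    public static void main(String[] args) throws Exception {
        // One driver thread runs both jobs synchronously, in order.
        Thread driver = new Thread(() -> {
            String startRow = runFirstJob();        // blocks until job 1 is done
            String result = runSecondJob(startRow); // job 2 scoped by job 1
            System.out.println(result);
        });
        driver.start();
        driver.join();
    }
}
```

With the real API you would replace each run* call with building a Job and calling job.waitForCompletion(true), which already blocks the calling thread.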


-----Original Message-----
From: Vishal Kapoor [mailto:[email protected]] 
Sent: Friday, March 25, 2011 3:00 PM
To: [email protected]
Cc: Buttler, David
Subject: Re: Observer/Observable MapReduce

David,
How about waking up my second MapReduce job as soon as I see some rows
updated in that table?
Any thoughts on observing a column update?

thanks,
Vishal

On Fri, Mar 25, 2011 at 2:56 PM, Buttler, David <[email protected]> wrote:
> What about just storing some metadata in a special table?
> Then on your second job's startup you can read that metadata and set your
> scan/input splits appropriately?
> Dave
>
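Dave's metadata-table idea can be sketched as follows. A HashMap stands in for the special HBase table, and the scope:start / scope:stop keys are hypothetical; with real HBase the first job's driver would Put these values, and the second job would Get them before building its Scan (via setStartRow/setStopRow).

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the metadata-table handoff: job 1 writes its resulting scan
// bounds under well-known keys; job 2 reads them at startup to configure
// its Scan. A HashMap stands in for the special HBase table.
public class MetadataHandoff {
    static final Map<String, String> metaTable = new HashMap<>();

    // Called after job 1 completes: record the range it produced
    // (key names are hypothetical).
    static void firstJobFinished(String start, String stop) {
        metaTable.put("scope:start", start);
        metaTable.put("scope:stop", stop);
    }

    // Called at job 2 startup: read the recorded range.
    static String[] readScope() {
        return new String[] {
            metaTable.get("scope:start"),
            metaTable.get("scope:stop")
        };
    }

    public static void main(String[] args) {
        firstJobFinished("row-0500", "row-1000");
        String[] scope = readScope();
        System.out.println("scan " + scope[0] + " to " + scope[1]);
    }
}
```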
> -----Original Message-----
> From: Vishal Kapoor [mailto:[email protected]]
> Sent: Friday, March 25, 2011 11:21 AM
> To: [email protected]
> Subject: Observer/Observable MapReduce
>
> Can someone give me a direction on how to start a MapReduce job based on
> the outcome of another MapReduce job? (Nothing is common between them
> apart from the first deciding the scope of the second.)
>
> I might also want to set the scope of my second MapReduce job
> (from/after) my first MapReduce job (scope as in scan(start, stop)).
>
> Typically, data comes in via a few tables for us; we start crunching it
> and then add some more data to the main tables (info, etc.) to get rid of
> table joins.
>
> A lightweight framework would serve us better than a typical workflow
> management tool.
>
> thanks,
> Vishal Kapoor
>
