Instead of using a table, how about using the ZooKeeper service that
is already available? Its znodes hold small bits of information like
this quite well.
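For example, here is a minimal, cluster-free sketch of the hand-off ZooKeeper would give you. A real version would use org.apache.zookeeper.ZooKeeper's create()/exists() with a Watcher against a live ensemble; here an AtomicReference stands in for the znode's data and a latch for the watch firing, and all class and row-key names are made up:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

// Hand-off sketch: the first job's driver "publishes" the scan bounds
// for the second job, the way it would write them into a small znode;
// the second job's driver blocks until they appear, the way a
// ZooKeeper Watcher would wake it up.
class ScanHandoff {
    private final AtomicReference<String[]> bounds = new AtomicReference<>(); // stand-in for the znode data
    private final CountDownLatch published = new CountDownLatch(1);           // stand-in for the watch firing

    // Called at the end of the first job's driver.
    public void publish(String startRow, String stopRow) {
        bounds.set(new String[] { startRow, stopRow });
        published.countDown();
    }

    // Called at the start of the second job's driver; in the real thing
    // this would be zk.exists("/jobs/first/scan-bounds", watcher)
    // followed by getData() once the watch fires.
    public String[] awaitBounds() throws InterruptedException {
        published.await();
        return bounds.get();
    }

    public static void main(String[] args) throws Exception {
        ScanHandoff handoff = new ScanHandoff();
        new Thread(() -> handoff.publish("row-0500", "row-0999")).start();
        String[] b = handoff.awaitBounds();
        System.out.println("second job scans [" + b[0] + ", " + b[1] + ")");
    }
}
```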

On Sat, Mar 26, 2011 at 12:29 AM, Vishal Kapoor
<[email protected]> wrote:
> David,
> how about waking up my second map reduce job as soon as I see some
> rows updated in that table? Any thoughts on observing a column update?
>
> thanks,
> Vishal
>
> On Fri, Mar 25, 2011 at 2:56 PM, Buttler, David <[email protected]> wrote:
>> What about just storing some metadata in a special table?
>> Then on your second job's startup you can read that metadata and set your 
>> scan / input splits appropriately?
>> Dave
>>
>> -----Original Message-----
>> From: Vishal Kapoor [mailto:[email protected]]
>> Sent: Friday, March 25, 2011 11:21 AM
>> To: [email protected]
>> Subject: Observer/Observable MapReduce
>>
>> Can someone give me a direction on how to start a map reduce based on
>> the outcome of another map reduce? (Nothing is common between them, apart
>> from the first deciding the scope of the second.)
>>
>> I might also want to set the scope of my second map reduce
>> from/after my first map reduce (scope as in scan(start, stop)).
>>
>> Typically, data comes in over a few tables for us; we start crunching it
>> and then add some more data to the main tables (info etc.) to get rid
>> of table joins.
>>
>> A lightweight framework would do better here than a typical workflow 
>> management tool.
>>
>> thanks,
>> Vishal Kapoor
>>
>
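Whichever store holds the marker (a metadata row as David suggests, or a znode), the second driver's job then reduces to turning "what the first job got to" into a (start, stop) pair for its Scan. A hedged sketch of just that bookkeeping follows; the actual HBase Scan construction and job submission are left as comments since they need a cluster, and the table/row-key names are purely illustrative:

```java
// Incremental-scope sketch: the first job records the last row key it
// processed, and the second job scans only (lastProcessed, newestRow].
class ScanScope {
    // HBase start rows are inclusive, so append a zero byte to start
    // just past the previous high-water mark.
    static String startRow(String lastProcessed) {
        return lastProcessed == null ? "" : lastProcessed + "\0";
    }

    // HBase stop rows are exclusive, so append a zero byte so the
    // newest row itself is still included.
    static String stopRow(String newestRow) {
        return newestRow + "\0";
    }

    public static void main(String[] args) {
        // In the real driver these values would be read from the
        // metadata table (or znode) discussed above:
        String last = "user#2011-03-24";
        String newest = "user#2011-03-25";
        System.out.println("scan [" + startRow(last) + ", " + stopRow(newest) + ")");
        // Scan scan = new Scan(Bytes.toBytes(startRow(last)),
        //                      Bytes.toBytes(stopRow(newest)));
        // ...then configure the second job with TableMapReduceUtil and submit.
    }
}
```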



-- 
Harsh J
http://harshj.com
