Thanks, Owen and David. I also thought of making a queue so that I can push catalog names onto the end of it, while the job control loop keeps removing items from the queue until there are none left.
However, the problem is that I don't see how I can do this within the map/reduce context. All the code examples are one-shot deals and there is no iteration involved. Furthermore, what David said makes sense, but to avoid an infinite loop, the code must remove the record it just read from the input file. How do I do that using Hadoop's fs? Or does Hadoop take care of it automatically?

-- Jim

On 10/29/07, David Balatero <[EMAIL PROTECTED]> wrote:
> Aren't these questions a little advanced for a bear to be asking?
> I'll be here all night...
>
> But seriously, if your job is inherently recursive, one possible way
> to do it would be to make sure that you output in the same format
> that you input. Then you can keep re-reading the outputted file back
> into a new map/reduce job, until you hit some base case and you
> terminate. I've had a main method before that would kick off a bunch
> of jobs in a row -- but I wouldn't really recommend starting another
> map/reduce job in the scope of a running map() or reduce() method.
>
> - David
>
>
> On Oct 29, 2007, at 2:17 PM, Jim the Standing Bear wrote:
>
> > then

--
--------------------------------------
Standing Bear Has Spoken
--------------------------------------
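For reference, here is a rough sketch of the driver-loop approach David describes: a main() that keeps submitting jobs, feeding each iteration's output back in as the next iteration's input, until a base case is reached. It uses the old org.apache.hadoop.mapred API; the mapper/reducer setup is omitted, and the class name, paths, and the NEW_CATALOGS counter are hypothetical illustrations, not anything from the thread. Exact method names varied a bit across early Hadoop releases.

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class IterativeCatalogDriver {

      // Hypothetical counter the reducer would increment for each
      // newly discovered catalog name.
      public enum CrawlCounter { NEW_CATALOGS }

      public static void main(String[] args) throws Exception {
        Path input = new Path(args[0]);        // seed list of catalog names
        Path outputRoot = new Path(args[1]);
        int iteration = 0;
        boolean moreWork = true;

        while (moreWork) {
          Path output = new Path(outputRoot, "iter-" + iteration);

          JobConf conf = new JobConf(IterativeCatalogDriver.class);
          conf.setJobName("catalog-crawl-" + iteration);
          // Mapper/Reducer configuration omitted; the important point is
          // that the reducer writes records in the same format the mapper
          // reads, so this output can serve as the next iteration's input.
          FileInputFormat.setInputPaths(conf, input);
          FileOutputFormat.setOutputPath(conf, output);

          RunningJob job = JobClient.runJob(conf); // blocks until done

          // Base case: stop when no new catalogs were discovered.
          long newCatalogs =
              job.getCounters().getCounter(CrawlCounter.NEW_CATALOGS);
          moreWork = newCatalogs > 0;

          // The driver, not map() or reduce(), disposes of consumed input,
          // so no record is ever fed back in twice. Keep the original seed.
          if (iteration > 0) {
            FileSystem.get(conf).delete(input, true); // recursive delete
          }

          input = output;                        // output becomes next input
          iteration++;
        }
      }
    }

The point of this shape is that all iteration and cleanup lives in the driver's while loop, so the map() and reduce() methods never launch jobs or delete files themselves, which matches David's caution about not starting a new job from inside a running task.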
