On Fri, 21 Dec 2007 12:24:57 PST, John Heidemann wrote: 
>On Thu, 20 Dec 2007 18:46:58 PST, Kirk True wrote: 
>>Hi all,
>>
>>A lot of the ideas I have for incorporating Hadoop into internal projects 
>>revolve around distributing long-running tasks over multiple machines. I've 
>>been able to get a quick prototype up in Hadoop for one of those projects and 
>>it seems to work pretty well. 
>>...
>He's not asking "is Hadoop optimal" for things that aren't really
>map/reduce, but "is it reasonable" for those things?
>(Kirk, is that right?)
>...

Sorry to double reply, but I left out my comment on (my view of) Kirk's
question.

In addition to what Ted said, I'm not sure how well Hadoop handles
long-running jobs, particularly how they interact with its fault
tolerance code.
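
The specific worry is the task timeout: a task that goes quiet for
longer than mapred.task.timeout (ten minutes by default) is presumed
hung, killed, and rerun elsewhere. If your work legitimately runs that
long, you can keep the framework informed by calling
Reporter.progress() between slices of work. A rough sketch, using the
current mapred API; the class name and the step-wise loop are made up
for illustration:

  import java.io.IOException;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.MapReduceBase;
  import org.apache.hadoop.mapred.Mapper;
  import org.apache.hadoop.mapred.OutputCollector;
  import org.apache.hadoop.mapred.Reporter;

  // Hypothetical mapper whose single map() call runs a long time.
  public class LongTaskMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, Text> {

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> output,
                    Reporter reporter) throws IOException {
      // Stand-in for the real long-running computation, broken
      // into steps so we can report between them.
      for (int step = 0; step < 1000; step++) {
        doOneStep(step);
        // Tell the TaskTracker we're still alive; a task that is
        // silent past mapred.task.timeout is killed and re-executed.
        reporter.progress();
      }
      output.collect(new Text(key.toString()), new Text("done"));
    }

    private void doOneStep(int step) {
      // ... real work for one slice of the job ...
    }
  }

Whether that's acceptable depends on whether your task can be chopped
into slices at all; if it's one opaque ten-hour computation, the fault
tolerance machinery has nothing to work with.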

And more generally, if you're not doing map/reduce, then you'd probably
have to build your own fault tolerance machinery.
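
By "build your own" I mean something along these lines: record a
durable checkpoint as the task progresses, and on failure retry from
the last checkpoint rather than from scratch. All the names below are
made up; there's no Hadoop in it, which is exactly the point:

  // Made-up harness for a restartable long-running task; this is
  // the machinery map/reduce gives you for free and you'd otherwise
  // have to supply yourself.
  public class RetryingRunner {

    public interface Task {
      int totalSteps();
      int loadCheckpoint();            // step to resume from, 0 if none
      void runStep(int step) throws Exception;
      void saveCheckpoint(int step);   // must be durable (e.g., on disk)
    }

    public static void run(Task task, int maxAttempts) throws Exception {
      for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          // Resume where the last attempt left off, not at step 0.
          for (int step = task.loadCheckpoint();
               step < task.totalSteps(); step++) {
            task.runStep(step);
            task.saveCheckpoint(step + 1);
          }
          return;                      // finished successfully
        } catch (Exception e) {
          if (attempt == maxAttempts) {
            throw e;                   // out of retries, give up
          }
          // otherwise loop and retry from the saved checkpoint
        }
      }
    }
  }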

   -John Heidemann
