Hmm, I was under the impression that HDFS is like GFS but optimized for
appends, whereas GFS also supports random writes. So let's say I want to
process logs using Hadoop. The only way I can do it is to move the entire
log into Hadoop from somewhere else and then run Map/Reduce jobs against
it. That seems to kind of defeat the purpose. Am I missing something?
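For what it's worth, the only workflow I can see is something like the
rough sketch below (the paths are made up), copying the whole log file
into HDFS with the FileSystem API and then pointing a job at it:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyLogIntoHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up the cluster config
        FileSystem fs = FileSystem.get(conf);       // handle to the configured HDFS
        // Whole-file copy from local disk into HDFS; there is no way to
        // trickle new log lines in afterwards.
        fs.copyFromLocalFile(new Path("/var/log/app/app.log"),  // hypothetical local log
                             new Path("/logs/app.log"));        // hypothetical HDFS destination
        fs.close();
    }
}

and then hand /logs/app.log to the Map/Reduce job as its input path.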

Thanks
A

On 6/13/07, Briggs <[EMAIL PROTECTED]> wrote:

No appending, AFAIK.  Hadoop is not intended for writing in this way.
It's more of a write-once, read-many system. Such granular writes would
be inefficient.
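
If it helps, here is a rough sketch of what the write path looks like
with the FileSystem API (the path and contents are made up): create()
gives you a fresh file, and there is no append call to add to an
existing one.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteOnceExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/logs/app.log");      // hypothetical HDFS path
        // create(path, overwrite) replaces any existing file; you cannot
        // append to data that was written earlier.
        FSDataOutputStream out = fs.create(p, true);
        out.writeBytes("a new log line\n");
        out.close();
        fs.close();
    }
}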

On 6/13/07, Phantom <[EMAIL PROTECTED]> wrote:
> Hi
>
> Can this only be done in read-only or write-only mode? How do I do
> appends? Because if I am using this for writing logs then I would want
> to append to the file rather than overwrite it, which is what the
> write-only mode is doing.
>
> Thanks
> A
>


--
"Conscious decisions by conscious minds are what make reality real"
