It certainly would help me to be able to do this, and I would guess it would
ease the "limited number of files" problem that people encounter.
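
For what it's worth, here is a minimal Java sketch of the workaround Ted
describes below: do the appending on local disk outside of Hadoop, then
periodically copy the closed file into HDFS as a new file. The spool path,
the HDFS target path, and the rotation trigger are all invented for the
example, not taken from this thread:

    // Append records to a local spool file (ordinary POSIX append),
    // then periodically push the closed file into HDFS as a new file.
    // HDFS itself never sees an append.
    import java.io.FileWriter;
    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SpoolToHdfs {
        public static void main(String[] args) throws IOException {
            // 1. Append incoming records locally; "true" opens the file
            //    in append mode.
            FileWriter spool = new FileWriter("/var/spool/app/current.log", true);
            spool.write("one incoming record\n");
            spool.close();

            // 2. On a timer (cron or a rotation thread), copy the closed
            //    file into HDFS under a fresh, timestamped name.
            FileSystem hdfs = FileSystem.get(new Configuration());
            hdfs.copyFromLocalFile(
                    new Path("/var/spool/app/current.log"),
                    new Path("/logs/app/" + System.currentTimeMillis() + ".log"));
        }
    }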

-----Original Message-----
From: Jeff Hammerbacher [mailto:[EMAIL PROTECTED]
Sent: Wed 8/29/2007 12:30 PM
To: [email protected]
Subject: Re: Using Map/Reduce without HDFS?
 
haven't heard much on this subject actually:
http://issues.apache.org/jira/browse/HADOOP-1700

On 8/29/07, Ted Dunning <[EMAIL PROTECTED]> wrote:
>
>
> You can't append in Hadoop, AFAIK.
>
> The appending would be done outside of Hadoop with a periodic copy into
> HDFS.
>
> I hear that append operations are coming soon.
>
> -----Original Message-----
> From: mfc [mailto:[EMAIL PROTECTED]
> Sent: Mon 8/27/2007 6:48 PM
> To: [email protected]
> Subject: Re: Using Map/Reduce without HDFS?
>
>
> Hi,
>
> Can you elaborate how this is done in Hadoop?
>
> Thanks
>
>
> Ted Dunning-3 wrote:
> >
> >
> > It is often also possible to merge the receiving of the new data with
> > the appending to a large file.  The append-only nature of the writing
> > makes this much more efficient than scanning a pile of old files.
> >
> >
>
>
>
>
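
To make Ted's quoted point concrete (merge the receiving of new data with
the appending, rather than landing each record in its own small file), a
single receiver can append everything it gets to one growing file. A
minimal Java sketch of that idea, with the port and output path invented
for the example:

    // One receiver thread appends every incoming record to one large
    // file as it arrives. The writes are sequential, which is why this
    // beats scanning a pile of small old files later.
    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class AppendingReceiver {
        public static void main(String[] args) throws IOException {
            ServerSocket server = new ServerSocket(9999);
            BufferedWriter out = new BufferedWriter(
                    new FileWriter("/data/incoming/records.log", true));
            while (true) {
                Socket client = server.accept();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    out.write(line);   // receive and append in one step
                    out.newLine();
                }
                out.flush();           // make the batch durable
                client.close();
            }
        }
    }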
