Hadoop supports random reads. However, it does not support random writes.
HDFS files are write-once: when you create a file, you can write to it
sequentially, and once you close it, it becomes read-only.
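
For the random-read side, you can open the file and seek to any offset.
A minimal untested sketch (the path, offset, and length below are just
placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RandomRead {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/user/wayne/File1"); // placeholder path
        long start = 1024L;   // byte offset you were given
        int length = 4096;    // number of bytes you want

        byte[] buffer = new byte[length];
        FSDataInputStream in = fs.open(file);
        try {
            in.seek(start);                  // jump to the offset
            in.readFully(buffer, 0, length); // read exactly `length` bytes
        } finally {
            in.close();
        }
        // buffer now holds the requested section of File1
    }
}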
To replace a section of a file, you can create a temp file, copy the parts
of the original file you want to keep into it together with the new data,
and then rename the temp file to the original name.
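
An untested sketch of that replace-by-rewrite idea (paths and sizes are
placeholders, and the exact FileSystem calls may vary with your Hadoop
version):

import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplaceSection {
    // Overwrite patch.length bytes of src starting at offset by rewriting
    // the whole file to a temp file and renaming it back.
    public static void replaceSection(FileSystem fs, Path src,
                                      long offset, byte[] patch)
            throws IOException {
        Path tmp = new Path(src.getParent(), src.getName() + ".tmp");
        long fileLen = fs.getFileStatus(src).getLen();

        FSDataInputStream in = fs.open(src);
        FSDataOutputStream out = fs.create(tmp);
        try {
            copy(in, out, offset);              // bytes before the section
            out.write(patch);                   // the replacement data
            in.seek(offset + patch.length);     // skip the old section
            copy(in, out, fileLen - offset - patch.length); // the rest
        } finally {
            in.close();
            out.close();
        }

        fs.delete(src, false); // remove the original...
        fs.rename(tmp, src);   // ...and give the temp file its name
    }

    private static void copy(FSDataInputStream in, FSDataOutputStream out,
                             long count) throws IOException {
        byte[] buf = new byte[64 * 1024];
        long left = count;
        while (left > 0) {
            int n = in.read(buf, 0, (int) Math.min(buf.length, left));
            if (n < 0) break;
            out.write(buf, 0, n);
            left -= n;
        }
    }
}

Note that this rewrites the entire file, so it is only reasonable for
occasional updates, not frequent small writes.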

Runping


> -----Original Message-----
> From: Wayne Liu [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, May 22, 2007 9:09 AM
> To: [email protected]
> Subject: Re: Does Hadoop support Random Read/Write?
> 
> 2007/5/22, Chad Walters <[EMAIL PROTECTED]>:
> 
> > The HDFS is designed for writing data in large blocks and then later
> > reading
> > through those blocks. So the primary usages are sequential writing and
> > sequential reading.
> >
> Thank you, Chad.
> This meets most of my needs, but there are two more:
> 
> First: I write a large image file (File1) into HDFS, and if you give me
> the start position and the length of the data you want, I can locate it
> in File1 and read it. This is called a random read.
> 
> Second: as you know, sometimes I just want to replace a section of data
> in File1, so I first have to locate it and then replace it with another
> section of data. This is called a random write.
> 
> How can I solve these two problems?
> Waiting for your reply, thanks a lot!
