2007/5/22, Chad Walters <[EMAIL PROTECTED]>:
The HDFS is designed for writing data in large blocks and then later reading through those blocks. So the primary usages are sequential writing and sequential reading.
Thank you, Chad.
This covers most of my needs, but I have two more requirements:
First: I write a large image file (File1) into HDFS, and then, given a start position and a length, I need to locate that section in File1 and read it. This is called random read.
Second: as you know, sometimes I just want to replace a section of data in File1, so I have to locate it first and then overwrite it with another section of data. This is called random write.
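Right now the only workaround I can think of is to rewrite the whole file with the section substituted, roughly like the sketch below (the paths, offsets, and the final delete/rename step are only my assumptions, and this looks very expensive for a large file):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplaceSectionSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path src = new Path("/data/File1");      // example paths only
        Path tmp = new Path("/data/File1.new");
        long start = 1024L * 1024L;              // start of the section to replace
        byte[] replacement = new byte[4096];     // the new section of data

        FSDataInputStream in = fs.open(src);
        FSDataOutputStream out = fs.create(tmp);
        try {
            byte[] buf = new byte[64 * 1024];

            // 1. copy everything before the section unchanged
            long toCopy = start;
            while (toCopy > 0) {
                int n = in.read(buf, 0, (int) Math.min(buf.length, toCopy));
                if (n < 0) break;
                out.write(buf, 0, n);
                toCopy -= n;
            }

            // 2. write the replacement and skip over the old section
            out.write(replacement);
            in.seek(start + replacement.length);

            // 3. copy the remainder of the file unchanged
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
        } finally {
            in.close();
            out.close();
        }

        // 4. swap the new file in place of File1
        //    (I am not sure these are the right calls in every version)
        fs.delete(src, false);
        fs.rename(tmp, src);
    }
}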
Is there a better way to solve these two problems?
Waiting for your reply, thanks a lot!