Wayne,

If you could explain your use case in a little more detail, folks could
probably provide more helpful information.

HDFS is designed for writing data in large blocks and then streaming back
through those blocks, so its primary access patterns are sequential writes and
sequential reads.
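To make that concrete, here is a rough stand-in sketch of the pattern HDFS is
optimized for: write a file once in large sequential chunks, then stream
through it front to back. This uses the local filesystem via plain Java
(not the Hadoop FileSystem API), and the block size here is just illustrative:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

// Stand-in for the HDFS access pattern: large sequential writes,
// then a sequential read through the whole file.
public class SequentialIo {
    // Tiny stand-in block size; HDFS blocks are tens of megabytes.
    static final int BLOCK = 4096;

    // Write `blocks` fixed-size blocks back to back; returns bytes written.
    static long writeSequential(Path p, int blocks) throws IOException {
        byte[] block = new byte[BLOCK];
        try (OutputStream out = Files.newOutputStream(p)) {
            for (int i = 0; i < blocks; i++) {
                Arrays.fill(block, (byte) i);
                out.write(block);
            }
        }
        return (long) blocks * BLOCK;
    }

    // Read the file front to back; returns total bytes read.
    static long readSequential(Path p) throws IOException {
        byte[] buf = new byte[BLOCK];
        long total = 0;
        try (InputStream in = Files.newInputStream(p)) {
            int n;
            while ((n = in.read(buf)) != -1) total += n;
        }
        return total;
    }
}
```

The point is the shape of the I/O, not the API: no seeks, no in-place
updates, just append-style writes and a single pass over the data.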

HBase (under development in the contrib section of Hadoop) builds a table
abstraction on top of HDFS. Each row in the table has a single key field, and
the abstraction supports random writes across the key space. Random reads are
also supported, although sequential scans through the key space are
considerably faster.
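The table abstraction can be modeled with a sorted map keyed by the row key.
This is only a stand-in sketch using java.util.TreeMap (not the actual HBase
client API), showing random put/get by key plus a sequential range scan:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

// Model of the HBase table abstraction: rows kept sorted by a single
// row key, random writes/reads by key, and range scans in key order.
public class RowTable {
    private final NavigableMap<String, String> rows = new TreeMap<>();

    // Random write anywhere in the key space.
    public void put(String rowKey, String value) {
        rows.put(rowKey, value);
    }

    // Random read of a single row by key.
    public String get(String rowKey) {
        return rows.get(rowKey);
    }

    // Sequential scan: values for startKey <= key < stopKey, in key order.
    public List<String> scan(String startKey, String stopKey) {
        return new ArrayList<>(
            rows.subMap(startKey, true, stopKey, false).values());
    }
}
```

Because the rows are kept in key order, a scan walks adjacent entries, which
is why sequential scans beat issuing one random read per key.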

Chad Walters
Powerset


On 5/21/07 5:12 PM, "Wayne Liu" <[EMAIL PROTECTED]> wrote:

> Hello,
> 
>       The documentation says that Hadoop supports the common file system
> operations, so I just want to ask whether it supports random reads and writes
> to a file.
> 
>       Waiting for your reply, thanks a lot!
