On Mon, May 5, 2008 at 6:12 PM, Clint Morgan <[EMAIL PROTECTED]> wrote:
> Actually, I think a more simple approach will get what we want here:
> Give hbase a custom filesystem which writes to hdfs, then to s3, but
> reads just from hdfs.

Keep in mind that Amazon is about to release permanent block storage for
EC2: http://aws.typepad.com/aws/2008/04/block-to-the-fu.html

"These volumes can be thought of as raw, unformatted disk drives which
can be formatted and then used as desired (or even used as raw storage
if you'd like). Volumes can range in size from 1 GB on up to 1 TB; you
can create and attach several of them to each EC2 instance. They are
designed for low latency, high throughput access from Amazon EC2.
Needless to say, you can use these volumes to host a relational
database."

This might save you or the Hadoop team a lot of unnecessary work. Or
aren't you talking about EC2 instances?

--
Leon Mergen
http://www.solatis.com
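[Editor's note: a minimal sketch of the dual-write filesystem Clint
describes above, built on Hadoop's FileSystem API. The class name,
constructor wiring, and the two-method surface are hypothetical, not
actual HBase or Hadoop code; a real implementation would subclass
FileSystem and cover the full interface.]

    import java.io.IOException;
    import java.io.OutputStream;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical wrapper: writes are mirrored to HDFS and S3,
    // reads are served from HDFS only.
    public class DualWriteFileSystem {
        private final FileSystem hdfs; // primary store, used for all reads
        private final FileSystem s3;   // backup store, written in parallel

        public DualWriteFileSystem(FileSystem hdfs, FileSystem s3) {
            this.hdfs = hdfs;
            this.s3 = s3;
        }

        // Reads never touch S3.
        public FSDataInputStream open(Path path) throws IOException {
            return hdfs.open(path);
        }

        // Every byte written goes to both underlying filesystems.
        public OutputStream create(Path path) throws IOException {
            final FSDataOutputStream hdfsOut = hdfs.create(path);
            final FSDataOutputStream s3Out = s3.create(path);
            return new OutputStream() {
                @Override public void write(int b) throws IOException {
                    hdfsOut.write(b);
                    s3Out.write(b);
                }
                @Override public void write(byte[] b, int off, int len)
                        throws IOException {
                    hdfsOut.write(b, off, len);
                    s3Out.write(b, off, len);
                }
                @Override public void close() throws IOException {
                    hdfsOut.close();
                    s3Out.close();
                }
            };
        }
    }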
