Re: HDFS File Read

2007-11-17 Thread j2eeiscool
Hi Raghu, I understand that. I have also read that there is something in the works which will address some of this (Reader able to get data before Writer is completely done: HADOOP-1700). In my test the Writer and Reader are different threads (they could even be different processes). So how

hbase feature question

2007-11-17 Thread Billy
I was looking over the bigtable pdf again to make sure that's where I read this, but their setup allows Column Families to be removed from the database in garbage collection. Is this a feature that will be added to hbase? Basically it allows you to set a max TTL for a column row. I can see
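The TTL idea Billy describes can be sketched as a toy garbage-collection pass: cells older than the configured TTL are dropped. This is an illustrative model only, not HBase or Bigtable code; the function and field names are made up for the example.

```python
import time

# Toy model of TTL-based cell garbage collection, in the spirit of
# Bigtable's per-column-family TTL. Not a real HBase API.
def prune_expired(cells, ttl_seconds, now=None):
    """Keep only cells whose timestamp is within ttl_seconds of now.

    cells: list of (timestamp, value) tuples.
    """
    now = time.time() if now is None else now
    return [(ts, v) for ts, v in cells if now - ts <= ttl_seconds]

cells = [(100.0, "old"), (950.0, "recent"), (990.0, "newest")]
# With now=1000 and a 60-second TTL, only cells written at t >= 940 survive.
print(prune_expired(cells, ttl_seconds=60, now=1000.0))
```

In a real store this pruning would run lazily during compaction rather than eagerly on every read.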

RE: hbase feature question

2007-11-17 Thread Jim Kellerman
Currently, HBase supports specifying a maximum number of versions to keep. Older versions are removed during compaction. However, we do not currently support a TTL for columns. Please enter a Jira using the component contrib/hbase and request a feature improvement. Thanks. --- Jim Kellerman,
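The behavior Jim describes, keeping only the newest N versions and discarding the rest during compaction, can be sketched like this. It is a hedged toy model of the policy, not actual HBase compaction code.

```python
# Toy sketch of HBase-style version pruning during compaction:
# each cell keeps at most max_versions entries, newest first.
def compact(versions, max_versions):
    """versions: list of (timestamp, value); keep the newest max_versions."""
    return sorted(versions, key=lambda tv: tv[0], reverse=True)[:max_versions]

history = [(1, "a"), (3, "c"), (2, "b"), (4, "d")]
print(compact(history, max_versions=2))  # [(4, 'd'), (3, 'c')]
```

A TTL policy, as requested in the thread, would add a second predicate to this pruning step instead of (or alongside) the version count.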

Re: hbase feature question

2007-11-17 Thread Billy
Thanks, added here: https://issues.apache.org/jira/browse/HADOOP- Billy Jim Kellerman [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Currently, HBase supports specifying a maximum number of versions to keep. Older versions are removed during compaction. However, we do not currently

Re: HDFS File Read

2007-11-17 Thread Aaron Kimball
You could write the file out under a dummy name and then rename it to the target filename after the write is complete. The reader simply blocks until the correct filename exists. - Aaron j2eeiscool wrote: Hi Raghu, I understand that. I have also read that there is something in the works
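Aaron's write-then-rename pattern can be sketched on a local filesystem; HDFS exposes an analogous `FileSystem.rename`, but the names and polling reader below are illustrative only.

```python
import os
import tempfile

def publish(path, data):
    """Write under a dummy name, then rename to the target filename."""
    tmp = path + ".tmp"        # dummy name; readers never look for this
    with open(tmp, "wb") as f:
        f.write(data)
    os.rename(tmp, path)       # atomic on POSIX: readers never see a partial file

def try_read(path):
    """Reader side: returns None until the final filename exists."""
    if not os.path.exists(path):
        return None
    with open(path, "rb") as f:
        return f.read()

d = tempfile.mkdtemp()
target = os.path.join(d, "out.dat")
assert try_read(target) is None   # nothing published yet
publish(target, b"hello")
print(try_read(target))           # b'hello'
```

A real reader would poll or block in a loop on the existence check rather than returning immediately.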

Re: HDFS File Read

2007-11-17 Thread Ted Dunning
The file will not appear to exist at all until the writer is done. If you just list the directory, you will see the file appear. OR... You could use an out-of-band signal from the writer to the reader. You don't have to use Hadoop for everything. It is only intended to do the things that you
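Ted's out-of-band signal can be sketched with a plain `threading.Event` between a writer and a reader thread; in practice the channel could equally be a message queue, an RPC, or a "done" marker file. This is a minimal sketch, not Hadoop code.

```python
import threading

# Out-of-band completion signal: the writer notifies the reader through a
# channel outside HDFS (here a threading.Event).
done = threading.Event()
result = []

def writer():
    result.append("data written to HDFS")  # stand-in for the actual HDFS write
    done.set()                             # signal completion out of band

def reader():
    done.wait()                            # block until the writer signals
    return result[0]

t = threading.Thread(target=writer)
t.start()
print(reader())
t.join()
```

The point of the pattern is that the reader blocks on the signal, not on the file, so it never observes a half-written file.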

Re: hbase split error

2007-11-17 Thread Billy
Bug submitted: https://issues.apache.org/jira/browse/HADOOP-2223 Billy Billy [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] I'm running release 0.15.0. I have one table that is trying to split on a restart of hbase, but it keeps failing and exiting after a not a valid DFS filename

Re: when will the feature HADOOP-1700 be implemented ?

2007-11-17 Thread Mafish Liu
Hi, Dhruba. I got some questions while reading your design. 1. In section "Lease recovery", you said: "Any Datanode that has a BlockGenerationStamp that is larger than what is stored in the BlocksMap is guaranteed to contain data from the last successful write to that block." To my understanding,
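The generation-stamp rule Mafish quotes can be illustrated with a toy selection function: during lease recovery, replicas whose stamp exceeds the one recorded in the BlocksMap are the ones holding the last successful write. The data layout below is made up for illustration and does not reflect the actual HDFS structures.

```python
# Toy illustration of the quoted lease-recovery rule: a replica with a
# generation stamp larger than the BlocksMap's stamp saw the latest write.
def replicas_with_latest_write(replicas, blocksmap_stamp):
    """replicas: list of (datanode, generation_stamp) pairs."""
    return [dn for dn, stamp in replicas if stamp > blocksmap_stamp]

replicas = [("dn1", 7), ("dn2", 9), ("dn3", 9)]
print(replicas_with_latest_write(replicas, blocksmap_stamp=7))  # ['dn2', 'dn3']
```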