Since there may be more than one qualifier for a column family, is it reasonable to add Result.getColumnLatest(byte[]) which returns the latest column for a given family?
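In the meantime, a minimal workaround sketch: it just scans Result.raw() for the given family and keeps the newest KeyValue. The helper name is illustrative, not an HBase API, and the KeyValue.getFamily()/Bytes.equals() comparison is an assumption about how one would match the family:

    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative helper, not part of the HBase API: returns the KeyValue
    // with the largest timestamp in the given family, or null if the family
    // has no cells in this Result.
    public static KeyValue latestInFamily(Result result, byte[] family) {
      KeyValue latest = null;
      for (KeyValue kv : result.raw()) {
        if (!Bytes.equals(kv.getFamily(), family)) {
          continue;
        }
        if (latest == null || kv.getTimestamp() > latest.getTimestamp()) {
          latest = kv;
        }
      }
      return latest;
    }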
Thanks

On Mon, Nov 15, 2010 at 2:12 PM, Ryan Rawson <[email protected]> wrote:
> The javadoc details what to expect.
>
> You can also call Result.getColumnLatest(byte[],byte[])
>
> The resulting KeyValue has methods like getTimestamp() etc.
>
> -ryan
>
> On Mon, Nov 15, 2010 at 2:02 PM, Ted Yu <[email protected]> wrote:
> > We used the following code to see if the "Cell" (pardon me for using the
> > old term) returned from the scanner was of a certain timestamp:
> > if (result.getCellValue().getTimestamp() == ts){
> >
> > In 0.90, I don't see a method which returns the timestamp for Result.
> > I am wondering if the above check can be omitted since we specify the
> > following for the Scan:
> > scan.setTimeStamp(ts);
> > If not, please advise the correct way of retrieving the timestamp from a
> > Result.
> >
> > If I call Result.raw() and get two KeyValues, which should be the KeyValue
> > whose timestamp conforms to the semantics of our check shown at the top?
> >
> > Thanks
> >
> > On Mon, Nov 15, 2010 at 12:38 PM, Ryan Rawson <[email protected]> wrote:
> >
> >> That is correct, those classes were deprecated in 0.20 and are now gone
> >> in 0.90.
> >>
> >> Now you will want to use HTable and Result.
> >>
> >> Also, Filter.getNextKeyHint() is an implementation detail; have a look
> >> at the other filters to get a sense of what it does.
> >>
> >> On Mon, Nov 15, 2010 at 12:33 PM, Ted Yu <[email protected]> wrote:
> >> > Just a few findings when I tried to compile our 0.20.6-based code with
> >> > this new release:
> >> >
> >> > HConstants is a final class now instead of an interface
> >> > RowFilterInterface is gone
> >> > org.apache.hadoop.hbase.io.Cell is gone
> >> > org.apache.hadoop.hbase.io.RowResult is gone
> >> > the constructor
> >> > HColumnDescriptor(byte[],int,java.lang.String,boolean,boolean,int,boolean)
> >> > is gone
> >> > Put.setTimeStamp() is gone
> >> > org.apache.hadoop.hbase.filter.Filter has added
> >> > getNextKeyHint(org.apache.hadoop.hbase.KeyValue)
> >> >
> >> > If you know the alternative to some of the old classes, please share.
> >> >
> >> > On Mon, Nov 15, 2010 at 2:51 AM, Stack <[email protected]> wrote:
> >> >
> >> >> The first hbase 0.90.0 release candidate is available for download:
> >> >>
> >> >> http://people.apache.org/~stack/hbase-0.90.0-candidate-0/
> >> >>
> >> >> HBase 0.90.0 is the major HBase release that follows 0.20.0 and the
> >> >> fruit of the 0.89.x development release series we've been running of
> >> >> late.
> >> >>
> >> >> More than 920 issues have been closed since 0.20.0. Release notes are
> >> >> available here: http://su.pr/8LbgvK.
> >> >>
> >> >> HBase 0.90.0 runs on Hadoop 0.20.x. It does not currently run on
> >> >> Hadoop 0.21.0. HBase will lose data unless it is running on a Hadoop
> >> >> HDFS 0.20.x that has a durable sync. Currently only the
> >> >> branch-0.20-append branch [1] has this attribute. No official releases
> >> >> have been made from this branch as yet, so you will have to build your
> >> >> own Hadoop from the tip of this branch or install Cloudera's CDH3 [2]
> >> >> (it's currently in beta). CDH3b2 or CDH3b3 have the 0.20-append patches
> >> >> needed to add a durable sync. See CHANGES.txt [3] in branch-0.20-append
> >> >> for the list of patches involved.
> >> >>
> >> >> There is no migration necessary. Your data written with HBase 0.20.x
> >> >> (or with HBase 0.89.x) is readable by HBase 0.90.0. A shutdown and
> >> >> restart after putting the new HBase in place should be all that's
> >> >> involved. That said, once the transition has been made, there is no
> >> >> going back to 0.20.x. HBase 0.90.0 and HBase 0.89.x write region names
> >> >> differently in the filesystem. A rolling restart from 0.20.x or 0.89.x
> >> >> to 0.90.0RC0 will not work.
> >> >>
> >> >> Should we release this candidate as hbase 0.90.0? Take it for a spin.
> >> >> Check out the doc. Vote +1/-1 by November 22nd.
> >> >>
> >> >> Yours,
> >> >> The HBasistas
> >> >> P.S. For why the version is 0.90 and what's new in HBase 0.90, see
> >> >> slides 4-10 in this deck [4]
> >> >>
> >> >> 1. http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append
> >> >> 2. http://archive.cloudera.com/docs/
> >> >> 3. http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/CHANGES.txt
> >> >> 4. http://hbaseblog.com/2010/07/04/hug11-hbase-0-90-preview-wrap-up/
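For the timestamp question in the thread above, a minimal sketch of what the check might look like against the 0.90 API, using Result.getColumnLatest(byte[],byte[]) as Ryan suggests. The table name "t", family "cf", qualifier "q", and the surrounding method are placeholders, not anything from the thread:

    import java.io.IOException;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    // Placeholder names throughout; ts is the timestamp being checked for.
    public static void scanAtTimestamp(long ts) throws IOException {
      HTable table = new HTable("t");
      Scan scan = new Scan();
      scan.setTimeStamp(ts);  // restrict the scan to the one timestamp
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result result : scanner) {
          // 0.90-style replacement for the old Cell-based timestamp check:
          KeyValue kv = result.getColumnLatest(Bytes.toBytes("cf"),
              Bytes.toBytes("q"));
          if (kv != null && kv.getTimestamp() == ts) {
            // process kv.getValue() ...
          }
        }
      } finally {
        scanner.close();
      }
    }

With scan.setTimeStamp(ts) in place the explicit getTimestamp() comparison should be redundant; it is kept here only to mirror the original 0.20.6 check.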

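On the Put.setTimeStamp() removal in Ted's list above: as far as I know, in 0.90 the timestamp is supplied either on the Put constructor or per cell in add(). A minimal sketch with placeholder row/family/qualifier/value names:

    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    // Placeholder names throughout; ts is the desired cell timestamp.
    public static void putAtTimestamp(long ts) {
      // Default timestamp for every cell in the Put, set via the constructor:
      Put p1 = new Put(Bytes.toBytes("row1"), ts);
      p1.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));

      // Or per cell, directly in the add() call:
      Put p2 = new Put(Bytes.toBytes("row1"));
      p2.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), ts, Bytes.toBytes("value"));
    }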