If it is just API compat, why not re-add the 0.19 API to 0.21?  It
hasn't even been a year since the new 0.20 API was released...
Maintaining the old API costs nothing in terms of core code (i.e., it's
not preventing us from adding features).
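
For reference, roughly what the two client write paths look like -- this
is from memory, so class and method names are approximate, but it shows
what "the 0.19 API" versus "the 0.20 API" means for users:

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.io.BatchUpdate;
  import org.apache.hadoop.hbase.util.Bytes;

  // 0.19-style write: "family:qualifier" strings and BatchUpdate/commit
  HTable oldTable = new HTable(new HBaseConfiguration(), "mytable");
  BatchUpdate update = new BatchUpdate("row1");
  update.put("info:name", Bytes.toBytes("some value"));
  oldTable.commit(update);

  // 0.20-style write: explicit Put with separate family and qualifier
  HTable newTable = new HTable(new HBaseConfiguration(), "mytable");
  Put put = new Put(Bytes.toBytes("row1"));
  put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("some value"));
  newTable.put(put);

Keeping the old classes around as thin wrappers over the new ones is the
kind of thing I mean by "costs nothing in terms of core code".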

-ryan

On Mon, Jan 25, 2010 at 6:40 PM, Andrew Purtell <apurt...@apache.org> wrote:
> I have opened a branch for 0.20 (currently 0.20.3 RC2) updated to run
> on Hadoop 0.21 (currently 0.21-dev). It's passing all unit tests now.
>
>   http://svn.apache.org/repos/asf/hadoop/hbase/branches/0.20_on_hadoop-0.21
>
> The aim is for client API compatibility. This is an intermediate step
> for upgrading to 0.21 so HDFS improvements, e.g. hflush and we hope
> also HDFS-630, can be incorporated immediately through an upgrade of
> the deployment, but without requiring any change to users of the HBase
> client API or the Thrift or Stargate connectors.
>
> Unlike the 0.20 on Hadoop 0.18.3 branch, the 0.20 on 0.21 branch has a
> more aggressive strategy for back end changes -- things that touch HDFS
> and/or a major performance win may go in, evaluated on a case by case
> basis, as long as they do not change client side API or semantics. For
> example, the HLog and related improvements from HBase trunk have been
> backported.
>
> I do not expect to maintain this branch beyond some transition period,
> maybe until 0.21.1.
>
>   - Andy
