Good writeup, Jimmy (I was away for a few days due to an event in my family). Some quick questions: has there been any thought on the plan to use HBASE-5394? Are we going to make the HBase protocols (like HRegionInterface) protobuf-aware?
In Hadoop, I have seen the following two approaches:

1. In HDFS, the protocol definitions are not changed (e.g. org.apache.hadoop.hdfs.protocol.ClientProtocol). Instead, translators are defined that map protobuf data structures to application-level data structures and vice versa (for example, see ClientNamenodeProtocolTranslatorPB and ClientNamenodeProtocolServerSideTranslatorPB in the package org.apache.hadoop.hdfs.protocolPB).

2. In YARN (MRv2), all protocol definitions are written in PB. Since the base RPC still uses Writables for payload encoding, a translation happens when the protobuf objects are sent/received (as an example, look at org.apache.hadoop.ipc.ProtobufRpcEngine and its classes RpcRequestWritable and RpcResponseWritable).

What does the HBase community think about the above? (A rough sketch of what approach 1 could look like for HBase is at the bottom of this mail, below the quoted thread.)

On Feb 13, 2012, at 1:02 PM, Jimmy Xiang wrote:

> I posted the proposal on wiki:
>
> http://wiki.apache.org/hadoop/Hbase/HBaseWireCompatibility
>
> Thanks,
> Jimmy
>
> On Mon, Feb 13, 2012 at 11:03 AM, Ted Yu <[email protected]> wrote:
>
>> Can you post on wiki?
>>
>> Attachment stripped.
>>
>> On Mon, Feb 13, 2012 at 11:01 AM, Jimmy Xiang <[email protected]> wrote:
>>
>>> Hello,
>>>
>>> As the HBase installation base is getting bigger, we are ready to work
>>> on the wire compatibility issue. The goal is to make HBase easier for
>>> operators to upgrade, while also making it easier for developers to
>>> enhance and re-architect if necessary.
>>>
>>> Attached is a proposal we came up with. We'd like to start with two
>>> phases:
>>>
>>> Phase 1: Compatibility between client applications and HBase clusters
>>> Phase 2: HBase cluster rolling upgrade within the same major version
>>>
>>> Could you please review?
>>>
>>> Thanks,
>>> Jimmy
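
To make approach 1 a bit more concrete, here is a very rough sketch of how an HDFS-style client-side translator could look for HBase. This is not working code: GetRequestProto, GetResponseProto, HRegionInterfacePB and the ProtobufUtil helpers are placeholders for things that would come from a .proto file we have not written yet, and the get() signature is simplified.

  // Client-side translator in the spirit of HDFS's ClientNamenodeProtocolTranslatorPB:
  // HRegionInterface stays untouched, and this class converts between application-level
  // objects (Get, Result) and protobuf messages on every call.
  //
  // NOTE: GetRequestProto, GetResponseProto, HRegionInterfacePB and ProtobufUtil are
  // placeholders; ByteString is com.google.protobuf.ByteString.
  public class HRegionInterfaceTranslatorPB implements HRegionInterface {

    private final HRegionInterfacePB rpcProxy;   // PB-typed proxy over the wire

    public HRegionInterfaceTranslatorPB(HRegionInterfacePB rpcProxy) {
      this.rpcProxy = rpcProxy;
    }

    @Override
    public Result get(byte[] regionName, Get get) throws IOException {
      // application object -> protobuf request
      GetRequestProto request = GetRequestProto.newBuilder()
          .setRegionName(ByteString.copyFrom(regionName))
          .setGet(ProtobufUtil.toGetProto(get))   // conversion helper, to be written
          .build();
      // protobuf response -> application object
      GetResponseProto response = rpcProxy.get(request);
      return ProtobufUtil.toResult(response.getResult());
    }

    // ... every other HRegionInterface method would get the same treatment,
    // with a matching server-side translator unwrapping the messages again ...
  }

The appeal of this style is that neither client application code nor HRegionInterface changes; the translators and the .proto files are the only new pieces, which is essentially what HDFS did.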
