On Fri, Nov 11, 2011 at 10:45 AM, Eric Yang <[email protected]> wrote:

> I recommend that one HBase release should be certified for one major
> release of Hadoop to reduce risk. Perhaps when the public Hadoop APIs are
> rock solid, then it will become feasible to have a version of HBase that
> works across multiple versions of Hadoop.
IMO this is entirely untenable. Different users upgrade HDFS at different
rates - soon a lot of people will start to use 0.23, whereas many people
will be running 0.20 for years to come. New versions of HBase need to be
able to run against both (perhaps with some features or improvements only
available on the latest).

> In the proposed HBase structure layout change (HBASE-4337), the packaging
> process excludes the Hadoop jar file, and Hadoop is instead picked up from
> the constructed classpath. This continues the effort to ensure that
> Hadoop-related technologies can work together in an integrated fashion
> (file system layout change in HADOOP-6255), and it is the starting point
> for ensuring that Hadoop can be swapped out with a different major version
> for testing. Once the proposed structure is adopted, the HBase community
> can set up integration tests for HBase against multiple major Hadoop
> releases.

I don't think the file layout is the major barrier here... the barrier is
people actually helping to write integration tests, and working to fix bugs
as they're found. For example, I would very much appreciate anyone's help
getting this build green:

https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-23/

-Todd

--
Todd Lipcon
Software Engineer, Cloudera
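[Editor's note: the classpath-based swap discussed above could be sketched roughly as follows. This is a hypothetical illustration, not HBase's actual launch script; the function name `build_hadoop_classpath` and the directory layout it assumes (jars at the top level and under `lib/` of a Hadoop install) are made up for the example.]

```shell
# Hypothetical sketch: instead of bundling a Hadoop jar, build the
# classpath from whatever Hadoop install is pointed at, so a different
# major version can be swapped in per deployment or test run.
build_hadoop_classpath() {
  hadoop_home="$1"
  cp=""
  for jar in "$hadoop_home"/*.jar "$hadoop_home"/lib/*.jar; do
    # Guard against unmatched globs expanding to the literal pattern.
    [ -e "$jar" ] && cp="$cp:$jar"
  done
  # Strip the leading colon before printing the result.
  printf '%s\n' "${cp#:}"
}
```

A deployment could then run, say, `CLASSPATH=$(build_hadoop_classpath /usr/lib/hadoop-0.23)` to test HBase against a different Hadoop major release without repackaging.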
