Sounds like I don't need to care about it, as we won't be attempting to deploy hbase98 on hadoop1. Thanks Andy!
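
For anyone reaching this thread from a search: the rebuild advice Ted and Andrew give in the quoted messages below boils down to a couple of Maven invocations. Here is a minimal sketch, not an official script — the 2.6.0 version is just this thread's example, and the script only prints the commands so the invocation can be inspected before running it from an HBase 0.98 source tree:

```shell
#!/bin/sh
# Sketch only: prints the Maven commands rather than running them, so
# the invocation can be inspected; drop the echoes to actually build.
HADOOP_VERSION="2.6.0"   # illustrative; use the version your cluster runs

# hadoop-2.0 is already the default profile on the 0.98 branch, so only
# the hadoop-two.version property needs to be overridden.
MVN_BUILD="mvn clean install -DskipTests -Dhadoop-two.version=${HADOOP_VERSION}"
# Assembles the binary tarball from the modules built above.
MVN_ASSEMBLE="mvn install -DskipTests site assembly:single -Dhadoop-two.version=${HADOOP_VERSION}"

echo "$MVN_BUILD"
echo "$MVN_ASSEMBLE"
```

Per Andrew's note below, the release/apache-release profiles and the deploy phase are only needed for official RC builds, so they are omitted here.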
> On Feb 12, 2015, at 4:04 PM, Andrew Purtell <[email protected]> wrote:
>
> Yes, for the 0.98 build we munge POMs because we still semi-support
> Hadoop 1 with this release line. Convenience binary artifacts are
> incompatible with Hadoop 2 if built for Hadoop 1, and vice versa, so we
> need some way to distinguish them, and Maven classifiers were not up to
> the task. Some of the others around here who suffered through that could
> probably tell you more about why. You don't need to care about it, but
> if you would like, for consistency's sake, to do this for internal
> builds, please see below.
>
> Munging POMs happens after tagging an RC in git; the 0.98 build
> procedure is roughly:
>
> 1. Update POMs and CHANGES.txt for the RC
>
> 2. Tag the RC
>
> 3. Generate Hadoop variant POMs:
>
> $ bash dev-support/generate-hadoopX-poms.sh $version $version-hadoop1
> $ bash dev-support/generate-hadoopX-poms.sh $version $version-hadoop2
>
> 4. Build Hadoop 1 convenience binaries:
>
> $ mvn -f pom.xml.hadoop1 clean install -DskipTests -Prelease && \
>     mvn -f pom.xml.hadoop1 install -DskipTests site assembly:single \
>     -Prelease && \
>     mvn -f pom.xml.hadoop1 deploy -DskipTests -Papache-release
>
> 5. Build Hadoop 2 convenience binaries:
>
> $ mvn -f pom.xml.hadoop2 clean install -DskipTests -Prelease && \
>     mvn -f pom.xml.hadoop2 install -DskipTests site assembly:single \
>     -Prelease && \
>     mvn -f pom.xml.hadoop2 deploy -DskipTests -Papache-release
>
> 6. Sign everything and upload it for consideration.
>
> If doing this yourself, you don't need the "release" profile enabled and
> can skip the "deploy" build phase.
>
> We used to have this documented in our online manual, but since HBase
> 1.0 and up won't support Hadoop 1, this detail has been dropped.
>
>
> On Thu, Feb 12, 2015 at 12:55 PM, Ian Friedman <[email protected]> wrote:
>
>> Tacking a build question onto this one - I recently did as advised for
>> our hbase98/hadoop2.6.0 integration and built HBase from source.
>> I'm wondering, though: previously I had used the Maven repo artifacts,
>> which have version strings like 0.98.9-hadoop2, but the build produces
>> artifacts without that hadoop1/2 discriminator. Is that added
>> afterwards somehow? Do I need to care about it if I built my own
>> version using the hadoop2 profile?
>>
>> Thanks in advance,
>> Ian
>>
>>> On Feb 10, 2015, at 1:33 PM, Andrew Purtell <[email protected]> wrote:
>>>
>>> As Ted suggests, it's best to recompile the source distribution of
>>> HBase against more recent versions of Hadoop than what we've compiled
>>> our convenience binaries against. Hadoop often makes incompatible
>>> changes across point releases, which can also extend to dependencies
>>> (versions of guava or protobuf libraries used to build Hadoop might be
>>> newer than those provided in the HBase binary distribution, etc.), so
>>> building from source with -Dhadoop-two.version=<the_version_you_want>
>>> is best in my opinion.
>>>
>>> Even so, there might still be an issue lurking. Hadoop 2.6.0 is pretty
>>> new and I'm not sure we have done any significant testing with it. If
>>> you rebuild against Hadoop 2.6.0 and are still seeing this problem,
>>> please open a ticket on our issue tracker:
>>> https://issues.apache.org/jira/browse/HBASE
>>>
>>>
>>> On Mon, Feb 9, 2015 at 8:48 PM, Ted Yu <[email protected]> wrote:
>>>
>>>> What command did you use to build against hadoop 2.6.0?
>>>> hadoop-2.0 is the default profile.
>>>>
>>>> Adding '-Dhadoop-two.version=2.6.0' to the command line should do.
>>>>
>>>> Cheers
>>>>
>>>> On Mon, Feb 9, 2015 at 7:45 PM, Tong Pham <[email protected]> wrote:
>>>>
>>>>> Hi everyone,
>>>>>
>>>>> I am currently trying to set up an HBase 0.98.9 cluster to run
>>>>> against a Hadoop 2.6.0 HDFS filesystem.
>>>>> The problem is, according to
>>>>> http://hbase.apache.org/book.html#basic.prerequisites :
>>>>>
>>>>> In distributed mode, it is critical that the version of Hadoop that
>>>>> is out on your cluster match what is under HBase. Replace the hadoop
>>>>> jar found in the HBase lib directory with the hadoop jar you are
>>>>> running on your cluster to avoid version mismatch issues. Make sure
>>>>> you replace the jar in HBase everywhere on your cluster. Hadoop
>>>>> version mismatch issues have various manifestations, but often it
>>>>> all just looks like it's hung up.
>>>>>
>>>>> When I tried replacing the lib/hadoop.*.jar files that came with
>>>>> HBase, which are labeled as version 2.2.0, with their 2.6.0
>>>>> counterparts, I encountered this error starting up a brand new
>>>>> HBase master:
>>>>>
>>>>> @4000000054d9777010e182a4 2015-02-10T03:13:42.283+0000 DEBUG
>>>>> master.ActiveMasterManager A master is now available
>>>>> @4000000054d97770113b9d0c 2015-02-10T03:13:42.288+0000 INFO
>>>>> Configuration.deprecation fs.default.name is deprecated. Instead,
>>>>> use fs.defaultFS
>>>>> @4000000054d97770140e50c4 2015-02-10T03:13:42.334+0000 FATAL
>>>>> master.HMaster Unhandled exception. Starting shutdown.
>>>>> @4000000054d97770140e54ac java.lang.IllegalStateException
>>>>> @4000000054d97770140e5894 at com.google.common.base.Preconditions.checkState(Preconditions.java:133)
>>>>> @4000000054d97770140e5894 at org.apache.hadoop.ipc.Client.setCallIdAndRetryCount(Client.java:117)
>>>>> @4000000054d97770140e5c7c at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:99)
>>>>> @4000000054d97770140e5c7c at com.sun.proxy.$Proxy19.setSafeMode(Unknown Source)
>>>>> @4000000054d97770140eaa9c at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2430)
>>>>> @4000000054d97770140eae84 at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1036)
>>>>> @4000000054d97770140eae84 at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1020)
>>>>> @4000000054d97770140eb26c at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:447)
>>>>> @4000000054d97770140ebe24 at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:895)
>>>>> @4000000054d97770140ec20c at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:442)
>>>>> @4000000054d97770140ec20c at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
>>>>> @4000000054d97770140ed97c at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:129)
>>>>> @4000000054d97770140edd64 at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:881)
>>>>> @4000000054d97770140edd64 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:684)
>>>>> @4000000054d97770140edd64 at java.lang.Thread.run(Thread.java:724)
>>>>> @4000000054d97770140ee14c 2015-02-10T03:13:42.336+0000 INFO
>>>>> master.HMaster Aborting
>>>>> @4000000054d977701410d934 2015-02-10T03:13:42.336+0000 DEBUG
>>>>> master.HMaster Stopping service threads
>>>>>
>>>>> Putting the 2.2.0 Hadoop jars back into HBase’s lib/ directory to
>>>>> replace the 2.6.0 ones allowed HBase master to run again.
>>>>>
>>>>> The documentation does not mention that 0.98.9 supports 2.6.0 (it
>>>>> lists ’S’ for up to 2.5.x). Does this mean that we cannot run HBase
>>>>> against 2.6.0 HDFS at the moment? Has anyone managed to do this?
>>>>> I’ve also tried building HBase from source against 2.6.0, and then
>>>>> replacing the lib/hbase*.jar files accordingly whilst keeping those
>>>>> hadoop.*2.6.0.jar files, but I still have the same problem.
>>>>>
>>>>> Thank you!
>>>>>
>>>>> Regards,
>>>>> Tong
>>>>>
>>>>
>>>
>>>
>>> --
>>> Best regards,
>>>
>>>   - Andy
>>>
>>> Problems worthy of attack prove their worth by hitting back. - Piet
>>> Hein (via Tom White)
>>
>
>
> --
> Best regards,
>
>   - Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet
> Hein (via Tom White)
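
A postscript for archive readers: the jar swap that the Reference Guide paragraph quoted above prescribes can be scripted. The sketch below is illustrative only — every path is made up, and it deliberately operates on a scratch directory created with mktemp (with stand-in empty files in place of real jars) so it can be run safely anywhere. Note that, as Tong's report shows, swapping jars alone was not sufficient in this case; rebuilding from source against the target Hadoop version is the route the thread recommends.

```shell
#!/bin/sh
# Sketch of the jar swap described in the HBase book: remove the
# hadoop-*.jar files shipped in HBase's lib/ and copy in the jars from
# the Hadoop cluster actually being run. All paths are illustrative.
set -e
demo="$(mktemp -d)"
mkdir -p "$demo/hbase/lib" "$demo/hadoop/share"

# Stand-ins for the jars HBase 0.98 ships (built against Hadoop 2.2.0)...
touch "$demo/hbase/lib/hadoop-common-2.2.0.jar" \
      "$demo/hbase/lib/hadoop-hdfs-2.2.0.jar" \
      "$demo/hbase/lib/hbase-server-0.98.9-hadoop2.jar"
# ...and for the jars of the cluster actually deployed (2.6.0).
touch "$demo/hadoop/share/hadoop-common-2.6.0.jar" \
      "$demo/hadoop/share/hadoop-hdfs-2.6.0.jar"

# The swap itself: drop the bundled hadoop jars, copy in the cluster's.
# On a real cluster this must be repeated on every node, as the book says.
rm -f "$demo/hbase/lib"/hadoop-*.jar
cp "$demo/hadoop/share"/hadoop-*.jar "$demo/hbase/lib/"

# lib/ now holds only the 2.6.0 hadoop jars plus the hbase jars.
ls "$demo/hbase/lib"
```

The HBase jars themselves are left untouched; only the hadoop-prefixed jars are exchanged, which is exactly why dependency mismatches (guava, protobuf) can still surface afterwards.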
