Tacking a build question onto this one - I recently did as advised for our HBase 0.98 / Hadoop 2.6.0 integration and built HBase from source. I'm wondering, though: previously I had used the Maven repo artifacts, which have version strings like 0.98.9-hadoop2, but my build produces artifacts without that hadoop1/hadoop2 discriminator. Is that added afterwards somehow? Do I need to care about it if I built my own version using the hadoop2 profile?
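
For reference, the build I ran was along these lines (reproducing the flags from memory, so treat the exact invocation as illustrative rather than authoritative):

    # build the 0.98.9 source distribution against Hadoop 2.6.0
    # (hadoop-2.0 is already the default profile in 0.98, per Ted's note below)
    mvn clean install -DskipTests -Dhadoop-two.version=2.6.0

The jars that come out of this are versioned plainly as 0.98.9, with no -hadoop2 suffix, which is what prompted the question.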
Thanks in advance,
Ian

> On Feb 10, 2015, at 1:33 PM, Andrew Purtell <[email protected]> wrote:
>
> As Ted suggests it's best to recompile the source distribution of HBase
> against more recent versions of Hadoop than what we've compiled our
> convenience binaries against. Hadoop often makes incompatible changes
> across point releases, which can also extend to dependencies (versions of
> guava or protobuf libraries used to build Hadoop might be newer than those
> provided in the HBase binary distribution, etc), so building from source
> with -Dhadoop-two.version=<the_version_you_want> is best in my opinion.
>
> Even so there might still be an issue lurking. Hadoop 2.6.0 is pretty new
> and I'm not sure we have done any significant testing with it. If you
> rebuild against Hadoop 2.6.0 and are still seeing this problem please open
> a ticket on our issue tracker: https://issues.apache.org/jira/browse/HBASE
>
>
> On Mon, Feb 9, 2015 at 8:48 PM, Ted Yu <[email protected]> wrote:
>
>> What command did you use to build against hadoop 2.6.0 ?
>> hadoop-2.0 is the default profile.
>>
>> Adding '-Dhadoop-two.version=2.6.0' to command line should do.
>>
>> Cheers
>>
>> On Mon, Feb 9, 2015 at 7:45 PM, Tong Pham <[email protected]> wrote:
>>
>>> Hi everyone,
>>>
>>> I am currently trying to set up an Hbase 0.98.9 cluster to run against a
>>> Hadoop 2.6.0 HDFS filesystem.
>>> The problem is, according to
>>> http://hbase.apache.org/book.html#basic.prerequisites :
>>>
>>> In distributed mode, it is critical that the version of Hadoop that is out
>>> on your cluster match what is under HBase. Replace the hadoop jar found in
>>> the HBase lib directory with the hadoop jar you are running on your cluster
>>> to avoid version mismatch issues. Make sure you replace the jar in HBase
>>> everywhere on your cluster. Hadoop version mismatch issues have various
>>> manifestations but often all looks like its hung up.
>>>
>>> When I tried replacing the lib/hadoop.*.jar files that came with Hbase,
>>> and which are labeled as version 2.2.0,
>>> with their 2.6.0 counterparts, I encountered this error starting up a
>>> brand new Hbase master:
>>>
>>> @4000000054d9777010e182a4 2015-02-10T03:13:42.283+0000 DEBUG master.ActiveMasterManager A master is now available
>>> @4000000054d97770113b9d0c 2015-02-10T03:13:42.288+0000 INFO Configuration.deprecation fs.default.name is deprecated. Instead, use fs.defaultFS
>>> @4000000054d97770140e50c4 2015-02-10T03:13:42.334+0000 FATAL master.HMaster Unhandled exception. Starting shutdown.
>>> @4000000054d97770140e54ac java.lang.IllegalStateException
>>> @4000000054d97770140e5894   at com.google.common.base.Preconditions.checkState(Preconditions.java:133)
>>> @4000000054d97770140e5894   at org.apache.hadoop.ipc.Client.setCallIdAndRetryCount(Client.java:117)
>>> @4000000054d97770140e5c7c   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:99)
>>> @4000000054d97770140e5c7c   at com.sun.proxy.$Proxy19.setSafeMode(Unknown Source)
>>> @4000000054d97770140eaa9c   at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2430)
>>> @4000000054d97770140eae84   at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1036)
>>> @4000000054d97770140eae84   at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1020)
>>> @4000000054d97770140eb26c   at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:447)
>>> @4000000054d97770140ebe24   at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:895)
>>> @4000000054d97770140ec20c   at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:442)
>>> @4000000054d97770140ec20c   at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
>>> @4000000054d97770140ed97c   at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:129)
>>> @4000000054d97770140edd64   at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:881)
>>> @4000000054d97770140edd64   at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:684)
>>> @4000000054d97770140edd64   at java.lang.Thread.run(Thread.java:724)
>>> @4000000054d97770140ee14c 2015-02-10T03:13:42.336+0000 INFO master.HMaster Aborting
>>> @4000000054d977701410d934 2015-02-10T03:13:42.336+0000 DEBUG master.HMaster Stopping service threads
>>>
>>> Putting the 2.2.0 Hadoop jars back into HBase’s lib/ directory to replace
>>> the 2.6.0 ones allowed HBase master to run again.
>>>
>>> The documentation does not mention that 0.98.9 supports 2.6.0 (it lists
>>> ‘S’ for up to 2.5.x). Does this mean that we cannot run HBase against 2.6.0
>>> HDFS at the moment?
>>> Has anyone managed to do this? I’ve also tried building HBase from source
>>> against 2.6.0, and then replacing the lib/hbase*.jar files accordingly
>>> whilst keeping those hadoop.*2.6.0.jar files, but I still have the same
>>> problem.
>>>
>>> Thank you!
>>>
>>> Regards,
>>> Tong
>>>
>>
>
> --
> Best regards,
>
>   - Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)
