FWIW, I build Hadoop and HBase binary tarballs. As a post-build step, I
extract lib/native/ from the Hadoop tarball, extract the HBase tarball, and
copy Hadoop's lib/native/ into HBase's lib/native/<platform>, e.g.

  cp -a hadoop-2.7.5/lib/native hbase-1.4.2/lib/native/Linux-amd64-64

(You can learn your platform name by untarring HBase and running
./bin/hbase org.apache.hadoop.util.PlatformName.)
I then tar up the modified HBase distribution with the native bits in place
and send that tarball on to its destination.
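
For concreteness, that post-build step boils down to something like the
following shell sketch. The version numbers, tarball names, and the output
tarball name are placeholders for whatever you actually build:

  #!/usr/bin/env bash
  set -e

  # Placeholder versions; substitute whatever you actually build.
  HADOOP_DIST=hadoop-2.7.5
  HBASE_DIST=hbase-1.4.2

  # Unpack both convenience binaries (tarball names assumed).
  tar xzf ${HADOOP_DIST}.tar.gz
  tar xzf ${HBASE_DIST}-bin.tar.gz

  # Ask for this platform's name, e.g. Linux-amd64-64.
  PLATFORM=$(cd ${HBASE_DIST} && ./bin/hbase org.apache.hadoop.util.PlatformName)

  # Copy Hadoop's native libraries into the platform-specific HBase location.
  mkdir -p ${HBASE_DIST}/lib/native
  cp -a ${HADOOP_DIST}/lib/native ${HBASE_DIST}/lib/native/${PLATFORM}

  # Re-tar the modified HBase distribution, native bits included.
  tar czf ${HBASE_DIST}-bin-with-native.tar.gz ${HBASE_DIST}
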
This is little different from our current advice to, prior to a production
deploy, replace the Hadoop jars packaged in our convenience binary
distribution with the jars from the actual version of Hadoop you are running
in production.
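
In the same spirit, that jar swap is roughly the following, assuming
HBASE_HOME and HADOOP_HOME point at the unpacked distributions and a
Hadoop 2.x directory layout; exact jar names and locations vary by version,
so treat this as a sketch rather than a recipe:

  # Remove the Hadoop jars bundled with the HBase convenience binary...
  rm ${HBASE_HOME}/lib/hadoop-*.jar

  # ...and copy in the jars from the Hadoop version actually deployed.
  for module in common hdfs mapreduce yarn; do
    cp ${HADOOP_HOME}/share/hadoop/${module}/hadoop-*.jar ${HBASE_HOME}/lib/
  done
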
On Tue, Mar 27, 2018 at 2:37 PM, rahul gidwani <[email protected]> wrote:
> It seems like the recommended approach is to do this:
>
> Build your HBase against a particular version of Hadoop with Maven,
> using -Dhadoop-two.version=<some_hadoop_version>.
>
> Then set HBASE_LIBRARY_PATH=$HADOOP_HOME/lib/native/Linux-amd64-64
>
> You could run into problems if you upgrade your Hadoop server binaries
> without updating the Hadoop dependencies on the HBase client side: if
> there are any JNI changes in the Hadoop code, the Hadoop jars and the
> JNI code become incompatible.
>
> Or do you compile against whatever version, but ensure that the Hadoop
> client dependencies HBase picks up at runtime come from the $HADOOP_HOME
> directory?
>
> Is there a recommended approach? Although there is no formal API
> guarantee between the jars and the JNI code, they are tightly coupled.
>
> Thanks
>
> rahul
>
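
For reference, the build-time pinning described in the quoted question
amounts to something like the following; the Hadoop version is a placeholder
and this builds only the jars, so treat it as a sketch:

  # Build HBase against a specific Hadoop 2.x release (placeholder version).
  mvn clean install -DskipTests -Dhadoop-two.version=2.7.5

  # At runtime, point HBase at the Hadoop native libraries (path as in the
  # question; adjust to wherever your Hadoop build actually puts them).
  export HBASE_LIBRARY_PATH=$HADOOP_HOME/lib/native/Linux-amd64-64
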
--
Best regards,
Andrew
Words like orphans lost among the crosstalk, meaning torn from truth's
decrepit hands
- A23, Crosstalk