[ https://issues.apache.org/jira/browse/HBASE-22951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919804#comment-16919804 ]

Sean Busbey commented on HBASE-22951:
-------------------------------------

There's a blurb in the {{hbase}} script that gets hadoop's classpath from the 
hadoop executable if it's on the path. I'd say just do that. Eventually we 
could maybe move to the hadoop client-facing artifacts, but ATM we don't ship 
them in our omnibus tarball at all. Alternatively, we could finally isolate our 
own hadoop libs in the omnibus tarball so we can add *just* them and not 
everything in the internal classpath.
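
A rough sketch of that approach (illustrative only; it assumes the usual 
{{COMMAND}} variable in {{bin/hbase}} and a {{hadoop}} launcher on the PATH):

{code}
# In bin/hbase (sketch): when the command is hbck and the hadoop executable
# is on the PATH, append the classpath hadoop itself runs with; it carries
# the hdfs client classes that hbck needs.
if [ "$COMMAND" = "hbck" ] && command -v hadoop >/dev/null 2>&1; then
  CLASSPATH="${CLASSPATH}:$(hadoop classpath)"
fi
{code}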

 

(or just take the easy path and move hbck into the set of commands that 
always need the internal classpath)

> [HBCK2] hbase hbck throws IOE "No FileSystem for scheme: hdfs"
> --------------------------------------------------------------
>
>                 Key: HBASE-22951
>                 URL: https://issues.apache.org/jira/browse/HBASE-22951
>             Project: HBase
>          Issue Type: Bug
>            Reporter: stack
>            Priority: Major
>
> Input appreciated on this one.
> If I run the below, passing a config that points at an HDFS, I get the 
> exception that follows (if I run without the config, hbck just picks up the 
> wrong fs -- the local fs).
> {code}
> $ /vagrant/hbase/bin/hbase --config hbase-conf  hbck
> 2019-08-30 05:04:54,467 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Exception in thread "main" java.io.IOException: No FileSystem for scheme: hdfs
>         at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2799)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>         at org.apache.hadoop.hbase.util.CommonFSUtils.getRootDir(CommonFSUtils.java:361)
>         at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3605)
> {code}
> It's because the CLASSPATH is carefully curated so as to use the shaded 
> client only; there are intentionally no hdfs classes on the CLASSPATH.
> So, how to fix? This happens with both hbck1 and hbck2 (you have to do an 
> hdfs operation for hbck2 to trigger the same issue).
> We could be careful in hbck2 and document that if you do an fs operation, 
> you need to add the hdfs jars to the CLASSPATH so hbck2 can go against hdfs.
> If you add the {{--internal-classpath}} flag, then all classes are put on 
> the CLASSPATH for hbck(2), including the hdfs client jar (which got the hdfs 
> implementation after 2.7.2 was released), and stuff 'works'.
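> For example, something like this (same setup as the failing run above):
> {code}
> $ /vagrant/hbase/bin/hbase --config hbase-conf --internal-classpath hbck
> {code}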
> Could we edit the bin/hbase script so that hdfs classes are added to the 
> hbck CLASSPATH? Could we see if hdfs client-only jars would be enough?
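> A sketch of a manual workaround along those lines (unverified; it assumes 
> the curated hbck classpath still honors {{HBASE_CLASSPATH}}, and uses the 
> hdfs launcher's classpath subcommand to get the hdfs-side jars):
> {code}
> $ HBASE_CLASSPATH="$(hdfs classpath)" /vagrant/hbase/bin/hbase --config hbase-conf hbck
> {code}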
> Anyways, putting this up for now. Others may have opinions. Thanks.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
