[ https://issues.apache.org/jira/browse/HBASE-25140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17205589#comment-17205589 ]

Sean Busbey commented on HBASE-25140:
-------------------------------------

{quote}
 for example with 2.4.1 the following exception occurs:
{quote}

Do you mean with Hadoop 2.4.1?

{quote}
HBaseTestingUtility.startMiniCluster() on HBase 2.2.3 works only with hadoop 
version range 2.8.0 - 3.0.3
{quote}

Please see our [reference guide for the expected Hadoop 
compatibility|http://hbase.apache.org/book.html#hadoop]. HBase 2.2.z releases 
should be used with Hadoop 2.8, 2.9, 3.1, and 3.2 (with specific maintenance 
versions mattering for some of those releases). Is the minicluster failing with 
Hadoop 3.1 or 3.2? How are you setting up dependencies?
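
For reference, this is roughly the shape of test I would expect to work against 
the supported Hadoop lines. A minimal sketch only, assuming JUnit 4 and the 
hbase-testing-util artifact on the test classpath; the class and test names are 
made up for illustration:

{code:java}
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.Test;

public class MiniClusterSmokeTest {

  // Shared utility that spins up an in-process HDFS + ZooKeeper + HBase mini cluster.
  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Starts a single master and a single region server by default.
    UTIL.startMiniCluster();
  }

  @AfterClass
  public static void tearDown() throws Exception {
    UTIL.shutdownMiniCluster();
  }

  @Test
  public void masterComesUp() throws Exception {
    // If the WAL provider failed to initialize, startMiniCluster() above would
    // have timed out waiting for the master instead of getting this far.
    Assert.assertTrue(UTIL.getHBaseCluster().getMaster().isInitialized());
  }
}
{code}

If your setup differs from that, knowing which hbase and Hadoop artifacts end up 
on the test classpath would help narrow this down.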

{quote}
Also upon failure during maven run it would be great if the actual exception 
would be displayed, not just that "Master not initialized after 200000ms".
{quote}

Given the problem with the fan-out WAL writer you posted, I am surprised the 
entire minicluster did not fail with a clear pointer to that message. Could you 
attach the logs?
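
In the meantime, if you just need the minicluster to come up on an otherwise 
unsupported Hadoop, the error message itself points at the workaround: move the 
WAL provider off asyncfs before starting the cluster. A rough sketch only, 
untested against Hadoop 2.4.1 and not something we support for that version; it 
just illustrates the hbase.wal.provider setting:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseTestingUtility;

public class FilesystemWalProviderExample {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    Configuration conf = util.getConfiguration();
    // Use the classic FSHLog-based provider instead of the asyncfs fan-out
    // writer, which reaches into HDFS internals that change between Hadoop
    // releases (see HBASE-16110).
    conf.set("hbase.wal.provider", "filesystem");
    util.startMiniCluster();
    try {
      // ... run whatever needs the cluster ...
    } finally {
      util.shutdownMiniCluster();
    }
  }
}
{code}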


> HBase test mini cluster is working only with Hadoop 2.8.0 - 3.0.3
> -----------------------------------------------------------------
>
>                 Key: HBASE-25140
>                 URL: https://issues.apache.org/jira/browse/HBASE-25140
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.2.3
>            Reporter: Miklos Gergely
>            Priority: Major
>
> Running HBaseTestingUtility.startMiniCluster() on HBase 2.2.3 works only with 
> Hadoop versions in the range 2.8.0 - 3.0.3; for example, with 2.4.1 the 
> following exception occurs:
>  
> {code:java}
> 21:49:04,124 [RS:0;71af2d647bb3:35715] ERROR org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper [] - Couldn't properly initialize access to HDFS internals. Please update your WAL Provider to not make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> 21:49:04,124 [RS:0;71af2d647bb3:35715] ERROR org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper [] - Couldn't properly initialize access to HDFS internals. Please update your WAL Provider to not make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.beginFileLease(long, org.apache.hadoop.hdfs.DFSOutputStream)
>   at java.lang.Class.getDeclaredMethod(Class.java:2130) ~[?:1.8.0_242]
>   at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createLeaseManager(FanOutOneBlockAsyncDFSOutputHelper.java:198) ~[hbase-server-2.2.3.jar:2.2.3]
>   at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:274) [hbase-server-2.2.3.jar:2.2.3]
>   at java.lang.Class.forName0(Native Method) ~[?:1.8.0_242]
>   at java.lang.Class.forName(Class.java:264) [?:1.8.0_242]
>   at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:136) [hbase-server-2.2.3.jar:2.2.3]
>   at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:136) [hbase-server-2.2.3.jar:2.2.3]
>   at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) [hbase-server-2.2.3.jar:2.2.3]
>   at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:198) [hbase-server-2.2.3.jar:2.2.3]
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1871) [hbase-server-2.2.3.jar:2.2.3]
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1589) [hbase-server-2.2.3.jar:2.2.3]
>   at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.handleReportForDutyResponse(MiniHBaseCluster.java:157) [hbase-server-2.2.3-tests.jar:2.2.3]
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1001) [hbase-server-2.2.3.jar:2.2.3]
>   at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:184) [hbase-server-2.2.3-tests.jar:2.2.3]
>   at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:130) [hbase-server-2.2.3-tests.jar:2.2.3]
>   at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:168) [hbase-server-2.2.3-tests.jar:2.2.3]
>   at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_242]
>   at javax.security.auth.Subject.doAs(Subject.java:360) [?:1.8.0_242]
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1536) [hadoop-common-2.4.1.jar:?]
>   at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:341) [hbase-common-2.2.3.jar:2.2.3]
>   at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:165) [hbase-server-2.2.3-tests.jar:2.2.3]
>   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
> {code}
> Also, upon failure during a Maven run it would be great if the actual exception 
> were displayed, not just "Master not initialized after 200000ms".
>  


