[
https://issues.apache.org/jira/browse/HBASE-25140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17205614#comment-17205614
]
Miklos Gergely commented on HBASE-25140:
----------------------------------------
Yes, I meant Hadoop 2.4.1, but as it is not supported, it is not relevant.
In Flink our nightly tests run with Hadoop 2.4.1 and 3.1.3, and they both fail;
for 3.1.3 check this:
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7093&view=logs&j=ba53eb01-1462-56a3-8e98-0dd97fbcaab5&t=bfbc6239-57a0-5db0-63f3-41551b4f7d51]
This is the Maven output; as you can see, it displays only that "Master not
initialized after 200000ms". You can find the actual logs of the mini cluster
failure in the attached log file; here is the error:
{code:java}
java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.hdfs.protocol.HdfsFileStatus, but class was expected
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createOutput(FanOutOneBlockAsyncDFSOutputHelper.java:496)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$400(FanOutOneBlockAsyncDFSOutputHelper.java:116)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$8.doCall(FanOutOneBlockAsyncDFSOutputHelper.java:576)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$8.doCall(FanOutOneBlockAsyncDFSOutputHelper.java:571)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createOutput(FanOutOneBlockAsyncDFSOutputHelper.java:584)
	at org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:51)
	at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:169)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:166)
	at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:113)
	at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:643)
	at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:126)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:767)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:501)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:442)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:156)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:61)
	at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:284)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2181)
	at org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:133)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
{code}
One way to reproduce this:
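For example (a hypothetical sketch, not taken verbatim from the report): in a project whose tests call HBaseTestingUtility.startMiniCluster() against HBase 2.2.3, override the Hadoop version the build pulls in to one outside the 2.8.0 - 3.0.3 range; the property name {{hadoop.version}} is an assumption about the project's build setup:
{code:xml}
<!-- Hypothetical pom.xml fragment: force an unsupported Hadoop version
     onto the test classpath of the mini cluster tests. Whether the
     project exposes a hadoop.version property is an assumption. -->
<properties>
  <hadoop.version>3.1.3</hadoop.version>
</properties>
{code}
With this in place, a test that calls HBaseTestingUtility.startMiniCluster() should fail with only "Master not initialized after 200000ms" in the Maven output, while the IncompatibleClassChangeError above appears only in the mini cluster logs.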
> HBase test mini cluster is working only with Hadoop 2.8.0 - 3.0.3
> -----------------------------------------------------------------
>
> Key: HBASE-25140
> URL: https://issues.apache.org/jira/browse/HBASE-25140
> Project: HBase
> Issue Type: Bug
> Components: documentation, hadoop2, test
> Affects Versions: 2.2.3
> Reporter: Miklos Gergely
> Priority: Major
>
> Running HBaseTestingUtility.startMiniCluster() on HBase 2.2.3 works only with
> Hadoop versions in the 2.8.0 - 3.0.3 range; for example, with 2.4.1 the
> following exception occurs:
>
> {code:java}
> 21:49:04,124 [RS:0;71af2d647bb3:35715] ERROR org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper [] - Couldn't properly initialize access to HDFS internals. Please update your WAL Provider to not make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.beginFileLease(long, org.apache.hadoop.hdfs.DFSOutputStream)
> 	at java.lang.Class.getDeclaredMethod(Class.java:2130) ~[?:1.8.0_242]
> 	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createLeaseManager(FanOutOneBlockAsyncDFSOutputHelper.java:198) ~[hbase-server-2.2.3.jar:2.2.3]
> 	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:274) [hbase-server-2.2.3.jar:2.2.3]
> 	at java.lang.Class.forName0(Native Method) ~[?:1.8.0_242]
> 	at java.lang.Class.forName(Class.java:264) [?:1.8.0_242]
> 	at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:136) [hbase-server-2.2.3.jar:2.2.3]
> 	at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:136) [hbase-server-2.2.3.jar:2.2.3]
> 	at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) [hbase-server-2.2.3.jar:2.2.3]
> 	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:198) [hbase-server-2.2.3.jar:2.2.3]
> 	at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1871) [hbase-server-2.2.3.jar:2.2.3]
> 	at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1589) [hbase-server-2.2.3.jar:2.2.3]
> 	at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.handleReportForDutyResponse(MiniHBaseCluster.java:157) [hbase-server-2.2.3-tests.jar:2.2.3]
> 	at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1001) [hbase-server-2.2.3.jar:2.2.3]
> 	at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:184) [hbase-server-2.2.3-tests.jar:2.2.3]
> 	at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:130) [hbase-server-2.2.3-tests.jar:2.2.3]
> 	at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:168) [hbase-server-2.2.3-tests.jar:2.2.3]
> 	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_242]
> 	at javax.security.auth.Subject.doAs(Subject.java:360) [?:1.8.0_242]
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1536) [hadoop-common-2.4.1.jar:?]
> 	at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:341) [hbase-common-2.2.3.jar:2.2.3]
> 	at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:165) [hbase-server-2.2.3-tests.jar:2.2.3]
> 	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
> {code}
> Also, upon failure during a Maven run it would be great if the actual
> exception were displayed, not just "Master not initialized after 200000ms".
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)