[
https://issues.apache.org/jira/browse/HBASE-20244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532218#comment-16532218
]
Sean Busbey commented on HBASE-20244:
-------------------------------------
{quote}
And I'm a little confused by Hadoop's release lines. In HBase we only
consider 2.7.x as stable, but looking at the release page at
http://hadoop.apache.org/releases.html, they seem to have silently removed the
'not production ready' wording for 2.8.x, 2.9.x, and also 3.0.x. Does this mean
these release lines are all production ready? Do we need to add them back to
our precommit tests? And should we also update the support matrix?
{quote}
It's a bit of a mess, I'm afraid. AFAICT they only mention the 'not ready for
production' caveat in their release announcements. So I try to link to the
appropriate email in the [Hadoop support section of the ref
guide|http://hbase.apache.org/book.html#hadoop].
IIRC, Hadoop 2.8.2's release email said it was production ready, but 2.8.3 is
what [~stack] actually used when doing HBase 2.0.0 testing, hence the current
support matrix. (The 2.8.2 entry going to "NT" was part of HBASE-19983.)
HBASE-20502 started the process of updating our support matrix once 2.9.1 lost
the "not production ready" note (as well as calling out 2.9.0 and 3.0.z as X
due to classpath problems). IIRC it was delayed by the hope that some
Hadoop 3.0.z version could be moved from X to NT. I pinged the ticket last week
because that doesn't seem to be the case.
Adding 2.8 and/or 2.9 to precommit is tracked in HBASE-19984. Since that ticket
stalled out, the HBase 2.0.z release line added 2.8.3 as supported, so we
should probably add 2.8.
> NoSuchMethodException when retrieving private method decryptEncryptedDataEncryptionKey from DFSClient
> -----------------------------------------------------------------------------------------------------
>
> Key: HBASE-20244
> URL: https://issues.apache.org/jira/browse/HBASE-20244
> Project: HBase
> Issue Type: Sub-task
> Components: wal
> Affects Versions: 2.0.0, 2.0.1
> Reporter: Ted Yu
> Assignee: Ted Yu
> Priority: Blocker
> Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0
>
> Attachments: 20244.v1.txt, 20244.v1.txt, 20244.v1.txt, HBASE-20244-v1.patch, HBASE-20244.patch
>
>
> I was running unit tests against the Hadoop 3.0.1 RC and saw the following in
> the test output:
> {code}
> ERROR [RS-EventLoopGroup-3-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(267): Couldn't properly initialize access to HDFS internals. Please update your WAL Provider to not make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
> at java.lang.Class.getDeclaredMethod(Class.java:2130)
> at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
> at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
> at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
> at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
> at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
> at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
> at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
> at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
> at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
> at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
> at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
> at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
> at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:306)
> at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341)
> at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
> at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
> {code}
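> (Not part of the original report: a minimal sketch of the workaround the log
> message suggests, switching the WAL provider away from 'asyncfs', assuming the
> standard hbase.wal.provider setting and its 'filesystem' value for the
> FSHLog-based provider.)
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
>
> // Sketch only, a workaround rather than the fix: steer HBase away from the
> // 'asyncfs' WAL provider so the failing reflection path is never exercised.
> public class UseFileSystemWalProvider {
>   public static Configuration create() {
>     Configuration conf = HBaseConfiguration.create();
>     // 'filesystem' selects the FSHLog-based provider instead of AsyncFSWAL.
>     conf.set("hbase.wal.provider", "filesystem");
>     return conf;
>   }
> }
> {code}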
> The private method was moved by HDFS-12574 to HdfsKMSUtil with a different
> signature.
> To accommodate this move, it seems we need to call the following method on
> DFSClient:
> {code}
> public KeyProvider getKeyProvider() throws IOException {
> {code}
> Since the new decryptEncryptedDataEncryptionKey method has this signature:
> {code}
> static KeyVersion decryptEncryptedDataEncryptionKey(FileEncryptionInfo feInfo, KeyProvider keyProvider) throws IOException {
> {code}
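> (Not part of the original report: a minimal sketch of how such a reflection
> lookup could cover both layouts, trying the old private DFSClient method first
> and falling back to the relocated HdfsKMSUtil method, using only the
> signatures quoted above plus DFSClient.getKeyProvider(); the class name
> TransparentCryptoHelperSketch is made up for illustration.)
> {code}
> import java.lang.reflect.Method;
>
> import org.apache.hadoop.crypto.key.KeyProvider;
> import org.apache.hadoop.crypto.key.KeyProvider.KeyVersion;
> import org.apache.hadoop.fs.FileEncryptionInfo;
> import org.apache.hadoop.hdfs.DFSClient;
>
> final class TransparentCryptoHelperSketch {
>
>   /** Decrypts the EDEK in feInfo on both pre- and post-HDFS-12574 Hadoop. */
>   static KeyVersion decryptEncryptedDataEncryptionKey(DFSClient client,
>       FileEncryptionInfo feInfo) throws Exception {
>     try {
>       // Hadoop before HDFS-12574: private instance method on DFSClient.
>       Method m = DFSClient.class.getDeclaredMethod(
>           "decryptEncryptedDataEncryptionKey", FileEncryptionInfo.class);
>       m.setAccessible(true);
>       return (KeyVersion) m.invoke(client, feInfo);
>     } catch (NoSuchMethodException e) {
>       // Hadoop with HDFS-12574: the method is now static on HdfsKMSUtil and
>       // takes the KeyProvider, which DFSClient.getKeyProvider() supplies.
>       Class<?> hdfsKMSUtil = Class.forName("org.apache.hadoop.hdfs.HdfsKMSUtil");
>       Method m = hdfsKMSUtil.getDeclaredMethod(
>           "decryptEncryptedDataEncryptionKey",
>           FileEncryptionInfo.class, KeyProvider.class);
>       m.setAccessible(true);
>       return (KeyVersion) m.invoke(null, feInfo, client.getKeyProvider());
>     }
>   }
>
>   private TransparentCryptoHelperSketch() {
>   }
> }
> {code}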
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)