The problem in HBASE-22475 broke the client-bin tarball, too. Will roll a
new RC... :-(

Duo Zhang (张铎) <[email protected]> wrote on Mon, May 27, 2019 at 8:30 AM:

> Please hold on; see HBASE-22475: our nightly is failing when running on top
> of Hadoop 2. But it seems we have tested the shell here and there is no
> problem... Need to find out why...
> Artem Ervits <[email protected]> wrote on Fri, May 24, 2019 at 5:04 AM:
>
> > vote: 2.2.0 RC4
> >   +1 (non-binding)
> >
> >   // using hbase-vote script
> >   * Signature: ok
> >   * Checksum : ok
> >   * Rat check (1.8.0_212): ok
> >      - mvn clean apache-rat:check
> >   * Built from source (1.8.0_212): ok
> >      - mvn clean install -DskipTests
> >   * Unit tests pass (1.8.0_212): NOK (it's probably due to my environment;
> > I've seen a lot of test failures, need to validate individually).
> >      - mvn test -P runAllTests
> >
> > // if I get more time I will try to isolate tests that fail consistently
> > vs. timeouts.
> >
> >   hbase shell: ok
> >   logs: ok
> >   UI: ok
> >   LTT 1M write/read 20%: ok
> >   Spark Scala 2.3.3: ok // Not hbase-connectors
> >
> > still see https://issues.apache.org/jira/browse/HBASE-21458
> >
> >   installed on pseudodistributed hadoop 2.9.2
> > had to modify hbase-site with the following
> > https://issues.apache.org/jira/browse/HBASE-22465
> >
> > This is a dev environment so I'm not too concerned, but after the master
> > came up and I was able to do most of the vote steps, at some point the RS
> > went down with the stack trace below. Again, I knew the implications; I'm
> > running on Hadoop 2.9, which according to the docs was not tested.
> >
> > 2019-05-23 18:41:49,141 WARN  [Close-WAL-Writer-15] wal.AsyncFSWAL: close old writer failed
> > java.io.FileNotFoundException: File does not exist: /apps/hbase/WALs/hadoop.example.com,16020,1558627341708/hadoop.example.com%2C16020%2C1558627341708.1558636709617
> >         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:72)
> >         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:62)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLease(FSNamesystem.java:2358)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.recoverLease(NameNodeRpcServer.java:790)
> >         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.recoverLease(ClientNamenodeProtocolServerSideTranslatorPB.java:693)
> >         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
> >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> >         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
> >         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
> >         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
> >
> >         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> >         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
> >         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
> >         at org.apache.hadoop.hdfs.DFSClient.recoverLease(DFSClient.java:867)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301)
> >         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem.recoverLease(DistributedFileSystem.java:301)
> >         at org.apache.hadoop.hbase.util.FSHDFSUtils.recoverLease(FSHDFSUtils.java:283)
> >         at org.apache.hadoop.hbase.util.FSHDFSUtils.recoverDFSFileLease(FSHDFSUtils.java:216)
> >         at org.apache.hadoop.hbase.util.FSHDFSUtils.recoverFileLease(FSHDFSUtils.java:163)
> >         at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.recoverAndClose(FanOutOneBlockAsyncDFSOutput.java:555)
> >         at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.close(AsyncProtobufLogWriter.java:157)
> >         at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.lambda$closeWriter$6(AsyncFSWAL.java:643)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >         at java.lang.Thread.run(Thread.java:748)
> > Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /apps/hbase/WALs/hadoop.example.com,16020,1558627341708/hadoop.example.com%2C16020%2C1558627341708.1558636709617
> >         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:72)
> >         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:62)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLease(FSNamesystem.java:2358)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.recoverLease(NameNodeRpcServer.java:790)
> >         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.recoverLease(ClientNamenodeProtocolServerSideTranslatorPB.java:693)
> >         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
> >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> >         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
> >         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
> >         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
> >
> >         at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489)
> >         at org.apache.hadoop.ipc.Client.call(Client.java:1435)
> >         at org.apache.hadoop.ipc.Client.call(Client.java:1345)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> >         at com.sun.proxy.$Proxy18.recoverLease(Unknown Source)
> >         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.recoverLease(ClientNamenodeProtocolTranslatorPB.java:626)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >         at java.lang.reflect.Method.invoke(Method.java:498)
> >         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
> >         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
> >         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
> >         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> >         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
> >
> >
> >
> > On Tue, May 21, 2019 at 6:51 PM Andrew Purtell <[email protected]>
> > wrote:
> >
> > > Actually let me change my vote to +1. None of these results are serious
> > > enough to sink the RC, although if someone besides myself can reproduce the
> > > TestClusterScopeQuotaThrottle result, we should have a JIRA, and the release
> > > announcement could point to the JIRA as an indication that the particular
> > > feature might need a bugfix.
> > >
> > > On Tue, May 21, 2019 at 3:43 PM Andrew Purtell <[email protected]>
> > > wrote:
> > >
> > > > +0 at this time
> > > >
> > > > Signatures and sums: ok
> > > > RAT check: ok
> > > > Build from source: ok
> > > > Unit tests: some consistent failures
> > > >
> > > > This could be a problem with the test:
> > > >
> > > > [ERROR] org.apache.hadoop.hbase.client.TestAsyncTableRSCrashPublish.test(org.apache.hadoop.hbase.client.TestAsyncTableRSCrashPublish)
> > > > [ERROR]   Run 1: TestAsyncTableRSCrashPublish.test:77 Waiting timed out after [60,000] msec
> > > > [ERROR]   Run 2: TestAsyncTableRSCrashPublish.test:77 Waiting timed out after [60,000] msec
> > > > [ERROR]   Run 3: TestAsyncTableRSCrashPublish.test:77 Waiting timed out after [60,000] msec
> > > > [ERROR]   Run 4: TestAsyncTableRSCrashPublish.test:77 Waiting timed out after [60,000] msec
> > > >
> > > > This is an interesting assertion failure:
> > > >
> > > > [ERROR]   TestClusterRestartFailover.test:89 serverNode should not be null
> > > > when restart whole cluster
> > > >
> > > > These failures suggest cluster scope quotas might have an implementation
> > > > issue; this one is of most concern to me:
> > > >
> > > > [ERROR] org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testNamespaceClusterScopeQuota(org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle)
> > > > [ERROR]   Run 1: TestClusterScopeQuotaThrottle.testNamespaceClusterScopeQuota:128 expected:<6> but was:<3>
> > > > [ERROR]   Run 2: TestClusterScopeQuotaThrottle.testNamespaceClusterScopeQuota:128 expected:<6> but was:<3>
> > > > [ERROR]   Run 3: TestClusterScopeQuotaThrottle.testNamespaceClusterScopeQuota:128 expected:<6> but was:<3>
> > > > [ERROR] org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testUserClusterScopeQuota(org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle)
> > > > [ERROR]   Run 1: TestClusterScopeQuotaThrottle.testUserClusterScopeQuota:177 expected:<6> but was:<0>
> > > > [ERROR]   Run 2: TestClusterScopeQuotaThrottle.testUserClusterScopeQuota:177 expected:<6> but was:<10>
> > > > [ERROR]   Run 3: TestClusterScopeQuotaThrottle.testUserClusterScopeQuota:177 expected:<6> but was:<10>
> > > >
> > > >
> > > > On Fri, May 17, 2019 at 11:12 PM Guanghao Zhang <[email protected]>
> > > > wrote:
> > > >
> > > >> Please vote on this release candidate (RC) for Apache HBase 2.2.0.
> > > >> This is the first release of the branch-2.2 line.
> > > >>
> > > >> The VOTE will remain open for at least 72 hours.
> > > >> [] +1
> > > >> [] +0/-0 Because ...
> > > >> [] -1 Do not release this package because ...
> > > >>
> > > >> The tag to be voted on is 2.2.0RC4. The release files, including
> > > >> signatures, digests, etc. can be found at:
> > > >> https://dist.apache.org/repos/dist/dev/hbase/2.2.0RC4/
> > > >>
> > > >> Maven artifacts are available in a staging repository at:
> > > >> https://repository.apache.org/content/repositories/orgapachehbase-1312
> > > >>
> > > >> Signatures used for HBase RCs can be found in this file:
> > > >> https://dist.apache.org/repos/dist/release/hbase/KEYS
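For anyone following along, the signature and checksum steps of the vote checklist amount to roughly the following; the artifact filename is an assumption, the gpg steps are shown as comments since they need the downloaded files, and the sha512 part is demonstrated on a stand-in file so it runs anywhere:

```shell
# Signature check against the KEYS file (network-dependent, shown as comments):
#   gpg --import KEYS
#   gpg --verify hbase-2.2.0-bin.tar.gz.asc hbase-2.2.0-bin.tar.gz
# The checksum step is plain sha512sum -c; demonstrated here on a stand-in file:
printf 'stand-in artifact\n' > hbase-demo.tar.gz
sha512sum hbase-demo.tar.gz > hbase-demo.tar.gz.sha512
sha512sum -c hbase-demo.tar.gz.sha512   # prints "hbase-demo.tar.gz: OK"
```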
> > > >>
> > > >> The list of bug fixes going into 2.2.0 can be found in included
> > > >> CHANGES.md and RELEASENOTES.md available here:
> > > >> https://dist.apache.org/repos/dist/dev/hbase/2.2.0RC4/CHANGES.md
> > > >> https://dist.apache.org/repos/dist/dev/hbase/2.2.0RC4/RELEASENOTES.md
> > > >>
> > > >> A detailed source and binary compatibility report for this release is
> > > >> available at
> > > >> https://dist.apache.org/repos/dist/dev/hbase/2.2.0RC4/api_compare_2.2.0RC4_to_2.1.4.html
> > > >>
> > > >> To learn more about Apache HBase, please see http://hbase.apache.org/
> > > >>
> > > >> Thanks,
> > > >> Guanghao Zhang
> > > >>
> > > >
> > > >
> > > > --
> > > > Best regards,
> > > > Andrew
> > > >
> > > > Words like orphans lost among the crosstalk, meaning torn from truth's
> > > > decrepit hands
> > > >    - A23, Crosstalk
> > > >
> > >
> > >
> > >
> >
>
