BTW, the integration test finished with 2 errors. Logs are below. Not really sure whether they are relevant errors or not. It was running on the 8-node cluster. I will re-run it overnight on the 4-node cluster and see.
14/04/24 14:23:36 INFO mapred.JobClient: Job complete: job_local_0003
14/04/24 14:23:36 INFO mapred.JobClient: Counters: 33
14/04/24 14:23:36 INFO mapred.JobClient:   org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Verify$Counts
14/04/24 14:23:36 INFO mapred.JobClient:     REFERENCED=2000000
14/04/24 14:23:36 INFO mapred.JobClient:   HBase Counters
14/04/24 14:23:36 INFO mapred.JobClient:     REMOTE_RPC_CALLS=205
14/04/24 14:23:36 INFO mapred.JobClient:     RPC_CALLS=205
14/04/24 14:23:36 INFO mapred.JobClient:     RPC_RETRIES=0
14/04/24 14:23:36 INFO mapred.JobClient:     NOT_SERVING_REGION_EXCEPTION=0
14/04/24 14:23:36 INFO mapred.JobClient:     NUM_SCANNER_RESTARTS=0
14/04/24 14:23:36 INFO mapred.JobClient:     MILLIS_BETWEEN_NEXTS=19675
14/04/24 14:23:36 INFO mapred.JobClient:     BYTES_IN_RESULTS=128000000
14/04/24 14:23:36 INFO mapred.JobClient:     BYTES_IN_REMOTE_RESULTS=128000000
14/04/24 14:23:36 INFO mapred.JobClient:     REGIONS_SCANNED=2
14/04/24 14:23:36 INFO mapred.JobClient:     REMOTE_RPC_RETRIES=0
14/04/24 14:23:36 INFO mapred.JobClient:   File Output Format Counters
14/04/24 14:23:36 INFO mapred.JobClient:     Bytes Written=0
14/04/24 14:23:36 INFO mapred.JobClient:   FileSystemCounters
14/04/24 14:23:36 INFO mapred.JobClient:     FILE_BYTES_READ=589721439
14/04/24 14:23:36 INFO mapred.JobClient:     HDFS_BYTES_READ=169666905
14/04/24 14:23:36 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=798086221
14/04/24 14:23:36 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=169666905
14/04/24 14:23:36 INFO mapred.JobClient:   File Input Format Counters
14/04/24 14:23:36 INFO mapred.JobClient:     Bytes Read=0
14/04/24 14:23:36 INFO mapred.JobClient:   Map-Reduce Framework
14/04/24 14:23:36 INFO mapred.JobClient:     Reduce input groups=2000000
14/04/24 14:23:36 INFO mapred.JobClient:     Map output materialized bytes=138000012
14/04/24 14:23:36 INFO mapred.JobClient:     Combine output records=0
14/04/24 14:23:36 INFO mapred.JobClient:     Map input records=2000000
14/04/24 14:23:36 INFO mapred.JobClient:     Reduce shuffle bytes=0
14/04/24 14:23:36 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
14/04/24 14:23:36 INFO mapred.JobClient:     Reduce output records=0
14/04/24 14:23:36 INFO mapred.JobClient:     Spilled Records=12000000
14/04/24 14:23:36 INFO mapred.JobClient:     Map output bytes=130000000
14/04/24 14:23:36 INFO mapred.JobClient:     CPU time spent (ms)=0
14/04/24 14:23:36 INFO mapred.JobClient:     Total committed heap usage (bytes)=14297186304
14/04/24 14:23:36 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
14/04/24 14:23:36 INFO mapred.JobClient:     Combine input records=0
14/04/24 14:23:36 INFO mapred.JobClient:     Map output records=4000000
14/04/24 14:23:36 INFO mapred.JobClient:     SPLIT_RAW_BYTES=230
14/04/24 14:23:36 INFO mapred.JobClient:     Reduce input records=4000000
14/04/24 14:23:36 INFO test.IntegrationTestBigLinkedList$Loop: Verify finished with succees. Total nodes=2000000
14/04/24 14:23:36 INFO hbase.HBaseCluster: Added new HBaseAdmin

Time: 11 624,459
There were 2 failures:
1) testDataIngest(org.apache.hadoop.hbase.IntegrationTestDataIngestWithChaosMonkey)
junit.framework.AssertionFailedError: Verification failed with error code 1
    at junit.framework.Assert.fail(Assert.java:50)
    at org.apache.hadoop.hbase.IngestIntegrationTestBase.runIngestTest(IngestIntegrationTestBase.java:111)
    at org.apache.hadoop.hbase.IntegrationTestDataIngestWithChaosMonkey.testDataIngest(IntegrationTestDataIngestWithChaosMonkey.java:61)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.junit.runners.Suite.runChild(Suite.java:128)
    at org.junit.runners.Suite.runChild(Suite.java:24)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:136)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:117)
    at org.apache.hadoop.hbase.IntegrationTestsDriver.doWork(IntegrationTestsDriver.java:102)
    at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:108)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hadoop.hbase.IntegrationTestsDriver.main(IntegrationTestsDriver.java:47)

2) testLoadAndVerify(org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify)
org.apache.hadoop.hbase.TableExistsException: org.apache.hadoop.hbase.TableExistsException: IntegrationTestLoadAndVerify
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:568)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:452)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:428)
    at org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify.testLoadAndVerify(IntegrationTestLoadAndVerify.java:357)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.junit.runners.Suite.runChild(Suite.java:128)
    at org.junit.runners.Suite.runChild(Suite.java:24)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:136)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:117)
    at org.apache.hadoop.hbase.IntegrationTestsDriver.doWork(IntegrationTestsDriver.java:102)
    at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:108)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hadoop.hbase.IntegrationTestsDriver.main(IntegrationTestsDriver.java:47)
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hbase.TableExistsException: IntegrationTestLoadAndVerify
    at org.apache.hadoop.hbase.master.handler.CreateTableHandler.<init>(CreateTableHandler.java:92)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1334)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:323)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1434)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1012)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:87)
    at com.sun.proxy.$Proxy18.createTable(Unknown Source)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:566)
    ... 38 more
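
The second failure looks like a leftover rather than a real regression: the
IntegrationTestLoadAndVerify table from an earlier run was still there, so
createTable() bailed out with TableExistsException. Before the overnight run I
will simply drop it first -- roughly something like this, an untested sketch
against the 0.94 client API (disable/drop from the shell does the same thing):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class DropLeftoverTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // Table name taken from the TableExistsException above.
    String table = "IntegrationTestLoadAndVerify";
    if (admin.tableExists(table)) {
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);   // must be disabled before it can be deleted
      }
      admin.deleteTable(table);
    }
    admin.close();
  }
}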

2014-04-24 22:55 GMT-04:00 lars hofhansl <la...@apache.org>:

> Wow. Thanks JM!
>
> -- Lars
>
>
> ________________________________
>  From: Jean-Marc Spaggiari <jean-m...@spaggiari.org>
> To: dev <dev@hbase.apache.org>
> Sent: Thursday, April 24, 2014 10:09 AM
> Subject: Re: [VOTE] The 1st hbase 0.94.19 release candidate is available for download
>
>
> So. I'm done with my heavy-duty release test for 0.94.19.
>
> tl;dr: +1 ;)
>
> Here are the details.
> Downloaded the jar, checked the signature, the CHANGES.txt file, and the
> documentation (random pickup) -> Passed.
> Ran the test suite -> Passed.
> Tests run: 1550, Failures: 0, Errors: 0, Skipped: 16
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 1:16:23.687s
> [INFO] Finished at: Wed Apr 23 12:02:26 EDT 2014
> [INFO] Final Memory: 29M/983M
> [INFO] ------------------------------------------------------------------------
>
> Ran PE over a few days and compared with 0.94.18.
> Got one exception on GaussianRandomReadBenchmark in HFilePerformanceEvaluation.
> It occurred multiple times but was always the same exception:
>
> org.apache.hadoop.hbase.io.hfile.AbstractHFileReader$NotSeekedException: Not seeked to a key/value
>     at org.apache.hadoop.hbase.io.hfile.AbstractHFileReader$Scanner.assertSeeked(AbstractHFileReader.java:320)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.getKey(HFileReaderV2.java:650)
>     at org.apache.hadoop.hbase.HFilePerformanceEvaluation$GaussianRandomReadBenchmark.doRow(HFilePerformanceEvaluation.java:350)
>     at org.apache.hadoop.hbase.HFilePerformanceEvaluation$RowOrientedBenchmark.run(HFilePerformanceEvaluation.java:169)
>     at org.apache.hadoop.hbase.HFilePerformanceEvaluation.runBenchmark(HFilePerformanceEvaluation.java:121)
>     at org.apache.hadoop.hbase.HFilePerformanceEvaluation$3.run(HFilePerformanceEvaluation.java:97)
>     at java.lang.Thread.run(Thread.java:744)
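>
> It looks like the benchmark reads the key after a seek that did not actually
> position the scanner, which is what assertSeeked() complains about. Just to
> illustrate the pattern, a guard along these lines avoids it -- a rough,
> untested sketch against the 0.94-era HFileScanner API, not the benchmark's
> actual code:
>
> import java.io.IOException;
> import org.apache.hadoop.hbase.KeyValue;
> import org.apache.hadoop.hbase.io.hfile.HFile;
> import org.apache.hadoop.hbase.io.hfile.HFileScanner;
>
> public class SeekGuardSketch {
>   // Read a key/value only if the seek actually positioned the scanner;
>   // otherwise return null instead of letting getKey()/getKeyValue()
>   // throw NotSeekedException.
>   static KeyValue readIfPresent(HFile.Reader reader, byte[] key) throws IOException {
>     HFileScanner scanner = reader.getScanner(false, true); // no block cache, positional read
>     if (scanner.seekTo(key) == -1) {
>       // The requested key sorts before the first key in the file,
>       // so the scanner is left unpositioned.
>       return null;
>     }
>     return scanner.getKeyValue();
>   }
> }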
>
> Performance-wise this release is very similar to the previous one
> (>0% means 0.94.19 is faster):
>
>                                      0.94.18        0.94.19       Diff
> FilteredScanTest                        0,23           0,23      0,01%
> RandomReadTest                           825            825      0,01%
> RandomSeekScanTest                       173            178      2,89%
> RandomScanWithRange10Test                282            286      1,58%
> RandomScanWithRange100Test               149            147     -1,10%
> RandomScanWithRange1000Test            37,57          39,40      4,85%
> SequentialReadTest                     1 206          1 225      1,51%
> SequentialWriteTest                   13 687         13 826      1,02%
> RandomWriteTest                       14 092         13 574     -3,68%
> GaussianRandomReadBenchmark            9 400          9 395     -0,05%
> SequentialReadBenchmark            3 035 361      3 009 210     -0,86%
> SequentialWriteBenchmark             909 881        909 579     -0,03%
> UniformRandomReadBenchmark            10 312         10 354      0,41%
> UniformRandomSmallScan               233 141        233 931      0,34%
> HLogPerformanceEvaluation           10571,83      10597,149      0,24%
> LoadTestTool                    real 19m24.218s   real 19m11.070s
>                                 user 36m17.208s   user 37m5.328s
>                                 sys  11m45.128s   sys  11m10.724s
> IntegrationTestLoadAndVerify    real  4m7.922s    real  4m7.909s
>                                 user  1m31.100s   user  1m30.668s
>                                 sys   0m7.324s    sys   0m7.136s
> IntegrationTestBigLinkedList    real  6m20.893s   real  6m26.026s
>                                 user  2m55.068s   user  3m0.156s
>                                 sys   0m10.728s   sys   0m10.436s
>
> Ran LoadTestTool, IntegrationTestLoadAndVerify, HLogPerformanceEvaluation,
> and IntegrationTestBigLinkedList. I consider them passed since I got the
> same results as for 0.94.18, but I still have IntegrationTestLoadAndVerify
> reporting wrong results:
>
> 14/04/23 22:45:03 INFO mapred.JobClient:   org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify$Counters
> 14/04/23 22:45:03 INFO mapred.JobClient:     ROWS_WRITTEN=0
> 14/04/23 22:45:03 INFO mapred.JobClient:     REFERENCES_CHECKED=9855522 (Should be 10000000)
>
> Tried to manually create a table, put, scan, flush, get, compact, drop, etc.
> All passed.
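>
> For reference, that manual sequence maps roughly to the following against
> the 0.94 Java client -- an untested sketch with a made-up table name; the
> shell create/put/scan/flush/get/major_compact/disable/drop commands do the
> same thing:
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.*;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class ManualSmokeTest {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = HBaseConfiguration.create();
>     HBaseAdmin admin = new HBaseAdmin(conf);
>
>     // create a table with one family
>     HTableDescriptor desc = new HTableDescriptor("smoke_test");
>     desc.addFamily(new HColumnDescriptor("f"));
>     admin.createTable(desc);
>
>     // put one row
>     HTable table = new HTable(conf, "smoke_test");
>     Put put = new Put(Bytes.toBytes("row1"));
>     put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
>     table.put(put);
>
>     // scan it back
>     ResultScanner rs = table.getScanner(new Scan());
>     for (Result r : rs) System.out.println(r);
>     rs.close();
>
>     // flush, get, major compact (flush/compact are asynchronous)
>     admin.flush("smoke_test");
>     System.out.println(table.get(new Get(Bytes.toBytes("row1"))));
>     admin.majorCompact("smoke_test");
>
>     // drop
>     admin.disableTable("smoke_test");
>     admin.deleteTable("smoke_test");
>     table.close();
>     admin.close();
>   }
> }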
>
> Deployed on the 8-node cluster. RS rolling restarts went well.
> Stopped the cluster, merged a 32-region table down to a single region,
> restarted, ran major_compact, and got it split back correctly.
>
> Tried the default load balancer but it did not work well the first time.
> Initial state:
> node7.google.com,60020,1398349950181    32
>
> After the first run:
> Regions by Region Server
> Region Server                           Region Count
> node1.google.com,60020,1398349948960     2
> node2.google.com,60020,1398349952018     3
> node3.google.com,60020,1398349947506     2
> node4.google.com,60020,1398349949141     3
> node5.google.com,60020,1398349946262     2
> node6.google.com,60020,1398349951095     2
> node7.google.com,60020,1398349950181    16
> node8.google.com,60020,1398349948480     2
>
> After the 2nd run:
> Regions by Region Server
> Region Server                           Region Count
> node1.google.com,60020,1398349948960     4
> node2.google.com,60020,1398349952018     4
> node3.google.com,60020,1398349947506     4
> node4.google.com,60020,1398349949141     4
> node5.google.com,60020,1398349946262     4
> node6.google.com,60020,1398349951095     4
> node7.google.com,60020,1398349950181     4
> node8.google.com,60020,1398349948480     4
>
> But that's not a show stopper.
> Restored my default balancer, restarted the cluster, and got everything
> balanced correctly. It might be nice to have a way to change the balancer
> without having to restart HBase...
>
> I checked the web UI, the logs, and HBCK throughout the process and they
> all reported fine.
>
> Last, the integration test has been running for the last 2 hours with no
> issues.
>
> So I'm +1 on this release.
>
> JM
>
>
>
> 2014-04-24 2:40 GMT-04:00 Srikanth Srungarapu <srikanth...@gmail.com>:
>
> > +1 (non-binding)
> >
> > - Verified the signature and md5
> >
> > - Ran the test suite in both local and distributed mode (all passed,
> > with two skipped)
> >
> > - Inspected the UI and CHANGES.txt.
> >
> > Thanks,
> > Srikanth.
> >
> >
> > On Wed, Apr 23, 2014 at 5:54 PM, lars hofhansl <la...@apache.org> wrote:
> >
> > > Thanks Ted and Esteban!
> > >
> > >
> > > -- Lars
> > >
> > >
> > >
> > > ________________________________
> > >  From: Esteban Gutierrez <este...@cloudera.com>
> > > To: "dev@hbase.apache.org" <dev@hbase.apache.org>
> > > Cc: lars hofhansl <la...@apache.org>
> > > Sent: Tuesday, April 22, 2014 9:36 PM
> > > Subject: Re: [VOTE] The 1st hbase 0.94.19 release candidate is available for download
> > >
> > >
> > > +1 (non-binding)
> > >
> > > signature good, all tests passed on first run (2 skipped), ran PE
> > > with SecureRpcEngine in pseudo-distributed mode.
> > >
> > > esteban.
> > >
> > >
> > >
> > > --
> > > Cloudera, Inc.
> > >
> > >
> > >
> > > On Tue, Apr 22, 2014 at 3:48 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> > >
> > > > +1
> > > >
> > > > - checked documentation and tarball
> > > >
> > > > - Ran unit test suite which passed (TestTableSnapshotInputFormatScan
> > > > passed on second run)
> > > >
> > > > - Ran in local and distributed mode
> > > >
> > > > Cheers
> > > >
> > > >
> > > > On Mon, Apr 21, 2014 at 7:49 PM, lars hofhansl <la...@apache.org> wrote:
> > > >
> > > > > The 1st 0.94.19 RC is available for download at
> > > > > http://people.apache.org/~larsh/hbase-0.94.19-rc0/
> > > > > Signed with my code signing key: C7CFE328
> > > > >
> > > > > HBase 0.94.19 is a bug fix release with 29 bug and test fixes:
> > > > >
> > > > > Bug
> > > > >     [HBASE-10118] - Major compact keeps deletes with future timestamps
> > > > >     [HBASE-10312] - Flooding the cluster with administrative actions leads to collapse
> > > > >     [HBASE-10533] - commands.rb is giving wrong error messages on exceptions
> > > > >     [HBASE-10766] - SnapshotCleaner allows to delete referenced files
> > > > >     [HBASE-10805] - Speed up KeyValueHeap.next() a bit
> > > > >     [HBASE-10807] - -ROOT- still stale in table.jsp if it moved
> > > > >     [HBASE-10845] - Memstore snapshot size isn't updated in DefaultMemStore#rollback()
> > > > >     [HBASE-10847] - 0.94: drop non-secure builds, make security the default
> > > > >     [HBASE-10848] - Filter SingleColumnValueFilter combined with NullComparator does not work
> > > > >     [HBASE-10966] - RowCounter misinterprets column names that have colons in their qualifier
> > > > >     [HBASE-10991] - Port HBASE-10639 'Unload script displays wrong counts (off by one) when unloading regions' to 0.94
> > > > >     [HBASE-11003] - ExportSnapshot is using the wrong fs when staging dir is not in fs.defaultFS
> > > > >     [HBASE-11030] - HBaseTestingUtility.getMiniHBaseCluster should be able to return null
> > > > >     [HBASE-10921] - Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96
> > > > >
> > > > > Test
> > > > >     [HBASE-10782] - Hadoop2 MR tests fail occasionally because of mapreduce.jobhistory.address is no set in job conf
> > > > >     [HBASE-10969] - TestDistributedLogSplitting fails frequently in 0.94
> > > > >     [HBASE-10982] - TestZKProcedure.testMultiCohortWithMemberTimeoutDuringPrepare fails frequently in 0.94
> > > > >     [HBASE-10987] - Increase timeout in TestZKLeaderManager.testLeaderSelection
> > > > >     [HBASE-10988] - Properly wait for server in TestThriftServerCmdLine
> > > > >     [HBASE-10989] - TestAccessController needs better timeout
> > > > >     [HBASE-10996] - TestTableSnapshotInputFormatScan fails frequently on 0.94
> > > > >     [HBASE-11010] - TestChangingEncoding is unnecessarily slow
> > > > >     [HBASE-11017] - TestHRegionBusyWait.testWritesWhileScanning fails frequently in 0.94
> > > > >     [HBASE-11022] - Increase timeout for TestHBaseFsck.testSplitDaughtersNotInMeta
> > > > >     [HBASE-11024] - TestSecureLoadIncrementalHFilesSplitRecovery should wait longer for ACL table
> > > > >     [HBASE-11029] - Increase wait in TestSplitTransactionOnCluster.split
> > > > >     [HBASE-11037] - Race condition in TestZKBasedOpenCloseRegion
> > > > >     [HBASE-11040] - TestAccessController, TestAccessControllerFilter, and TestTablePermissions need to wait longer to ACL table
> > > > >     [HBASE-11042] - TestForceCacheImportantBlocks OOMs occasionally in 0.94
> > > > >
> > > > > Notable is HBASE-10847, which drops non-secure builds and makes security the default.
> > > > > From here on there is only one release build of HBase 0.94.
> > > > >
> > > > > The list of changes is also available here:
> > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12326287
> > > > >
> > > > > Here are the Jenkins runs for this RC:
> > > > > https://builds.apache.org/job/HBase-0.94.19/18/
> > > > >
> > > > > Please try out the RC, check out the doc, take it for a spin, etc., and
> > > > > vote +1/-1 by EOD April 27th on whether we should release this as 0.94.19.
> > > > >
> > > > > Thanks.
> > > > >
> > > > > -- Lars
> > > > >
> > > > >