[jira] [Created] (HBASE-4977) Forward port HBASE-3848 to 0.92 and TRUNK
Forward port HBASE-3848 to 0.92 and TRUNK - Key: HBASE-4977 URL: https://issues.apache.org/jira/browse/HBASE-4977 Project: HBase Issue Type: Task Reporter: Ted Yu HBASE-3848 (request count is always zero in the WebUI for a region server) was integrated into 0.90. This JIRA is a forward port to 0.92 and TRUNK. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-4956) Control direct memory buffer consumption by HBaseClient
Control direct memory buffer consumption by HBaseClient --- Key: HBASE-4956 URL: https://issues.apache.org/jira/browse/HBASE-4956 Project: HBase Issue Type: New Feature Reporter: Ted Yu As Jonathan explained here https://groups.google.com/group/asynchbase/browse_thread/thread/c45bc7ba788b2357?pli=1 , the standard HBase client inadvertently consumes a large amount of direct memory. We should consider using Netty for NIO-related tasks.
[jira] [Created] (HBASE-4942) HMaster is unable to start if HFile V1 is used
HMaster is unable to start if HFile V1 is used -- Key: HBASE-4942 URL: https://issues.apache.org/jira/browse/HBASE-4942 Project: HBase Issue Type: Bug Components: io Affects Versions: 0.92.0 Reporter: Ted Yu Fix For: 0.92.0, 0.94.0 This was reported by HH Zhu (zhh200...@gmail.com). If the following is specified in hbase-site.xml:
{code}
<property>
  <name>hfile.format.version</name>
  <value>1</value>
</property>
{code}
Clear the hdfs directory hbase.rootdir so that MasterFileSystem.bootstrap() is executed. You would see:
{code}
java.lang.NullPointerException
  at org.apache.hadoop.hbase.io.hfile.HFileReaderV1.close(HFileReaderV1.java:358)
  at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1083)
  at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:570)
  at org.apache.hadoop.hbase.regionserver.Store.close(Store.java:441)
  at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:782)
  at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:717)
  at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:688)
  at org.apache.hadoop.hbase.master.MasterFileSystem.bootstrap(MasterFileSystem.java:390)
  at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:356)
  at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:128)
  at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:113)
  at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:435)
  at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:314)
  at java.lang.Thread.run(Thread.java:619)
{code}
The above exception would lead to:
{code}
java.lang.RuntimeException: HMaster Aborted
  at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:152)
  at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
  at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
  at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1512)
{code}
In org.apache.hadoop.hbase.master.HMaster.HMaster(Configuration conf), we have:
{code}
this.conf.setFloat(CacheConfig.HFILE_BLOCK_CACHE_SIZE_KEY, 0.0f);
{code}
When CacheConfig is instantiated, the following is called:
{code}
org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(Configuration conf)
{code}
Since hfile.block.cache.size is 0.0, instantiateBlockCache() returns null, leaving the blockCache field of CacheConfig null. When the master closes the Root region, org.apache.hadoop.hbase.io.hfile.HFileReaderV1.close(boolean evictOnClose) is called. cacheConf.getBlockCache() returns null, leading to the master abort. The following check should be added in HFileReaderV1.close(), similar to the code in HFileReaderV2.close():
{code}
if (evictOnClose && cacheConf.isBlockCacheEnabled())
{code}
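The proposed guard can be illustrated with a minimal, self-contained sketch. CacheConfigStub and ReaderStub below are illustrative stand-ins, not the real HBase classes; the only point is that close() must check isBlockCacheEnabled() before dereferencing the cache, as HFileReaderV2 already does:

```java
// Stand-in for CacheConfig: blockCache is null when hfile.block.cache.size is 0.0.
final class CacheConfigStub {
    private final Object blockCache;

    CacheConfigStub(Object blockCache) { this.blockCache = blockCache; }

    boolean isBlockCacheEnabled() { return blockCache != null; }
    Object getBlockCache() { return blockCache; }
}

// Stand-in for the HFile reader; returns true if an eviction was attempted.
final class ReaderStub {
    private final CacheConfigStub cacheConf;

    ReaderStub(CacheConfigStub cacheConf) { this.cacheConf = cacheConf; }

    boolean close(boolean evictOnClose) {
        // The fix: guard on isBlockCacheEnabled() so a null block cache
        // (master with cache size 0.0) no longer causes an NPE.
        if (evictOnClose && cacheConf.isBlockCacheEnabled()) {
            Object cache = cacheConf.getBlockCache(); // non-null here
            return cache != null; // stands in for evictBlocksByHfileName(...)
        }
        return false;
    }
}
```

With this guard, closing the Root region on a master configured with a zero-sized block cache simply skips eviction instead of aborting.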
[jira] [Created] (HBASE-4887) Write full enum name of Compression.Algorithm into HFile
Write full enum name of Compression.Algorithm into HFile Key: HBASE-4887 URL: https://issues.apache.org/jira/browse/HBASE-4887 Project: HBase Issue Type: Improvement Reporter: Ted Yu Currently the ordinal of the compression algorithm is used. This places an unnecessary constraint on adding new compression algorithms. We should write the full enum name of Compression.Algorithm into the HFile.
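The difference can be sketched with a stand-in enum (the constants below are illustrative, not the real Compression.Algorithm): encoding by name() round-trips through valueOf() and stays stable even if constants are reordered or inserted, whereas an ordinal silently points at a different algorithm after such a change.

```java
// Stand-in for Compression.Algorithm; the real enum lives in HBase.
enum Algorithm { NONE, GZ, LZO }

final class CompressionField {
    // Proposed encoding: the full enum name, stable across enum evolution.
    static String encode(Algorithm a) {
        return a.name();
    }

    static Algorithm decode(String s) {
        return Algorithm.valueOf(s);
    }

    // Current (fragile) encoding: the ordinal. Inserting a new constant
    // before GZ would make a previously written 1 decode as the new entry.
    static int encodeOrdinal(Algorithm a) {
        return a.ordinal();
    }
}
```

The name-based field is a few bytes larger in the HFile trailer but removes the ordering constraint entirely.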
[jira] [Created] (HBASE-4876) TestDistributedLogSplitting#testWorkerAbort occasionally fails
TestDistributedLogSplitting#testWorkerAbort occasionally fails -- Key: HBASE-4876 URL: https://issues.apache.org/jira/browse/HBASE-4876 Project: HBase Issue Type: Bug Reporter: Ted Yu From https://builds.apache.org/view/G-L/view/HBase/job/HBase-TRUNK/2486/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testWorkerAbort/:
{code}
2011-11-26 18:10:25,075 DEBUG [SplitLogWorker-janus.apache.org,42484,1322330994864] wal.HLogSplitter(460): Closed hdfs://localhost:47236/user/jenkins/splitlog/janus.apache.org,42484,1322330994864_hdfs%3A%2F%2Flocalhost%3A47236%2Fuser%2Fjenkins%2F.logs%2Fjanus.apache.org%2C42484%2C1322330994864%2Fjanus.apache.org%252C42484%252C1322330994864.1322330997838/table/be67e8c1df1e77e93181ff7300e77639/recovered.edits/152
2011-11-26 18:10:25,075 DEBUG [SplitLogWorker-janus.apache.org,42484,1322330994864] wal.HLogSplitter(460): Closed hdfs://localhost:47236/user/jenkins/splitlog/janus.apache.org,42484,1322330994864_hdfs%3A%2F%2Flocalhost%3A47236%2Fuser%2Fjenkins%2F.logs%2Fjanus.apache.org%2C42484%2C1322330994864%2Fjanus.apache.org%252C42484%252C1322330994864.1322330997838/table/bf112e57fbaa65c12accfafaaa4dc2b0/recovered.edits/167
2011-11-26 18:10:25,075 DEBUG [SplitLogWorker-janus.apache.org,42484,1322330994864] wal.HLogSplitter(460): Closed hdfs://localhost:47236/user/jenkins/splitlog/janus.apache.org,42484,1322330994864_hdfs%3A%2F%2Flocalhost%3A47236%2Fuser%2Fjenkins%2F.logs%2Fjanus.apache.org%2C42484%2C1322330994864%2Fjanus.apache.org%252C42484%252C1322330994864.1322330997838/table/bfb6983046589215ed8e6cb0e60dd803/recovered.edits/146
2011-11-26 18:10:25,488 INFO [SplitLogWorker-janus.apache.org,42484,1322330994864] regionserver.SplitLogWorker(308): worker janus.apache.org,42484,1322330994864 done with task /hbase/splitlog/hdfs%3A%2F%2Flocalhost%3A47236%2Fuser%2Fjenkins%2F.logs%2Fjanus.apache.org%2C42484%2C1322330994864%2Fjanus.apache.org%252C42484%252C1322330994864.1322330997838 in 13379ms
2011-11-26 18:10:25,488 ERROR [SplitLogWorker-janus.apache.org,42484,1322330994864] regionserver.SplitLogWorker(169): unexpected error
java.lang.NullPointerException
  at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeThreads(DFSClient.java:3648)
  at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3691)
  at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3626)
  at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
  at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
  at org.apache.hadoop.io.SequenceFile$Writer.close(SequenceFile.java:966)
  at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.close(SequenceFileLogWriter.java:214)
  at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:459)
  at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:352)
  at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:113)
  at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:266)
  at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:197)
  at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:165)
  at java.lang.Thread.run(Thread.java:662)
2011-11-26 18:10:25,488 INFO [SplitLogWorker-janus.apache.org,42484,1322330994864] regionserver.SplitLogWorker(171): SplitLogWorker janus.apache.org,42484,1322330994864 exiting
{code}
[jira] [Created] (HBASE-4873) Port HBASE-4863 to thrift2/ThriftServer
Port HBASE-4863 to thrift2/ThriftServer --- Key: HBASE-4873 URL: https://issues.apache.org/jira/browse/HBASE-4873 Project: HBase Issue Type: Improvement Reporter: Ted Yu HBASE-4863 introduced a bounded thread pool for the Thrift server. thrift2/ThriftServer should have this enhancement as well.
[jira] [Created] (HBASE-4875) ZKLeaderManager.handleLeaderChange() doesn't handle KeeperException$SessionExpiredException
ZKLeaderManager.handleLeaderChange() doesn't handle KeeperException$SessionExpiredException --- Key: HBASE-4875 URL: https://issues.apache.org/jira/browse/HBASE-4875 Project: HBase Issue Type: Bug Affects Versions: 0.92.0 Reporter: Ted Yu TestMasterFailover#testSimpleMasterFailover has failed twice in a row, in builds 15 and 16. From https://builds.apache.org/view/G-L/view/HBase/job/HBase-0.92-security/16/testReport/org.apache.hadoop.hbase.master/TestMasterFailover/testSimpleMasterFailover/:
{code}
2011-11-26 01:34:49,218 ERROR [Thread-1-EventThread] zookeeper.ZooKeeperWatcher(403): master:52934-0x133dd828131 Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/tokenauth/keymaster
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
  at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1003)
  at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:154)
  at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:225)
  at org.apache.hadoop.hbase.zookeeper.ZKLeaderManager.handleLeaderChange(ZKLeaderManager.java:85)
  at org.apache.hadoop.hbase.zookeeper.ZKLeaderManager.nodeDeleted(ZKLeaderManager.java:78)
  at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:281)
  at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)
2011-11-26 01:34:49,216 DEBUG [RegionServer:2;hemera.apache.org,44702,1322271278232-EventThread] zookeeper.ZKUtil(230): hconnection-0x133dd828139 /hbase/master does not exist. Watcher is set.
2011-11-26 01:34:49,215 DEBUG [Thread-1-EventThread] zookeeper.ZKUtil(230): master:44883-0x133dd828132 /hbase/master does not exist. Watcher is set.
2011-11-26 01:34:49,219 DEBUG [Thread-1-EventThread] master.ActiveMasterManager(104): No master available. Notifying waiting threads
2011-11-26 01:34:49,215 INFO [Master:1;hemera.apache.org,52934,1322271278115] master.HMaster(338): HMaster main thread exiting
{code}
[jira] [Created] (HBASE-4856) Unit tests under security profile need more heap space
Unit tests under security profile need more heap space -- Key: HBASE-4856 URL: https://issues.apache.org/jira/browse/HBASE-4856 Project: HBase Issue Type: Task Reporter: Ted Yu In more than one 0.92-security build (build #9, e.g.), we had the following:
{code}
Running org.apache.hadoop.hbase.master.TestDistributedLogSplitting
Exception in thread ThreadedStreamConsumer java.lang.OutOfMemoryError: Java heap space
  at java.util.Arrays.copyOf(Arrays.java:2882)
  at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
  at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
  at java.lang.StringBuffer.append(StringBuffer.java:224)
  at org.apache.maven.surefire.report.TestSetRunListener.getAsString(TestSetRunListener.java:201)
  at org.apache.maven.surefire.report.TestSetRunListener.testError(TestSetRunListener.java:139)
  at org.apache.maven.plugin.surefire.booterclient.output.ForkClient.consumeLine(ForkClient.java:112)
Running org.apache.hadoop.hbase.master.TestMasterFailover
Exception in thread ThreadedStreamConsumer java.lang.OutOfMemoryError: Java heap space
  at java.util.Arrays.copyOf(Arrays.java:2882)
  at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
  at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
  at java.lang.StringBuffer.append(StringBuffer.java:224)
  at org.apache.maven.surefire.report.TestSetRunListener.getAsString(TestSetRunListener.java:201)
  at org.apache.maven.surefire.report.TestSetRunListener.testError(TestSetRunListener.java:139)
{code}
We should increase the maximum heap for tests under the security profile.
[jira] [Created] (HBASE-4839) Re-enable TestInstantSchemaChangeFailover#testInstantSchemaOperationsInZKForMasterFailover
Re-enable TestInstantSchemaChangeFailover#testInstantSchemaOperationsInZKForMasterFailover -- Key: HBASE-4839 URL: https://issues.apache.org/jira/browse/HBASE-4839 Project: HBase Issue Type: Test Reporter: Ted Yu TestInstantSchemaChangeFailover#testInstantSchemaOperationsInZKForMasterFailover was disabled for instant schema change (HBASE-4213) after it failed on Jenkins. We should enable it and make it pass on Jenkins and in dev environments.
[jira] [Created] (HBASE-4754) FSTableDescriptors.getTableInfoPath() should be able to handle FileNotFoundException
FSTableDescriptors.getTableInfoPath() should be able to handle FileNotFoundException Key: HBASE-4754 URL: https://issues.apache.org/jira/browse/HBASE-4754 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.92.0 As reported by Roman in the thread entitled 'HBase 0.92/Hadoop 0.22 test results', table creation would result in the following if Hadoop 0.22 is the underlying platform:
{code}
11/11/05 19:08:48 INFO handler.CreateTableHandler: Attemping to create the table b
11/11/05 19:08:48 ERROR handler.CreateTableHandler: Error trying to create the table b
java.io.FileNotFoundException: File hdfs://ip-10-110-254-200.ec2.internal:17020/hbase/b does not exist.
  at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:387)
  at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1085)
  at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1110)
  at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:257)
  at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:243)
  at org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptor(FSTableDescriptors.java:566)
  at org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptor(FSTableDescriptors.java:535)
  at org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptor(FSTableDescriptors.java:519)
  at org.apache.hadoop.hbase.master.handler.CreateTableHandler.handleCreateTable(CreateTableHandler.java:140)
  at org.apache.hadoop.hbase.master.handler.CreateTableHandler.process(CreateTableHandler.java:126)
  at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:168)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
  at java.lang.Thread.run(Thread.java:619)
{code}
This was due to how DistributedFileSystem.listStatus() in 0.22 handles a non-existent directory:
{code}
@Override
public FileStatus[] listStatus(Path p) throws IOException {
  String src = getPathName(p);
  // fetch the first batch of entries in the directory
  DirectoryListing thisListing = dfs.listPaths(src, HdfsFileStatus.EMPTY_NAME);
  if (thisListing == null) { // the directory does not exist
    throw new FileNotFoundException("File " + p + " does not exist.");
  }
{code}
So in FSTableDescriptors.getTableInfoPath(), we should catch FileNotFoundException and treat it the same way as a null status.
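The proposed handling can be sketched as a self-contained analogue (listStatusStub stands in for FileSystem.listStatus; the method and return values are illustrative, not the real FSTableDescriptors API): a FileNotFoundException from the listing is caught and treated exactly like a null listing.

```java
import java.io.FileNotFoundException;

// Sketch of the proposed fix: tolerate both the Hadoop 0.20 behavior
// (null listing for a missing directory) and the 0.22 behavior
// (FileNotFoundException) in getTableInfoPath()-style code.
final class TableInfoPath {
    // Stand-in for FileSystem.listStatus(tableDir, filter).
    static String[] listStatusStub(boolean dirExists) throws FileNotFoundException {
        if (!dirExists) {
            throw new FileNotFoundException("File does not exist."); // 0.22 behavior
        }
        return new String[] { ".tableinfo.0000000001" };
    }

    static String getTableInfoPath(boolean tableDirExists) {
        String[] status;
        try {
            status = listStatusStub(tableDirExists);
        } catch (FileNotFoundException e) {
            // Treat a missing table directory the same as a null listing.
            status = null;
        }
        if (status == null || status.length == 0) {
            return null; // no table info file yet
        }
        return status[0];
    }
}
```

With this change, createTableDescriptor() sees "no descriptor yet" on both Hadoop versions instead of failing table creation on 0.22.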
[jira] [Created] (HBASE-4750) Make thrift2 ThriftHBaseServiceHandler more friendly to concurrent tests
Make thrift2 ThriftHBaseServiceHandler more friendly to concurrent tests Key: HBASE-4750 URL: https://issues.apache.org/jira/browse/HBASE-4750 Project: HBase Issue Type: Task Reporter: Ted Yu Quite often we saw the following reported by HadoopQA:
{code}
testExists(org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandler) Time elapsed: 0.062 sec ERROR!
java.lang.IllegalArgumentException: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
  at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.waitRootRegionLocation(RootRegionTracker.java:81)
  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:753)
  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:733)
  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:866)
  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:765)
  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:733)
  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:866)
  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:769)
  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:733)
  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:202)
  at org.apache.hadoop.hbase.client.HTableFactory.createHTableInterface(HTableFactory.java:36)
  at org.apache.hadoop.hbase.client.HTablePool.createHTable(HTablePool.java:268)
  at org.apache.hadoop.hbase.client.HTablePool.findOrCreateTable(HTablePool.java:198)
  at org.apache.hadoop.hbase.client.HTablePool.getTable(HTablePool.java:173)
  at org.apache.hadoop.hbase.client.HTablePool.getTable(HTablePool.java:216)
  at org.apache.hadoop.hbase.thrift2.ThriftHBaseServiceHandler.getTable(ThriftHBaseServiceHandler.java:64)
  at org.apache.hadoop.hbase.thrift2.ThriftHBaseServiceHandler.exists(ThriftHBaseServiceHandler.java:115)
  at org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandler.testExists(TestThriftHBaseServiceHandler.java:123)
{code}
Methods in ThriftHBaseServiceHandler don't accept a Configuration parameter. This makes parallelizing tests harder. Looking deeper, we can see that HTablePool methods such as getTable() and findOrCreateTable() don't accept a Configuration parameter either, so we have to pass the Configuration object to the HTablePool constructor. This means we need to add a ThriftHBaseServiceHandler constructor that takes a Configuration parameter. Instead of the following in TestThriftHBaseServiceHandler:
{code}
ThriftHBaseServiceHandler handler = new ThriftHBaseServiceHandler();
{code}
we should use the new ThriftHBaseServiceHandler constructor and pass HBaseTestingUtility's Configuration so that the HTablePool constructor can receive it.
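The shape of the change can be sketched with stand-in types (ConfStub and HandlerStub are illustrative names, not the real HBase/Thrift classes): keep the no-arg constructor for production defaults, but let tests inject their own configuration explicitly.

```java
// Stand-in for org.apache.hadoop.hbase.HBaseConfiguration.
final class ConfStub {
    final String znodeParent;

    ConfStub(String znodeParent) { this.znodeParent = znodeParent; }
}

// Stand-in for ThriftHBaseServiceHandler.
final class HandlerStub {
    private final ConfStub conf;

    // Existing style: an implicit default configuration, which mismatches
    // a test mini-cluster running under a non-default znode parent.
    HandlerStub() {
        this(new ConfStub("/hbase"));
    }

    // Proposed style: the test passes HBaseTestingUtility's configuration,
    // which the handler forwards to its HTablePool.
    HandlerStub(ConfStub conf) {
        this.conf = conf;
    }

    String znodeParent() { return conf.znodeParent; }
}
```

Constructor injection like this is what makes the test independent of whatever defaults happen to be on the classpath.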
[jira] [Created] (HBASE-4751) Make TestAdmin#testEnableTableRoundRobinAssignment friendly to concurrent tests
Make TestAdmin#testEnableTableRoundRobinAssignment friendly to concurrent tests --- Key: HBASE-4751 URL: https://issues.apache.org/jira/browse/HBASE-4751 Project: HBase Issue Type: Task Reporter: Ted Yu From https://builds.apache.org/view/G-L/view/HBase/job/HBase-TRUNK/2410/artifact/trunk/target/surefire-reports/org.apache.hadoop.hbase.client.TestAdmin.txt :
{code}
testEnableTableRoundRobinAssignment(org.apache.hadoop.hbase.client.TestAdmin) Time elapsed: 4.345 sec ERROR!
java.lang.IllegalArgumentException: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
  at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.waitRootRegionLocation(RootRegionTracker.java:81)
  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:753)
  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:733)
  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:866)
  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:765)
  at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:733)
  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:202)
  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:157)
  at org.apache.hadoop.hbase.client.TestAdmin.testEnableTableRoundRobinAssignment(TestAdmin.java:604)
{code}
This was due to:
{code}
HTable metaTable = new HTable(HConstants.META_TABLE_NAME);
{code}
A few lines above, we have the correct usage:
{code}
HTable ht = new HTable(TEST_UTIL.getConfiguration(), tableName);
{code}
[jira] [Created] (HBASE-4745) LRU Statistics thread should be daemon
LRU Statistics thread should be daemon -- Key: HBASE-4745 URL: https://issues.apache.org/jira/browse/HBASE-4745 Project: HBase Issue Type: Bug Reporter: Ted Yu Fix For: 0.92.0 Here is a thread dump from the 'HBase 0.92/Hadoop 0.22 test results' discussion on dev@hbase:
{code}
"LRU Statistics #0" prio=10 tid=0x7f4edc7dd800 nid=0x211a waiting on condition [0x7f4e631e2000]
   java.lang.Thread.State: TIMED_WAITING (parking)
  at sun.misc.Unsafe.park(Native Method)
  - parking to wait for <0x7f4e88acc968> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
  at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
  at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:583)
  at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:576)
  at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
  at java.lang.Thread.run(Thread.java:619)
{code}
We should make this thread a daemon thread.
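The fix amounts to creating the statistics executor with a thread factory that marks its threads daemon, so they cannot keep the JVM alive after the main threads exit. The sketch below is plain JDK code (the executor and thread name are assumptions mirroring the dump above, not the actual LruBlockCache source):

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.ThreadFactory;

// Illustrative sketch: a scheduled executor whose worker thread is a daemon.
final class DaemonStats {
    static ScheduledThreadPoolExecutor newStatsExecutor() {
        ThreadFactory daemonFactory = r -> {
            Thread t = new Thread(r, "LRU Statistics #0");
            t.setDaemon(true); // the proposed fix: daemon threads never block JVM exit
            return t;
        };
        return new ScheduledThreadPoolExecutor(1, daemonFactory);
    }
}
```

Without setDaemon(true), the executor's worker parked in DelayQueue.take() (as in the dump above) is a non-daemon thread, and the process hangs on shutdown until the executor is explicitly stopped.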
[jira] [Created] (HBASE-4747) Upgrade maven surefire plugin to 2.10
Upgrade maven surefire plugin to 2.10 - Key: HBASE-4747 URL: https://issues.apache.org/jira/browse/HBASE-4747 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Quite often, we see the following when running unit tests:
{code}
Running org.apache.hadoop.hbase.master.TestMasterFailover
Exception in thread ThreadedStreamConsumer java.lang.OutOfMemoryError: Java heap space
  at java.util.Arrays.copyOf(Arrays.java:2882)
  at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
  at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
  at java.lang.StringBuffer.append(StringBuffer.java:224)
  at org.apache.maven.surefire.report.TestSetRunListener.getAsString(TestSetRunListener.java:201)
  at org.apache.maven.surefire.report.TestSetRunListener.testError(TestSetRunListener.java:139)
  at org.apache.maven.plugin.surefire.booterclient.output.ForkClient.consumeLine(ForkClient.java:112)
  at org.apache.maven.plugin.surefire.booterclient.output.ThreadedStreamConsumer$Pumper.run(ThreadedStreamConsumer.java:67)
  at java.lang.Thread.run(Thread.java:680)
{code}
This was due to https://jira.codehaus.org/browse/SUREFIRE-754, which has been fixed in Surefire 2.10. We should upgrade to version 2.10.
[jira] [Created] (HBASE-4716) Improve locking for single column family bulk load
Improve locking for single column family bulk load -- Key: HBASE-4716 URL: https://issues.apache.org/jira/browse/HBASE-4716 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Ted Yu HBASE-4552 changed the locking behavior for single column family bulk load; namely, we don't need to take the write lock. A read lock would suffice.
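The intended locking change can be sketched with a plain JDK read/write lock (this stands in for HRegion's internal region lock; the class and method names below are illustrative, not the HBase API): a single-family bulk load takes only the shared side, so it no longer serializes against concurrent readers or other single-family loads.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch of the proposed locking: read (shared) lock for a
// single-family bulk load instead of the exclusive write lock.
final class BulkLoadLocking {
    private final ReentrantReadWriteLock regionLock = new ReentrantReadWriteLock();

    // Returns true if the load ran; false if the exclusive side was held
    // (e.g. a region close in progress).
    boolean tryBulkLoadSingleFamily(Runnable load) {
        // Before the change this would be regionLock.writeLock(),
        // blocking all readers for the duration of the load.
        if (regionLock.readLock().tryLock()) {
            try {
                load.run();
                return true;
            } finally {
                regionLock.readLock().unlock();
            }
        }
        return false;
    }
}
```

Because the read side is shared, any number of single-family loads and reads can hold it simultaneously; only operations that truly need exclusivity (multi-family atomic loads, region close) take the write side.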
[jira] [Created] (HBASE-4507) Create checkAndPut variant that exposes timestamp / UUID
Create checkAndPut variant that exposes timestamp / UUID Key: HBASE-4507 URL: https://issues.apache.org/jira/browse/HBASE-4507 Project: HBase Issue Type: Sub-task Reporter: Ted Yu Michael checked checkAndPut, which doesn't expose the timestamp. A variant of checkAndPut should expose the timestamp by writing a timestamp or UUID to .META. in a new column info:editid whenever we do the metadata update on open.
[jira] [Created] (HBASE-4508) Backport HBASE-3777 to 0.90 branch
Backport HBASE-3777 to 0.90 branch -- Key: HBASE-4508 URL: https://issues.apache.org/jira/browse/HBASE-4508 Project: HBase Issue Type: Bug Reporter: Ted Yu See discussion here: http://search-hadoop.com/m/MJBId1aazTR1/backporting+HBASE-3777+to+0.90subj=backporting+HBASE+3777+to+0+90 Rocketfuel has been running 0.90.3 with HBASE-3777 since its resolution. They have 10 RS nodes, 1 Master and 1 ZooKeeper. The workload has live writes and reads but is super heavy on reads. The cache hit rate is pretty high. The qps in one of their data centers is 50K.
[jira] [Created] (HBASE-4490) Improve TestRollingRestart to cover complex cases
Improve TestRollingRestart to cover complex cases - Key: HBASE-4490 URL: https://issues.apache.org/jira/browse/HBASE-4490 Project: HBase Issue Type: Task Reporter: Ted Yu HBASE-4455 fixed a region server rolling restart scenario where the ROOT and .META. regions could become invisible from AssignmentManager's point of view. This JIRA would create integration test(s) that simulate the above scenario and verify that the fix in HBASE-4455 indeed works.
[jira] [Created] (HBASE-4492) TestRollingRestart fails intermittently
TestRollingRestart fails intermittently --- Key: HBASE-4492 URL: https://issues.apache.org/jira/browse/HBASE-4492 Project: HBase Issue Type: Test Reporter: Ted Yu Assignee: Jonathan Gray I got the following when running the test suite on TRUNK:
{code}
testBasicRollingRestart(org.apache.hadoop.hbase.master.TestRollingRestart) Time elapsed: 300.28 sec ERROR!
java.lang.Exception: test timed out after 30 milliseconds
  at java.lang.Thread.sleep(Native Method)
  at org.apache.hadoop.hbase.master.TestRollingRestart.waitForRSShutdownToStartAndFinish(TestRollingRestart.java:313)
  at org.apache.hadoop.hbase.master.TestRollingRestart.testBasicRollingRestart(TestRollingRestart.java:210)
{code}
I ran TestRollingRestart#testBasicRollingRestart manually afterwards, which wiped out the test output file for the failed test. A similar failure can be found on Jenkins: https://builds.apache.org/view/G-L/view/HBase/job/HBase-0.92/19/testReport/junit/org.apache.hadoop.hbase.master/TestRollingRestart/testBasicRollingRestart/