[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter
[ https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752140#comment-13752140 ] stack commented on HBASE-9359: -- rb? In the meantime, I skimmed 1/5th and all I see is good stuff. Ain't it harmless swapping out a KeyValue for the Lower-Common-Denominator Cell? Looking good so far. Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter -- Key: HBASE-9359 URL: https://issues.apache.org/jira/browse/HBASE-9359 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Attachments: hbase-9359.patch This patch is the second half of eliminating KeyValue from the client interfaces. This percolated through quite a bit. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752143#comment-13752143 ] Hadoop QA commented on HBASE-9230: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600331/9230v6.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 26 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.hadoop.hbase.client.TestAsyncProcess.testSubmitWithCB(TestAsyncProcess.java:164) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6943//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6943//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6943//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6943//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6943//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6943//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6943//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6943//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6943//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6943//console This message is automatically generated. 
Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt, 9230v3.txt, 9230v3.txt, 9230v4.txt, 9230v5.txt, 9230v6.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server currently stands. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make dev'ing simple clients easier. This issue shouldn't hold up the release but we should get it in to help the asynchbase conversion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752144#comment-13752144 ] Hadoop QA commented on HBASE-9230: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600331/9230v6.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 26 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.hadoop.hbase.client.TestAsyncProcess.testSubmitWithCB(TestAsyncProcess.java:164) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6944//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6944//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6944//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6944//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6944//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6944//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6944//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6944//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6944//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6944//console This message is automatically generated. 
[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter
[ https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752145#comment-13752145 ] Jonathan Hsieh commented on HBASE-9359: --- I was going to post, but a test run came up with some test failures from a few translation mistakes. I'll get a copy up when it passes and post it to rb.
[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter
[ https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752147#comment-13752147 ] Jonathan Hsieh commented on HBASE-9359: --- The places where it may break user applications are any methods that now return a Cell, Cell[], or List<Cell> instead of a KeyValue, KeyValue[], or List<KeyValue>. I'm pretty sure I'm going to add #getQualifier, #getFamily, #getValue, and #getRow back to the Cell interface as deprecated, because these were responsible for most of the breaks.
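The deprecation plan described above can be sketched roughly as follows. This is a simplified, self-contained stand-in, not the real org.apache.hadoop.hbase.Cell interface: the accessor names mirror the real ones, but `SimpleCell` and its default method are illustrative only. The point is that the convenience getter allocates and copies, while the array/offset/length trio does not.

```java
import java.util.Arrays;

// Simplified stand-in for the HBase Cell interface (illustrative only).
interface SimpleCell {
    byte[] getValueArray();
    int getValueOffset();
    int getValueLength();

    // The kind of deprecated convenience accessor discussed above: it
    // copies the value bytes out of the (possibly shared) backing array.
    @Deprecated
    default byte[] getValue() {
        return Arrays.copyOfRange(getValueArray(), getValueOffset(),
                getValueOffset() + getValueLength());
    }
}

public class CellSketch {
    public static void main(String[] args) {
        // One backing array holds row, family, qualifier and value regions.
        byte[] backing = "rowfamqualVALUE".getBytes();
        SimpleCell cell = new SimpleCell() {
            public byte[] getValueArray() { return backing; }
            public int getValueOffset() { return 10; }
            public int getValueLength() { return 5; }
        };
        System.out.println(new String(cell.getValue())); // VALUE
    }
}
```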
[jira] [Commented] (HBASE-8692) [AccessController] Restrict HTableDescriptor enumeration
[ https://issues.apache.org/jira/browse/HBASE-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752163#comment-13752163 ] Andrew Purtell commented on HBASE-8692: --- bq. Also this caused HBASE-9314, so now every time we delete a table we get a TableInfoMissingException+stack trace in the master log. Would you like this change reverted? [AccessController] Restrict HTableDescriptor enumeration Key: HBASE-8692 URL: https://issues.apache.org/jira/browse/HBASE-8692 Project: HBase Issue Type: Improvement Components: Coprocessors, security Affects Versions: 0.98.0, 0.95.1, 0.94.9 Reporter: Andrew Purtell Assignee: Andrew Purtell Fix For: 0.98.0, 0.95.2, 0.94.10 Attachments: 8692-0.94.patch, 8692-0.94.patch, 8692-0.94.patch, 8692-0.94.patch, 8692.patch, 8692.patch, 8692.patch, 8692.patch Some users are concerned about having table schema exposed to every user and would like it protected, similar to the rest of the admin operations for schema. This used to be hopeless because META would leak HTableDescriptors in HRegionInfo, but that is no longer the case in 0.94+. Consider adding CP hooks in the master for intercepting HMasterInterface#getHTableDescriptors and HMasterInterface#getHTableDescriptors(List<String>). Add support in the AccessController for only allowing GLOBAL ADMIN to the first method. Add support in the AccessController for allowing access to the descriptors for the table names in the list of the second method only if the user has TABLE ADMIN privilege for all of the listed table names. Then, fix the code in HBaseAdmin (and elsewhere) that expects to be able to enumerate all table descriptors, e.g. in deleteTable. A TABLE ADMIN can delete a table but won’t have GLOBAL ADMIN privilege to enumerate the total list. So a minor fixup is needed here, and in other places like this which make the same assumption. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
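The access rule proposed in the issue description above can be modeled as a small predicate: full enumeration requires GLOBAL ADMIN, while a named list needs TABLE ADMIN on every requested table. This is a hypothetical sketch of that policy only; the real checks live in HBase's AccessController coprocessor and use its own permission machinery, and all names here are illustrative.

```java
import java.util.List;
import java.util.Set;

// Hypothetical model of the descriptor-enumeration policy described above.
public class DescriptorAccessPolicy {
    final boolean globalAdmin;
    final Set<String> tableAdminOn; // tables where the user holds TABLE ADMIN

    DescriptorAccessPolicy(boolean globalAdmin, Set<String> tableAdminOn) {
        this.globalAdmin = globalAdmin;
        this.tableAdminOn = tableAdminOn;
    }

    // Enumerating *all* descriptors requires GLOBAL ADMIN.
    boolean mayListAll() {
        return globalAdmin;
    }

    // A named list is allowed only if the user holds TABLE ADMIN on every
    // requested table (GLOBAL ADMIN trivially qualifies).
    boolean mayList(List<String> tables) {
        return globalAdmin || tableAdminOn.containsAll(tables);
    }

    public static void main(String[] args) {
        DescriptorAccessPolicy p = new DescriptorAccessPolicy(false, Set.of("t1"));
        System.out.println(p.mayListAll());           // false
        System.out.println(p.mayList(List.of("t1"))); // true
    }
}
```

Under this model, deleteTable's fixup is exactly the second branch: a TABLE ADMIN asking only about its own table still passes `mayList`.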
[jira] [Commented] (HBASE-9314) Dropping a table always prints a TableInfoMissingException in the master log
[ https://issues.apache.org/jira/browse/HBASE-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752166#comment-13752166 ] Andrew Purtell commented on HBASE-9314: --- Would it be sufficient to remove this warning? Dropping a table always prints a TableInfoMissingException in the master log Key: HBASE-9314 URL: https://issues.apache.org/jira/browse/HBASE-9314 Project: HBase Issue Type: Improvement Affects Versions: 0.95.2, 0.94.10 Reporter: Jean-Daniel Cryans Priority: Minor Fix For: 0.98.0, 0.94.12, 0.96.0 Every time I drop a table I get the same stack trace in the master's log: {noformat} 2013-08-22 23:11:31,939 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Table 't' archived! 2013-08-22 23:11:31,939 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Removing 't' descriptor. 2013-08-22 23:11:31,940 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Marking 't' as deleted. 2013-08-22 23:11:31,944 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.zookeeper.lock.ZKInterProcessLockBase: Released /hbase/table-lock/t/write-master:602 2013-08-22 23:11:32,024 DEBUG [RpcServer.handler=0,port=6] org.apache.hadoop.hbase.util.FSTableDescriptors: Exception during readTableDecriptor. 
Current table name = t org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file under hdfs://jdec2hbase0403-1.vpc.cloudera.com:9000/hbase/data/default/t at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorAndModtime(FSTableDescriptors.java:503) at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorAndModtime(FSTableDescriptors.java:496) at org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:170) at org.apache.hadoop.hbase.master.HMaster.getTableDescriptors(HMaster.java:2629) at org.apache.hadoop.hbase.protobuf.generated.MasterMonitorProtos$MasterMonitorService$2.callBlockingMethod(MasterMonitorProtos.java:4634) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2156) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1861) 2013-08-22 23:11:32,024 WARN [RpcServer.handler=0,port=6] org.apache.hadoop.hbase.util.FSTableDescriptors: The following folder is in HBase's root directory and doesn't contain a table descriptor, do consider deleting it: t {noformat} But the operation completes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (HBASE-9314) Dropping a table always prints a TableInfoMissingException in the master log
[ https://issues.apache.org/jira/browse/HBASE-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reassigned HBASE-9314: - Assignee: Andrew Purtell
[jira] [Commented] (HBASE-8692) [AccessController] Restrict HTableDescriptor enumeration
[ https://issues.apache.org/jira/browse/HBASE-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752182#comment-13752182 ] Andrew Purtell commented on HBASE-8692: --- Never mind, I see HBASE-9314 marked as minor and will address this there. Apologies; when testing I had the minicluster logging at DEBUG in a terminal at 120x60 but missed it.
[jira] [Commented] (HBASE-9350) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException
[ https://issues.apache.org/jira/browse/HBASE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752187#comment-13752187 ] chendihao commented on HBASE-9350: -- Thanks for reviewing [~stack] :-) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException -- Key: HBASE-9350 URL: https://issues.apache.org/jira/browse/HBASE-9350 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Labels: test Fix For: 0.98.0, 0.95.0 Attachments: MoveRegionsOfTableAction.java.patch, MoveRegionsOfTableAction-v2.patch The first parameter in HBaseAdmin.move(final byte [] encodedRegionName, final byte [] destServerName) should be encoded. Otherwise, it could throw UnknownRegionException and result in failure of this action. {code} encodedRegionName The encoded region name; i.e. the hash that makes up the region name suffix: e.g. if regionname is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
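The javadoc quoted in the issue above can be illustrated with a small parser: for a region name of that new-style form, the encoded name is the hash between the last two dots, and that hash (not the full name) is what HBaseAdmin.move() expects as its first argument. The helper below is a hypothetical, simplified sketch; the real logic is in HBase's HRegionInfo, which also handles older name formats.

```java
public class EncodedName {
    // Extract the encoded-name suffix from a new-style region name, i.e.
    // the hash between the last two dots (mirrors the javadoc quoted above).
    static String encodedSuffix(String regionName) {
        int end = regionName.lastIndexOf('.');            // trailing dot
        int start = regionName.lastIndexOf('.', end - 1); // dot before the hash
        return regionName.substring(start + 1, end);
    }

    public static void main(String[] args) {
        String n = "TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396.";
        // This is what MoveRegionsOfTableAction should pass to HBaseAdmin.move():
        System.out.println(encodedSuffix(n)); // 527db22f95c8a9e0116f0cc13c680396
    }
}
```

Passing the full region name instead of this suffix is exactly the mistake that makes the server throw UnknownRegionException.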
[jira] [Created] (HBASE-9361) [0.92] TestDistributedLogSplitting#testThreeRSAbort fails occasionally
Andrew Purtell created HBASE-9361: - Summary: [0.92] TestDistributedLogSplitting#testThreeRSAbort fails occasionally Key: HBASE-9361 URL: https://issues.apache.org/jira/browse/HBASE-9361 Project: HBase Issue Type: Bug Affects Versions: 0.92.3 Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Minor The error: {noformat} java.lang.AssertionError at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertTrue(Assert.java:54) at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testThreeRSAbort(TestDistributedLogSplitting.java:132) {noformat} Here the test has aborted three regionservers but not all of them have terminated after 60 seconds. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9362) [0.92] TestColumnSeeking.testDuplicateVersions fails occasionally
Andrew Purtell created HBASE-9362: - Summary: [0.92] TestColumnSeeking.testDuplicateVersions fails occasionally Key: HBASE-9362 URL: https://issues.apache.org/jira/browse/HBASE-9362 Project: HBase Issue Type: Bug Affects Versions: 0.92.3 Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Minor An example failure: {noformat} java.lang.AssertionError: expected:<0> but was:<200> at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.hadoop.hbase.regionserver.TestColumnSeeking.testDuplicateVersions(TestColumnSeeking.java:160) {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9343) Implement stateless scanner for Stargate
[ https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752407#comment-13752407 ] Andrew Purtell commented on HBASE-9343: --- Set aside the old scanner stuff for a moment. I agree the changes to use streaming are good. Can this be done using the existing resource types and the new query parameters you are introducing, instead of also introducing ScanResource and {table}/scan? Implement stateless scanner for Stargate Key: HBASE-9343 URL: https://issues.apache.org/jira/browse/HBASE-9343 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9343_94.00.patch The current scanner implementation stores state and hence is not very suitable for REST server failure scenarios. The current JIRA proposes to implement a stateless scanner. In the first version of the patch, a new resource class ScanResource has been added and all the scan parameters will be specified as query params. The following are the scan parameters: startrow - The start row for the scan. endrow - The end row for the scan. columns - The columns to scan. starttime, endtime - To retrieve only columns within a specific range of version timestamps, both start and end time must be specified. maxversions - To limit the number of versions of each column to be returned. batchsize - To limit the maximum number of values returned for each call to next(). limit - The number of rows to return in the scan operation. More on the start row, end row and limit parameters: 1. If start row, end row and limit are not specified, then the whole table will be scanned. 2. If start row and limit (say N) are specified, then the scan operation will return N rows from the start row specified. 3. If only the limit parameter is specified, then the scan operation will return N rows from the start of the table. 4. 
If limit and end row are specified, then the scan operation will return N rows from the start of the table up to the end row. If the end row is reached before N rows (say M, with M < N), then M rows will be returned to the user. 5. If start row, end row and limit (say N) are specified and N < the number of rows between start row and end row, then N rows from the start row will be returned to the user. If N > the number of rows between start row and end row (say M), then M rows will be returned to the user. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
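Cases 4 and 5 in the description above both reduce to the same rule: the scan returns min(limit, rowsInRange) rows, where rowsInRange is however many rows lie between the effective start row and end row (or end of table). A one-line sketch of that invariant, with a hypothetical helper name:

```java
public class ScanLimit {
    // The limit semantics spelled out above: the scan returns whichever is
    // smaller, the requested limit N or the number of rows M actually in range.
    static int rowsReturned(int limit, int rowsInRange) {
        return Math.min(limit, rowsInRange);
    }

    public static void main(String[] args) {
        System.out.println(rowsReturned(10, 4));  // end row reached first: 4
        System.out.println(rowsReturned(3, 100)); // limit wins: 3
    }
}
```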
[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter
[ https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752446#comment-13752446 ] Jonathan Hsieh commented on HBASE-9359: --- https://reviews.apache.org/r/13884/
[jira] [Commented] (HBASE-9314) Dropping a table always prints a TableInfoMissingException in the master log
[ https://issues.apache.org/jira/browse/HBASE-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752523#comment-13752523 ] Jean-Daniel Cryans commented on HBASE-9314: --- Yeah, that might be OK. It seems this method is only called from the client side, so also printing it in the master log doesn't add much value.
[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter
[ https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752556#comment-13752556 ] stack commented on HBASE-9359: -- I suppose. Add them as deprecated, with a warning that they are expensive because they make a copy?
[jira] [Updated] (HBASE-9348) TerminatedWrapper error decoding, skipping skippable types
[ https://issues.apache.org/jira/browse/HBASE-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9348: - Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed yesterday TerminatedWrapper error decoding, skipping skippable types -- Key: HBASE-9348 URL: https://issues.apache.org/jira/browse/HBASE-9348 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9348-TerminatedWrapper-skippable-types-bug.patch When {{TerminatedWrapper}} wraps a type which {{isSkippable}}, it does not consider the terminator when updating the source buffer position after skipping or decoding a value. The tests only covered the non-skippable case.
[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter
[ https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752586#comment-13752586 ] Jonathan Hsieh commented on HBASE-9359: --- Yup, that's the plan. This is going to break applications, but ideally it will just be some trivial KeyValue-to-Cell conversion rather than a massive amount of work to avoid the array copies. Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter -- Key: HBASE-9359 URL: https://issues.apache.org/jira/browse/HBASE-9359 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Attachments: hbase-9359.patch This patch is the second half of eliminating KeyValue from the client interfaces. This percolated through quite a bit.
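The copy cost being discussed can be seen in miniature: a Cell-style accessor exposes the backing array plus offset/length for free, while a KeyValue-style convenience getter has to allocate a fresh array and copy into it. The `SliceCell` class below is a hypothetical stand-in for illustration, not HBase's actual Cell interface:

```java
import java.util.Arrays;

// Minimal stand-in for the Cell idea (hypothetical, not HBase's real interface):
// a cell is a view into a shared backing array, described by offset/length.
final class SliceCell {
    private final byte[] backing;
    private final int valueOffset;
    private final int valueLength;

    SliceCell(byte[] backing, int valueOffset, int valueLength) {
        this.backing = backing;
        this.valueOffset = valueOffset;
        this.valueLength = valueLength;
    }

    // Cheap accessors: no allocation, callers index into the shared array.
    byte[] getValueArray() { return backing; }
    int getValueOffset()   { return valueOffset; }
    int getValueLength()   { return valueLength; }

    // Convenience accessor: allocates and copies. This is the "expensive"
    // shape the deprecated KeyValue-style getters have.
    byte[] getValue() {
        return Arrays.copyOfRange(backing, valueOffset, valueOffset + valueLength);
    }
}
```

Deprecating the convenience form (rather than removing it) keeps old callers compiling while signaling the per-call allocation.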
[jira] [Updated] (HBASE-9334) Convert KeyValue to Cell in hbase-client module - Filters
[ https://issues.apache.org/jira/browse/HBASE-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-9334: -- Status: Patch Available (was: Open) Convert KeyValue to Cell in hbase-client module - Filters - Key: HBASE-9334 URL: https://issues.apache.org/jira/browse/HBASE-9334 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Attachments: hbase-9334.patch, hbase-9334.v2.patch The goal is to remove KeyValue from the publicly exposed API and require clients to use the cleaner, more encapsulated Cell API instead. For filters, this affects #filterKeyValue, #transform, #filterRow, and #getNextKeyHint. Since Cell is a base interface for KeyValue, changing these means that 0.94 apps may need a recompile but probably no modifications.
[jira] [Commented] (HBASE-9360) Enable 0.94 - 0.96 replication to minimize upgrade down time
[ https://issues.apache.org/jira/browse/HBASE-9360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752600#comment-13752600 ] Jeffrey Zhong commented on HBASE-9360: -- Thanks a lot for the inputs on this. I haven't tried loading both versions of HBase client in one JVM. Without #2, we still can use replication to minimize the upgrade down time to seconds. {code} Has anyone asked for it? {code} So far no one is asking for this. With 0.94-0.96 replication support, we can ease upgrade pain and encourage people to test 0.96 against shadow clusters of their 0.94 production env. Enable 0.94 - 0.96 replication to minimize upgrade down time - Key: HBASE-9360 URL: https://issues.apache.org/jira/browse/HBASE-9360 Project: HBase Issue Type: Brainstorming Components: migration Affects Versions: 0.98.0, 0.96.0 Reporter: Jeffrey Zhong As we know 0.96 is a singularity release; as of today a 0.94 hbase user has to do an in-place upgrade: make corresponding client changes, recompile client application code, fully shut down the existing 0.94 hbase cluster, deploy the 0.96 binary, run the upgrade script and then start the upgraded cluster. You can imagine the down time will be extended if something goes wrong in between. To minimize the down time, another possible way is to set up a secondary 0.96 cluster and then set up replication between the existing 0.94 cluster and the new 0.96 slave cluster. Once the 0.96 cluster is synced, a user can switch the traffic to the 0.96 cluster and decommission the old one.
The ideal steps will be: 1) Set up a 0.96 cluster 2) Set up replication between a running 0.94 cluster and the newly created 0.96 cluster 3) Wait till they're in sync in replication 4) Start duplicated writes to both the 0.94 and 0.96 clusters (could stop replication now) 5) Forward read traffic to the slave 0.96 cluster 6) After a certain period, stop writes to the original 0.94 cluster if everything is good and complete the upgrade To get us there, there are two tasks: 1) Enable replication from 0.94 to 0.96 I've run the idea by [~jdcryans], [~devaraj] and [~ndimiduk]. Currently it seems the best approach is to build a very similar service or build on top of https://github.com/NGDATA/hbase-indexer/tree/master/hbase-sep with support for three commands: replicateLogEntries, multi and delete. Inside the three commands, we just pass down the corresponding requests to the destination 0.96 cluster as a bridge. The reason to support multi and delete is for CopyTable to copy data from a 0.94 cluster to a 0.96 one. The other approach is to provide limited support for the 0.94 RPC protocol in 0.96. An issue with this is that a 0.94 client needs to talk to ZooKeeper first before it can connect to a 0.96 region server. Therefore, we would need a fake ZooKeeper setup in front of the 0.96 cluster for a 0.94 client to connect to. It may also pollute the 0.96 code base with 0.94 RPC code. 2) To support writes to a 0.96 cluster and a 0.94 one at the same time, we need to load both HBase clients into one single JVM using different class loaders. Let me know if you think this is worth doing and any better approach we could take. Thanks!
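Step 4 (duplicated writes) can be sketched independently of the classloader question: hide each cluster's client behind a common interface and mirror every write. `WriteSink` and `DualWriter` below are hypothetical names for illustration; in the proposal, the two implementations would wrap the 0.94 and 0.96 clients loaded via separate class loaders.

```java
// Hypothetical sketch of the dual-write step: every write goes to both
// clusters while reads are still served from one of them.
interface WriteSink {
    void put(String row, byte[] value);
}

final class DualWriter implements WriteSink {
    private final WriteSink primary;   // e.g. the 0.94 cluster, still source of truth
    private final WriteSink secondary; // e.g. the 0.96 cluster being brought in sync

    DualWriter(WriteSink primary, WriteSink secondary) {
        this.primary = primary;
        this.secondary = secondary;
    }

    @Override
    public void put(String row, byte[] value) {
        // Write the primary first, then mirror, so the old cluster never
        // falls behind the new one during the migration window.
        primary.put(row, value);
        secondary.put(row, value);
    }
}
```

Error handling (what to do when only the secondary write fails) is the interesting design question this sketch leaves open.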
[jira] [Updated] (HBASE-9334) Convert KeyValue to Cell in hbase-client module - Filters
[ https://issues.apache.org/jira/browse/HBASE-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-9334: -- Attachment: hbase-9334.v2.patch v2 still depends on HBASE-9247's hbase-9247.v2.patch. Adds deprecation hooks. Convert KeyValue to Cell in hbase-client module - Filters - Key: HBASE-9334 URL: https://issues.apache.org/jira/browse/HBASE-9334 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Attachments: hbase-9334.patch, hbase-9334.v2.patch The goal is to remove KeyValue from the publicly exposed API and require clients to use the cleaner, more encapsulated Cell API instead. For filters, this affects #filterKeyValue, #transform, #filterRow, and #getNextKeyHint. Since Cell is a base interface for KeyValue, changing these means that 0.94 apps may need a recompile but probably no modifications.
[jira] [Updated] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9230: - Attachment: 9230v7.txt Fix fun mockito issue Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt, 9230v3.txt, 9230v3.txt, 9230v4.txt, 9230v5.txt, 9230v6.txt, 9230v7.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server currently stands. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up the release but we should get it in to help the asynchbase conversion.
[jira] [Commented] (HBASE-9334) Convert KeyValue to Cell in hbase-client module - Filters
[ https://issues.apache.org/jira/browse/HBASE-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752613#comment-13752613 ] Jonathan Hsieh commented on HBASE-9334: --- Here's the key change for deprecation and compatibility (there are corresponding changes in FilterBase and FilterWrapper):
{code}
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java
index 253b24d..3c491d3 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java
@@ -25,6 +25,7 @@ import java.util.List;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 
 /**
@@ -192,6 +193,9 @@ public abstract class Filter {
   */
  abstract public boolean filterRow() throws IOException;
 
+  @Deprecated // use Cell getNextKeyHint(final Cell)
+  abstract public KeyValue getNextKeyHint(final KeyValue currentKV) throws IOException;
+
  /**
   * If the filter returns the match code SEEK_NEXT_USING_HINT, then it should also tell which is
   * the next key it must seek to. After receiving the match code SEEK_NEXT_USING_HINT, the
{code}
Convert KeyValue to Cell in hbase-client module - Filters - Key: HBASE-9334 URL: https://issues.apache.org/jira/browse/HBASE-9334 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Attachments: hbase-9334.patch, hbase-9334.v2.patch The goal is to remove KeyValue from the publicly exposed API and require clients to use the cleaner, more encapsulated Cell API instead. For filters, this affects #filterKeyValue, #transform, #filterRow, and #getNextKeyHint. Since Cell is a base interface for KeyValue, changing these means that 0.94 apps may need a recompile but probably no modifications.
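The compatibility pattern in this patch, keeping both signatures with one delegating to the other, can be sketched with minimal stand-in types. `Cell`, `KeyValue`, and `FilterBaseSketch` below are illustrative, not HBase's real classes, and which method delegates to which is a real design decision (it is the detail the follow-up patches on this issue revisit):

```java
// Sketch of keeping both the deprecated KeyValue signature and the new Cell
// signature on an extension point: the abstract helper implements one in
// terms of the other, so old subclasses keep compiling while new subclasses
// override only the Cell form. All types here are toy stand-ins.
interface Cell { String key(); }

final class KeyValue implements Cell {
    private final String key;
    KeyValue(String key) { this.key = key; }
    @Override public String key() { return key; }
}

abstract class FilterBaseSketch {
    // New API: default behavior, overridable by new-style subclasses.
    public Cell getNextCellHint(Cell current) {
        return current;
    }

    // Deprecated API: old-style callers funnel into the new method. A
    // KeyValue result may have to be materialized by copying, which is why
    // the old signature is the expensive one to keep around.
    @Deprecated
    public KeyValue getNextKeyHint(KeyValue current) {
        Cell hint = getNextCellHint(current);
        return (hint instanceof KeyValue) ? (KeyValue) hint : new KeyValue(hint.key());
    }
}
```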
[jira] [Updated] (HBASE-9334) Convert KeyValue to Cell in hbase-client module - Filters
[ https://issues.apache.org/jira/browse/HBASE-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-9334: -- Attachment: hbase-9334.v3.patch v3 fixes how we handle deprecation -- I had it backwards in the v2 version. Convert KeyValue to Cell in hbase-client module - Filters - Key: HBASE-9334 URL: https://issues.apache.org/jira/browse/HBASE-9334 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Attachments: hbase-9334.patch, hbase-9334.v2.patch, hbase-9334.v3.patch The goal is to remove KeyValue from the publicly exposed API and require clients to use the cleaner, more encapsulated Cell API instead. For filters, this affects #filterKeyValue, #transform, #filterRow, and #getNextKeyHint. Since Cell is a base interface for KeyValue, changing these means that 0.94 apps may need a recompile but probably no modifications.
[jira] [Commented] (HBASE-9333) hbase.hconnection.threads.max should not be configurable else you get RejectedExecutionException
[ https://issues.apache.org/jira/browse/HBASE-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752648#comment-13752648 ] stack commented on HBASE-9333: -- Can we introduce a queue so we don't do this RejectedExecutionException? Seems like we should bound outstanding executors and only do rejected if the queue gets really big (size or count) hbase.hconnection.threads.max should not be configurable else you get RejectedExecutionException Key: HBASE-9333 URL: https://issues.apache.org/jira/browse/HBASE-9333 Project: HBase Issue Type: Improvement Affects Versions: 0.95.2 Reporter: Jean-Daniel Cryans Fix For: 0.98.0, 0.96.0 Trying to set hbase.hconnection.threads.max to a lower number than its default of Integer.MAX_VALUE simply results in a RejectedExecutionException when the max is reached. It seems there's no good reason to keep this configurable.
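stack's suggestion maps onto a standard `ThreadPoolExecutor` configuration: a bounded thread count, a bounded queue for overflow, and a rejection policy that only kicks in once the queue itself is full. The sizes and the `CallerRunsPolicy` choice below are illustrative, not HBase's actual settings:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: bound worker threads, queue the overflow, and degrade gracefully
// (run in the caller's thread) instead of throwing RejectedExecutionException
// the moment the thread cap is hit.
final class BoundedPool {
    static ThreadPoolExecutor create(int maxThreads, int queueCapacity) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            maxThreads, maxThreads,          // fixed-size pool
            60L, TimeUnit.SECONDS,           // idle threads time out...
            new LinkedBlockingQueue<>(queueCapacity),
            // Only when the queue is ALSO full does back-pressure apply;
            // CallerRunsPolicy is one choice, a plain rejection is another.
            new ThreadPoolExecutor.CallerRunsPolicy());
        pool.allowCoreThreadTimeOut(true);   // ...including core threads
        return pool;
    }
}
```

With `LinkedBlockingQueue` the pool never grows past `maxThreads`; tasks wait in the queue, so a small thread cap no longer implies immediate rejection.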
[jira] [Updated] (HBASE-9302) Column family and qualifier should be allowed to be set as null in grant shell command
[ https://issues.apache.org/jira/browse/HBASE-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9302: - Resolution: Fixed Status: Resolved (was: Patch Available) This was committed a while back. Column family and qualifier should be allowed to be set as null in grant shell command -- Key: HBASE-9302 URL: https://issues.apache.org/jira/browse/HBASE-9302 Project: HBase Issue Type: Bug Affects Versions: 0.92.3, 0.98.0, 0.96.0 Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0, 0.96.0 Attachments: 9302.txt In 0.94, grant.rb has the following:
{code}
Grant users specific rights.
Syntax : grant <user> <permissions> [<table> [<column family> [<column qualifier>]]]
{code}
In 0.95.2, when I tried to grant permission on a table to user hrt_1, I got an exception:
{code}
hbase(main):003:0> grant 'hrt_1', 'R', 't1'

ERROR: java.lang.NullPointerException: null

Here is some help for this command:
Grant users specific rights.
Syntax : grant <user> <permissions> [<table> [<column family> [<column qualifier>]]]

permissions is either zero or more letters from the set RWXCA.
READ('R'), WRITE('W'), EXEC('X'), CREATE('C'), ADMIN('A')

For example:

    hbase> grant 'bobsmith', 'RWXCA'
    hbase> grant 'bobsmith', 'RW', 't1', 'f1', 'col1'
{code}
Column family and qualifier should be allowed to be set as null in grant shell command
[jira] [Updated] (HBASE-9313) NamespaceJanitor is spammy when the namespace table moves
[ https://issues.apache.org/jira/browse/HBASE-9313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9313: - Assignee: Jean-Daniel Cryans Hadoop Flags: Reviewed Status: Patch Available (was: Open) You going to commit [~jdcryans]? NamespaceJanitor is spammy when the namespace table moves - Key: HBASE-9313 URL: https://issues.apache.org/jira/browse/HBASE-9313 Project: HBase Issue Type: Improvement Affects Versions: 0.95.2 Reporter: Jean-Daniel Cryans Assignee: Jean-Daniel Cryans Fix For: 0.98.0, 0.96.0 Attachments: HBASE-9313.patch Although region movements are part of a healthy HBase lifestyle, the NamespaceJanitor WARNs about it:
{noformat}
2013-08-22 22:35:48,872 WARN [NamespaceJanitor-jdec2hbase0403-1:6] org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception, tries=0, retries=350, retryTime=-4ms
org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: 640d4b4d9432f23f1638700217d34764
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
  at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
  at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
  at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:235)
  at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:300)
  at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:148)
  at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:57)
  at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
  at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:98)
  at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:239)
  at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:153)
  at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:100)
  at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:696)
  at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:707)
  at org.apache.hadoop.hbase.master.TableNamespaceManager.list(TableNamespaceManager.java:185)
  at org.apache.hadoop.hbase.master.HMaster.listNamespaceDescriptors(HMaster.java:3149)
  at org.apache.hadoop.hbase.master.NamespaceJanitor.removeOrphans(NamespaceJanitor.java:102)
  at org.apache.hadoop.hbase.master.NamespaceJanitor.chore(NamespaceJanitor.java:86)
  at org.apache.hadoop.hbase.Chore.run(Chore.java:80)
  at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: 640d4b4d9432f23f1638700217d34764
  at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2565)
  at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3927)
  at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3004)
  at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26847)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2156)
  at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1861)
  at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1426)
  at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1630)
  at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1687)
  at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:27303)
  at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:291)
  ... 15 more
{noformat}
This should not be printed.
[jira] [Updated] (HBASE-9299) Generate the protobuf classes with hadoop-maven-plugin
[ https://issues.apache.org/jira/browse/HBASE-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9299: - Resolution: Fixed Release Note: Adds in reference to the hadoop-maven-plugin (commented out). Improved documentation around how to generate protobuf classes. Status: Resolved (was: Patch Available) Committed to 0.95 and trunk. Thanks for the patch [~echarles] Generate the protobuf classes with hadoop-maven-plugin -- Key: HBASE-9299 URL: https://issues.apache.org/jira/browse/HBASE-9299 Project: HBase Issue Type: New Feature Reporter: Eric Charles Assignee: Eric Charles Fix For: 0.98.0, 0.96.0 Attachments: HBASE-9299.patch For now, the protobuf classes are generated once by a dev and put in src/main/resource. This allows other devs to build without having the correct protoc version available on their machine. However, when a dev wants to modify the protoc messages, he has to know how to generate the classes. This could be documented... Another approach would be to put a harder requirement on the hbase developers (protoc available) and let the hadoop-maven-plugin (http://central.maven.org/maven2/org/apache/hadoop/hadoop-maven-plugins/2.0.5-alpha) do the work (I have bad experience with other maven protobuf plugins; the hadoop one works just out of the box). I don't think asking to install protoc to build hbase is so difficult, but it's an additional step between the dev and the artifact. The advantage would be to allow different protobuf versions for different hbase distributions (perfectly possible but quite theoretical). So option 1: We are happy to keep the classes in src/main/java option 2: We want to move to hadoop-maven-plugin option 3: I may be short of ideas... any other input?
[jira] [Commented] (HBASE-9343) Implement stateless scanner for Stargate
[ https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752662#comment-13752662 ] Francis Liu commented on HBASE-9343: I think there are two other options: 1. GET /table 2. GET /table/scanner #2 is not good since it just convolutes the resource. #1 is more intuitive tho it might be accidentally invoked when users are constructing/playing with URIs. Implement stateless scanner for Stargate Key: HBASE-9343 URL: https://issues.apache.org/jira/browse/HBASE-9343 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9343_94.00.patch The current scanner implementation stores state and hence is not very suitable for REST server failure scenarios. This JIRA proposes to implement a stateless scanner. In the first version of the patch, a new resource class ScanResource has been added and all the scan parameters will be specified as query params. The following are the scan parameters: startrow - The start row for the scan. endrow - The end row for the scan. columns - The columns to scan. starttime, endtime - To only retrieve columns within a specific range of version timestamps, both start and end time must be specified. maxversions - To limit the number of versions of each column to be returned. batchsize - To limit the maximum number of values returned for each call to next(). limit - The number of rows to return in the scan operation. More on start row, end row and limit parameters. 1. If start row, end row and limit are not specified, then the whole table will be scanned. 2. If start row and limit (say N) are specified, then the scan operation will return N rows from the start row specified. 3. If only the limit parameter is specified, then the scan operation will return N rows from the start of the table. 4.
If limit and end row are specified, then the scan operation will return N rows from the start of the table till the end row. If the end row is reached before N rows (say M, and M < N), then M rows will be returned to the user. 5. If start row, end row and limit (say N) are specified and N < the number of rows between start row and end row, then N rows from the start row will be returned to the user. If N > the number of rows between start row and end row (say M), then M rows will be returned to the user.
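Since every scan parameter travels as a query parameter, a stateless scan request is just a URL. The sketch below shows one way a client might assemble it; the `ScanUrl` helper and the base path are hypothetical, while the parameter names are the ones listed in the description:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical client-side helper: encode the scan parameters from the
// proposal (startrow, endrow, columns, limit, ...) into a query string, so
// no server-side scanner state has to survive between calls.
final class ScanUrl {
    static String build(String base, String table, Map<String, String> params) {
        String query = params.entrySet().stream()
            .map(e -> e.getKey() + "=" +
                 URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
            .collect(Collectors.joining("&"));
        return base + "/" + table + "?" + query;
    }
}
```

Because the request is self-describing, a retry after a REST server failure can go to any server with no scanner-id handshake, which is the point of the proposal.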
[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter
[ https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752683#comment-13752683 ] Jonathan Hsieh commented on HBASE-9359: --- In v1, basically, any method whose return type changes will require apps to do some code changes. Methods where arguments change from KeyValue to Cell should still work since Cell can accept KeyValues. I'm leaning towards making sure the commonly used but inefficient KeyValue methods (including #getQualifier, #getFamily, #getValue, and #getRow) get ported into the Cell interface. Clients would essentially only have to replace KeyValue with Cell in these cases.
{code}
Put:
- public List<KeyValue> get(byte[] family, byte[] qualifier)
+ public List<Cell> get(byte[] family, byte[] qualifier)

Result:
- public KeyValue[] raw() {
+ public Cell[] raw() {
- public List<KeyValue> list() {
+ public List<Cell> list() {
- public List<KeyValue> getColumn(byte [] family, byte [] qualifier) {
+ public List<Cell> getColumn(byte [] family, byte [] qualifier) {
- public KeyValue getColumnLatest(byte [] family, byte [] qualifier) {
+ public Cell getColumnLatest(byte [] family, byte [] qualifier) {
- public KeyValue getColumnLatest(byte [] family, int foffset, int flength,
+ public Cell getColumnLatest(byte [] family, int foffset, int flength,
    byte [] qualifier, int qoffset, int qlength) {
{code}
For extension interfaces that have changed signatures (like filters in HBASE-9334 and, in here, coprocessors) we can keep both the old and new signature and have the abstract implementation helper have the new call the old. For the shim to handle the List<KeyValue> -> List<Cell> conversion, I'm going to use a naive array copy. (Another option is to change the signature to List<? extends Cell> -- will look at this option one more time).
{code}
ColumnInterpreter: (abstract class)
- public abstract T getValue(byte[] colFamily, byte[] colQualifier, KeyValue kv)
+ public abstract T getValue(byte[] colFamily, byte[] colQualifier, Cell kv)

BaseRegionObserver: (abstract class)
RegionObserver: (interface)
 void preGet(final ObserverContext<RegionCoprocessorEnvironment> c, final Get get,
-    final List<KeyValue> result)
+    final List<Cell> result)
     throws IOException;
 void postGet(final ObserverContext<RegionCoprocessorEnvironment> c, final Get get,
-    final List<KeyValue> result)
+    final List<Cell> result)
     throws IOException;
{code}
Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter -- Key: HBASE-9359 URL: https://issues.apache.org/jira/browse/HBASE-9359 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Attachments: hbase-9359.patch This patch is the second half of eliminating KeyValue from the client interfaces. This percolated through quite a bit.
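The "naive array copy" shim and the `List<? extends Cell>` alternative mentioned above can both be sketched with toy types (`Cell` and `KeyValue` below are minimal stand-ins, not the real HBase classes):

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-ins: KeyValue is a subtype of Cell, mirroring the real hierarchy.
interface Cell {}
final class KeyValue implements Cell {}

final class ListShim {
    // The naive copy: a List<KeyValue> is NOT a List<Cell> in Java's type
    // system, so bridging the old and new signatures means an O(n) copy of
    // the references (no deep copy of the cells themselves).
    static List<Cell> toCells(List<KeyValue> kvs) {
        return new ArrayList<>(kvs);
    }

    // The alternative Jonathan mentions: widen the consumer's signature to
    // List<? extends Cell>, which accepts both lists with no copy at all.
    static int count(List<? extends Cell> cells) {
        return cells.size();
    }
}
```

The wildcard version avoids the copy but changes the public signature, which is exactly the trade-off under discussion.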
[jira] [Comment Edited] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter
[ https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752683#comment-13752683 ] Jonathan Hsieh edited comment on HBASE-9359 at 8/28/13 6:30 PM: In v1, basically, any method whose return type changes will require apps to do some code changes. Methods where arguments change from KeyValue to Cell should still work since Cell can accept KeyValues. I'm leaning towards making sure the commonly used but inefficient KeyValue methods (including #getQualifier, #getFamily, #getValue, and #getRow) get ported into the Cell interface. Clients would essentially only have to replace KeyValue with Cell in these cases.
{code}
Put:
- public List<KeyValue> get(byte[] family, byte[] qualifier)
+ public List<Cell> get(byte[] family, byte[] qualifier)

Result:
- public KeyValue[] raw() {
+ public Cell[] raw() {
- public List<KeyValue> list() {
+ public List<Cell> list() {
- public List<KeyValue> getColumn(byte [] family, byte [] qualifier) {
+ public List<Cell> getColumn(byte [] family, byte [] qualifier) {
- public KeyValue getColumnLatest(byte [] family, byte [] qualifier) {
+ public Cell getColumnLatest(byte [] family, byte [] qualifier) {
- public KeyValue getColumnLatest(byte [] family, int foffset, int flength,
+ public Cell getColumnLatest(byte [] family, int foffset, int flength,
    byte [] qualifier, int qoffset, int qlength) {
{code}
For extension interfaces that have changed signatures (like filters in HBASE-9334 and, in here, coprocessors) we can keep both the old and new signature and have the abstract implementation helper have the new call the old. For the shim to handle the List<KeyValue> -> List<Cell> conversion, I'm going to use a naive array copy. (Another option is to change the signature to List<? extends Cell> -- will look at this option one more time).
{code}
ColumnInterpreter: (abstract class)
- public abstract T getValue(byte[] colFamily, byte[] colQualifier, KeyValue kv)
+ public abstract T getValue(byte[] colFamily, byte[] colQualifier, Cell kv)

BaseRegionObserver: (abstract class)
RegionObserver: (interface)
 void preGet(final ObserverContext<RegionCoprocessorEnvironment> c, final Get get,
-    final List<KeyValue> result)
+    final List<Cell> result)
     throws IOException;
 void postGet(final ObserverContext<RegionCoprocessorEnvironment> c, final Get get,
-    final List<KeyValue> result)
+    final List<Cell> result)
     throws IOException;
{code}
was (Author: jmhsieh): In v1, basically, any method whose return type will require apps to do some code changes. Methods where arguments change from KeyValue to Cell should still work since Cell can accept KeyValues. I'm leaning towards making sure the commonly used but inefficient KeyValue methods (including #getQualifier, #getFamily, #getValue, and #getRow) get ported into the Cell interface. Clients would essentially only have to replace KeyValue with Cell in these cases.
{code}
Put:
- public List<KeyValue> get(byte[] family, byte[] qualifier)
+ public List<Cell> get(byte[] family, byte[] qualifier)

Result:
- public KeyValue[] raw() {
+ public Cell[] raw() {
- public List<KeyValue> list() {
+ public List<Cell> list() {
- public List<KeyValue> getColumn(byte [] family, byte [] qualifier) {
+ public List<Cell> getColumn(byte [] family, byte [] qualifier) {
- public KeyValue getColumnLatest(byte [] family, byte [] qualifier) {
+ public Cell getColumnLatest(byte [] family, byte [] qualifier) {
- public KeyValue getColumnLatest(byte [] family, int foffset, int flength,
+ public Cell getColumnLatest(byte [] family, int foffset, int flength,
    byte [] qualifier, int qoffset, int qlength) {
{code}
For extension interfaces that have changed signatures (like filters in HBASE-9334 and, in here, coprocessors) we can keep both the old and new signature and have the abstract implementation helper have the new call the old. For the shim to handle the List<KeyValue> -> List<Cell> conversion, I'm going to use a naive array copy. (Another option is to change the signature to List<? extends Cell> -- will look at this option one more time).
{code}
ColumnInterpreter: (abstract class)
- public abstract T getValue(byte[] colFamily, byte[] colQualifier, KeyValue kv)
+ public abstract T getValue(byte[] colFamily, byte[] colQualifier, Cell kv)

BaseRegionObserver: (abstract class)
RegionObserver: (interface)
 void preGet(final ObserverContext<RegionCoprocessorEnvironment> c, final Get get,
-    final List<KeyValue> result)
+    final List<Cell> result)
     throws IOException;
 void postGet(final ObserverContext<RegionCoprocessorEnvironment> c, final Get get,
-    final List<KeyValue> result)
+    final List<Cell> result)
     throws IOException;
{code}
Convert KeyValue to Cell in hbase-client module -
[jira] [Updated] (HBASE-9283) Struct and StructIterator should properly handle trailing nulls
[ https://issues.apache.org/jira/browse/HBASE-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9283: - Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to 0.95 and trunk. Thanks for the patch Nick. Struct and StructIterator should properly handle trailing nulls --- Key: HBASE-9283 URL: https://issues.apache.org/jira/browse/HBASE-9283 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9283-Struct-trailing-null-handling.patch, 0001-HBASE-9283-Struct-trailing-null-handling.patch, 0001-HBASE-9283-Struct-trailing-null-handling.patch For a composite row key, Phoenix strips off trailing null column values in the row key. The reason this is important is that new nullable row key columns can then be added to a schema without requiring any data upgrade to existing rows. Otherwise, adding new row key columns to the end of a schema becomes extremely cumbersome, as you'd need to delete all existing rows and add them back with a row key that includes a null value. Rather than Phoenix needing to modify the iteration code everywhere (as [~ndimiduk] outlined here: https://issues.apache.org/jira/browse/HBASE-8693?focusedCommentId=13744499&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13744499), it'd be better if StructIterator handled this out-of-the-box. Otherwise, if Phoenix has to specialize this, we'd lose the interop piece which is the justification for switching our type system to this new one in the first place. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9363) REST/Thrift web UI is not working any more
Jimmy Xiang created HBASE-9363: -- Summary: REST/Thrift web UI is not working any more Key: HBASE-9363 URL: https://issues.apache.org/jira/browse/HBASE-9363 Project: HBase Issue Type: Bug Components: REST, Thrift, UI Affects Versions: 0.96.0 Reporter: Jimmy Xiang REST/Thrift web UI is not working properly any more. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9334) Convert KeyValue to Cell in hbase-client module - Filters
[ https://issues.apache.org/jira/browse/HBASE-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752693#comment-13752693 ] Hadoop QA commented on HBASE-9334: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600416/hbase-9334.v2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 6 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6945//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6945//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6945//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6945//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6945//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6945//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6945//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6945//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6945//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6945//console This message is automatically generated. Convert KeyValue to Cell in hbase-client module - Filters - Key: HBASE-9334 URL: https://issues.apache.org/jira/browse/HBASE-9334 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Attachments: hbase-9334.patch, hbase-9334.v2.patch, hbase-9334.v3.patch The goal is to remove KeyValue from the publicly exposed API and require clients to use the cleaner, more encapsulated Cell API instead. For filters, this affects #filterKeyValue, #transform, #filterRow, and #getNextKeyHint. 
Since Cell is a base interface for KeyValue, changing these means that 0.94 apps may need a recompile but probably no modifications. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
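The "keep both the old and new signature" compatibility idea behind this change can be sketched as below. All type and method names here are simplified stand-ins (the real code lives in org.apache.hadoop.hbase.filter); the pattern is just that the new Cell-based hook's default implementation funnels into the legacy hook, so old-style subclasses keep working after a recompile.

```java
// Hypothetical stand-ins for the HBase types under discussion.
interface Cell {}
class KeyValue implements Cell {}

abstract class CompatFilterBase {
  enum ReturnCode { INCLUDE, SKIP }

  // New signature: callers now pass Cells.
  ReturnCode filterCell(Cell c) {
    // Default: delegate to the legacy hook. A real shim would need a
    // Cell -> KeyValue conversion for cells that are not KeyValue-backed.
    return filterLegacy((KeyValue) c);
  }

  // Old signature, kept so 0.94-era subclasses still take effect.
  ReturnCode filterLegacy(KeyValue kv) {
    return ReturnCode.INCLUDE;
  }
}
```

A subclass that only overrides the legacy method is still invoked through the new entry point, which is what makes "recompile but probably no modifications" plausible.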
[jira] [Assigned] (HBASE-9363) REST/Thrift web UI is not working any more
[ https://issues.apache.org/jira/browse/HBASE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang reassigned HBASE-9363: -- Assignee: Jimmy Xiang REST/Thrift web UI is not working any more --- Key: HBASE-9363 URL: https://issues.apache.org/jira/browse/HBASE-9363 Project: HBase Issue Type: Bug Components: REST, Thrift, UI Affects Versions: 0.96.0 Reporter: Jimmy Xiang Assignee: Jimmy Xiang REST/Thrift web UI is not working properly any more. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9363) REST/Thrift web UI is not working any more
[ https://issues.apache.org/jira/browse/HBASE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-9363: --- Attachment: rest-web-ui.png REST/Thrift web UI is not working any more --- Key: HBASE-9363 URL: https://issues.apache.org/jira/browse/HBASE-9363 Project: HBase Issue Type: Bug Components: REST, Thrift, UI Affects Versions: 0.96.0 Reporter: Jimmy Xiang Assignee: Jimmy Xiang Attachments: rest-web-ui.png REST/Thrift web UI is not working properly any more. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9208) ReplicationLogCleaner slow at large scale
[ https://issues.apache.org/jira/browse/HBASE-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752701#comment-13752701 ] stack commented on HBASE-9208: -- Will commit today unless objection. [~lhofhansl] Ok to add to 0.94? ReplicationLogCleaner slow at large scale - Key: HBASE-9208 URL: https://issues.apache.org/jira/browse/HBASE-9208 Project: HBase Issue Type: Improvement Components: Replication Reporter: Dave Latham Assignee: Dave Latham Fix For: 0.94.12, 0.96.0 Attachments: HBASE-9208-0.94.patch, HBASE-9208-0.94-v2.patch, HBASE-9208.patch, HBASE-9208-v2.patch, HBASE-9208-v3.patch At a large scale the ReplicationLogCleaner fails to clean up .oldlogs as fast as the cluster is producing them. For each old HLog file that has been replicated and should be deleted the ReplicationLogCleaner checks every replication queue in ZooKeeper before removing it. This means that as a cluster scales up the number of files to delete scales as well as the time to delete each file so the cleanup chore scales quadratically. In our case it reached the point where the oldlogs were growing faster than they were being cleaned up. We're now running with a patch that allows the ReplicationLogCleaner to refresh its list of files in the replication queues from ZooKeeper just once for each batch of files the CleanerChore wants to evaluate. I'd propose updating FileCleanerDelegate to take a List<FileStatus> rather than a single one at a time. This would allow file cleaners that check an external resource for references such as ZooKeeper (for ReplicationLogCleaner) or HDFS (for SnapshotLogCleaner which looks like it may also have similar trouble at scale) to load those references once per batch rather than for every log. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
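The batch-oriented delegate proposed above (and the wrapper idea raised later in the thread) could be sketched like this. FileStatus, FileCleanerDelegate, and the method names are simplified stand-ins, not the actual HBase/Hadoop API.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for org.apache.hadoop.fs.FileStatus.
class FileStatus {
  final String path;
  FileStatus(String path) { this.path = path; }
}

// Old per-file contract: one external lookup per candidate file.
interface FileCleanerDelegate {
  boolean isLogDeletable(FileStatus f);
}

// Proposed batch contract: one call per CleanerChore batch, so the
// delegate can refresh ZooKeeper/HDFS state once instead of per file.
interface BatchFileCleanerDelegate extends FileCleanerDelegate {
  List<FileStatus> filterDeletable(List<FileStatus> candidates);
}

// Wrapper that lets an old per-file delegate keep working under the
// batch API, as suggested for backward compatibility.
class BatchAdapter implements BatchFileCleanerDelegate {
  private final FileCleanerDelegate delegate;
  BatchAdapter(FileCleanerDelegate delegate) { this.delegate = delegate; }

  public boolean isLogDeletable(FileStatus f) {
    return delegate.isLogDeletable(f);
  }

  public List<FileStatus> filterDeletable(List<FileStatus> candidates) {
    List<FileStatus> deletable = new ArrayList<>();
    for (FileStatus f : candidates) {
      if (delegate.isLogDeletable(f)) deletable.add(f);
    }
    return deletable;
  }
}
```

The wrapper preserves the old semantics exactly; only delegates that opt into the batch interface gain the once-per-batch refresh.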
[jira] [Commented] (HBASE-8701) distributedLogReplay need to apply wal edits in the receiving order of those edits
[ https://issues.apache.org/jira/browse/HBASE-8701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752704#comment-13752704 ] stack commented on HBASE-8701: -- What's up w/ this patch? Just needs more review? Any testing [~jeffreyz]? distributedLogReplay need to apply wal edits in the receiving order of those edits -- Key: HBASE-8701 URL: https://issues.apache.org/jira/browse/HBASE-8701 Project: HBase Issue Type: Bug Components: MTTR Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Fix For: 0.98.0, 0.96.0 Attachments: 8701-v3.txt, hbase-8701-v4.patch, hbase-8701-v5.patch, hbase-8701-v6.patch, hbase-8701-v7.patch, hbase-8701-v8.patch This issue happens in distributedLogReplay mode when recovering multiple puts of the same key + version (timestamp). After replay, the value of the key is nondeterministic. h5. The original concern situation raised by [~eclark]: For all edits the rowkey is the same. There's a log with: [ A (ts = 0), B (ts = 0) ] Replay the first half of the log. A user puts in C (ts = 0) Memstore has to flush A new Hfile will be created with [ C, A ] and MaxSequenceId = C's seqid. Replay the rest of the Log. Flush The issue will happen in similar situations, like Put(key, t=T) in WAL1 and Put(key, t=T) in WAL2. h5. Below is the option (proposed by Ted) I'd like to use: a) During replay, we pass the original wal sequence number of each edit to the receiving RS b) In the receiving RS, we store the negative original sequence number of wal edits into the mvcc field of the KVs of wal edits c) Add handling of negative MVCC in KVScannerComparator and KVComparator d) In the receiving RS, write the original sequence number into an optional field of the wal file for the chained RS failure situation e) When opening a region, we add a safety bumper (a large number) in order for the new sequence number of a newly opened region not to collide with old sequence numbers. 
In the future, when we store sequence numbers along with KVs, we can adjust the above solution a little bit by avoiding overloading the MVCC field. h5. The other alternative options are listed below for reference: Option one a) disallow writes during recovery b) during replay, we pass original wal sequence ids c) hold flush till all wals of a recovering region are replayed. Memstore should hold because we only recover unflushed wal edits. For edits with the same key + version, whichever has the larger sequence id wins. Option two a) During replay, we pass original wal sequence ids b) for each wal edit, we store each edit's original sequence id along with its key. c) during scanning, we use the original sequence id if it's present, otherwise its store file sequence id d) compaction can just keep the put with the max sequence id Please let me know if you have better ideas. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
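One possible reading of steps (b) and (c) in the proposed option, purely as a sketch and not the committed HBase logic: if the mvcc field holds the negated original WAL sequence number for replayed edits, the comparator recovers the magnitude before ordering, so the edit with the larger original sequence still sorts as the newer one.

```java
class MvccCompare {
  // Undo the "negative original sequence number" encoding described
  // in step (b): a negative mvcc marks a replayed edit.
  static long effectiveSeq(long mvcc) {
    return mvcc < 0 ? -mvcc : mvcc;
  }

  // Newer edit (larger effective sequence) sorts first, mirroring the
  // descending-mvcc ordering a scanner comparator would want.
  static int compare(long a, long b) {
    return Long.compare(effectiveSeq(b), effectiveSeq(a));
  }
}
```

Under this scheme, two replayed puts of the same key + timestamp order by their original WAL sequence, which is exactly the determinism the issue asks for.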
[jira] [Commented] (HBASE-9208) ReplicationLogCleaner slow at large scale
[ https://issues.apache.org/jira/browse/HBASE-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752705#comment-13752705 ] Jean-Daniel Cryans commented on HBASE-9208: --- +1, I'll also give it a spin on 0.95 today. ReplicationLogCleaner slow at large scale - Key: HBASE-9208 URL: https://issues.apache.org/jira/browse/HBASE-9208 Project: HBase Issue Type: Improvement Components: Replication Reporter: Dave Latham Assignee: Dave Latham Fix For: 0.94.12, 0.96.0 Attachments: HBASE-9208-0.94.patch, HBASE-9208-0.94-v2.patch, HBASE-9208.patch, HBASE-9208-v2.patch, HBASE-9208-v3.patch At a large scale the ReplicationLogCleaner fails to clean up .oldlogs as fast as the cluster is producing them. For each old HLog file that has been replicated and should be deleted the ReplicationLogCleaner checks every replication queue in ZooKeeper before removing it. This means that as a cluster scales up the number of files to delete scales as well as the time to delete each file so the cleanup chore scales quadratically. In our case it reached the point where the oldlogs were growing faster than they were being cleaned up. We're now running with a patch that allows the ReplicationLogCleaner to refresh its list of files in the replication queues from ZooKeeper just once for each batch of files the CleanerChore wants to evaluate. I'd propose updating FileCleanerDelegate to take a List<FileStatus> rather than a single one at a time. This would allow file cleaners that check an external resource for references such as ZooKeeper (for ReplicationLogCleaner) or HDFS (for SnapshotLogCleaner which looks like it may also have similar trouble at scale) to load those references once per batch rather than for every log. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9364) Get request with multiple columns returns partial results
Vandana Ayyalasomayajula created HBASE-9364: --- Summary: Get request with multiple columns returns partial results Key: HBASE-9364 URL: https://issues.apache.org/jira/browse/HBASE-9364 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor When a GET request is issued for a table row with multiple columns and some columns have an empty qualifier like f1:, results for empty qualifiers are being ignored. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8462) Custom timestamps should not be allowed to be negative
[ https://issues.apache.org/jira/browse/HBASE-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-8462: - Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed for [~enis] Custom timestamps should not be allowed to be negative -- Key: HBASE-8462 URL: https://issues.apache.org/jira/browse/HBASE-8462 Project: HBase Issue Type: Bug Components: Client Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.0 Attachments: hbase-8462_v1.patch, hbase-8462_v2.patch, hbase-8462_v3.patch Client-supplied timestamps should not be allowed to be negative, otherwise unpredictable results will follow. Especially, since we are encoding the ts using Bytes.toBytes(long), negative timestamps are sorted after positive ones. Plus, the new PB messages define timestamps as uint64. Credit goes to Huned Lokhandwala for reporting this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
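The sorting problem described above is easy to demonstrate without HBase: a big-endian long encoding (what Bytes.toBytes(long) produces) compared as unsigned bytes puts any negative timestamp after every non-negative one, because the sign bit makes the first byte 0x80 or higher. A small self-contained illustration using only the JDK:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

class TsOrder {
  // Big-endian encoding of a long, matching the byte layout that
  // Bytes.toBytes(long) produces.
  static byte[] enc(long v) {
    return ByteBuffer.allocate(8).putLong(v).array();
  }

  public static void main(String[] args) {
    // Unsigned lexicographic comparison is how raw key bytes are
    // ordered: enc(-1) starts with 0xFF, so it compares greater
    // than enc(1), i.e. the negative ts sorts after the positive one.
    int cmp = Arrays.compareUnsigned(enc(-1L), enc(1L));
    System.out.println(cmp > 0); // prints true
  }
}
```

Note `Arrays.compareUnsigned(byte[], byte[])` requires Java 9 or later.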
[jira] [Commented] (HBASE-9208) ReplicationLogCleaner slow at large scale
[ https://issues.apache.org/jira/browse/HBASE-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752712#comment-13752712 ] Dave Latham commented on HBASE-9208: I'm not sure that we should commit to 0.94 as is because of the potential to break compatibility for people who have added their own FileCleanerDelegates for HFiles or HLogs. See my questions in comment #13745252: {quote} What are the compatibility requirements for log cleaners? The current patches break compatibility and would require existing log cleaners to implement the changed interface or extend the BaseFileCleanerDelegate. This seems like a bad idea for 0.94, but probably ok for 0.96. What's the best alternative? Perhaps rather than changing FileCleanerDelegate we could introduce a new interface BatchFileCleanerDelegate or some such. Then configured cleaners that implement that interface can use it but others could still work by wrapping them. Anyone agree with that approach or have other suggestions? {quote} ReplicationLogCleaner slow at large scale - Key: HBASE-9208 URL: https://issues.apache.org/jira/browse/HBASE-9208 Project: HBase Issue Type: Improvement Components: Replication Reporter: Dave Latham Assignee: Dave Latham Fix For: 0.94.12, 0.96.0 Attachments: HBASE-9208-0.94.patch, HBASE-9208-0.94-v2.patch, HBASE-9208.patch, HBASE-9208-v2.patch, HBASE-9208-v3.patch At a large scale the ReplicationLogCleaner fails to clean up .oldlogs as fast as the cluster is producing them. For each old HLog file that has been replicated and should be deleted the ReplicationLogCleaner checks every replication queue in ZooKeeper before removing it. This means that as a cluster scales up the number of files to delete scales as well as the time to delete each file so the cleanup chore scales quadratically. In our case it reached the point where the oldlogs were growing faster than they were being cleaned up. 
We're now running with a patch that allows the ReplicationLogCleaner to refresh its list of files in the replication queues from ZooKeeper just once for each batch of files the CleanerChore wants to evaluate. I'd propose updating FileCleanerDelegate to take a List<FileStatus> rather than a single one at a time. This would allow file cleaners that check an external resource for references such as ZooKeeper (for ReplicationLogCleaner) or HDFS (for SnapshotLogCleaner which looks like it may also have similar trouble at scale) to load those references once per batch rather than for every log. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9364) Get request with multiple columns returns partial results
[ https://issues.apache.org/jira/browse/HBASE-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752723#comment-13752723 ] Vandana Ayyalasomayajula commented on HBASE-9364: - The problem seems to be with the KeyValue.parseColumn method. The method does not respect empty qualifiers. Get request with multiple columns returns partial results - Key: HBASE-9364 URL: https://issues.apache.org/jira/browse/HBASE-9364 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor When a GET request is issued for a table row with multiple columns and some columns have an empty qualifier like f1:, results for empty qualifiers are being ignored. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
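A column-spec parser that preserves empty qualifiers, as the fix discussed above would need, could look like the sketch below. This is illustrative only, not the actual KeyValue.parseColumn implementation: the key point is that "f1:" yields a zero-length (not null) qualifier instead of being collapsed into a family-only spec.

```java
import java.util.Arrays;

class ColumnSpec {
  // Splits a "family:qualifier" byte spec at the first ':'.
  // "f1:"  -> { "f1", "" }   (empty qualifier is preserved)
  // "f1"   -> { "f1" }       (family only, no delimiter present)
  // "f1:q" -> { "f1", "q" }
  static byte[][] parseColumn(byte[] spec) {
    for (int i = 0; i < spec.length; i++) {
      if (spec[i] == ':') {
        byte[] family = Arrays.copyOfRange(spec, 0, i);
        byte[] qualifier = Arrays.copyOfRange(spec, i + 1, spec.length);
        return new byte[][] { family, qualifier };
      }
    }
    return new byte[][] { spec.clone() };
  }
}
```

With this distinction, the REST layer can tell "whole family f1" apart from "the empty-qualifier column in f1" and request both correctly.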
[jira] [Commented] (HBASE-8028) Append, Increment: Adding rollback support
[ https://issues.apache.org/jira/browse/HBASE-8028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752721#comment-13752721 ] stack commented on HBASE-8028: -- Should we just do as [~lhofhansl] suggested up top and make Append/Increment just-like-the-others? Append, Increment: Adding rollback support -- Key: HBASE-8028 URL: https://issues.apache.org/jira/browse/HBASE-8028 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.94.5 Reporter: Himanshu Vashishtha Assignee: Himanshu Vashishtha Fix For: 0.96.0 Attachments: HBase-8028-v1.patch, HBase-8028-v2.patch, HBase-8028-with-Increments-v1.patch, HBase-8028-with-Increments-v2.patch In case there is an exception while doing the log-sync, the memstore is not rolled back, while the mvcc is _always_ forwarded to the writeentry created at the beginning of the operation. This may lead to scanners seeing results which are not synched to the fs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9365) Limit major compactions to off peak hours
Lars Hofhansl created HBASE-9365: Summary: Limit major compactions to off peak hours Key: HBASE-9365 URL: https://issues.apache.org/jira/browse/HBASE-9365 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl We already have off peak hours (where we set a more aggressive compaction ratio) and periodic major compactions. It would be nice if we could limit this to off peak hours as well. A major compaction can be triggered in three ways: # a periodic chore checking every ~3h (10,000,000 ms) by default # a minor compaction promoted to major because the last major compaction was too long ago # a minor compaction promoted to major (at the HStore level), because it touched all HFiles anyway For cases #1 and #2 we could optionally return false from Store.isMajorCompaction(...) when we're not in the off peak window. A new config option would make this enforcement opt-in. In case #3 we're compacting all files anyway, so no need to interfere with that. Thoughts? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
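The optional gate described for cases #1 and #2 could be sketched as below. The class and field names here are hypothetical, not actual HBase config keys; the point is only the check that Store.isMajorCompaction(...) would consult before allowing a periodic major compaction.

```java
import java.time.LocalTime;

// Sketch: when restrictToOffPeak is set (the proposed new option),
// periodic major compactions are only allowed inside the off-peak window.
class OffPeakGate {
  final int startHour;              // e.g. 0
  final int endHour;                // e.g. 6 (exclusive)
  final boolean restrictToOffPeak;  // the proposed opt-in flag

  OffPeakGate(int startHour, int endHour, boolean restrictToOffPeak) {
    this.startHour = startHour;
    this.endHour = endHour;
    this.restrictToOffPeak = restrictToOffPeak;
  }

  boolean isOffPeak(LocalTime now) {
    int h = now.getHour();
    return startHour <= endHour
        ? (h >= startHour && h < endHour)
        : (h >= startHour || h < endHour); // window wraps past midnight
  }

  // Would gate the return value of Store.isMajorCompaction(...) for
  // triggers #1 and #2; trigger #3 is left alone.
  boolean mayRunPeriodicMajor(LocalTime now) {
    return !restrictToOffPeak || isOffPeak(now);
  }
}
```

With the flag off, behavior is unchanged, which keeps the feature strictly opt-in as the description suggests.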
[jira] [Updated] (HBASE-9365) Optionally limit major compactions to off peak hours
[ https://issues.apache.org/jira/browse/HBASE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-9365: - Summary: Optionally limit major compactions to off peak hours (was: Limit major compactions to off peak hours) Optionally limit major compactions to off peak hours Key: HBASE-9365 URL: https://issues.apache.org/jira/browse/HBASE-9365 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl We already have off peak hours (where we set a more aggressive compaction ratio) and periodic major compactions. It would be nice if we could limit this to off peak hours as well. A major compaction can be triggered in three ways: # a periodic chore checking every ~3h (10,000,000 ms) by default # a minor compaction promoted to major because the last major compaction was too long ago # a minor compaction promoted to major (at the HStore level), because it touched all HFiles anyway For cases #1 and #2 we could optionally return false from Store.isMajorCompaction(...) when we're not in the off peak window. A new config option would make this enforcement opt-in. In case #3 we're compacting all files anyway, so no need to interfere with that. Thoughts? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9366) TestHTraceHooks.testTraceCreateTable ConcurrentModificationException up in htrace lib
stack created HBASE-9366: Summary: TestHTraceHooks.testTraceCreateTable ConcurrentModificationException up in htrace lib Key: HBASE-9366 URL: https://issues.apache.org/jira/browse/HBASE-9366 Project: HBase Issue Type: Bug Reporter: stack Assignee: Elliott Clark Fix For: 0.98.0, 0.96.0 See http://jenkins-public.iridiant.net/job/HBase-0.95/898/org.apache.hbase$hbase-server/testReport/junit/org.apache.hadoop.hbase.trace/TestHTraceHooks/testTraceCreateTable/ {code} Regression org.apache.hadoop.hbase.trace.TestHTraceHooks.testTraceCreateTable Failing for the past 1 build (Since Failed#898 ) Took 43 ms. Stacktrace java.util.ConcurrentModificationException at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793) at java.util.HashMap$KeyIterator.next(HashMap.java:828) at org.cloudera.htrace.TraceTree.init(TraceTree.java:48) at org.apache.hadoop.hbase.trace.TestHTraceHooks.testTraceCreateTable(TestHTraceHooks.java:108) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.junit.runners.Suite.runChild(Suite.java:127) at org.junit.runners.Suite.runChild(Suite.java:26) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) {code} Assigning Elliott. He has keys to the htrace kingdom. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752734#comment-13752734 ] Hadoop QA commented on HBASE-9230: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600420/9230v7.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 26 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.regionserver.TestAtomicOperation org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithAbort Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6946//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6946//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6946//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6946//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6946//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6946//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6946//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6946//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6946//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6946//console This message is automatically generated. 
Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt, 9230v3.txt, 9230v3.txt, 9230v4.txt, 9230v5.txt, 9230v6.txt, 9230v7.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server is currently. That is an oversight. Lets fix so can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up release but we should get it in to help the asynchbase convertion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9366) TestHTraceHooks.testTraceCreateTable ConcurrentModificationException up in htrace lib
[ https://issues.apache.org/jira/browse/HBASE-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752739#comment-13752739 ] Elliott Clark commented on HBASE-9366: -- Looking at it. Thanks [~stack] TestHTraceHooks.testTraceCreateTable ConcurrentModificationException up in htrace lib - Key: HBASE-9366 URL: https://issues.apache.org/jira/browse/HBASE-9366 Project: HBase Issue Type: Bug Reporter: stack Assignee: Elliott Clark Fix For: 0.98.0, 0.96.0 See http://jenkins-public.iridiant.net/job/HBase-0.95/898/org.apache.hbase$hbase-server/testReport/junit/org.apache.hadoop.hbase.trace/TestHTraceHooks/testTraceCreateTable/ {code} Regression org.apache.hadoop.hbase.trace.TestHTraceHooks.testTraceCreateTable Failing for the past 1 build (Since Failed#898 ) Took 43 ms. Stacktrace java.util.ConcurrentModificationException at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793) at java.util.HashMap$KeyIterator.next(HashMap.java:828) at org.cloudera.htrace.TraceTree.init(TraceTree.java:48) at org.apache.hadoop.hbase.trace.TestHTraceHooks.testTraceCreateTable(TestHTraceHooks.java:108) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) 
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.junit.runners.Suite.runChild(Suite.java:127) at org.junit.runners.Suite.runChild(Suite.java:26) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) {code} Assigning Elliott. He has keys to the htrace kingdom. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
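The stack trace above is the classic fail-fast iterator failure: TraceTree.init walks a HashMap's key set while spans are still being added to the map. A minimal Java illustration of the failure mode and the usual fix of iterating a snapshot copy — the class and key names here are invented for the demo, not taken from htrace:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CmeDemo {
    // Iterating a snapshot copy of the keys is immune to later mutation of the map.
    public static List<String> snapshotKeys(Map<String, ?> map) {
        return new ArrayList<>(map.keySet());
    }

    public static void main(String[] args) {
        Map<String, Integer> spans = new HashMap<>();
        spans.put("root", 1);
        spans.put("createTable", 2);
        spans.put("assignRegion", 3);
        boolean threw = false;
        try {
            // Fail-fast iterator: adding a new key mid-iteration bumps the
            // map's modCount, so the next call to next() throws.
            for (String key : spans.keySet()) {
                spans.put(key + "-child", 0);
            }
        } catch (ConcurrentModificationException e) {
            threw = true;
        }
        System.out.println("naive iteration threw: " + threw); // true

        // Safe: iterate a copy, mutate the real map freely.
        for (String key : snapshotKeys(spans)) {
            spans.put(key + "-copy", 0);
        }
        System.out.println("snapshot iteration completed");
    }
}
```

The same effect can also be had with a ConcurrentHashMap (weakly consistent iterators), which is the heavier-weight option when the map really is shared across threads.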
[jira] [Created] (HBASE-9367) TestRegionServerCoprocessorExceptionWithAbort.testExceptionFromCoprocessorDuringPut fails
stack created HBASE-9367: Summary: TestRegionServerCoprocessorExceptionWithAbort.testExceptionFromCoprocessorDuringPut fails Key: HBASE-9367 URL: https://issues.apache.org/jira/browse/HBASE-9367 Project: HBase Issue Type: Bug Components: test Reporter: stack See http://jenkins-public.iridiant.net/job/HBase-0.95-Hadoop-2/org.apache.hbase$hbase-server/903/testReport/junit/org.apache.hadoop.hbase.coprocessor/TestRegionServerCoprocessorExceptionWithAbort/testExceptionFromCoprocessorDuringPut/ -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9110) Meta region edits not recovered while migrating to 0.96.0
[ https://issues.apache.org/jira/browse/HBASE-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752742#comment-13752742 ] Himanshu Vashishtha commented on HBASE-9110: Got it. Let's do offline log splitting then. I used HLogSplitter to do offline splitting for each regionserver directory, and it basically works. The changes are self-contained in the upgrade script only. Will upload a patch for this. Thanks. Meta region edits not recovered while migrating to 0.96.0 - Key: HBASE-9110 URL: https://issues.apache.org/jira/browse/HBASE-9110 Project: HBase Issue Type: Sub-task Components: migration Affects Versions: 0.95.2, 0.94.10 Reporter: Himanshu Vashishtha Priority: Critical Fix For: 0.96.0 Attachments: HBase-9110-v0.patch I was doing migration testing from 0.94.11-snapshot to 0.95.0, and faced this issue. 1) Do some edits in the meta table (e.g., create a table). 2) Kill the cluster. (I used kill because we would be doing log splitting when upgrading anyway; there is some dependency on WALs.) 3) Upgrade the bits to 0.95.2-snapshot and start the cluster. Everything comes up. I see log splitting happening as expected. But the WAL data for the meta table is missing. I could see the recovered.edits file for meta created and placed at the right location. It is just that the new HMaster code tries to recover meta by looking for the meta prefix in the log name, and if it doesn't find one, it just opens the meta region. So the recovered.edits file, created afterwards, is not honored. Opening this jira to let folks give their opinions about how to tackle this migration issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9367) TestRegionServerCoprocessorExceptionWithAbort.testExceptionFromCoprocessorDuringPut fails
[ https://issues.apache.org/jira/browse/HBASE-9367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9367: - Attachment: 9367.txt Add debug and tighten when the exception is thrown; see if that fixes this test. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9367) TestRegionServerCoprocessorExceptionWithAbort.testExceptionFromCoprocessorDuringPut fails
[ https://issues.apache.org/jira/browse/HBASE-9367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752750#comment-13752750 ] stack commented on HBASE-9367: -- Applied to 0.95 and to trunk. Let's see if that takes care of it. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9313) NamespaceJanitor is spammy when the namespace table moves
[ https://issues.apache.org/jira/browse/HBASE-9313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Daniel Cryans updated HBASE-9313: -- Resolution: Fixed Status: Resolved (was: Patch Available) Committed to branch and trunk, thanks Stack. NamespaceJanitor is spammy when the namespace table moves - Key: HBASE-9313 URL: https://issues.apache.org/jira/browse/HBASE-9313 Project: HBase Issue Type: Improvement Affects Versions: 0.95.2 Reporter: Jean-Daniel Cryans Assignee: Jean-Daniel Cryans Fix For: 0.98.0, 0.96.0 Attachments: HBASE-9313.patch Although region movements are part of a healthy HBase lifestyle, the NamespaceJanitor WARNs about it: {noformat} 2013-08-22 22:35:48,872 WARN [NamespaceJanitor-jdec2hbase0403-1:6] org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception, tries=0, retries=350, retryTime=-4ms org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: 640d4b4d9432f23f1638700217d34764 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:235) at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:300) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:148) at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:57) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120) at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:98) at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:239) at org.apache.hadoop.hbase.client.ClientScanner.init(ClientScanner.java:153) at org.apache.hadoop.hbase.client.ClientScanner.init(ClientScanner.java:100) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:696) at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:707) at org.apache.hadoop.hbase.master.TableNamespaceManager.list(TableNamespaceManager.java:185) at org.apache.hadoop.hbase.master.HMaster.listNamespaceDescriptors(HMaster.java:3149) at org.apache.hadoop.hbase.master.NamespaceJanitor.removeOrphans(NamespaceJanitor.java:102) at org.apache.hadoop.hbase.master.NamespaceJanitor.chore(NamespaceJanitor.java:86) at org.apache.hadoop.hbase.Chore.run(Chore.java:80) at java.lang.Thread.run(Thread.java:662) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: 640d4b4d9432f23f1638700217d34764 at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2565) at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3927) at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3004) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26847) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2156) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1861) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1426) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1630) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1687) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:27303) 
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:291) ... 15 more {noformat} This should not be printed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
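The "this should not be printed" complaint above suggests a general fix direction: expected, retryable failures (a region mid-move, a scanner that needs reopening) should log quietly, and only exhausting the retry budget deserves a WARN. A hypothetical sketch of that pattern — none of these names come from HBase's actual RpcRetryingCaller:

```java
import java.util.concurrent.Callable;

// Hypothetical: retry loop that demotes per-attempt noise to DEBUG and
// reserves WARN for the final give-up. Invented names, for illustration only.
public class QuietRetry {
    public static <T> T callWithRetries(Callable<T> call, int retries) throws Exception {
        Exception last = null;
        for (int tries = 0; tries <= retries; tries++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                // Transient failure: log at DEBUG, not WARN.
                System.out.println("DEBUG retry " + tries + ": " + e.getMessage());
            }
        }
        System.out.println("WARN giving up after " + (retries + 1) + " tries");
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] attempts = {0};
        // Fails twice with the "Region is not online" style of error, then succeeds;
        // only DEBUG lines are emitted along the way.
        String v = callWithRetries(() -> {
            if (attempts[0]++ < 2) throw new IllegalStateException("Region is not online");
            return "ok";
        }, 5);
        System.out.println(v); // ok
    }
}
```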
[jira] [Commented] (HBASE-9110) Meta region edits not recovered while migrating to 0.96.0
[ https://issues.apache.org/jira/browse/HBASE-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752752#comment-13752752 ] stack commented on HBASE-9110: -- [~himan...@cloudera.com] Thanks for testing. I suppose we could end up splitting logs for servers that no longer exist but that is probably not the end of the world. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication
[ https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752757#comment-13752757 ] Jean-Daniel Cryans commented on HBASE-7709: --- +1 Infinite loop possible in Master/Master replication --- Key: HBASE-7709 URL: https://issues.apache.org/jira/browse/HBASE-7709 Project: HBase Issue Type: Bug Components: Replication Affects Versions: 0.94.6, 0.95.1 Reporter: Lars Hofhansl Assignee: Vasu Mariyala Fix For: 0.98.0, 0.94.12, 0.96.0 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 0.95-trunk-rev2.patch, 0.95-trunk-rev3.patch, 0.95-trunk-rev4.patch, HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, HBASE-7709-rev3.patch, HBASE-7709-rev4.patch, HBASE-7709-rev5.patch We just discovered the following scenario: # Clusters A and B are set up in master/master replication # By accident we had Cluster C replicate to Cluster A. Now all edits originating from C will be bouncing between A and B. Forever! The reason is that when the edits come in from C the cluster ID is already set and won't be reset. We have a couple of options here: # Optionally only support master/master (not cycles of more than two clusters). In that case we can always reset the cluster ID in the ReplicationSource. That means that cycles of more than two clusters will have the data cycling forever. This is the only option that requires no changes in the HLog format. # Instead of a single cluster id per edit, maintain an (unordered) set of cluster ids that have seen this edit. Then in ReplicationSource we drop any edit that the sink has seen already. This is the cleanest approach, but it might need a lot of data stored per edit if there are many clusters involved. # Maintain a configurable counter of the maximum cycle size we want to support. Could default to 10. Store a hop-count in the WAL and have the ReplicationSource increase that hop-count on each hop. If we're over the max, just drop the edit. 
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
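Options #2 and #3 above both boil down to a per-edit shipping decision in the replication source. A hypothetical Java sketch of the two predicates — every name here is invented for illustration and none of it is the patch's actual code:

```java
import java.util.Set;
import java.util.UUID;

// Hypothetical sketch of HBASE-7709 options #2 and #3; invented names.
public class ReplicationFilterSketch {
    static final int MAX_HOPS = 10; // option #3: configurable maximum cycle size

    // Option #2: each edit carries the set of cluster IDs that already saw it;
    // drop the edit if the sink cluster is in that set.
    public static boolean shouldShip(Set<UUID> clustersThatSawEdit, UUID sinkClusterId) {
        return !clustersThatSawEdit.contains(sinkClusterId);
    }

    // Option #3: each edit carries a hop-count incremented on every hop;
    // drop the edit once the hop budget is exhausted.
    public static boolean shouldShip(int hopCount) {
        return hopCount < MAX_HOPS;
    }

    public static void main(String[] args) {
        UUID a = UUID.randomUUID(), b = UUID.randomUUID(), c = UUID.randomUUID();
        // An edit born on C and relayed through A carries {C, A}: shipping it
        // back to A would start the cycle, shipping it on to B is fine.
        System.out.println(shouldShip(Set.of(c, a), a)); // false: A already saw it
        System.out.println(shouldShip(Set.of(c, a), b)); // true
        System.out.println(shouldShip(11));              // false: over the hop budget
    }
}
```

The trade-off quoted in the issue shows up directly in the signatures: option #2 stores a set of UUIDs per edit but is exact, option #3 stores a single int but only bounds the damage.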
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752758#comment-13752758 ] stack commented on HBASE-9230: -- Both failures are flakies (I just made an attempt at a fixup of TestRegionServerCoprocessorExceptionWithAbort elsewhere). Let me rerun. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9230: - Attachment: 9230v7.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication
[ https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752762#comment-13752762 ] stack commented on HBASE-7709: -- Applied to 0.95 and to trunk. Want this in 0.94 [~lhofhansl]? [~vasu.mariy...@gmail.com] Thanks boss. Any chance of a release note on this issue? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9365) Optionally limit major compactions to off peak hours
[ https://issues.apache.org/jira/browse/HBASE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752766#comment-13752766 ] stack commented on HBASE-9365: -- Is this facility out in the fb branch? Optionally limit major compactions to off peak hours Key: HBASE-9365 URL: https://issues.apache.org/jira/browse/HBASE-9365 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl We already have off-peak hours (where we set a more aggressive compaction ratio) and periodic major compactions. It would be nice if we could limit the latter to off-peak hours as well. A major compaction can be triggered in three ways: # a periodic chore checking every ~3h (10,000,000 ms) by default # a minor compaction promoted to major because the last major compaction was too long ago # a minor compaction promoted to major (at the HStore level), because it touched all HFiles anyway For cases #1 and #2 we could optionally return false from Store.isMajorCompaction(...) when we're not in the off-peak window. A new config option would make that behavior opt-in. In case #3 we're compacting all files anyway, so no need to interfere with that. Thoughts? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
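For cases #1 and #2 the check reduces to "is the current hour inside the configured off-peak window", where the window may wrap midnight (HBase already carries such a window for the compaction ratio via hbase.offpeak.start.hour / hbase.offpeak.end.hour). A self-contained sketch of that predicate; the class name is invented and this is not the eventual patch:

```java
// Hypothetical: the hour-of-day window check the proposal would hang
// Store.isMajorCompaction(...) off of. A window like 22 -> 6 wraps midnight.
public class OffPeakWindow {
    private final int startHour; // inclusive, 0-23
    private final int endHour;   // exclusive, 0-23

    public OffPeakWindow(int startHour, int endHour) {
        this.startHour = startHour;
        this.endHour = endHour;
    }

    public boolean isOffPeak(int hourOfDay) {
        if (startHour == endHour) return false; // empty window: never off-peak
        return startHour < endHour
            ? hourOfDay >= startHour && hourOfDay < endHour   // same-day window
            : hourOfDay >= startHour || hourOfDay < endHour;  // wraps midnight
    }

    public static void main(String[] args) {
        OffPeakWindow w = new OffPeakWindow(22, 6); // 10pm-6am
        System.out.println(w.isOffPeak(23)); // true
        System.out.println(w.isOffPeak(3));  // true
        System.out.println(w.isOffPeak(12)); // false
    }
}
```

With this predicate in hand, cases #1 and #2 would simply refuse to promote to major outside the window, while case #3 (all files touched anyway) stays untouched, as the proposal says.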
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752778#comment-13752778 ] Hadoop QA commented on HBASE-9230: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600420/9230v7.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 26 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6947//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6947//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6947//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6947//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6947//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6947//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6947//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6947//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6947//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6947//console This message is automatically generated. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9326) ServerName is created using getLocalSocketAddress, breaks binding to the wildcard address. Revert HBASE-8640
[ https://issues.apache.org/jira/browse/HBASE-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Daniel Cryans updated HBASE-9326: -- Assignee: Jean-Daniel Cryans Summary: ServerName is created using getLocalSocketAddress, breaks binding to the wildcard address. Revert HBASE-8640 (was: ServerName is created using getLocalSocketAddress, breaks binding to the wildcard address) ServerName is created using getLocalSocketAddress, breaks binding to the wildcard address. Revert HBASE-8640 Key: HBASE-9326 URL: https://issues.apache.org/jira/browse/HBASE-9326 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Jean-Daniel Cryans Assignee: Jean-Daniel Cryans Priority: Critical Fix For: 0.98.0, 0.96.0 In HBASE-8148 I added a way to bind to specific addresses, like 0.0.0.0, but right now in 0.95/trunk the ServerName is created using getLocalSocketAddress in RpcServer so 0.0.0.0 gets published in ZK. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9326) ServerName is created using getLocalSocketAddress, breaks binding to the wildcard address. Revert HBASE-8640
[ https://issues.apache.org/jira/browse/HBASE-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Daniel Cryans updated HBASE-9326: -- Attachment: HBASE-9326.patch This is the revert and it adds a comment that clarifies why we don't pass the isa in the server name. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9326) ServerName is created using getLocalSocketAddress, breaks binding to the wildcard address. Revert HBASE-8640
[ https://issues.apache.org/jira/browse/HBASE-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Daniel Cryans updated HBASE-9326: -- Status: Patch Available (was: Open) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
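The underlying problem in HBASE-9326 is easy to reproduce with plain java.net: a socket bound to the wildcard address reports 0.0.0.0 as its local socket address, which is exactly what ended up published in ZK as the ServerName. A small demo (the class and method names are invented):

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class WildcardBindDemo {
    // Bind to the wildcard address on an ephemeral port and report whether
    // the local socket address is the unroutable "any" address (0.0.0.0).
    public static boolean advertisesWildcard() throws Exception {
        try (ServerSocket server = new ServerSocket()) {
            server.bind(new InetSocketAddress(InetAddress.getByName("0.0.0.0"), 0));
            InetSocketAddress local = (InetSocketAddress) server.getLocalSocketAddress();
            return local.getAddress().isAnyLocalAddress();
        }
    }

    public static void main(String[] args) throws Exception {
        // No client can connect to 0.0.0.0, so a ServerName built from
        // getLocalSocketAddress on a wildcard bind is useless once published.
        System.out.println("published address is wildcard: " + advertisesWildcard());
    }
}
```

Hence the revert: the advertised name has to come from the configured hostname, not from what the listening socket happens to be bound to.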
[jira] [Updated] (HBASE-9330) Refactor PE to create HTable the correct way
[ https://issues.apache.org/jira/browse/HBASE-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Daniel Cryans updated HBASE-9330: -- Resolution: Fixed Assignee: Jean-Daniel Cryans Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to branch and trunk, thanks stack. Refactor PE to create HTable the correct way Key: HBASE-9330 URL: https://issues.apache.org/jira/browse/HBASE-9330 Project: HBase Issue Type: Improvement Affects Versions: 0.95.2 Reporter: Jean-Daniel Cryans Assignee: Jean-Daniel Cryans Fix For: 0.98.0, 0.96.0 Attachments: HBASE-9330.patch Multithreaded clients that directly create HTables are out of style and will be crushed under thousands of threads. Our own PerformanceEvaluation is now suffering from this too, so it needs to keep an HConnection around in order to create tables. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9368) TestDistributedLogSplitting.testDelayedDeleteOnFailure fails on occasion
stack created HBASE-9368: Summary: TestDistributedLogSplitting.testDelayedDeleteOnFailure fails on occasion Key: HBASE-9368 URL: https://issues.apache.org/jira/browse/HBASE-9368 Project: HBase Issue Type: Bug Components: test Reporter: stack Follow on from hbase-8567. See https://builds.apache.org/view/H-L/view/HBase/job/hbase-0.95-on-hadoop2/275/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testDelayedDeleteOnFailure/ {code} org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testDelayedDeleteOnFailure Failing for the past 1 build (Since Failed#275 ) Took 7 ms. add description Error Message test timed out after 3 milliseconds Stacktrace java.lang.Exception: test timed out after 3 milliseconds at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:640) at org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:403) at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:450) at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:114) at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:135) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:167) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:136) at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(HConnectionManager.java:1788) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:82) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:778) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:618) at sun.reflect.GeneratedConstructorAccessor38.newInstance(Unknown Source) at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:377) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:358) at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:293) at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:191) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:887) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:853) at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.startCluster(TestDistributedLogSplitting.java:156) at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.startCluster(TestDistributedLogSplitting.java:141) at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testDelayedDeleteOnFailure(TestDistributedLogSplitting.java:970) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} We seem stuck. Nothing happens after master initializes...
we just wait: {code} 2013-08-28 00:42:53,492 INFO [M:0;quirinus:56658] master.HMaster(901): Master has completed initialization 2013-08-28 00:42:53,496 DEBUG [NamespaceJanitor-quirinus:56658] client.ClientScanner(218): Finished {ENCODED => 5fb8721e18f122319e47336ad5221a58, NAME => 'hbase:namespace,,1377650558736.5fb8721e18f122319e47336ad5221a58.', STARTKEY => '', ENDKEY => ''} 2013-08-28 00:42:53,534 DEBUG [CatalogJanitor-quirinus:56658] client.ClientScanner(218): Finished {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2013-08-28 00:43:34,443 INFO [Thread-2407] zookeeper.RecoverableZooKeeper(122): Process identifier=hconnection-0x958cd8
[jira] [Commented] (HBASE-9326) ServerName is created using getLocalSocketAddress, breaks binding to the wildcard address. Revert HBASE-8640
[ https://issues.apache.org/jira/browse/HBASE-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752789#comment-13752789 ] stack commented on HBASE-9326: -- +1 ServerName is created using getLocalSocketAddress, breaks binding to the wildcard address. Revert HBASE-8640 Key: HBASE-9326 URL: https://issues.apache.org/jira/browse/HBASE-9326 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Jean-Daniel Cryans Assignee: Jean-Daniel Cryans Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: HBASE-9326.patch In HBASE-8148 I added a way to bind to specific addresses, like 0.0.0.0, but right now in 0.95/trunk the ServerName is created using getLocalSocketAddress in RpcServer so 0.0.0.0 gets published in ZK. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752791#comment-13752791 ] Elliott Clark commented on HBASE-9230: -- +1 lgtm Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt, 9230v3.txt, 9230v3.txt, 9230v4.txt, 9230v5.txt, 9230v6.txt, 9230v7.txt, 9230v7.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server is currently. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up release but we should get it in to help the asynchbase conversion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7826) Improve Hbase Thrift v1 to return results in sorted order
[ https://issues.apache.org/jira/browse/HBASE-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752792#comment-13752792 ] Wouter Bolsterlee commented on HBASE-7826: -- Is it really safe to convert a previously required field to an optional one? Parsers using the old definition file might unconditionally expect the field when handling input generated using the newer definitions. Improve Hbase Thrift v1 to return results in sorted order - Key: HBASE-7826 URL: https://issues.apache.org/jira/browse/HBASE-7826 Project: HBase Issue Type: New Feature Components: Thrift Affects Versions: 0.94.0 Reporter: Shivendra Pratap Singh Assignee: Shivendra Pratap Singh Priority: Minor Labels: Hbase, Thrift Fix For: 0.98.0, 0.95.2, 0.94.11 Attachments: 7826-v6.patch, HBASE-7826-0.94-v7.patch, hbase_7826.patch, hbase_7826.patch, HBASE-7826.patch, hbase_7826_sortcolumnFlag.1.patch, hbase_7826_sortcolumnFlag.2.patch, hbase_7826_sortcolumnFlag.3.patch, hbase_7826_sortcolumnFlag.4.patch, hbase_7826_sortcolumnFlag.5.patch, hbase_7826_sortcolumnFlag.patch, hbase_7826_trunk.patch HBase natively stores columns sorted by column qualifier, and a scan is guaranteed to return sorted columns. The Java API works fine, but the Thrift API is broken. HBase uses a TreeMap, which ensures that sort order is maintained; however, the HBase Thrift specification uses a simple Map to store the data. A map, since it is unordered, doesn't result in columns being returned in a sort order that is consistent with their storage in HBase. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
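The ordering difference described above is visible with plain java.util maps: a TreeMap (which is what HBase uses internally) iterates qualifiers in sorted order, while an unordered Map makes no such guarantee. A standalone sketch, not HBase or Thrift code; the class and method names below are invented:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ColumnOrderDemo {
    // Put the qualifiers into the given map and read back its iteration order.
    public static List<String> keyOrder(Map<String, byte[]> map, String... qualifiers) {
        for (String q : qualifiers) {
            map.put(q, new byte[0]);
        }
        return new ArrayList<>(map.keySet());
    }

    public static void main(String[] args) {
        String[] scanned = {"cf:z", "cf:a", "cf:m", "cf:b"};
        // TreeMap preserves sorted qualifier order regardless of insertion order...
        System.out.println(keyOrder(new TreeMap<>(), scanned));
        // ...while a plain unordered map gives whatever order its hashing produces.
        System.out.println(keyOrder(new HashMap<>(), scanned));
    }
}
```

Both maps hold the same entries; only the TreeMap matches the order a scan returns.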
[jira] [Commented] (HBASE-9208) ReplicationLogCleaner slow at large scale
[ https://issues.apache.org/jira/browse/HBASE-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752770#comment-13752770 ] stack commented on HBASE-9208: -- Will wait on the [~jdcryans] test run before committing. Can make new issue for whether to commit on 0.94 ReplicationLogCleaner slow at large scale - Key: HBASE-9208 URL: https://issues.apache.org/jira/browse/HBASE-9208 Project: HBase Issue Type: Improvement Components: Replication Reporter: Dave Latham Assignee: Dave Latham Fix For: 0.94.12, 0.96.0 Attachments: HBASE-9208-0.94.patch, HBASE-9208-0.94-v2.patch, HBASE-9208.patch, HBASE-9208-v2.patch, HBASE-9208-v3.patch At a large scale the ReplicationLogCleaner fails to clean up .oldlogs as fast as the cluster is producing them. For each old HLog file that has been replicated and should be deleted the ReplicationLogCleaner checks every replication queue in ZooKeeper before removing it. This means that as a cluster scales up the number of files to delete scales as well as the time to delete each file, so the cleanup chore scales quadratically. In our case it reached the point where the oldlogs were growing faster than they were being cleaned up. We're now running with a patch that allows the ReplicationLogCleaner to refresh its list of files in the replication queues from ZooKeeper just once for each batch of files the CleanerChore wants to evaluate. I'd propose updating FileCleanerDelegate to take a List<FileStatus> rather than a single one at a time. This would allow file cleaners that check an external resource for references such as ZooKeeper (for ReplicationLogCleaner) or HDFS (for SnapshotLogCleaner which looks like it may also have similar trouble at scale) to load those references once per batch rather than for every log. -- This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
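The batching proposal above can be sketched in isolation: snapshot the set of still-referenced log names once per batch, then filter the whole candidate batch against it, turning one ZooKeeper scan per file into one scan per batch. Everything below is a hypothetical illustration, not the actual FileCleanerDelegate API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the batching idea. In the real ReplicationLogCleaner
// the stillQueued snapshot would come from one pass over the replication queue
// znodes; here it is simply a parameter so the filtering logic stands alone.
public class BatchLogCleaner {
    // Returns the candidates that no replication queue still references,
    // consulting the (once-loaded) snapshot instead of ZooKeeper per file.
    public static List<String> deletableLogs(List<String> candidates, Set<String> stillQueued) {
        List<String> deletable = new ArrayList<>();
        for (String log : candidates) {
            if (!stillQueued.contains(log)) {
                deletable.add(log);
            }
        }
        return deletable;
    }
}
```

With one snapshot per batch, the chore's cost grows linearly with the number of files instead of quadratically.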
[jira] [Updated] (HBASE-9292) Syncer fails but we won't go down
[ https://issues.apache.org/jira/browse/HBASE-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9292: - Priority: Major (was: Critical) Syncer fails but we won't go down - Key: HBASE-9292 URL: https://issues.apache.org/jira/browse/HBASE-9292 Project: HBase Issue Type: Bug Components: wal Affects Versions: 0.95.2 Environment: hadoop-2.1.0-beta and tip of 0.95 branch Reporter: stack Fix For: 0.96.0 Running some simple loading tests i ran into the following running on hadoop-2.1.0-beta. {code} 2013-08-20 16:51:56,310 DEBUG [regionserver60020.logRoller] regionserver.LogRoller: HLog roll requested 2013-08-20 16:51:56,314 DEBUG [regionserver60020.logRoller] wal.FSHLog: cleanupCurrentWriter waiting for transactions to get synced total 655761 synced till here 655750 2013-08-20 16:51:56,360 INFO [regionserver60020.logRoller] wal.FSHLog: Rolled WAL /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042714402 with entries=985, filesize=122.5 M; new WAL /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716311 2013-08-20 16:51:56,378 WARN [Thread-4788] hdfs.DFSClient: DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716311 could only be replicated to 0 nodes instead of minReplication (=1). There are 5 datanode(s) running and no node(s) are excluded in this operation. 
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2458) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:525) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034) at org.apache.hadoop.ipc.Client.call(Client.java:1347) at org.apache.hadoop.ipc.Client.call(Client.java:1300) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy13.addBlock(Unknown Source) at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at $Proxy13.addBlock(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown 
Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266) at $Proxy14.addBlock(Unknown Source) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1220) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1073) ... {code} Thereafter the server is up but useless and can't go down because it just keeps doing this: {code} 2013-08-20 16:51:56,380 FATAL [RpcServer.handler=3,port=60020] wal.FSHLog: Could not sync. Requesting roll of hlog org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
[jira] [Updated] (HBASE-6469) Failure on enable/disable table will cause table state in zk to be left as enabling/disabling until master is restarted
[ https://issues.apache.org/jira/browse/HBASE-6469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-6469: - Fix Version/s: (was: 0.96.0) Moving out. Still being worked on. Failure on enable/disable table will cause table state in zk to be left as enabling/disabling until master is restarted --- Key: HBASE-6469 URL: https://issues.apache.org/jira/browse/HBASE-6469 Project: HBase Issue Type: Bug Components: master Affects Versions: 0.94.6 Reporter: Enis Soztutar Assignee: rajeshbabu Priority: Critical Attachments: 6469-expose-force-r3.patch, HBASE-6469_2.patch, HBASE-6469_3.patch, HBASE-6469_4.patch, HBASE-6469.patch, HBASE-6469_retry_enable_or_disable.patch In Enable/DisableTableHandler code, if something goes wrong in handling, the table state in zk is left as ENABLING / DISABLING. After that we cannot force any more action from the API or CLI, and the only recovery path is restarting the master. {code} if (done) { // Flip the table to enabled. this.assignmentManager.getZKTable().setEnabledTable( this.tableNameStr); LOG.info("Table '" + this.tableNameStr + "' was successfully enabled. Status: done=" + done); } else { LOG.warn("Table '" + this.tableNameStr + "' wasn't successfully enabled. Status: done=" + done); } {code} Here, if done is false, the table state is not changed. There is also no way to set skipTableStateCheck from cli / api. We have run into this issue a couple of times before. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9110) Meta region edits not recovered while migrating to 0.96.0
[ https://issues.apache.org/jira/browse/HBASE-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752809#comment-13752809 ] Devaraj Das commented on HBASE-9110: Yeah +1 if this has been tested. Meta region edits not recovered while migrating to 0.96.0 - Key: HBASE-9110 URL: https://issues.apache.org/jira/browse/HBASE-9110 Project: HBase Issue Type: Sub-task Components: migration Affects Versions: 0.95.2, 0.94.10 Reporter: Himanshu Vashishtha Priority: Critical Fix For: 0.96.0 Attachments: HBase-9110-v0.patch I was doing the migration testing from 0.94.11-snapshot to 0.95.0, and faced this issue. 1) Do some edits in meta table (for eg, create a table). 2) Kill the cluster. (I used kill because we would be doing log splitting when upgrading anyway). 3) There is some dependency on WALs. Upgrade the bits to 0.95.2-snapshot. Start the cluster. Every thing comes up. I see log splitting happening as expected. But, the WAL-data for meta table is missing. I could see recovered.edits file for meta created, and placed at the right location. It is just that the new HMaster code tries to recover meta by looking at meta prefix in the log name, and if it didn't find one, just opens the meta region. So, the recovered.edits file, created afterwards, is not honored. Opening this jira to let folks give their opinions about how to tackle this migration issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9278) Reading Pre-namespace meta table edits kills the reader
[ https://issues.apache.org/jira/browse/HBASE-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752813#comment-13752813 ] stack commented on HBASE-9278: -- Looks good. Why spin up a minitestcluster to add edits to a WAL? We throw the IAE if a -ROOT- edit? Is that right: + } else if (Bytes.toString(tablenameBytes).equals(TableName.OLD_ROOT_STR)) { +this.tablename = TableName.OLD_ROOT_TABLE_NAME; +throw iae; + } else throw iae; These could get annoying I'd say especially if you are printing out full stack trace: +LOG.info("Got an old META edit, continuing with new format ", iae); Suggest not printing the IAE stack trace at least. I'll commit if you remove the test and address the above. Thanks H. Reading Pre-namespace meta table edits kills the reader --- Key: HBASE-9278 URL: https://issues.apache.org/jira/browse/HBASE-9278 Project: HBase Issue Type: Bug Components: migration, wal Affects Versions: 0.95.2 Reporter: Himanshu Vashishtha Assignee: Himanshu Vashishtha Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: HBase-9278-v0.patch, HBase-9278-v1-1.patch, HBase-9278-v1.patch In upgrading to 0.96, there might be some meta/root table edits. Currently, we are just killing SplitLogWorker thread in case it sees any META, or ROOT waledit, which blocks log splitting/replaying of remaining WALs. {code} 2013-08-20 15:45:16,998 ERROR regionserver.SplitLogWorker (SplitLogWorker.java:run(210)) - unexpected error java.lang.IllegalArgumentException: .META. no longer exists.
The table has been renamed to hbase:meta at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:269) at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:261) at org.apache.hadoop.hbase.regionserver.wal.HLogKey.readFields(HLogKey.java:338) at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1898) at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1938) at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.readNext(SequenceFileLogReader.java:215) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:98) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:582) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:292) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:209) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:138) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:358) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:245) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:205) at java.lang.Thread.run(Thread.java:662) 2013-08-20 15:45:16,999 INFO regionserver.SplitLogWorker (SplitLogWorker.java:run(212)) - SplitLogWorker localhost,60020,1377035111898 exiting {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9365) Optionally limit major compactions to off peak hours
[ https://issues.apache.org/jira/browse/HBASE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752816#comment-13752816 ] Lars Hofhansl commented on HBASE-9365: -- AFAIK, they are running major compactions manually; so probably not (haven't checked the code). The off peak stuff is in both fb and 0.94+ The patch here would be simple, actually, just a check for the already-configured off peak hours in some more places. Optionally limit major compactions to off peak hours Key: HBASE-9365 URL: https://issues.apache.org/jira/browse/HBASE-9365 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl We already have off peak hours (where we set a more aggressive compaction ratio) and periodic major compactions. It would be nice if we could limit this to off peak hours as well. A major compaction can be triggered in three ways: # a periodic chore checking every ~3h (10.000.000ms) by default # a minor compaction promoted to major because the last major compaction was too long ago # a minor compaction promoted to major (at the HStore level), because it touched all HFiles anyway For cases #1 and #2 we could optionally return false from Store.isMajorCompaction(...) when we're not in the off peak window. A new config option would make that check optional. In case #3 we're compacting all files anyway, so no need to interfere with that. Thoughts? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
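The check proposed for cases #1 and #2 boils down to gating the major-compaction decision on an off-peak window, including windows that wrap midnight (e.g. 22:00 to 06:00). A hypothetical standalone sketch; the real off-peak configuration lives elsewhere in HBase, and the class and method names below are invented:

```java
// Hypothetical off-peak window check in the spirit of the existing off-peak
// compaction-ratio config. startHour == endHour means "no window configured".
public class OffPeakWindow {
    public static boolean isOffPeakHour(int hour, int startHour, int endHour) {
        if (startHour == endHour) {
            return false; // no off-peak window configured
        }
        if (startHour < endHour) {
            // Simple window within one day, e.g. 1..5.
            return hour >= startHour && hour < endHour;
        }
        // Window wrapping midnight, e.g. 22..6.
        return hour >= startHour || hour < endHour;
    }
}
```

Store.isMajorCompaction(...) would then (optionally, behind the new config flag) return false whenever this check fails.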
[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication
[ https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752818#comment-13752818 ] Lars Hofhansl commented on HBASE-7709: -- Yeah, will sync up with Vasu off line and probably commit to 0.94 soon. Infinite loop possible in Master/Master replication --- Key: HBASE-7709 URL: https://issues.apache.org/jira/browse/HBASE-7709 Project: HBase Issue Type: Bug Components: Replication Affects Versions: 0.94.6, 0.95.1 Reporter: Lars Hofhansl Assignee: Vasu Mariyala Fix For: 0.98.0, 0.94.12, 0.96.0 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 0.95-trunk-rev2.patch, 0.95-trunk-rev3.patch, 0.95-trunk-rev4.patch, HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, HBASE-7709-rev3.patch, HBASE-7709-rev4.patch, HBASE-7709-rev5.patch We just discovered the following scenario: # Cluster A and B are set up in master/master replication # By accident we had Cluster C replicate to Cluster A. Now all edits originating from C will be bouncing between A and B. Forever! The reason is that when the edits come in from C the cluster ID is already set and won't be reset. We have a couple of options here: # Optionally only support master/master (not cycles of more than two clusters). In that case we can always reset the cluster ID in the ReplicationSource. That means that cycles > 2 will have the data cycle forever. This is the only option that requires no changes in the HLog format. # Instead of a single cluster id per edit maintain an (unordered) set of cluster ids that have seen this edit. Then in ReplicationSource we drop any edit that the sink has seen already. This is the cleanest approach, but it might need a lot of data stored per edit if there are many clusters involved. # Maintain a configurable counter of the maximum cycle size we want to support. Could default to 10 (even maybe even just). Store a hop-count in the WAL and the ReplicationSource increases that hop-count on each hop.
If we're over the max, just drop the edit. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
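Option #3, the hop-count, is simple to sketch in isolation: the source bumps a counter carried with each edit and drops the edit once it exceeds the configured maximum. The class below is a hypothetical illustration of that rule, not actual HBase replication code:

```java
// Hypothetical hop-count guard for option #3: each WALEdit would carry a hop
// count; the ReplicationSource bumps it per hop and drops edits over the max.
public class HopCountGuard {
    public static final int DEFAULT_MAX_HOPS = 10;

    // Returns the bumped hop count, or -1 meaning "drop this edit".
    public static int nextHop(int hopCount, int maxHops) {
        int bumped = hopCount + 1;
        return bumped > maxHops ? -1 : bumped;
    }

    public static boolean shouldReplicate(int hopCount, int maxHops) {
        return nextHop(hopCount, maxHops) != -1;
    }
}
```

Unlike option #2, this stores only one small integer per edit, at the cost of letting an edit loop up to maxHops times before being dropped.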
[jira] [Updated] (HBASE-9110) Meta region edits not recovered while migrating to 0.96.0
[ https://issues.apache.org/jira/browse/HBASE-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Himanshu Vashishtha updated HBASE-9110: --- Attachment: HBase-9110-v1.patch Patch that does offline log splitting as part of the upgrade process (and doesn't add any new state in zk). I tested this while migrating from a 0.94.11 cluster. I find it a bit tricky to unit test this. Meta region edits not recovered while migrating to 0.96.0 - Key: HBASE-9110 URL: https://issues.apache.org/jira/browse/HBASE-9110 Project: HBase Issue Type: Sub-task Components: migration Affects Versions: 0.95.2, 0.94.10 Reporter: Himanshu Vashishtha Priority: Critical Fix For: 0.96.0 Attachments: HBase-9110-v0.patch, HBase-9110-v1.patch I was doing the migration testing from 0.94.11-snapshot to 0.95.0, and faced this issue. 1) Do some edits in meta table (for eg, create a table). 2) Kill the cluster. (I used kill because we would be doing log splitting when upgrading anyway). 3) There is some dependency on WALs. Upgrade the bits to 0.95.2-snapshot. Start the cluster. Every thing comes up. I see log splitting happening as expected. But, the WAL-data for meta table is missing. I could see recovered.edits file for meta created, and placed at the right location. It is just that the new HMaster code tries to recover meta by looking at meta prefix in the log name, and if it didn't find one, just opens the meta region. So, the recovered.edits file, created afterwards, is not honored. Opening this jira to let folks give their opinions about how to tackle this migration issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9368) TestDistributedLogSplitting.testDelayedDeleteOnFailure fails on occasion
[ https://issues.apache.org/jira/browse/HBASE-9368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752823#comment-13752823 ] Jeffrey Zhong commented on HBASE-9368: -- Sure. Let me take a look at this. Thanks. TestDistributedLogSplitting.testDelayedDeleteOnFailure fails on occasion Key: HBASE-9368 URL: https://issues.apache.org/jira/browse/HBASE-9368 Project: HBase Issue Type: Bug Components: test Reporter: stack Follow on from hbase-8567. See https://builds.apache.org/view/H-L/view/HBase/job/hbase-0.95-on-hadoop2/275/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testDelayedDeleteOnFailure/ {code} org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testDelayedDeleteOnFailure Failing for the past 1 build (Since Failed#275 ) Took 7 ms. add description Error Message test timed out after 3 milliseconds Stacktrace java.lang.Exception: test timed out after 3 milliseconds at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:640) at org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:403) at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:450) at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:114) at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:135) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:167) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:136) at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(HConnectionManager.java:1788) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:82) at
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:778) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:618) at sun.reflect.GeneratedConstructorAccessor38.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:377) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:358) at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:293) at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:191) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:887) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:853) at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.startCluster(TestDistributedLogSplitting.java:156) at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.startCluster(TestDistributedLogSplitting.java:141) at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testDelayedDeleteOnFailure(TestDistributedLogSplitting.java:970) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} We seem stuck. Nothing happens after master initializes... we just wait: {code} 2013-08-28 00:42:53,492 INFO [M:0;quirinus:56658] master.HMaster(901): Master has completed initialization 2013-08-28 00:42:53,496 DEBUG [NamespaceJanitor-quirinus:56658] client.ClientScanner(218): Finished {ENCODED => 5fb8721e18f122319e47336ad5221a58, NAME => 'hbase:namespace,,1377650558736.5fb8721e18f122319e47336ad5221a58.', STARTKEY => '',
[jira] [Commented] (HBASE-8884) Pluggable RpcScheduler
[ https://issues.apache.org/jira/browse/HBASE-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752828#comment-13752828 ] stack commented on HBASE-8884: -- I commented over in hbase-9101 [~stepinto] Pluggable RpcScheduler -- Key: HBASE-8884 URL: https://issues.apache.org/jira/browse/HBASE-8884 Project: HBase Issue Type: Improvement Components: IPC/RPC Reporter: Chao Shi Assignee: Chao Shi Fix For: 0.98.0 Attachments: hbase-8884.patch, hbase-8884-v2.patch, hbase-8884-v3.patch, hbase-8884-v4.patch, hbase-8884-v5.patch, hbase-8884-v6.patch, hbase-8884-v7.patch, hbase-8884-v8.patch Today, the RPC scheduling mechanism is pretty simple: it executes requests in isolated thread-pools based on their priority. In the current implementation, all normal get/put requests use the same pool. We'd like to add some per-user or per-region level isolation, so that a misbehaved user/region cannot easily saturate the thread-pool and cause DoS to others. The idea is similar to FairScheduler in MR. The current scheduling code is not standalone and is mixed with other code (Connection#processRequest). This issue is the first step: extract it to an interface, so that people are free to write and test their own implementations. This patch doesn't make it completely pluggable yet, as some parameters are passed from the constructor. This is because HMaster and HRegionServer both use RpcServer and they have different thread-pool size configs. Let me know if you have a solution to this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
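The extraction described above — isolate the dispatch decision behind an interface so implementations can be swapped — can be sketched as follows. The interface and class names are illustrative assumptions, not the actual HBASE-8884 API; `usesPriorityPool` in particular is a hypothetical helper added for clarity.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: the RPC server hands every call to a scheduler,
// which decides which pool runs it.
interface RpcScheduler {
    void start();
    void stop();
    void dispatch(Runnable call, int priority);
}

// A minimal implementation mirroring the current behavior: one pool for
// high-priority calls, one for everything else. A per-user or per-region
// implementation would pick the pool from the call's context instead.
class SimpleRpcScheduler implements RpcScheduler {
    private final ExecutorService normalPool;
    private final ExecutorService priorityPool;
    private final int highPriorityLevel;

    SimpleRpcScheduler(int normalThreads, int priorityThreads, int highPriorityLevel) {
        this.normalPool = Executors.newFixedThreadPool(normalThreads);
        this.priorityPool = Executors.newFixedThreadPool(priorityThreads);
        this.highPriorityLevel = highPriorityLevel;
    }

    // Exposed for clarity: which pool a given priority lands in.
    boolean usesPriorityPool(int priority) {
        return priority >= highPriorityLevel;
    }

    @Override public void start() { /* pools are created eagerly */ }

    @Override public void stop() {
        normalPool.shutdown();
        priorityPool.shutdown();
    }

    @Override public void dispatch(Runnable call, int priority) {
        (usesPriorityPool(priority) ? priorityPool : normalPool).execute(call);
    }
}
```

The constructor still takes thread-pool sizes, which matches the caveat in the description: HMaster and HRegionServer configure different pool sizes, so full pluggability needs a factory that reads those from configuration.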
[jira] [Commented] (HBASE-9101) Addendum to pluggable RpcScheduler
[ https://issues.apache.org/jira/browse/HBASE-9101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752831#comment-13752831 ] Jesse Yates commented on HBASE-9101: +1 on stack's comments. We really should add a comment in HConstants that generally constants shouldn't go there, unless referenced by a lot of different things. Otherwise, lgtm. Addendum to pluggable RpcScheduler -- Key: HBASE-9101 URL: https://issues.apache.org/jira/browse/HBASE-9101 Project: HBase Issue Type: Improvement Components: IPC/RPC Reporter: Chao Shi Assignee: Chao Shi Fix For: 0.98.0 Attachments: hbase-9101.patch, hbase-9101-v2.patch This patch addresses the review comments from [~stack] and adds a small fix: - Make RpcScheduler fully pluggable. One can write his/her own implementation, add it to the classpath, and specify it via the config hbase.region.server.rpc.scheduler.factory.class. - Add unit tests and fix a bug where RpcScheduler.stop was not called (discovered by tests) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
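Plugging in a custom implementation via the config key named above would then look roughly like this in hbase-site.xml. The factory class name is a placeholder, not a real class:

```xml
<property>
  <name>hbase.region.server.rpc.scheduler.factory.class</name>
  <!-- Placeholder class: your factory must be on the region server classpath -->
  <value>com.example.CustomRpcSchedulerFactory</value>
</property>
```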
[jira] [Commented] (HBASE-9278) Reading Pre-namespace meta table edits kills the reader
[ https://issues.apache.org/jira/browse/HBASE-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752836#comment-13752836 ] Himanshu Vashishtha commented on HBASE-9278: Thanks for the review, Stack. Indeed, I don't need a mini cluster. Will remove it. Yes, we need to throw that IAE so that we can catch it in ReaderBase and call next() (we couldn't call next from HLogKey). Will upload a revised one now. Reading Pre-namespace meta table edits kills the reader --- Key: HBASE-9278 URL: https://issues.apache.org/jira/browse/HBASE-9278 Project: HBase Issue Type: Bug Components: migration, wal Affects Versions: 0.95.2 Reporter: Himanshu Vashishtha Assignee: Himanshu Vashishtha Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: HBase-9278-v0.patch, HBase-9278-v1-1.patch, HBase-9278-v1.patch In upgrading to 0.96, there might be some meta/root table edits. Currently, we are just killing the SplitLogWorker thread in case it sees any META or ROOT WALEdit, which blocks log splitting/replaying of remaining WALs. {code} 2013-08-20 15:45:16,998 ERROR regionserver.SplitLogWorker (SplitLogWorker.java:run(210)) - unexpected error java.lang.IllegalArgumentException: .META. no longer exists. 
The table has been renamed to hbase:meta at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:269) at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:261) at org.apache.hadoop.hbase.regionserver.wal.HLogKey.readFields(HLogKey.java:338) at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1898) at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1938) at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.readNext(SequenceFileLogReader.java:215) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:98) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:582) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:292) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:209) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:138) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:358) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:245) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:205) at java.lang.Thread.run(Thread.java:662) 2013-08-20 15:45:16,999 INFO regionserver.SplitLogWorker (SplitLogWorker.java:run(212)) - SplitLogWorker localhost,60020,1377035111898 exiting {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
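The skip-and-continue idea from the comment above — decoding a WAL key for the pre-namespace .META./-ROOT- tables throws an IllegalArgumentException, which the reader catches so it can move on to the next entry instead of dying — can be sketched like this. Class and method names here are hypothetical, not ReaderBase/HLogKey internals.

```java
import java.util.Iterator;

// Illustrative sketch, not HBase code: skip WAL entries whose table names
// no longer exist after the namespace migration.
final class WalKeySkipper {
    // Stand-in for TableName.valueOf: rejects pre-namespace system tables.
    static String validate(String tableName) {
        if (tableName.equals(".META.") || tableName.equals("-ROOT-")) {
            throw new IllegalArgumentException(tableName + " no longer exists");
        }
        return tableName;
    }

    // Returns the first table name that still exists, skipping old ones.
    static String nextValidTable(Iterator<String> walKeys) {
        while (walKeys.hasNext()) {
            try {
                return validate(walKeys.next());
            } catch (IllegalArgumentException preNamespaceEdit) {
                // pre-namespace meta/root edit: skip it and keep reading
            }
        }
        return null;
    }
}
```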
[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication
[ https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-7709: - Attachment: 7709-0.94-rev6.txt Discussed offline with Vasu. Proposing one small change: Don't store the first cluster in an edit twice. So the clusterID as used now holds the first cluster id, and the new clusterIds in Mutation and Scopes on the WALEdit hold the 2nd and 3rd cluster IDs, if any. As master-master replication will be the most common scenario, this seems an important optimization. Infinite loop possible in Master/Master replication --- Key: HBASE-7709 URL: https://issues.apache.org/jira/browse/HBASE-7709 Project: HBase Issue Type: Bug Components: Replication Affects Versions: 0.94.6, 0.95.1 Reporter: Lars Hofhansl Assignee: Vasu Mariyala Fix For: 0.98.0, 0.94.12, 0.96.0 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 0.95-trunk-rev2.patch, 0.95-trunk-rev3.patch, 0.95-trunk-rev4.patch, 7709-0.94-rev6.txt, HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, HBASE-7709-rev3.patch, HBASE-7709-rev4.patch, HBASE-7709-rev5.patch We just discovered the following scenario: # Clusters A and B are set up in master/master replication # By accident we had Cluster C replicate to Cluster A. Now all edits originating from C will bounce between A and B. Forever! The reason is that when the edits come in from C the cluster ID is already set and won't be reset. We have a couple of options here: # Optionally only support master/master (not cycles of more than two clusters). In that case we can always reset the cluster ID in the ReplicationSource. That means that cycles of more than two clusters will have the data cycle forever. This is the only option that requires no changes in the HLog format. # Instead of a single cluster id per edit, maintain an (unordered) set of cluster ids that have seen this edit. Then in ReplicationSource we drop any edit that the sink has seen already. 
This is the cleanest approach, but it might need a lot of data stored per edit if there are many clusters involved. # Maintain a configurable counter of the maximum cycle size we want to support. Could default to 10. Store a hop-count in the WAL and have the ReplicationSource increase that hop-count on each hop. If we're over the max, just drop the edit. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
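The two dedup options sketched above (seen-cluster set vs. hop count) boil down to a small ship/drop decision in the replication source. Names here are illustrative, not the actual HBase replication API.

```java
import java.util.Set;
import java.util.UUID;

// Illustrative sketch of the two loop-prevention options, not HBase code.
final class ReplicationLoopGuard {
    // Option 2: each edit carries the set of cluster UUIDs that have
    // already seen it; the source drops edits the target sink has seen.
    static boolean shouldShip(Set<UUID> seenClusters, UUID sinkClusterId) {
        return !seenClusters.contains(sinkClusterId);
    }

    // Option 3: each edit carries a hop count that every hop increments;
    // drop the edit once it reaches a configurable maximum.
    static boolean shouldShip(int hopCount, int maxHops) {
        return hopCount < maxHops;
    }
}
```

Option 2 is exact but grows with the number of clusters in the cycle; option 3 is constant-size but drops legitimate edits past the cap, which is the trade-off the description weighs.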
[jira] [Commented] (HBASE-9326) ServerName is created using getLocalSocketAddress, breaks binding to the wildcard address. Revert HBASE-8640
[ https://issues.apache.org/jira/browse/HBASE-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752838#comment-13752838 ] Elliott Clark commented on HBASE-9326: -- +1 ServerName is created using getLocalSocketAddress, breaks binding to the wildcard address. Revert HBASE-8640 Key: HBASE-9326 URL: https://issues.apache.org/jira/browse/HBASE-9326 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Jean-Daniel Cryans Assignee: Jean-Daniel Cryans Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: HBASE-9326.patch In HBASE-8148 I added a way to bind to specific addresses, like 0.0.0.0, but right now in 0.95/trunk the ServerName is created using getLocalSocketAddress in RpcServer so 0.0.0.0 gets published in ZK. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9230: - Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks for reviews lads. Committed to 0.95 and trunk. Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.095v7.txt, 9230.txt, 9230v2.txt, 9230v3.txt, 9230v3.txt, 9230v4.txt, 9230v5.txt, 9230v6.txt, 9230v7.txt, 9230v7.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server currently stands. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up release but we should get it in to help the asynchbase conversion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication
[ https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752841#comment-13752841 ] Lars Hofhansl commented on HBASE-7709: -- Will commit to 0.94 later today if there are no objections. Infinite loop possible in Master/Master replication --- Key: HBASE-7709 URL: https://issues.apache.org/jira/browse/HBASE-7709 Project: HBase Issue Type: Bug Components: Replication Affects Versions: 0.94.6, 0.95.1 Reporter: Lars Hofhansl Assignee: Vasu Mariyala Fix For: 0.98.0, 0.94.12, 0.96.0 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 0.95-trunk-rev2.patch, 0.95-trunk-rev3.patch, 0.95-trunk-rev4.patch, 7709-0.94-rev6.txt, HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, HBASE-7709-rev3.patch, HBASE-7709-rev4.patch, HBASE-7709-rev5.patch We just discovered the following scenario: # Clusters A and B are set up in master/master replication # By accident we had Cluster C replicate to Cluster A. Now all edits originating from C will bounce between A and B. Forever! The reason is that when the edits come in from C the cluster ID is already set and won't be reset. We have a couple of options here: # Optionally only support master/master (not cycles of more than two clusters). In that case we can always reset the cluster ID in the ReplicationSource. That means that cycles of more than two clusters will have the data cycle forever. This is the only option that requires no changes in the HLog format. # Instead of a single cluster id per edit, maintain an (unordered) set of cluster ids that have seen this edit. Then in ReplicationSource we drop any edit that the sink has seen already. This is the cleanest approach, but it might need a lot of data stored per edit if there are many clusters involved. # Maintain a configurable counter of the maximum cycle size we want to support. Could default to 10. Store a hop-count in the WAL and have the ReplicationSource increase that hop-count on each hop. 
If we're over the max, just drop the edit. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9369) Add support for 1- and 2-byte integers in OrderedBytes and provide types
Nick Dimiduk created HBASE-9369: --- Summary: Add support for 1- and 2-byte integers in OrderedBytes and provide types Key: HBASE-9369 URL: https://issues.apache.org/jira/browse/HBASE-9369 Project: HBase Issue Type: Improvement Reporter: Nick Dimiduk -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8201) OrderedBytes: an ordered encoding strategy
[ https://issues.apache.org/jira/browse/HBASE-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752849#comment-13752849 ] Nick Dimiduk commented on HBASE-8201: - Hi Liangliang, We can. My preference from the beginning was to only provide the numeric encoding, because of the compatibility story. However, the fixed-length encodings seem popular. I created HBASE-9369 for this. I don't think it'll make 0.96.0RC0 though. OrderedBytes: an ordered encoding strategy -- Key: HBASE-8201 URL: https://issues.apache.org/jira/browse/HBASE-8201 Project: HBase Issue Type: Sub-task Components: Client Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.98.0, 0.95.2 Attachments: 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 0001-HBASE-8201-OrderedBytes-order-preserving-encoding.patch, 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch, 0001-HBASE-8201-OrderedBytes-provides-order-preserving-se.patch Once the spec is agreed upon, it must be implemented. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
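The fixed-length integer encodings discussed here (HBASE-9369) rest on one trick: for a fixed-width two's-complement integer, flipping the sign bit makes unsigned lexicographic byte order agree with signed numeric order. A minimal 16-bit sketch, illustrative rather than the OrderedBytes API:

```java
// Illustrative order-preserving encoding for signed 16-bit integers.
final class FixedInt16 {
    static byte[] encode(short v) {
        int flipped = (v & 0xFFFF) ^ 0x8000;           // flip the sign bit
        return new byte[] { (byte) (flipped >>> 8), (byte) flipped };
    }

    static short decode(byte[] b) {
        int raw = ((b[0] & 0xFF) << 8) | (b[1] & 0xFF);
        return (short) (raw ^ 0x8000);                 // flip it back
    }

    // Unsigned lexicographic comparison, the order HBase uses for row keys.
    static int compare(byte[] x, byte[] y) {
        for (int i = 0; i < x.length; i++) {
            int d = (x[i] & 0xFF) - (y[i] & 0xFF);
            if (d != 0) return d;
        }
        return 0;
    }
}
```

With this scheme, encode(-1) = {0x7F, 0xFF} sorts before encode(0) = {0x80, 0x00}, which sorts before encode(1) = {0x80, 0x01} — unlike raw two's-complement bytes, where negative values would sort after positive ones.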
[jira] [Updated] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9230: - Attachment: 9230.095v7.txt Slightly different because of changes in the rpc scheduler that are not in the 0.95 branch. Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.095v7.txt, 9230.txt, 9230v2.txt, 9230v3.txt, 9230v3.txt, 9230v4.txt, 9230v5.txt, 9230v6.txt, 9230v7.txt, 9230v7.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server currently stands. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up release but we should get it in to help the asynchbase conversion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9326) ServerName is created using getLocalSocketAddress, breaks binding to the wildcard address. Revert HBASE-8640
[ https://issues.apache.org/jira/browse/HBASE-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752854#comment-13752854 ] Hadoop QA commented on HBASE-9326: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600447/HBASE-9326.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to cause Findbugs (version 1.3.9) to fail. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6951//testReport/ Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6951//console This message is automatically generated. ServerName is created using getLocalSocketAddress, breaks binding to the wildcard address. 
Revert HBASE-8640 Key: HBASE-9326 URL: https://issues.apache.org/jira/browse/HBASE-9326 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Jean-Daniel Cryans Assignee: Jean-Daniel Cryans Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: HBASE-9326.patch In HBASE-8148 I added a way to bind to specific addresses, like 0.0.0.0, but right now in 0.95/trunk the ServerName is created using getLocalSocketAddress in RpcServer so 0.0.0.0 gets published in ZK. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication
[ https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752858#comment-13752858 ] Hadoop QA commented on HBASE-7709: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600457/7709-0.94-rev6.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6953//console This message is automatically generated. Infinite loop possible in Master/Master replication --- Key: HBASE-7709 URL: https://issues.apache.org/jira/browse/HBASE-7709 Project: HBase Issue Type: Bug Components: Replication Affects Versions: 0.94.6, 0.95.1 Reporter: Lars Hofhansl Assignee: Vasu Mariyala Fix For: 0.98.0, 0.94.12, 0.96.0 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 0.95-trunk-rev2.patch, 0.95-trunk-rev3.patch, 0.95-trunk-rev4.patch, 7709-0.94-rev6.txt, HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, HBASE-7709-rev3.patch, HBASE-7709-rev4.patch, HBASE-7709-rev5.patch We just discovered the following scenario: # Clusters A and B are set up in master/master replication # By accident we had Cluster C replicate to Cluster A. Now all edits originating from C will bounce between A and B. Forever! The reason is that when the edits come in from C the cluster ID is already set and won't be reset. We have a couple of options here: # Optionally only support master/master (not cycles of more than two clusters). In that case we can always reset the cluster ID in the ReplicationSource. That means that cycles of more than two clusters will have the data cycle forever. This is the only option that requires no changes in the HLog format. 
# Instead of a single cluster id per edit, maintain an (unordered) set of cluster ids that have seen this edit. Then in ReplicationSource we drop any edit that the sink has seen already. This is the cleanest approach, but it might need a lot of data stored per edit if there are many clusters involved. # Maintain a configurable counter of the maximum cycle size we want to support. Could default to 10. Store a hop-count in the WAL and have the ReplicationSource increase that hop-count on each hop. If we're over the max, just drop the edit. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9278) Reading Pre-namespace meta table edits kills the reader
[ https://issues.apache.org/jira/browse/HBASE-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Himanshu Vashishtha updated HBASE-9278: --- Attachment: HBase-9278-v2.patch Attached patch incorporates Stack's review comments. Thanks. Reading Pre-namespace meta table edits kills the reader --- Key: HBASE-9278 URL: https://issues.apache.org/jira/browse/HBASE-9278 Project: HBase Issue Type: Bug Components: migration, wal Affects Versions: 0.95.2 Reporter: Himanshu Vashishtha Assignee: Himanshu Vashishtha Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: HBase-9278-v0.patch, HBase-9278-v1-1.patch, HBase-9278-v1.patch, HBase-9278-v2.patch In upgrading to 0.96, there might be some meta/root table edits. Currently, we are just killing the SplitLogWorker thread in case it sees any META or ROOT WALEdit, which blocks log splitting/replaying of remaining WALs. {code} 2013-08-20 15:45:16,998 ERROR regionserver.SplitLogWorker (SplitLogWorker.java:run(210)) - unexpected error java.lang.IllegalArgumentException: .META. no longer exists. 
The table has been renamed to hbase:meta at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:269) at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:261) at org.apache.hadoop.hbase.regionserver.wal.HLogKey.readFields(HLogKey.java:338) at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1898) at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1938) at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.readNext(SequenceFileLogReader.java:215) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:98) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:582) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:292) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:209) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:138) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:358) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:245) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:205) at java.lang.Thread.run(Thread.java:662) 2013-08-20 15:45:16,999 INFO regionserver.SplitLogWorker (SplitLogWorker.java:run(212)) - SplitLogWorker localhost,60020,1377035111898 exiting {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9278) Reading Pre-namespace meta table edits kills the reader
[ https://issues.apache.org/jira/browse/HBASE-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752865#comment-13752865 ] Hadoop QA commented on HBASE-9278: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600462/HBase-9278-v2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified tests. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6954//console This message is automatically generated. Reading Pre-namespace meta table edits kills the reader --- Key: HBASE-9278 URL: https://issues.apache.org/jira/browse/HBASE-9278 Project: HBase Issue Type: Bug Components: migration, wal Affects Versions: 0.95.2 Reporter: Himanshu Vashishtha Assignee: Himanshu Vashishtha Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: HBase-9278-v0.patch, HBase-9278-v1-1.patch, HBase-9278-v1.patch, HBase-9278-v2.patch In upgrading to 0.96, there might be some meta/root table edits. Currently, we are just killing SplitLogWorker thread in case it sees any META, or ROOT waledit, which blocks log splitting/replaying of remaining WALs. {code} 2013-08-20 15:45:16,998 ERROR regionserver.SplitLogWorker (SplitLogWorker.java:run(210)) - unexpected error java.lang.IllegalArgumentException: .META. no longer exists. 
The table has been renamed to hbase:meta at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:269) at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:261) at org.apache.hadoop.hbase.regionserver.wal.HLogKey.readFields(HLogKey.java:338) at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1898) at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1938) at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.readNext(SequenceFileLogReader.java:215) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:98) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:582) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:292) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:209) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:138) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:358) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:245) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:205) at java.lang.Thread.run(Thread.java:662) 2013-08-20 15:45:16,999 INFO regionserver.SplitLogWorker (SplitLogWorker.java:run(212)) - SplitLogWorker localhost,60020,1377035111898 exiting {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9341) Document hbase.hstore.compaction.kv.max
[ https://issues.apache.org/jira/browse/HBASE-9341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9341: - Attachment: 9341.txt Add to hbase-default.xml so it'll show in documentation. Document hbase.hstore.compaction.kv.max --- Key: HBASE-9341 URL: https://issues.apache.org/jira/browse/HBASE-9341 Project: HBase Issue Type: Task Components: documentation Reporter: Adrien Mogenet Assignee: stack Priority: Critical Labels: compaction Fix For: 0.98.0, 0.96.0 Attachments: 9341.txt This setting is used within {{Compactor.java}}, apparently since 0.90 but has never been documented. It could be useful for people using very wide rows or those who want to fine tune their compactions. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
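An hbase-default.xml entry has this shape. The description wording below is a paraphrase for illustration, not the committed patch text; the value shown is believed to match the code's default:

```xml
<property>
  <name>hbase.hstore.compaction.kv.max</name>
  <value>10</value>
  <description>How many KeyValues to read and then write in a batch when
    flushing or compacting. Set this lower if you have big KeyValues and
    problems with Out Of Memory Exceptions; set this higher if you have
    wide, small rows.</description>
</property>
```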
[jira] [Updated] (HBASE-9341) Document hbase.hstore.compaction.kv.max
[ https://issues.apache.org/jira/browse/HBASE-9341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9341: - Status: Patch Available (was: Open) Document hbase.hstore.compaction.kv.max --- Key: HBASE-9341 URL: https://issues.apache.org/jira/browse/HBASE-9341 Project: HBase Issue Type: Task Components: documentation Reporter: Adrien Mogenet Assignee: stack Priority: Critical Labels: compaction Fix For: 0.98.0, 0.96.0 Attachments: 9341.txt This setting is used within {{Compactor.java}}, apparently since 0.90 but has never been documented. It could be useful for people using very wide rows or those who want to fine tune their compactions. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9370) Add logging to Schema change Chaos actions.
Elliott Clark created HBASE-9370: Summary: Add logging to Schema change Chaos actions. Key: HBASE-9370 URL: https://issues.apache.org/jira/browse/HBASE-9370 Project: HBase Issue Type: Task Reporter: Elliott Clark -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9341) Document hbase.hstore.compaction.kv.max
[ https://issues.apache.org/jira/browse/HBASE-9341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752876#comment-13752876 ] Nick Dimiduk commented on HBASE-9341: - +1
[jira] [Commented] (HBASE-8701) distributedLogReplay need to apply wal edits in the receiving order of those edits
[ https://issues.apache.org/jira/browse/HBASE-8701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752882#comment-13752882 ] Jeffrey Zhong commented on HBASE-8701: -- [~saint@gmail.com] The patch just needs a rebase; it uses negative mvcc to store the change sequence number of WALEdits to handle updates with the same version. Since tag support is on the horizon, I'd like to port this patch to use tags instead of the mvcc field. distributedLogReplay need to apply wal edits in the receiving order of those edits -- Key: HBASE-8701 URL: https://issues.apache.org/jira/browse/HBASE-8701 Project: HBase Issue Type: Bug Components: MTTR Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Fix For: 0.98.0, 0.96.0 Attachments: 8701-v3.txt, hbase-8701-v4.patch, hbase-8701-v5.patch, hbase-8701-v6.patch, hbase-8701-v7.patch, hbase-8701-v8.patch This issue happens in distributedLogReplay mode when recovering multiple puts of the same key + version (timestamp). After replay, the value of the key is nondeterministic. h5. The original concern situation, raised by [~eclark]: For all edits the rowkey is the same. There's a log with: [ A (ts = 0), B (ts = 0) ]. Replay the first half of the log. A user puts in C (ts = 0). The memstore has to flush. A new HFile will be created with [ C, A ] and MaxSequenceId = C's seqid. Replay the rest of the log. Flush. The issue will happen in similar situations, e.g. Put(key, t=T) in WAL1 and Put(key, t=T) in WAL2. h5. 
Below is the option (proposed by Ted) I'd like to use: a) During replay, we pass the original wal sequence number of each edit to the receiving RS b) In the receiving RS, we store the negated original sequence number of wal edits in the mvcc field of the KVs of the wal edits c) Add handling of negative MVCC in KVScannerComparator and KVComparator d) In the receiving RS, write the original sequence number into an optional field of the wal file for the chained RS failure situation e) When opening a region, we add a safety bumper (a large number) so that the new sequence numbers of a newly opened region don't collide with old sequence numbers. In the future, when we store sequence numbers along with KVs, we can adjust the above solution a little by avoiding overloading the MVCC field. h5. The other alternative options are listed below for reference: Option one a) disallow writes during recovery b) during replay, we pass the original wal sequence ids c) hold flushes till all wals of a recovering region are replayed. The memstore should hold, because we only recover unflushed wal edits. For edits with the same key + version, whichever has the larger sequence id wins. Option two a) During replay, we pass the original wal sequence ids b) for each wal edit, we store the edit's original sequence id along with its key c) during scanning, we use the original sequence id if it's present, otherwise the store file sequence id d) compaction can just keep the put with the max sequence id. Please let me know if you have better ideas.
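To make steps (b) and (c) of the proposal concrete, here is a toy Java sketch; it is not HBase's actual KVComparator, and the `Edit` class and its field names are hypothetical. It only illustrates how a negated original WAL sequence number stored in the mvcc field can be recovered in a comparator so that, for the same key and timestamp, the edit with the larger original sequence number sorts first (i.e. wins):

```java
import java.util.Comparator;

public class NegativeMvccSketch {
    static final class Edit {
        final String key;
        final long timestamp;
        final long mvcc; // negative => negated original WAL sequence number of a replayed edit
        Edit(String key, long timestamp, long mvcc) {
            this.key = key;
            this.timestamp = timestamp;
            this.mvcc = mvcc;
        }
    }

    // Sort newest first: by key, then descending timestamp, then descending
    // (recovered) sequence number. For replayed edits the mvcc field holds
    // the negated original WAL sequence number, so Math.abs recovers it.
    static final Comparator<Edit> COMPARATOR = (a, b) -> {
        int cmp = a.key.compareTo(b.key);
        if (cmp != 0) return cmp;
        cmp = Long.compare(b.timestamp, a.timestamp); // newer timestamp first
        if (cmp != 0) return cmp;
        long seqA = Math.abs(a.mvcc); // recover original sequence number
        long seqB = Math.abs(b.mvcc);
        return Long.compare(seqB, seqA); // larger original sequence wins
    };

    public static void main(String[] args) {
        Edit older = new Edit("row1", 0L, -5L); // replayed edit, original seq 5
        Edit newer = new Edit("row1", 0L, -9L); // replayed edit, original seq 9
        System.out.println(COMPARATOR.compare(newer, older) < 0); // prints "true"
    }
}
```

This is only the ordering half of the proposal; steps (d) and (e) (persisting the sequence number and the safety bumper) are orthogonal and not sketched here.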
[jira] [Updated] (HBASE-9278) Reading Pre-namespace meta table edits kills the reader
[ https://issues.apache.org/jira/browse/HBASE-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Himanshu Vashishtha updated HBASE-9278: --- Attachment: HBase-9278-v2-rebase.patch Rebased. Reading Pre-namespace meta table edits kills the reader --- Key: HBASE-9278 URL: https://issues.apache.org/jira/browse/HBASE-9278 Project: HBase Issue Type: Bug Components: migration, wal Affects Versions: 0.95.2 Reporter: Himanshu Vashishtha Assignee: Himanshu Vashishtha Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: HBase-9278-v0.patch, HBase-9278-v1-1.patch, HBase-9278-v1.patch, HBase-9278-v2.patch, HBase-9278-v2-rebase.patch In upgrading to 0.96, there might be some meta/root table edits in the WALs. Currently, we just kill the SplitLogWorker thread when it sees any META or ROOT WALEdit, which blocks log splitting/replaying of the remaining WALs. {code} 2013-08-20 15:45:16,998 ERROR regionserver.SplitLogWorker (SplitLogWorker.java:run(210)) - unexpected error java.lang.IllegalArgumentException: .META. no longer exists. 
The table has been renamed to hbase:meta at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:269) at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:261) at org.apache.hadoop.hbase.regionserver.wal.HLogKey.readFields(HLogKey.java:338) at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1898) at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1938) at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.readNext(SequenceFileLogReader.java:215) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:98) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:582) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:292) at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:209) at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:138) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:358) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:245) at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:205) at java.lang.Thread.run(Thread.java:662) 2013-08-20 15:45:16,999 INFO regionserver.SplitLogWorker (SplitLogWorker.java:run(212)) - SplitLogWorker localhost,60020,1377035111898 exiting {code}
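A hedged sketch of the direction such a fix can take: translate legacy, pre-namespace table names read from old WAL keys before handing them to TableName.valueOf(), instead of letting it throw. The .META.-to-hbase:meta mapping matches the exception text above; the helper class and method names are hypothetical, not HBase's actual patch.

```java
// Hypothetical helper, not HBase's actual fix: map pre-0.96 system table
// names found in old WAL entries to their namespaced equivalents so log
// splitting does not die on an IllegalArgumentException.
public class LegacyTableNames {
    /** Returns the 0.96-era name for a table name read from an old WAL key. */
    public static String toNamespaced(String walTableName) {
        if (".META.".equals(walTableName)) {
            // Per the exception message: ".META. no longer exists. The table
            // has been renamed to hbase:meta".
            return "hbase:meta";
        }
        // -ROOT- was retired in 0.96; callers would typically skip such edits
        // during migration rather than replay them (an assumption here).
        return walTableName;
    }

    public static void main(String[] args) {
        System.out.println(toNamespaced(".META.")); // prints "hbase:meta"
    }
}
```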
[jira] [Updated] (HBASE-9278) Reading Pre-namespace meta table edits kills the reader
[ https://issues.apache.org/jira/browse/HBASE-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9278: - Attachment: 9278v3.txt What I applied.
[jira] [Updated] (HBASE-9278) Reading Pre-namespace meta table edits kills the reader
[ https://issues.apache.org/jira/browse/HBASE-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9278: - Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to 0.95 and trunk (my patch is what Himanshu posted, but a comment at the end of a method threw off the application of his patch).