[jira] [Commented] (HBASE-9514) Prevent region from assigning before log splitting is done
[ https://issues.apache.org/jira/browse/HBASE-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769237#comment-13769237 ]

Enis Soztutar commented on HBASE-9514:
--------------------------------------

A couple of comments:
- Should we rename AM.acquireLock() -> AM.acquireRegionLock()?
- Why not do this for meta?
{code}
+    if (!region.isMetaRegion() &&
+        regionStates.wasRegionOnDeadServer(encodedName)) {
{code}
- Is it safe to expire a server like this? It means the master cannot connect to it, but it may still have the zk lease.
{code}
+    } else {
+      LOG.info(server + " is not reachable, expire it");
+      serverManager.expireServer(server);
+    }
{code}
- Should we rename RegionStates.logSplit() -> markAssignable() or something like it?
- Is this timeout intended to be active?
{code}
+  @Test //(timeout=6)
{code}

Prevent region from assigning before log splitting is done
----------------------------------------------------------
                Key: HBASE-9514
                URL: https://issues.apache.org/jira/browse/HBASE-9514
            Project: HBase
         Issue Type: Bug
         Components: Region Assignment
           Reporter: Jimmy Xiang
           Assignee: Jimmy Xiang
           Priority: Blocker
        Attachments: trunk-9514_v1.patch

If a region is assigned before log splitting is done by the server shutdown handler, the edits belonging to this region in the hlogs of the dead server will be lost. Generally this is not an issue if users don't assign/unassign a region from the hbase shell or via hbase admin; these commands are marked as expert-only in the hbase shell help, too. However, chaos monkey doesn't care. If we can prevent such regions from being assigned at a bad time, it would make things a little safer.

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
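The guard being discussed can be sketched as a minimal toy class. Everything here is a hypothetical simplification for illustration: the class name, the set-based bookkeeping, and the method names (markAssignable echoes the rename Enis suggests) are stand-ins, not the actual AssignmentManager/RegionStates API.

```java
// Toy sketch (not HBase code) of the assign-time guard: a region that was on a
// dead server must not be assigned until its hlog has been split, otherwise
// edits in the dead server's logs are lost.
import java.util.HashSet;
import java.util.Set;

public class AssignmentGuard {
  private final Set<String> regionsOnDeadServers = new HashSet<>();
  private final Set<String> logSplitDone = new HashSet<>();

  /** Called when a server dies: its regions become temporarily unassignable. */
  public void markRegionOnDeadServer(String encodedName) {
    regionsOnDeadServers.add(encodedName);
  }

  /** Called by the server shutdown handler once log splitting finishes. */
  public void markAssignable(String encodedName) {
    logSplitDone.add(encodedName);
  }

  /** A manual assign (e.g. from the hbase shell) must respect this check. */
  public boolean canAssign(String encodedName) {
    return !regionsOnDeadServers.contains(encodedName)
        || logSplitDone.contains(encodedName);
  }
}
```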
[jira] [Updated] (HBASE-9555) Reset loadbalancer back to StochasticLoadBalancer
[ https://issues.apache.org/jira/browse/HBASE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-9555:
-------------------------
    Resolution: Fixed
      Assignee: stack
        Status: Resolved  (was: Patch Available)

Committed to trunk and to 0.96

Reset loadbalancer back to StochasticLoadBalancer
-------------------------------------------------
                Key: HBASE-9555
                URL: https://issues.apache.org/jira/browse/HBASE-9555
            Project: HBase
         Issue Type: Bug
         Components: Balancer
   Affects Versions: 0.96.0
           Reporter: Elliott Clark
           Assignee: stack
           Priority: Critical
            Fix For: 0.98.0, 0.96.0
        Attachments: balancer.txt

It seems like HBASE-7296 changed the loadbalancer class by mistake, removing a good deal of functionality.
[jira] [Commented] (HBASE-9555) Reset loadbalancer back to StochasticLoadBalancer
[ https://issues.apache.org/jira/browse/HBASE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769246#comment-13769246 ]

Hadoop QA commented on HBASE-9555:
----------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12603528/balancer.txt
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 9 new or modified tests.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:red}-1 core tests{color}. The patch failed these unit tests:
    org.apache.hadoop.hbase.TestRegionRebalancing

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7265//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7265//console

This message is automatically generated.

Reset loadbalancer back to StochasticLoadBalancer
-------------------------------------------------
                Key: HBASE-9555
                URL: https://issues.apache.org/jira/browse/HBASE-9555
[jira] [Commented] (HBASE-9503) minor optimization for getRowKeyAtOrBefore
[ https://issues.apache.org/jira/browse/HBASE-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769249#comment-13769249 ]

Liang Xie commented on HBASE-9503:
----------------------------------

En, agree with you [~sershe], let me close it right now; seems my finding is immature :)

minor optimization for getRowKeyAtOrBefore
------------------------------------------
                Key: HBASE-9503
                URL: https://issues.apache.org/jira/browse/HBASE-9503
            Project: HBase
         Issue Type: Bug
         Components: regionserver
   Affects Versions: 0.98.0, 0.95.2, 0.94.12, 0.96.1
           Reporter: Liang Xie
           Assignee: Liang Xie
        Attachments: HBASE-9503.txt, HBASE-9503-v2.txt

We could shortcut getRowKeyAtOrBefore() as soon as we find an exact match. This should pay off when there are lots of target storefiles. It's a minor change, w/o a new test case :)
[jira] [Updated] (HBASE-9503) minor optimization for getRowKeyAtOrBefore
[ https://issues.apache.org/jira/browse/HBASE-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Liang Xie updated HBASE-9503:
-----------------------------
    Resolution: Won't Fix
        Status: Resolved  (was: Patch Available)

minor optimization for getRowKeyAtOrBefore
------------------------------------------
                Key: HBASE-9503
                URL: https://issues.apache.org/jira/browse/HBASE-9503
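The shortcut proposed in HBASE-9503 (ultimately resolved Won't Fix) can be sketched in miniature. The storefile model below is a hypothetical stand-in, each "storefile" reduced to a sorted set of row keys, not HBase's real StoreFile/KeyValue machinery; the point is only the early exit once an exact match is found.

```java
// Toy sketch (not HBase code) of getRowKeyAtOrBefore with an early exit:
// return the greatest row <= target across all storefiles, stopping as soon
// as any storefile yields an exact match, since nothing can beat it.
import java.util.List;
import java.util.NavigableSet;

public class RowAtOrBefore {
  public static String getRowKeyAtOrBefore(
      List<? extends NavigableSet<String>> storeFiles, String target) {
    String best = null;
    for (NavigableSet<String> sf : storeFiles) {
      String candidate = sf.floor(target);  // greatest row <= target in this file
      if (candidate == null) {
        continue;  // every row in this file is greater than the target
      }
      if (candidate.equals(target)) {
        return candidate;  // exact match: skip the remaining storefiles
      }
      if (best == null || candidate.compareTo(best) > 0) {
        best = candidate;
      }
    }
    return best;  // may be null if no row is <= target anywhere
  }
}
```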
[jira] [Updated] (HBASE-9555) Reset loadbalancer back to StochasticLoadBalancer
[ https://issues.apache.org/jira/browse/HBASE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-9555:
-------------------------
    Attachment: 9555.addendum.txt

Failed to change DefaultLoadBalancer to SimpleLoadBalancer in one place, in the failed test.

Reset loadbalancer back to StochasticLoadBalancer
-------------------------------------------------
                Key: HBASE-9555
                URL: https://issues.apache.org/jira/browse/HBASE-9555
[jira] [Commented] (HBASE-9555) Reset loadbalancer back to StochasticLoadBalancer
[ https://issues.apache.org/jira/browse/HBASE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769260#comment-13769260 ]

stack commented on HBASE-9555:
------------------------------

I committed the addendum to 0.96 and to trunk.

Reset loadbalancer back to StochasticLoadBalancer
-------------------------------------------------
                Key: HBASE-9555
                URL: https://issues.apache.org/jira/browse/HBASE-9555
[jira] [Updated] (HBASE-9153) Create a deprecation policy enforcement check
[ https://issues.apache.org/jira/browse/HBASE-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksandr Shulman updated HBASE-9153:
-------------------------------------
    Attachment: HBASE-9153-v5-0.95.patch
                HBASE-9153-v5-0.94.patch
                HBASE-9153-v5-trunk.patch

File name fixed in the comments.

Create a deprecation policy enforcement check
---------------------------------------------
                Key: HBASE-9153
                URL: https://issues.apache.org/jira/browse/HBASE-9153
            Project: HBase
         Issue Type: Task
           Reporter: Jonathan Hsieh
        Attachments: HBASE-9153-v1.patch, HBASE-9153-v3.patch, HBASE-9153-v4-0.94.patch, HBASE-9153-v4-0.95.patch, HBASE-9153-v4-trunk.patch, HBASE-9153-v5-0.94.patch, HBASE-9153-v5-0.95.patch, HBASE-9153-v5-trunk.patch

We've had a few issues now where we've removed APIs without deprecating them, or deprecated them late in a release (HBASE-9142, HBASE-9093). We should have a tool that enforces our API deprecation policy as a release-time check or as a precommit check.
[jira] [Commented] (HBASE-9553) Pad HFile blocks to a fixed size before placing them into the blockcache
[ https://issues.apache.org/jira/browse/HBASE-9553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769273#comment-13769273 ]

Anoop Sam John commented on HBASE-9553:
---------------------------------------

What about when on-cache encoding is enabled? Can the HFile block sizes change much from block to block?

Pad HFile blocks to a fixed size before placing them into the blockcache
------------------------------------------------------------------------
                Key: HBASE-9553
                URL: https://issues.apache.org/jira/browse/HBASE-9553
            Project: HBase
         Issue Type: Bug
           Reporter: Lars Hofhansl

In order to make it easy on the garbage collector and to avoid full compaction phases, we should make sure that all (or at least a large percentage) of the HFile blocks cached in the block cache are exactly the same size. Currently an HFile block is typically slightly larger than the declared block size, as the block will accommodate the last KV on the block.

The padding would be a ColumnFamily option. In many cases 100 bytes would probably be a good value to make all blocks exactly the same size (but of course it depends on the max size of the KVs). This does not have to be perfect: the more blocks evicted and replaced in the block cache are of the exact same size, the easier it should be on the GC.

Thoughts?
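The padding idea above can be illustrated with a toy helper. This is a hypothetical sketch, not HBase code, and it uses one possible variant: rounding each block up to the next multiple of a pad quantum, rather than the flat ~100-byte slack Lars mentions; either way the goal is that most cached blocks end up the same allocation size.

```java
// Toy sketch (hypothetical, not HBase code): round a serialized block up to
// a multiple of padQuantum so that nearly all blocks in the block cache have
// identical sizes, which makes evict-and-replace cheaper for the GC.
import java.util.Arrays;

public class BlockPadder {
  public static byte[] pad(byte[] block, int padQuantum) {
    // Next multiple of padQuantum that is >= block.length.
    int padded = ((block.length + padQuantum - 1) / padQuantum) * padQuantum;
    return padded == block.length ? block : Arrays.copyOf(block, padded);
  }
}
```

The trade-off is a little wasted cache memory per block in exchange for uniform allocation sizes; with on-cache block encoding (Anoop's question) the pre-padding sizes could vary more, which would force a larger quantum or more waste.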
[jira] [Updated] (HBASE-9153) Create a deprecation policy enforcement check
[ https://issues.apache.org/jira/browse/HBASE-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksandr Shulman updated HBASE-9153:
-------------------------------------
    Attachment: HBASE-9153-v6-trunk.patch
                HBASE-9153-v6-0.95.patch
                HBASE-9153-v6-0.94.patch

Had to fix one last thing.

Create a deprecation policy enforcement check
---------------------------------------------
                Key: HBASE-9153
                URL: https://issues.apache.org/jira/browse/HBASE-9153
[jira] [Commented] (HBASE-9555) Reset loadbalancer back to StochasticLoadBalancer
[ https://issues.apache.org/jira/browse/HBASE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769304#comment-13769304 ]

Hudson commented on HBASE-9555:
-------------------------------

FAILURE: Integrated in HBase-TRUNK #4520 (See [https://builds.apache.org/job/HBase-TRUNK/4520/])
HBASE-9555 Reset loadbalancer back to StochasticLoadBalancer (stack: rev 1523914)
* /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/DefaultLoadBalancer.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestDefaultLoadBalancer.java

Reset loadbalancer back to StochasticLoadBalancer
-------------------------------------------------
                Key: HBASE-9555
                URL: https://issues.apache.org/jira/browse/HBASE-9555
[jira] [Commented] (HBASE-9510) Namespace operations should throw clean exceptions
[ https://issues.apache.org/jira/browse/HBASE-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769305#comment-13769305 ]

Hudson commented on HBASE-9510:
-------------------------------

FAILURE: Integrated in HBase-TRUNK #4520 (See [https://builds.apache.org/job/HBase-TRUNK/4520/])
HBASE-9510 Namespace operations should throw clean exceptions (stack: rev 1523903)
* /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceExistException.java
* /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceNotFoundException.java
HBASE-9510 Namespace operations should throw clean exceptions (stack: rev 1523902)
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestNamespace.java

Namespace operations should throw clean exceptions
--------------------------------------------------
                Key: HBASE-9510
                URL: https://issues.apache.org/jira/browse/HBASE-9510
            Project: HBase
         Issue Type: Bug
         Components: master
           Reporter: Enis Soztutar
           Assignee: Enis Soztutar
            Fix For: 0.98.0, 0.96.0
        Attachments: 9510v4.txt, 9510v4.txt, hbase-9510_v1.patch, hbase-9510_v2.patch, hbase-9510_v3.patch

Some of the namespace operations do not throw clean exceptions mimicking the table exceptions (TableNotFoundException, etc). For example:
{code}
hbase(main):007:0> describe_namespace 'non_existing_namespace'
ERROR: java.io.IOException
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2117)
    at org.apache.hadoop.hbase.ipc.RpcServer$CallRunner.run(RpcServer.java:1816)
    at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:165)
    at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$0(SimpleRpcScheduler.java:161)
    at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:113)
    at java.lang.Thread.run(Thread.java:680)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toProtoNamespaceDescriptor(ProtobufUtil.java:2138)
    at org.apache.hadoop.hbase.master.HMaster.getNamespaceDescriptor(HMaster.java:3029)
    at org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32904)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2079)
    ... 5 more
{code}
We can clean up the exceptions thrown from ns commands.
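The direction of the fix can be sketched with a toy lookup: fail fast with a descriptive exception instead of returning null and letting a NullPointerException leak through the RPC layer. The nested NamespaceNotFoundException here is a simplified stand-in that mirrors the real class this patch adds under org.apache.hadoop.hbase; the lookup class itself and its map-backed storage are hypothetical.

```java
// Toy sketch of the cleaned-up lookup (hypothetical simplified types).
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class NamespaceLookup {
  /** Simplified stand-in for org.apache.hadoop.hbase.NamespaceNotFoundException. */
  public static class NamespaceNotFoundException extends IOException {
    public NamespaceNotFoundException(String name) {
      super(name);
    }
  }

  private final Map<String, String> namespaces = new HashMap<>();

  public void create(String name) {
    namespaces.put(name, name);
  }

  /** Throw a clean, specific exception rather than returning null. */
  public String getNamespaceDescriptor(String name) throws NamespaceNotFoundException {
    String ns = namespaces.get(name);
    if (ns == null) {
      throw new NamespaceNotFoundException(name);
    }
    return ns;
  }
}
```

With this shape, the shell can surface "NamespaceNotFoundException: non_existing_namespace" instead of the opaque IOException-wrapping-NPE shown in the trace above.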
[jira] [Commented] (HBASE-9153) Create a deprecation policy enforcement check
[ https://issues.apache.org/jira/browse/HBASE-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769323#comment-13769323 ]

Hadoop QA commented on HBASE-9153:
----------------------------------

{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12603555/HBASE-9153-v5-0.95.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 5 new or modified tests.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:green}+1 core tests{color}. The patch passed unit tests in .

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7266//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7266//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7266//console

Create a deprecation policy enforcement check
---------------------------------------------
                Key: HBASE-9153
                URL: https://issues.apache.org/jira/browse/HBASE-9153
[jira] [Commented] (HBASE-9510) Namespace operations should throw clean exceptions
[ https://issues.apache.org/jira/browse/HBASE-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769332#comment-13769332 ]

Hudson commented on HBASE-9510:
-------------------------------

SUCCESS: Integrated in hbase-0.96 #60 (See [https://builds.apache.org/job/hbase-0.96/60/])
HBASE-9510 Namespace operations should throw clean exceptions (stack: rev 1523907)
* /hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceExistException.java
* /hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceNotFoundException.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/TestNamespace.java

Namespace operations should throw clean exceptions
--------------------------------------------------
                Key: HBASE-9510
                URL: https://issues.apache.org/jira/browse/HBASE-9510
[jira] [Commented] (HBASE-9555) Reset loadbalancer back to StochasticLoadBalancer
[ https://issues.apache.org/jira/browse/HBASE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769331#comment-13769331 ]

Hudson commented on HBASE-9555:
-------------------------------

SUCCESS: Integrated in hbase-0.96 #60 (See [https://builds.apache.org/job/hbase-0.96/60/])
HBASE-9555 Reset loadbalancer back to StochasticLoadBalancer; ADDENDUM (stack: rev 1523927)
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
HBASE-9555 Reset loadbalancer back to StochasticLoadBalancer (stack: rev 1523915)
* /hbase/branches/0.96/hbase-common/src/main/resources/hbase-default.xml
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/DefaultLoadBalancer.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestDefaultLoadBalancer.java

Reset loadbalancer back to StochasticLoadBalancer
-------------------------------------------------
                Key: HBASE-9555
                URL: https://issues.apache.org/jira/browse/HBASE-9555
[jira] [Commented] (HBASE-9153) Create a deprecation policy enforcement check
[ https://issues.apache.org/jira/browse/HBASE-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769336#comment-13769336 ]

Hadoop QA commented on HBASE-9153:
----------------------------------

{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12603559/HBASE-9153-v6-trunk.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 5 new or modified tests.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:green}+1 core tests{color}. The patch passed unit tests in .

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7267//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7267//console

Create a deprecation policy enforcement check
---------------------------------------------
                Key: HBASE-9153
                URL: https://issues.apache.org/jira/browse/HBASE-9153
[jira] [Updated] (HBASE-9488) Improve performance for small scan
[ https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chunhui shen updated HBASE-9488:
--------------------------------
    Attachment: HBASE-9488-trunkV5.patch

Rebasing the patch

Improve performance for small scan
----------------------------------
                Key: HBASE-9488
                URL: https://issues.apache.org/jira/browse/HBASE-9488
            Project: HBase
         Issue Type: Improvement
         Components: Client, Performance, Scanners
           Reporter: chunhui shen
           Assignee: chunhui shen
            Fix For: 0.98.0, 0.94.13
        Attachments: hbase-9488-94-v3.patch, HBASE-9488-trunk.patch, HBASE-9488-trunkV2.patch, HBASE-9488-trunkV3.patch, HBASE-9488-trunkV4.patch, HBASE-9488-trunkV4.patch, HBASE-9488-trunkV5.patch, mergeRpcCallForScan.patch, test results.jpg

review board: https://reviews.apache.org/r/14059/

*Performance Improvement*
Tests show about a 1.5~3X improvement for small scans with limit=50 under a 100% cache hit ratio. See the picture attachment for more performance test results.

*Usage:*
{code}
Scan scan = new Scan(startRow, stopRow);
scan.setSmall(true);
ResultScanner scanner = table.getScanner(scan);
{code}
Set the new 'small' attribute to true on the scan object; everything else is the same.

Currently, one scan operation makes at least 3 RPC calls: openScanner(), next(), closeScanner(). I think we could reduce the RPC calls to one for a small scan to get better performance. Also, using pread is better than seek+read for a small scan (for this point, see more on HBASE-7266).

The patch implements such a small scan, tested as follows:
a. Environment: patched 0.94 version; one regionserver; one client with 50 concurrent threads; KV size 50/100; 100% LRU cache hit ratio; random scan start rows.
b. Results: see the picture attachment.
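The RPC arithmetic in the description can be modeled with a toy calculation. This is not HBase code, just a sketch of why folding openScanner(), next(), and closeScanner() into one call pays off most for small, limit-bounded scans.

```java
// Toy model (hypothetical, not HBase code) of scan RPC counts.
public class ScanRpcModel {
  /** Regular scan: openScanner + one next() per batch of `caching` rows + closeScanner. */
  public static int regularScanRpcs(int rows, int caching) {
    int nexts = (rows + caching - 1) / caching;  // ceil(rows / caching)
    return 1 + nexts + 1;
  }

  /** Proposed small scan: open, fetch, and close merged into a single RPC. */
  public static int smallScanRpcs() {
    return 1;
  }
}
```

For a limit=50 scan fetched in one batch, the model gives 3 RPCs for a regular scan versus 1 for a small scan, which is consistent with the reported 1.5~3X improvement once per-call latency dominates.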
[jira] [Resolved] (HBASE-9545) NPE when trying to get cluster status on an hbase cluster that isn't there
[ https://issues.apache.org/jira/browse/HBASE-9545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HBASE-9545. --- Resolution: Duplicate
NPE when trying to get cluster status on an hbase cluster that isn't there -- Key: HBASE-9545 URL: https://issues.apache.org/jira/browse/HBASE-9545 Project: HBase Issue Type: Bug Components: Client Environment: 0.95.3 snapshot, commit 943bffc Reporter: Steve Loughran Priority: Minor
As part of some fault injection testing, I'm trying to talk to an HBase cluster that isn't there, opening a connection and expecting things to fail. It turns out you can create an {{HBaseAdmin}} instance, but when you ask for its cluster status, the NPE surfaces.
[jira] [Commented] (HBASE-9545) NPE when trying to get cluster status on an hbase cluster that isn't there
[ https://issues.apache.org/jira/browse/HBASE-9545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769346#comment-13769346 ] Steve Loughran commented on HBASE-9545: --- You are right - it goes away on trunk. Marking as duplicate.
[jira] [Commented] (HBASE-9555) Reset loadbalancer back to StochasticLoadBalancer
[ https://issues.apache.org/jira/browse/HBASE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769383#comment-13769383 ] Hudson commented on HBASE-9555: --- SUCCESS: Integrated in HBase-TRUNK #4521 (See [https://builds.apache.org/job/HBase-TRUNK/4521/]) HBASE-9555 Reset loadbalancer back to StochasticLoadBalancer; ADDENDUM (stack: rev 1523926) * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java Reset loadbalancer back to StochasticLoadBalancer - Key: HBASE-9555 URL: https://issues.apache.org/jira/browse/HBASE-9555 Project: HBase Issue Type: Bug Components: Balancer Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9555.addendum.txt, balancer.txt It seems like HBASE-7296 changed the loadbalancer class by mistake, removing a good deal of functionality. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9488) Improve performance for small scan
[ https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769407#comment-13769407 ] Hadoop QA commented on HBASE-9488: --
{color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12603570/HBASE-9488-trunkV5.patch against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7268//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7268//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7268//console
This message is automatically generated.
[jira] [Created] (HBASE-9557) strange dependencies for hbase-client
Nicolas Liochon created HBASE-9557: -- Summary: strange dependencies for hbase-client Key: HBASE-9557 URL: https://issues.apache.org/jira/browse/HBASE-9557 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon
Here is what we have with hadoop 2. On our plate we have:
- junit (should be test scope; it's not, because we use it in the integration test runner)
- log4j (we have direct dependencies in RegionPlacementMaintainer, RESTServlet, and LogMonitoring, not counting the dependencies in the tests)
The others come in via hadoop. I marked the ones that seemed strange to me. Do we need all of them?
mvn dependency:tree -pl hbase-client -Dhadoop.profile=2.0
[INFO] --- maven-dependency-plugin:2.1:tree (default-cli) @ hbase-client ---
[INFO] org.apache.hbase:hbase-client:jar:0.97.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-common:jar:0.97.0-SNAPSHOT:compile
[INFO] | \- commons-collections:commons-collections:jar:3.2.1:compile
[INFO] +- org.apache.hbase:hbase-common:test-jar:tests:0.97.0-SNAPSHOT:test
[INFO] +- org.apache.hbase:hbase-protocol:jar:0.97.0-SNAPSHOT:compile
[INFO] +- commons-codec:commons-codec:jar:1.7:compile
[INFO] +- commons-io:commons-io:jar:2.4:compile
[INFO] +- commons-lang:commons-lang:jar:2.6:compile
[INFO] +- commons-logging:commons-logging:jar:1.1.1:compile
[INFO] +- com.google.guava:guava:jar:12.0.1:compile
[INFO] | \- com.google.code.findbugs:jsr305:jar:1.3.9:compile
[INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile
[INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.5:compile
[INFO] | +- org.slf4j:slf4j-api:jar:1.6.4:compile
[INFO] | \- org.slf4j:slf4j-log4j12:jar:1.6.1:compile
[INFO] +- org.cloudera.htrace:htrace-core:jar:2.01:compile
*[INFO] | \- org.mortbay.jetty:jetty-util:jar:6.1.26:compile === why?*
[INFO] +- org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8:compile
[INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.8.8:compile
[INFO] +- io.netty:netty:jar:3.5.9.Final:compile
[INFO] +- log4j:log4j:jar:1.2.17:test (scope not updated to compile)
[INFO] +- org.apache.hadoop:hadoop-common:jar:2.1.0-beta:compile
[INFO] | +- commons-cli:commons-cli:jar:1.2:compile
[INFO] | +- org.apache.commons:commons-math:jar:2.2:compile (version managed from 2.1)
[INFO] | +- xmlenc:xmlenc:jar:0.52:compile
*[INFO] | +- commons-httpclient:commons-httpclient:jar:3.0.1:compile (version managed from 3.1) = decreases the version. dangerous. But why does hadoop do this?*
[INFO] | +- commons-net:commons-net:jar:3.1:compile
*[INFO] | +- javax.servlet:servlet-api:jar:2.5:compile why a servlet api in hbase-client or hadoop-common?*
[INFO] | +- org.mortbay.jetty:jetty:jar:6.1.26:compile
[INFO] | +- com.sun.jersey:jersey-core:jar:1.8:compile
[INFO] | +- com.sun.jersey:jersey-json:jar:1.8:compile
*[INFO] | | +- org.codehaus.jettison:jettison:jar:1.3.1:compile (version managed from 1.1)*
[INFO] | | | \- stax:stax-api:jar:1.0.1:compile
[INFO] | | +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile
*[INFO] | | | \- javax.xml.bind:jaxb-api:jar:2.1:compile (version managed from 2.2.2) = decreases the version. dangerous*
[INFO] | | | \- javax.activation:activation:jar:1.1:compile
[INFO] | | +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.8:compile (version managed from 1.7.1)
[INFO] | | \- org.codehaus.jackson:jackson-xc:jar:1.8.8:compile (version managed from 1.7.1)
*[INFO] | +- com.sun.jersey:jersey-server:jar:1.8:compile = Why a server in a common piece of code? Could we exclude it from hbase-client?*
[INFO] | | \- asm:asm:jar:3.1:compile
*[INFO] | +- tomcat:jasper-compiler:jar:5.5.23:runtime === ??? why*
*[INFO] | +- tomcat:jasper-runtime:jar:5.5.23:runtime === ??? why*
*[INFO] | +- javax.servlet.jsp:jsp-api:jar:2.1:runtime Why? Could we exclude it from hbase-client?*
[INFO] | +- commons-el:commons-el:jar:1.0:runtime
[INFO] | +- net.java.dev.jets3t:jets3t:jar:0.6.1:compile
[INFO] | +- commons-configuration:commons-configuration:jar:1.6:compile
[INFO] | | +- commons-digester:commons-digester:jar:1.8:compile
[INFO] | | | \- commons-beanutils:commons-beanutils:jar:1.7.0:compile
[INFO] | | \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
[INFO] | +- org.apache.avro:avro:jar:1.5.3:compile
[INFO] | | +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
[INFO] | | \- org.xerial.snappy:snappy-java:jar:1.0.3.2:compile
[INFO] | +- com.jcraft:jsch:jar:0.1.42:compile
[INFO] | \- org.apache.commons:commons-compress:jar:1.4:compile
[INFO] | \- org.tukaani:xz:jar:1.0:compile
[INFO] +- org.apache.hadoop:hadoop-auth:jar:2.1.0-beta:compile
[INFO] +- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.1.0-beta:compile
[INFO] | +- org.apache.hadoop:hadoop-yarn-common:jar:2.1.0-beta:compile
[INFO] | | +- org.apache.hadoop:hadoop-yarn-api:jar:2.1.0-beta:compile
[INFO] | | +-
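One way to act on the flagged transitive jars is a Maven {{exclusion}} on the hadoop-common dependency in the hbase-client pom. The fragment below is a hypothetical sketch only: it picks three of the jars questioned in the tree above (jersey-server, jasper-compiler, jasper-runtime), and whether each exclusion is actually safe would need to be verified against the client code paths.

```xml
<!-- Hypothetical pom.xml fragment for hbase-client: exclude server-side
     jars pulled in transitively by hadoop-common. Safety of each
     exclusion is an assumption, not verified here. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <exclusion>
      <groupId>com.sun.jersey</groupId>
      <artifactId>jersey-server</artifactId>
    </exclusion>
    <exclusion>
      <groupId>tomcat</groupId>
      <artifactId>jasper-compiler</artifactId>
    </exclusion>
    <exclusion>
      <groupId>tomcat</groupId>
      <artifactId>jasper-runtime</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Re-running {{mvn dependency:tree -pl hbase-client}} after such a change would confirm the jars no longer appear in the client's compile scope.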
[jira] [Updated] (HBASE-9557) strange dependencies for hbase-client
[ https://issues.apache.org/jira/browse/HBASE-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9557: --- Status: Patch Available (was: Open)
strange dependencies for hbase-client - Key: HBASE-9557 URL: https://issues.apache.org/jira/browse/HBASE-9557 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Attachments: 9557.v1.patch
[jira] [Updated] (HBASE-9557) strange dependencies for hbase-client
[ https://issues.apache.org/jira/browse/HBASE-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9557: --- Attachment: 9557.v1.patch
strange dependencies for hbase-client - Key: HBASE-9557 URL: https://issues.apache.org/jira/browse/HBASE-9557 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Attachments: 9557.v1.patch
[jira] [Commented] (HBASE-9555) Reset loadbalancer back to StochasticLoadBalancer
[ https://issues.apache.org/jira/browse/HBASE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769464#comment-13769464 ] Hudson commented on HBASE-9555: --- SUCCESS: Integrated in hbase-0.96-hadoop2 #33 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/33/]) HBASE-9555 Reset loadbalancer back to StochasticLoadBalancer; ADDENDUM (stack: rev 1523927)
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
HBASE-9555 Reset loadbalancer back to StochasticLoadBalancer (stack: rev 1523915)
* /hbase/branches/0.96/hbase-common/src/main/resources/hbase-default.xml
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/DefaultLoadBalancer.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestDefaultLoadBalancer.java
Reset loadbalancer back to StochasticLoadBalancer - Key: HBASE-9555 URL: https://issues.apache.org/jira/browse/HBASE-9555
[jira] [Commented] (HBASE-9554) TestOfflineMetaRebuildOverlap#testMetaRebuildOverlapFail fails due to NPE
[ https://issues.apache.org/jira/browse/HBASE-9554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769462#comment-13769462 ] Hudson commented on HBASE-9554: --- SUCCESS: Integrated in hbase-0.96-hadoop2 #33 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/33/]) HBASE-9554 TestOfflineMetaRebuildOverlap#testMetaRebuildOverlapFail fails due to NPE (stack: rev 1523869)
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
TestOfflineMetaRebuildOverlap#testMetaRebuildOverlapFail fails due to NPE - Key: HBASE-9554 URL: https://issues.apache.org/jira/browse/HBASE-9554 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0, 0.96.0 Attachments: 9554-v1.txt
From http://54.241.6.143/job/HBase-TRUNK/org.apache.hbase$hbase-server/496/testReport/org.apache.hadoop.hbase.util.hbck/TestOfflineMetaRebuildOverlap/testMetaRebuildOverlapFail/ :
{code}
java.lang.Exception: test timed out after 12 milliseconds
  at java.lang.Thread.sleep(Native Method)
  at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:148)
  at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:94)
  at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3153)
  at org.apache.hadoop.hbase.client.HBaseAdmin.unassign(HBaseAdmin.java:1714)
  at org.apache.hadoop.hbase.util.hbck.OfflineMetaRebuildTestCore.wipeOutMeta(OfflineMetaRebuildTestCore.java:242)
  at org.apache.hadoop.hbase.util.hbck.TestOfflineMetaRebuildOverlap.testMetaRebuildOverlapFail(TestOfflineMetaRebuildOverlap.java:54)
  ...
2013-09-17 00:59:52,928 ERROR [FifoRpcScheduler.handler1-thread-2] ipc.RpcServer(2016): Unexpected throwable object
java.lang.NullPointerException
  at org.apache.hadoop.hbase.master.AssignmentManager.unassign(AssignmentManager.java:2301)
  at org.apache.hadoop.hbase.master.AssignmentManager.unassign(AssignmentManager.java:2381)
  at org.apache.hadoop.hbase.master.HMaster.unassignRegion(HMaster.java:2499)
  at org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32854)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
  at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
{code}
[jira] [Commented] (HBASE-9510) Namespace operations should throw clean exceptions
[ https://issues.apache.org/jira/browse/HBASE-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769466#comment-13769466 ] Hudson commented on HBASE-9510: --- SUCCESS: Integrated in hbase-0.96-hadoop2 #33 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/33/]) HBASE-9510 Namespace operations should throw clean exceptions (stack: rev 1523907)
* /hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceExistException.java
* /hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceNotFoundException.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/TestNamespace.java
Namespace operations should throw clean exceptions -- Key: HBASE-9510 URL: https://issues.apache.org/jira/browse/HBASE-9510 Project: HBase Issue Type: Bug Components: master Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.0 Attachments: 9510v4.txt, 9510v4.txt, hbase-9510_v1.patch, hbase-9510_v2.patch, hbase-9510_v3.patch
Some of the namespace operations do not throw clean exceptions mimicking the table exceptions (TableNotFoundException, etc.).
For example:
{code}
hbase(main):007:0> describe_namespace 'non_existing_namespace'

ERROR: java.io.IOException
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2117)
  at org.apache.hadoop.hbase.ipc.RpcServer$CallRunner.run(RpcServer.java:1816)
  at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:165)
  at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$0(SimpleRpcScheduler.java:161)
  at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:113)
  at java.lang.Thread.run(Thread.java:680)
Caused by: java.lang.NullPointerException
  at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toProtoNamespaceDescriptor(ProtobufUtil.java:2138)
  at org.apache.hadoop.hbase.master.HMaster.getNamespaceDescriptor(HMaster.java:3029)
  at org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32904)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2079)
  ... 5 more
{code}
We can clean up the exceptions thrown from ns commands.
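The pattern behind the fix is simple: check for the missing namespace explicitly and throw a specific, user-facing exception instead of letting a NullPointerException escape through the RPC layer. The standalone sketch below only mirrors the real HBase classes in spirit; the class, its map-backed store, and the method names are illustrative assumptions, not the committed code.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not the actual HBase code): surface a specific
// exception for a missing namespace rather than an NPE from a null descriptor.
public class NamespaceLookup {
    // Named after o.a.h.hbase.NamespaceNotFoundException, but unrelated to it.
    public static class NamespaceNotFoundException extends Exception {
        public NamespaceNotFoundException(String ns) {
            super("Namespace does not exist: " + ns);
        }
    }

    private final Map<String, String> namespaces = new HashMap<>();

    public NamespaceLookup() {
        namespaces.put("default", "default namespace");
    }

    public String describeNamespace(String name) throws NamespaceNotFoundException {
        String desc = namespaces.get(name);
        if (desc == null) {
            // The explicit check replaces the NPE previously thrown when a
            // null descriptor reached serialization.
            throw new NamespaceNotFoundException(name);
        }
        return desc;
    }
}
```

With this shape, the shell can print a one-line "Namespace does not exist" message instead of a server-side stack trace.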
[jira] [Commented] (HBASE-9467) write can be totally blocked temporarily by a write-heavy region
[ https://issues.apache.org/jira/browse/HBASE-9467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769465#comment-13769465 ] Hudson commented on HBASE-9467: --- SUCCESS: Integrated in hbase-0.96-hadoop2 #33 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/33/]) HBASE-9467 write can be totally blocked temporarily by a write-heavy region (stack: rev 1523880)
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
write can be totally blocked temporarily by a write-heavy region Key: HBASE-9467 URL: https://issues.apache.org/jira/browse/HBASE-9467 Project: HBase Issue Type: Improvement Reporter: Feng Honghua Assignee: Feng Honghua Fix For: 0.98.0, 0.96.0 Attachments: HBASE-9467-trunk-v0.patch, HBASE-9467-trunk-v1.patch, HBASE-9467-trunk-v1.patch, HBASE-9467-trunk-v1.patch
Writes to a region can be blocked temporarily if the memstore of that region reaches the threshold (hbase.hregion.memstore.block.multiplier * hbase.hregion.flush.size), until the memstore of that region is flushed. For a write-heavy region, if its write requests saturate all the handler threads of the RS when write blocking for that region occurs, requests from other regions/tables to that RS also can't be served because no handler threads are available... until the pending writes of that write-heavy region are served after the flush is done. Hence, during this period the RS can't serve any request for any table/region, all because of a single write-heavy region. This doesn't sound very reasonable, right? Maybe write requests for a region should only be served by a subset of the handler threads, so that write blocking on any single region can't lead to the scenario above? Comments?
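The blocking threshold described above is the product of two settings. The fragment below is an illustrative hbase-site.xml sketch, not a recommendation; the values shown are assumptions, and defaults vary by HBase version (note the key is usually spelled hbase.hregion.memstore.flush.size).

```xml
<!-- Illustrative hbase-site.xml fragment; values are examples only.
     Writes to a region block once its memstore reaches
     flush.size * block.multiplier, until the flush completes. -->
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value> <!-- 128 MB per-region flush threshold -->
</property>
<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>2</value> <!-- block writes once the memstore hits 256 MB -->
</property>
```

Raising the multiplier delays blocking at the cost of more memory held in memstores; it does not address the handler-thread starvation the issue describes.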
[jira] [Commented] (HBASE-9539) Handle post namespace snapshot files when checking for HFile V1
[ https://issues.apache.org/jira/browse/HBASE-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769463#comment-13769463 ] Hudson commented on HBASE-9539: --- SUCCESS: Integrated in hbase-0.96-hadoop2 #33 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/33/]) HBASE-9539 Handle post namespace snapshot files when checking for HFile V1 (stack: rev 1523868) * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/migration/UpgradeTo96.java * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HFileV1Detector.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/migration/TestUpgradeTo96.java Handle post namespace snapshot files when checking for HFile V1 - Key: HBASE-9539 URL: https://issues.apache.org/jira/browse/HBASE-9539 Project: HBase Issue Type: Bug Components: migration Affects Versions: 0.95.2 Reporter: Himanshu Vashishtha Assignee: Himanshu Vashishtha Fix For: 0.98.0, 0.96.0 Attachments: HBase-9539.patch, HBase-9539-v1.patch When checking for HFileV1 before upgrading to 96, the snapshot file links tries to read from post-namespace locations. The migration script needs to be run on 94 cluster, and it requires reading the old (94) layout to check for HFileV1. 
{code}
Got exception while reading trailer for file: hdfs://xxx:41020/cops/cluster_collection_events_snapshot/2086db948c484be62dcd76c170fe0b17/meta/cluster_collection_event=42037b88dbc34abff6cbfbb1fde2c900-c24b358ddd2f4429a7287258142841a2
java.io.FileNotFoundException: Unable to open link: org.apache.hadoop.hbase.io.HFileLink locations=[
  hdfs://xxx:41020/hbase-96/data/default/cluster_collection_event/42037b88dbc34abff6cbfbb1fde2c900/meta/c24b358ddd2f4429a7287258142841a2,
  hdfs://xxx:41020/hbase-96/.tmp/data/default/cluster_collection_event/42037b88dbc34abff6cbfbb1fde2c900/meta/c24b358ddd2f4429a7287258142841a2,
  hdfs://xxx:41020/hbase-96/archive/data/default/cluster_collection_event/42037b88dbc34abff6cbfbb1fde2c900/meta/c24b358ddd2f4429a7287258142841a2]
{code}
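The three candidate paths in the exception above follow a fixed pattern: the same table/region/family/file suffix probed under the data, .tmp, and archive directories of the root dir. A minimal sketch of that expansion, with simplified string handling (the real logic is org.apache.hadoop.hbase.io.HFileLink, and the bug was that on a 0.94 layout these 0.96-style locations don't exist yet):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative expansion of an HFile link into its candidate locations,
// mirroring the three paths listed in the FileNotFoundException above.
public class HFileLinkLocations {
    static List<String> candidates(String root, String table, String region,
                                   String family, String hfile) {
        String suffix = table + "/" + region + "/" + family + "/" + hfile;
        return Arrays.asList(
            root + "/data/default/" + suffix,         // live region data
            root + "/.tmp/data/default/" + suffix,    // in-flight region dir
            root + "/archive/data/default/" + suffix  // archived store files
        );
    }

    public static void main(String[] args) {
        for (String p : candidates("hdfs://xxx:41020/hbase-96", "cluster_collection_event",
                "42037b88dbc34abff6cbfbb1fde2c900", "meta", "c24b358ddd2f4429a7287258142841a2")) {
            System.out.println(p);
        }
    }
}
```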
[jira] [Commented] (HBASE-9557) strange dependencies for hbase-client
[ https://issues.apache.org/jira/browse/HBASE-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769476#comment-13769476 ] Hadoop QA commented on HBASE-9557: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12603582/9557.v1.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7269//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7269//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7269//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7269//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7269//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7269//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7269//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7269//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7269//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7269//console This message is automatically generated. strange dependencies for hbase-client - Key: HBASE-9557 URL: https://issues.apache.org/jira/browse/HBASE-9557 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Attachments: 9557.v1.patch Here is what we have with hadoop 2. In our plate we have - junit (should be test, it's not because we use it in the integration test runner - log4j (we have direct dependencies in RegionPlacementMaintainer, RESTServlet, and LogMonitoring, not counting the dependencies in the tests) The others are in hadoop. I marked the ones that were strange to me. Do we need all of them? 
mvn dependency:tree -pl hbase-client -Dhadoop.profile=2.0 [INFO] --- maven-dependency-plugin:2.1:tree (default-cli) @ hbase-client --- [INFO] org.apache.hbase:hbase-client:jar:0.97.0-SNAPSHOT [INFO] +- org.apache.hbase:hbase-common:jar:0.97.0-SNAPSHOT:compile [INFO] | \- commons-collections:commons-collections:jar:3.2.1:compile [INFO] +- org.apache.hbase:hbase-common:test-jar:tests:0.97.0-SNAPSHOT:test [INFO] +- org.apache.hbase:hbase-protocol:jar:0.97.0-SNAPSHOT:compile [INFO] +- commons-codec:commons-codec:jar:1.7:compile [INFO] +- commons-io:commons-io:jar:2.4:compile [INFO] +- commons-lang:commons-lang:jar:2.6:compile [INFO] +- commons-logging:commons-logging:jar:1.1.1:compile [INFO] +- com.google.guava:guava:jar:12.0.1:compile [INFO] | \- com.google.code.findbugs:jsr305:jar:1.3.9:compile [INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.5:compile [INFO] | +- org.slf4j:slf4j-api:jar:1.6.4:compile [INFO] | \- org.slf4j:slf4j-log4j12:jar:1.6.1:compile [INFO] +- org.cloudera.htrace:htrace-core:jar:2.01:compile *[INFO] | \- org.mortbay.jetty:jetty-util:jar:6.1.26:compile
[jira] [Commented] (HBASE-9510) Namespace operations should throw clean exceptions
[ https://issues.apache.org/jira/browse/HBASE-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769497#comment-13769497 ] Hudson commented on HBASE-9510: --- SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #736 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/736/]) HBASE-9510 Namespace operations should throw clean exceptions (stack: rev 1523903) * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceExistException.java * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceNotFoundException.java HBASE-9510 Namespace operations should throw clean exceptions (stack: rev 1523902) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestNamespace.java Namespace operations should throw clean exceptions -- Key: HBASE-9510 URL: https://issues.apache.org/jira/browse/HBASE-9510 Project: HBase Issue Type: Bug Components: master Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.0 Attachments: 9510v4.txt, 9510v4.txt, hbase-9510_v1.patch, hbase-9510_v2.patch, hbase-9510_v3.patch Some of the namespace operations do not throw clean exceptions mimicking table exceptions (TableNotFoundException, etc).
For example: {code}
hbase(main):007:0> describe_namespace 'non_existing_namespace'
ERROR: java.io.IOException
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2117)
  at org.apache.hadoop.hbase.ipc.RpcServer$CallRunner.run(RpcServer.java:1816)
  at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:165)
  at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$0(SimpleRpcScheduler.java:161)
  at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:113)
  at java.lang.Thread.run(Thread.java:680)
Caused by: java.lang.NullPointerException
  at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toProtoNamespaceDescriptor(ProtobufUtil.java:2138)
  at org.apache.hadoop.hbase.master.HMaster.getNamespaceDescriptor(HMaster.java:3029)
  at org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32904)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2079)
  ... 5 more
{code} We can clean up the exceptions thrown from ns commands.
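The intent of the fix is visible in the trace above: a missing namespace yields a null descriptor, which flows into protobuf conversion and surfaces as an opaque NullPointerException. The patch adds NamespaceNotFoundException (see the committed file list) so the lookup can fail descriptively instead. A minimal sketch of that pattern, with an illustrative lookup structure in place of the real TableNamespaceManager:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sketch: fail with a descriptive exception instead of returning null and
// letting an NPE happen later. The Map stands in for the real namespace store.
public class NamespaceLookup {
    static class NamespaceNotFoundException extends IOException {
        NamespaceNotFoundException(String name) { super(name); }
    }

    static String getNamespaceDescriptor(Map<String, String> namespaces, String name)
            throws NamespaceNotFoundException {
        String desc = namespaces.get(name);
        if (desc == null) {
            throw new NamespaceNotFoundException(name); // clean, client-visible failure
        }
        return desc;
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> ns = new HashMap<>();
        ns.put("default", "default namespace");
        System.out.println(getNamespaceDescriptor(ns, "default"));
        try {
            getNamespaceDescriptor(ns, "non_existing_namespace");
        } catch (NamespaceNotFoundException e) {
            System.out.println("NamespaceNotFoundException: " + e.getMessage());
        }
    }
}
```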
[jira] [Commented] (HBASE-9555) Reset loadbalancer back to StochasticLoadBalancer
[ https://issues.apache.org/jira/browse/HBASE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769496#comment-13769496 ] Hudson commented on HBASE-9555: --- SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #736 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/736/]) HBASE-9555 Reset loadbalancer back to StochasticLoadBalancer; ADDENDUM (stack: rev 1523926) * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java HBASE-9555 Reset loadbalancer back to StochasticLoadBalancer (stack: rev 1523914) * /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/DefaultLoadBalancer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestDefaultLoadBalancer.java Reset loadbalancer back to StochasticLoadBalancer - Key: HBASE-9555 URL: https://issues.apache.org/jira/browse/HBASE-9555 Project: HBase Issue Type: Bug Components: Balancer Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9555.addendum.txt, balancer.txt It seems like HBASE-7296 changed the loadbalancer class by mistake, removing a good deal of functionality.
[jira] [Created] (HBASE-9558) PerformanceEvaluation is in hbase-server, and create a dependency to MiniDFSCluster
Nicolas Liochon created HBASE-9558: -- Summary: PerformanceEvaluation is in hbase-server, and create a dependency to MiniDFSCluster Key: HBASE-9558 URL: https://issues.apache.org/jira/browse/HBASE-9558 Project: HBase Issue Type: Bug Affects Versions: 0.98.0 Reporter: Nicolas Liochon Priority: Minor It's the only dependency that is not in the tests package. I'm not clear on how to fix it. Any idea?
[jira] [Commented] (HBASE-9558) PerformanceEvaluation is in hbase-server, and create a dependency to MiniDFSCluster
[ https://issues.apache.org/jira/browse/HBASE-9558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769536#comment-13769536 ] Nicolas Liochon commented on HBASE-9558:
{code}
[INFO] org.apache.hbase:hbase-server:jar:0.97.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-hadoop2-compat:jar:0.97.0-SNAPSHOT:compile
[INFO] |  \- org.apache.hadoop:hadoop-minicluster:jar:2.1.0-beta:compile
[INFO] |     +- org.apache.hadoop:hadoop-common:test-jar:tests:2.1.0-beta:compile
[INFO] |     +- org.apache.hadoop:hadoop-hdfs:test-jar:tests:2.1.0-beta:compile
[INFO] |     |  +- commons-daemon:commons-daemon:jar:1.0.13:compile
[INFO] |     |  \- javax.servlet.jsp:jsp-api:jar:2.1:compile
[INFO] |     +- org.apache.hadoop:hadoop-yarn-server-tests:test-jar:tests:2.1.0-beta:compile
[INFO] |     |  \- org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:2.1.0-beta:compile
[INFO] |     \- org.apache.hadoop:hadoop-mapreduce-client-hs:jar:2.1.0-beta:compile
[INFO] +- org.apache.hbase:hbase-hadoop2-compat:test-jar:tests:0.97.0-SNAPSHOT:test
{code}
We can at least remove the dependency to hdfs-test with this modification in the pom.xml:
{code}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-minicluster</artifactId>
  <version>${hadoop-two.version}</version>
  <exclusions>
    <exclusion>
      <!-- We use a newer version of netty -->
      <groupId>org.jboss.netty</groupId>
      <artifactId>netty</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId> <!-- <= new -->
    </exclusion>
  </exclusions>
</dependency>
{code}
PerformanceEvaluation is in hbase-server, and create a dependency to MiniDFSCluster --- Key: HBASE-9558 URL: https://issues.apache.org/jira/browse/HBASE-9558 Project: HBase Issue Type: Bug Affects Versions: 0.98.0 Reporter: Nicolas Liochon Priority: Minor It's the only dependency that is not in the tests package. I'm not clear on how to fix it. Any idea?
[jira] [Commented] (HBASE-9357) Rest server shouldn't need to initiate ZK connection to print usage information
[ https://issues.apache.org/jira/browse/HBASE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769567#comment-13769567 ] Gustavo Anatoly commented on HBASE-9357: Hi, Nick. Could you please review my proposal to fix this bug? http://goo.gl/BY4udm Thanks. Rest server shouldn't need to initiate ZK connection to print usage information --- Key: HBASE-9357 URL: https://issues.apache.org/jira/browse/HBASE-9357 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.95.2 Reporter: Nick Dimiduk Priority: Minor When there's no ZK available, running `bin/hbase rest` must timeout before printing usage information. Initiating a connection should happen after parsing CLI options, not before.
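The ordering fix suggested in HBASE-9357 is simply: decide what the launcher should do from the CLI options alone, and touch ZooKeeper only as the last step. A minimal sketch under that assumption; all names here are illustrative, not the actual RESTServer code:

```java
// Sketch: parse CLI options and emit usage before any cluster connection
// is attempted, so `bin/hbase rest --help` never blocks on ZooKeeper.
public class RestServerMain {
    static final String USAGE = "Usage: bin/hbase rest start [--port <port>] [--infoport <port>]";

    /** Returns what the launcher would do for the given args; connecting comes last. */
    static String dispatch(String[] args) {
        if (args.length == 0 || "--help".equals(args[0]) || "-h".equals(args[0])) {
            return USAGE;  // printed without touching ZooKeeper
        }
        return "connect"; // only now would the ZK connection be initiated
    }

    public static void main(String[] args) {
        System.out.println(dispatch(new String[] {"--help"}));
    }
}
```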
[jira] [Updated] (HBASE-7462) TestDrainingServer is an integration test. It should be a unit test instead
[ https://issues.apache.org/jira/browse/HBASE-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-7462: --- Status: Patch Available (was: Open) TestDrainingServer is an integration test. It should be a unit test instead --- Key: HBASE-7462 URL: https://issues.apache.org/jira/browse/HBASE-7462 Project: HBase Issue Type: Wish Components: test Affects Versions: 0.95.2 Reporter: Nicolas Liochon Assignee: Gustavo Anatoly Priority: Trivial Labels: noob Attachments: 7462.v3.patch, HBASE-7462-v2.patch TestDrainingServer tests the function that allows one to say that a regionserver should not get new regions. As it is written today, it's an integration test: it starts and stops a cluster. The test would be more efficient if it would just check that the AssignmentManager does not use the drained region server, whatever the circumstances (bulk assign or not, for example).
[jira] [Updated] (HBASE-7462) TestDrainingServer is an integration test. It should be a unit test instead
[ https://issues.apache.org/jira/browse/HBASE-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-7462: --- Attachment: 7462.v3.patch TestDrainingServer is an integration test. It should be a unit test instead --- Key: HBASE-7462 URL: https://issues.apache.org/jira/browse/HBASE-7462 Project: HBase Issue Type: Wish Components: test Affects Versions: 0.95.2 Reporter: Nicolas Liochon Assignee: Gustavo Anatoly Priority: Trivial Labels: noob Attachments: 7462.v3.patch, HBASE-7462-v2.patch TestDrainingServer tests the function that allows one to say that a regionserver should not get new regions. As it is written today, it's an integration test: it starts and stops a cluster. The test would be more efficient if it would just check that the AssignmentManager does not use the drained region server, whatever the circumstances (bulk assign or not, for example).
[jira] [Commented] (HBASE-7462) TestDrainingServer is an integration test. It should be a unit test instead
[ https://issues.apache.org/jira/browse/HBASE-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769592#comment-13769592 ] Nicolas Liochon commented on HBASE-7462: The patch is ok. I've done some changes, all minor except one. See v3. I plan to commit it tomorrow. The test now takes 2 seconds instead of 38 seconds to run. Nice. Thanks Gustavo. TestDrainingServer is an integration test. It should be a unit test instead --- Key: HBASE-7462 URL: https://issues.apache.org/jira/browse/HBASE-7462 Project: HBase Issue Type: Wish Components: test Affects Versions: 0.95.2 Reporter: Nicolas Liochon Assignee: Gustavo Anatoly Priority: Trivial Labels: noob Attachments: 7462.v3.patch, HBASE-7462-v2.patch TestDrainingServer tests the function that allows one to say that a regionserver should not get new regions. As it is written today, it's an integration test: it starts and stops a cluster. The test would be more efficient if it would just check that the AssignmentManager does not use the drained region server, whatever the circumstances (bulk assign or not, for example).
[jira] [Updated] (HBASE-7462) TestDrainingServer is an integration test. It should be a unit test instead
[ https://issues.apache.org/jira/browse/HBASE-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-7462: --- Status: Open (was: Patch Available) TestDrainingServer is an integration test. It should be a unit test instead --- Key: HBASE-7462 URL: https://issues.apache.org/jira/browse/HBASE-7462 Project: HBase Issue Type: Wish Components: test Affects Versions: 0.95.2 Reporter: Nicolas Liochon Assignee: Gustavo Anatoly Priority: Trivial Labels: noob Attachments: 7462.v3.patch, HBASE-7462-v2.patch TestDrainingServer tests the function that allows one to say that a regionserver should not get new regions. As it is written today, it's an integration test: it starts and stops a cluster. The test would be more efficient if it would just check that the AssignmentManager does not use the drained region server, whatever the circumstances (bulk assign or not, for example).
[jira] [Commented] (HBASE-9557) strange dependencies for hbase-client
[ https://issues.apache.org/jira/browse/HBASE-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769600#comment-13769600 ] stack commented on HBASE-9557: -- bq. log4j (we have direct dependencies in RegionPlacementMaintainer, RESTServlet, and LogMonitoring, not counting the dependencies in the tests) I thought I'd undone this. Agree with all your other jaw hanging whys. I'm pretty sure I've looked at the jsp compiler jars and just not seen them presuming we needed them all (and when you point it, now they are glaring, silly includes). This stuff all comes in via transitive include from hadoop? strange dependencies for hbase-client - Key: HBASE-9557 URL: https://issues.apache.org/jira/browse/HBASE-9557 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Attachments: 9557.v1.patch Here is what we have with hadoop 2. In our plate we have - junit (should be test, it's not because we use it in the integration test runner - log4j (we have direct dependencies in RegionPlacementMaintainer, RESTServlet, and LogMonitoring, not counting the dependencies in the tests) The others are in hadoop. I marked the ones that were strange to me. Do we need all of them? 
mvn dependency:tree -pl hbase-client -Dhadoop.profile=2.0 [INFO] --- maven-dependency-plugin:2.1:tree (default-cli) @ hbase-client --- [INFO] org.apache.hbase:hbase-client:jar:0.97.0-SNAPSHOT [INFO] +- org.apache.hbase:hbase-common:jar:0.97.0-SNAPSHOT:compile [INFO] | \- commons-collections:commons-collections:jar:3.2.1:compile [INFO] +- org.apache.hbase:hbase-common:test-jar:tests:0.97.0-SNAPSHOT:test [INFO] +- org.apache.hbase:hbase-protocol:jar:0.97.0-SNAPSHOT:compile [INFO] +- commons-codec:commons-codec:jar:1.7:compile [INFO] +- commons-io:commons-io:jar:2.4:compile [INFO] +- commons-lang:commons-lang:jar:2.6:compile [INFO] +- commons-logging:commons-logging:jar:1.1.1:compile [INFO] +- com.google.guava:guava:jar:12.0.1:compile [INFO] | \- com.google.code.findbugs:jsr305:jar:1.3.9:compile [INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.5:compile [INFO] | +- org.slf4j:slf4j-api:jar:1.6.4:compile [INFO] | \- org.slf4j:slf4j-log4j12:jar:1.6.1:compile [INFO] +- org.cloudera.htrace:htrace-core:jar:2.01:compile *[INFO] | \- org.mortbay.jetty:jetty-util:jar:6.1.26:compile === why?* [INFO] +- org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8:compile [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.8.8:compile [INFO] +- io.netty:netty:jar:3.5.9.Final:compile [INFO] +- log4j:log4j:jar:1.2.17:test (scope not updated to compile) [INFO] +- org.apache.hadoop:hadoop-common:jar:2.1.0-beta:compile [INFO] | +- commons-cli:commons-cli:jar:1.2:compile [INFO] | +- org.apache.commons:commons-math:jar:2.2:compile (version managed from 2.1) [INFO] | +- xmlenc:xmlenc:jar:0.52:compile *[INFO] | +- commons-httpclient:commons-httpclient:jar:3.0.1:compile (version managed from 3.1) = decrease the version. dangerous. 
But why hadoop does this?* [INFO] | +- commons-net:commons-net:jar:3.1:compile *[INFO] | +- javax.servlet:servlet-api:jar:2.5:compile why a servlet api in hbase-client or hadoop common?* [INFO] | +- org.mortbay.jetty:jetty:jar:6.1.26:compile [INFO] | +- com.sun.jersey:jersey-core:jar:1.8:compile [INFO] | +- com.sun.jersey:jersey-json:jar:1.8:compile *[INFO] | | +- org.codehaus.jettison:jettison:jar:1.3.1:compile (version managed from 1.1)* [INFO] | | | \- stax:stax-api:jar:1.0.1:compile [INFO] | | +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile *[INFO] | | | \- javax.xml.bind:jaxb-api:jar:2.1:compile (version managed from 2.2.2) = decrease the version. dangerous* [INFO] | | | \- javax.activation:activation:jar:1.1:compile [INFO] | | +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.8:compile (version managed from 1.7.1) [INFO] | | \- org.codehaus.jackson:jackson-xc:jar:1.8.8:compile (version managed from 1.7.1) *[INFO] | +- com.sun.jersey:jersey-server:jar:1.8:compile = Why a server in a common piece of code? could we exclude it from hbase client?* [INFO] | | \- asm:asm:jar:3.1:compile *[INFO] | +- tomcat:jasper-compiler:jar:5.5.23:runtime === ??? why * *[INFO] | +- tomcat:jasper-runtime:jar:5.5.23:runtime === ??? why* *[INFO] | +- javax.servlet.jsp:jsp-api:jar:2.1:runtime Why? could we exclude it from hbase client?* [INFO] | +- commons-el:commons-el:jar:1.0:runtime [INFO] | +- net.java.dev.jets3t:jets3t:jar:0.6.1:compile [INFO] | +- commons-configuration:commons-configuration:jar:1.6:compile [INFO] | | +- commons-digester:commons-digester:jar:1.8:compile [INFO] | | | \-
[jira] [Commented] (HBASE-9557) strange dependencies for hbase-client
[ https://issues.apache.org/jira/browse/HBASE-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769602#comment-13769602 ] stack commented on HBASE-9557: -- +1 on the patch. If a prob w/ it, we'll know soon enough. Mark 0.96.1 and will pull it in if new RC after the one I am currently building. strange dependencies for hbase-client - Key: HBASE-9557 URL: https://issues.apache.org/jira/browse/HBASE-9557 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Attachments: 9557.v1.patch Here is what we have with hadoop 2. In our plate we have - junit (should be test, it's not because we use it in the integration test runner - log4j (we have direct dependencies in RegionPlacementMaintainer, RESTServlet, and LogMonitoring, not counting the dependencies in the tests) The others are in hadoop. I marked the ones that were strange to me. Do we need all of them? mvn dependency:tree -pl hbase-client -Dhadoop.profile=2.0 [INFO] --- maven-dependency-plugin:2.1:tree (default-cli) @ hbase-client --- [INFO] org.apache.hbase:hbase-client:jar:0.97.0-SNAPSHOT [INFO] +- org.apache.hbase:hbase-common:jar:0.97.0-SNAPSHOT:compile [INFO] | \- commons-collections:commons-collections:jar:3.2.1:compile [INFO] +- org.apache.hbase:hbase-common:test-jar:tests:0.97.0-SNAPSHOT:test [INFO] +- org.apache.hbase:hbase-protocol:jar:0.97.0-SNAPSHOT:compile [INFO] +- commons-codec:commons-codec:jar:1.7:compile [INFO] +- commons-io:commons-io:jar:2.4:compile [INFO] +- commons-lang:commons-lang:jar:2.6:compile [INFO] +- commons-logging:commons-logging:jar:1.1.1:compile [INFO] +- com.google.guava:guava:jar:12.0.1:compile [INFO] | \- com.google.code.findbugs:jsr305:jar:1.3.9:compile [INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.5:compile [INFO] | +- org.slf4j:slf4j-api:jar:1.6.4:compile [INFO] | \- org.slf4j:slf4j-log4j12:jar:1.6.1:compile [INFO] +- org.cloudera.htrace:htrace-core:jar:2.01:compile 
*[INFO] | \- org.mortbay.jetty:jetty-util:jar:6.1.26:compile === why?* [INFO] +- org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8:compile [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.8.8:compile [INFO] +- io.netty:netty:jar:3.5.9.Final:compile [INFO] +- log4j:log4j:jar:1.2.17:test (scope not updated to compile) [INFO] +- org.apache.hadoop:hadoop-common:jar:2.1.0-beta:compile [INFO] | +- commons-cli:commons-cli:jar:1.2:compile [INFO] | +- org.apache.commons:commons-math:jar:2.2:compile (version managed from 2.1) [INFO] | +- xmlenc:xmlenc:jar:0.52:compile *[INFO] | +- commons-httpclient:commons-httpclient:jar:3.0.1:compile (version managed from 3.1) = decrease the version. dangerous. But why hadoop does this?* [INFO] | +- commons-net:commons-net:jar:3.1:compile *[INFO] | +- javax.servlet:servlet-api:jar:2.5:compile why a servlet api in hbase-client or hadoop common?* [INFO] | +- org.mortbay.jetty:jetty:jar:6.1.26:compile [INFO] | +- com.sun.jersey:jersey-core:jar:1.8:compile [INFO] | +- com.sun.jersey:jersey-json:jar:1.8:compile *[INFO] | | +- org.codehaus.jettison:jettison:jar:1.3.1:compile (version managed from 1.1)* [INFO] | | | \- stax:stax-api:jar:1.0.1:compile [INFO] | | +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile *[INFO] | | | \- javax.xml.bind:jaxb-api:jar:2.1:compile (version managed from 2.2.2) = decrease the version. dangerous* [INFO] | | | \- javax.activation:activation:jar:1.1:compile [INFO] | | +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.8:compile (version managed from 1.7.1) [INFO] | | \- org.codehaus.jackson:jackson-xc:jar:1.8.8:compile (version managed from 1.7.1) *[INFO] | +- com.sun.jersey:jersey-server:jar:1.8:compile = Why a server in a common piece of code? could we exclude it from hbase client?* [INFO] | | \- asm:asm:jar:3.1:compile *[INFO] | +- tomcat:jasper-compiler:jar:5.5.23:runtime === ??? why * *[INFO] | +- tomcat:jasper-runtime:jar:5.5.23:runtime === ??? why* *[INFO] | +- javax.servlet.jsp:jsp-api:jar:2.1:runtime Why? 
could we exclude it from hbase client?* [INFO] | +- commons-el:commons-el:jar:1.0:runtime [INFO] | +- net.java.dev.jets3t:jets3t:jar:0.6.1:compile [INFO] | +- commons-configuration:commons-configuration:jar:1.6:compile [INFO] | | +- commons-digester:commons-digester:jar:1.8:compile [INFO] | | | \- commons-beanutils:commons-beanutils:jar:1.7.0:compile [INFO] | | \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile [INFO] | +- org.apache.avro:avro:jar:1.5.3:compile [INFO] | | +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile [INFO] | | \- org.xerial.snappy:snappy-java:jar:1.0.3.2:compile [INFO] | +-
[jira] [Updated] (HBASE-9557) strange dependencies for hbase-client
[ https://issues.apache.org/jira/browse/HBASE-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9557: --- Resolution: Fixed Fix Version/s: 0.96.1 0.98.0 Assignee: Nicolas Liochon Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed, thanks for the review, Stack. strange dependencies for hbase-client - Key: HBASE-9557 URL: https://issues.apache.org/jira/browse/HBASE-9557 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.98.0, 0.96.1 Attachments: 9557.v1.patch Here is what we have with hadoop 2. In our plate we have - junit (should be test, it's not because we use it in the integration test runner - log4j (we have direct dependencies in RegionPlacementMaintainer, RESTServlet, and LogMonitoring, not counting the dependencies in the tests) The others are in hadoop. I marked the ones that were strange to me. Do we need all of them? mvn dependency:tree -pl hbase-client -Dhadoop.profile=2.0 [INFO] --- maven-dependency-plugin:2.1:tree (default-cli) @ hbase-client --- [INFO] org.apache.hbase:hbase-client:jar:0.97.0-SNAPSHOT [INFO] +- org.apache.hbase:hbase-common:jar:0.97.0-SNAPSHOT:compile [INFO] | \- commons-collections:commons-collections:jar:3.2.1:compile [INFO] +- org.apache.hbase:hbase-common:test-jar:tests:0.97.0-SNAPSHOT:test [INFO] +- org.apache.hbase:hbase-protocol:jar:0.97.0-SNAPSHOT:compile [INFO] +- commons-codec:commons-codec:jar:1.7:compile [INFO] +- commons-io:commons-io:jar:2.4:compile [INFO] +- commons-lang:commons-lang:jar:2.6:compile [INFO] +- commons-logging:commons-logging:jar:1.1.1:compile [INFO] +- com.google.guava:guava:jar:12.0.1:compile [INFO] | \- com.google.code.findbugs:jsr305:jar:1.3.9:compile [INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.5:compile [INFO] | +- org.slf4j:slf4j-api:jar:1.6.4:compile [INFO] | \- org.slf4j:slf4j-log4j12:jar:1.6.1:compile [INFO] +- 
org.cloudera.htrace:htrace-core:jar:2.01:compile *[INFO] | \- org.mortbay.jetty:jetty-util:jar:6.1.26:compile === why?* [INFO] +- org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8:compile [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.8.8:compile [INFO] +- io.netty:netty:jar:3.5.9.Final:compile [INFO] +- log4j:log4j:jar:1.2.17:test (scope not updated to compile) [INFO] +- org.apache.hadoop:hadoop-common:jar:2.1.0-beta:compile [INFO] | +- commons-cli:commons-cli:jar:1.2:compile [INFO] | +- org.apache.commons:commons-math:jar:2.2:compile (version managed from 2.1) [INFO] | +- xmlenc:xmlenc:jar:0.52:compile *[INFO] | +- commons-httpclient:commons-httpclient:jar:3.0.1:compile (version managed from 3.1) = decrease the version. dangerous. But why hadoop does this?* [INFO] | +- commons-net:commons-net:jar:3.1:compile *[INFO] | +- javax.servlet:servlet-api:jar:2.5:compile why a servlet api in hbase-client or hadoop common?* [INFO] | +- org.mortbay.jetty:jetty:jar:6.1.26:compile [INFO] | +- com.sun.jersey:jersey-core:jar:1.8:compile [INFO] | +- com.sun.jersey:jersey-json:jar:1.8:compile *[INFO] | | +- org.codehaus.jettison:jettison:jar:1.3.1:compile (version managed from 1.1)* [INFO] | | | \- stax:stax-api:jar:1.0.1:compile [INFO] | | +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile *[INFO] | | | \- javax.xml.bind:jaxb-api:jar:2.1:compile (version managed from 2.2.2) = decrease the version. dangerous* [INFO] | | | \- javax.activation:activation:jar:1.1:compile [INFO] | | +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.8:compile (version managed from 1.7.1) [INFO] | | \- org.codehaus.jackson:jackson-xc:jar:1.8.8:compile (version managed from 1.7.1) *[INFO] | +- com.sun.jersey:jersey-server:jar:1.8:compile = Why a server in a common piece of code? could we exclude it from hbase client?* [INFO] | | \- asm:asm:jar:3.1:compile *[INFO] | +- tomcat:jasper-compiler:jar:5.5.23:runtime === ??? why * *[INFO] | +- tomcat:jasper-runtime:jar:5.5.23:runtime === ??? 
why* *[INFO] | +- javax.servlet.jsp:jsp-api:jar:2.1:runtime Why? could we exclude it from hbase client?* [INFO] | +- commons-el:commons-el:jar:1.0:runtime [INFO] | +- net.java.dev.jets3t:jets3t:jar:0.6.1:compile [INFO] | +- commons-configuration:commons-configuration:jar:1.6:compile [INFO] | | +- commons-digester:commons-digester:jar:1.8:compile [INFO] | | | \- commons-beanutils:commons-beanutils:jar:1.7.0:compile [INFO] | | \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile [INFO] | +- org.apache.avro:avro:jar:1.5.3:compile [INFO] | | +-
[jira] [Commented] (HBASE-9556) Provide key range support to bulkload to avoid too many reducers even the data belongs to few regions
[ https://issues.apache.org/jira/browse/HBASE-9556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769637#comment-13769637 ] Nick Dimiduk commented on HBASE-9556: - Better still, what if we didn't need the user to provide this information: have the job launcher inspect the ranges and configure the job accordingly. Provide key range support to bulkload to avoid too many reducers even the data belongs to few regions - Key: HBASE-9556 URL: https://issues.apache.org/jira/browse/HBASE-9556 Project: HBase Issue Type: Improvement Components: mapreduce Reporter: rajeshbabu Assignee: rajeshbabu Priority: Minor Presently the number of reducers in bulk load is equal to the number of regions. Let's suppose a table has 500 regions and the import data belongs to only 10 regions; still we start 500 (equal to the number of regions) reducers instead of 10, which consumes more time and resources. If the user knows the row key range of the import data, then we can pass the startkey and/or endkey as input, and based on the key range we can define the partitions and the number of reducers (the regions to which the data belongs). This helps avoid starting many reducers that do nothing, and also avoids contention in shuffling. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9514) Prevent region from assigning before log splitting is done
[ https://issues.apache.org/jira/browse/HBASE-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-9514: --- Status: Open (was: Patch Available) Prevent region from assigning before log splitting is done -- Key: HBASE-9514 URL: https://issues.apache.org/jira/browse/HBASE-9514 Project: HBase Issue Type: Bug Components: Region Assignment Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Blocker Attachments: trunk-9514_v1.patch If a region is assigned before log splitting is done by the server shutdown handler, the edits belonging to this region in the hlogs of the dead server will be lost. Generally this is not an issue if users don't assign/unassign a region from hbase shell or via hbase admin. These commands are marked for experts only in the hbase shell help too. However, chaos monkey doesn't care. If we can prevent from assigning such regions in a bad time, it would make things a little safer. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9557) strange dependencies for hbase-client
[ https://issues.apache.org/jira/browse/HBASE-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769640#comment-13769640 ] Nick Dimiduk commented on HBASE-9557: - Careful changing version numbers of dependencies with patch releases. Folks who depend on our stuff may also depend on features of our dependencies that we don't. Those deps are effectively a part of our API. When our deps make breaking changes and we upgrade them, it can cause havoc in what our users thought was a simple patch bump. These changes should be fine, assuming those projects all respect API stability, but you never know... strange dependencies for hbase-client - Key: HBASE-9557 URL: https://issues.apache.org/jira/browse/HBASE-9557 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.98.0, 0.96.1 Attachments: 9557.v1.patch Here is what we have with hadoop 2. On our plate we have - junit (should be test; it's not, because we use it in the integration test runner) - log4j (we have direct dependencies in RegionPlacementMaintainer, RESTServlet, and LogMonitoring, not counting the dependencies in the tests) The others are in hadoop. I marked the ones that were strange to me. Do we need all of them?
mvn dependency:tree -pl hbase-client -Dhadoop.profile=2.0 [INFO] --- maven-dependency-plugin:2.1:tree (default-cli) @ hbase-client --- [INFO] org.apache.hbase:hbase-client:jar:0.97.0-SNAPSHOT [INFO] +- org.apache.hbase:hbase-common:jar:0.97.0-SNAPSHOT:compile [INFO] | \- commons-collections:commons-collections:jar:3.2.1:compile [INFO] +- org.apache.hbase:hbase-common:test-jar:tests:0.97.0-SNAPSHOT:test [INFO] +- org.apache.hbase:hbase-protocol:jar:0.97.0-SNAPSHOT:compile [INFO] +- commons-codec:commons-codec:jar:1.7:compile [INFO] +- commons-io:commons-io:jar:2.4:compile [INFO] +- commons-lang:commons-lang:jar:2.6:compile [INFO] +- commons-logging:commons-logging:jar:1.1.1:compile [INFO] +- com.google.guava:guava:jar:12.0.1:compile [INFO] | \- com.google.code.findbugs:jsr305:jar:1.3.9:compile [INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.5:compile [INFO] | +- org.slf4j:slf4j-api:jar:1.6.4:compile [INFO] | \- org.slf4j:slf4j-log4j12:jar:1.6.1:compile [INFO] +- org.cloudera.htrace:htrace-core:jar:2.01:compile *[INFO] | \- org.mortbay.jetty:jetty-util:jar:6.1.26:compile === why?* [INFO] +- org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8:compile [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.8.8:compile [INFO] +- io.netty:netty:jar:3.5.9.Final:compile [INFO] +- log4j:log4j:jar:1.2.17:test (scope not updated to compile) [INFO] +- org.apache.hadoop:hadoop-common:jar:2.1.0-beta:compile [INFO] | +- commons-cli:commons-cli:jar:1.2:compile [INFO] | +- org.apache.commons:commons-math:jar:2.2:compile (version managed from 2.1) [INFO] | +- xmlenc:xmlenc:jar:0.52:compile *[INFO] | +- commons-httpclient:commons-httpclient:jar:3.0.1:compile (version managed from 3.1) = decrease the version. dangerous. 
But why hadoop does this?* [INFO] | +- commons-net:commons-net:jar:3.1:compile *[INFO] | +- javax.servlet:servlet-api:jar:2.5:compile why a servlet api in hbase-client or hadoop common?* [INFO] | +- org.mortbay.jetty:jetty:jar:6.1.26:compile [INFO] | +- com.sun.jersey:jersey-core:jar:1.8:compile [INFO] | +- com.sun.jersey:jersey-json:jar:1.8:compile *[INFO] | | +- org.codehaus.jettison:jettison:jar:1.3.1:compile (version managed from 1.1)* [INFO] | | | \- stax:stax-api:jar:1.0.1:compile [INFO] | | +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile *[INFO] | | | \- javax.xml.bind:jaxb-api:jar:2.1:compile (version managed from 2.2.2) = decrease the version. dangerous* [INFO] | | | \- javax.activation:activation:jar:1.1:compile [INFO] | | +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.8:compile (version managed from 1.7.1) [INFO] | | \- org.codehaus.jackson:jackson-xc:jar:1.8.8:compile (version managed from 1.7.1) *[INFO] | +- com.sun.jersey:jersey-server:jar:1.8:compile = Why a server in a common piece of code? could we exclude it from hbase client?* [INFO] | | \- asm:asm:jar:3.1:compile *[INFO] | +- tomcat:jasper-compiler:jar:5.5.23:runtime === ??? why * *[INFO] | +- tomcat:jasper-runtime:jar:5.5.23:runtime === ??? why* *[INFO] | +- javax.servlet.jsp:jsp-api:jar:2.1:runtime Why? could we exclude it from hbase client?* [INFO] | +- commons-el:commons-el:jar:1.0:runtime [INFO] | +- net.java.dev.jets3t:jets3t:jar:0.6.1:compile [INFO] | +- commons-configuration:commons-configuration:jar:1.6:compile [INFO] | | +-
[jira] [Updated] (HBASE-9249) Add cp hook before setting PONR in split
[ https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] rajeshbabu updated HBASE-9249: -- Attachment: HBASE-9249_v9.patch This is what was committed to trunk. Thanks all for the reviews. Add cp hook before setting PONR in split Key: HBASE-9249 URL: https://issues.apache.org/jira/browse/HBASE-9249 Project: HBase Issue Type: Sub-task Affects Versions: 0.98.0 Reporter: rajeshbabu Assignee: rajeshbabu Fix For: 0.98.0 Attachments: HBASE-9249.patch, HBASE-9249_v2.patch, HBASE-9249_v3.patch, HBASE-9249_v4.patch, HBASE-9249_v5.patch, HBASE-9249_v6.patch, HBASE-9249_v7.patch, HBASE-9249_v7.patch, HBASE-9249_v8.patch, HBASE-9249_v8.patch, HBASE-9249_v9.patch This hook helps to perform a split on a user region and the corresponding index region such that both will be split or none. With this hook the split for the user and index regions is as follows user region === 1) Create splitting znode for user region split 2) Close parent user region 3) Split user region storefiles 4) Instantiate child regions of user region Through the new hook we can call index region transitions as below index region === 5) Create splitting znode for index region split 6) Close parent index region 7) Split storefiles of index region 8) Instantiate child regions of the index region If any failures in 5, 6, 7, 8, roll back those steps and return null; on a null return, throw an exception to roll back 1, 2, 3, 4 9) Set PONR 10) Do batch put of offline and split entries for user and index regions index region === 11) Open daughters of index regions and transition znode to split. This step we will do through the preSplitAfterPONR hook. Opening index regions before opening user regions helps to avoid put failures if there is a colocation mismatch (this can happen if user region opening has completed but index region opening is in progress) user region === 12) Open daughters of user regions and transition znode to split.
Even if the region server crashes, at the end both the user and index regions will be split, or neither will be. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
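The both-or-none guarantee in the steps above hinges on the hook's return value: index-region steps 5–8 are rolled back on failure, and a null return tells the caller to roll back user-region steps 1–4 before the PONR is set. A minimal sketch of that contract — all names here, including `preSplitBeforePONR`, are illustrative stand-ins, not the actual patch API:

```java
public class SplitOrderingDemo {
    // A step in the index-region split sequence (steps 5-8 above).
    interface Step { void run() throws Exception; }

    // Run the index-region steps; on any failure, roll back the ones that
    // completed and return null so the caller also rolls back steps 1-4.
    static Object preSplitBeforePONR(Step[] indexSteps) {
        int done = 0;
        try {
            for (Step s : indexSteps) { s.run(); done++; }
            return new Object(); // non-null: safe to proceed to set the PONR (step 9)
        } catch (Exception e) {
            for (int i = done - 1; i >= 0; i--) {
                // rollback of index step i would go here
            }
            return null; // caller must roll back user-region steps and abort the split
        }
    }

    public static void main(String[] args) {
        Step ok = () -> {};
        Step fails = () -> { throw new Exception("storefile split failed"); };
        System.out.println(preSplitBeforePONR(new Step[] { ok, ok }) != null);    // proceed to PONR
        System.out.println(preSplitBeforePONR(new Step[] { ok, fails }) == null); // full rollback
    }
}
```

Either both splits reach the PONR together or neither does, which is exactly the invariant the description claims survives a region server crash.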
[jira] [Commented] (HBASE-9558) PerformanceEvaluation is in hbase-server, and create a dependency to MiniDFSCluster
[ https://issues.apache.org/jira/browse/HBASE-9558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769643#comment-13769643 ] Nick Dimiduk commented on HBASE-9558: - Move PerfEval into the IT module? PerformanceEvaluation is in hbase-server, and create a dependency to MiniDFSCluster --- Key: HBASE-9558 URL: https://issues.apache.org/jira/browse/HBASE-9558 Project: HBase Issue Type: Bug Affects Versions: 0.98.0 Reporter: Nicolas Liochon Priority: Minor It's the only dependency that is not in the tests package. I'm not clear on how to fix it. Any idea? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9249) Add cp hook before setting PONR in split
[ https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769649#comment-13769649 ] Hadoop QA commented on HBASE-9249: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12603609/HBASE-9249_v9.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified tests. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7271//console This message is automatically generated. Add cp hook before setting PONR in split Key: HBASE-9249 URL: https://issues.apache.org/jira/browse/HBASE-9249 Project: HBase Issue Type: Sub-task Affects Versions: 0.98.0 Reporter: rajeshbabu Assignee: rajeshbabu Fix For: 0.98.0 Attachments: HBASE-9249.patch, HBASE-9249_v2.patch, HBASE-9249_v3.patch, HBASE-9249_v4.patch, HBASE-9249_v5.patch, HBASE-9249_v6.patch, HBASE-9249_v7.patch, HBASE-9249_v7.patch, HBASE-9249_v8.patch, HBASE-9249_v8.patch, HBASE-9249_v9.patch This hook helps to perform split on user region and corresponding index region such that both will be split or none. 
With this hook split for user and index region as follows user region === 1) Create splitting znode for user region split 2) Close parent user region 3) split user region storefiles 4) instantiate child regions of user region Through the new hook we can call index region transitions as below index region === 5) Create splitting znode for index region split 6) Close parent index region 7) Split storefiles of index region 8) instantiate child regions of the index region If any failures in 5,6,7,8 rollback the steps and return null, on null return throw exception to rollback for 1,2,3,4 9) set PONR 10) do batch put of offline and split entries for user and index regions index region 11) open daughers of index regions and transition znode to split. This step we will do through preSplitAfterPONR hook. Opening index regions before opening user regions helps to avoid put failures if there is colocation mismatch(this can happen if user regions opening completed but index regions opening in progress) user region === 12) open daughers of user regions and transition znode to split. Even if region server crashes also at the end both user and index regions will be split or none -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9295) Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator
[ https://issues.apache.org/jira/browse/HBASE-9295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769689#comment-13769689 ] Ted Yu commented on HBASE-9295: --- Here is sample output where the tested patch omits comparator for TreeMap: {code} {color:red}-1 Anti-pattern{color}. The patch appears to have anti-pattern : + Map<byte[], Integer> kvCount = new TreeMap<byte[], Integer>();. {code} Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator - Key: HBASE-9295 URL: https://issues.apache.org/jira/browse/HBASE-9295 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0 Attachments: 9295-v1.txt There were two recent bug fixes (HBASE-9285 and HBASE-9238) for the case where the TreeMap keyed by byte[] doesn't use proper comparator: {code} new TreeMap<byte[], ...>() {code} test-patch.sh should be able to detect this situation and report accordingly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
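For context on why this anti-pattern matters: byte[] does not implement Comparable, so a TreeMap keyed by byte[] with no explicit comparator throws ClassCastException on the first put(), and even if it didn't, lookups would go by array identity rather than content. HBase code passes Bytes.BYTES_COMPARATOR; the hand-rolled comparator below is an assumed stdlib stand-in with the same unsigned lexicographic order:

```java
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;

public class ByteArrayMapDemo {
    // Stand-in for HBase's Bytes.BYTES_COMPARATOR: unsigned lexicographic order.
    static final Comparator<byte[]> BYTES_COMPARATOR = (a, b) -> {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = (a[i] & 0xff) - (b[i] & 0xff); // compare bytes as unsigned
            if (cmp != 0) return cmp;
        }
        return a.length - b.length; // shorter array sorts first on a tie
    };

    public static void main(String[] args) {
        // Anti-pattern: new TreeMap<byte[], Integer>() — byte[] is not Comparable,
        // so the first put() would throw ClassCastException. Pass a comparator:
        Map<byte[], Integer> kvCount = new TreeMap<>(BYTES_COMPARATOR);
        kvCount.put(new byte[] {2}, 1);
        kvCount.put(new byte[] {1}, 2);
        // Lookups now match by content, not by array identity:
        System.out.println(kvCount.get(new byte[] {1})); // prints 2
    }
}
```

This is the pattern test-patch.sh would flag: a `new TreeMap<byte[], ...>()` with an empty constructor argument list.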
[jira] [Updated] (HBASE-9295) Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator
[ https://issues.apache.org/jira/browse/HBASE-9295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9295: -- Attachment: 9295-v1.txt Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator - Key: HBASE-9295 URL: https://issues.apache.org/jira/browse/HBASE-9295 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0 Attachments: 9295-v1.txt There were two recent bug fixes (HBASE-9285 and HBASE-9238) for the case where the TreeMap keyed by byte[] doesn't use proper comparator: {code} new TreeMap<byte[], ...>() {code} test-patch.sh should be able to detect this situation and report accordingly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HBASE-9357) Rest server shouldn't need to initiate ZK connection to print usage information
[ https://issues.apache.org/jira/browse/HBASE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk resolved HBASE-9357. - Resolution: Invalid Thanks for the gist, Gustavo. Actually, it looks like this is fixed on trunk: {noformat} $ ./bin/hbase rest 2013-09-17 09:05:23,087 INFO [main] util.VersionInfo: HBase 0.97.0-SNAPSHOT 2013-09-17 09:05:23,089 INFO [main] util.VersionInfo: Subversion git://soleil.local/Users/ndimiduk/repos/hbase -r 28a3eed1fd1cf184e25e96e86dc319cade2c992d 2013-09-17 09:05:23,089 INFO [main] util.VersionInfo: Compiled by ndimiduk on Mon Sep 16 16:10:46 PDT 2013 2013-09-17 09:05:23,629 INFO [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties 2013-09-17 09:05:23,710 INFO [main] impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered. 2013-09-17 09:05:23,722 INFO [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 2013-09-17 09:05:23,722 INFO [main] impl.MetricsSystemImpl: HBase metrics system started 2013-09-17 09:05:23,722 INFO [main] impl.MetricsSourceAdapter: MBean for source ugi registered. 2013-09-17 09:05:23,722 WARN [main] impl.MetricsSystemImpl: Source name ugi already exists! 2013-09-17 09:05:23,732 INFO [main] impl.MetricsSourceAdapter: MBean for source jvm registered. 2013-09-17 09:05:23,734 INFO [main] impl.MetricsSourceAdapter: MBean for source REST registered. usage: bin/hbase rest start [--infoport <arg>] [-p <arg>] [-ro] --infoport <arg> Port for web UI -p,--port <arg> Port to bind to [default: 8080] -ro,--readonly Respond only to GET HTTP method requests [default: false] To run the REST server as a daemon, execute bin/hbase-daemon.sh start|stop rest [--infoport <port>] [-p <port>] [-ro] {noformat} Closing as invalid. 
Rest server shouldn't need to initiate ZK connection to print usage information --- Key: HBASE-9357 URL: https://issues.apache.org/jira/browse/HBASE-9357 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.95.2 Reporter: Nick Dimiduk Priority: Minor When there's no ZK available, running `bin/hbase rest` must timeout before printing usage information. Initiating a connection should happen after parsing CLI options, not before. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9295) Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator
[ https://issues.apache.org/jira/browse/HBASE-9295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9295: -- Attachment: (was: 9295-v1.txt) Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator - Key: HBASE-9295 URL: https://issues.apache.org/jira/browse/HBASE-9295 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0 There were two recent bug fixes (HBASE-9285 and HBASE-9238) for the case where the TreeMap keyed by byte[] doesn't use proper comparator: {code} new TreeMap<byte[], ...>() {code} test-patch.sh should be able to detect this situation and report accordingly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9556) Provide key range support to bulkload to avoid too many reducers even the data belongs to few regions
[ https://issues.apache.org/jira/browse/HBASE-9556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769676#comment-13769676 ] rajeshbabu commented on HBASE-9556: --- If the user does not specify the start and/or end key range, then we set the region count as the number of reducers and the region start keys as split points. If the user knows the range beforehand, then we can identify the proper split points within the range and reduce the number of reducers. {code} List<ImmutableBytesWritable> startKeys = getRegionStartKeys(table); LOG.info("Configuring " + startKeys.size() + " reduce partitions " + "to match current region count"); job.setNumReduceTasks(startKeys.size()); configurePartitioner(job, startKeys); {code} Provide key range support to bulkload to avoid too many reducers even the data belongs to few regions - Key: HBASE-9556 URL: https://issues.apache.org/jira/browse/HBASE-9556 Project: HBase Issue Type: Improvement Components: mapreduce Reporter: rajeshbabu Assignee: rajeshbabu Priority: Minor Presently the number of reducers in bulk load is equal to the number of regions. Let's suppose a table has 500 regions and the import data belongs to only 10 regions; still we start 500 (equal to the number of regions) reducers instead of 10, which consumes more time and resources. If the user knows the row key range of the import data, then we can pass the startkey and/or endkey as input, and based on the key range we can define the partitions and the number of reducers (the regions to which the data belongs). This helps avoid starting many reducers that do nothing, and also avoids contention in shuffling. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
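A rough sketch of the proposed change, assuming the goal is to drop split points for regions that cannot contain any rows in the user-supplied [startKey, endKey) range — `filterStartKeys` and the surrounding names are hypothetical, not from the patch:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class RangePartitionDemo {
    // Unsigned lexicographic byte[] order, as HBase row keys are compared.
    static final Comparator<byte[]> BYTES = (a, b) -> {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int c = (a[i] & 0xff) - (b[i] & 0xff);
            if (c != 0) return c;
        }
        return a.length - b.length;
    };

    // Hypothetical helper: keep only the start keys of regions that overlap
    // [startKey, endKey); their count would become the reducer count.
    static List<byte[]> filterStartKeys(List<byte[]> regionStartKeys,
                                        byte[] startKey, byte[] endKey) {
        List<byte[]> kept = new ArrayList<>();
        for (int i = 0; i < regionStartKeys.size(); i++) {
            byte[] regionStart = regionStartKeys.get(i);
            byte[] regionEnd = i + 1 < regionStartKeys.size()
                ? regionStartKeys.get(i + 1) : null; // null = unbounded last region
            boolean endsBeforeRange = regionEnd != null
                && BYTES.compare(regionEnd, startKey) <= 0;
            boolean startsAfterRange = BYTES.compare(regionStart, endKey) >= 0;
            if (!endsBeforeRange && !startsAfterRange) kept.add(regionStart);
        }
        return kept;
    }

    public static void main(String[] args) {
        List<byte[]> regionKeys = Arrays.asList(
            new byte[0], "d".getBytes(), "m".getBytes(), "t".getBytes());
        // Only regions ["d","m") and ["m","t") overlap ["e","n"): 2 reducers, not 4.
        System.out.println(filterStartKeys(regionKeys, "e".getBytes(), "n".getBytes()).size());
    }
}
```

The filtered list would then feed `job.setNumReduceTasks(...)` and `configurePartitioner(...)` in place of the full region start-key list quoted above.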
[jira] [Comment Edited] (HBASE-9556) Provide key range support to bulkload to avoid too many reducers even the data belongs to few regions
[ https://issues.apache.org/jira/browse/HBASE-9556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769676#comment-13769676 ] rajeshbabu edited comment on HBASE-9556 at 9/17/13 5:29 PM: Yes Nick, while configuring the job we set the region count as the number of reducers and the region start keys as partition points. If the user knows the range beforehand, then we can identify the proper split points within the range and reduce the number of reducers. {code} List<ImmutableBytesWritable> startKeys = getRegionStartKeys(table); LOG.info("Configuring " + startKeys.size() + " reduce partitions " + "to match current region count"); job.setNumReduceTasks(startKeys.size()); configurePartitioner(job, startKeys); {code} was (Author: rajesh23): If the user does not specify the start and/or end key range, then we set the region count as the number of reducers and the region start keys as split points. If the user knows the range beforehand, then we can identify the proper split points within the range and reduce the number of reducers. {code} List<ImmutableBytesWritable> startKeys = getRegionStartKeys(table); LOG.info("Configuring " + startKeys.size() + " reduce partitions " + "to match current region count"); job.setNumReduceTasks(startKeys.size()); configurePartitioner(job, startKeys); {code} Provide key range support to bulkload to avoid too many reducers even the data belongs to few regions - Key: HBASE-9556 URL: https://issues.apache.org/jira/browse/HBASE-9556 Project: HBase Issue Type: Improvement Components: mapreduce Reporter: rajeshbabu Assignee: rajeshbabu Priority: Minor Presently the number of reducers in bulk load is equal to the number of regions. Let's suppose a table has 500 regions and the import data belongs to only 10 regions; still we start 500 (equal to the number of regions) reducers instead of 10, which consumes more time and resources. 
If the user knows the row key range of the import data, then we can pass the startkey and/or endkey as input, and based on the key range we can define the partitions and the number of reducers (the regions to which the data belongs). This helps avoid starting many reducers that do nothing, and also avoids contention in shuffling. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9557) strange dependencies for hbase-client
[ https://issues.apache.org/jira/browse/HBASE-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769612#comment-13769612 ] Nicolas Liochon commented on HBASE-9557: bq. This stuff all comes in via transitive include from hadoop? Yes. I'm going to create a jira in hadoop as well. bq. I thought I'd undone this. The client itself is ok. Server side we have the issue. strange dependencies for hbase-client - Key: HBASE-9557 URL: https://issues.apache.org/jira/browse/HBASE-9557 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Attachments: 9557.v1.patch Here is what we have with hadoop 2. In our plate we have - junit (should be test, it's not because we use it in the integration test runner - log4j (we have direct dependencies in RegionPlacementMaintainer, RESTServlet, and LogMonitoring, not counting the dependencies in the tests) The others are in hadoop. I marked the ones that were strange to me. Do we need all of them? mvn dependency:tree -pl hbase-client -Dhadoop.profile=2.0 [INFO] --- maven-dependency-plugin:2.1:tree (default-cli) @ hbase-client --- [INFO] org.apache.hbase:hbase-client:jar:0.97.0-SNAPSHOT [INFO] +- org.apache.hbase:hbase-common:jar:0.97.0-SNAPSHOT:compile [INFO] | \- commons-collections:commons-collections:jar:3.2.1:compile [INFO] +- org.apache.hbase:hbase-common:test-jar:tests:0.97.0-SNAPSHOT:test [INFO] +- org.apache.hbase:hbase-protocol:jar:0.97.0-SNAPSHOT:compile [INFO] +- commons-codec:commons-codec:jar:1.7:compile [INFO] +- commons-io:commons-io:jar:2.4:compile [INFO] +- commons-lang:commons-lang:jar:2.6:compile [INFO] +- commons-logging:commons-logging:jar:1.1.1:compile [INFO] +- com.google.guava:guava:jar:12.0.1:compile [INFO] | \- com.google.code.findbugs:jsr305:jar:1.3.9:compile [INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.5:compile [INFO] | +- org.slf4j:slf4j-api:jar:1.6.4:compile [INFO] | \- 
org.slf4j:slf4j-log4j12:jar:1.6.1:compile [INFO] +- org.cloudera.htrace:htrace-core:jar:2.01:compile *[INFO] | \- org.mortbay.jetty:jetty-util:jar:6.1.26:compile === why?* [INFO] +- org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8:compile [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.8.8:compile [INFO] +- io.netty:netty:jar:3.5.9.Final:compile [INFO] +- log4j:log4j:jar:1.2.17:test (scope not updated to compile) [INFO] +- org.apache.hadoop:hadoop-common:jar:2.1.0-beta:compile [INFO] | +- commons-cli:commons-cli:jar:1.2:compile [INFO] | +- org.apache.commons:commons-math:jar:2.2:compile (version managed from 2.1) [INFO] | +- xmlenc:xmlenc:jar:0.52:compile *[INFO] | +- commons-httpclient:commons-httpclient:jar:3.0.1:compile (version managed from 3.1) = decrease the version. dangerous. But why hadoop does this?* [INFO] | +- commons-net:commons-net:jar:3.1:compile *[INFO] | +- javax.servlet:servlet-api:jar:2.5:compile why a servlet api in hbase-client or hadoop common?* [INFO] | +- org.mortbay.jetty:jetty:jar:6.1.26:compile [INFO] | +- com.sun.jersey:jersey-core:jar:1.8:compile [INFO] | +- com.sun.jersey:jersey-json:jar:1.8:compile *[INFO] | | +- org.codehaus.jettison:jettison:jar:1.3.1:compile (version managed from 1.1)* [INFO] | | | \- stax:stax-api:jar:1.0.1:compile [INFO] | | +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile *[INFO] | | | \- javax.xml.bind:jaxb-api:jar:2.1:compile (version managed from 2.2.2) = decrease the version. dangerous* [INFO] | | | \- javax.activation:activation:jar:1.1:compile [INFO] | | +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.8:compile (version managed from 1.7.1) [INFO] | | \- org.codehaus.jackson:jackson-xc:jar:1.8.8:compile (version managed from 1.7.1) *[INFO] | +- com.sun.jersey:jersey-server:jar:1.8:compile = Why a server in a common piece of code? could we exclude it from hbase client?* [INFO] | | \- asm:asm:jar:3.1:compile *[INFO] | +- tomcat:jasper-compiler:jar:5.5.23:runtime === ??? 
why * *[INFO] | +- tomcat:jasper-runtime:jar:5.5.23:runtime === ??? why* *[INFO] | +- javax.servlet.jsp:jsp-api:jar:2.1:runtime Why? could we exclude it from hbase client?* [INFO] | +- commons-el:commons-el:jar:1.0:runtime [INFO] | +- net.java.dev.jets3t:jets3t:jar:0.6.1:compile [INFO] | +- commons-configuration:commons-configuration:jar:1.6:compile [INFO] | | +- commons-digester:commons-digester:jar:1.8:compile [INFO] | | | \- commons-beanutils:commons-beanutils:jar:1.7.0:compile [INFO] | | \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile [INFO] | +- org.apache.avro:avro:jar:1.5.3:compile [INFO] | | +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile [INFO] |
[jira] [Updated] (HBASE-9549) KeyValue#parseColumn(byte[]) does not handle empty qualifier
[ https://issues.apache.org/jira/browse/HBASE-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9549: Attachment: HBASE-9549.01.patch This patch includes a correction in the way rest/RowSpec builds a get from the query URI. It is also more pedantic about interpreting the results of a call to KeyValue#parseColumn. It will throw exceptions in most places when an invalid column is encountered. It also updates the documentation in Thrift to be more clear about API differences between that and the Java API. KeyValue#parseColumn(byte[]) does not handle empty qualifier Key: HBASE-9549 URL: https://issues.apache.org/jira/browse/HBASE-9549 Project: HBase Issue Type: Bug Components: mapreduce, REST, Thrift, util Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Priority: Minor Fix For: 0.98.0, 0.96.1 Attachments: HBASE-9549.00.patch, HBASE-9549.01.patch HTable allows a user to interact directly with a KeyValue with an empty qualifier, yet {{KeyValue#parseColumn(byte[])}} treats this as a reference to a column family. No qualifier delimiter and an empty qualifier are treated the same: {code} if (index == -1) { // If no delimiter, return array of size 1 return new byte [][] { c }; } else if(index == c.length - 1) { // Only a family, return array size 1 byte [] family = new byte[c.length-1]; System.arraycopy(c, 0, family, 0, family.length); return new byte [][] { family }; } ... {code} This inconsistency breaks external interfaces which depend on {{parseColumn}}, for instance, the shell: {noformat} # shell interactions with KV with an empty qualifier hbase(main):001:0> create 'foo', 'f1' 0 row(s) in 1.4130 seconds => Hbase::Table - foo hbase(main):002:0> put 'foo', 'rk1', 'f1:', 'empty?' 
0 row(s) in 0.0750 seconds # => put works hbase(main):003:0> put 'foo', 'rk1', 'f1:bar', 'value' 0 row(s) in 0.0070 seconds # attempt to retrieve just the kv with empty qualifier hbase(main):004:0> get 'foo', 'rk1', 'f1:' COLUMN CELL f1: timestamp=1379363480020, value=empty? f1:bar timestamp=1379363546360, value=value 2 row(s) in 0.0360 seconds # => returns more than expected! hbase(main):005:0> get 'foo', 'rk1', 'f1' COLUMN CELL f1: timestamp=1379363480020, value=empty? f1:bar timestamp=1379363546360, value=value 2 row(s) in 0.0120 seconds hbase(main):006:0> delete 'foo', 'rk1', 'f1:' 0 row(s) in 0.0290 seconds # => delete works hbase(main):007:0> get 'foo', 'rk1', 'f1:' COLUMN CELL f1:bar timestamp=1379363546360, value=value 1 row(s) in 0.0260 seconds hbase(main):008:0> get 'foo', 'rk1', 'f1' COLUMN CELL f1:bar timestamp=1379363546360, value=value 1 row(s) in 0.0080 seconds # restore the empty qual kv for HTable test hbase(main):011:0> put 'foo', 'rk1', 'f1:', 'empty?' 0 row(s) in 0.0950 seconds hbase(main):010:0> get 'foo', 'rk1', 'f1:' COLUMN CELL f1: timestamp=1379365262555, value=empty? f1:bar timestamp=1379365134135, value=value 2 row(s) in 0.0290 seconds hbase(main):011:0> get 'foo', 'rk1', 'f1' COLUMN CELL f1: timestamp=1379365262555, value=empty? 
f1:bar timestamp=1379365134135, value=value 2 row(s) in 0.0080 seconds hbase(main):012:0 hconf = org.apache.hadoop.hbase.HBaseConfiguration.create() = #Java::OrgApacheHadoopConf::Configuration:0x208e2fb5 hbase(main):013:0 t = org.apache.hadoop.hbase.client.HTable.new(hconf,'foo') = #Java::OrgApacheHadoopHbaseClient::HTable:0x437d51a6 # create a Get requesting the empty qualifier only, works hbase(main):014:0 g1 = org.apache.hadoop.hbase.client.Get.new(org.apache.hadoop.hbase.util.Bytes.toBytes('rk1')) = #Java::OrgApacheHadoopHbaseClient::Get:0x796523ab hbase(main):015:0 g1.addColumn(org.apache.hadoop.hbase.util.Bytes.toBytes('f1'), nil) = #Java::OrgApacheHadoopHbaseClient::Get:0x796523ab hbase(main):016:0 t.get(g1).toString() = keyvalues={rk1/f1:/1379365262555/Put/vlen=6/mvcc=0} # create a Get
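The fix needs to distinguish a trailing delimiter (empty qualifier) from no delimiter at all (family-only reference). A minimal sketch of such a variant follows — illustrative code for the idea only, not the code from the attached patch:

```java
import java.util.Arrays;

// Sketch of a parseColumn variant that treats a trailing ':' as an
// explicit empty qualifier instead of collapsing it into "family only".
// Class and constant names here are illustrative.
public class ParseColumnSketch {
  static final byte COLUMN_FAMILY_DELIMITER = ':';

  static byte[][] parseColumn(byte[] c) {
    int index = -1;
    for (int i = 0; i < c.length; i++) {
      if (c[i] == COLUMN_FAMILY_DELIMITER) { index = i; break; }
    }
    if (index == -1) {
      return new byte[][] { c };  // no delimiter: family reference only
    }
    // Delimiter present: always return [family, qualifier], where the
    // qualifier is zero-length for a trailing ':' as in "f1:".
    byte[] family = Arrays.copyOfRange(c, 0, index);
    byte[] qualifier = Arrays.copyOfRange(c, index + 1, c.length);
    return new byte[][] { family, qualifier };
  }
}
```

With this shape, 'f1' parses to a single element while 'f1:' parses to a family plus a zero-length qualifier, so callers like the shell can target only the empty-qualifier cell.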
[jira] [Commented] (HBASE-9357) Rest server shouldn't need to initiate ZK connection to print usage information
[ https://issues.apache.org/jira/browse/HBASE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769756#comment-13769756 ] Gustavo Anatoly commented on HBASE-9357: I've tested starting the RESTServer (HBase 0.96) without bin/hbase-daemon.sh start, and when I try to access the main page, the RESTServer log shows a connection exception. I was thinking that the RESTServer could not be started without a valid ZK connection. Thanks Nick. Rest server shouldn't need to initiate ZK connection to print usage information --- Key: HBASE-9357 URL: https://issues.apache.org/jira/browse/HBASE-9357 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.95.2 Reporter: Nick Dimiduk Priority: Minor When there's no ZK available, running `bin/hbase rest` must time out before printing usage information. Initiating a connection should happen after parsing CLI options, not before. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
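The fix described is an ordering change in the server's entry point: validate arguments and print usage before touching ZooKeeper. A hypothetical sketch of that ordering (method names are illustrative, not the actual RESTServer code):

```java
// Hypothetical sketch: fail fast on bad CLI arguments before any
// cluster connection is attempted. Names are illustrative only.
public class RestServerSketch {
  static String usage() {
    return "usage: hbase rest start [--port <port>] [--readonly]";
  }

  // Returns the usage string on invalid input (to be printed and the
  // process exited), otherwise null, meaning it is safe to proceed to
  // ZooKeeper / cluster connection setup afterwards.
  static String parseArgs(String[] args) {
    if (args.length == 0 || !args[0].equals("start")) {
      return usage();  // printed without ever opening a ZK connection
    }
    return null;       // only now would the server connect to ZK
  }
}
```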
[jira] [Updated] (HBASE-9549) KeyValue#parseColumn(byte[]) does not handle empty qualifier
[ https://issues.apache.org/jira/browse/HBASE-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9549: Status: Patch Available (was: Open)
[jira] [Updated] (HBASE-9249) Add cp hook before setting PONR in split
[ https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] rajeshbabu updated HBASE-9249: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Add cp hook before setting PONR in split Key: HBASE-9249 URL: https://issues.apache.org/jira/browse/HBASE-9249 Project: HBase Issue Type: Sub-task Affects Versions: 0.98.0 Reporter: rajeshbabu Assignee: rajeshbabu Fix For: 0.98.0 Attachments: HBASE-9249.patch, HBASE-9249_v2.patch, HBASE-9249_v3.patch, HBASE-9249_v4.patch, HBASE-9249_v5.patch, HBASE-9249_v6.patch, HBASE-9249_v7.patch, HBASE-9249_v7.patch, HBASE-9249_v8.patch, HBASE-9249_v8.patch, HBASE-9249_v9.patch This hook helps to perform the split on a user region and the corresponding index region such that both will be split, or neither. With this hook, the split for user and index regions proceeds as follows. user region === 1) Create splitting znode for user region split 2) Close parent user region 3) Split user region storefiles 4) Instantiate child regions of user region Through the new hook we can run the index region transitions: index region === 5) Create splitting znode for index region split 6) Close parent index region 7) Split storefiles of index region 8) Instantiate child regions of the index region If any of 5, 6, 7, 8 fails, roll back those steps and return null; on a null return, throw an exception to roll back 1, 2, 3, 4. 9) Set PONR 10) Do batch put of offline and split entries for user and index regions index region === 11) Open daughters of index regions and transition znode to split. This step we will do through the preSplitAfterPONR hook. Opening index regions before opening user regions helps to avoid put failures if there is a colocation mismatch (this can happen if user regions have finished opening but index regions are still opening) user region === 12) Open daughters of user regions and transition znode to split. 
Even if the region server crashes, in the end both user and index regions will be split, or neither will be.
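The failure handling in steps 5–8 above (roll back the completed index-region steps, then signal the caller so steps 1–4 are rolled back too) can be sketched abstractly. This is a toy model of the protocol only — the Step/undo pairs are placeholders, not HBase APIs:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Abstract sketch of the rollback protocol: run the index-region steps
// in order; if any fails, undo the completed ones in reverse order and
// report failure so the user-region steps (1-4) can be rolled back too.
public class SplitRollbackSketch {
  interface Step {
    boolean run();   // e.g. "close parent index region"
    void undo();     // e.g. "reopen parent index region"
  }

  static boolean runOrRollback(List<Step> steps) {
    Deque<Step> done = new ArrayDeque<>();
    for (Step s : steps) {
      if (!s.run()) {
        while (!done.isEmpty()) done.pop().undo();  // reverse order
        return false;  // caller then rolls back user-region steps 1-4
      }
      done.push(s);
    }
    return true;  // all index steps succeeded: safe to set PONR (step 9)
  }
}
```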
[jira] [Commented] (HBASE-9514) Prevent region from assigning before log splitting is done
[ https://issues.apache.org/jira/browse/HBASE-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769761#comment-13769761 ] Enis Soztutar commented on HBASE-9514: -- bq. We have waited for longer than the ZK session timeout. It should be expired. It is ok to expire twice. My understanding is that server expiry should ONLY come from a zookeeper session timeout. The master being unable to connect to the RS for more than the zk session timeout does not necessarily mean that the session has actually expired. If a network partition happens and the master cannot talk to the RS, but the RS still holds the zk lease, then the master will think that the server is dead, while the RS will happily continue to serve the region. The RS will get a YouAreDeadException if it talks to the master afterwards, and we force lease recovery on the RS logs during log splitting; but I fear that while this process is going on there will be an inconsistency window where the master thinks the RS is dead, while it may not be. Prevent region from assigning before log splitting is done -- Key: HBASE-9514 URL: https://issues.apache.org/jira/browse/HBASE-9514 Project: HBase Issue Type: Bug Components: Region Assignment Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Blocker Attachments: trunk-9514_v1.patch If a region is assigned before log splitting is done by the server shutdown handler, the edits belonging to this region in the hlogs of the dead server will be lost. Generally this is not an issue if users don't assign/unassign a region from hbase shell or via hbase admin. These commands are marked for experts only in the hbase shell help too. However, chaos monkey doesn't care. If we can prevent such regions from being assigned at a bad time, it would make things a little safer. -- This message is automatically generated by JIRA. 
[jira] [Updated] (HBASE-9295) Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator
[ https://issues.apache.org/jira/browse/HBASE-9295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9295: -- Attachment: 9295-v2.txt How about patch v2 ? Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator - Key: HBASE-9295 URL: https://issues.apache.org/jira/browse/HBASE-9295 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0 Attachments: 9295-v1.txt, 9295-v2.txt There were two recent bug fixes (HBASE-9285 and HBASE-9238) for the case where the TreeMap keyed by byte[] doesn't use proper comparator: {code} new TreeMapbyte[], ...() {code} test-patch.sh should be able to detect this situation and report accordingly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9514) Prevent region from assigning before log splitting is done
[ https://issues.apache.org/jira/browse/HBASE-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769660#comment-13769660 ] Jimmy Xiang commented on HBASE-9514: bq. should we rename AM.acquireLock -> AM.acquireRegionLock() Sure, will do. bq. Why not do this for meta? Let me think how to cover meta as well. bq. Is it safe to expire a server like this. It means the master cannot connect to it, but it may still have the zk lease. We have waited for longer than the ZK session timeout. It should be expired. It is ok to expire twice. The timeout should be active. I will fix it. The idea of the patch is to remember the last known region server a region is assigned to. Whenever we try to assign a region, we check whether the last known region server of the region is done with log splitting. If not, we don't assign it, and let SSH complete log splitting and re-assign. We clear the last known region server info when SSH finishes log splitting, or when the region is properly closed. The idea is simple but there are several race conditions to take care of. Prevent region from assigning before log splitting is done -- Key: HBASE-9514 URL: https://issues.apache.org/jira/browse/HBASE-9514 Project: HBase Issue Type: Bug Components: Region Assignment Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Blocker Attachments: trunk-9514_v1.patch If a region is assigned before log splitting is done by the server shutdown handler, the edits belonging to this region in the hlogs of the dead server will be lost. Generally this is not an issue if users don't assign/unassign a region from hbase shell or via hbase admin. These commands are marked for experts only in the hbase shell help too. However, chaos monkey doesn't care. If we can prevent such regions from being assigned at a bad time, it would make things a little safer. -- This message is automatically generated by JIRA. 
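The guard Jimmy describes — refuse to assign a region while its last known server is still having its logs split — can be modeled in a few lines. This is a hypothetical toy sketch of the bookkeeping, not the actual AssignmentManager/RegionStates API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model: remember each region's last known server and the set of
// dead servers whose logs are still being split. A region is assignable
// only when its last server is not pending log splitting. Names are
// illustrative, not the real RegionStates methods.
public class AssignGuardSketch {
  private final Map<String, String> lastServer = new HashMap<>();
  private final Set<String> splittingServers = new HashSet<>();

  void regionOpenedOn(String region, String server) { lastServer.put(region, server); }
  void serverDied(String server) { splittingServers.add(server); }       // SSH starts log split
  void logSplitDone(String server) { splittingServers.remove(server); }  // SSH finished
  void regionClosedCleanly(String region) { lastServer.remove(region); } // no edits at risk

  boolean isAssignable(String region) {
    String server = lastServer.get(region);
    return server == null || !splittingServers.contains(server);
  }
}
```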
[jira] [Commented] (HBASE-9557) strange dependencies for hbase-client
[ https://issues.apache.org/jira/browse/HBASE-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769741#comment-13769741 ] Hudson commented on HBASE-9557: --- SUCCESS: Integrated in HBase-TRUNK #4522 (See [https://builds.apache.org/job/HBase-TRUNK/4522/]) HBASE-9557 strange dependencies for hbase-client (nkeywal: rev 1524095) * /hbase/trunk/hbase-client/pom.xml * /hbase/trunk/hbase-protocol/pom.xml * /hbase/trunk/pom.xml strange dependencies for hbase-client - Key: HBASE-9557 URL: https://issues.apache.org/jira/browse/HBASE-9557 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.98.0, 0.96.1 Attachments: 9557.v1.patch Here is what we have with hadoop 2. On our plate we have - junit (should be test; it's not, because we use it in the integration test runner) - log4j (we have direct dependencies in RegionPlacementMaintainer, RESTServlet, and LogMonitoring, not counting the dependencies in the tests) The others come from hadoop. I marked the ones that were strange to me. Do we need all of them? 
mvn dependency:tree -pl hbase-client -Dhadoop.profile=2.0 [INFO] --- maven-dependency-plugin:2.1:tree (default-cli) @ hbase-client --- [INFO] org.apache.hbase:hbase-client:jar:0.97.0-SNAPSHOT [INFO] +- org.apache.hbase:hbase-common:jar:0.97.0-SNAPSHOT:compile [INFO] | \- commons-collections:commons-collections:jar:3.2.1:compile [INFO] +- org.apache.hbase:hbase-common:test-jar:tests:0.97.0-SNAPSHOT:test [INFO] +- org.apache.hbase:hbase-protocol:jar:0.97.0-SNAPSHOT:compile [INFO] +- commons-codec:commons-codec:jar:1.7:compile [INFO] +- commons-io:commons-io:jar:2.4:compile [INFO] +- commons-lang:commons-lang:jar:2.6:compile [INFO] +- commons-logging:commons-logging:jar:1.1.1:compile [INFO] +- com.google.guava:guava:jar:12.0.1:compile [INFO] | \- com.google.code.findbugs:jsr305:jar:1.3.9:compile [INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.5:compile [INFO] | +- org.slf4j:slf4j-api:jar:1.6.4:compile [INFO] | \- org.slf4j:slf4j-log4j12:jar:1.6.1:compile [INFO] +- org.cloudera.htrace:htrace-core:jar:2.01:compile *[INFO] | \- org.mortbay.jetty:jetty-util:jar:6.1.26:compile === why?* [INFO] +- org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8:compile [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.8.8:compile [INFO] +- io.netty:netty:jar:3.5.9.Final:compile [INFO] +- log4j:log4j:jar:1.2.17:test (scope not updated to compile) [INFO] +- org.apache.hadoop:hadoop-common:jar:2.1.0-beta:compile [INFO] | +- commons-cli:commons-cli:jar:1.2:compile [INFO] | +- org.apache.commons:commons-math:jar:2.2:compile (version managed from 2.1) [INFO] | +- xmlenc:xmlenc:jar:0.52:compile *[INFO] | +- commons-httpclient:commons-httpclient:jar:3.0.1:compile (version managed from 3.1) = decrease the version. dangerous. 
But why hadoop does this?* [INFO] | +- commons-net:commons-net:jar:3.1:compile *[INFO] | +- javax.servlet:servlet-api:jar:2.5:compile why a servlet api in hbase-client or hadoop common?* [INFO] | +- org.mortbay.jetty:jetty:jar:6.1.26:compile [INFO] | +- com.sun.jersey:jersey-core:jar:1.8:compile [INFO] | +- com.sun.jersey:jersey-json:jar:1.8:compile *[INFO] | | +- org.codehaus.jettison:jettison:jar:1.3.1:compile (version managed from 1.1)* [INFO] | | | \- stax:stax-api:jar:1.0.1:compile [INFO] | | +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile *[INFO] | | | \- javax.xml.bind:jaxb-api:jar:2.1:compile (version managed from 2.2.2) = decrease the version. dangerous* [INFO] | | | \- javax.activation:activation:jar:1.1:compile [INFO] | | +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.8:compile (version managed from 1.7.1) [INFO] | | \- org.codehaus.jackson:jackson-xc:jar:1.8.8:compile (version managed from 1.7.1) *[INFO] | +- com.sun.jersey:jersey-server:jar:1.8:compile = Why a server in a common piece of code? could we exclude it from hbase client?* [INFO] | | \- asm:asm:jar:3.1:compile *[INFO] | +- tomcat:jasper-compiler:jar:5.5.23:runtime === ??? why * *[INFO] | +- tomcat:jasper-runtime:jar:5.5.23:runtime === ??? why* *[INFO] | +- javax.servlet.jsp:jsp-api:jar:2.1:runtime Why? could we exclude it from hbase client?* [INFO] | +- commons-el:commons-el:jar:1.0:runtime [INFO] | +- net.java.dev.jets3t:jets3t:jar:0.6.1:compile [INFO] | +- commons-configuration:commons-configuration:jar:1.6:compile [INFO] | | +- commons-digester:commons-digester:jar:1.8:compile [INFO] | | | \- commons-beanutils:commons-beanutils:jar:1.7.0:compile [INFO] | | \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile [INFO] |
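The usual remedy for transitive baggage like the jersey-server and jasper artifacts flagged above is an explicit exclusion on the hadoop-common dependency in the hbase-client pom. A sketch only — whether each exclusion is actually safe is exactly what this issue is deciding:

```xml
<!-- Sketch: exclude server-side jars pulled in via hadoop-common.
     Illustrative; the committed patch may exclude a different set. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <exclusion>
      <groupId>com.sun.jersey</groupId>
      <artifactId>jersey-server</artifactId>
    </exclusion>
    <exclusion>
      <groupId>tomcat</groupId>
      <artifactId>jasper-compiler</artifactId>
    </exclusion>
    <exclusion>
      <groupId>tomcat</groupId>
      <artifactId>jasper-runtime</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```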
[jira] [Commented] (HBASE-9295) Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator
[ https://issues.apache.org/jira/browse/HBASE-9295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769736#comment-13769736 ] Jesse Yates commented on HBASE-9295: I was thinking that each anti-pattern would actually be a check to grep against and a description when found. Otherwise, we are just looking for the BYTES_COMPARATOR anti-pattern Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator - Key: HBASE-9295 URL: https://issues.apache.org/jira/browse/HBASE-9295 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0 Attachments: 9295-v1.txt, 9295-v2.txt There were two recent bug fixes (HBASE-9285 and HBASE-9238) for the case where the TreeMap keyed by byte[] doesn't use proper comparator: {code} new TreeMapbyte[], ...() {code} test-patch.sh should be able to detect this situation and report accordingly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
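Jesse's suggestion — each anti-pattern as a grep pattern paired with a description printed on a match — could take a shape like this in test-patch.sh terms (a hypothetical sketch, not the committed patch):

```shell
#!/usr/bin/env bash
# Sketch: each anti-pattern is a "regex|description" pair; the check
# greps the patch for each regex and prints the description on a hit.
# Hypothetical structure; the real check belongs in dev-support/test-patch.sh.
ANTIPATTERNS=(
  'new TreeMap<byte\[\]|TreeMap keyed by byte[] without Bytes.BYTES_COMPARATOR'
)

check_antipatterns() {
  local patch_file="$1" entry regex description rc=0
  for entry in "${ANTIPATTERNS[@]}"; do
    regex="${entry%%|*}"          # text before the first '|'
    description="${entry#*|}"     # text after the first '|'
    if grep -E -q "$regex" "$patch_file"; then
      echo "Anti-pattern found: $description"
      rc=1
    fi
  done
  return $rc
}
```

Adding a new check then becomes one more entry in the array rather than a new block of script.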
[jira] [Commented] (HBASE-9514) Prevent region from assigning before log splitting is done
[ https://issues.apache.org/jira/browse/HBASE-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769768#comment-13769768 ] Jimmy Xiang commented on HBASE-9514: That's a good point. Let me fix it. Prevent region from assigning before log splitting is done -- Key: HBASE-9514 URL: https://issues.apache.org/jira/browse/HBASE-9514 Project: HBase Issue Type: Bug Components: Region Assignment Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Blocker Attachments: trunk-9514_v1.patch If a region is assigned before log splitting is done by the server shutdown handler, the edits belonging to this region in the hlogs of the dead server will be lost. Generally this is not an issue if users don't assign/unassign a region from hbase shell or via hbase admin. These commands are marked for experts only in the hbase shell help too. However, chaos monkey doesn't care. If we can prevent from assigning such regions in a bad time, it would make things a little safer. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9510) Namespace operations should throw clean exceptions
[ https://issues.apache.org/jira/browse/HBASE-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769771#comment-13769771 ] Francis Liu commented on HBASE-9510: Thanks Stack. I had one more comment which could come as a follow-up. [~enis] Should we add an API to test whether a namespace exists? I'm not fond of having to catch an exception to test existence; it makes the code less readable. Namespace operations should throw clean exceptions -- Key: HBASE-9510 URL: https://issues.apache.org/jira/browse/HBASE-9510 Project: HBase Issue Type: Bug Components: master Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.0 Attachments: 9510v4.txt, 9510v4.txt, hbase-9510_v1.patch, hbase-9510_v2.patch, hbase-9510_v3.patch Some of the namespace operations do not throw clean exceptions mimicking the table exceptions (TableNotFoundException, etc). For example: {code} hbase(main):007:0> describe_namespace 'non_existing_namespace' ERROR: java.io.IOException at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2117) at org.apache.hadoop.hbase.ipc.RpcServer$CallRunner.run(RpcServer.java:1816) at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:165) at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$0(SimpleRpcScheduler.java:161) at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:113) at java.lang.Thread.run(Thread.java:680) Caused by: java.lang.NullPointerException at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toProtoNamespaceDescriptor(ProtobufUtil.java:2138) at org.apache.hadoop.hbase.master.HMaster.getNamespaceDescriptor(HMaster.java:3029) at org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32904) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2079) ... 5 more {code} We can clean up the exceptions thrown from ns commands. 
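The clean-up amounts to checking for a missing namespace and throwing a dedicated, user-readable exception instead of letting a NullPointerException escape through the RPC layer. A sketch of the shape — the types here stand in for the real HBase ones; this is the pattern, not the committed patch:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sketch: return a descriptor or throw a specific exception -- never
// let a null flow onward into protobuf conversion. The exception name
// deliberately mirrors HBase's table exceptions (TableNotFoundException).
public class NamespaceLookupSketch {
  static class NamespaceNotFoundException extends IOException {
    NamespaceNotFoundException(String name) {
      super("Namespace " + name + " not found");
    }
  }

  private final Map<String, String> namespaces = new HashMap<>();

  void create(String name) { namespaces.put(name, name); }

  // Throws a clean, specific error for an unknown namespace instead of
  // returning null and NPE-ing in a caller.
  String getNamespaceDescriptor(String name) throws IOException {
    String ns = namespaces.get(name);
    if (ns == null) {
      throw new NamespaceNotFoundException(name);
    }
    return ns;
  }
}
```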
[jira] [Updated] (HBASE-9549) KeyValue#parseColumn(byte[]) does not handle empty qualifier
[ https://issues.apache.org/jira/browse/HBASE-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9549: Fix Version/s: 0.96.1 0.98.0
[jira] [Updated] (HBASE-8633) Document namespaces in HBase book
[ https://issues.apache.org/jira/browse/HBASE-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-8633: - Resolution: Fixed Fix Version/s: 0.96.0 0.98.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed this. Thanks Francis. We can do follow ups for more documentation if we need. Document namespaces in HBase book - Key: HBASE-8633 URL: https://issues.apache.org/jira/browse/HBASE-8633 Project: HBase Issue Type: Sub-task Reporter: Enis Soztutar Assignee: Francis Liu Fix For: 0.98.0, 0.96.0 Attachments: HBASE-8633.patch We need to add documentation about the namespaces feature. It should go into the HBase book. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9295) Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator
[ https://issues.apache.org/jira/browse/HBASE-9295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9295: -- Attachment: 9295-v1.txt Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator - Key: HBASE-9295 URL: https://issues.apache.org/jira/browse/HBASE-9295 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0 Attachments: 9295-v1.txt There were two recent bug fixes (HBASE-9285 and HBASE-9238) for the case where the TreeMap keyed by byte[] doesn't use proper comparator: {code} new TreeMapbyte[], ...() {code} test-patch.sh should be able to detect this situation and report accordingly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9534) Short-Circuit Coprocessor HTable access when on the same server
[ https://issues.apache.org/jira/browse/HBASE-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769764#comment-13769764 ] Lars Hofhansl commented on HBASE-9534: -- +1 from me. Moving this to client package is not ideal, but it is also not wrong (it is the client as used from the coprocessor). Good stuff. Short-Circuit Coprocessor HTable access when on the same server --- Key: HBASE-9534 URL: https://issues.apache.org/jira/browse/HBASE-9534 Project: HBase Issue Type: Bug Reporter: Jesse Yates Assignee: Jesse Yates Labels: coprocessors, performance, regionserver Fix For: 0.98.0 Attachments: hbase-9534-0.94-v0.patch, hbase-9534-0.94-v1.patch, hbase-9534-trunk-v0.patch Coprocessors currently create a full HTable when they want to write. However, we know that coprocessors must run from within an HBase server (either master or RS). For the master, its rare that we are going to be doing performance sensitive operations, but RS calls could be very time-intensive. Therefore, we should be able to tell when a call from a CP attempts to talk to the RS on which it lives and just short-circuit to calling that RS, rather than going the long way around (which does the full marshalling/unmarshalling of data, as well as going over the loopback interface). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
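The short-circuit decision reduces to comparing where the coprocessor runs with where the target region lives: if they match, call the local region server directly and skip the marshalling and loopback hop. A toy sketch of that dispatch choice — interfaces are illustrative only, not the coprocessor connection API:

```java
// Toy sketch of the dispatch: use the in-process path when the
// coprocessor's host RS also serves the target region, otherwise fall
// back to the normal remote client path.
public class ShortCircuitSketch {
  interface Caller { String call(String table); }

  static Caller select(String localServerName, String regionServerName,
                       Caller local, Caller remote) {
    // Same server: skip serialization and the loopback interface.
    return localServerName.equals(regionServerName) ? local : remote;
  }
}
```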
[jira] [Commented] (HBASE-9557) strange dependencies for hbase-client
[ https://issues.apache.org/jira/browse/HBASE-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769663#comment-13769663 ] Nicolas Liochon commented on HBASE-9557: Agreed, but if we have a third party tested with a component in version X, we should ship with a version >= X, ideally X, except if we are really sure of ourselves (and I doubt it's the case here :-) ). strange dependencies for hbase-client - Key: HBASE-9557 URL: https://issues.apache.org/jira/browse/HBASE-9557 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.98.0, 0.96.1 Attachments: 9557.v1.patch Here is what we have with hadoop 2. On our plate we have - junit (should be test; it's not, because we use it in the integration test runner) - log4j (we have direct dependencies in RegionPlacementMaintainer, RESTServlet, and LogMonitoring, not counting the dependencies in the tests) The others come from hadoop. I marked the ones that were strange to me. Do we need all of them? 
mvn dependency:tree -pl hbase-client -Dhadoop.profile=2.0 [INFO] --- maven-dependency-plugin:2.1:tree (default-cli) @ hbase-client --- [INFO] org.apache.hbase:hbase-client:jar:0.97.0-SNAPSHOT [INFO] +- org.apache.hbase:hbase-common:jar:0.97.0-SNAPSHOT:compile [INFO] | \- commons-collections:commons-collections:jar:3.2.1:compile [INFO] +- org.apache.hbase:hbase-common:test-jar:tests:0.97.0-SNAPSHOT:test [INFO] +- org.apache.hbase:hbase-protocol:jar:0.97.0-SNAPSHOT:compile [INFO] +- commons-codec:commons-codec:jar:1.7:compile [INFO] +- commons-io:commons-io:jar:2.4:compile [INFO] +- commons-lang:commons-lang:jar:2.6:compile [INFO] +- commons-logging:commons-logging:jar:1.1.1:compile [INFO] +- com.google.guava:guava:jar:12.0.1:compile [INFO] | \- com.google.code.findbugs:jsr305:jar:1.3.9:compile [INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.5:compile [INFO] | +- org.slf4j:slf4j-api:jar:1.6.4:compile [INFO] | \- org.slf4j:slf4j-log4j12:jar:1.6.1:compile [INFO] +- org.cloudera.htrace:htrace-core:jar:2.01:compile *[INFO] | \- org.mortbay.jetty:jetty-util:jar:6.1.26:compile <=== why?* [INFO] +- org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8:compile [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.8.8:compile [INFO] +- io.netty:netty:jar:3.5.9.Final:compile [INFO] +- log4j:log4j:jar:1.2.17:test (scope not updated to compile) [INFO] +- org.apache.hadoop:hadoop-common:jar:2.1.0-beta:compile [INFO] | +- commons-cli:commons-cli:jar:1.2:compile [INFO] | +- org.apache.commons:commons-math:jar:2.2:compile (version managed from 2.1) [INFO] | +- xmlenc:xmlenc:jar:0.52:compile *[INFO] | +- commons-httpclient:commons-httpclient:jar:3.0.1:compile (version managed from 3.1) => decrease the version. dangerous. 
But why does hadoop do this?* [INFO] | +- commons-net:commons-net:jar:3.1:compile *[INFO] | +- javax.servlet:servlet-api:jar:2.5:compile <=== why a servlet api in hbase-client or hadoop-common?* [INFO] | +- org.mortbay.jetty:jetty:jar:6.1.26:compile [INFO] | +- com.sun.jersey:jersey-core:jar:1.8:compile [INFO] | +- com.sun.jersey:jersey-json:jar:1.8:compile *[INFO] | | +- org.codehaus.jettison:jettison:jar:1.3.1:compile (version managed from 1.1)* [INFO] | | | \- stax:stax-api:jar:1.0.1:compile [INFO] | | +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile *[INFO] | | | \- javax.xml.bind:jaxb-api:jar:2.1:compile (version managed from 2.2.2) => decrease the version. dangerous* [INFO] | | | \- javax.activation:activation:jar:1.1:compile [INFO] | | +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.8:compile (version managed from 1.7.1) [INFO] | | \- org.codehaus.jackson:jackson-xc:jar:1.8.8:compile (version managed from 1.7.1) *[INFO] | +- com.sun.jersey:jersey-server:jar:1.8:compile <=== Why a server in a common piece of code? Could we exclude it from hbase-client?* [INFO] | | \- asm:asm:jar:3.1:compile *[INFO] | +- tomcat:jasper-compiler:jar:5.5.23:runtime <=== ??? why* *[INFO] | +- tomcat:jasper-runtime:jar:5.5.23:runtime <=== ??? why* *[INFO] | +- javax.servlet.jsp:jsp-api:jar:2.1:runtime <=== Why? Could we exclude it from hbase-client?* [INFO] | +- commons-el:commons-el:jar:1.0:runtime [INFO] | +- net.java.dev.jets3t:jets3t:jar:0.6.1:compile [INFO] | +- commons-configuration:commons-configuration:jar:1.6:compile [INFO] | | +- commons-digester:commons-digester:jar:1.8:compile [INFO] | | | \- commons-beanutils:commons-beanutils:jar:1.7.0:compile [INFO] | | \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile [INFO] | +- org.apache.avro:avro:jar:1.5.3:compile
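For reference, a fix of this kind usually takes the shape of Maven <exclusions> on the hadoop-common dependency in hbase-client/pom.xml, cutting the server-side artifacts flagged above. This is a hypothetical sketch; the exact list in 9557.v1.patch may differ:

```xml
<!-- Hypothetical sketch: exclude server-side artifacts that hadoop-common
     drags onto the hbase-client classpath. Not the actual 9557.v1.patch. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <exclusion>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.sun.jersey</groupId>
      <artifactId>jersey-server</artifactId>
    </exclusion>
    <exclusion>
      <groupId>tomcat</groupId>
      <artifactId>jasper-compiler</artifactId>
    </exclusion>
    <exclusion>
      <groupId>tomcat</groupId>
      <artifactId>jasper-runtime</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Re-running mvn dependency:tree -pl hbase-client afterwards confirms which of the flagged artifacts have actually dropped out.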
[jira] [Commented] (HBASE-9557) strange dependencies for hbase-client
[ https://issues.apache.org/jira/browse/HBASE-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769745#comment-13769745 ] Hudson commented on HBASE-9557: --- SUCCESS: Integrated in hbase-0.96 #62 (See [https://builds.apache.org/job/hbase-0.96/62/]) HBASE-9557 strange dependencies for hbase-client (nkeywal: rev 1524096) * /hbase/branches/0.96/hbase-client/pom.xml * /hbase/branches/0.96/hbase-protocol/pom.xml * /hbase/branches/0.96/pom.xml strange dependencies for hbase-client - Key: HBASE-9557 URL: https://issues.apache.org/jira/browse/HBASE-9557 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.98.0, 0.96.1 Attachments: 9557.v1.patch
[jira] [Commented] (HBASE-8751) Enable peer cluster to choose/change the ColumnFamilies/Tables it really want to replicate from a source cluster
[ https://issues.apache.org/jira/browse/HBASE-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769737#comment-13769737 ] Santosh Banerjee commented on HBASE-8751: - | The ReplicationPeer.tableCFs map can only be updated by the (same/single) event thread of zookeeper(ZooKeeperWatcher) of ReplicationPeer, and ReplicationSource calls zkHelper.getTableCFs(peerId) for each hlog entry. Seems not a must to declare it volatile? Well, even in that case the variable is being read and modified in different threads, and should therefore qualify for some form of synchronization. Hence declaring it volatile sounds necessary. What do you say? Enable peer cluster to choose/change the ColumnFamilies/Tables it really want to replicate from a source cluster Key: HBASE-8751 URL: https://issues.apache.org/jira/browse/HBASE-8751 Project: HBase Issue Type: Improvement Components: Replication Reporter: Feng Honghua Attachments: HBASE-8751-0.94-V0.patch Consider scenarios (all cf are with replication-scope=1): 1) cluster S has 3 tables, table A has cfA,cfB, table B has cfX,cfY, table C has cf1,cf2. 2) cluster X wants to replicate table A : cfA, table B : cfX and table C from cluster S. 3) cluster Y wants to replicate table B : cfY, table C : cf2 from cluster S. Current replication implementation can't achieve this since it'll push the data of all the replicatable column-families from cluster S to all its peers, X/Y in this scenario. This improvement provides a fine-grained replication scheme which enables a peer cluster to choose the column-families/tables it really wants from the source cluster: A). Set the table:cf-list for a peer when addPeer: hbase-shell add_peer '3', zk:1100:/hbase, table1; table2:cf1,cf2; table3:cf2 B). View the table:cf-list config for a peer using show_peer_tableCFs: hbase-shell show_peer_tableCFs 1 C). 
Change/set the table:cf-list for a peer using set_peer_tableCFs: hbase-shell set_peer_tableCFs '2', table1:cfX; table2:cf1; table3:cf1,cf2 In this scheme, replication-scope=1 only means a column-family CAN be replicated to other clusters, but only the 'table:cf' list determines WHICH cf/table will actually be replicated to a specific peer. To provide backward compatibility, an empty 'table:cf' list will replicate all replicatable cf/table. (This means we don't allow a peer which replicates nothing from a source cluster; we think that's reasonable: if replicating nothing, why bother adding a peer?) This improvement addresses the exact problem raised by the first FAQ in http://hbase.apache.org/replication.html: GLOBAL means replicate? Any provision to replicate only to cluster X and not to cluster Y? or is that for later? Yes, this is for much later. I also noticed somebody mentioned making replication-scope an integer rather than a boolean for such fine-grained replication purposes, but I think extending replication-scope can't achieve the same replication granularity and flexibility as the per-peer replication configuration above. This improvement has been running smoothly in our production clusters (Xiaomi) for several months. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
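The visibility concern discussed above can be sketched as follows. This is a hypothetical illustration of the pattern, not HBase's actual ReplicationPeer code: the ZooKeeper event thread swaps in an immutable snapshot through a volatile reference, so reader threads (ReplicationSource, one read per hlog entry) always see a fully constructed, current map without taking a lock.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the ReplicationPeer.tableCFs pattern; names are
// modeled on the discussion, not copied from the HBase source.
class ReplicationPeerSketch {
    // volatile: a write by the ZK watcher thread happens-before any
    // subsequent read by a ReplicationSource thread.
    private volatile Map<String, List<String>> tableCFs = Collections.emptyMap();

    // Called from the (single) ZooKeeper event thread when the peer's
    // table:cf config changes: build a fresh immutable snapshot, then swap.
    void onTableCFsChanged(Map<String, List<String>> fresh) {
        tableCFs = Collections.unmodifiableMap(new HashMap<>(fresh));
    }

    // Called from reader threads for each edit; lock-free read of the
    // current snapshot.
    boolean shouldReplicate(String table) {
        return tableCFs.containsKey(table);
    }
}
```

Publishing immutable snapshots makes the volatile write the only synchronization point; a reader can never observe a half-updated map, which is the hazard if the field were plain and the map mutated in place.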
[jira] [Updated] (HBASE-9549) KeyValue#parseColumn(byte[]) does not handle empty qualifier
[ https://issues.apache.org/jira/browse/HBASE-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9549: Status: Open (was: Patch Available) KeyValue#parseColumn(byte[]) does not handle empty qualifier Key: HBASE-9549 URL: https://issues.apache.org/jira/browse/HBASE-9549 Project: HBase Issue Type: Bug Components: mapreduce, REST, Thrift, util Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Priority: Minor Fix For: 0.98.0, 0.96.1 Attachments: HBASE-9549.00.patch HTable allows a user to interact directly with a KeyValue with an empty qualifier, yet {{KeyValue#parseColumn(byte[])}} treats this as a reference to a column family. No qualifier delimiter and an empty qualifier are treated as the same: {code} if (index == -1) { // If no delimiter, return array of size 1 return new byte [][] { c }; } else if (index == c.length - 1) { // Only a family, return array size 1 byte [] family = new byte[c.length-1]; System.arraycopy(c, 0, family, 0, family.length); return new byte [][] { family }; } ... {code} This inconsistency breaks external interfaces which depend on {{parseColumn}}, for instance, the shell: {noformat} # shell interactions with KV with an empty qualifier hbase(main):001:0> create 'foo', 'f1' 0 row(s) in 1.4130 seconds => Hbase::Table - foo hbase(main):002:0> put 'foo', 'rk1', 'f1:', 'empty?' 0 row(s) in 0.0750 seconds # <= put works hbase(main):003:0> put 'foo', 'rk1', 'f1:bar', 'value' 0 row(s) in 0.0070 seconds # attempt to retrieve just the kv with empty qualifier hbase(main):004:0> get 'foo', 'rk1', 'f1:' COLUMN CELL f1: timestamp=1379363480020, value=empty? f1:bar timestamp=1379363546360, value=value 2 row(s) in 0.0360 seconds # <= returns more than expected! hbase(main):005:0> get 'foo', 'rk1', 'f1' COLUMN CELL f1: timestamp=1379363480020, value=empty? 
f1:bar timestamp=1379363546360, value=value 2 row(s) in 0.0120 seconds hbase(main):006:0> delete 'foo', 'rk1', 'f1:' 0 row(s) in 0.0290 seconds # <= delete works hbase(main):007:0> get 'foo', 'rk1', 'f1:' COLUMN CELL f1:bar timestamp=1379363546360, value=value 1 row(s) in 0.0260 seconds hbase(main):008:0> get 'foo', 'rk1', 'f1' COLUMN CELL f1:bar timestamp=1379363546360, value=value 1 row(s) in 0.0080 seconds # restore the empty qual kv for HTable test hbase(main):009:0> put 'foo', 'rk1', 'f1:', 'empty?' 0 row(s) in 0.0950 seconds hbase(main):010:0> get 'foo', 'rk1', 'f1:' COLUMN CELL f1: timestamp=1379365262555, value=empty? f1:bar timestamp=1379365134135, value=value 2 row(s) in 0.0290 seconds hbase(main):011:0> get 'foo', 'rk1', 'f1' COLUMN CELL f1: timestamp=1379365262555, value=empty? f1:bar timestamp=1379365134135, value=value 2 row(s) in 0.0080 seconds hbase(main):012:0> hconf = org.apache.hadoop.hbase.HBaseConfiguration.create() => #<Java::OrgApacheHadoopConf::Configuration:0x208e2fb5> hbase(main):013:0> t = org.apache.hadoop.hbase.client.HTable.new(hconf, 'foo') => #<Java::OrgApacheHadoopHbaseClient::HTable:0x437d51a6> # create a Get requesting the empty qualifier only, works hbase(main):014:0> g1 = org.apache.hadoop.hbase.client.Get.new(org.apache.hadoop.hbase.util.Bytes.toBytes('rk1')) => #<Java::OrgApacheHadoopHbaseClient::Get:0x796523ab> hbase(main):015:0> g1.addColumn(org.apache.hadoop.hbase.util.Bytes.toBytes('f1'), nil) => #<Java::OrgApacheHadoopHbaseClient::Get:0x796523ab> hbase(main):016:0> t.get(g1).toString() => keyvalues={rk1/f1:/1379365262555/Put/vlen=6/mvcc=0} # create a Get requesting the whole family, works hbase(main):017:0> g2 = org.apache.hadoop.hbase.client.Get.new(org.apache.hadoop.hbase.util.Bytes.toBytes('rk1')) => #<Java::OrgApacheHadoopHbaseClient::Get:0x52e5376a> hbase(main):018:0> g2.addFamily(org.apache.hadoop.hbase.util.Bytes.toBytes('f1')) => #<Java::OrgApacheHadoopHbaseClient::Get:0x52e5376a> hbase(main):019:0> t.get(g2).toString() =>
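One possible shape of the fix (a sketch, not the attached HBASE-9549.00.patch): treat a trailing delimiter as "family plus explicit empty qualifier" instead of folding it into the family-only case, so callers can tell the two apart.

```java
import java.util.Arrays;

// Hypothetical sketch of a parseColumn that preserves an empty qualifier.
// "f1"     -> { "f1" }             (no delimiter: family only)
// "f1:"    -> { "f1", "" }         (delimiter present: empty qualifier kept)
// "f1:bar" -> { "f1", "bar" }
class ColumnParser {
    static final byte DELIM = ':';

    static byte[][] parseColumn(byte[] c) {
        int index = -1;
        for (int i = 0; i < c.length; i++) {
            if (c[i] == DELIM) { index = i; break; }
        }
        if (index == -1) {
            // No delimiter at all: reference to the column family.
            return new byte[][] { c };
        }
        byte[] family = Arrays.copyOfRange(c, 0, index);
        byte[] qualifier = Arrays.copyOfRange(c, index + 1, c.length);
        // A trailing ':' yields a zero-length qualifier, kept explicit.
        return new byte[][] { family, qualifier };
    }
}
```

With this shape, the shell's get 'foo', 'rk1', 'f1:' maps to addColumn(family, empty) while get 'foo', 'rk1', 'f1' maps to addFamily(family), matching the HTable behavior shown in the transcript.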
[jira] [Commented] (HBASE-8633) Document namespaces in HBase book
[ https://issues.apache.org/jira/browse/HBASE-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769787#comment-13769787 ] Francis Liu commented on HBASE-8633: Thanks Enis. I'll add the ACL docs once the namespace ACL jira is done. Document namespaces in HBase book - Key: HBASE-8633 URL: https://issues.apache.org/jira/browse/HBASE-8633 Project: HBase Issue Type: Sub-task Reporter: Enis Soztutar Assignee: Francis Liu Fix For: 0.98.0, 0.96.0 Attachments: HBASE-8633.patch We need to add documentation about the namespaces feature. It should go into the HBase book.
[jira] [Commented] (HBASE-9510) Namespace operations should throw clean exceptions
[ https://issues.apache.org/jira/browse/HBASE-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769785#comment-13769785 ] Enis Soztutar commented on HBASE-9510: -- bq. Should we add an API to test whether a namespace exists? Sounds good. Namespace operations should throw clean exceptions -- Key: HBASE-9510 URL: https://issues.apache.org/jira/browse/HBASE-9510 Project: HBase Issue Type: Bug Components: master Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.0 Attachments: 9510v4.txt, 9510v4.txt, hbase-9510_v1.patch, hbase-9510_v2.patch, hbase-9510_v3.patch Some of the namespace operations do not throw clean exceptions mimicking the table exceptions (TableNotFoundException, etc). For example: {code} hbase(main):007:0> describe_namespace 'non_existing_namespace' ERROR: java.io.IOException at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2117) at org.apache.hadoop.hbase.ipc.RpcServer$CallRunner.run(RpcServer.java:1816) at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:165) at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$0(SimpleRpcScheduler.java:161) at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:113) at java.lang.Thread.run(Thread.java:680) Caused by: java.lang.NullPointerException at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toProtoNamespaceDescriptor(ProtobufUtil.java:2138) at org.apache.hadoop.hbase.master.HMaster.getNamespaceDescriptor(HMaster.java:3029) at org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32904) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2079) ... 5 more {code} We can clean up the exceptions thrown from ns commands.
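The clean-exception idea can be sketched like this. The class names are illustrative (modeled on TableNotFoundException), not the API the patch actually adds: the point is to fail fast with a descriptive exception on the lookup instead of letting a null descriptor reach the protobuf layer and surface as a NullPointerException.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical exception type mirroring TableNotFoundException.
class NamespaceNotFoundException extends RuntimeException {
    NamespaceNotFoundException(String name) {
        super("Namespace " + name + " does not exist");
    }
}

// Illustrative registry: the lookup throws a clean, client-decodable error
// rather than returning null and NPE-ing deep inside serialization code.
class NamespaceRegistry {
    private final Map<String, Object> descriptors = new HashMap<>();

    void create(String name, Object descriptor) {
        descriptors.put(name, descriptor);
    }

    Object getNamespaceDescriptor(String name) {
        Object d = descriptors.get(name);
        if (d == null) {
            throw new NamespaceNotFoundException(name); // fail fast, named cause
        }
        return d;
    }
}
```

A shell command like describe_namespace can then render "Namespace non_existing_namespace does not exist" instead of a raw server-side stack trace.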
[jira] [Commented] (HBASE-9549) KeyValue#parseColumn(byte[]) does not handle empty qualifier
[ https://issues.apache.org/jira/browse/HBASE-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769662#comment-13769662 ] Nick Dimiduk commented on HBASE-9549: - I agree, it is incompatible in a subtle way. But our handling of an empty qualifier is inconsistent -- we need to break someone to fix that. KeyValue#parseColumn(byte[]) does not handle empty qualifier Key: HBASE-9549 URL: https://issues.apache.org/jira/browse/HBASE-9549 Project: HBase Issue Type: Bug Components: mapreduce, REST, Thrift, util Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Priority: Minor Attachments: HBASE-9549.00.patch
[jira] [Assigned] (HBASE-9295) Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator
[ https://issues.apache.org/jira/browse/HBASE-9295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reassigned HBASE-9295: - Assignee: Ted Yu Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator - Key: HBASE-9295 URL: https://issues.apache.org/jira/browse/HBASE-9295 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0 There were two recent bug fixes (HBASE-9285 and HBASE-9238) for the case where a TreeMap keyed by byte[] doesn't use the proper comparator: {code} new TreeMap<byte[], ...>() {code} test-patch.sh should be able to detect this situation and report accordingly.
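The anti-pattern and its fix can be sketched with java.util.Arrays::compare standing in for HBase's Bytes.BYTES_COMPARATOR. Without a content-based comparator, a TreeMap keyed by byte[] is broken from the start: byte[] does not implement Comparable, so the default ordering cannot work.

```java
import java.util.Arrays;
import java.util.TreeMap;

// Illustrative helper (not HBase code): build a TreeMap whose byte[] keys
// are ordered lexicographically by content, the role Bytes.BYTES_COMPARATOR
// plays in HBase itself.
class ByteKeyedMaps {
    static <V> TreeMap<byte[], V> newByteKeyedMap() {
        // Arrays::compare gives an unsigned-length-aware lexicographic order
        // over the array contents rather than object identity.
        return new TreeMap<>(Arrays::compare);
    }
}
```

With the default ordering, new TreeMap<byte[], V>() throws ClassCastException on the first put on modern JDKs (byte[] is cast to Comparable); with a comparator, lookups succeed for any array with equal contents. That is exactly the bug class test-patch.sh is meant to flag.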
[jira] [Updated] (HBASE-9560) bin/hbase clean --cleanAll should not skip data cleaning if in local mode
[ https://issues.apache.org/jira/browse/HBASE-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-9560: - Attachment: hbase-9560_v1.patch Simple patch. bin/hbase clean --cleanAll should not skip data cleaning if in local mode - Key: HBASE-9560 URL: https://issues.apache.org/jira/browse/HBASE-9560 Project: HBase Issue Type: Improvement Components: scripts Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.1 Attachments: hbase-9560_v1.patch I don't see a reason why we are skipping cleaning in local mode: {code} Eniss-MacBook-Pro:hbase-0.96$ bin/hbase clean --cleanAll Skipping hbase data clearing in standalone mode. {code} I use this often for standalone mode.
[jira] [Commented] (HBASE-9513) Why is PE#RandomSeekScanTest way slower in 0.96 than in 0.94?
[ https://issues.apache.org/jira/browse/HBASE-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769827#comment-13769827 ] Matteo Bertozzi commented on HBASE-9513: hadoop-2 for me, I've not tried with hadoop-1 Why is PE#RandomSeekScanTest way slower in 0.96 than in 0.94? - Key: HBASE-9513 URL: https://issues.apache.org/jira/browse/HBASE-9513 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Our JMS reported this on the 0.96.0RC0 thread. Our Matteo found similar on an offline thread. What's up here?
[jira] [Commented] (HBASE-9560) bin/hbase clean --cleanAll should not skip data cleaning if in local mode
[ https://issues.apache.org/jira/browse/HBASE-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769797#comment-13769797 ] stack commented on HBASE-9560: -- +1 bin/hbase clean --cleanAll should not skip data cleaning if in local mode - Key: HBASE-9560 URL: https://issues.apache.org/jira/browse/HBASE-9560 Project: HBase Issue Type: Improvement Components: scripts Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.1 Attachments: hbase-9560_v1.patch
[jira] [Commented] (HBASE-9557) strange dependencies for hbase-client
[ https://issues.apache.org/jira/browse/HBASE-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769795#comment-13769795 ] Hudson commented on HBASE-9557: --- SUCCESS: Integrated in hbase-0.96-hadoop2 #34 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/34/]) HBASE-9557 strange dependencies for hbase-client (nkeywal: rev 1524096) * /hbase/branches/0.96/hbase-client/pom.xml * /hbase/branches/0.96/hbase-protocol/pom.xml * /hbase/branches/0.96/pom.xml strange dependencies for hbase-client - Key: HBASE-9557 URL: https://issues.apache.org/jira/browse/HBASE-9557 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.98.0, 0.96.1 Attachments: 9557.v1.patch
[jira] [Created] (HBASE-9560) bin/hbase clean --cleanAll should not skip data cleaning if in local mode
Enis Soztutar created HBASE-9560: Summary: bin/hbase clean --cleanAll should not skip data cleaning if in local mode Key: HBASE-9560 URL: https://issues.apache.org/jira/browse/HBASE-9560 Project: HBase Issue Type: Improvement Components: scripts Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.1 Attachments: hbase-9560_v1.patch
[jira] [Commented] (HBASE-9513) Why is PE#RandomSeekScanTest way slower in 0.96 than in 0.94?
[ https://issues.apache.org/jira/browse/HBASE-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769824#comment-13769824 ] Devaraj Das commented on HBASE-9513: Was this observed on Hadoop-1 or Hadoop-2? [~jeanmarcc] [~mbertozzi] Why is PE#RandomSeekScanTest way slower in 0.96 than in 0.94? - Key: HBASE-9513 URL: https://issues.apache.org/jira/browse/HBASE-9513 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0
[jira] [Commented] (HBASE-9510) Namespace operations should throw clean exceptions
[ https://issues.apache.org/jira/browse/HBASE-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769802#comment-13769802 ] stack commented on HBASE-9510: -- Sounds good yeah. New issue? Namespace operations should throw clean exceptions -- Key: HBASE-9510 URL: https://issues.apache.org/jira/browse/HBASE-9510 Project: HBase Issue Type: Bug Components: master Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.0 Attachments: 9510v4.txt, 9510v4.txt, hbase-9510_v1.patch, hbase-9510_v2.patch, hbase-9510_v3.patch
[jira] [Commented] (HBASE-8348) Polish the migration to 0.96
[ https://issues.apache.org/jira/browse/HBASE-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769800#comment-13769800 ] stack commented on HBASE-8348: -- [~himan...@cloudera.com] Excellent

Polish the migration to 0.96 Key: HBASE-8348 URL: https://issues.apache.org/jira/browse/HBASE-8348 Project: HBase Issue Type: Bug Affects Versions: 0.95.0 Reporter: Jean-Daniel Cryans Assignee: rajeshbabu Priority: Blocker Fix For: 0.96.0 Attachments: 8348v5.txt, 8348v5.txt, HBASE-8348-approach-2.patch, HBASE-8348-approach-2-v2.1.patch, HBASE-8348-approach-2-v2.2.patch, HBASE-8348-approach-2-v2.3.patch, HBASE-8348-approach-2-v2.4.patch, HBASE-8348-approach-3.patch, HBASE-8348_trunk.patch, HBASE-8348_trunk_v2.patch, HBASE-8348_trunk_v3.patch, log, log-2, Upgradeto96.docx, Upgradeto96.pdf

Currently, migration works but there are still a couple of rough edges:
- HBASE-8045 finished the .META. migration but didn't remove ROOT, so it's still on the filesystem.
- Data in ZK needs to be removed manually. Either we fix up the data in ZK or we delete it ourselves.
- TestMetaMigrationRemovingHTD has a testMetaUpdatedFlagInROOT method, but ROOT is gone now.
Elliott was also mentioning that we could have hbase migrate do the HFileV1 checks, clear ZK, remove ROOT, etc. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9295) Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator
[ https://issues.apache.org/jira/browse/HBASE-9295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769713#comment-13769713 ] Jesse Yates commented on HBASE-9295: Would be nice if each anti-pattern came with a description of why it is bad (i.e. in this case, it omits the byte[] comparator for a TreeMap). Otherwise, a cool idea!

Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator - Key: HBASE-9295 URL: https://issues.apache.org/jira/browse/HBASE-9295 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0 Attachments: 9295-v1.txt

There were two recent bug fixes (HBASE-9285 and HBASE-9238) for the case where a TreeMap keyed by byte[] doesn't use the proper comparator:
{code}
new TreeMap<byte[], ...>()
{code}
test-patch.sh should be able to detect this situation and report accordingly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
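The fix in HBase is to pass Bytes.BYTES_COMPARATOR to the TreeMap constructor. A self-contained sketch of why the comparator matters, using an inline stand-in comparator instead of the HBase class so the example runs on its own:

```java
import java.util.Comparator;
import java.util.TreeMap;

public class ByteArrayKeyDemo {
    // Inline stand-in for HBase's Bytes.BYTES_COMPARATOR:
    // unsigned, lexicographic byte order.
    static final Comparator<byte[]> BYTES_COMPARATOR = (a, b) -> {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = (a[i] & 0xff) - (b[i] & 0xff);
            if (cmp != 0) return cmp;
        }
        return a.length - b.length;
    };

    public static void main(String[] args) {
        // Anti-pattern: new TreeMap<byte[], String>() -- byte[] does not
        // implement Comparable, so the bare constructor fails at put() time
        // with a ClassCastException, and any identity-based workaround would
        // treat two arrays with equal contents as different keys.

        // With a content-based comparator, a lookup using a *different*
        // array holding the same bytes succeeds:
        TreeMap<byte[], String> map = new TreeMap<>(BYTES_COMPARATOR);
        map.put(new byte[] {1, 2}, "v");
        System.out.println(map.get(new byte[] {1, 2})); // prints: v
    }
}
```

Note the `& 0xff` masking: Java bytes are signed, so comparing them directly would order 0xFF before 0x01, which is not the byte order HBase row keys rely on.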
[jira] [Commented] (HBASE-7462) TestDrainingServer is an integration test. It should be a unit test instead
[ https://issues.apache.org/jira/browse/HBASE-7462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769724#comment-13769724 ] Gustavo Anatoly commented on HBASE-7462: Thanks, Nicolas and Stack, for reviewing the patch, and for your patience and the changes you made. I hope it has helped a little bit.

TestDrainingServer is an integration test. It should be a unit test instead --- Key: HBASE-7462 URL: https://issues.apache.org/jira/browse/HBASE-7462 Project: HBase Issue Type: Wish Components: test Affects Versions: 0.95.2 Reporter: Nicolas Liochon Assignee: Gustavo Anatoly Priority: Trivial Labels: noob Attachments: 7462.v3.patch, HBASE-7462-v2.patch

TestDrainingServer tests the feature that lets you declare that a regionserver should not get new regions. As it is written today, it's an integration test: it starts and stops a cluster. The test would be more efficient if it just checked that the AssignmentManager does not use the drained region server, whatever the circumstances (bulk assign or not, for example). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9561) hbase-server-tests.jar contains a test mapred-site.xml
Elliott Clark created HBASE-9561: Summary: hbase-server-tests.jar contains a test mapred-site.xml Key: HBASE-9561 URL: https://issues.apache.org/jira/browse/HBASE-9561 Project: HBase Issue Type: Bug Reporter: Elliott Clark Assignee: Elliott Clark Having a mapred-site.xml causes all the mapreduce jobs run from the hbase command line to fail. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9510) Namespace operations should throw clean exceptions
[ https://issues.apache.org/jira/browse/HBASE-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769722#comment-13769722 ] Enis Soztutar commented on HBASE-9510: -- Thanks Stack. I was waiting for Francis to see whether he has any more comments. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9550) IntegrationTestBigLinkedList used to be able to run on pseudo-distributed clusters
[ https://issues.apache.org/jira/browse/HBASE-9550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769823#comment-13769823 ] Enis Soztutar commented on HBASE-9550: -- Any more comments? I want to commit this to 0.96.0.

IntegrationTestBigLinkedList used to be able to run on pseudo-distributed clusters -- Key: HBASE-9550 URL: https://issues.apache.org/jira/browse/HBASE-9550 Project: HBase Issue Type: Bug Components: test Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.0 Attachments: hbase-9550_v1.patch

IntegrationTestBigLinkedList was able to run on clusters with one node (in single-node deployments). We should bring that back. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9559) getRowKeyAtOrBefore may be incorrect for some cases
Sergey Shelukhin created HBASE-9559: --- Summary: getRowKeyAtOrBefore may be incorrect for some cases Key: HBASE-9559 URL: https://issues.apache.org/jira/browse/HBASE-9559 Project: HBase Issue Type: Bug Reporter: Sergey Shelukhin Priority: Minor

See also HBASE-9503. Unless I'm missing something, getRowKeyAtOrBefore does not handle cross-file deletes correctly. It also doesn't handle timestamps between two candidates of the same row if they are in different files (the one that is latest by ts is going to be returned). It is only used for meta, so it might be working due to the low update rate, the lack of anomalies, and the fact that row values in meta are reasonably persistent; new ones are only added on split. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9560) bin/hbase clean --cleanAll should not skip data cleaning if in local mode
[ https://issues.apache.org/jira/browse/HBASE-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-9560: - Summary: bin/hbase clean --cleanAll should not skip data cleaning if in local mode (was: bin/habse clean --cleanAll should not skip data cleaning if in local mode)

bin/hbase clean --cleanAll should not skip data cleaning if in local mode - Key: HBASE-9560 URL: https://issues.apache.org/jira/browse/HBASE-9560 Project: HBase Issue Type: Improvement Components: scripts Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.1 Attachments: hbase-9560_v1.patch

I don't see a reason why we are skipping cleaning in local mode:
{code}
Eniss-MacBook-Pro:hbase-0.96$ bin/hbase clean --cleanAll
Skipping hbase data clearing in standalone mode.
{code}
I use this often for standalone mode. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9549) KeyValue#parseColumn(byte[]) does not handle empty qualifier
[ https://issues.apache.org/jira/browse/HBASE-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769872#comment-13769872 ] Nick Dimiduk commented on HBASE-9549: - Seems likely. Having a look.

KeyValue#parseColumn(byte[]) does not handle empty qualifier Key: HBASE-9549 URL: https://issues.apache.org/jira/browse/HBASE-9549 Project: HBase Issue Type: Bug Components: mapreduce, REST, Thrift, util Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Priority: Minor Fix For: 0.98.0, 0.96.1 Attachments: HBASE-9549.00.patch, HBASE-9549.01.patch

HTable allows a user to interact directly with a KeyValue with an empty qualifier, yet {{KeyValue#parseColumn(byte[])}} treats this as a reference to a column family. No qualifier delimiter and an empty qualifier are treated as the same:
{code}
    if (index == -1) {
      // If no delimiter, return array of size 1
      return new byte [][] { c };
    } else if (index == c.length - 1) {
      // Only a family, return array size 1
      byte [] family = new byte[c.length-1];
      System.arraycopy(c, 0, family, 0, family.length);
      return new byte [][] { family };
    }
    ...
{code}
This inconsistency breaks external interfaces which depend on {{parseColumn}}, for instance, the shell:
{noformat}
# shell interactions with a KV with an empty qualifier
hbase(main):001:0> create 'foo', 'f1'
0 row(s) in 1.4130 seconds
=> Hbase::Table - foo
hbase(main):002:0> put 'foo', 'rk1', 'f1:', 'empty?'
0 row(s) in 0.0750 seconds
# => put works
hbase(main):003:0> put 'foo', 'rk1', 'f1:bar', 'value'
0 row(s) in 0.0070 seconds
# attempt to retrieve just the kv with empty qualifier
hbase(main):004:0> get 'foo', 'rk1', 'f1:'
COLUMN   CELL
 f1:     timestamp=1379363480020, value=empty?
 f1:bar  timestamp=1379363546360, value=value
2 row(s) in 0.0360 seconds
# => returns more than expected!
hbase(main):005:0> get 'foo', 'rk1', 'f1'
COLUMN   CELL
 f1:     timestamp=1379363480020, value=empty?
 f1:bar  timestamp=1379363546360, value=value
2 row(s) in 0.0120 seconds
hbase(main):006:0> delete 'foo', 'rk1', 'f1:'
0 row(s) in 0.0290 seconds
# => delete works
hbase(main):007:0> get 'foo', 'rk1', 'f1:'
COLUMN   CELL
 f1:bar  timestamp=1379363546360, value=value
1 row(s) in 0.0260 seconds
hbase(main):008:0> get 'foo', 'rk1', 'f1'
COLUMN   CELL
 f1:bar  timestamp=1379363546360, value=value
1 row(s) in 0.0080 seconds
# restore the empty qual kv for HTable test
hbase(main):011:0> put 'foo', 'rk1', 'f1:', 'empty?'
0 row(s) in 0.0950 seconds
hbase(main):010:0> get 'foo', 'rk1', 'f1:'
COLUMN   CELL
 f1:     timestamp=1379365262555, value=empty?
 f1:bar  timestamp=1379365134135, value=value
2 row(s) in 0.0290 seconds
hbase(main):011:0> get 'foo', 'rk1', 'f1'
COLUMN   CELL
 f1:     timestamp=1379365262555, value=empty?
 f1:bar  timestamp=1379365134135, value=value
2 row(s) in 0.0080 seconds
hbase(main):012:0> hconf = org.apache.hadoop.hbase.HBaseConfiguration.create()
=> #<Java::OrgApacheHadoopConf::Configuration:0x208e2fb5>
hbase(main):013:0> t = org.apache.hadoop.hbase.client.HTable.new(hconf, 'foo')
=> #<Java::OrgApacheHadoopHbaseClient::HTable:0x437d51a6>
# create a Get requesting the empty qualifier only, works
hbase(main):014:0> g1 = org.apache.hadoop.hbase.client.Get.new(org.apache.hadoop.hbase.util.Bytes.toBytes('rk1'))
=> #<Java::OrgApacheHadoopHbaseClient::Get:0x796523ab>
hbase(main):015:0> g1.addColumn(org.apache.hadoop.hbase.util.Bytes.toBytes('f1'), nil)
=> #<Java::OrgApacheHadoopHbaseClient::Get:0x796523ab>
hbase(main):016:0> t.get(g1).toString()
=> keyvalues={rk1/f1:/1379365262555/Put/vlen=6/mvcc=0}
# create a Get requesting the whole family, works
hbase(main):017:0> g2 = org.apache.hadoop.hbase.client.Get.new(org.apache.hadoop.hbase.util.Bytes.toBytes('rk1'))
=> #<Java::OrgApacheHadoopHbaseClient::Get:0x52e5376a>
hbase(main):018:0> g2.addFamily(org.apache.hadoop.hbase.util.Bytes.toBytes('f1'))
=>
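A hedged sketch of the distinction this issue is after: a trailing delimiter ("f1:") should yield a family plus an explicit empty qualifier, while no delimiter at all ("f1") stays a family-only reference. This is written as a standalone parser to be runnable on its own; the actual change belongs in KeyValue#parseColumn and may differ in details:

```java
import java.util.Arrays;

public class ColumnParser {
    static final byte DELIMITER = ':';

    // Sketch only: "f1" -> {family}; "f1:" -> {family, empty qualifier};
    // "f1:bar" -> {family, qualifier}.
    static byte[][] parseColumn(byte[] c) {
        int index = -1;
        for (int i = 0; i < c.length; i++) {
            if (c[i] == DELIMITER) { index = i; break; }
        }
        if (index == -1) {
            // No delimiter at all: a family-only reference.
            return new byte[][] { c };
        }
        byte[] family = Arrays.copyOfRange(c, 0, index);
        byte[] qualifier = Arrays.copyOfRange(c, index + 1, c.length);
        // A trailing delimiter now yields a two-element result with an
        // empty qualifier, instead of collapsing into family-only.
        return new byte[][] { family, qualifier };
    }
}
```

Under this shape, the shell's `get 'foo', 'rk1', 'f1:'` would map to addColumn(family, empty qualifier) while `get 'foo', 'rk1', 'f1'` maps to addFamily(family), matching the HTable behavior shown above.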
[jira] [Resolved] (HBASE-9560) bin/hbase clean --cleanAll should not skip data cleaning if in local mode
[ https://issues.apache.org/jira/browse/HBASE-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar resolved HBASE-9560. -- Resolution: Fixed Fix Version/s: (was: 0.96.1) 0.96.0 Hadoop Flags: Reviewed Committed this. Thanks Stack for review. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9561) hbase-server-tests.jar contains a test mapred-site.xml
[ https://issues.apache.org/jira/browse/HBASE-9561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9561: - Priority: Blocker (was: Major) Fix Version/s: (was: 0.96.1) 0.96.0 hbase-server-tests.jar contains a test mapred-site.xml -- Key: HBASE-9561 URL: https://issues.apache.org/jira/browse/HBASE-9561 Project: HBase Issue Type: Bug Components: build, mapreduce Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: Elliott Clark Priority: Blocker Fix For: 0.96.0 Attachments: HBASE-9561-0.patch Having a mapred-site.xml causes all the mapreduce jobs run from the hbase command line to fail. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9549) KeyValue#parseColumn(byte[]) does not handle empty qualifier
[ https://issues.apache.org/jira/browse/HBASE-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769863#comment-13769863 ] stack commented on HBASE-9549: -- [~ndimiduk] Are those failures because of this patch?
[jira] [Commented] (HBASE-9549) KeyValue#parseColumn(byte[]) does not handle empty qualifier
[ https://issues.apache.org/jira/browse/HBASE-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769882#comment-13769882 ] Enis Soztutar commented on HBASE-9549: -- If we are breaking BC, better we do it as part of 0.96.0, I say.
[jira] [Updated] (HBASE-9561) hbase-server-tests.jar contains a test mapred-site.xml
[ https://issues.apache.org/jira/browse/HBASE-9561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-9561: - Attachment: HBASE-9561-0.patch Patch to exclude mapred-site.xml. I also included zoo.cfg since it seemed like it shouldn't be there. hbase-server-tests.jar contains a test mapred-site.xml -- Key: HBASE-9561 URL: https://issues.apache.org/jira/browse/HBASE-9561 Project: HBase Issue Type: Bug Components: build, mapreduce Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: Elliott Clark Fix For: 0.96.1 Attachments: HBASE-9561-0.patch Having a mapred-site.xml causes all the mapreduce jobs run from the hbase command line to fail. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
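For reference, one way such an exclusion is typically expressed in a Maven build is via the maven-jar-plugin's test-jar goal. This is a hypothetical pom.xml fragment sketching the idea, not the attached HBASE-9561-0.patch:

```xml
<!-- Hypothetical sketch: keep test-only config files out of the tests jar. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>test-jar</goal>
      </goals>
      <configuration>
        <excludes>
          <exclude>mapred-site.xml</exclude>
          <exclude>zoo.cfg</exclude>
        </excludes>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The files stay on the test classpath during the build; they are only left out of the published hbase-server-tests.jar, so downstream users no longer pick up a stray mapred-site.xml.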
[jira] [Commented] (HBASE-9549) KeyValue#parseColumn(byte[]) does not handle empty qualifier
[ https://issues.apache.org/jira/browse/HBASE-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769880#comment-13769880 ] Nick Dimiduk commented on HBASE-9549: - Yep, new patch on the way.