[jira] [Comment Edited] (HBASE-13099) Scans as in DynamoDB
[ https://issues.apache.org/jira/browse/HBASE-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14338032#comment-14338032 ] Lars Hofhansl edited comment on HBASE-13099 at 2/26/15 7:06 AM: That's what small scans do (in a nutshell), when they are not small :) That does mean that at every 1mb chunk we need to reseek all {region|store|storeFile}Scanners. I.e. the server state allows us to avoid the expensive seeking on each RPC. Maybe with 1mb chunks it does not matter. (but you can pull 1mb over 1GbE in < 10ms, which is less than the seek time of an HDD). Some of the chunking logic we get with HBASE-12976. was (Author: lhofhansl): That's what small scans do (in a nutshell), when they are not small :) That does mean that at every 1mb chunk we need to reseek are {region|store|storeFile}Scanners. I.e. the server state allows us to avoid the expensive seeking each RPC. Maybe with 1mb chunks it does not matter. (but you can pull 1mb over 1ge in < 10ms, which is less then the seek time of an HDD). Some of the chunking logic we get with HBASE-12976. > Scans as in DynamoDB > > > Key: HBASE-13099 > URL: https://issues.apache.org/jira/browse/HBASE-13099 > Project: HBase > Issue Type: Brainstorming > Components: Client, regionserver >Reporter: Nicolas Liochon > > cc: [~saint@gmail.com] - as discussed offline. > DynamoDB has a very simple way to manage scans server side: > ??citation?? > The data returned from a Query or Scan operation is limited to 1 MB; this > means that if you scan a table that has more than 1 MB of data, you'll need > to perform another Scan operation to continue to the next 1 MB of data in the > table. > If you query or scan for specific attributes that match values that amount to > more than 1 MB of data, you'll need to perform another Query or Scan request > for the next 1 MB of data.
To do this, take the LastEvaluatedKey value from > the previous request, and use that value as the ExclusiveStartKey in the next > request. This will let you progressively query or scan for new data in 1 MB > increments. > When the entire result set from a Query or Scan has been processed, the > LastEvaluatedKey is null. This indicates that the result set is complete > (i.e. the operation processed the “last page” of data). > ??citation?? > This means that there is no state server side: the work is done client side. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
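The paging contract quoted above (LastEvaluatedKey out, ExclusiveStartKey back in, null meaning done) can be sketched in a few lines. This is a toy model with made-up names — a NavigableMap stands in for the table, and nothing here is DynamoDB's or HBase's real API — but it shows the property under discussion: the server keeps no scanner state between calls, so each page request is fully described by its start key.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.SortedMap;

// DynamoDB-style stateless paging: each call returns at most `pageLimit`
// items plus the last key evaluated; the client feeds that key back as the
// exclusive start of the next request. All names are illustrative.
public class StatelessScan {
    // One page of results plus the resume key (null when the scan is done).
    record Page(List<String> items, String lastEvaluatedKey) {}

    // "Server side": a sorted map stands in for the table. No scanner state
    // survives between calls - the start key alone positions the scan.
    static Page scanPage(NavigableMap<String, String> table,
                         String exclusiveStartKey, int pageLimit) {
        SortedMap<String, String> view = (exclusiveStartKey == null)
                ? table
                : table.tailMap(exclusiveStartKey, false); // strictly after the key
        List<String> items = new ArrayList<>();
        String lastKey = null;
        for (Map.Entry<String, String> e : view.entrySet()) {
            if (items.size() == pageLimit) return new Page(items, lastKey);
            items.add(e.getValue());
            lastKey = e.getKey();
        }
        return new Page(items, null); // table exhausted: no resume key
    }

    // Client side: loop until LastEvaluatedKey comes back null.
    static List<String> scanAll(NavigableMap<String, String> table, int pageLimit) {
        List<String> all = new ArrayList<>();
        String startKey = null;
        do {
            Page p = scanPage(table, startKey, pageLimit);
            all.addAll(p.items());
            startKey = p.lastEvaluatedKey();
        } while (startKey != null);
        return all;
    }
}
```

The cost of this design is exactly what Lars's comment points out: every page request repositions the scan from scratch (here a tailMap, in an LSM store a reseek of every underlying scanner), which server-held scanner state would avoid.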
[jira] [Updated] (HBASE-13084) Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey
[ https://issues.apache.org/jira/browse/HBASE-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhangduo updated HBASE-13084: - Status: Patch Available (was: Reopened) > Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey > -- > > Key: HBASE-13084 > URL: https://issues.apache.org/jira/browse/HBASE-13084 > Project: HBase > Issue Type: Bug > Components: test >Reporter: zhangduo >Assignee: zhangduo > Fix For: 2.0.0, 1.1.0 > > Attachments: HBASE-13084-addendum.patch, HBASE-13084.patch, > HBASE-13084_1.patch, HBASE-13084_2.patch, HBASE-13084_2.patch, > HBASE-13084_2.patch, HBASE-13084_2.patch, HBASE-13084_2_disable_test.patch > > > As discussed in HBASE-12953, we found this error in PreCommit log > https://builds.apache.org/job/PreCommit-HBASE-Build/12918/artifact/hbase-shell/target/surefire-reports/org.apache.hadoop.hbase.client.TestShell-output.txt > {noformat} > 1) Error: > test_The_get/put_methods_should_work_for_data_written_with_Visibility(Hbase::VisibilityLabelsAdminMethodsTest): > ArgumentError: org.apache.hadoop.hbase.DoNotRetryIOException: > org.apache.hadoop.hbase.security.visibility.InvalidLabelException: Label > 'TEST_VISIBILITY' doesn't exists > at > org.apache.hadoop.hbase.security.visibility.VisibilityController.setAuths(VisibilityController.java:808) > at > org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.setAuths(VisibilityLabelsProtos.java:6036) > at > org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:6219) > at > org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6867) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1707) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1689) > at > 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31309) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2038) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107) > at java.lang.Thread.run(Thread.java:744) > > /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-shell/src/main/ruby/hbase/visibility_labels.rb:84:in > `set_auths' > ./src/test/ruby/hbase/visibility_labels_admin_test.rb:77:in > `test_The_get/put_methods_should_work_for_data_written_with_Visibility' > org/jruby/RubyProc.java:270:in `call' > org/jruby/RubyKernel.java:2105:in `send' > org/jruby/RubyArray.java:1620:in `each' > org/jruby/RubyArray.java:1620:in `each' > 2) Error: > test_The_set/clear_methods_should_work_with_authorizations(Hbase::VisibilityLabelsAdminMethodsTest): > ArgumentError: No authentication set for the given user jenkins > > /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-shell/src/main/ruby/hbase/visibility_labels.rb:97:in > `get_auths' > ./src/test/ruby/hbase/visibility_labels_admin_test.rb:57:in > `test_The_set/clear_methods_should_work_with_authorizations' > org/jruby/RubyProc.java:270:in `call' > org/jruby/RubyKernel.java:2105:in `send' > org/jruby/RubyArray.java:1620:in `each' > org/jruby/RubyArray.java:1620:in `each' > {noformat} > This is the test code > {code:title=visibility_labels_admin_test.rb} > label = 'TEST_VISIBILITY' > user = org.apache.hadoop.hbase.security.User.getCurrent().getName(); > visibility_admin.add_labels(label) > visibility_admin.set_auths(user, label) > {code} > It says 'label does not exists' when calling set_auths. > Then I add some ugly logs in DefaultVisibilityLabelServiceImpl and > VisibilityLabelsCache. 
> {code:title=DefaultVisibilityLabelServiceImpl.java} > public OperationStatus[] addLabels(List<byte[]> labels) throws IOException { > ... > if (mutateLabelsRegion(puts, finalOpStatus)) { > updateZk(true); > } > for (byte[] label : labels) { > String labelStr = Bytes.toString(label); > LOG.info(labelStr + "=" + > this.labelsCache.getLabelOrdinal(labelStr)); > } > ... > } > {code} > {code:title=VisibilityLabelsCache.java} > public void refreshLabelsCache(byte[] data) throws IOException { > LOG.info("refresh", new Exception()); > ... > } > {code} > And I modified TestVisibilityLabelsWithCustomVisLabService to use > DefaultVisibilityLabelServiceImpl, then collected the logs of setupBeforeClass > {noformat} > 2015
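The failure mode the logs above are chasing — add_labels returns before the ZooKeeper-triggered cache refresh has landed, so an immediate set_auths validates against a stale VisibilityLabelsCache — can be modeled in a few lines. Everything below is an illustrative stand-in, not HBase code: one Set plays the labels table, another plays the cache, and a delayed executor task plays the ZK data-changed notification.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Toy model of the HBASE-13084 race: the label is persisted synchronously,
// but the local cache is only refreshed later by an asynchronous
// notification, so a fast follow-up lookup sees a stale cache.
public class AsyncLabelCacheRace {
    private final Set<String> labelsTable = ConcurrentHashMap.newKeySet(); // persisted labels
    private final Set<String> labelsCache = ConcurrentHashMap.newKeySet(); // local cache
    private final ExecutorService zkNotifier = Executors.newSingleThreadExecutor();

    // add_labels: persist, then refresh the cache *asynchronously*, the way
    // a ZooKeeper notification arrives some time after updateZk(true).
    void addLabel(String label, long notificationDelayMillis) {
        labelsTable.add(label);
        zkNotifier.submit(() -> {
            sleepQuietly(notificationDelayMillis);
            labelsCache.addAll(labelsTable); // plays refreshLabelsCache()
        });
    }

    // set_auths validates against the cache only, so it can miss a label
    // that is already persisted but not yet refreshed - the test's
    // "Label 'TEST_VISIBILITY' doesn't exists" symptom.
    boolean setAuths(String label) {
        return labelsCache.contains(label);
    }

    void shutdown() { zkNotifier.shutdown(); }

    static void sleepQuietly(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException ignored) {}
    }
}
```

Calling setAuths immediately after addLabel fails, while the same call after the notification delay succeeds — the test only passes or fails depending on scheduling, which is exactly why TestShell was flakey.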
[jira] [Updated] (HBASE-13084) Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey
[ https://issues.apache.org/jira/browse/HBASE-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhangduo updated HBASE-13084: - Attachment: HBASE-13084-addendum.patch move replication_admin_test.rb to TestReplicationShell. > Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey > -- > > Key: HBASE-13084 > URL: https://issues.apache.org/jira/browse/HBASE-13084 > Project: HBase > Issue Type: Bug > Components: test >Reporter: zhangduo >Assignee: zhangduo > Fix For: 2.0.0, 1.1.0 > > Attachments: HBASE-13084-addendum.patch, HBASE-13084.patch, > HBASE-13084_1.patch, HBASE-13084_2.patch, HBASE-13084_2.patch, > HBASE-13084_2.patch, HBASE-13084_2.patch, HBASE-13084_2_disable_test.patch > > > As discussed in HBASE-12953, we found this error in PreCommit log > https://builds.apache.org/job/PreCommit-HBASE-Build/12918/artifact/hbase-shell/target/surefire-reports/org.apache.hadoop.hbase.client.TestShell-output.txt > {noformat} > 1) Error: > test_The_get/put_methods_should_work_for_data_written_with_Visibility(Hbase::VisibilityLabelsAdminMethodsTest): > ArgumentError: org.apache.hadoop.hbase.DoNotRetryIOException: > org.apache.hadoop.hbase.security.visibility.InvalidLabelException: Label > 'TEST_VISIBILITY' doesn't exists > at > org.apache.hadoop.hbase.security.visibility.VisibilityController.setAuths(VisibilityController.java:808) > at > org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.setAuths(VisibilityLabelsProtos.java:6036) > at > org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:6219) > at > org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6867) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1707) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1689) > at > 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31309) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2038) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107) > at java.lang.Thread.run(Thread.java:744) > > /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-shell/src/main/ruby/hbase/visibility_labels.rb:84:in > `set_auths' > ./src/test/ruby/hbase/visibility_labels_admin_test.rb:77:in > `test_The_get/put_methods_should_work_for_data_written_with_Visibility' > org/jruby/RubyProc.java:270:in `call' > org/jruby/RubyKernel.java:2105:in `send' > org/jruby/RubyArray.java:1620:in `each' > org/jruby/RubyArray.java:1620:in `each' > 2) Error: > test_The_set/clear_methods_should_work_with_authorizations(Hbase::VisibilityLabelsAdminMethodsTest): > ArgumentError: No authentication set for the given user jenkins > > /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-shell/src/main/ruby/hbase/visibility_labels.rb:97:in > `get_auths' > ./src/test/ruby/hbase/visibility_labels_admin_test.rb:57:in > `test_The_set/clear_methods_should_work_with_authorizations' > org/jruby/RubyProc.java:270:in `call' > org/jruby/RubyKernel.java:2105:in `send' > org/jruby/RubyArray.java:1620:in `each' > org/jruby/RubyArray.java:1620:in `each' > {noformat} > This is the test code > {code:title=visibility_labels_admin_test.rb} > label = 'TEST_VISIBILITY' > user = org.apache.hadoop.hbase.security.User.getCurrent().getName(); > visibility_admin.add_labels(label) > visibility_admin.set_auths(user, label) > {code} > It says 'label does not exists' when calling set_auths. > Then I add some ugly logs in DefaultVisibilityLabelServiceImpl and > VisibilityLabelsCache. 
> {code:title=DefaultVisibilityLabelServiceImpl.java} > public OperationStatus[] addLabels(List<byte[]> labels) throws IOException { > ... > if (mutateLabelsRegion(puts, finalOpStatus)) { > updateZk(true); > } > for (byte[] label : labels) { > String labelStr = Bytes.toString(label); > LOG.info(labelStr + "=" + > this.labelsCache.getLabelOrdinal(labelStr)); > } > ... > } > {code} > {code:title=VisibilityLabelsCache.java} > public void refreshLabelsCache(byte[] data) throws IOException { > LOG.info("refresh", new Exception()); > ... > } > {code} > And I modified TestVisibilityLabelsWithCustomVisLabService to use > DefaultVisibilityLabelServiceImpl, then collected the logs of setupBeforeClass
[jira] [Commented] (HBASE-13082) Coarsen StoreScanner locks to RegionScanner
[ https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14338000#comment-14338000 ] Lars Hofhansl commented on HBASE-13082: --- We can simply exit the StoreScanner.next() loop after some time limit with an empty result but returning true (i.e. more rows expected). That would be the same as what happens when we exhaust the region: the region scanner will continue. In RegionScanner we could do the same and return a special indicator that the client just ignores (as described above). I guess what's tricky is coprocessors that wrap a region scanner (such as Phoenix does). They'd have to honor the protocol and pass the marker results to the client (or at the very least ignore them). Let's do that in another jira, though. This patch will not make things worse in principle. A store scanner can already be stuck exhausting the entire store in a single next(...) call while holding the lock, preventing flushes from finishing. The extremely long scan times we've seen have other reasons too - see HBASE-13109. The only detriment this patch can cause is that one store scanner stuck this way now prevents other stores in the region from flushing/compacting. (And note that that is only the case when no Cells in the store are returned by the store scanner.) > Coarsen StoreScanner locks to RegionScanner > --- > > Key: HBASE-13082 > URL: https://issues.apache.org/jira/browse/HBASE-13082 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl > Attachments: 13082-test.txt, 13082.txt > > > Continuing where HBASE-10015 left off. > We can avoid locking (and memory fencing) inside StoreScanner by deferring to > the lock already held by the RegionScanner. > In tests this shows quite a scan improvement and reduced CPU (the fences make > the cores wait for memory fetches). 
> There are some drawbacks too: > * All calls to RegionScanner need to remain synchronized > * Implementors of coprocessors need to be diligent in following the locking > contract. For example Phoenix does not lock RegionScanner.nextRaw(), even > though the documentation requires it (not picking on Phoenix, this one is my fault > as I told them it's OK) > * possible starving of flushes and compactions under heavy read load. > RegionScanner operations would keep getting the locks and the > flushes/compactions would not be able to finalize the set of files. > I'll have a patch soon. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
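The "exit next() after some time limit with an empty result but returning true" idea from the comment above can be sketched as follows. The types and names are made up for illustration — this is not the HBASE-13082 patch or HBase's scanner API — but it shows the shape of the contract: a deadline-bounded batch call that may return early with hasMore=true, so the caller keeps calling and the lock can be released between calls, letting flushes and compactions make progress.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of a time-limited next(): instead of holding the scanner lock until
// the batch fills (or the store is exhausted), give up once a deadline
// passes and return whatever was gathered, flagged as "more rows expected".
public class TimeBoundedScanner {
    record Batch(List<String> cells, boolean hasMore) {}

    private final Iterator<String> cells;

    TimeBoundedScanner(Iterator<String> cells) { this.cells = cells; }

    Batch next(int batchSize, long timeLimitNanos) {
        long deadline = System.nanoTime() + timeLimitNanos;
        List<String> out = new ArrayList<>();
        while (out.size() < batchSize && cells.hasNext()) {
            out.add(cells.next());
            // Overflow-safe deadline check (compare the difference, not the values).
            if (System.nanoTime() - deadline >= 0) {
                // Deadline hit mid-batch: return a short (possibly empty-ish)
                // batch but signal hasMore=true so the caller retries -
                // the marker result a wrapping coprocessor must pass through.
                return new Batch(out, true);
            }
        }
        return new Batch(out, cells.hasNext());
    }
}
```

With a generous time limit this behaves like an ordinary batched scan; only under a tight deadline does the early-return path fire, which is why clients (and scanner-wrapping coprocessors) must treat short batches with hasMore=true as "keep going", not "end of scan".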
[jira] [Commented] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337947#comment-14337947 ] Hudson commented on HBASE-13106: FAILURE: Integrated in HBase-0.98 #874 (See [https://builds.apache.org/job/HBase-0.98/874/]) HBASE-13106 Ensure endpoint-only table coprocessors can be dynamically loaded (apurtell: rev 2d7f60c27ccb0e2a141b74e84c457344f3736eb1) * hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorTableEndpoint.java * hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorEndpoint.java HBASE-13106 Ensure endpoint-only table coprocessors can be dynamically loaded (apurtell: rev bcf14c84397040822a4be783e059448a3e480be7) * hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorEndpoint.java * hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorTableEndpoint.java > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. 
Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13049) wal_roll ruby command doesn't work.
[ https://issues.apache.org/jira/browse/HBASE-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337880#comment-14337880 ] Hudson commented on HBASE-13049: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #831 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/831/]) HBASE-13049 wal_roll ruby command doesn't work (apurtell: rev 50b806b5ecb96ef563a7620e2f5288f54a1bc122) * hbase-shell/src/main/ruby/hbase/admin.rb > wal_roll ruby command doesn't work. > > > Key: HBASE-13049 > URL: https://issues.apache.org/jira/browse/HBASE-13049 > Project: HBase > Issue Type: Bug > Components: shell >Affects Versions: 1.0.0, 2.0.0 >Reporter: Bhupendra Kumar Jain >Assignee: Bhupendra Kumar Jain > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: 0001-HBASE-13049-wal_roll-ruby-command-doesn-t-work.patch > > > On execution of the wal_roll command in the shell, an error message gets displayed as > shown below > hbase(main):005:0> wal_roll 'host-10-19-92-94,16201,1424081618286' > *ERROR: cannot convert instance of class org.jruby.RubyString to class > org.apache.hadoop.hbase.ServerName* > it's because the Admin Java API expects a ServerName object but the script passes > the server name as a string. > Currently the script is as below > {code} > @admin.rollWALWriter(server_name) > {code} > It should be like > {code} > @admin.rollWALWriter(ServerName.valueOf(server_name)) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13104) ZooKeeper session timeout cannot be changed for standalone HBase
[ https://issues.apache.org/jira/browse/HBASE-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337879#comment-14337879 ] Hudson commented on HBASE-13104: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #831 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/831/]) HBASE-13104 ZooKeeper session timeout cannot be changed for standalone HBase (Alex Araujo) (apurtell: rev 761408f6e789233d50a13a507a485429f9968e91) * hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java > ZooKeeper session timeout cannot be changed for standalone HBase > > > Key: HBASE-13104 > URL: https://issues.apache.org/jira/browse/HBASE-13104 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 0.98.10.1 >Reporter: Alex Araujo >Assignee: Alex Araujo > Fix For: 0.98.11 > > Attachments: HBASE-13104-0.98.patch > > > It's not possible to increase the ZooKeeper session timeout in standalone > HBase due to a hardcoded 10s timeout in HMasterCommandLine: > https://github.com/apache/hbase/blob/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L176 > In trunk you can append .localHBaseCluster to the ZK session timeout property > name to change the timeout: > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L169-171 > We should allow changing the timeout in 0.98 and other versions where it's > not possible to do so. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
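The trunk workaround described in the report — appending .localHBaseCluster to the property name — amounts to a two-step config lookup. A minimal sketch, with a plain Map standing in for the Hadoop Configuration object; the property name and the 10s default come from the report, but the helper itself is hypothetical, not HMasterCommandLine's actual code.

```java
import java.util.Map;

// Sketch of the lookup: the standalone (local) cluster consults a
// ".localHBaseCluster"-suffixed copy of the ZK session timeout property,
// so it can be raised without touching the normal key; 0.98 instead
// hardcodes the 10s value unconditionally.
public class LocalClusterTimeout {
    static final String KEY = "zookeeper.session.timeout";
    static final int HARDCODED_DEFAULT_MILLIS = 10_000; // the 0.98 behavior

    static int sessionTimeoutMillis(Map<String, String> conf) {
        String override = conf.get(KEY + ".localHBaseCluster");
        return override != null ? Integer.parseInt(override)
                                : HARDCODED_DEFAULT_MILLIS;
    }
}
```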
[jira] [Commented] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337876#comment-14337876 ] Hudson commented on HBASE-13106: SUCCESS: Integrated in HBase-1.0 #776 (See [https://builds.apache.org/job/HBase-1.0/776/]) HBASE-13106 Ensure endpoint-only table coprocessors can be dynamically loaded (apurtell: rev a16603f18942804abe040b4f13f27584b5e72863) * hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorEndpoint.java * hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorTableEndpoint.java > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337872#comment-14337872 ] Hudson commented on HBASE-13106: FAILURE: Integrated in HBase-1.1 #215 (See [https://builds.apache.org/job/HBase-1.1/215/]) HBASE-13106 Ensure endpoint-only table coprocessors can be dynamically loaded (apurtell: rev 3e17ed9c3e49111b13d81f239a5b137893a355e2) * hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorEndpoint.java * hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorTableEndpoint.java > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13108) Reduce Connection creations in TestAcidGuarantees
[ https://issues.apache.org/jira/browse/HBASE-13108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337868#comment-14337868 ] Hadoop QA commented on HBASE-13108: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12700953/HBASE-13108.patch against master branch at commit 1c957b65b16a8706caee140c18b84ea48a0dc0aa. ATTACHMENT ID: 12700953 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 4 new or modified tests. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.1 2.5.2 2.6.0) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.hadoop.hbase.regionserver.wal.TestWALReplay.testReplayEditsAfterRegionMovedWithMultiCF(TestWALReplay.java:245) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/12973//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12973//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12973//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12973//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12973//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12973//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12973//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12973//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12973//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12973//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12973//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12973//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/12973//artifact/patchprocess/checkstyle-aggregate.html Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12973//console This message is automatically generated. > Reduce Connection creations in TestAcidGuarantees > - > > Key: HBASE-13108 > URL: https://issues.apache.org/jira/browse/HBASE-13108 > Project: HBase > Issue Type: Sub-task > Components: IPC/RPC, test >Affects Versions: 2.0.0, 1.1.0 >Reporter: zhangduo >Assignee: zhangduo > Fix For: 2.0.0, 1.1.0 > > Attachments: HBASE-13108.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13108) Reduce Connection creations in TestAcidGuarantees
[ https://issues.apache.org/jira/browse/HBASE-13108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337847#comment-14337847 ] zhangduo commented on HBASE-13108: -- {quote} I think it is important to have separate connections doing the scans/gets/puts here {quote} Yes, that's what I was worried about. I think the ideal way is to make different AsyncRpcClient instances use the same EventLoopGroup. Maybe introduce an EventLoopGroup cache or manager which returns the same EventLoopGroup object if the config is the same? > Reduce Connection creations in TestAcidGuarantees > - > > Key: HBASE-13108 > URL: https://issues.apache.org/jira/browse/HBASE-13108 > Project: HBase > Issue Type: Sub-task > Components: IPC/RPC, test >Affects Versions: 2.0.0, 1.1.0 >Reporter: zhangduo >Assignee: zhangduo > Fix For: 2.0.0, 1.1.0 > > Attachments: HBASE-13108.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
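The manager zhangduo suggests — hand back the same EventLoopGroup whenever the config is the same, so separate client connections still share threads — is essentially a computeIfAbsent cache keyed by config. A minimal sketch, with a plain Object standing in for Netty's EventLoopGroup and a String for the config key; none of this is the real AsyncRpcClient code.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// One shared event-loop group per distinct configuration: clients with the
// same config reuse a group, clients with a different config get their own.
public class EventLoopGroupManager {
    private static final ConcurrentMap<String, Object> GROUPS = new ConcurrentHashMap<>();
    static final AtomicInteger CREATED = new AtomicInteger(); // groups actually built

    // computeIfAbsent guarantees at most one group is created per key even
    // under concurrent callers.
    static Object groupFor(String configKey) {
        return GROUPS.computeIfAbsent(configKey, k -> {
            CREATED.incrementAndGet();
            return new Object(); // real code would build e.g. a Netty group here
        });
    }
}
```

Reference counting (so a group can be shut down once its last client closes) is the part this sketch leaves out, and it is usually the hard part of such a manager.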
[jira] [Commented] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337846#comment-14337846 ] Hudson commented on HBASE-13106: FAILURE: Integrated in HBase-TRUNK #6173 (See [https://builds.apache.org/job/HBase-TRUNK/6173/]) HBASE-13106 Ensure endpoint-only table coprocessors can be dynamically loaded (apurtell: rev 1c957b65b16a8706caee140c18b84ea48a0dc0aa) * hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorTableEndpoint.java * hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorEndpoint.java > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13104) ZooKeeper session timeout cannot be changed for standalone HBase
[ https://issues.apache.org/jira/browse/HBASE-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337837#comment-14337837 ] Hudson commented on HBASE-13104: SUCCESS: Integrated in HBase-0.98 #873 (See [https://builds.apache.org/job/HBase-0.98/873/]) HBASE-13104 ZooKeeper session timeout cannot be changed for standalone HBase (Alex Araujo) (apurtell: rev 761408f6e789233d50a13a507a485429f9968e91) * hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java > ZooKeeper session timeout cannot be changed for standalone HBase > > > Key: HBASE-13104 > URL: https://issues.apache.org/jira/browse/HBASE-13104 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 0.98.10.1 >Reporter: Alex Araujo >Assignee: Alex Araujo > Fix For: 0.98.11 > > Attachments: HBASE-13104-0.98.patch > > > It's not possible to increase the ZooKeeper session timeout in standalone > HBase due to a hardcoded 10s timeout in HMasterCommandLine: > https://github.com/apache/hbase/blob/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L176 > In trunk you can append .localHBaseCluster to the ZK session timeout property > name to change the timeout: > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L169-171 > We should allow changing the timeout in 0.98 and other versions where it's > not possible to do so. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
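The trunk behavior described above (appending .localHBaseCluster to the ZK session timeout property name) boils down to a suffix-first lookup with a hardcoded fallback. A minimal standalone sketch of that pattern, with hypothetical helper names (this is not the actual HMasterCommandLine code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the suffix-override lookup described above: prefer the
// ".localHBaseCluster" variant of a property, fall back to the base
// property, then to a hardcoded default (the 0.98 behavior was the
// default only, always 10s).
public class SuffixOverride {
    static final String ZK_TIMEOUT_KEY = "zookeeper.session.timeout";
    static final String LOCAL_SUFFIX = ".localHBaseCluster";

    static int getLocalZkTimeout(Map<String, String> conf, int defaultMs) {
        String v = conf.get(ZK_TIMEOUT_KEY + LOCAL_SUFFIX);
        if (v == null) {
            v = conf.get(ZK_TIMEOUT_KEY);
        }
        return v == null ? defaultMs : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // Nothing set: the hardcoded default wins.
        System.out.println(getLocalZkTimeout(conf, 10000)); // 10000
        // Suffixed property set: the user can now raise the standalone timeout.
        conf.put(ZK_TIMEOUT_KEY + LOCAL_SUFFIX, "30000");
        System.out.println(getLocalZkTimeout(conf, 10000)); // 30000
    }
}
```

The proposed 0.98 fix amounts to replacing the hardcoded constant with a lookup along these lines.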
[jira] [Commented] (HBASE-13049) wal_roll ruby command doesn't work.
[ https://issues.apache.org/jira/browse/HBASE-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337838#comment-14337838 ] Hudson commented on HBASE-13049: SUCCESS: Integrated in HBase-0.98 #873 (See [https://builds.apache.org/job/HBase-0.98/873/]) HBASE-13049 wal_roll ruby command doesn't work (apurtell: rev 50b806b5ecb96ef563a7620e2f5288f54a1bc122) * hbase-shell/src/main/ruby/hbase/admin.rb > wal_roll ruby command doesn't work. > > > Key: HBASE-13049 > URL: https://issues.apache.org/jira/browse/HBASE-13049 > Project: HBase > Issue Type: Bug > Components: shell >Affects Versions: 1.0.0, 2.0.0 >Reporter: Bhupendra Kumar Jain >Assignee: Bhupendra Kumar Jain > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: 0001-HBASE-13049-wal_roll-ruby-command-doesn-t-work.patch > > > On execution of the wal_roll command in the shell, an error message is displayed as > shown below > hbase(main):005:0> wal_roll 'host-10-19-92-94,16201,1424081618286' > *ERROR: cannot convert instance of class org.jruby.RubyString to class > org.apache.hadoop.hbase.ServerName* > This is because the Admin Java API expects a ServerName object but the script passes > the server name as a string. > Currently the script is: > {code} > @admin.rollWALWriter(server_name) > {code} > It should be: > {code} > @admin.rollWALWriter(ServerName.valueOf(server_name)) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
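For context, the argument the shell passes is the usual "host,port,startcode" triple that ServerName.valueOf parses. The tiny stand-in below only mirrors that format for illustration (it is not the HBase ServerName class); it shows why a raw String cannot be handed to an API expecting a parsed server name:

```java
// Illustrative parser for the "host,port,startcode" server name string used
// in the shell command above. The real conversion is ServerName.valueOf(String)
// in HBase; this sketch exists only to make the format concrete.
public class ServerNameSketch {
    final String host;
    final int port;
    final long startCode;

    ServerNameSketch(String host, int port, long startCode) {
        this.host = host;
        this.port = port;
        this.startCode = startCode;
    }

    static ServerNameSketch valueOf(String s) {
        String[] parts = s.split(",");
        if (parts.length != 3) {
            throw new IllegalArgumentException("expected host,port,startcode: " + s);
        }
        return new ServerNameSketch(parts[0], Integer.parseInt(parts[1]),
            Long.parseLong(parts[2]));
    }

    public static void main(String[] args) {
        ServerNameSketch sn = valueOf("host-10-19-92-94,16201,1424081618286");
        System.out.println(sn.host + ":" + sn.port); // host-10-19-92-94:16201
    }
}
```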
[jira] [Commented] (HBASE-13108) Reduce Connection creations in TestAcidGuarantees
[ https://issues.apache.org/jira/browse/HBASE-13108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337833#comment-14337833 ] Jonathan Hsieh commented on HBASE-13108: I think it is important to have separate connections doing the scans/gets/puts here. I'm fine with reducing the number, but making them all go through one connection seems like it would reduce the chances of triggering the interleavings that expose ACID problems. > Reduce Connection creations in TestAcidGuarantees > - > > Key: HBASE-13108 > URL: https://issues.apache.org/jira/browse/HBASE-13108 > Project: HBase > Issue Type: Sub-task > Components: IPC/RPC, test >Affects Versions: 2.0.0, 1.1.0 >Reporter: zhangduo >Assignee: zhangduo > Fix For: 2.0.0, 1.1.0 > > Attachments: HBASE-13108.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-12795) Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98
[ https://issues.apache.org/jira/browse/HBASE-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337802#comment-14337802 ] Hadoop QA commented on HBASE-12795: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12700926/HBASE-12795-0.98.patch against 0.98 branch at commit 7195f62114ce68dfa94115443d2b27cd2d7df01c. ATTACHMENT ID: 12700926 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 24 new or modified tests. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.1 2.5.2 2.6.0) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 25 warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:red}-1 findbugs{color}. The patch appears to introduce 4 new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//artifact/patchprocess/checkstyle-aggregate.html Javadoc warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12972//artifact/patchprocess/patchJavadocWarnings.txt Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12972//console This message is automatically generated. > Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98 > --- > > Key: HBASE-12795 > URL: https://issues.apache.org/jira/browse/HBASE-12795 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 0.98.11 > > Attachments: HBASE-12795-0.98.patch > > > As of HBASE-12371 we are following along with improvements in the integration > test module. Evaluate HBASE-12429 (Add port to ClusterManager's actions) for > backport to 0.98. This improves testing with chaos to support testing on a > cluster with multiple regionservers running on a host. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-13107) Refactor MOB Snapshot logic to reduce code duplication.
[ https://issues.apache.org/jira/browse/HBASE-13107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingcheng Du reassigned HBASE-13107: Assignee: Jingcheng Du > Refactor MOB Snapshot logic to reduce code duplication. > --- > > Key: HBASE-13107 > URL: https://issues.apache.org/jira/browse/HBASE-13107 > Project: HBase > Issue Type: Sub-task > Components: mob, snapshots >Affects Versions: hbase-11339 >Reporter: Jonathan Hsieh >Assignee: Jingcheng Du > Fix For: hbase-11339 > > > The MOB Snapshot code contains a lot of code duplication with the normal > snapshot code path. We should do some refactoring to clean this up before > merging. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13110) TestCoprocessorEndpoint hangs on trunk
[ https://issues.apache.org/jira/browse/HBASE-13110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337760#comment-14337760 ] Andrew Purtell commented on HBASE-13110: Let me bisect and look for where this started behaving oddly. > TestCoprocessorEndpoint hangs on trunk > -- > > Key: HBASE-13110 > URL: https://issues.apache.org/jira/browse/HBASE-13110 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Andrew Purtell > > TestCoprocessorEndpoint hangs with repeated RPC retries > (RpcRetryingCallerImpl.callWithRetries) after the ProtobufCoprocessorService > throws the test exception. Looks like a change on trunk has broken > TestCoprocessorEndpoint. > jstack of interest: > {noformat} > "main" prio=5 tid=0x7f87eb003000 nid=0x1303 in Object.wait() > [0x000105173000] >java.lang.Thread.State: TIMED_WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0x0007c91aedf8> (a > java.util.concurrent.atomic.AtomicBoolean) > at > org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:162) > - locked <0x0007c91aedf8> (a > java.util.concurrent.atomic.AtomicBoolean) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java: > 95) > at > org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73) > at > org.apache.hadoop.hbase.ipc.protobuf.generated.TestRpcServiceProtos$TestProtobufRpcProto$BlockingStub.error(TestRpcServiceProtos.java:378) > at > org.apache.hadoop.hbase.coprocessor.TestCoprocessorEndpoint.testCoprocessorError(TestCoprocessorEndpoint.java:308) > {noformat} > Tail of the log has entries like: > {noformat} > 2015-02-25 18:50:03,659 DEBUG > [B.defaultRpcServer.handler=3,queue=0,port=56093] ipc.CallRunner(110): > B.defaultRpcServer.handler=3,queue=0,port=56093: callId: 75 service: > ClientService methodName: ExecService size: 141 connection: 
10.3.31.30:56149 > java.io.IOException: Test exception > at > org.apache.hadoop.hbase.coprocessor.ProtobufCoprocessorService.error(ProtobufCoprocessorService.java:64) > at > org.apache.hadoop.hbase.ipc.protobuf.generated.TestRpcServiceProtos$TestProtobufRpcProto.callMethod(TestRpcServiceProtos.java:210) > at > org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6883) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1696) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1678) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31309) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2038) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
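The hang pattern above (callWithRetries looping endlessly on a server-side exception) is exactly what a bounded retry count guards against. A generic sketch in the spirit of RpcRetryingCallerImpl, not the HBase implementation, and without its backoff, timeout, or retriable-vs-non-retriable (DoNotRetryIOException) handling:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;

// Bounded retry wrapper sketched after the idea of callWithRetries: give up
// and rethrow the last failure after maxAttempts instead of looping forever.
// (Hypothetical helper; a production caller would also sleep with backoff
// between attempts and stop early on non-retriable exceptions.)
public class RetrySketch {
    static <T> T callWithRetries(Callable<T> call, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e; // record and retry
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger attempts = new AtomicInteger();
        try {
            callWithRetries(() -> {
                attempts.incrementAndGet();
                throw new java.io.IOException("Test exception");
            }, 3);
        } catch (Exception e) {
            // 3 attempts, then: Test exception
            System.out.println(attempts.get() + " attempts, then: " + e.getMessage());
        }
    }
}
```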
[jira] [Commented] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337759#comment-14337759 ] Andrew Purtell commented on HBASE-13106: Pushed to 0.98+ > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13110) TestCoprocessorEndpoint hangs on trunk
Andrew Purtell created HBASE-13110: -- Summary: TestCoprocessorEndpoint hangs on trunk Key: HBASE-13110 URL: https://issues.apache.org/jira/browse/HBASE-13110 Project: HBase Issue Type: Bug Affects Versions: 2.0.0 Reporter: Andrew Purtell TestCoprocessorEndpoint hangs with repeated RPC retries (RpcRetryingCallerImpl.callWithRetries) after the ProtobufCoprocessorService throws the test exception. Looks like a change on trunk has broken TestCoprocessorEndpoint. jstack of interest: {noformat} "main" prio=5 tid=0x7f87eb003000 nid=0x1303 in Object.wait() [0x000105173000] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x0007c91aedf8> (a java.util.concurrent.atomic.AtomicBoolean) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:162) - locked <0x0007c91aedf8> (a java.util.concurrent.atomic.AtomicBoolean) at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java: 95) at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73) at org.apache.hadoop.hbase.ipc.protobuf.generated.TestRpcServiceProtos$TestProtobufRpcProto$BlockingStub.error(TestRpcServiceProtos.java:378) at org.apache.hadoop.hbase.coprocessor.TestCoprocessorEndpoint.testCoprocessorError(TestCoprocessorEndpoint.java:308) {noformat} Tail of the log has entries like: {noformat} 2015-02-25 18:50:03,659 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=56093] ipc.CallRunner(110): B.defaultRpcServer.handler=3,queue=0,port=56093: callId: 75 service: ClientService methodName: ExecService size: 141 connection: 10.3.31.30:56149 java.io.IOException: Test exception at org.apache.hadoop.hbase.coprocessor.ProtobufCoprocessorService.error(ProtobufCoprocessorService.java:64) at org.apache.hadoop.hbase.ipc.protobuf.generated.TestRpcServiceProtos$TestProtobufRpcProto.callMethod(TestRpcServiceProtos.java:210) at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6883) at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1696) at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1678) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31309) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2038) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107) at java.lang.Thread.run(Thread.java:745) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337758#comment-14337758 ] Hadoop QA commented on HBASE-13106: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12700920/HBASE-13106.patch against master branch at commit 7195f62114ce68dfa94115443d2b27cd2d7df01c. ATTACHMENT ID: 12700920 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 8 new or modified tests. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.1 2.5.2 2.6.0) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100: + desc.addCoprocessor(org.apache.hadoop.hbase.coprocessor.ColumnAggregationEndpoint.class.getName()); + desc.addCoprocessor(org.apache.hadoop.hbase.coprocessor.ColumnAggregationEndpoint.class.getName()); {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 2 zombie test(s): at org.apache.hadoop.hbase.coprocessor.TestMasterObserver.testRegionTransitionOperations(TestMasterObserver.java:1604) at org.apache.curator.test.TestingZooKeeperMain.runFromConfig(TestingZooKeeperMain.java:73) at org.apache.curator.test.TestingZooKeeperServer$1.run(TestingZooKeeperServer.java:148) at org.apache.curator.test.Timing.sleepABit(Timing.java:199) at org.apache.hadoop.util.curator.TestChildReaper.testSimple(TestChildReaper.java:120) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/12971//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12971//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12971//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12971//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12971//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12971//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12971//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12971//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12971//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12971//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12971//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12971//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/12971//artifact/patchprocess/checkstyle-aggregate.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/12971//console This message is automatically generated. > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13
[jira] [Updated] (HBASE-13108) Reduce Connection creations in TestAcidGuarantees
[ https://issues.apache.org/jira/browse/HBASE-13108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhangduo updated HBASE-13108: - Fix Version/s: 1.1.0 2.0.0 Status: Patch Available (was: Open) > Reduce Connection creations in TestAcidGuarantees > - > > Key: HBASE-13108 > URL: https://issues.apache.org/jira/browse/HBASE-13108 > Project: HBase > Issue Type: Sub-task > Components: IPC/RPC, test >Affects Versions: 2.0.0, 1.1.0 >Reporter: zhangduo >Assignee: zhangduo > Fix For: 2.0.0, 1.1.0 > > Attachments: HBASE-13108.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13108) Reduce Connection creations in TestAcidGuarantees
[ https://issues.apache.org/jira/browse/HBASE-13108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhangduo updated HBASE-13108: - Attachment: HBASE-13108.patch Use HBaseTestingUtility.getConnection instead of ConnectionFactory.createConnection. Note that TestAcidGuarantees is used in integration tests (IntegrationTestAcidGuarantees), so I'm not sure if this modification is acceptable. [~stack] [~jurmous] > Reduce Connection creations in TestAcidGuarantees > - > > Key: HBASE-13108 > URL: https://issues.apache.org/jira/browse/HBASE-13108 > Project: HBase > Issue Type: Sub-task > Components: IPC/RPC, test >Affects Versions: 2.0.0, 1.1.0 >Reporter: zhangduo >Assignee: zhangduo > Fix For: 2.0.0, 1.1.0 > > Attachments: HBASE-13108.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337751#comment-14337751 ] Andrew Purtell commented on HBASE-13106: Interesting. This test and TestCoprocessorEndpoint when ported back to branches 0.98 and branch-1 pass repeatedly, but on master TestCoprocessorEndpoint (with and without the minor changes I made in this patch) hangs with repeated RPC retries (RpcRetryingCallerImpl.callWithRetries) after the ProtobufCoprocessorService throws the test exception. Looks like a previous change on trunk has broken TestCoprocessorEndpoint. The new test TestCoprocessorTableEndpoint passes quickly and repeatedly. I will file an issue to follow up on the TestCoprocessorEndpoint issue. > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. 
Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-13106: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13109) Use scanner look ahead for timeranges as well
[ https://issues.apache.org/jira/browse/HBASE-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-13109: -- Attachment: 13109-0.98.txt 0.98 patch. Untested, I only checked that it compiles. Just parking. > Use scanner look ahead for timeranges as well > - > > Key: HBASE-13109 > URL: https://issues.apache.org/jira/browse/HBASE-13109 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Priority: Minor > Attachments: 13109-0.98.txt > > > This is a continuation of HBASE-9778. > We've seen a scenario of a very slow scan over a region using a timerange > that happens to fall after the ts of any Cell in the region. > Turns out we spend a lot of time seeking. > Tested with a 5 column table, and the scan is 5x faster when the timerange > falls before all Cells' ts. > We can use the lookahead hint introduced in HBASE-9778 to do opportunistic > SKIPing before we actually seek. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13109) Use scanner look ahead for timeranges as well
Lars Hofhansl created HBASE-13109: - Summary: Use scanner look ahead for timeranges as well Key: HBASE-13109 URL: https://issues.apache.org/jira/browse/HBASE-13109 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Priority: Minor This is a continuation of HBASE-9778. We've seen a scenario of a very slow scan over a region using a timerange that happens to fall after the ts of any Cell in the region. Turns out we spend a lot of time seeking. Tested with a 5 column table, and the scan is 5x faster when the timerange falls before all Cells' ts. We can use the lookahead hint introduced in HBASE-9778 to do opportunistic SKIPing before we actually seek. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
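The SKIP-versus-SEEK trade-off behind the lookahead hint can be modeled in a few lines. This is purely illustrative, with made-up cost units and hypothetical names (the real logic lives in the scanner/matcher code paths touched by HBASE-9778): advancing one cell (SKIP) is cheap, repositioning the scanner stack (SEEK) is expensive, so for short runs of non-matching cells it pays to SKIP up to the lookahead bound before falling back to a SEEK.

```java
// Toy cost model of opportunistic skipping: if the next wanted cell is within
// `lookahead` cells, step over the run one cell at a time (SKIP); otherwise
// pay for a single expensive reposition (SEEK). Costs are illustrative units.
public class LookaheadSketch {
    static final int SKIP_COST = 1;   // cheap: advance one cell
    static final int SEEK_COST = 20;  // expensive: re-seek all scanners

    static int costWithLookahead(int cellsUntilMatch, int lookahead) {
        return cellsUntilMatch <= lookahead
            ? cellsUntilMatch * SKIP_COST
            : SEEK_COST;
    }

    public static void main(String[] args) {
        // Few cells to cross (e.g. a 5-column row): SKIPs win.
        System.out.println(costWithLookahead(3, 5));   // 3
        // Long run of non-matching cells: a single SEEK is cheaper.
        System.out.println(costWithLookahead(100, 5)); // 20
    }
}
```

Under this model, a timerange falling before all cells' timestamps degenerates into per-column seeks without the hint, which matches the 5x slowdown reported above.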
[jira] [Commented] (HBASE-13104) ZooKeeper session timeout cannot be changed for standalone HBase
[ https://issues.apache.org/jira/browse/HBASE-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337725#comment-14337725 ] Hadoop QA commented on HBASE-13104: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12700905/HBASE-13104-0.98.patch against 0.98 branch at commit c651271f5759f39f28209a50ab88a62d86b7. ATTACHMENT ID: 12700905 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.1 2.5.2 2.6.0) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 25 warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:red}-1 findbugs{color}. The patch appears to introduce 4 new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//artifact/patchprocess/checkstyle-aggregate.html Javadoc warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12969//artifact/patchprocess/patchJavadocWarnings.txt Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12969//console This message is automatically generated. > ZooKeeper session timeout cannot be changed for standalone HBase > > > Key: HBASE-13104 > URL: https://issues.apache.org/jira/browse/HBASE-13104 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 0.98.10.1 >Reporter: Alex Araujo >Assignee: Alex Araujo > Fix For: 0.98.11 > > Attachments: HBASE-13104-0.98.patch > > > It's not possible to increase the ZooKeeper session timeout in standalone > HBase due to a hardcoded 10s timeout in HMasterCommandLine: > https://github.com/apache/hbase/blob/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L176 > In trunk you can append .localHBaseCluster to the ZK session timeout property > name to change the timeout: > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop
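For standalone users hitting this, the trunk behaviour described above amounts to a suffix-keyed override of the hardcoded default. A minimal, HBase-free sketch of that lookup (the helper class below is hypothetical; only the property name and the 10s default come from the issue text):

```java
import java.util.Properties;

public class TimeoutLookup {
    // Hypothetical helper mirroring the trunk behaviour described above:
    // a ".localHBaseCluster"-suffixed property overrides the 10s default
    // that HMasterCommandLine hardcodes for standalone mode.
    static int sessionTimeout(Properties conf) {
        String v = conf.getProperty("zookeeper.session.timeout.localHBaseCluster");
        return v != null ? Integer.parseInt(v) : 10_000; // hardcoded 10s
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(sessionTimeout(conf));  // 10000
        conf.setProperty("zookeeper.session.timeout.localHBaseCluster", "60000");
        System.out.println(sessionTimeout(conf));  // 60000
    }
}
```

The 0.98 patch attached to the issue presumably back-ports an equivalent override.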
[jira] [Commented] (HBASE-12935) Does any one consider the performance of HBase on SSD?
[ https://issues.apache.org/jira/browse/HBASE-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337719#comment-14337719 ] Andrew Purtell commented on HBASE-12935: So don't compact on SSDs. (smile) > Does any one consider the performance of HBase on SSD? > --- > > Key: HBASE-12935 > URL: https://issues.apache.org/jira/browse/HBASE-12935 > Project: HBase > Issue Type: Improvement >Reporter: Liang Lee > > Some features of HBase don't match features of SSD; for example, compaction is > harmful to SSD life span. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-12935) Does any one consider the performance of HBase on SSD?
[ https://issues.apache.org/jira/browse/HBASE-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Lee updated HBASE-12935: -- Description: Some features of HBase don't match features of SSD; for example, compaction is harmful to SSD life span. (was: Some features of HBase doesn't mathch features of SSD. Such as comapction is harmful for SSD life span.) > Does any one consider the performance of HBase on SSD? > --- > > Key: HBASE-12935 > URL: https://issues.apache.org/jira/browse/HBASE-12935 > Project: HBase > Issue Type: Improvement >Reporter: Liang Lee > > Some features of HBase don't match features of SSD; for example, compaction is > harmful to SSD life span. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13108) Reduce Connection creations in TestAcidGuarantees
zhangduo created HBASE-13108: Summary: Reduce Connection creations in TestAcidGuarantees Key: HBASE-13108 URL: https://issues.apache.org/jira/browse/HBASE-13108 Project: HBase Issue Type: Sub-task Components: IPC/RPC, test Affects Versions: 2.0.0, 1.1.0 Reporter: zhangduo Assignee: zhangduo -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337713#comment-14337713 ] Hadoop QA commented on HBASE-11544: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12700917/HBASE-11544-v5.patch against master branch at commit 7195f62114ce68dfa94115443d2b27cd2d7df01c. ATTACHMENT ID: 12700917 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 148 new or modified tests. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.1 2.5.2 2.6.0) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 5 warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 lineLengths{color}. 
The patch introduces the following lines longer than 100: + if (!((mutable_bitField0_ & 0x0040) == 0x0040) && input.getBytesUntilLimit() > 0) { + private java.util.List partialFlagPerResult_ = java.util.Collections.emptyList(); + " \001(\014\022\020\n\010stop_row\030\004 \001(\014\022\027\n\006filter\030\005 \001(\0132\007" + + new java.lang.String[] { "Cell", "AssociatedCellCount", "Exists", "Stale", "Partial", }); + new java.lang.String[] { "Region", "Scan", "ScannerId", "NumberOfRows", "CloseScanner", "NextCallSeq", "ClientHandlesPartials", }); + new java.lang.String[] { "CellsPerResult", "ScannerId", "MoreResults", "Ttl", "Results", "Stale", "PartialFlagPerResult", }); {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: org.apache.hadoop.hbase.regionserver.TestHRegion org.apache.hadoop.hbase.regionserver.TestStoreScanner Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12970//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//artifact/patchprocess/checkstyle-aggregate.html Javadoc warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//artifact/patchprocess/patchJavadocWarnings.txt Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/12970//console This message is automatically generated. > [Ergonomics] hbase.client.scanner.caching is dogged and will try to return > batch even if it means OOME > -- > > Key: HBASE-11544 > URL: https://issues.apache.org/jira/browse/HBASE-11
[jira] [Updated] (HBASE-13097) Reduce Connection and RpcClient creations in unit tests
[ https://issues.apache.org/jira/browse/HBASE-13097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhangduo updated HBASE-13097: - Description: In some unit tests (such as TestAcidGuarantees) we create multiple Connection instances. If we use AsyncRpcClient, there will be multiple netty Bootstraps, and every Bootstrap has its own PooledByteBufAllocator. I haven't read the code closely, but it uses some thread-local techniques, and jmap shows io.netty.buffer.PoolThreadCache$MemoryRegionCache$Entry is the biggest thing on the heap. See https://builds.apache.org/job/HBase-TRUNK/6168/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.TestAcidGuarantees-output.txt {noformat} 2015-02-24 23:50:29,704 WARN [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(167): Detected pause in JVM or host machine (eg GC): pause of approximately 20133ms GC pool 'PS MarkSweep' had collection(s): count=15 time=55525ms {noformat} Update: We use a singleton PooledByteBufAllocator, so the reason should be too many threads. We will work on reducing the Connection and RpcClient creations in unit tests. was: In some unit tests (such as TestAcidGuarantees) we create multiple Connection instances. If we use AsyncRpcClient, there will be multiple netty Bootstraps, and every Bootstrap has its own PooledByteBufAllocator. I haven't read the code closely, but it uses some thread-local techniques, and jmap shows io.netty.buffer.PoolThreadCache$MemoryRegionCache$Entry is the biggest thing on the heap. 
See https://builds.apache.org/job/HBase-TRUNK/6168/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.TestAcidGuarantees-output.txt {noformat} 2015-02-24 23:50:29,704 WARN [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(167): Detected pause in JVM or host machine (eg GC): pause of approximately 20133ms GC pool 'PS MarkSweep' had collection(s): count=15 time=55525ms {noformat} > Reduce Connection and RpcClient creations in unit tests > --- > > Key: HBASE-13097 > URL: https://issues.apache.org/jira/browse/HBASE-13097 > Project: HBase > Issue Type: Bug > Components: IPC/RPC, test >Affects Versions: 2.0.0, 1.1.0 >Reporter: zhangduo > > In some unit tests (such as TestAcidGuarantees) we create multiple Connection > instances. If we use AsyncRpcClient, there will be multiple netty > Bootstraps, and every Bootstrap has its own PooledByteBufAllocator. > I haven't read the code closely, but it uses some thread-local techniques, and > jmap shows io.netty.buffer.PoolThreadCache$MemoryRegionCache$Entry is the > biggest thing on the heap. > See > https://builds.apache.org/job/HBase-TRUNK/6168/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.TestAcidGuarantees-output.txt > {noformat} > 2015-02-24 23:50:29,704 WARN [JvmPauseMonitor] > util.JvmPauseMonitor$Monitor(167): Detected pause in JVM or host machine (eg > GC): pause of approximately 20133ms > GC pool 'PS MarkSweep' had collection(s): count=15 time=55525ms > {noformat} > Update: We use a singleton PooledByteBufAllocator, so the reason should be too > many threads. We will work on reducing the Connection and RpcClient creations in unit > tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
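The "Update" above points at the fix direction tracked in HBASE-13097/HBASE-13108: create one Connection (and hence one RpcClient) per test class instead of one per test method. A minimal, HBase-free sketch of the sharing pattern (the Connection class here is a stand-in, not org.apache.hadoop.hbase.client.Connection):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedConnectionSketch {
    static final AtomicInteger CREATED = new AtomicInteger();

    // Stand-in for an expensive client: the real HBase Connection brings an
    // RpcClient with it, and with AsyncRpcClient that meant extra netty
    // Bootstraps, allocators, and threads per instance.
    static class Connection {
        Connection() { CREATED.incrementAndGet(); }
    }

    // One lazily created instance shared by the whole test class,
    // instead of one per test method.
    private static Connection shared;
    static synchronized Connection getConnection() {
        if (shared == null) shared = new Connection();
        return shared;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) getConnection(); // five "test methods"
        System.out.println(CREATED.get());           // 1
    }
}
```

With the shared instance, thread and allocator footprint stays constant no matter how many test methods run.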
[jira] [Updated] (HBASE-13097) Reduce Connection and RpcClient creations in unit tests
[ https://issues.apache.org/jira/browse/HBASE-13097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhangduo updated HBASE-13097: - Summary: Reduce Connection and RpcClient creations in unit tests (was: Netty PooledByteBufAllocator cause OOM in some unit test) > Reduce Connection and RpcClient creations in unit tests > --- > > Key: HBASE-13097 > URL: https://issues.apache.org/jira/browse/HBASE-13097 > Project: HBase > Issue Type: Bug > Components: IPC/RPC, test >Affects Versions: 2.0.0, 1.1.0 >Reporter: zhangduo > > In some unit tests (such as TestAcidGuarantees) we create multiple Connection > instances. If we use AsyncRpcClient, there will be multiple netty > Bootstraps, and every Bootstrap has its own PooledByteBufAllocator. > I haven't read the code closely, but it uses some thread-local techniques, and > jmap shows io.netty.buffer.PoolThreadCache$MemoryRegionCache$Entry is the > biggest thing on the heap. > See > https://builds.apache.org/job/HBase-TRUNK/6168/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.TestAcidGuarantees-output.txt > {noformat} > 2015-02-24 23:50:29,704 WARN [JvmPauseMonitor] > util.JvmPauseMonitor$Monitor(167): Detected pause in JVM or host machine (eg > GC): pause of approximately 20133ms > GC pool 'PS MarkSweep' had collection(s): count=15 time=55525ms > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0
[ https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337698#comment-14337698 ] Hudson commented on HBASE-13102: SUCCESS: Integrated in HBase-1.0 #775 (See [https://builds.apache.org/job/HBase-1.0/775/]) HBASE-13102 Fix Pseudo-distributed Mode which was broken in 1.0.0 (eclark: rev 92ba3edd4b37f1980fe0167e4daf9df4bd76) * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java > Fix Pseudo-distributed Mode which was broken in 1.0.0 > - > > Key: HBASE-13102 > URL: https://issues.apache.org/jira/browse/HBASE-13102 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0, 1.1.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.0.1, 1.1.0 > > Attachments: HBASE-13102.patch > > > {code} > 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname > of regionserver cannot be set to localhost in a fully-distributed setup > because it won't be reachable. See "Getting Started" for more information. > 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting > java.lang.RuntimeException: Failed construction of Master: class > org.apache.hadoop.hbase.master.HMaster > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) > at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065) > Caused by: java.io.IOException: The hostname of regionserver cannot be set to > localhost in a fully-distributed setup because it won't be reachable. See > "Getting Started" for more information. 
> at > org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793) > at > org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198) > at > org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500) > at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046) > ... 5 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337697#comment-14337697 ] Srikanth Srungarapu commented on HBASE-13106: - Got it. Thanks! +1 (non-binding) > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map<String, String> kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337694#comment-14337694 ] Andrew Purtell commented on HBASE-13106: Yes. The Admin in HTU is shared. I don't think we are supposed to be closing it. Although in the case of this test the reference wasn't used afterward. > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map<String, String> kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337687#comment-14337687 ] Srikanth Srungarapu commented on HBASE-13106: - Good test! Also wondering whether the following change in TestCoprocessorEndpoint is intentional? {code} -admin.close(); {code} > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map<String, String> kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0
[ https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337684#comment-14337684 ] Hudson commented on HBASE-13102: FAILURE: Integrated in HBase-1.1 #214 (See [https://builds.apache.org/job/HBase-1.1/214/]) HBASE-13102 Fix Pseudo-distributed Mode which was broken in 1.0.0 (eclark: rev 228637124370b59baead2e707d368c0f937618fb) * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java > Fix Pseudo-distributed Mode which was broken in 1.0.0 > - > > Key: HBASE-13102 > URL: https://issues.apache.org/jira/browse/HBASE-13102 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0, 1.1.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.0.1, 1.1.0 > > Attachments: HBASE-13102.patch > > > {code} > 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname > of regionserver cannot be set to localhost in a fully-distributed setup > because it won't be reachable. See "Getting Started" for more information. > 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting > java.lang.RuntimeException: Failed construction of Master: class > org.apache.hadoop.hbase.master.HMaster > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) > at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065) > Caused by: java.io.IOException: The hostname of regionserver cannot be set to > localhost in a fully-distributed setup because it won't be reachable. See > "Getting Started" for more information. 
> at > org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793) > at > org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198) > at > org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500) > at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046) > ... 5 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337667#comment-14337667 ] stack commented on HBASE-13106: --- +1 Nice test. > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map<String, String> kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0
[ https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337647#comment-14337647 ] Hudson commented on HBASE-13102: FAILURE: Integrated in HBase-TRUNK #6172 (See [https://builds.apache.org/job/HBase-TRUNK/6172/]) HBASE-13102 Fix Pseudo-distributed Mode which was broken in 1.0.0 (eclark: rev 7195f62114ce68dfa94115443d2b27cd2d7df01c) * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java > Fix Pseudo-distributed Mode which was broken in 1.0.0 > - > > Key: HBASE-13102 > URL: https://issues.apache.org/jira/browse/HBASE-13102 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0, 1.1.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.0.1, 1.1.0 > > Attachments: HBASE-13102.patch > > > {code} > 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname > of regionserver cannot be set to localhost in a fully-distributed setup > because it won't be reachable. See "Getting Started" for more information. > 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting > java.lang.RuntimeException: Failed construction of Master: class > org.apache.hadoop.hbase.master.HMaster > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) > at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065) > Caused by: java.io.IOException: The hostname of regionserver cannot be set to > localhost in a fully-distributed setup because it won't be reachable. See > "Getting Started" for more information. 
> at > org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793) > at > org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198) > at > org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500) > at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046) > ... 5 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337637#comment-14337637 ] Andrew Purtell commented on HBASE-13106: Can I get a quick looksee [~stack] ? > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map<String, String> kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
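For readers wondering what the quoted addCoprocessor call actually does to the table: it records the coprocessor as a table attribute whose value is a pipe-separated spec, roughly path|class|priority|args. A toy, HBase-free sketch of that encoding (the helper name and sample values are made up; only the pipe-separated shape follows the HBase documentation):

```java
public class CoprocessorSpecSketch {
    // Hypothetical helper reproducing only the shape of the table-attribute
    // value recorded by HTableDescriptor.addCoprocessor
    // ("path|class|priority|args"); the trailing segment would hold the
    // key=value args, left empty here.
    static String coprocessorSpec(String jarPath, String className, int priority) {
        return jarPath + "|" + className + "|" + priority + "|";
    }

    public static void main(String[] args) {
        // Sample jar path, class name, and priority are invented for illustration.
        System.out.println(coprocessorSpec(
            "hdfs:///hbase/cp/my-endpoint.jar", "com.example.MyEndpoint", 1001));
    }
}
```

A jar path that is wrong or unreadable at load time is exactly the kind of "jar placement/path issue" the comment above suspects was behind the blog's failure.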
[jira] [Commented] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0
[ https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337620#comment-14337620 ] Enis Soztutar commented on HBASE-13102: --- Thanks Elliott. I think this is fine to wait for the 1.0.1 release, which should be in March if we can get the release train in action. It is unfortunate enough though. > Fix Pseudo-distributed Mode which was broken in 1.0.0 > - > > Key: HBASE-13102 > URL: https://issues.apache.org/jira/browse/HBASE-13102 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0, 1.1.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.0.1, 1.1.0 > > Attachments: HBASE-13102.patch > > > {code} > 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname > of regionserver cannot be set to localhost in a fully-distributed setup > because it won't be reachable. See "Getting Started" for more information. > 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting > java.lang.RuntimeException: Failed construction of Master: class > org.apache.hadoop.hbase.master.HMaster > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) > at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065) > Caused by: java.io.IOException: The hostname of regionserver cannot be set to > localhost in a fully-distributed setup because it won't be reachable. See > "Getting Started" for more information. 
> at > org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793) > at > org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198) > at > org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500) > at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046) > ... 5 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HBASE-13058) Hbase shell command 'scan' for non existent table shows unnecessary info for one unrelated existent table.
[ https://issues.apache.org/jira/browse/HBASE-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reopened HBASE-13058: Reopened due to revert > Hbase shell command 'scan' for non existent table shows unnecessary info for > one unrelated existent table. > -- > > Key: HBASE-13058 > URL: https://issues.apache.org/jira/browse/HBASE-13058 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Abhishek Kumar >Assignee: Abhishek Kumar >Priority: Trivial > Fix For: 2.0.0, 1.1.0 > > Attachments: 0001-HBASE-13058-Error-messages-in-scan-table.patch, > 0001-HBASE-13058-shell-unknown-table-message-update.patch > > > When scanning for a non-existent table in the hbase shell, the error message > sometimes (based on META table content) displays completely unrelated table > info, which seems unnecessary and inconsistent with other error > messages: > {noformat} > hbase(main):016:0> scan 'noTable' > ROW COLUMN+CELL > ERROR: Unknown table Table 'noTable' was not found, got: hbase:namespace.! > - > hbase(main):017:0> scan '01_noTable' > ROW COLUMN+CELL > ERROR: Unknown table 01_noTable! > -- > {noformat} > It's happening when doing a META table scan (to locate the input table) and the > scanner stops at a row of another table (beyond which the table cannot exist) in > ConnectionManager.locateRegionInMeta: > {noformat} > private RegionLocations locateRegionInMeta(TableName tableName, byte[] row, >boolean useCache, boolean retry, int replicaId) throws > IOException { > . > > // possible we got a region of a different table... > if (!regionInfo.getTable().equals(tableName)) { > throw new TableNotFoundException( > "Table '" + tableName + "' was not found, got: " + > regionInfo.getTable() + "."); > } > ... > ... > {noformat} > Here, we can simply put a debug message (if required) and just throw the > TableNotFoundException(tableName) with only tableName instead of with the > scanner-positioned row. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13058) Hbase shell command 'scan' for non existent table shows unnecessary info for one unrelated existent table.
[ https://issues.apache.org/jira/browse/HBASE-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-13058: --- Assignee: Abhishek Kumar > Hbase shell command 'scan' for non existent table shows unnecessary info for > one unrelated existent table. > -- > > Key: HBASE-13058 > URL: https://issues.apache.org/jira/browse/HBASE-13058 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Abhishek Kumar >Assignee: Abhishek Kumar >Priority: Trivial > Fix For: 2.0.0, 1.1.0 > > Attachments: 0001-HBASE-13058-Error-messages-in-scan-table.patch, > 0001-HBASE-13058-shell-unknown-table-message-update.patch > > > When scanning for a non-existent table in the hbase shell, the error message > sometimes (based on META table content) displays completely unrelated table > info, which seems unnecessary and inconsistent with other error > messages: > {noformat} > hbase(main):016:0> scan 'noTable' > ROW COLUMN+CELL > ERROR: Unknown table Table 'noTable' was not found, got: hbase:namespace.! > - > hbase(main):017:0> scan '01_noTable' > ROW COLUMN+CELL > ERROR: Unknown table 01_noTable! > -- > {noformat} > It's happening when doing a META table scan (to locate the input table) and the > scanner stops at a row of another table (beyond which the table cannot exist) in > ConnectionManager.locateRegionInMeta: > {noformat} > private RegionLocations locateRegionInMeta(TableName tableName, byte[] row, >boolean useCache, boolean retry, int replicaId) throws > IOException { > . > > // possible we got a region of a different table... > if (!regionInfo.getTable().equals(tableName)) { > throw new TableNotFoundException( > "Table '" + tableName + "' was not found, got: " + > regionInfo.getTable() + "."); > } > ... > ... > {noformat} > Here, we can simply put a debug message (if required) and just throw the > TableNotFoundException(tableName) with only tableName instead of with the > scanner-positioned row. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
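The fix direction described in HBASE-13058 above — report only the requested table name rather than the unrelated table at the row where the META scan stopped — can be sketched with simplified stand-in types. The class and method names below are illustrative only, not the actual HBase client code:

```java
public class LocateRegionSketch {
    // Hypothetical stand-in for the check inside locateRegionInMeta.
    // Returns the error message the shell would surface, or null if the
    // META row actually belongs to the requested table.
    static String checkTableMatch(String requestedTable, String tableAtMetaRow) {
        if (!tableAtMetaRow.equals(requestedTable)) {
            // Proposed behavior: name only the table the user asked for,
            // not the unrelated table the META scanner happened to stop at
            // (e.g. hbase:namespace in the shell transcript above).
            return "Unknown table " + requestedTable + "!";
        }
        return null; // table found; no error
    }
}
```

With this shape, both `scan 'noTable'` and `scan '01_noTable'` would produce the same style of message.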
[jira] [Comment Edited] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337532#comment-14337532 ] Andrew Purtell edited comment on HBASE-13106 at 2/26/15 1:01 AM: - Hmm.. Cancelling patch. Let me add a case that adds the CP with a schema update and then tries the endpoint invocation. was (Author: apurtell): Hmm.. Cancelling patch. Let me add a case that adds the CP with online update and then tries the endpoint invocation. > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-12795) Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98
[ https://issues.apache.org/jira/browse/HBASE-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337606#comment-14337606 ] Andrew Purtell commented on HBASE-12795: Do you see any issues with this [~eclark]? > Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98 > --- > > Key: HBASE-12795 > URL: https://issues.apache.org/jira/browse/HBASE-12795 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 0.98.11 > > Attachments: HBASE-12795-0.98.patch > > > As of HBASE-12371 we are following along with improvements in the integration > test module. Evaluate HBASE-12429 (Add port to ClusterManager's actions) for > backport to 0.98. This improves testing with chaos to support testing on a > cluster with multiple regionservers running on a host. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-12795) Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98
[ https://issues.apache.org/jira/browse/HBASE-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337605#comment-14337605 ] Andrew Purtell commented on HBASE-12795: Patch for 0.98. I needed to merge back diffs between the 0.98 and branch-1 versions of RESTApiClusterManager. > Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98 > --- > > Key: HBASE-12795 > URL: https://issues.apache.org/jira/browse/HBASE-12795 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 0.98.11 > > Attachments: HBASE-12795-0.98.patch > > > As of HBASE-12371 we are following along with improvements in the integration > test module. Evaluate HBASE-12429 (Add port to ClusterManager's actions) for > backport to 0.98. This improves testing with chaos to support testing on a > cluster with multiple regionservers running on a host. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-12795) Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98
[ https://issues.apache.org/jira/browse/HBASE-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-12795: --- Status: Patch Available (was: Open) > Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98 > --- > > Key: HBASE-12795 > URL: https://issues.apache.org/jira/browse/HBASE-12795 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 0.98.11 > > Attachments: HBASE-12795-0.98.patch > > > As of HBASE-12371 we are following along with improvements in the integration > test module. Evaluate HBASE-12429 (Add port to ClusterManager's actions) for > backport to 0.98. This improves testing with chaos to support testing on a > cluster with multiple regionservers running on a host. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-12795) Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98
[ https://issues.apache.org/jira/browse/HBASE-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-12795: --- Attachment: HBASE-12795-0.98.patch > Backport HBASE-12429 (Add port to ClusterManager's actions) to 0.98 > --- > > Key: HBASE-12795 > URL: https://issues.apache.org/jira/browse/HBASE-12795 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 0.98.11 > > Attachments: HBASE-12795-0.98.patch > > > As of HBASE-12371 we are following along with improvements in the integration > test module. Evaluate HBASE-12429 (Add port to ClusterManager's actions) for > backport to 0.98. This improves testing with chaos to support testing on a > cluster with multiple regionservers running on a host. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13097) Netty PooledByteBufAllocator cause OOM in some unit test
[ https://issues.apache.org/jira/browse/HBASE-13097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337601#comment-14337601 ] stack commented on HBASE-13097: --- Changing thrust of issue is good by me. Reducing connections and rpclients in tests is a fine goal. > Netty PooledByteBufAllocator cause OOM in some unit test > > > Key: HBASE-13097 > URL: https://issues.apache.org/jira/browse/HBASE-13097 > Project: HBase > Issue Type: Bug > Components: IPC/RPC, test >Affects Versions: 2.0.0, 1.1.0 >Reporter: zhangduo > > In some unit tests(such as TestAcidGuarantees) we create multiple Connection > instance. If we use AsyncRpcClient, then there will be multiple netty > Bootstrap and every Bootstrap has its own PooledByteBufAllocator. > I haven't read the code clearly but it uses some threadlocal technics and > jmap shows io.netty.buffer.PoolThreadCache$MemoryRegionCache$Entry is the > biggest things on Heap. > See > https://builds.apache.org/job/HBase-TRUNK/6168/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.TestAcidGuarantees-output.txt > {noformat} > 2015-02-24 23:50:29,704 WARN [JvmPauseMonitor] > util.JvmPauseMonitor$Monitor(167): Detected pause in JVM or host machine (eg > GC): pause of approximately 20133ms > GC pool 'PS MarkSweep' had collection(s): count=15 time=55525ms > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13107) Refactor MOB Snapshot logic to reduce code duplication.
Jonathan Hsieh created HBASE-13107: -- Summary: Refactor MOB Snapshot logic to reduce code duplication. Key: HBASE-13107 URL: https://issues.apache.org/jira/browse/HBASE-13107 Project: HBase Issue Type: Sub-task Components: mob, snapshots Affects Versions: hbase-11339 Reporter: Jonathan Hsieh The MOB Snapshot code contains a lot of code duplication with the normal snapshot code path. We should do some refactoring to clean this up before merging. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-13106: --- Attachment: HBASE-13106.patch > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-13106: --- Status: Patch Available (was: Open) v2 > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch, HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13104) ZooKeeper session timeout cannot be changed for standalone HBase
[ https://issues.apache.org/jira/browse/HBASE-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337569#comment-14337569 ] Lars Hofhansl commented on HBASE-13104: --- +1 > ZooKeeper session timeout cannot be changed for standalone HBase > > > Key: HBASE-13104 > URL: https://issues.apache.org/jira/browse/HBASE-13104 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 0.98.10.1 >Reporter: Alex Araujo >Assignee: Alex Araujo > Fix For: 0.98.11 > > Attachments: HBASE-13104-0.98.patch > > > It's not possible to increase the ZooKeeper session timeout in standalone > HBase due to a hardcoded 10s timeout in HMasterCommandLine: > https://github.com/apache/hbase/blob/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L176 > In trunk you can append .localHBaseCluster to the ZK session timeout property > name to change the timeout: > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L169-171 > We should allow changing the timeout in 0.98 and other versions where it's > not possible to do so. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13097) Netty PooledByteBufAllocator cause OOM in some unit test
[ https://issues.apache.org/jira/browse/HBASE-13097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337549#comment-14337549 ] zhangduo commented on HBASE-13097: -- Oh yeah, it is PooledByteBufAllocator.DEFAULT, not PooledByteBufAllocator.class, sorry... So the problem is still that we create too many EventLoopGroup instances, each with its own thread pool, which causes too many live threads? If you all agree, we can change the title of this issue to "Reduce Connection and RpcClient creations in unit tests", and create sub-tasks to handle the tests one by one? Thanks. [~jurmous] [~stack] > Netty PooledByteBufAllocator cause OOM in some unit test > > > Key: HBASE-13097 > URL: https://issues.apache.org/jira/browse/HBASE-13097 > Project: HBase > Issue Type: Bug > Components: IPC/RPC, test >Affects Versions: 2.0.0, 1.1.0 >Reporter: zhangduo > > In some unit tests(such as TestAcidGuarantees) we create multiple Connection > instance. If we use AsyncRpcClient, then there will be multiple netty > Bootstrap and every Bootstrap has its own PooledByteBufAllocator. > I haven't read the code clearly but it uses some threadlocal technics and > jmap shows io.netty.buffer.PoolThreadCache$MemoryRegionCache$Entry is the > biggest things on Heap. > See > https://builds.apache.org/job/HBase-TRUNK/6168/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.TestAcidGuarantees-output.txt > {noformat} > 2015-02-24 23:50:29,704 WARN [JvmPauseMonitor] > util.JvmPauseMonitor$Monitor(167): Detected pause in JVM or host machine (eg > GC): pause of approximately 20133ms > GC pool 'PS MarkSweep' had collection(s): count=15 time=55525ms > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
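The resource-duplication pattern discussed in HBASE-13097 above can be illustrated with plain `java.util.concurrent` stand-ins. This is only an analogy: the `Client` class below is hypothetical, not the actual AsyncRpcClient, and `ExecutorService` stands in for a netty EventLoopGroup plus its per-thread buffer allocator caches:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SharedResourceSketch {
    // Hypothetical stand-in for an RPC client that needs a worker pool.
    static class Client {
        final ExecutorService workers;
        Client(ExecutorService workers) { this.workers = workers; }
    }

    // Problematic pattern: every client allocates its own pool, so resource
    // usage grows with the number of clients a test creates.
    static List<Client> perClientPools(int n) {
        List<Client> clients = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            clients.add(new Client(Executors.newFixedThreadPool(4)));
        }
        return clients;
    }

    // Mitigation direction: share one pool (or simply create fewer
    // Connection/RpcClient instances), so resource usage does not scale
    // with the number of clients.
    static List<Client> sharedPool(int n, ExecutorService shared) {
        List<Client> clients = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            clients.add(new Client(shared));
        }
        return clients;
    }
}
```

The proposed retitling ("Reduce Connection and RpcClient creations in unit tests") corresponds to moving tests from the first pattern toward the second.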
[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Lawlor updated HBASE-11544: Status: Patch Available (was: Open) > [Ergonomics] hbase.client.scanner.caching is dogged and will try to return > batch even if it means OOME > -- > > Key: HBASE-11544 > URL: https://issues.apache.org/jira/browse/HBASE-11544 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: Jonathan Lawlor >Priority: Critical > Labels: beginner > Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, > HBASE-11544-v3.patch, HBASE-11544-v4.patch, HBASE-11544-v5.patch > > > Running some tests, I set hbase.client.scanner.caching=1000. Dataset has > large cells. I kept OOME'ing. > Serverside, we should measure how much we've accumulated and return to the > client whatever we've gathered once we pass out a certain size threshold > rather than keep accumulating till we OOME. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Lawlor updated HBASE-11544: Attachment: HBASE-11544-v5.patch - Fix line lengths - Fix test failure of TestPrefixTree to recognize new return type - Throw exceptions in the case that a NextState is observed to be invalid > [Ergonomics] hbase.client.scanner.caching is dogged and will try to return > batch even if it means OOME > -- > > Key: HBASE-11544 > URL: https://issues.apache.org/jira/browse/HBASE-11544 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: Jonathan Lawlor >Priority: Critical > Labels: beginner > Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, > HBASE-11544-v3.patch, HBASE-11544-v4.patch, HBASE-11544-v5.patch > > > Running some tests, I set hbase.client.scanner.caching=1000. Dataset has > large cells. I kept OOME'ing. > Serverside, we should measure how much we've accumulated and return to the > client whatever we've gathered once we pass out a certain size threshold > rather than keep accumulating till we OOME. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
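The server-side fix direction described in HBASE-11544 above — return whatever has been gathered once a size threshold is passed, rather than insisting on a full `hbase.client.scanner.caching` batch — can be sketched as follows. This is a simplified model with illustrative names (`nextBatch`, `caching`, `maxResultSize`), not the actual RegionScanner/NextState code from the patch:

```java
import java.util.ArrayList;
import java.util.List;

public class SizeLimitedScanSketch {
    // Accumulate rows until either `caching` rows are gathered or the
    // accumulated size passes `maxResultSize` bytes, whichever comes first.
    // Returning a partial batch early is what prevents the OOME when cells
    // are large.
    static List<byte[]> nextBatch(List<byte[]> source, int caching, long maxResultSize) {
        List<byte[]> batch = new ArrayList<>();
        long accumulated = 0;
        for (byte[] row : source) {
            if (batch.size() >= caching) {
                break; // full batch of `caching` rows
            }
            batch.add(row);
            accumulated += row.length;
            if (accumulated >= maxResultSize) {
                break; // size threshold hit: return a partial batch
            }
        }
        return batch;
    }
}
```

For example, with 1 KB rows, `caching = 1000`, and a 2.5 KB size limit, the batch stops after three rows instead of accumulating all 1000.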
[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Lawlor updated HBASE-11544: Status: Open (was: Patch Available) > [Ergonomics] hbase.client.scanner.caching is dogged and will try to return > batch even if it means OOME > -- > > Key: HBASE-11544 > URL: https://issues.apache.org/jira/browse/HBASE-11544 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: Jonathan Lawlor >Priority: Critical > Labels: beginner > Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, > HBASE-11544-v3.patch, HBASE-11544-v4.patch > > > Running some tests, I set hbase.client.scanner.caching=1000. Dataset has > large cells. I kept OOME'ing. > Serverside, we should measure how much we've accumulated and return to the > client whatever we've gathered once we pass out a certain size threshold > rather than keep accumulating till we OOME. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-13106: --- Status: Open (was: Patch Available) Hmm.. Cancelling patch. Let me add a case that adds the CP with online update and then tries the endpoint invocation. > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-13106: --- Assignee: Andrew Purtell Status: Patch Available (was: Open) > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
[ https://issues.apache.org/jira/browse/HBASE-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-13106: --- Attachment: HBASE-13106.patch > Ensure endpoint-only table coprocessors can be dynamically loaded > - > > Key: HBASE-13106 > URL: https://issues.apache.org/jira/browse/HBASE-13106 > Project: HBase > Issue Type: Test >Reporter: Andrew Purtell >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: HBASE-13106.patch > > > I came across the blog post > http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting > bit: > {quote} > This means that you can load both Observer and Endpoint Coprocessor > statically using the following Method of HTableDescriptor: > {noformat} > addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int > priority, Map kvs) throws IOException > {noformat} > In my case, the above method worked fine for Observer Coprocessor *but didn’t > work for Endpoint Coprocessor, causing the table to become unavailable and > finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine > when loaded statically. Use the above method for Endpoint Coprocessor with > caution. > {quote} > To check this I wrote a test, attached. It passes, all seems ok. Guessing the > complaint was due to user error, probably jar placement/path issues. > Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13086) Show ZK root node on Master WebUI
[ https://issues.apache.org/jira/browse/HBASE-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337528#comment-14337528 ] Hudson commented on HBASE-13086: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #830 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/830/]) HBASE-13086 Show ZK root node on Master WebUI (addendum) (ndimiduk: rev 58c1c7434f22b5a2a923de1d6504df6c061885ee) * hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterStatusServlet.java > Show ZK root node on Master WebUI > - > > Key: HBASE-13086 > URL: https://issues.apache.org/jira/browse/HBASE-13086 > Project: HBase > Issue Type: Improvement > Components: master >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: 13068.jpg, HBASE-13068.00.patch, > HBASE-13086-0.98.addendum0.patch > > > Currently we show a well-formed ZK quorum on the master webUI but not the > root node. Root node can be changed based on deployment, so we should list it > here explicitly. This information is helpful for folks playing around with > phoenix. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13106) Ensure endpoint-only table coprocessors can be dynamically loaded
Andrew Purtell created HBASE-13106: -- Summary: Ensure endpoint-only table coprocessors can be dynamically loaded Key: HBASE-13106 URL: https://issues.apache.org/jira/browse/HBASE-13106 Project: HBase Issue Type: Test Reporter: Andrew Purtell Priority: Trivial Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 I came across the blog post http://www.3pillarglobal.com/insights/hbase-coprocessors and this interesting bit: {quote} This means that you can load both Observer and Endpoint Coprocessor statically using the following Method of HTableDescriptor: {noformat} addCoprocessor(String className, org.apache.hadoop.fs.Path jarFilePath, int priority, Map kvs) throws IOException {noformat} In my case, the above method worked fine for Observer Coprocessor *but didn’t work for Endpoint Coprocessor, causing the table to become unavailable and finally I had to restart my HBase*. The same Endpoint Coprocessor worked fine when loaded statically. Use the above method for Endpoint Coprocessor with caution. {quote} To check this I wrote a test, attached. It passes, all seems ok. Guessing the complaint was due to user error, probably jar placement/path issues. Let's still commit the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0
[ https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-13102: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) > Fix Pseudo-distributed Mode which was broken in 1.0.0 > - > > Key: HBASE-13102 > URL: https://issues.apache.org/jira/browse/HBASE-13102 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0, 1.1.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.0.1, 1.1.0 > > Attachments: HBASE-13102.patch > > > {code} > 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname > of regionserver cannot be set to localhost in a fully-distributed setup > because it won't be reachable. See "Getting Started" for more information. > 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting > java.lang.RuntimeException: Failed construction of Master: class > org.apache.hadoop.hbase.master.HMaster > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) > at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065) > Caused by: java.io.IOException: The hostname of regionserver cannot be set to > localhost in a fully-distributed setup because it won't be reachable. See > "Getting Started" for more information. 
> at > org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793) > at > org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198) > at > org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500) > at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046) > ... 5 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13104) ZooKeeper session timeout cannot be changed for standalone HBase
[ https://issues.apache.org/jira/browse/HBASE-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-13104: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) +1 Pushed to 0.98 Thanks for the patch [~alexaraujo]! > ZooKeeper session timeout cannot be changed for standalone HBase > > > Key: HBASE-13104 > URL: https://issues.apache.org/jira/browse/HBASE-13104 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 0.98.10.1 >Reporter: Alex Araujo >Assignee: Alex Araujo > Fix For: 0.98.11 > > Attachments: HBASE-13104-0.98.patch > > > It's not possible to increase the ZooKeeper session timeout in standalone > HBase due to a hardcoded 10s timeout in HMasterCommandLine: > https://github.com/apache/hbase/blob/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L176 > In trunk you can append .localHBaseCluster to the ZK session timeout property > name to change the timeout: > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L169-171 > We should allow changing the timeout in 0.98 and other versions where it's > not possible to do so. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
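The trunk behavior quoted above (appending `.localHBaseCluster` to the ZK session timeout property name) can be sketched as an hbase-site.xml fragment. The 60-second value is illustrative, and the suffixed property is only honored on branches that carry the linked HMasterCommandLine change:

```xml
<!-- Hypothetical hbase-site.xml fragment: raises the standalone/local-cluster
     ZooKeeper session timeout from the hardcoded 10s to 60s (milliseconds). -->
<property>
  <name>zookeeper.session.timeout.localHBaseCluster</name>
  <value>60000</value>
</property>
```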
[jira] [Updated] (HBASE-13050) Hbase shell create_namespace command throws ArrayIndexOutOfBoundException for (invalid) empty text input.
[ https://issues.apache.org/jira/browse/HBASE-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-13050: --- Assignee: Abhishek Kumar > Hbase shell create_namespace command throws ArrayIndexOutOfBoundException for > (invalid) empty text input. > - > > Key: HBASE-13050 > URL: https://issues.apache.org/jira/browse/HBASE-13050 > Project: HBase > Issue Type: Bug >Reporter: Abhishek Kumar >Assignee: Abhishek Kumar >Priority: Trivial > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: 0001-HBASE-13050-Empty-Namespace-validation.patch > > > {noformat} > hbase(main):008:0> create_namespace '' > ERROR: java.io.IOException: 0 > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2072) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:222) > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:205) > {noformat} > TableName.isLegalNamespaceName tries to access namespaceName[offset] in case > of empty text input and also this check for 'offset==length' in this method > seems to be unnecessary and an empty input validation check can be put in > the beginning of this method instead: > {noformat} > public static void isLegalNamespaceName(byte[] namespaceName, int offset, > int length) { > // can add empty check in the beginning > if(length == 0) { > throw new IllegalArgumentException("Namespace name must not be empty"); > } > // end > for (int i = offset; i < length; i++) { > if (Character.isLetterOrDigit(namespaceName[i])|| namespaceName[i] == > '_') { > continue; > } > throw new IllegalArgumentException("Illegal character <" + > namespaceName[i] + > 
"> at " + i + ". Namespaces can only contain " + > "'alphanumeric characters': i.e. [a-zA-Z_0-9]: " + > Bytes.toString(namespaceName, > offset, length)); > } > // can remove below check > if (offset == length) > throw new IllegalArgumentException("Illegal character <" + > namespaceName[offset] + > "> at " + offset + ". Namespaces can only contain " + > "'alphanumeric characters': i.e. [a-zA-Z_0-9]: " + > Bytes.toString(namespaceName, > offset, length)); > // > } > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
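The proposed fix can be shown as a standalone sketch (simplified to a whole-array check; this is not the actual `org.apache.hadoop.hbase.TableName` code, which takes an offset and length): reject an empty namespace up front so the character loop never indexes into an empty array.

```java
// Simplified, standalone sketch of the proposed guard: an empty namespace
// now raises a clear IllegalArgumentException instead of the
// ArrayIndexOutOfBoundsException seen in the shell.
public class NamespaceCheck {
    public static void isLegalNamespaceName(byte[] namespaceName) {
        if (namespaceName == null || namespaceName.length == 0) {
            throw new IllegalArgumentException("Namespace name must not be empty");
        }
        for (int i = 0; i < namespaceName.length; i++) {
            byte b = namespaceName[i];
            if (Character.isLetterOrDigit(b) || b == '_') {
                continue;
            }
            throw new IllegalArgumentException("Illegal character <" + b + "> at " + i
                + ". Namespaces can only contain 'alphanumeric characters': i.e. [a-zA-Z_0-9]");
        }
    }

    public static void main(String[] args) {
        isLegalNamespaceName("my_ns1".getBytes()); // valid name: passes silently
        try {
            isLegalNamespaceName(new byte[0]);     // previously AIOOBE; now a clear message
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```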
[jira] [Commented] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0
[ https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337508#comment-14337508 ] Elliott Clark commented on HBASE-13102: --- K, I'll commit this to branch-1.0 + > Fix Pseudo-distributed Mode which was broken in 1.0.0 > - > > Key: HBASE-13102 > URL: https://issues.apache.org/jira/browse/HBASE-13102 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0, 1.1.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.0.1, 1.1.0 > > Attachments: HBASE-13102.patch > > > {code} > 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname > of regionserver cannot be set to localhost in a fully-distributed setup > because it won't be reachable. See "Getting Started" for more information. > 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting > java.lang.RuntimeException: Failed construction of Master: class > org.apache.hadoop.hbase.master.HMaster > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) > at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065) > Caused by: java.io.IOException: The hostname of regionserver cannot be set to > localhost in a fully-distributed setup because it won't be reachable. See > "Getting Started" for more information. 
> at > org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793) > at > org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198) > at > org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500) > at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046) > ... 5 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13049) wal_roll ruby command doesn't work.
[ https://issues.apache.org/jira/browse/HBASE-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337496#comment-14337496 ] Andrew Purtell commented on HBASE-13049: Pushed to 0.98 > wal_roll ruby command doesn't work. > > > Key: HBASE-13049 > URL: https://issues.apache.org/jira/browse/HBASE-13049 > Project: HBase > Issue Type: Bug > Components: shell >Affects Versions: 1.0.0, 2.0.0 >Reporter: Bhupendra Kumar Jain >Assignee: Bhupendra Kumar Jain > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: 0001-HBASE-13049-wal_roll-ruby-command-doesn-t-work.patch > > > On execution of the wal_roll command in the shell, an error message gets displayed as > shown below > hbase(main):005:0> wal_roll 'host-10-19-92-94,16201,1424081618286' > *ERROR: cannot convert instance of class org.jruby.RubyString to class > org.apache.hadoop.hbase.ServerName* > it's because the Admin Java API expects a ServerName object but the script passes > the ServerName as a string. > Currently the script is as below > {code} > @admin.rollWALWriter(server_name) > {code} > It should be like > {code} > @admin.rollWALWriter(ServerName.valueOf(server_name)) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
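For context, the shell argument above is the comma-separated `hostname,port,startcode` form that `ServerName.valueOf` understands; a dependency-free, illustrative sketch of that split (not HBase's actual parsing code):

```java
// Illustrative parse of HBase's "hostname,port,startcode" server name string,
// showing why a bare RubyString cannot stand in for a ServerName object.
public class ServerNameParse {
    public static String[] parse(String serverName) {
        String[] parts = serverName.split(",");
        if (parts.length != 3) {
            throw new IllegalArgumentException(
                "Expected hostname,port,startcode but got: " + serverName);
        }
        return parts;
    }

    public static void main(String[] args) {
        String[] p = parse("host-10-19-92-94,16201,1424081618286");
        System.out.println(p[0] + " / " + p[1] + " / " + p[2]);
    }
}
```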
[jira] [Commented] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0
[ https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337494#comment-14337494 ] Hadoop QA commented on HBASE-13102: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12700868/HBASE-13102.patch against master branch at commit c651271f5759f39f28209a50ab88a62d86b7. ATTACHMENT ID: 12700868 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.1 2.5.2 2.6.0) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.hadoop.hbase.client.TestTimestampsFilter.testWithVersionDeletes(TestTimestampsFilter.java:236) at org.apache.hadoop.hbase.client.TestTimestampsFilter.testWithVersionDeletes(TestTimestampsFilter.java:223) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/12967//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12967//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12967//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12967//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12967//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12967//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12967//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12967//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12967//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12967//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12967//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12967//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12967//artifact/patchprocess/checkstyle-aggregate.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/12967//console This message is automatically generated. > Fix Pseudo-distributed Mode which was broken in 1.0.0 > - > > Key: HBASE-13102 > URL: https://issues.apache.org/jira/browse/HBASE-13102 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0, 1.1.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.0.1, 1.1.0 > > Attachments: HBASE-13102.patch > > > {code} > 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname > of regionserver cannot be set to localhost in a fully-distributed setup > because it won't be reachable. See "Getting Started" for more information. > 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting >
[jira] [Updated] (HBASE-13049) wal_roll ruby command doesn't work.
[ https://issues.apache.org/jira/browse/HBASE-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-13049: --- Fix Version/s: 0.98.11 Assignee: Bhupendra Kumar Jain > wal_roll ruby command doesn't work. > > > Key: HBASE-13049 > URL: https://issues.apache.org/jira/browse/HBASE-13049 > Project: HBase > Issue Type: Bug > Components: shell >Affects Versions: 1.0.0, 2.0.0 >Reporter: Bhupendra Kumar Jain >Assignee: Bhupendra Kumar Jain > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: 0001-HBASE-13049-wal_roll-ruby-command-doesn-t-work.patch > > > On execution of the wal_roll command in the shell, an error message gets displayed as > shown below > hbase(main):005:0> wal_roll 'host-10-19-92-94,16201,1424081618286' > *ERROR: cannot convert instance of class org.jruby.RubyString to class > org.apache.hadoop.hbase.ServerName* > it's because the Admin Java API expects a ServerName object but the script passes > the ServerName as a string. > Currently the script is as below > {code} > @admin.rollWALWriter(server_name) > {code} > It should be like > {code} > @admin.rollWALWriter(ServerName.valueOf(server_name)) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337491#comment-14337491 ] Hadoop QA commented on HBASE-11544: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12700890/HBASE-11544-v4.patch against master branch at commit c651271f5759f39f28209a50ab88a62d86b7. ATTACHMENT ID: 12700890 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 148 new or modified tests. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.1 2.5.2 2.6.0) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 5 warning messages. {color:red}-1 checkstyle{color}. The applied patch generated 1944 checkstyle errors (more than the master's current 1938 errors). {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 lineLengths{color}. 
The patch introduces the following lines longer than 100: + if (!((mutable_bitField0_ & 0x0040) == 0x0040) && input.getBytesUntilLimit() > 0) { + private java.util.List partialFlagPerResult_ = java.util.Collections.emptyList(); + " \001(\014\022\020\n\010stop_row\030\004 \001(\014\022\027\n\006filter\030\005 \001(\0132\007" + + new java.lang.String[] { "Cell", "AssociatedCellCount", "Exists", "Stale", "Partial", }); + new java.lang.String[] { "Region", "Scan", "ScannerId", "NumberOfRows", "CloseScanner", "NextCallSeq", "ClientHandlesPartials", }); + new java.lang.String[] { "CellsPerResult", "ScannerId", "MoreResults", "Ttl", "Results", "Stale", "PartialFlagPerResult", }); + joinedHeapState != null && joinedHeapState.hasResultSizeEstimate() ? joinedHeapState + * @return state where {@link NextState#hasMoreValues()} is true if more rows exist after this one, + * @return state where {@link NextState#hasMoreValues()} is true if more rows exist after this one, + * @return a state where {@link NextState#hasMoreValues()} is true when more rows exist, false when {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.io.encoding.TestPrefixTree Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/12968//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12968//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12968//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12968//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12968//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12968//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12968//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12968//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12968//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12968//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12968//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12968//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/12968//artifact/patchprocess/checkstyle-aggregate.html Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12968//artifact/patchprocess/patchJavadocWarnings.txt Console output: https://builds.apache.org/job/PreCo
[jira] [Updated] (HBASE-13104) ZooKeeper session timeout cannot be changed for standalone HBase
[ https://issues.apache.org/jira/browse/HBASE-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Araujo updated HBASE-13104: Status: Patch Available (was: Open) 0.94 does not appear to set the ZK session timeout in HMasterCommandLine. Attaching a patch for 0.98. > ZooKeeper session timeout cannot be changed for standalone HBase > > > Key: HBASE-13104 > URL: https://issues.apache.org/jira/browse/HBASE-13104 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 0.98.10.1 >Reporter: Alex Araujo >Assignee: Alex Araujo > Fix For: 0.98.11 > > Attachments: HBASE-13104-0.98.patch > > > It's not possible to increase the ZooKeeper session timeout in standalone > HBase due to a hardcoded 10s timeout in HMasterCommandLine: > https://github.com/apache/hbase/blob/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L176 > In trunk you can append .localHBaseCluster to the ZK session timeout property > name to change the timeout: > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L169-171 > We should allow changing the timeout in 0.98 and other versions where it's > not possible to do so. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13105) [hbck] Add option to reconstruct hbase:namespace if corrupt
[ https://issues.apache.org/jira/browse/HBASE-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Esteban Gutierrez updated HBASE-13105: -- Issue Type: Improvement (was: Bug) > [hbck] Add option to reconstruct hbase:namespace if corrupt > --- > > Key: HBASE-13105 > URL: https://issues.apache.org/jira/browse/HBASE-13105 > Project: HBase > Issue Type: Improvement >Reporter: Esteban Gutierrez > > If the HFile containing the namespaces gets corrupted, we don't have a way to > gracefully fix it. hbck should handle this in a similar way to > OfflineMetaRepair. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13105) [hbck] Add option to reconstruct hbase:namespace if corrupt
Esteban Gutierrez created HBASE-13105: - Summary: [hbck] Add option to reconstruct hbase:namespace if corrupt Key: HBASE-13105 URL: https://issues.apache.org/jira/browse/HBASE-13105 Project: HBase Issue Type: Bug Reporter: Esteban Gutierrez If the HFile containing the namespaces gets corrupted, we don't have a way to gracefully fix it. hbck should handle this in a similar way to OfflineMetaRepair. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13104) ZooKeeper session timeout cannot be changed for standalone HBase
[ https://issues.apache.org/jira/browse/HBASE-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Araujo updated HBASE-13104: Attachment: HBASE-13104-0.98.patch > ZooKeeper session timeout cannot be changed for standalone HBase > > > Key: HBASE-13104 > URL: https://issues.apache.org/jira/browse/HBASE-13104 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 0.98.10.1 >Reporter: Alex Araujo >Assignee: Alex Araujo > Fix For: 0.98.11 > > Attachments: HBASE-13104-0.98.patch > > > It's not possible to increase the ZooKeeper session timeout in standalone > HBase due to a hardcoded 10s timeout in HMasterCommandLine: > https://github.com/apache/hbase/blob/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L176 > In trunk you can append .localHBaseCluster to the ZK session timeout property > name to change the timeout: > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L169-171 > We should allow changing the timeout in 0.98 and other versions where it's > not possible to do so. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13100) Shell command to retrieve table splits
[ https://issues.apache.org/jira/browse/HBASE-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-13100: --- Fix Version/s: 2.0.0 > Shell command to retrieve table splits > -- > > Key: HBASE-13100 > URL: https://issues.apache.org/jira/browse/HBASE-13100 > Project: HBase > Issue Type: Improvement > Components: shell >Reporter: Sean Busbey >Priority: Minor > Labels: beginner > Fix For: 2.0.0, 1.1.0 > > > Add a shell command that returns the splits for a table. > Doing this yourself is currently possible, but involves going outside of the > public api. > {code} > jruby-1.7.3 :012 > create 'example_table', 'f1', SPLITS => ["10", "20", "30", > "40"] > 0 row(s) in 0.5500 seconds > => Hbase::Table - example_table > jruby-1.7.3 :013 > > get_table('example_table').table.get_all_region_locations.map do |location| > org.apache.hadoop.hbase.util.Bytes::toStringBinary(location.get_region_info.get_start_key) > end > 0 row(s) in 0.0130 seconds > => ["", "10", "20", "30", "40"] > jruby-1.7.3 :014 > > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13086) Show ZK root node on Master WebUI
[ https://issues.apache.org/jira/browse/HBASE-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337410#comment-14337410 ] Hudson commented on HBASE-13086: SUCCESS: Integrated in HBase-0.98 #872 (See [https://builds.apache.org/job/HBase-0.98/872/]) HBASE-13086 Show ZK root node on Master WebUI (addendum) (ndimiduk: rev 58c1c7434f22b5a2a923de1d6504df6c061885ee) * hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterStatusServlet.java > Show ZK root node on Master WebUI > - > > Key: HBASE-13086 > URL: https://issues.apache.org/jira/browse/HBASE-13086 > Project: HBase > Issue Type: Improvement > Components: master >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: 13068.jpg, HBASE-13068.00.patch, > HBASE-13086-0.98.addendum0.patch > > > Currently we show a well-formed ZK quorum on the master webUI but not the > root node. Root node can be changed based on deployment, so we should list it > here explicitly. This information is helpful for folks playing around with > phoenix. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13098) HBase Connection Control
[ https://issues.apache.org/jira/browse/HBASE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337409#comment-14337409 ] Andrew Purtell commented on HBASE-13098: We already have a hierarchy of RPC connection controllers descending from the {{RpcController}} interface, pluggable via {{RpcControllerFactories}}, and in use by client apps such as Apache Phoenix. Can this be implemented within that framework? I skimmed the patch and the {{ConnectionControl}} concept seems similar in some respects (controlling RPC) but more limited in others (can only accept or reject connections). > HBase Connection Control > > > Key: HBASE-13098 > URL: https://issues.apache.org/jira/browse/HBASE-13098 > Project: HBase > Issue Type: New Feature > Components: security >Affects Versions: 0.98.10 >Reporter: Ashish Singhi >Assignee: Ashish Singhi > Fix For: 2.0.0, 1.1.0, 0.98.11 > > Attachments: HBASE-13098.patch, HBase Connection Control.pdf > > > It is desirable to set a limit on the number of client connections > permitted to the HBase server, controlled by certain system > variables/parameters. Too many connections to the HBase server imply too many > queries and MR jobs running on HBase. This can slow down the performance of > the system and lead to denial of service. Hence such connections need to be > controlled. Using too many connections may just cause thrashing rather than > get more useful work done. > This is kind of inspired by > http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/#.VO2JXXyUe9y -- This message was sent by Atlassian JIRA (v6.3.4#6332)
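The accept-or-reject connection cap discussed in this issue can be sketched with a counting semaphore (names hypothetical; the actual patch plugs into HBase's RPC layer rather than standing alone):

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch of capping concurrent client connections: a connection
// past the limit is refused immediately rather than queued.
public class ConnectionLimiter {
    private final Semaphore permits;

    public ConnectionLimiter(int maxConnections) {
        this.permits = new Semaphore(maxConnections);
    }

    /** Returns true if the connection is admitted; the caller must release() on close. */
    public boolean tryAccept() {
        return permits.tryAcquire();
    }

    public void release() {
        permits.release();
    }

    public static void main(String[] args) {
        ConnectionLimiter limiter = new ConnectionLimiter(2);
        System.out.println(limiter.tryAccept()); // true
        System.out.println(limiter.tryAccept()); // true
        System.out.println(limiter.tryAccept()); // false: over the cap, rejected
        limiter.release();                       // one client disconnects
        System.out.println(limiter.tryAccept()); // true
    }
}
```

`Semaphore.tryAcquire()` never blocks, which matches the "can only accept or reject connections" behavior described in the comment above.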
[jira] [Updated] (HBASE-13098) HBase Connection Control
[ https://issues.apache.org/jira/browse/HBASE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-13098: --- Fix Version/s: (was: 0.98.11) (was: 1.1.0) (was: 2.0.0) Affects Version/s: (was: 0.98.10) Status: Open (was: Patch Available) > HBase Connection Control > > > Key: HBASE-13098 > URL: https://issues.apache.org/jira/browse/HBASE-13098 > Project: HBase > Issue Type: New Feature > Components: security >Reporter: Ashish Singhi >Assignee: Ashish Singhi > Attachments: HBASE-13098.patch, HBase Connection Control.pdf > > > It is desirable to set a limit on the number of client connections > permitted to the HBase server, controlled by certain system > variables/parameters. Too many connections to the HBase server imply too many > queries and MR jobs running on HBase. This can slow down the performance of > the system and lead to denial of service. Hence such connections need to be > controlled. Using too many connections may just cause thrashing rather than > get more useful work done. > This is kind of inspired by > http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/#.VO2JXXyUe9y -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13101) RPC throttling to protect against malicious clients
[ https://issues.apache.org/jira/browse/HBASE-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337399#comment-14337399 ] Andrew Purtell commented on HBASE-13101: Yes, we could start with a backport of HBASE-11598 > RPC throttling to protect against malicious clients > --- > > Key: HBASE-13101 > URL: https://issues.apache.org/jira/browse/HBASE-13101 > Project: HBase > Issue Type: Brainstorming > Components: regionserver >Reporter: Nick Dimiduk > > We should protect a region server from poorly designed/implemented > clients/schemas that result in a "hotspot" which overwhelms a single machine. > A client that creates a new connection for each request is an example of this > case, where META gets completely flooded and kills the RS. Master diligently > brings it up on another host, which sends the traffic along to the next > victim, and will slowly bring down the whole cluster. > My suggestion is rate-limiting per client, implemented at the RPC level, but > I'm looking for other suggestions. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
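Per-client rate limiting at the RPC level, as suggested in this issue, is commonly implemented as a token bucket; a minimal self-contained sketch (hypothetical names, not tied to HBase's quota code from HBASE-11598):

```java
// Hedged sketch of a token-bucket rate limiter: each client gets a bucket
// that refills at a steady rate; an RPC is admitted only if a token is free.
public class TokenBucket {
    private final double capacity;      // max burst size in tokens
    private final double refillPerNano; // refill rate converted to tokens/ns
    private double tokens;
    private long lastNanos;

    public TokenBucket(double capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = refillPerSecond / 1e9;
        this.tokens = capacity;
        this.lastNanos = System.nanoTime();
    }

    /** Non-blocking: take one token if available, else reject the request. */
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastNanos) * refillPerNano);
        lastNanos = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TokenBucket bucket = new TokenBucket(3, 1.0); // burst of 3, then 1 rpc/sec
        int admitted = 0;
        for (int i = 0; i < 10; i++) {
            if (bucket.tryAcquire()) admitted++;
        }
        System.out.println("admitted " + admitted + " of 10 back-to-back requests");
    }
}
```

A server would keep one bucket per client identity (e.g. keyed by remote user or address) and reject or delay calls whose bucket is empty, which contains a hotspotting client without penalizing well-behaved ones.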
[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Lawlor updated HBASE-11544: Status: Patch Available (was: Open) > [Ergonomics] hbase.client.scanner.caching is dogged and will try to return > batch even if it means OOME > -- > > Key: HBASE-11544 > URL: https://issues.apache.org/jira/browse/HBASE-11544 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: Jonathan Lawlor >Priority: Critical > Labels: beginner > Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, > HBASE-11544-v3.patch, HBASE-11544-v4.patch > > > Running some tests, I set hbase.client.scanner.caching=1000. Dataset has > large cells. I kept OOME'ing. > Serverside, we should measure how much we've accumulated and return to the > client whatever we've gathered once we pass out a certain size threshold > rather than keep accumulating till we OOME. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
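The server-side remedy described in the issue — stop accumulating and return whatever has been gathered once a size threshold is crossed, rather than dogmatically honoring the caching count — can be sketched with hypothetical names as:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch: accumulate scan results until either the requested
// row count is met or an accumulated-size threshold is crossed, whichever
// comes first, so large cells cannot force an OOME-sized response.
public class SizeLimitedBatcher {
    public static List<byte[]> nextBatch(Iterator<byte[]> rows, int maxRows, long maxBytes) {
        List<byte[]> batch = new ArrayList<>();
        long accumulated = 0;
        while (rows.hasNext() && batch.size() < maxRows && accumulated < maxBytes) {
            byte[] row = rows.next();
            batch.add(row);
            accumulated += row.length;
        }
        return batch;
    }

    public static void main(String[] args) {
        List<byte[]> data = new ArrayList<>();
        for (int i = 0; i < 1000; i++) data.add(new byte[1024]); // 1 KB "cells"
        // caching=1000 would demand all 1000 rows; the 8 KB cap returns early.
        List<byte[]> batch = nextBatch(data.iterator(), 1000, 8 * 1024);
        System.out.println(batch.size()); // 8: size cap hit long before 1000 rows
    }
}
```

The client then issues further next() calls until the scan is exhausted, so a large caching value degrades gracefully instead of blowing out server memory.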
[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Lawlor updated HBASE-11544: Attachment: (was: HBASE-11544-v5.patch) > [Ergonomics] hbase.client.scanner.caching is dogged and will try to return > batch even if it means OOME > -- > > Key: HBASE-11544 > URL: https://issues.apache.org/jira/browse/HBASE-11544 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: Jonathan Lawlor >Priority: Critical > Labels: beginner > Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, > HBASE-11544-v3.patch, HBASE-11544-v4.patch > > > Running some tests, I set hbase.client.scanner.caching=1000. Dataset has > large cells. I kept OOME'ing. > Serverside, we should measure how much we've accumulated and return to the > client whatever we've gathered once we pass out a certain size threshold > rather than keep accumulating till we OOME. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Lawlor updated HBASE-11544: Attachment: HBASE-11544-v4.patch Whoops, wrong patch posted before... correct one here > [Ergonomics] hbase.client.scanner.caching is dogged and will try to return > batch even if it means OOME > -- > > Key: HBASE-11544 > URL: https://issues.apache.org/jira/browse/HBASE-11544 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: Jonathan Lawlor >Priority: Critical > Labels: beginner > Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, > HBASE-11544-v3.patch, HBASE-11544-v4.patch > > > Running some tests, I set hbase.client.scanner.caching=1000. Dataset has > large cells. I kept OOME'ing. > Serverside, we should measure how much we've accumulated and return to the > client whatever we've gathered once we pass out a certain size threshold > rather than keep accumulating till we OOME. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Lawlor updated HBASE-11544: Status: Open (was: Patch Available) > [Ergonomics] hbase.client.scanner.caching is dogged and will try to return > batch even if it means OOME > -- > > Key: HBASE-11544 > URL: https://issues.apache.org/jira/browse/HBASE-11544 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: Jonathan Lawlor >Priority: Critical > Labels: beginner > Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, > HBASE-11544-v3.patch, HBASE-11544-v5.patch > > > Running some tests, I set hbase.client.scanner.caching=1000. Dataset has > large cells. I kept OOME'ing. > Serverside, we should measure how much we've accumulated and return to the > client whatever we've gathered once we pass out a certain size threshold > rather than keep accumulating till we OOME. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Lawlor updated HBASE-11544: Status: Patch Available (was: Open) > [Ergonomics] hbase.client.scanner.caching is dogged and will try to return > batch even if it means OOME > -- > > Key: HBASE-11544 > URL: https://issues.apache.org/jira/browse/HBASE-11544 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: Jonathan Lawlor >Priority: Critical > Labels: beginner > Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, > HBASE-11544-v3.patch, HBASE-11544-v5.patch > > > Running some tests, I set hbase.client.scanner.caching=1000. Dataset has > large cells. I kept OOME'ing. > Serverside, we should measure how much we've accumulated and return to the > client whatever we've gathered once we pass out a certain size threshold > rather than keep accumulating till we OOME. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Lawlor updated HBASE-11544: Attachment: HBASE-11544-v5.patch New patch to reflect the most recent feedback from ReviewBoard. The failures that have been seen with respect to TestAcidGuarantees seem to be unrelated and have been called out in HBASE-13097. One of the more significant changes that this patch introduces is a rework of the return type of InternalScanner#next(). Rather than simply returning a boolean, a state object is now returned. This allows callers of InternalScanner#next() to determine important state information about the scanner. It also helps us avoid unnecessary duplication of size calculations. > [Ergonomics] hbase.client.scanner.caching is dogged and will try to return > batch even if it means OOME > -- > > Key: HBASE-11544 > URL: https://issues.apache.org/jira/browse/HBASE-11544 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: Jonathan Lawlor >Priority: Critical > Labels: beginner > Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, > HBASE-11544-v3.patch, HBASE-11544-v5.patch > > > Running some tests, I set hbase.client.scanner.caching=1000. Dataset has > large cells. I kept OOME'ing. > Serverside, we should measure how much we've accumulated and return to the > client whatever we've gathered once we pass out a certain size threshold > rather than keep accumulating till we OOME. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
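The return-type rework described in the comment above — a state object instead of a bare boolean from InternalScanner#next() — can be illustrated with a minimal sketch. The class, enum, and method names below are guesses for illustration only, not the actual API introduced by the patch:

```java
// Hypothetical sketch: a state object lets callers tell *why* the scanner
// stopped (exhausted vs. size limit hit) and carries the size already
// computed server-side, so callers need not recalculate it.
public class ScannerState {
    enum State { MORE_VALUES, NO_MORE_VALUES, SIZE_LIMIT_REACHED }

    private final State state;
    private final long resultSize;

    ScannerState(State state, long resultSize) {
        this.state = state;
        this.resultSize = resultSize;
    }

    // The old boolean answer is still recoverable from the richer state.
    boolean hasMoreValues() {
        return state == State.MORE_VALUES || state == State.SIZE_LIMIT_REACHED;
    }

    // True when the batch was cut short by the size threshold.
    boolean partialResult() {
        return state == State.SIZE_LIMIT_REACHED;
    }

    long getResultSize() {
        return resultSize;
    }
}
```

The key design point is that a boolean collapses "size limit hit, more rows remain" and "scan complete" into the same answer, while a state object keeps them distinct.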
[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Lawlor updated HBASE-11544: Status: Open (was: Patch Available) > [Ergonomics] hbase.client.scanner.caching is dogged and will try to return > batch even if it means OOME > -- > > Key: HBASE-11544 > URL: https://issues.apache.org/jira/browse/HBASE-11544 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: Jonathan Lawlor >Priority: Critical > Labels: beginner > Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, > HBASE-11544-v3.patch > > > Running some tests, I set hbase.client.scanner.caching=1000. Dataset has > large cells. I kept OOME'ing. > Serverside, we should measure how much we've accumulated and return to the > client whatever we've gathered once we pass out a certain size threshold > rather than keep accumulating till we OOME. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13091) Split ZK Quorum on Master WebUI
[ https://issues.apache.org/jira/browse/HBASE-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337340#comment-14337340 ] Enis Soztutar commented on HBASE-13091: --- Why don't we put a max width limit on the table column in HTML and let the browser deal with the splitting? We cannot do the custom logic for every row. > Split ZK Quorum on Master WebUI > --- > > Key: HBASE-13091 > URL: https://issues.apache.org/jira/browse/HBASE-13091 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.1, 0.98.10.1 >Reporter: Jean-Marc Spaggiari >Assignee: Jean-Marc Spaggiari >Priority: Minor > Attachments: HBASE-13091-v0-trunk.patch, HBASE-13091-v1-trunk.patch, > screenshot.png > > > When using several ZK servers, the quorum string creates a very large > column on the Master WebUI and so squeezes the other columns, splitting all the lines and creating > tall cells. > Splitting the ZK quorum with one server per line will make it nicer and easier to > read. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0
[ https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337333#comment-14337333 ] Enis Soztutar commented on HBASE-13102: --- +1. > Fix Pseudo-distributed Mode which was broken in 1.0.0 > - > > Key: HBASE-13102 > URL: https://issues.apache.org/jira/browse/HBASE-13102 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0, 1.1.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.0.1, 1.1.0 > > Attachments: HBASE-13102.patch > > > {code} > 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname > of regionserver cannot be set to localhost in a fully-distributed setup > because it won't be reachable. See "Getting Started" for more information. > 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting > java.lang.RuntimeException: Failed construction of Master: class > org.apache.hadoop.hbase.master.HMaster > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) > at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065) > Caused by: java.io.IOException: The hostname of regionserver cannot be set to > localhost in a fully-distributed setup because it won't be reachable. See > "Getting Started" for more information. 
> at > org.apache.hadoop.hbase.regionserver.RSRpcServices.(RSRpcServices.java:793) > at > org.apache.hadoop.hbase.master.MasterRpcServices.(MasterRpcServices.java:198) > at > org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:500) > at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:337) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046) > ... 5 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13086) Show ZK root node on Master WebUI
[ https://issues.apache.org/jira/browse/HBASE-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337321#comment-14337321 ] Andrew Purtell commented on HBASE-13086: Thanks [~ndimiduk] > Show ZK root node on Master WebUI > - > > Key: HBASE-13086 > URL: https://issues.apache.org/jira/browse/HBASE-13086 > Project: HBase > Issue Type: Improvement > Components: master >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: 13068.jpg, HBASE-13068.00.patch, > HBASE-13086-0.98.addendum0.patch > > > Currently we show a well-formed ZK quorum on the master webUI but not the > root node. Root node can be changed based on deployment, so we should list it > here explicitly. This information is helpful for folks playing around with > phoenix. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13104) ZooKeeper session timeout cannot be changed for standalone HBase
[ https://issues.apache.org/jira/browse/HBASE-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-13104: --- Fix Version/s: 0.98.11 Assignee: Alex Araujo > ZooKeeper session timeout cannot be changed for standalone HBase > > > Key: HBASE-13104 > URL: https://issues.apache.org/jira/browse/HBASE-13104 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 0.98.10.1 >Reporter: Alex Araujo >Assignee: Alex Araujo > Fix For: 0.98.11 > > > It's not possible to increase the ZooKeeper session timeout in standalone > HBase due to a hardcoded 10s timeout in HMasterCommandLine: > https://github.com/apache/hbase/blob/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L176 > In trunk you can append .localHBaseCluster to the ZK session timeout property > name to change the timeout: > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L169-171 > We should allow changing the timeout in 0.98 and other versions where it's > not possible to do so. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13104) ZooKeeper session timeout cannot be changed for standalone HBase
Alex Araujo created HBASE-13104: --- Summary: ZooKeeper session timeout cannot be changed for standalone HBase Key: HBASE-13104 URL: https://issues.apache.org/jira/browse/HBASE-13104 Project: HBase Issue Type: Bug Components: master Affects Versions: 0.98.10.1 Reporter: Alex Araujo It's not possible to increase the ZooKeeper session timeout in standalone HBase due to a hardcoded 10s timeout in HMasterCommandLine: https://github.com/apache/hbase/blob/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L176 In trunk you can append .localHBaseCluster to the ZK session timeout property name to change the timeout: https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L169-171 We should allow changing the timeout in 0.98 and other versions where it's not possible to do so. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
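The trunk-side lookup described above — a `.localHBaseCluster`-suffixed variant of the ZK session timeout property taking precedence over the base property, with a hard-coded default as the last resort — can be sketched with a plain map standing in for the Hadoop Configuration. This is an illustrative sketch, not the actual HMasterCommandLine code:

```java
import java.util.Map;

// Sketch of the suffixed-property fallback: prefer the
// ".localHBaseCluster" variant, then the base property, then a
// hard-coded default (10s, matching the value cited in the issue).
public class LocalClusterTimeout {
    static final int DEFAULT_TIMEOUT_MS = 10 * 1000;

    static int sessionTimeout(Map<String, Integer> conf) {
        Integer local = conf.get("zookeeper.session.timeout.localHBaseCluster");
        if (local != null) {
            return local;
        }
        Integer base = conf.get("zookeeper.session.timeout");
        if (base != null) {
            return base;
        }
        return DEFAULT_TIMEOUT_MS;
    }
}
```

The suffix trick lets a standalone cluster be tuned independently without changing the meaning of the base property for distributed deployments.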
[jira] [Commented] (HBASE-13086) Show ZK root node on Master WebUI
[ https://issues.apache.org/jira/browse/HBASE-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337272#comment-14337272 ] Hadoop QA commented on HBASE-13086: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12700839/HBASE-13086-0.98.addendum0.patch against 0.98 branch at commit c651271f5759f39f28209a50ab88a62d86b7. ATTACHMENT ID: 12700839 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 4 new or modified tests. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.1 2.5.2 2.6.0) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 25 warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:red}-1 findbugs{color}. The patch appears to introduce 4 new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/checkstyle-aggregate.html Javadoc warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/patchJavadocWarnings.txt Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//console This message is automatically generated. > Show ZK root node on Master WebUI > - > > Key: HBASE-13086 > URL: https://issues.apache.org/jira/browse/HBASE-13086 > Project: HBase > Issue Type: Improvement > Components: master >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11 > > Attachments: 13068.jpg, HBASE-13068.00.patch, > HBASE-13086-0.98.addendum0.patch > > > Currently we show a well-formed ZK quorum on the master webUI but not the > root node. Root node can be changed based on deployment, so we should list it > here explicitly. This information is helpful for folks playing around with > phoenix. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0
[ https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337252#comment-14337252 ] Esteban Gutierrez commented on HBASE-13102: --- +1 > Fix Pseudo-distributed Mode which was broken in 1.0.0 > - > > Key: HBASE-13102 > URL: https://issues.apache.org/jira/browse/HBASE-13102 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0, 1.1.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.0.1, 1.1.0 > > Attachments: HBASE-13102.patch > > > {code} > 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname > of regionserver cannot be set to localhost in a fully-distributed setup > because it won't be reachable. See "Getting Started" for more information. > 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting > java.lang.RuntimeException: Failed construction of Master: class > org.apache.hadoop.hbase.master.HMaster > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) > at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065) > Caused by: java.io.IOException: The hostname of regionserver cannot be set to > localhost in a fully-distributed setup because it won't be reachable. See > "Getting Started" for more information. 
> at > org.apache.hadoop.hbase.regionserver.RSRpcServices.(RSRpcServices.java:793) > at > org.apache.hadoop.hbase.master.MasterRpcServices.(MasterRpcServices.java:198) > at > org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:500) > at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:337) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046) > ... 5 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0
[ https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-13102: -- Attachment: HBASE-13102.patch This is what worked for me while debugging a different issue. However it will mean that a better solution needs to be found like [~esteban] suggests. > Fix Pseudo-distributed Mode which was broken in 1.0.0 > - > > Key: HBASE-13102 > URL: https://issues.apache.org/jira/browse/HBASE-13102 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0, 1.1.0 >Reporter: Elliott Clark > Fix For: 2.0.0, 1.0.1, 1.1.0 > > Attachments: HBASE-13102.patch > > > {code} > 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname > of regionserver cannot be set to localhost in a fully-distributed setup > because it won't be reachable. See "Getting Started" for more information. > 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting > java.lang.RuntimeException: Failed construction of Master: class > org.apache.hadoop.hbase.master.HMaster > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) > at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065) > Caused by: java.io.IOException: The hostname of regionserver cannot be set to > localhost in a fully-distributed setup because it won't be reachable. See > "Getting Started" for more information. 
> at > org.apache.hadoop.hbase.regionserver.RSRpcServices.(RSRpcServices.java:793) > at > org.apache.hadoop.hbase.master.MasterRpcServices.(MasterRpcServices.java:198) > at > org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:500) > at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:337) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046) > ... 5 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0
[ https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-13102: -- Status: Patch Available (was: Open) > Fix Pseudo-distributed Mode which was broken in 1.0.0 > - > > Key: HBASE-13102 > URL: https://issues.apache.org/jira/browse/HBASE-13102 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0, 1.1.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.0.1, 1.1.0 > > Attachments: HBASE-13102.patch > > > {code} > 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname > of regionserver cannot be set to localhost in a fully-distributed setup > because it won't be reachable. See "Getting Started" for more information. > 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting > java.lang.RuntimeException: Failed construction of Master: class > org.apache.hadoop.hbase.master.HMaster > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) > at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065) > Caused by: java.io.IOException: The hostname of regionserver cannot be set to > localhost in a fully-distributed setup because it won't be reachable. See > "Getting Started" for more information. 
> at > org.apache.hadoop.hbase.regionserver.RSRpcServices.(RSRpcServices.java:793) > at > org.apache.hadoop.hbase.master.MasterRpcServices.(MasterRpcServices.java:198) > at > org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:500) > at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:337) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046) > ... 5 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0
[ https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark reassigned HBASE-13102: - Assignee: Elliott Clark > Fix Pseudo-distributed Mode which was broken in 1.0.0 > - > > Key: HBASE-13102 > URL: https://issues.apache.org/jira/browse/HBASE-13102 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.0, 1.1.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.0.1, 1.1.0 > > Attachments: HBASE-13102.patch > > > {code} > 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname > of regionserver cannot be set to localhost in a fully-distributed setup > because it won't be reachable. See "Getting Started" for more information. > 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting > java.lang.RuntimeException: Failed construction of Master: class > org.apache.hadoop.hbase.master.HMaster > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198) > at > org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) > at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065) > Caused by: java.io.IOException: The hostname of regionserver cannot be set to > localhost in a fully-distributed setup because it won't be reachable. See > "Getting Started" for more information. 
> at > org.apache.hadoop.hbase.regionserver.RSRpcServices.(RSRpcServices.java:793) > at > org.apache.hadoop.hbase.master.MasterRpcServices.(MasterRpcServices.java:198) > at > org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:500) > at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:337) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046) > ... 5 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13103) [ergonomics] add shell,API to "reshape" a table
[ https://issues.apache.org/jira/browse/HBASE-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337249#comment-14337249 ] Enis Soztutar commented on HBASE-13103: --- Related, Accumulo has a merge command which merges a range into a single tablet. We can do this and the merge range together for max flexibility. > [ergonomics] add shell,API to "reshape" a table > --- > > Key: HBASE-13103 > URL: https://issues.apache.org/jira/browse/HBASE-13103 > Project: HBase > Issue Type: Brainstorming > Components: Usability >Reporter: Nick Dimiduk > > Often enough, folks misjudge split points or otherwise end up with a > suboptimal number of regions. We should have an automated, reliable way to > "reshape" or "balance" a table's region boundaries. This would be for tables > that contain existing data. This might look like: > {noformat} > Admin#reshapeTable(TableName, int numSplits); > {noformat} > or from the shell: > {noformat} > > reshape TABLE, numSplits > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
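As a rough illustration of what a reshape could compute, the sketch below interpolates evenly spaced split points across a keyspace. The proposed Admin#reshapeTable API does not exist yet, and this toy version using integer keys in place of real row-key byte arrays is purely hypothetical:

```java
// Hypothetical sketch: given the table's start and end of keyspace and a
// desired region count, produce numSplits - 1 evenly spaced boundaries.
// Integers stand in for row keys to keep the arithmetic obvious.
public class Reshape {
    static int[] splitPoints(int startKey, int endKey, int numSplits) {
        int[] points = new int[numSplits - 1];
        int span = endKey - startKey;
        for (int i = 1; i < numSplits; i++) {
            // the i-th boundary sits at fraction i/numSplits of the keyspace
            points[i - 1] = startKey + (int) ((long) span * i / numSplits);
        }
        return points;
    }
}
```

A real implementation would also have to merge or split existing regions to land on the new boundaries, which is where the Accumulo-style range merge mentioned above comes in.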