[jira] [Commented] (HBASE-9809) RegionTooBusyException should provide region name which was too busy
[ https://issues.apache.org/jira/browse/HBASE-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13818428#comment-13818428 ] Hudson commented on HBASE-9809: --- FAILURE: Integrated in HBase-0.94-security #333 (See [https://builds.apache.org/job/HBase-0.94-security/333/]) HBASE-9809 RegionTooBusyException should provide region name which was too busy (tedyu: rev 1540442) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java RegionTooBusyException should provide region name which was too busy Key: HBASE-9809 URL: https://issues.apache.org/jira/browse/HBASE-9809 Project: HBase Issue Type: Bug Affects Versions: 0.94.14 Reporter: Ted Yu Assignee: Gustavo Anatoly Fix For: 0.94.14 Attachments: HBASE-9809.patch Under this thread: http://search-hadoop.com/m/WSfKp1yJOFJ, John showed log from LoadIncrementalHFiles where the following is a snippet: {code} 04:18:07,110 INFO LoadIncrementalHFiles:451 - Trying to load hfile=hdfs://pc08.pool.ifis.uni-luebeck.de:8020/tmp/bulkLoadDirectory/PO_S_rowBufferHFile/Hexa/_tmp/PO_S,9.bottom first=http://purl.org/dc/elements/1.1/title,emulates drylot births^^http://www.w3.org/2001/XMLSchema#string last=http://purl.org/dc/e$ org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions: Sun Oct 20 04:15:50 CEST 2013, org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@4cfdfc98, org.apache.hadoop.hbase.RegionTooBusyException: org.apache.hadoop.hbase.RegionTooBusyException: failed to get a lock in 6ms at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5778) at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5764) at org.apache.hadoop.hbase.regionserver.HRegion.startBulkRegionOperation(HRegion.java:5723) at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3534) at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3517) at 
org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFiles(HRegionServer.java:2793) at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) {code} Looking at the above, it is not immediately clear which region was busy. The region name should be included in the exception so that the user can correlate it with the region server where the problem occurred. -- This message was sent by Atlassian JIRA (v6.1#6144)
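The improvement requested above can be sketched roughly as follows. This is an illustration only, not the committed HBASE-9809 patch: the helper shape and region name are assumed, and the exception class is a local stand-in for org.apache.hadoop.hbase.RegionTooBusyException.

```java
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class RegionLockDemo {
    // Local stand-in for org.apache.hadoop.hbase.RegionTooBusyException.
    static class RegionTooBusyException extends IOException {
        RegionTooBusyException(String msg) { super(msg); }
    }

    // Mirrors the shape of HRegion.lock(): wait a bounded time for the lock,
    // and on failure report WHICH region was too busy.
    static void lock(Lock lock, String regionName, long waitMs)
            throws RegionTooBusyException {
        boolean acquired;
        try {
            acquired = lock.tryLock(waitMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RegionTooBusyException(
                "interrupted waiting for lock on region " + regionName);
        }
        if (!acquired) {
            throw new RegionTooBusyException(
                "failed to get a lock in " + waitMs + "ms on region " + regionName);
        }
    }

    public static void main(String[] args) throws Exception {
        Lock l = new ReentrantLock();
        lock(l, "usertable,,1383807600000.cafebabe.", 100);  // uncontended: succeeds
        System.out.println("lock acquired");
    }
}
```

With the region name in the message, a LoadIncrementalHFiles failure like the one in the log snippet points directly at the server hosting that region.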
[jira] [Commented] (HBASE-9809) RegionTooBusyException should provide region name which was too busy
[ https://issues.apache.org/jira/browse/HBASE-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818432#comment-13818432 ] Hudson commented on HBASE-9809: --- SUCCESS: Integrated in HBase-0.94 #1199 (See [https://builds.apache.org/job/HBase-0.94/1199/]) HBASE-9809 RegionTooBusyException should provide region name which was too busy (tedyu: rev 1540442) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9809) RegionTooBusyException should provide region name which was too busy
[ https://issues.apache.org/jira/browse/HBASE-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818449#comment-13818449 ] Gustavo Anatoly commented on HBASE-9809: Thank you too. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (HBASE-9841) Provide tracking URL for cluster which is immune to master failover
[ https://issues.apache.org/jira/browse/HBASE-9841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved HBASE-9841. --- Resolution: Duplicate Provide tracking URL for cluster which is immune to master failover --- Key: HBASE-9841 URL: https://issues.apache.org/jira/browse/HBASE-9841 Project: HBase Issue Type: Task Reporter: Ted Yu Currently each master provides a web UI whose URL does not survive master failure. Here is the use case from Hoya's point of view: when starting the HBase cluster, Hoya requests two containers for HMaster and multiple containers for region servers. It is unknown at the time of the request which nodes will host the HMasters. After the cluster starts running, the container for the active master may go down. This makes the standby master active, and meanwhile YARN starts another container hosting an HMaster to serve as the new standby master. It is desirable for HBase to provide an API that tells Hoya the location of the active master. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9046) Some region servers keep using an older version of coprocessor
[ https://issues.apache.org/jira/browse/HBASE-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benoit Sigoure updated HBASE-9046: -- Affects Version/s: 0.96.0 Some region servers keep using an older version of coprocessor --- Key: HBASE-9046 URL: https://issues.apache.org/jira/browse/HBASE-9046 Project: HBase Issue Type: Bug Components: Coprocessors Affects Versions: 0.94.8, 0.96.0 Environment: FreeBSD 8.2-RELEASE FreeBSD 8.2-RELEASE #0 r220198: Thu Mar 31 21:46:45 PDT 2011 amd64 java version 1.6.0_07 Diablo Java(TM) SE Runtime Environment (build 1.6.0_07-b02) Diablo Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode) hbase: 0.94.8, r1485407 hadoop: 1.0.4, r1393290 Reporter: iain wright Priority: Minor My team and another user from the mailing list have run into an issue where replacing the coprocessor jar in HDFS and reloading the table does not load the latest jar. It may load the latest version on some percentage of RS but not all of them. This may be a config oversight or a lack of understanding of a caching mechanism that has a purge capability, but I thought I would log it here for confirmation. Workaround is to name the coprocessor JAR uniquely, place in HDFS, and re-enable the table using the new jar's name. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9046) Some region servers keep using an older version of coprocessor
[ https://issues.apache.org/jira/browse/HBASE-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818526#comment-13818526 ] Benoit Sigoure commented on HBASE-9046: --- I think the problem is that {{CoprocessorClassLoader.classLoadersCache}} retains the previous class loader in its cache. This is a cache that maps the path of the .jar file to its corresponding {{CoprocessorClassLoader}}. The values in the cache are weak references, but that doesn't guarantee that they will go away in a timely fashion. Therefore, if you edit the schema of your table to unset the coprocessor and re-set it, most of the time you will get the same {{CoprocessorClassLoader}} as before and the new jar won't be loaded. I can reproduce this trivially and consistently on a single-node non-distributed HBase instance. -- This message was sent by Atlassian JIRA (v6.1#6144)
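The caching behavior Benoit describes can be shown with a standalone sketch. The field and method names here are illustrative stand-ins, not HBase's actual code: a path-keyed cache with weakly referenced loaders keeps handing back the old loader until GC happens to clear the reference.

```java
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;

public class WeakLoaderCacheDemo {
    // Illustrative analogue of CoprocessorClassLoader.classLoadersCache:
    // jar path -> weakly referenced class loader.
    static final ConcurrentHashMap<String, WeakReference<ClassLoader>> cache =
        new ConcurrentHashMap<>();

    static ClassLoader getClassLoader(String jarPath) {
        WeakReference<ClassLoader> ref = cache.get(jarPath);
        ClassLoader cl = (ref == null) ? null : ref.get();
        if (cl == null) {
            // In HBase this would construct a real CoprocessorClassLoader for the jar.
            cl = new ClassLoader() {};
            cache.put(jarPath, new WeakReference<>(cl));
        }
        return cl;
    }

    public static void main(String[] args) {
        ClassLoader first = getClassLoader("/hbase/cp/coproc.jar");
        // The jar is replaced on HDFS here; without purging the cache, the same
        // loader (and the OLD classes) comes back as long as it hasn't been GC'd.
        ClassLoader second = getClassLoader("/hbase/cp/coproc.jar");
        System.out.println(first == second);
    }
}
```

A weak reference is only cleared after the loader becomes unreachable and a GC runs, which is why re-setting the coprocessor usually returns the stale loader.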
[jira] [Commented] (HBASE-9046) Some region servers keep using an older version of coprocessor
[ https://issues.apache.org/jira/browse/HBASE-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818530#comment-13818530 ] Benoit Sigoure commented on HBASE-9046: --- I can further confirm this because in my current environment I use a single coprocessor, so I devised a workaround for this bug: my coprocessor class has a {{static int}} I use as a reference count. Every time my coprocessor's {{start}} is called, I increment it, and in {{stop}} I decrement it. In {{stop}}, when the count drops to 0, I call {{CoprocessorClassLoader.clearCache()}}. This fixes the problem for me. This trick doesn't work with multiple coprocessors, because {{clearCache()}} would clear everything. Also note that {{clearCache()}} is only exposed for testing purposes, so it's technically not part of the public API. Another workaround I can think of (but haven't tried) would be to use reflection to access the underlying map and clear out the entry. I think the right way to fix this bug is to maintain the reference count by doing the increment/decrement from the {{startup()}} and {{shutdown()}} methods of {{CoprocessorHost$Environment}}. -- This message was sent by Atlassian JIRA (v6.1#6144)
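The reference-count workaround described in the comment above can be sketched like this. It is a hedged illustration: {{clearCache()}} here is a local stand-in for CoprocessorClassLoader.clearCache() (which is test-only API), and the class shape is assumed, not the commenter's actual code.

```java
public class RefCountedCoprocessor {
    static int refCount = 0;
    static boolean cacheCleared = false;  // observable stand-in for the real purge

    // Stand-in for CoprocessorClassLoader.clearCache().
    static void clearCache() { cacheCleared = true; }

    // Called from the coprocessor's start(): one more live instance.
    public static synchronized void start() {
        refCount++;
    }

    // Called from the coprocessor's stop(): when the last instance stops,
    // purge the loader cache so the next load picks up the new jar.
    public static synchronized void stop() {
        if (--refCount == 0) {
            clearCache();  // clears ALL cached loaders: safe only with one coprocessor
        }
    }

    public static void main(String[] args) {
        start();
        start();
        stop();   // still one instance alive: no purge yet
        stop();   // last instance gone: cache purged
        System.out.println(cacheCleared);
    }
}
```

As the comment notes, this only works when a single coprocessor is deployed, since the purge is global.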
[jira] [Updated] (HBASE-9939) All HBase client threads are locked out on network failure
[ https://issues.apache.org/jira/browse/HBASE-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-9939: - Priority: Major (was: Minor) Fix Version/s: 0.94.14 All HBase client threads are locked out on network failure -- Key: HBASE-9939 URL: https://issues.apache.org/jira/browse/HBASE-9939 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.94.6 Reporter: Rohit Joshi Fix For: 0.94.14 Under load, when I disabled the network interface, all HBase client threads were locked out. I was expecting these threads to be released based on client.operation.timeout and rpc.timeout. Here is a link to the thread dump: https://www.dropbox.com/s/y1ng3yoywq09x2u/HBaseClient_Threaddump.txt -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9939) All HBase client threads are locked out on network failure
[ https://issues.apache.org/jira/browse/HBASE-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818556#comment-13818556 ] Lars Hofhansl commented on HBASE-9939: -- Good call. Looks like we should just timeout accordingly on the Future.get() call. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9939) All HBase client threads are locked out on network failure
[ https://issues.apache.org/jira/browse/HBASE-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818561#comment-13818561 ] Lars Hofhansl commented on HBASE-9939: -- Although one would expect the callable itself to time out and throw an exception. Since you unplugged the network, you're probably mostly seeing attempts to reestablish the ZK session. I've seen this with our clients; we observed that if the entire cluster is down it can take (with the 0.94 defaults) up to 20 minutes of retries before the client eventually times out. Some of this was improved by avoiding nested retry loops (see HBASE-6326), but it still takes a long time with the defaults. In our systems we use different ZK timeouts and retry counts in the server (where these are used for server-to-server communication) and in the client (where we prefer fast timeouts so that we do not tie up our AppServer threads). This looks a bit different though: {code} hbase-tablepool-7-thread-4 id=43 idx=0xc0 tid=22572 prio=5 alive, waiting, native_blocked, daemon -- Waiting for notification on: org/apache/hadoop/hbase/ipc/HBaseClient$Call@0x058F1F38[fat lock] at jrockit/vm/Threads.waitForNotifySignal(JLjava/lang/Object;)Z(Native Method) at java/lang/Object.wait(J)V(Native Method) at java/lang/Object.wait(Object.java:485) at org/apache/hadoop/hbase/ipc/HBaseClient.call(HBaseClient.java:981) ^-- Lock released while waiting: org/apache/hadoop/hbase/ipc/HBaseClient$Call@0x058F1F38[fat lock] at org/apache/hadoop/hbase/ipc/SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:104) at $Proxy7.multi(Lorg/apache/hadoop/hbase/client/MultiAction;)Lorg/apache/hadoop/hbase/client/MultiResponse;(Unknown Source) at org/apache/hadoop/hbase/client/HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1398) at org/apache/hadoop/hbase/client/HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1396) at org/apache/hadoop/hbase/client/ServerCallable.withoutRetries(ServerCallable.java:210) at org/apache/hadoop/hbase/client/HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1405) {code} So we need to look into this. -- This message was sent by Atlassian JIRA (v6.1#6144)
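The fix direction suggested earlier in the thread, bounding the wait with Future.get(timeout) instead of waiting forever, can be sketched as follows. This is a generic illustration of the pattern, not the HBase client's actual code; the method name and pool are assumptions.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedRpcDemo {
    // Submit the RPC and wait at most timeoutMs; on timeout, cancel so a dead
    // network cannot pin the caller's thread indefinitely.
    static <T> T callWithTimeout(ExecutorService pool, Callable<T> rpc, long timeoutMs)
            throws Exception {
        Future<T> f = pool.submit(rpc);
        try {
            return f.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true);  // interrupt the stuck call instead of leaking the thread
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        System.out.println(callWithTimeout(pool, () -> "multi-response", 1000));
        pool.shutdownNow();
    }
}
```

The key point is that the caller's wait is bounded by the get() timeout even if the underlying socket never returns.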
[jira] [Commented] (HBASE-9775) Client write path perf issues
[ https://issues.apache.org/jira/browse/HBASE-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818576#comment-13818576 ] Jean-Marc Spaggiari commented on HBASE-9775: BTW, the numbers are rows/min for the first and rows/sec for the others. Client write path perf issues - Key: HBASE-9775 URL: https://issues.apache.org/jira/browse/HBASE-9775 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.96.0 Reporter: Elliott Clark Priority: Critical Attachments: 9775.rig.txt, 9775.rig.v2.patch, 9775.rig.v3.patch, Charts Search Cloudera Manager - ITBLL.png, Charts Search Cloudera Manager.png, hbase-9775.patch, job_run.log, short_ycsb.png, ycsb.png, ycsb_insert_94_vs_96.png Testing on larger clusters has not shown the desired throughput increases. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (HBASE-9940) PerformanceEvaluation should have a test with many table options on (Bloom, compression, FAST_DIFF, etc.)
Jean-Marc Spaggiari created HBASE-9940: -- Summary: PerformanceEvaluation should have a test with many table options on (Bloom, compression, FAST_DIFF, etc.) Key: HBASE-9940 URL: https://issues.apache.org/jira/browse/HBASE-9940 Project: HBase Issue Type: Bug Components: Performance Affects Versions: 0.94.13, 0.96.0 Reporter: Jean-Marc Spaggiari Priority: Minor -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9939) All HBase client threads are locked out on network failure
[ https://issues.apache.org/jira/browse/HBASE-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818581#comment-13818581 ] Rohit Joshi commented on HBASE-9939: Lars, thanks for looking into this. To release the client threads running under the WebLogic container, I used the following configuration, and it is able to release threads in under 100 seconds (zookeeper timeout + retry + HBase timeout + retry). But I was able to reproduce the thread lockout twice, and the threads were not released for a long time.
hbase.rpc.timeout=3000
hbase.client.operation.timeout=3000
hbase.client.pause=100
hbase.client.retries.number=2
zookeeper.session.timeout=1
zookeeper.recovery.retry.intervalmill=100
zookeeper.recovery.retry=1
-- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9935) Slight perf improvement: Avoid KeyValue.getRowLength() at some places
[ https://issues.apache.org/jira/browse/HBASE-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818605#comment-13818605 ] Lars Hofhansl commented on HBASE-9935: -- Did some more tests. The steady state of this is even improved when I remove the keyLength caching from KeyValue. In my case I do full scans through 25m KVs, so in each run the extra keyLength cache produces 100mb of extra garbage to be collected. I also observed a slight slowdown (mostly within the noise, though) in this scenario when I do the 2-byte rowLength caching. So my proposal is this: # remove the keyLength caching # look through the callers of getFamilyOffset/getFamilyLength, etc., and see where we can optimize this while hiding all of it in the KeyValue class # no cache for the rowLength Slight perf improvement: Avoid KeyValue.getRowLength() at some places - Key: HBASE-9935 URL: https://issues.apache.org/jira/browse/HBASE-9935 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Here's an example: {code} KeyValue.createLastOnRow( kv.getBuffer(), kv.getRowOffset(), kv.getRowLength(), kv.getBuffer(), kv.getFamilyOffset(), kv.getFamilyLength(), kv.getBuffer(), kv.getQualifierOffset(), kv.getQualifierLength()); {code} Looks harmless enough, but that actually recalculates the rowlength 5 times. And each time it needs to decode the rowlength again from the bytes of the KV. -- This message was sent by Atlassian JIRA (v6.1#6144)
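The cost being discussed, re-decoding the two-byte row length from the backing array on every getRowLength() call, can be illustrated with a standalone sketch (this is not KeyValue's actual code; the buffer layout shown is simplified):

```java
public class RowLengthDemo {
    // Decode a 2-byte big-endian length from a byte array, the kind of work
    // KeyValue.getRowLength() repeats on each call.
    static short decodeShort(byte[] buf, int off) {
        return (short) (((buf[off] & 0xff) << 8) | (buf[off + 1] & 0xff));
    }

    public static void main(String[] args) {
        byte[] kvBuf = {0x00, 0x03, 'r', 'o', 'w'};  // rowlength=3, then the row bytes
        short rowLen = decodeShort(kvBuf, 0);  // decode once into a local...
        // ...and reuse rowLen, instead of re-decoding via five accessor calls
        // as in the createLastOnRow example above.
        System.out.println(rowLen);
    }
}
```

Hoisting the decoded value into a local is the hand optimization; Lars's tests suggest the JIT often does this well enough on its own.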
[jira] [Commented] (HBASE-9935) Slight perf improvement: Avoid KeyValue.getRowLength() at some places
[ https://issues.apache.org/jira/browse/HBASE-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818625#comment-13818625 ] Lars Hofhansl commented on HBASE-9935: -- I tried a big'ish patch. Didn't see any improvements. The JVM is probably smart enough to do the right thing anyway. Unless somebody has some new ideas, I'll close this as invalid (or maybe just remove the keyLength caching). -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9935) Slight perf improvement: Avoid KeyValue.getRowLength() at some places
[ https://issues.apache.org/jira/browse/HBASE-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-9935: - Priority: Minor (was: Major) -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Assigned] (HBASE-9046) Some region servers keep using an older version of coprocessor
[ https://issues.apache.org/jira/browse/HBASE-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reassigned HBASE-9046: - Assignee: Ted Yu -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9940) PerformanceEvaluation should have a test with many table options on (Bloom, compression, FAST_DIFF, etc.)
[ https://issues.apache.org/jira/browse/HBASE-9940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818653#comment-13818653 ] Nick Dimiduk commented on HBASE-9940: - Can you be more specific as to what additional features you'd like exposed via cli options? The list so far looks like:
- bloom filters (0.96 enables row-level filters by default. We can expose this)
- -compression- (this is already supported via {{--compress=}})
- -block encodings- (this is already supported via {{--blockEncoding=}})
-- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9893) Incorrect assert condition in OrderedBytes decoding
[ https://issues.apache.org/jira/browse/HBASE-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818662#comment-13818662 ] He Liangliang commented on HBASE-9893: -- I just caught this in my application's unit test code, and this assert is valid and good for robustness. Incorrect assert condition in OrderedBytes decoding --- Key: HBASE-9893 URL: https://issues.apache.org/jira/browse/HBASE-9893 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.96.0 Reporter: He Liangliang Assignee: Nick Dimiduk Priority: Minor Attachments: HBASE-9893.patch The following assert condition is incorrect when decoding a blob var byte array. {code} assert t == 0 : "Unexpected bits remaining after decoding blob."; {code} When the number of bytes to decode is a multiple of 8 (i.e. the original number of bytes is a multiple of 7), this assert may fail. -- This message was sent by Atlassian JIRA (v6.1#6144)
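The boundary case described above comes down to length arithmetic: OrderedBytes' blob-var format packs 7 payload bits into each encoded byte, so when the original length is a multiple of 7 the encoding fills its last byte exactly, leaving no partial trailing byte for the decoder's "remaining bits" check. A rough sketch of that arithmetic (this formula is illustrative and is not copied from OrderedBytes):

```java
public class BlobVarLenDemo {
    // ceil(8*d / 7): encoded bytes needed for d decoded (original) bytes at
    // 7 payload bits per encoded byte.
    static int encodedLength(int decodedBytes) {
        return (8 * decodedBytes + 6) / 7;
    }

    public static void main(String[] args) {
        System.out.println(encodedLength(7));  // exactly full: no spare bits remain
        System.out.println(encodedLength(8));  // a partial trailing byte remains
    }
}
```

Seven decoded bytes occupy exactly eight encoded bytes (56 bits), which is the multiple-of-8 encoded length where the assert reportedly fires.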
[jira] [Commented] (HBASE-9940) PerformanceEvaluation should have a test with many table options on (Bloom, compression, FAST_DIFF, etc.)
[ https://issues.apache.org/jira/browse/HBASE-9940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818672#comment-13818672 ] Matt Corgan commented on HBASE-9940: blockSize could be valuable. If you leave blockSize set to the default 64KB, then encoded blocks with long keys and small values could really be, say, 32KB in the block cache. If you then double the blockSize setting to 128KB in order to get the encoded size to 64KB, PerformanceEvaluation will show slower random seeks because of the sequential seeking within blocks. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9816) Address review comments in HBASE-8496
[ https://issues.apache.org/jira/browse/HBASE-9816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-9816: -- Status: Open (was: Patch Available) Address review comments in HBASE-8496 - Key: HBASE-9816 URL: https://issues.apache.org/jira/browse/HBASE-9816 Project: HBase Issue Type: Bug Affects Versions: 0.98.0 Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.98.0 Attachments: HBASE-9816.patch, HBASE-9816_1.patch, HBASE-9816_1.patch, HBASE-9816_2.patch This JIRA will be used to address the review comments in HBASE-8496. Any further comments will be addressed and committed as part of this. There are already a few comments from Stack on the RB: https://reviews.apache.org/r/13311/ -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9816) Address review comments in HBASE-8496
[ https://issues.apache.org/jira/browse/HBASE-9816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-9816: -- Attachment: HBASE-9816_3.patch Latest updated patch against the latest code. Will commit later in the evening. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (HBASE-9941) The context ClassLoader isn't set while calling into a coprocessor
Benoit Sigoure created HBASE-9941: - Summary: The context ClassLoader isn't set while calling into a coprocessor Key: HBASE-9941 URL: https://issues.apache.org/jira/browse/HBASE-9941 Project: HBase Issue Type: Bug Components: Coprocessors Affects Versions: 0.96.0 Reporter: Benoit Sigoure Whenever one of the methods of a coprocessor is invoked, the context {{ClassLoader}} isn't set to be the {{CoprocessorClassLoader}}. It's only set properly when calling the coprocessor's {{start}} method. This means that if the coprocessor code attempts to load classes using the context {{ClassLoader}}, it will fail to find the classes it's looking for.
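The fix the report implies is to save the caller's context {{ClassLoader}}, install the coprocessor's loader for the duration of each invocation, and restore it afterwards. A minimal sketch (the class and its name are hypothetical, not HBase API; only the {{Thread}} context-ClassLoader calls are real JDK methods):

```java
// Installs a target ClassLoader as the thread's context loader for the
// lifetime of the scope, then restores whatever was there before.
public final class ContextClassLoaderScope implements AutoCloseable {
    private final ClassLoader previous;

    public ContextClassLoaderScope(ClassLoader target) {
        previous = Thread.currentThread().getContextClassLoader();
        Thread.currentThread().setContextClassLoader(target);
    }

    @Override
    public void close() {
        // Restore the caller's loader so the leak doesn't outlive the call.
        Thread.currentThread().setContextClassLoader(previous);
    }
}
```

Each coprocessor invocation would then be wrapped, e.g. {{try (ContextClassLoaderScope s = new ContextClassLoaderScope(cpClassLoader)) { ... invoke the hook ... }}}, mirroring what already happens around {{start}}.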
[jira] [Commented] (HBASE-7663) [Per-KV security] Visibility labels
[ https://issues.apache.org/jira/browse/HBASE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13818724#comment-13818724 ] Anoop Sam John commented on HBASE-7663: --- +1 for having a base class like Mutation. +1 for the name Query. There are many setters and getters which are common to Scan and Get, like setFilter, setTimeRange, setTimeStamp etc. All these setters return the current object for chained calls, so moving them to the base class would change their signatures! The new methods for Authorizations and ACLs can go in the superclass. [Per-KV security] Visibility labels --- Key: HBASE-7663 URL: https://issues.apache.org/jira/browse/HBASE-7663 Project: HBase Issue Type: Sub-task Components: Coprocessors, security Affects Versions: 0.98.0 Reporter: Andrew Purtell Assignee: Anoop Sam John Fix For: 0.98.0 Attachments: HBASE-7663.patch, HBASE-7663_V2.patch, HBASE-7663_V3.patch, HBASE-7663_V4.patch, HBASE-7663_V5.patch, HBASE-7663_V6.patch Implement Accumulo-style visibility labels. Consider the following design principles: - Coprocessor based implementation - Minimal to no changes to core code - Use KeyValue tags (HBASE-7448) to carry labels - Use OperationWithAttributes# {get,set}Attribute for handling visibility labels in the API - Implement a new filter for evaluating visibility labels as KVs are streamed through. This approach would be consistent in deployment and API details with other per-KV security work, supporting environments where they might both be employed, even stacked on some tables. See the parent issue for more discussion.
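The signature concern can be sketched in a few lines (class shapes are simplified stand-ins, not the real HBase API): a chainable setter returns the concrete type, so pulling it up unchanged would make {{scan.setTimeRange(a, b).setCaching(n)}} stop compiling, while a covariant override in the subclass preserves the chain.

```java
abstract class Query {
    protected long minTs = 0L, maxTs = Long.MAX_VALUE;

    // Pulled-up setter: now returns Query, so a Scan-typed chain is lost
    // unless the subclass narrows it back.
    Query setTimeRange(long min, long max) {
        this.minTs = min;
        this.maxTs = max;
        return this;
    }
}

class Scan extends Query {
    int caching = -1;

    // Covariant override narrows the return type back to Scan, keeping the
    // existing chained-call signature intact for callers.
    @Override
    Scan setTimeRange(long min, long max) {
        super.setTimeRange(min, max);
        return this;
    }

    Scan setCaching(int caching) {
        this.caching = caching;
        return this;
    }
}
```

With the override in place, {{new Scan().setTimeRange(1L, 2L).setCaching(100)}} still compiles; without it, the chain would end at the {{Query}} return type.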
[jira] [Commented] (HBASE-7663) [Per-KV security] Visibility labels
[ https://issues.apache.org/jira/browse/HBASE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13818725#comment-13818725 ] ramkrishna.s.vasudevan commented on HBASE-7663: --- +1. For now we can move the authorizations and ACL methods into that superclass. [Per-KV security] Visibility labels --- Key: HBASE-7663 URL: https://issues.apache.org/jira/browse/HBASE-7663 Project: HBase Issue Type: Sub-task Components: Coprocessors, security Affects Versions: 0.98.0 Reporter: Andrew Purtell Assignee: Anoop Sam John Fix For: 0.98.0 Attachments: HBASE-7663.patch, HBASE-7663_V2.patch, HBASE-7663_V3.patch, HBASE-7663_V4.patch, HBASE-7663_V5.patch, HBASE-7663_V6.patch Implement Accumulo-style visibility labels. Consider the following design principles: - Coprocessor based implementation - Minimal to no changes to core code - Use KeyValue tags (HBASE-7448) to carry labels - Use OperationWithAttributes# {get,set}Attribute for handling visibility labels in the API - Implement a new filter for evaluating visibility labels as KVs are streamed through. This approach would be consistent in deployment and API details with other per-KV security work, supporting environments where they might both be employed, even stacked on some tables. See the parent issue for more discussion.
[jira] [Updated] (HBASE-9935) Slight perf improvement: Avoid KeyValue.getRowLength() at some places
[ https://issues.apache.org/jira/browse/HBASE-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-9935: - Attachment: 9935-0.94.txt Here's a proposed patch. Two parts:
# remove the keyLength caching in KeyValue and save 4 bytes on every KV.
# improve the ScanQueryMatcher code. In match() we carefully decode the KeyValue manually and then go back and call kv.getTimestamp() and kv.getType(), both of which do all the decoding again.
The performance change seems to be in the noise, but it's good to save 4 bytes on an object that we are creating over and over. Slight perf improvement: Avoid KeyValue.getRowLength() at some places - Key: HBASE-9935 URL: https://issues.apache.org/jira/browse/HBASE-9935 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Priority: Minor Attachments: 9935-0.94.txt Here's an example:
{code}
KeyValue.createLastOnRow(
    kv.getBuffer(), kv.getRowOffset(), kv.getRowLength(),
    kv.getBuffer(), kv.getFamilyOffset(), kv.getFamilyLength(),
    kv.getBuffer(), kv.getQualifierOffset(), kv.getQualifierLength());
{code}
Looks harmless enough, but that actually recalculates the row length 5 times, and each time it needs to decode the row length again from the bytes of the KV.
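The pattern behind the patch can be shown with a self-contained toy (a flat record with a 2-byte length prefix, not the real KeyValue layout): each accessor re-decodes the prefix from the byte buffer, so the caller hoists it into a local and derives later offsets from that single decode.

```java
final class FlatRecord {
    final byte[] buf;

    FlatRecord(byte[] buf) { this.buf = buf; }

    // Re-decodes the big-endian 2-byte row-length prefix on every call --
    // the cost the HBASE-9935 example pays five times over.
    short getRowLength() {
        return (short) (((buf[0] & 0xff) << 8) | (buf[1] & 0xff));
    }

    int getRowOffset() { return 2; }

    // Naive accessor: decodes the length yet again internally.
    int getFamilyOffsetNaive() { return getRowOffset() + getRowLength(); }

    // Patched style: caller supplies the already-decoded length.
    int getFamilyOffset(short rowLength) { return getRowOffset() + rowLength; }
}
```

A caller then decodes once and reuses the value: {{short rowLen = rec.getRowLength(); int famOff = rec.getFamilyOffset(rowLen);}} -- same result as the naive accessor, one decode instead of many.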