[jira] [Resolved] (HBASE-5792) HLog Performance Evaluation Tool
[ https://issues.apache.org/jira/browse/HBASE-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5792.
----------------------------------
    Resolution: Fixed

> HLog Performance Evaluation Tool
> --------------------------------
>
>         Key: HBASE-5792
>         URL: https://issues.apache.org/jira/browse/HBASE-5792
>     Project: HBase
>  Issue Type: Test
>  Components: wal
>    Reporter: Matteo Bertozzi
>    Assignee: Matteo Bertozzi
>    Priority: Minor
>      Labels: performance, wal
>     Fix For: 0.94.0, 0.96.0
> Attachments: HBASE-5792-v0.patch, HBASE-5792-v1.patch, HBASE-5792-v2.patch, verify.txt, verify.txt
>
> Related to HDFS-3280 and the HBase WAL slowdown on 0.23+. It would be nice to have a simple tool like HFilePerformanceEvaluation, ... to be able to easily check HLog performance.

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HBASE-5736) ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly
[ https://issues.apache.org/jira/browse/HBASE-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5736.
----------------------------------
    Resolution: Fixed

Committed, marking as fixed.

> ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly
> -----------------------------------------------------------------------------
>
>         Key: HBASE-5736
>         URL: https://issues.apache.org/jira/browse/HBASE-5736
>     Project: HBase
>  Issue Type: Bug
>    Reporter: Scott Chen
>    Assignee: Scott Chen
>     Fix For: 0.94.0, 0.96.0
> Attachments: 5736-94.txt, HBASE-5736.D2649.1.patch, HBASE-5736.D2649.2.patch, HBASE-5736.D2649.3.patch
>
> We have fixed a similar bug in https://issues.apache.org/jira/browse/HBASE-5507
> It uses ByteBuffer.array() to read the ByteBuffer. This ignores the offset and returns the whole underlying byte array. The bug can be triggered by using framed-transport Thrift servers.
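The offset pitfall behind this bug is easy to reproduce with plain java.nio, without any HBase or Thrift code. The sketch below (names are illustrative, not taken from the actual patch) contrasts `array()` with an offset-aware copy of only the remaining bytes:

```java
import java.nio.ByteBuffer;

// Illustrative sketch (not HBase code): ByteBuffer.array() returns the entire
// backing array and ignores position/limit, while an offset-aware copy does not.
public class ByteBufferOffsetDemo {
    // Correct: copy only the bytes between position and limit.
    public static byte[] remainingBytes(ByteBuffer bb) {
        byte[] out = new byte[bb.remaining()];
        bb.duplicate().get(out);  // duplicate() leaves the caller's position untouched
        return out;
    }

    public static void main(String[] args) {
        byte[] backing = "..rowkey..".getBytes();
        // A framed transport typically hands out buffers that are slices of a
        // larger frame: offset 2, length 6 here.
        ByteBuffer bb = ByteBuffer.wrap(backing, 2, 6);

        System.out.println(new String(remainingBytes(bb)));      // rowkey
        System.out.println(bb.array().length == backing.length); // true: array() sees the whole frame
    }
}
```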
[jira] [Resolved] (HBASE-3443) ICV optimization to look in memstore first and then store files (HBASE-3082) does not work when deletes are in the mix
[ https://issues.apache.org/jira/browse/HBASE-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-3443.
----------------------------------
    Resolution: Fixed
  Hadoop Flags: Reviewed

Committed to trunk

> ICV optimization to look in memstore first and then store files (HBASE-3082) does not work when deletes are in the mix
> ----------------------------------------------------------------------------------------------------------------------
>
>              Key: HBASE-3443
>              URL: https://issues.apache.org/jira/browse/HBASE-3443
>          Project: HBase
>       Issue Type: Bug
>       Components: regionserver
> Affects Versions: 0.90.0, 0.90.1, 0.90.2, 0.90.3, 0.90.4, 0.90.5, 0.90.6, 0.92.0, 0.92.1
>         Reporter: Kannan Muthukkaruppan
>         Assignee: Lars Hofhansl
>         Priority: Critical
>           Labels: corruption
>      Attachments: 3443.txt
>
> For incrementColumnValue(), HBASE-3082 adds an optimization to check memstores first, and only if the column is not present in the memstore to check the store files. In the presence of deletes, this optimization is not reliable: if the column is marked as deleted in the memstore, one should not look further into the store files, but currently the code does so. Sample test code outline:
> {code}
> admin.createTable(desc)
> table = HTable.new(conf, tableName)
> table.incrementColumnValue(Bytes.toBytes(row), cf1name, Bytes.toBytes(column), 5)
> admin.flush(tableName)
> sleep(2)
> del = Delete.new(Bytes.toBytes(row))
> table.delete(del)
> table.incrementColumnValue(Bytes.toBytes(row), cf1name, Bytes.toBytes(column), 5)
> get = Get.new(Bytes.toBytes(row))
> keyValues = table.get(get).raw()
> keyValues.each do |keyValue|
>   puts "Expect 5; Got Value=#{Bytes.toLong(keyValue.getValue())}"
> end
> {code}
> The above prints:
> {code}
> Expect 5; Got Value=10
> {code}
[jira] [Resolved] (HBASE-5774) Add documentation for WALPlayer to HBase reference guide.
[ https://issues.apache.org/jira/browse/HBASE-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5774.
----------------------------------
    Resolution: Duplicate

> Add documentation for WALPlayer to HBase reference guide.
> ---------------------------------------------------------
>
>         Key: HBASE-5774
>         URL: https://issues.apache.org/jira/browse/HBASE-5774
>     Project: HBase
>  Issue Type: Sub-task
>    Reporter: Lars Hofhansl
>    Assignee: Lars Hofhansl
> Attachments: 5774.txt
[jira] [Resolved] (HBASE-5724) Row cache of KeyValue should be cleared in readFields().
[ https://issues.apache.org/jira/browse/HBASE-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5724.
----------------------------------
    Resolution: Fixed

Stack added the missing import.

> Row cache of KeyValue should be cleared in readFields().
> --------------------------------------------------------
>
>              Key: HBASE-5724
>              URL: https://issues.apache.org/jira/browse/HBASE-5724
>          Project: HBase
>       Issue Type: Bug
> Affects Versions: 0.92.1
>         Reporter: Teruyoshi Zenmyo
>         Assignee: Teruyoshi Zenmyo
>          Fix For: 0.90.7, 0.92.2, 0.94.0
>      Attachments: 5724.092.txt, HBASE-5724.txt, HBASE-5724v2.txt
>
> KeyValue does not clear its row cache when reading new values (readFields()). Therefore, if a KeyValue (kv) which has cached its row bytes reads another KeyValue instance, kv.getRow() returns a wrong value.
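The bug pattern here is general: a reusable Writable-style object that lazily caches a derived value must invalidate that cache whenever it reads new state. A minimal stand-in (not the real KeyValue class; names are made up for the sketch) looks like this:

```java
// Minimal stand-in (not the real KeyValue class) for the bug pattern: a reused
// object caches a derived value and must clear that cache on every read, or a
// later getRow() returns data left over from the previous read.
public class RowCacheSketch {
    private byte[] bytes;
    private byte[] rowCache;  // lazily derived from 'bytes'

    // Stands in for readFields(DataInput): load new state into a reused instance.
    public void readFields(byte[] newBytes) {
        this.bytes = newBytes;
        this.rowCache = null;  // the fix: invalidate the stale cache
    }

    public byte[] getRow() {
        if (rowCache == null) {
            rowCache = bytes.clone();  // stands in for parsing the row out of 'bytes'
        }
        return rowCache;
    }

    public static void main(String[] args) {
        RowCacheSketch kv = new RowCacheSketch();
        kv.readFields("row-a".getBytes());
        kv.getRow();                         // populates the cache
        kv.readFields("row-b".getBytes());   // reuse the same instance
        System.out.println(new String(kv.getRow())); // row-b (would be row-a without the fix)
    }
}
```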
[jira] [Resolved] (HBASE-5682) Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 only)
[ https://issues.apache.org/jira/browse/HBASE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5682.
----------------------------------
    Resolution: Fixed
  Hadoop Flags: Reviewed

Committed to 0.94 only

> Allow HConnectionImplementation to recover from ZK connection loss (for 0.94 only)
> ----------------------------------------------------------------------------------
>
>         Key: HBASE-5682
>         URL: https://issues.apache.org/jira/browse/HBASE-5682
>     Project: HBase
>  Issue Type: Improvement
>  Components: client
>    Reporter: Lars Hofhansl
>    Assignee: Lars Hofhansl
>    Priority: Critical
>     Fix For: 0.94.0
> Attachments: 5682-all-v2.txt, 5682-all-v3.txt, 5682-all-v4.txt, 5682-all.txt, 5682-v2.txt, 5682.txt
>
> Just realized that without this, HBASE-4805 is broken. I.e. there's no point keeping a persistent HConnection around if it can be rendered permanently unusable when the ZK connection is lost temporarily. Note that this is fixed in 0.96 with HBASE-5399 (but that seems too big to backport).
[jira] [Resolved] (HBASE-5097) RegionObserver implementation whose preScannerOpen and postScannerOpen Impl return null can stall the system initialization through NPE
[ https://issues.apache.org/jira/browse/HBASE-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5097.
----------------------------------
    Resolution: Fixed

To answer my own question: Yes :)

> RegionObserver implementation whose preScannerOpen and postScannerOpen Impl return null can stall the system initialization through NPE
> ---------------------------------------------------------------------------------------------------------------------------------------
>
>         Key: HBASE-5097
>         URL: https://issues.apache.org/jira/browse/HBASE-5097
>     Project: HBase
>  Issue Type: Bug
>  Components: coprocessors
>    Reporter: ramkrishna.s.vasudevan
>    Assignee: ramkrishna.s.vasudevan
>     Fix For: 0.92.2, 0.94.0, 0.96.0
> Attachments: HBASE-5097.patch, HBASE-5097_1.patch, HBASE-5097_2.patch
>
> In HRegionServer.java openScanner():
> {code}
> r.prepareScanner(scan);
> RegionScanner s = null;
> if (r.getCoprocessorHost() != null) {
>   s = r.getCoprocessorHost().preScannerOpen(scan);
> }
> if (s == null) {
>   s = r.getScanner(scan);
> }
> if (r.getCoprocessorHost() != null) {
>   s = r.getCoprocessorHost().postScannerOpen(scan, s);
> }
> {code}
> If we don't have an implementation for postScannerOpen, the RegionScanner is null and a NullPointerException is thrown:
> {code}
> java.lang.NullPointerException
>   at java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:881)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.addScanner(HRegionServer.java:2282)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2272)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> {code}
> Making this defect a blocker. Please feel free to change the priority if I am wrong. Also correct me if my way of trying out coprocessors without implementing postScannerOpen is wrong. I am just a learner.
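The defensive pattern the fix calls for can be shown without any HBase types: treat a null result from an optional hook as "keep the current value" rather than propagating it. This generic sketch (not the actual HRegionServer code; names are illustrative) captures the shape:

```java
import java.util.function.UnaryOperator;

// Generic sketch (not the actual HRegionServer code) of the defensive pattern:
// a null result from an optional pre/post hook falls back to the default value
// instead of being handed on and causing an NPE downstream.
public class HookFallbackSketch {
    public static String openScanner(UnaryOperator<String> preHook,
                                     UnaryOperator<String> postHook) {
        String s = null;
        if (preHook != null) s = preHook.apply(null);
        if (s == null) s = "default-scanner";      // fall back to the default scanner
        if (postHook != null) {
            String wrapped = postHook.apply(s);
            if (wrapped != null) s = wrapped;       // the fix: ignore a null wrap
        }
        return s;
    }

    public static void main(String[] args) {
        // A post hook that returns null (the case that caused the NPE) no longer
        // wipes out the scanner.
        System.out.println(openScanner(null, x -> null)); // default-scanner
    }
}
```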
[jira] [Resolved] (HBASE-5084) Allow different HTable instances to share one ExecutorService
[ https://issues.apache.org/jira/browse/HBASE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5084.
----------------------------------
    Resolution: Fixed
  Hadoop Flags: Reviewed

Committed to 0.94 and 0.96 (just noticed I misspelled the commit message, oh well)

> Allow different HTable instances to share one ExecutorService
> -------------------------------------------------------------
>
>         Key: HBASE-5084
>         URL: https://issues.apache.org/jira/browse/HBASE-5084
>     Project: HBase
>  Issue Type: Task
>    Reporter: Zhihong Yu
>    Assignee: Lars Hofhansl
>     Fix For: 0.94.1
> Attachments: 5084-0.94.txt, 5084-trunk.txt
>
> This came out of the Lily 1.1.1 release: use a shared ExecutorService for all HTable instances, leading to better (or actual) thread reuse.
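The general pattern behind this change, many short-lived client facades borrowing one pool instead of each creating its own, can be sketched in plain Java. The class and method names below are made up for the sketch, not taken from the HBase patch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch of the thread-reuse pattern: lightweight facades share a
// single ExecutorService, so creating a facade never spins up a new pool.
public class SharedPoolSketch {
    // Stands in for an HTable-like facade that borrows an externally owned pool.
    public static class TableFacade {
        private final ExecutorService pool;
        public TableFacade(ExecutorService pool) { this.pool = pool; }
        public Future<String> get(String row) {
            return pool.submit(() -> "value-for-" + row);
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService sharedPool = Executors.newFixedThreadPool(4);
        TableFacade t1 = new TableFacade(sharedPool);
        TableFacade t2 = new TableFacade(sharedPool);  // same threads, no new pool
        System.out.println(t1.get("a").get());
        System.out.println(t2.get("b").get());
        sharedPool.shutdown();  // the pool's owner, not the facades, shuts it down
    }
}
```

Note the ownership consequence of this design: whoever creates the shared pool is responsible for shutting it down, since no individual facade can know when the others are done with it.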
[jira] [Resolved] (HBASE-5639) The logic used in waiting for region servers during startup is broken
[ https://issues.apache.org/jira/browse/HBASE-5639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5639.
----------------------------------
    Resolution: Fixed
  Hadoop Flags: Reviewed

Committed to 0.94 and 0.96

> The logic used in waiting for region servers during startup is broken
> ---------------------------------------------------------------------
>
>         Key: HBASE-5639
>         URL: https://issues.apache.org/jira/browse/HBASE-5639
>     Project: HBase
>  Issue Type: Bug
>    Reporter: Jean-Daniel Cryans
>    Assignee: Jean-Daniel Cryans
>    Priority: Blocker
>     Fix For: 0.94.0
> Attachments: HBASE-5639.patch
>
> See the tail of HBASE-4993, which I'll repeat here:
> Me:
> {quote}
> I think a bug was introduced here. Here's the new waiting logic in waitForRegionServers: wait until 'hbase.master.wait.on.regionservers.mintostart' is reached AND there has been no new region server check-in for 'hbase.master.wait.on.regionservers.interval' time. And the code that verifies that: !(lastCountChange+interval < now && count >= minToStart)
> {quote}
> Nic:
> {quote}
> It seems that changing the code to (count < minToStart || lastCountChange+interval > now) would make the code work as documented. If you have 0 region servers that checked in and you are under the interval, you wait: (true or true) = true. If you have 0 region servers but you are above the interval, you wait: (true or false) = true. If you have 1 or more region servers that checked in and you are under the interval, you wait: (false or true) = true.
> {quote}
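The corrected predicate from the quoted discussion can be restated as a small executable sketch (names are illustrative): keep waiting while the minimum server count has not been reached, or while the most recent check-in is still within the quiet interval.

```java
// Executable restatement of the corrected waiting predicate discussed above:
// wait while (count < minToStart) OR (lastCountChange + interval > now).
public class WaitForRegionServersSketch {
    public static boolean keepWaiting(int count, int minToStart,
                                      long lastCountChange, long interval, long now) {
        return count < minToStart || lastCountChange + interval > now;
    }

    public static void main(String[] args) {
        // The cases from the quote, plus the one where startup may proceed:
        System.out.println(keepWaiting(0, 1, 100, 50, 120)); // 0 servers, under interval -> true
        System.out.println(keepWaiting(0, 1, 100, 50, 500)); // 0 servers, past interval  -> true
        System.out.println(keepWaiting(1, 1, 100, 50, 120)); // enough servers, recent check-in -> true
        System.out.println(keepWaiting(1, 1, 100, 50, 500)); // enough servers, quiet period elapsed -> false
    }
}
```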
[jira] [Resolved] (HBASE-4657) Improve the efficiency of our MR jobs with a few configurations
[ https://issues.apache.org/jira/browse/HBASE-4657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-4657.
----------------------------------
    Resolution: Fixed
 Fix Version/s: 0.96.0
  Hadoop Flags: Reviewed

Committed to 0.94 and trunk. Thanks for the reviews.

> Improve the efficiency of our MR jobs with a few configurations
> ---------------------------------------------------------------
>
>              Key: HBASE-4657
>              URL: https://issues.apache.org/jira/browse/HBASE-4657
>          Project: HBase
>       Issue Type: Improvement
> Affects Versions: 0.90.4
>         Reporter: Jean-Daniel Cryans
>         Assignee: Lars Hofhansl
>          Fix For: 0.94.0, 0.96.0
>      Attachments: 4657.txt
>
> This is low-hanging fruit: some of our MR jobs like RowCounter and CopyTable don't even call setCacheBlocks on the scan object, which out of the box completely screws up a running system. Another thing would be to disable speculative execution.
[jira] [Resolved] (HBASE-5371) Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) API
[ https://issues.apache.org/jira/browse/HBASE-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5371.
----------------------------------
    Resolution: Fixed

This was committed, marking as fixed.

> Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) API
> --------------------------------------------------------------------------------
>
>              Key: HBASE-5371
>              URL: https://issues.apache.org/jira/browse/HBASE-5371
>          Project: HBase
>       Issue Type: Sub-task
>       Components: security
> Affects Versions: 0.92.1, 0.94.0
>         Reporter: Enis Soztutar
>         Assignee: Enis Soztutar
>          Fix For: 0.94.0
>      Attachments: HBASE-5371-addendum_v1.patch, HBASE-5371_v2.patch, HBASE-5371_v3-noprefix.patch, HBASE-5371_v3.patch
>
> We need to introduce something like an AccessControllerProtocol.checkPermissions(Permission[] permissions) API, so that clients can check access rights before carrying out the operations. We need this kind of operation for HCATALOG-245, which introduces authorization providers for hbase over hcat. We cannot use getUserPermissions() since it requires ADMIN permissions on the global/table level.
[jira] [Resolved] (HBASE-4542) add filter info to slow query logging
[ https://issues.apache.org/jira/browse/HBASE-4542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-4542.
----------------------------------
    Resolution: Fixed
  Hadoop Flags: Reviewed

Committed to 0.94 as well.

> add filter info to slow query logging
> -------------------------------------
>
>              Key: HBASE-4542
>              URL: https://issues.apache.org/jira/browse/HBASE-4542
>          Project: HBase
>       Issue Type: Improvement
> Affects Versions: 0.89.20100924
>         Reporter: Kannan Muthukkaruppan
>         Assignee: Madhuwanti Vaidya
>          Fix For: 0.94.0, 0.96.0
>      Attachments: 0001-jira-HBASE-4542-Add-filter-info-to-slow-query-loggin.patch, Add-filter-info-to-slow-query-logging-2012-03-06_14_28_13.patch, D1263.2.patch, D1539.1.patch
>
> The slow query log doesn't report filters in effect. For example:
> {code}
> (operationTooSlow): \
> {processingtimems:3468,client:10.138.43.206:40035,timeRange:[0,9223372036854775807],\
> starttimems:1317772005821,responsesize:42411,\
> class:HRegionServer,table:myTable,families:{CF1:[ALL]},\
> row:6c3b8efa132f0219b7621ed1e5c8c70b,queuetimems:0,\
> method:get,totalColumns:1,maxVersions:1,storeLimit:-1}
> {code}
> The above would suggest that all columns of myTable:CF1 are being requested for the given row. But in reality there could be filters in effect (such as ColumnPrefixFilter, ColumnRangeFilter, TimestampsFilter, etc.). We should enhance the slow query log to capture and report this information.
[jira] [Resolved] (HBASE-4612) Allow ColumnPrefixFilter to support multiple prefixes
[ https://issues.apache.org/jira/browse/HBASE-4612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-4612.
----------------------------------
    Resolution: Won't Fix

I am closing this, because we already have MultipleColumnPrefixFilter. Please reopen if I misunderstood.

> Allow ColumnPrefixFilter to support multiple prefixes
> -----------------------------------------------------
>
>              Key: HBASE-4612
>              URL: https://issues.apache.org/jira/browse/HBASE-4612
>          Project: HBase
>       Issue Type: Improvement
>       Components: filters
> Affects Versions: 0.90.4
>         Reporter: Eran Kutner
>         Assignee: Eran Kutner
>         Priority: Minor
>          Fix For: 0.94.0
>      Attachments: HBASE-4612-0.90.patch, HBASE-4612.patch
>
> When having a lot of columns grouped by name, I've found that it would be very useful to be able to scan them using multiple prefixes, allowing specific groups to be fetched in one scan without fetching the entire row. This is impossible to achieve using a FilterList, so I've added such support to the existing ColumnPrefixFilter while keeping backward compatibility. The attached patch is based on 0.90.4. I noticed that the 0.92 branch has a new method to support instantiating filters using Thrift. I'm not sure how the serialization works there so I didn't implement that, but the rest of my code should work in 0.92 as well.
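Conceptually, the MultipleColumnPrefixFilter mentioned in the resolution keeps a column if its qualifier starts with any of the configured prefixes. A plain-Java sketch of that matching rule (the real filter operates on byte[] qualifiers server-side; strings here just keep the sketch short):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Plain-Java sketch of the multi-prefix matching rule: keep a qualifier if it
// starts with ANY of the configured prefixes. Not HBase code; illustrative only.
public class MultiPrefixSketch {
    public static List<String> keepMatching(List<String> qualifiers, String... prefixes) {
        return qualifiers.stream()
                .filter(q -> Arrays.stream(prefixes).anyMatch(q::startsWith))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("addr:home", "addr:work", "phone:cell", "mail:spam");
        // Fetch two groups in one pass without touching the rest of the row:
        System.out.println(keepMatching(cols, "addr:", "phone:"));
        // [addr:home, addr:work, phone:cell]
    }
}
```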
[jira] [Resolved] (HBASE-5592) Make it easier to get a table from shell
[ https://issues.apache.org/jira/browse/HBASE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5592.
----------------------------------
    Resolution: Fixed
 Fix Version/s:     (was: 0.92.2)
                    (was: 0.94.0)
                    0.96.0

Made versions reflect the comments. Closing this issue.

> Make it easier to get a table from shell
> ----------------------------------------
>
>              Key: HBASE-5592
>              URL: https://issues.apache.org/jira/browse/HBASE-5592
>          Project: HBase
>       Issue Type: Improvement
>       Components: shell
> Affects Versions: 0.94.0
>         Reporter: Ben West
>         Assignee: Ben West
>         Priority: Trivial
>           Labels: shell
>          Fix For: 0.96.0
>      Attachments: publicTable.patch
>
> The one-argument constructor to HTable was removed at some point, which means that you now have to pass in a Configuration to instantiate an HTable. This is annoying for me when I create quick scripts. This JIRA is a tiny patch which lets you get an HTable instance in the shell by doing
> {code}foo_table = @shell.hbase_table('foo').table{code}
> Basically, it is changing table to be a public member rather than a private one.
[jira] [Resolved] (HBASE-4940) hadoop-metrics.properties can include configuration of the rest context for ganglia
[ https://issues.apache.org/jira/browse/HBASE-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-4940.
----------------------------------
    Resolution: Fixed
 Fix Version/s: 0.96.0

Committed to 0.94 and trunk.

> hadoop-metrics.properties can include configuration of the rest context for ganglia
> -----------------------------------------------------------------------------------
>
>              Key: HBASE-4940
>              URL: https://issues.apache.org/jira/browse/HBASE-4940
>          Project: HBase
>       Issue Type: Improvement
>       Components: metrics
> Affects Versions: 0.90.5
>      Environment: HBase-0.90.1
>         Reporter: Mubarak Seyed
>         Assignee: Mubarak Seyed
>         Priority: Minor
>           Labels: hbase-rest
>          Fix For: 0.94.0, 0.96.0
>      Attachments: HBASE-4940.patch, HBASE-4940.trunk.v1.patch, HBASE-4940.trunk.v2.patch
>
> It appears that the configuration for the rest context is missing from hadoop-metrics.properties. It would be good to add the rest context entries, commented out: anyone running the REST server who wants to monitor it via the Ganglia context can then uncomment them and use them for REST server monitoring.
> {code}
> # Configuration of the rest context for ganglia
> #rest.class=org.apache.hadoop.metrics.ganglia.GangliaContext
> #rest.period=10
> #rest.servers=ganglia-metad-hostname:port
> {code}
> Working on the patch, will submit it.
[jira] [Resolved] (HBASE-5592) Make it easier to get a table from shell
[ https://issues.apache.org/jira/browse/HBASE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5592.
----------------------------------
    Resolution: Fixed

Let's just mark this one fixed. Jesse will revert with his changes (in 0.94+) if necessary.

> Make it easier to get a table from shell
> ----------------------------------------
>
>              Key: HBASE-5592
>              URL: https://issues.apache.org/jira/browse/HBASE-5592
>          Project: HBase
>       Issue Type: Improvement
>       Components: shell
> Affects Versions: 0.94.0
>         Reporter: Ben West
>         Assignee: Ben West
>         Priority: Trivial
>           Labels: shell
>          Fix For: 0.92.2, 0.94.0, 0.96.0
>      Attachments: publicTable.patch
>
> The one-argument constructor to HTable was removed at some point, which means that you now have to pass in a Configuration to instantiate an HTable. This is annoying for me when I create quick scripts. This JIRA is a tiny patch which lets you get an HTable instance in the shell by doing
> {code}foo_table = @shell.hbase_table('foo').table{code}
> Basically, it is changing table to be a public member rather than a private one.
[jira] [Resolved] (HBASE-5551) Some functions should not be used by customer code and must be deprecated in 0.94
[ https://issues.apache.org/jira/browse/HBASE-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5551.
----------------------------------
    Resolution: Fixed

Committed to 0.94.

> Some functions should not be used by customer code and must be deprecated in 0.94
> ---------------------------------------------------------------------------------
>
>              Key: HBASE-5551
>              URL: https://issues.apache.org/jira/browse/HBASE-5551
>          Project: HBase
>       Issue Type: Improvement
> Affects Versions: 0.92.0
>         Reporter: nkeywal
>         Assignee: nkeywal
>          Fix For: 0.94.0
>      Attachments: 5551.092.patch
>
> They are:
> HBaseAdmin#getMaster
> HConnection#getZooKeeperWatcher
> HConnection#getMaster
[jira] [Resolved] (HBASE-5074) support checksums in HBase block cache
[ https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5074.
----------------------------------
    Resolution: Fixed
  Hadoop Flags: Reviewed

Committed to 0.94 as well.

> support checksums in HBase block cache
> --------------------------------------
>
>              Key: HBASE-5074
>              URL: https://issues.apache.org/jira/browse/HBASE-5074
>          Project: HBase
>       Issue Type: Improvement
>       Components: regionserver
>         Reporter: dhruba borthakur
>         Assignee: dhruba borthakur
>          Fix For: 0.94.0
>      Attachments: 5074-0.94.txt, D1521.1.patch, D1521.1.patch, D1521.10.patch, D1521.10.patch, D1521.10.patch, D1521.10.patch, D1521.10.patch, D1521.11.patch, D1521.11.patch, D1521.12.patch, D1521.12.patch, D1521.13.patch, D1521.13.patch, D1521.14.patch, D1521.14.patch, D1521.2.patch, D1521.2.patch, D1521.3.patch, D1521.3.patch, D1521.4.patch, D1521.4.patch, D1521.5.patch, D1521.5.patch, D1521.6.patch, D1521.6.patch, D1521.7.patch, D1521.7.patch, D1521.8.patch, D1521.8.patch, D1521.9.patch, D1521.9.patch
>
> The current implementation of HDFS stores the data in one block file and the metadata (checksum) in another block file. This means that every read into the HBase block cache actually consumes two disk iops, one to the data file and one to the checksum file. This is a major problem for scaling HBase, because HBase is usually bottlenecked on the number of random disk iops that the storage hardware offers.
[jira] [Resolved] (HBASE-5497) Add protobuf as M/R dependency jar (mapred)
[ https://issues.apache.org/jira/browse/HBASE-5497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5497.
----------------------------------
    Resolution: Fixed
  Hadoop Flags:     (was: Reviewed)

Committed to 0.94 and trunk (exact same change as HBASE-5460, but for mapred)

> Add protobuf as M/R dependency jar (mapred)
> -------------------------------------------
>
>         Key: HBASE-5497
>         URL: https://issues.apache.org/jira/browse/HBASE-5497
>     Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce
>    Reporter: Lars Hofhansl
>    Assignee: Lars Hofhansl
>     Fix For: 0.94.0
>
> Getting this from M/R jobs (Export for example):
> {code}
> Error: java.lang.ClassNotFoundException: com.google.protobuf.Message
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
>   at org.apache.hadoop.hbase.io.HbaseObjectWritable.<clinit>(HbaseObjectWritable.java:262)
> {code}
[jira] [Resolved] (HBASE-5440) Allow Import to optionally use HFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5440.
----------------------------------
    Resolution: Fixed
  Hadoop Flags: Reviewed

Committed to trunk. Thanks for reviewing stack!

> Allow Import to optionally use HFileOutputFormat
> ------------------------------------------------
>
>         Key: HBASE-5440
>         URL: https://issues.apache.org/jira/browse/HBASE-5440
>     Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>    Reporter: Lars Hofhansl
>    Assignee: Lars Hofhansl
>    Priority: Minor
>     Fix For: 0.94.0
> Attachments: 5440-v2.txt, 5440.txt
>
> importtsv supports importing into a live table or generating HFiles for bulk load. Import should allow the same. Could even consider merging these tools into one (in principle the only difference is the parsing part, although that is maybe for a different jira).
[jira] [Resolved] (HBASE-5460) Add protobuf as M/R dependency jar
[ https://issues.apache.org/jira/browse/HBASE-5460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5460.
----------------------------------
    Resolution: Fixed

> Add protobuf as M/R dependency jar
> ----------------------------------
>
>         Key: HBASE-5460
>         URL: https://issues.apache.org/jira/browse/HBASE-5460
>     Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce
>    Reporter: Lars Hofhansl
>    Assignee: Lars Hofhansl
>     Fix For: 0.94.0
> Attachments: 5460.txt
>
> Getting this from M/R jobs (Export for example):
> {code}
> Error: java.lang.ClassNotFoundException: com.google.protobuf.Message
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
>   at org.apache.hadoop.hbase.io.HbaseObjectWritable.<clinit>(HbaseObjectWritable.java:262)
> {code}
[jira] [Resolved] (HBASE-5229) Provide basic building blocks for multi-row local transactions.
[ https://issues.apache.org/jira/browse/HBASE-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl resolved HBASE-5229.
----------------------------------
    Resolution: Fixed
  Hadoop Flags: Reviewed

Committed to trunk. Thanks everyone for bearing with me.

> Provide basic building blocks for multi-row local transactions.
> ---------------------------------------------------------------
>
>         Key: HBASE-5229
>         URL: https://issues.apache.org/jira/browse/HBASE-5229
>     Project: HBase
>  Issue Type: New Feature
>  Components: client, regionserver
>    Reporter: Lars Hofhansl
>    Assignee: Lars Hofhansl
>     Fix For: 0.94.0
> Attachments: 5229-endpoint.txt, 5229-multiRow-v2.txt, 5229-multiRow.txt, 5229-seekto-v2.txt, 5229-seekto.txt, 5229.txt
>
> In the final iteration, this issue provides a generalized, public mutateRowsWithLocks method on HRegion that can be used by coprocessors to implement atomic operations efficiently. Coprocessors are already region aware, which makes this a good pairing of APIs. This feature is by design not available to the client via the HTable API. It took a long time to arrive at this and I apologize for the public exposure of my (erratic in retrospect) thought processes.
>
> Was: HBase should provide basic building blocks for multi-row local transactions. Local means that we do this by co-locating the data. Global (cross-region) transactions are not discussed here. After a bit of discussion two solutions have emerged:
> 1. Keep the row key for determining grouping and location and allow efficient intra-row scanning. A client application would then model tables as HBase rows.
> 2. Define a prefix length in HTableDescriptor that defines a grouping of rows. Regions will then never be split inside a grouping prefix.
> #1 is true to the current storage paradigm of HBase. #2 is true to the current client-side API. I will explore these two with sample patches here.
>
> Was: As discussed (at length) on the dev mailing list, with HBASE-3584 and HBASE-5203 committed, supporting atomic cross-row transactions within a region becomes simple. I am aware of the hesitation about the usefulness of this feature, but we have to start somewhere. Let's use this jira for discussion; I'll attach a patch (with tests) momentarily to make this concrete.
[jira] [Resolved] (HBASE-5318) Support Eclipse Indigo
[ https://issues.apache.org/jira/browse/HBASE-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-5318. -- Resolution: Fixed Verified on my Eclipse... Committed to trunk. Thanks for the patch, Jesse. Support Eclipse Indigo --- Key: HBASE-5318 URL: https://issues.apache.org/jira/browse/HBASE-5318 Project: HBase Issue Type: Improvement Components: build Affects Versions: 0.94.0 Environment: Eclipse Indigo (1.4.1) which includes m2eclipse (1.0 SR1). Reporter: Jesse Yates Assignee: Jesse Yates Priority: Minor Labels: maven Attachments: mvn_HBASE-5318_r0.patch The current 'standard' release of Eclipse (Indigo) comes with m2eclipse installed. However, as of m2e v1.0, interesting lifecycle phases are now handled via a 'connector'. Unfortunately, several of the plugins we use don't support connectors. This means that Eclipse bails out and won't build the project or view it as 'working' even though it builds just fine from the command line. Since Eclipse is one of the major Java IDEs and Indigo has been around for a while, we should make it easy for new devs to pick up the code and for older devs to upgrade painlessly. The original build should not be modified in any significant way.
[jira] [Resolved] (HBASE-5266) Add documentation for ColumnRangeFilter
[ https://issues.apache.org/jira/browse/HBASE-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-5266. -- Resolution: Fixed Committed to trunk Add documentation for ColumnRangeFilter --- Key: HBASE-5266 URL: https://issues.apache.org/jira/browse/HBASE-5266 Project: HBase Issue Type: Sub-task Components: documentation Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Minor Fix For: 0.94.0 Attachments: 5266-v2.txt, 5266-v3.txt, 5266.txt There are only a few lines of documentation for ColumnRangeFilter. Given the usefulness of this filter for efficient intra-row scanning (see HBASE-5229 and HBASE-4256), we should make this filter more prominent in the documentation.
[jira] [Resolved] (HBASE-5268) Add delete column prefix delete marker
[ https://issues.apache.org/jira/browse/HBASE-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-5268. -- Resolution: Won't Fix Add delete column prefix delete marker -- Key: HBASE-5268 URL: https://issues.apache.org/jira/browse/HBASE-5268 Project: HBase Issue Type: Improvement Components: client, regionserver Reporter: Lars Hofhansl Attachments: 5268-proof.txt, 5268-v2.txt, 5268-v3.txt, 5268-v4.txt, 5268-v5.txt, 5268.txt This is another part missing in the wide row challenge. Currently entire families of a row can be deleted, as can individual columns or versions. There is no facility to mark multiple columns for deletion by column prefix. It turns out that this can be achieved with very little code (it's possible that I missed some of the new delete bloom filter code, so please review this thoroughly). I'll attach a patch soon; just working on some tests now.
[jira] [Resolved] (HBASE-5229) Explore building blocks for multi-row local transactions.
[ https://issues.apache.org/jira/browse/HBASE-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-5229. -- Resolution: Not A Problem Explore building blocks for multi-row local transactions. --- Key: HBASE-5229 URL: https://issues.apache.org/jira/browse/HBASE-5229 Project: HBase Issue Type: New Feature Components: client, regionserver Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.94.0 Attachments: 5229-seekto-v2.txt, 5229-seekto.txt, 5229.txt HBase should provide basic building blocks for multi-row local transactions. Local means that we do this by co-locating the data. Global (cross region) transactions are not discussed here. After a bit of discussion two solutions have emerged: 1. Keep the row-key for determining grouping and location and allow efficient intra-row scanning. A client application would then model tables as HBase-rows. 2. Define a prefix-length in HTableDescriptor that defines a grouping of rows. Regions will then never be split inside a grouping prefix. #1 is true to the current storage paradigm of HBase. #2 is true to the current client side API. I will explore these two with sample patches here. Was: As discussed (at length) on the dev mailing list, with HBASE-3584 and HBASE-5203 committed, supporting atomic cross-row transactions within a region becomes simple. I am aware of the hesitation about the usefulness of this feature, but we have to start somewhere. Let's use this jira for discussion; I'll attach a patch (with tests) momentarily to make this concrete.
[jira] [Resolved] (HBASE-4583) Integrate RWCC with Append and Increment operations
[ https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4583. -- Resolution: Won't Fix There currently is no good solution for this. Integrate RWCC with Append and Increment operations --- Key: HBASE-4583 URL: https://issues.apache.org/jira/browse/HBASE-4583 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Assignee: Lars Hofhansl Attachments: 4583-v2.txt, 4583-v3.txt, 4583-v4.txt, 4583.txt Currently Increment and Append operations do not work with RWCC, and hence a client could see the results of multiple such operations mixed in the same Get/Scan. The semantics might be a bit more interesting here, as upsert adds to and removes from the memstore.
[jira] [Resolved] (HBASE-5088) A concurrency issue on SoftValueSortedMap
[ https://issues.apache.org/jira/browse/HBASE-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-5088. -- Resolution: Fixed Hadoop Flags: Reviewed Committed to 0.92 and trunk. Not entirely convinced we should make this change in 0.90. A concurrency issue on SoftValueSortedMap - Key: HBASE-5088 URL: https://issues.apache.org/jira/browse/HBASE-5088 Project: HBase Issue Type: Bug Components: client Affects Versions: 0.90.4, 0.94.0 Reporter: Jieshan Bean Assignee: Lars Hofhansl Priority: Critical Fix For: 0.92.0, 0.94.0 Attachments: 5088-final.txt, 5088-final2.txt, 5088-final3.txt, 5088-syncObj.txt, 5088-useMapInterfaces.txt, 5088.generics.txt, HBase-5088-90.patch, HBase-5088-trunk.patch, HBase5088-90-replaceSoftValueSortedMap.patch, HBase5088-90-replaceTreeMap.patch, HBase5088-trunk-replaceTreeMap.patch, HBase5088Reproduce.java, PerformanceTestResults.png SoftValueSortedMap is backed by a TreeMap. All the methods in this class are synchronized. If we use these methods to add/delete elements, it's OK. But in HConnectionManager#getCachedLocation, it uses headMap to get a view of SoftValueSortedMap#internalMap. Once we operate on this view map (like add/delete) in other threads, a concurrency issue may occur.
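The hazard is easy to see with plain JDK collections. Below is a hypothetical miniature of the flawed shape (not HBase's actual SoftValueSortedMap, which additionally wraps values in soft references): every method is synchronized, but headMap() hands out a live view of the backing TreeMap that bypasses the object's monitor, so another thread mutating the map races with iteration over the view.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Miniature of the SoftValueSortedMap flaw: synchronized methods, but
// headMap() leaks a live, unguarded view of the backing TreeMap.
class GuardedSortedMap<K, V> {
    private final TreeMap<K, V> internalMap = new TreeMap<>();

    public synchronized V put(K key, V value) {
        return internalMap.put(key, value);
    }

    // BUG shape: the returned view aliases internalMap and is NOT
    // protected by this object's monitor once it escapes.
    public synchronized SortedMap<K, V> headMap(K toKey) {
        return internalMap.headMap(toKey);
    }
}

public class HeadMapAliasing {
    public static void main(String[] args) {
        GuardedSortedMap<String, Integer> m = new GuardedSortedMap<>();
        m.put("a", 1);
        SortedMap<String, Integer> view = m.headMap("z");
        m.put("b", 2); // a mutation from "another thread" is visible in the view
        // The view is live, not a snapshot: concurrent structural changes
        // can surface as ConcurrentModificationException mid-iteration.
        System.out.println(view.size());
    }
}
```

Running this prints 2, demonstrating that the view tracks later mutations; under real concurrency the same aliasing yields ConcurrentModificationException or corrupted iteration, which is what getCachedLocation hit.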
[jira] [Resolved] (HBASE-5118) Fix Scan documentation
[ https://issues.apache.org/jira/browse/HBASE-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-5118. -- Resolution: Fixed Fix Version/s: 0.94.0 Hadoop Flags: Reviewed Committed to trunk. Thanks for the review Stack. Fix Scan documentation -- Key: HBASE-5118 URL: https://issues.apache.org/jira/browse/HBASE-5118 Project: HBase Issue Type: Sub-task Components: documentation Affects Versions: 0.94.0 Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Trivial Fix For: 0.94.0 Attachments: 5118.txt Current documentation for scan states:
{code}
Scan scan = new Scan();
scan.addColumn(Bytes.toBytes(cf), Bytes.toBytes(attr));
scan.setStartRow(Bytes.toBytes(row));                 // start key is inclusive
scan.setStopRow(Bytes.toBytes(row + new byte[] {0})); // stop key is exclusive
for (Result result : htable.getScanner(scan)) {
  // process Result instance
}
{code}
row + new byte[] {0} is not correct. That should be row + (char)0
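The reason the documented snippet is wrong is pure Java string semantics: concatenating a byte[] onto a String appends the array's toString() (something like "[B@1a2b3c"), not a zero byte, so the stop row is garbage. A small JDK-only check makes the difference concrete:

```java
public class StopRowDemo {
    public static void main(String[] args) {
        String row = "row1";

        // Wrong: String + byte[] concatenates the array's toString(),
        // e.g. "row1[B@6d06d69c" -- an arbitrary, meaningless stop key.
        String wrong = row + new byte[] {0};

        // Right: appending (char) 0 adds a single NUL character, the
        // smallest possible suffix, so the scan stops right after `row`.
        String right = row + (char) 0;

        System.out.println(wrong.startsWith("row1[B@"));
        System.out.println(right.length());
    }
}
```

This prints true and 5: the "wrong" form embeds the array's identity string, while the "right" form is exactly the original row plus one NUL.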
[jira] [Resolved] (HBASE-4970) Add a parameter so that keepAliveTime of HTable thread pool can be changed
[ https://issues.apache.org/jira/browse/HBASE-4970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4970. -- Resolution: Fixed Fix Version/s: 0.92.1, 0.94.0 Hadoop Flags: Reviewed Committed to 0.90, 0.92, and trunk. Thanks for the patch and your patience, gaojinchao. Add a parameter so that keepAliveTime of HTable thread pool can be changed -- Key: HBASE-4970 URL: https://issues.apache.org/jira/browse/HBASE-4970 Project: HBase Issue Type: Improvement Components: client Affects Versions: 0.90.4 Reporter: gaojinchao Assignee: gaojinchao Priority: Trivial Fix For: 0.94.0, 0.92.1, 0.90.6 Attachments: HBASE-4970_Branch90.patch, HBASE-4970_Branch90_V1_trial.patch, HBASE-4970_Branch90_V2.patch, HBASE-4970_Branch92_V2.patch, HBASE-4970_Trunk_V2.patch In my cluster, I changed keepAliveTime from 60 s to 3600 s, and the growth of RES slowed down. Why does increasing the keepAliveTime of the HBase thread pool slow down our problem occurrence [RES value increase]? You can go through the source of sun.nio.ch.Util. Every thread holds 3 soft references to direct buffers (mustangsrc) for reuse. The code names these 3 soft references the buffer cache. If the buffers are all occupied or none is suitable in size when a new request comes, a new direct buffer is allocated. After the service, the bigger one replaces the smaller one in the buffer cache. The replaced buffer is released. So I think we can add a parameter to change the keepAliveTime of the HTable thread pool.
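The knob being requested is the standard ThreadPoolExecutor keepAliveTime. A JDK-only sketch of the shape of the change (this is not HBase's actual code, and the property name "htable.keepalive.seconds" is made up for illustration): read the timeout from configuration instead of hard-coding 60 seconds, and pass it to the pool constructor.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: make the pool's idle-thread timeout configurable rather than
// hard-coded, which is the essence of HBASE-4970.
public class KeepAliveDemo {
    public static void main(String[] args) {
        // Hypothetical property name; defaults to the old 60 s behavior.
        long keepAliveSeconds = Long.getLong("htable.keepalive.seconds", 60);

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 16,                       // core and max pool size
                keepAliveSeconds, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true);   // let idle core threads exit too

        System.out.println(pool.getKeepAliveTime(TimeUnit.SECONDS));
        pool.shutdown();
    }
}
```

With a longer keepAliveTime, worker threads (and the per-thread direct-buffer caches described above) are reused instead of being torn down and recreated, which is why the reporter saw RES growth slow.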
[jira] [Resolved] (HBASE-5058) Allow HBaseAdmin to use an existing connection
[ https://issues.apache.org/jira/browse/HBASE-5058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-5058. -- Resolution: Invalid Marking this as invalid. HBaseAdmin is not used nearly as often as HTable, and a bit of churn through HConnection is not so bad. There are layers of retrying (which is bad) and RPC runtime exceptions are passed up to HBaseAdmin (also bad). But none of those are horrible. The entire client needs to be revisited; this is not the jira to do that. Allow HBaseAdmin to use an existing connection - Key: HBASE-5058 URL: https://issues.apache.org/jira/browse/HBASE-5058 Project: HBase Issue Type: Sub-task Components: client Affects Versions: 0.94.0 Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Minor Fix For: 0.94.0 Attachments: 5058.txt What HBASE-4805 does for HTables, this should do for HBaseAdmin. Along with this, the shared error handling and retrying between HBaseAdmin and HConnectionManager can also be improved. I'll attach a first pass patch soon.
[jira] [Resolved] (HBASE-4998) Support deleted rows in CopyTable
[ https://issues.apache.org/jira/browse/HBASE-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4998. -- Resolution: Fixed Hadoop Flags: Reviewed Committed to trunk Support deleted rows in CopyTable - Key: HBASE-4998 URL: https://issues.apache.org/jira/browse/HBASE-4998 Project: HBase Issue Type: Sub-task Components: regionserver Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Minor Fix For: 0.94.0 Attachments: 4998-v1.txt, 4998-v2.txt It turns out that with HBASE-4682 in place, it is trivial to add this to CopyTable as well. This would be another tool in the backup arsenal.
[jira] [Resolved] (HBASE-5005) Add DEFAULT_MIN_VERSIONS to HColumnDescriptor.DEFAULT_VALUES
[ https://issues.apache.org/jira/browse/HBASE-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-5005. -- Resolution: Fixed Hadoop Flags: Reviewed Committed to trunk. Thanks for the quick review, Stack. Add DEFAULT_MIN_VERSIONS to HColumnDescriptor.DEFAULT_VALUES Key: HBASE-5005 URL: https://issues.apache.org/jira/browse/HBASE-5005 Project: HBase Issue Type: Sub-task Components: regionserver Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Trivial Fix For: 0.94.0 Attachments: 5005.txt So that it won't show as MIN_VERSIONS=0 everywhere.
[jira] [Resolved] (HBASE-4981) add raw scan support to shell
[ https://issues.apache.org/jira/browse/HBASE-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4981. -- Resolution: Fixed Hadoop Flags: Reviewed Thanks for the review, Stack. add raw scan support to shell - Key: HBASE-4981 URL: https://issues.apache.org/jira/browse/HBASE-4981 Project: HBase Issue Type: Sub-task Components: shell Affects Versions: 0.94.0 Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.94.0 Attachments: 4981-v1.txt, 4981-v2.txt, 4981-v3.txt Parent adds raw scan support to include delete markers and deleted rows in scan results. It would be nice if that were available in the shell, to see exactly what exists in a table.
[jira] [Resolved] (HBASE-4979) Setting KEEP_DELETE_CELLS fails in shell
[ https://issues.apache.org/jira/browse/HBASE-4979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4979. -- Resolution: Fixed Obvious, uncontentious change... Committed to trunk. Setting KEEP_DELETE_CELLS fails in shell Key: HBASE-4979 URL: https://issues.apache.org/jira/browse/HBASE-4979 Project: HBase Issue Type: Sub-task Components: shell Affects Versions: 0.94.0 Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.94.0 Attachments: 4979.txt admin.rb uses wrong method on HColumnDescriptor to enable keeping of deleted cells.
[jira] [Resolved] (HBASE-4903) Return a result from RegionObserver.preIncrement()
[ https://issues.apache.org/jira/browse/HBASE-4903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4903. -- Resolution: Fixed Fix Version/s: 0.92.1, 0.94.0 Hadoop Flags: Reviewed Committed to 0.92 and trunk. Return a result from RegionObserver.preIncrement() -- Key: HBASE-4903 URL: https://issues.apache.org/jira/browse/HBASE-4903 Project: HBase Issue Type: Improvement Reporter: Daniel Gómez Ferro Fix For: 0.94.0, 0.92.1 Attachments: HBASE-4903-0.92.patch, HBASE-4903.patch, HBASE-4903.patch The only way to return a result from RegionObserver.preIncrement() is to use Result.readFields() after serializing the correct result. This can be fixed either by returning a Result object from that function or by adding setters to Result. Another option is to modify the parameters and receive a List<KeyValue> as preGet() does.
[jira] [Resolved] (HBASE-4886) truncate fails in HBase shell
[ https://issues.apache.org/jira/browse/HBASE-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4886. -- Resolution: Fixed Hadoop Flags: Reviewed Committed to trunk truncate fails in HBase shell - Key: HBASE-4886 URL: https://issues.apache.org/jira/browse/HBASE-4886 Project: HBase Issue Type: Bug Components: shell Affects Versions: 0.94.0 Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Minor Fix For: 0.94.0 Attachments: 4886.txt Seeing this in trunk:
{noformat}
hbase(main):001:0> truncate 'table'
Truncating 'table' table (it may take a while):
ERROR: wrong number of arguments (1 for 3)
Here is some help for this command:
Disables, drops and recreates the specified table.
{noformat}
... caused by the removal of the HTable(String) constructor.
[jira] [Resolved] (HBASE-4874) Run tests with non-secure random, some tests hang otherwise
[ https://issues.apache.org/jira/browse/HBASE-4874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4874. -- Resolution: Fixed Assignee: Lars Hofhansl Committed to 0.92 and trunk Run tests with non-secure random, some tests hang otherwise --- Key: HBASE-4874 URL: https://issues.apache.org/jira/browse/HBASE-4874 Project: HBase Issue Type: Bug Affects Versions: 0.92.0, 0.94.0 Reporter: Ted Yu Assignee: Lars Hofhansl Fix For: 0.92.0, 0.94.0 Attachments: 4874.txt TestHCM#testClosing fails on Linux if not enough entropy is available in /dev/random
[jira] [Resolved] (HBASE-4838) Port 2856 (TestAcidGuarantee is failing) to 0.92
[ https://issues.apache.org/jira/browse/HBASE-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4838. -- Resolution: Fixed Hadoop Flags: Reviewed Committed to 0.92 Port 2856 (TestAcidGuarantee is failing) to 0.92 Key: HBASE-4838 URL: https://issues.apache.org/jira/browse/HBASE-4838 Project: HBase Issue Type: Sub-task Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.92.0 Attachments: 4838-v1.txt, 4838-v3.txt Moving the back port into a separate issue (as suggested by JonH), because this is not trivial.
[jira] [Resolved] (HBASE-4800) Result.compareResults is incorrect
[ https://issues.apache.org/jira/browse/HBASE-4800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4800. -- Resolution: Fixed Hadoop Flags: Reviewed Committed to 0.90, 0.92, and trunk. Result.compareResults is incorrect -- Key: HBASE-4800 URL: https://issues.apache.org/jira/browse/HBASE-4800 Project: HBase Issue Type: Bug Components: client Affects Versions: 0.90.4, 0.92.0, 0.94.0 Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.92.0, 0.94.0, 0.90.5 Attachments: 4800.txt A coworker of mine (James Taylor) found a bug in Result.compareResults(...). This condition:
{code}
if (!ourKVs[i].equals(replicatedKVs[i]) &&
    !Bytes.equals(ourKVs[i].getValue(), replicatedKVs[i].getValue())) {
  throw new Exception("This result was different:
{code}
should be
{code}
if (!ourKVs[i].equals(replicatedKVs[i]) ||
    !Bytes.equals(ourKVs[i].getValue(), replicatedKVs[i].getValue())) {
  throw new Exception("This result was different:
{code}
Just checked, this is wrong in all branches.
[jira] [Resolved] (HBASE-4691) Remove more unnecessary byte[] copies from KeyValues
[ https://issues.apache.org/jira/browse/HBASE-4691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4691. -- Resolution: Fixed Hadoop Flags: Reviewed Committed to trunk Remove more unnecessary byte[] copies from KeyValues Key: HBASE-4691 URL: https://issues.apache.org/jira/browse/HBASE-4691 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Minor Fix For: 0.94.0 Attachments: 4691.txt Just looking through the code I found some more spots where we unnecessarily copy byte[] rather than just passing offset and length around.
[jira] [Resolved] (HBASE-4648) Bytes.toBigDecimal() doesn't use offset
[ https://issues.apache.org/jira/browse/HBASE-4648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4648. -- Resolution: Fixed Fix Version/s: 0.94.0, 0.92.0 Committed to 0.92 and trunk. Since this is a (slightly) incompatible change I did not commit it to 0.90... Reopen if you feel otherwise. Bytes.toBigDecimal() doesn't use offset --- Key: HBASE-4648 URL: https://issues.apache.org/jira/browse/HBASE-4648 Project: HBase Issue Type: Bug Components: util Affects Versions: 0.90.4 Environment: Java 1.6.0_26, Mac OS X 10.7 and CentOS 6 Reporter: Bryan Keller Fix For: 0.92.0, 0.94.0 Attachments: bigdec.patch, bigdec2.patch The Bytes.toBigDecimal(byte[], offset, len) method does not use the offset, thus you will get an incorrect result for the BigDecimal unless the BigDecimal's bytes are at the beginning of the byte array.
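For context, here is a JDK-only sketch of an offset-aware decoder in the shape of Bytes.toBigDecimal (assuming HBase's encoding of a 4-byte scale followed by the unscaled value's two's-complement bytes; the class and method names here are illustrative, not HBase's code). The bug was reading from index 0 instead of honoring the offset, which this version does correctly:

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.nio.ByteBuffer;
import java.util.Arrays;

// Illustrative encode/decode pair; decode honors offset/length so the
// value can live anywhere inside a larger buffer (e.g. a KeyValue).
public class BigDecBytes {
    static byte[] toBytes(BigDecimal v) {
        byte[] unscaled = v.unscaledValue().toByteArray();
        return ByteBuffer.allocate(4 + unscaled.length)
                .putInt(v.scale()).put(unscaled).array();
    }

    static BigDecimal toBigDecimal(byte[] bytes, int offset, int length) {
        int scale = ByteBuffer.wrap(bytes, offset, 4).getInt();
        byte[] unscaled = Arrays.copyOfRange(bytes, offset + 4, offset + length);
        return new BigDecimal(new BigInteger(unscaled), scale);
    }

    public static void main(String[] args) {
        byte[] encoded = toBytes(new BigDecimal("12.34"));
        // Embed the value at a nonzero offset, as in a larger buffer.
        byte[] buf = new byte[3 + encoded.length];
        System.arraycopy(encoded, 0, buf, 3, encoded.length);
        System.out.println(toBigDecimal(buf, 3, encoded.length));
    }
}
```

A decoder that ignored the offset would read the three padding bytes as part of the scale and return garbage; this one round-trips 12.34 from offset 3.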
[jira] [Resolved] (HBASE-4673) NPE in HFileReaderV2.close during major compaction when hfile.block.cache.size is set to 0
[ https://issues.apache.org/jira/browse/HBASE-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4673. -- Resolution: Fixed Fix Version/s: 0.94.0 Hadoop Flags: Reviewed NPE in HFileReaderV2.close during major compaction when hfile.block.cache.size is set to 0 --- Key: HBASE-4673 URL: https://issues.apache.org/jira/browse/HBASE-4673 Project: HBase Issue Type: Bug Affects Versions: 0.94.0 Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Minor Fix For: 0.94.0 Attachments: 4673.txt On a test system got this exception when hfile.block.cache.size is set to 0:
java.lang.NullPointerException
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:321)
at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1065)
at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:539)
at org.apache.hadoop.hbase.regionserver.StoreFile.deleteReader(StoreFile.java:549)
at org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:1314)
at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:686)
at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1016)
at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:178)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Minor issue, as nobody in their right mind will have hfile.block.cache.size=0. Looks like this is due to HBASE-4422
[jira] [Resolved] (HBASE-4673) NPE in HFileReaderV2.close during major compaction when hfile.block.cache.size is set to 0
[ https://issues.apache.org/jira/browse/HBASE-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4673. -- Resolution: Fixed Fix Version/s: 0.92.0 Done NPE in HFileReaderV2.close during major compaction when hfile.block.cache.size is set to 0 --- Key: HBASE-4673 URL: https://issues.apache.org/jira/browse/HBASE-4673 Project: HBase Issue Type: Bug Affects Versions: 0.94.0 Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Minor Fix For: 0.92.0, 0.94.0 Attachments: 4673.txt On a test system got this exception when hfile.block.cache.size is set to 0:
java.lang.NullPointerException
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:321)
at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1065)
at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:539)
at org.apache.hadoop.hbase.regionserver.StoreFile.deleteReader(StoreFile.java:549)
at org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:1314)
at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:686)
at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1016)
at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:178)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Minor issue, as nobody in their right mind will have hfile.block.cache.size=0. Looks like this is due to HBASE-4422
[jira] [Resolved] (HBASE-4626) Filters unnecessarily copy byte arrays...
[ https://issues.apache.org/jira/browse/HBASE-4626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4626. -- Resolution: Fixed Fix Version/s: (was: 0.92.0) Hadoop Flags: Reviewed I prefer to keep this in trunk only for now. Please pull back in for 0.92 if you feel otherwise (the patch is simple and safe). Filters unnecessarily copy byte arrays... - Key: HBASE-4626 URL: https://issues.apache.org/jira/browse/HBASE-4626 Project: HBase Issue Type: Bug Components: regionserver Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.94.0 Attachments: 4626-v2.txt, 4626-v3.txt, 4626.txt Just looked at SingleCol and ValueFilter... And on every column compared they create a copy of the column and/or value portion of the KV.
[jira] [Resolved] (HBASE-4562) When split doing offlineParentInMeta encounters error, it'll cause data loss
[ https://issues.apache.org/jira/browse/HBASE-4562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4562. -- Resolution: Fixed Hadoop Flags: Reviewed Committed to 0.90, 0.92, and trunk. The 0.90 patch still did not apply to the current 0.90 branch. I applied the changes manually this time, but in the future it would be great to base patches off the latest state of the branch in SVN. When split doing offlineParentInMeta encounters error, it'll cause data loss Key: HBASE-4562 URL: https://issues.apache.org/jira/browse/HBASE-4562 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.90.4 Reporter: bluedavy Assignee: bluedavy Priority: Blocker Fix For: 0.90.5 Attachments: HBASE-4562-0.90.4.patch, HBASE-4562-0.90.patch, HBASE-4562-0.92.patch, HBASE-4562-trunk.patch, test-4562-0.90.4.txt, test-4562-0.90.txt, test-4562-0.92.txt, test-4562-trunk.txt Follow the steps below to replay the problem: 1. Change SplitTransaction.java as below, to mock the timeout error:
{code:title=SplitTransaction.java|borderStyle=solid}
if (!testing) {
  MetaEditor.offlineParentInMeta(server.getCatalogTracker(),
      this.parent.getRegionInfo(), a.getRegionInfo(), b.getRegionInfo());
  throw new IOException("some unexpected error in split");
}
{code}
2. Update the regionserver code and restart; 3. Create a table and put some data into it; 4. Split the table; 5. Kill the regionserver hosting the table; 6. Wait some time after the master's ServerShutdownHandler.process executes, then scan the table; you'll find the data written before is lost. We can fix the bug by applying the patch.
[jira] [Resolved] (HBASE-4563) When error occurs in this.parent.close(false) of split, the split region cannot write or read
[ https://issues.apache.org/jira/browse/HBASE-4563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4563.
--
Resolution: Fixed
Hadoop Flags: Reviewed

Committed to 0.90, 0.92, and trunk.

When error occurs in this.parent.close(false) of split, the split region cannot write or read

Key: HBASE-4563
URL: https://issues.apache.org/jira/browse/HBASE-4563
Project: HBase
Issue Type: Bug
Components: regionserver
Affects Versions: 0.90.4, 0.92.0
Reporter: bluedavy
Assignee: bluedavy
Priority: Blocker
Fix For: 0.90.5
Attachments: HBASE-4563-0.90.patch, HBASE-4563-0.92.patch, HBASE-4563-trunk.patch, test-4563-0.90.txt, test-4563-0.92.txt, test-4563-trunk.txt

Follow these steps to reproduce the problem:
1. Modify SplitTransaction.java as below to mock an HDFS error:
{code:title=SplitTransaction.java|borderStyle=solid}
List<StoreFile> hstoreFilesToSplit = this.parent.close(false);
throw new IOException("some unexpected error in close store files");
{code}
2. Deploy the modified regionserver code and restart.
3. Create a table and put some data into it.
4. Split the table.
5. Scan the table; it will fail.
The attached patch fixes the bug.
[jira] [Resolved] (HBASE-4556) Fix all incorrect uses of InternalScanner.next(...)
[ https://issues.apache.org/jira/browse/HBASE-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4556.
--
Resolution: Fixed

Committed to 0.92 and trunk.

Fix all incorrect uses of InternalScanner.next(...)

Key: HBASE-4556
URL: https://issues.apache.org/jira/browse/HBASE-4556
Project: HBase
Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Attachments: 4556-v1.txt, 4556.txt

There are cases all over the code where InternalScanner.next(...) is not used correctly. I see this a lot:
{code}
while (scanner.next(...)) {
}
{code}
The correct pattern is:
{code}
boolean more = false;
do {
  more = scanner.next(...);
} while (more);
{code}
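The difference between the two patterns matters because next(results) fills the results list and returns true only if more rows remain afterwards: the final batch arrives on a call that returns false. A self-contained sketch (ScannerPattern and its next() are stand-ins, not the real InternalScanner) showing the while-loop dropping the last batch:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Stand-in for InternalScanner.next(List): fills 'results' with the next row
// and returns true only if MORE rows remain afterwards. Illustrative names,
// not the real HBase classes.
public class ScannerPattern {
    static Iterator<String> rows;

    static boolean next(List<String> results) {
        if (rows.hasNext()) {
            results.add(rows.next());
        }
        return rows.hasNext();
    }

    // Incorrect pattern: the batch filled on the final (false-returning)
    // call is never processed.
    static List<String> scanBroken(List<String> data) {
        rows = data.iterator();
        List<String> seen = new ArrayList<>();
        List<String> batch = new ArrayList<>();
        while (next(batch)) {
            seen.addAll(batch);
            batch.clear();
        }
        return seen; // last row silently dropped
    }

    // Correct pattern: process the batch before checking 'more'.
    static List<String> scanCorrect(List<String> data) {
        rows = data.iterator();
        List<String> seen = new ArrayList<>();
        List<String> batch = new ArrayList<>();
        boolean more;
        do {
            more = next(batch);
            seen.addAll(batch);
            batch.clear();
        } while (more);
        return seen;
    }
}
```

With rows a, b, c, scanBroken returns only a and b, while scanCorrect returns all three — the same off-by-one that HBASE-4488 below hit during flushes.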
[jira] [Resolved] (HBASE-4102) atomicAppend: A put that appends to the latest version of a cell; i.e. reads current value then adds the bytes offered by the client to the tail and writes out a new entry
[ https://issues.apache.org/jira/browse/HBASE-4102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4102.
--
Resolution: Fixed
Hadoop Flags: Reviewed

Committed to trunk.

atomicAppend: A put that appends to the latest version of a cell; i.e. reads current value then adds the bytes offered by the client to the tail and writes out a new entry

Key: HBASE-4102
URL: https://issues.apache.org/jira/browse/HBASE-4102
Project: HBase
Issue Type: New Feature
Reporter: stack
Assignee: Lars Hofhansl
Fix For: 0.94.0
Attachments: 4102-v1.txt, 4102.txt

It's come up a few times that clients want to add to an existing cell rather than make a new cell each time. At our place, the frontend keeps a list of urls a user has visited — their md5s — and updates it as the user progresses. Rather than read, modify client-side, then write the new value back to hbase, it would be sweet if it could all be done in one operation on the hbase server. TSDB aims to be space efficient. Rather than pay the cost of the KV wrapper per metric, it would rather have a KV for an interval, and in this KV have a value that is all the metrics for the period. It could be done as a coprocessor, but this feels more like a fundamental feature. Benoît suggests that atomicAppend take a flag to indicate whether or not the client wants to see the resulting cell; often a client won't want to see the result, and in this case, why pay the price of formulating and delivering a response that the client just drops.
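The semantics being asked for — read the latest cell, concatenate the client's bytes under a lock, write a new version, optionally skip returning the result — can be modeled in a few lines. This is a toy in-memory sketch of those semantics, not the actual HBase implementation; the class and method names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of atomicAppend semantics: under a lock, read the latest value
// for a cell, append the client's bytes to the tail, and store the result
// as a new entry. 'returnResult' mirrors the suggested flag to skip
// shipping the merged value back to the client.
public class AtomicAppendSketch {
    private final Map<String, byte[]> store = new HashMap<>();

    public synchronized byte[] append(String cell, byte[] tail, boolean returnResult) {
        byte[] current = store.getOrDefault(cell, new byte[0]);
        byte[] merged = new byte[current.length + tail.length];
        System.arraycopy(current, 0, merged, 0, current.length);
        System.arraycopy(tail, 0, merged, current.length, tail.length);
        store.put(cell, merged);             // new version replaces the old one
        return returnResult ? merged : null; // skip the response payload if unwanted
    }
}
```

The point of doing this server-side is that the read and the write happen under one lock, so concurrent appenders cannot lose each other's tails the way client-side read-modify-write can.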
[jira] [Resolved] (HBASE-4335) Splits can create temporary holes in .META. that confuse clients and regionservers
[ https://issues.apache.org/jira/browse/HBASE-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4335.
--
Resolution: Fixed
Hadoop Flags: Reviewed

Integrated into 0.92 and trunk.

Splits can create temporary holes in .META. that confuse clients and regionservers

Key: HBASE-4335
URL: https://issues.apache.org/jira/browse/HBASE-4335
Project: HBase
Issue Type: Bug
Components: regionserver
Affects Versions: 0.90.4
Reporter: Joe Pallas
Assignee: Lars Hofhansl
Priority: Critical
Fix For: 0.92.0
Attachments: 4335-v2.txt, 4335-v3.txt, 4335-v4.txt, 4335-v5.txt, 4335.txt

When a SplitTransaction is performed, three updates are done to .META.:
1. The parent region is marked as splitting (and hence offline)
2. The first daughter region is added (same start key as parent)
3. The second daughter region is added (split key is start key)
(Later, the original parent region is deleted, but that's not important to this discussion.)

Steps 2 and 3 are actually done concurrently by SplitTransaction.DaughterOpener threads. While the master is notified when a split is complete, the only visibility that clients have is whether the daughter regions have appeared in .META.

If the second daughter is added to .META. first, then .META. will contain the (offline) parent region followed by the second daughter region. If the client looks up a key that is greater than (or equal to) the split key, the client will find the second daughter region and use it. If the key is less than the split key, the client will find the parent region and see that it is offline, triggering a retry.

If the first daughter is added to .META. before the second daughter, there is a window during which .META. has a hole: the first daughter effectively hides the parent region (same start key), but there is no entry for the second daughter. A region lookup will find the first daughter for all keys in the parent's range, but the first daughter does not include keys at or beyond the split key. See HBASE-4333 and HBASE-4334 for details on how this causes problems and for suggestions on mitigating this in the client and regionserver.
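The hole is easy to see if you model .META. as a sorted map keyed by region start key, with lookup taking the closest region starting at or below the row key (roughly how clients locate a region). This is an illustrative stand-in, not the real catalog code:

```java
import java.util.Map;
import java.util.TreeMap;

// Simulates client region lookup against .META.: regions keyed by start key,
// lookup returns the region whose start key is closest at or below the row.
public class MetaHoleSketch {
    public static String locate(TreeMap<String, String> meta, String row) {
        Map.Entry<String, String> e = meta.floorEntry(row);
        return e == null ? null : e.getValue();
    }

    public static void main(String[] args) {
        TreeMap<String, String> meta = new TreeMap<>();
        meta.put("a", "parent [a,z) OFFLINE");
        // Step 2 alone: the first daughter hides the parent (same start key).
        meta.put("a", "daughterA [a,m)");
        // Until step 3 adds daughterB [m,z), a row >= "m" resolves to
        // daughterA, which does not serve that range: the temporary hole.
        System.out.println(locate(meta, "q")); // daughterA -- wrong region
        meta.put("m", "daughterB [m,z)");
        System.out.println(locate(meta, "q")); // daughterB -- correct
    }
}
```

In the other ordering (second daughter first), the lookup for a low key still lands on the offline parent, which at least fails loudly and triggers a retry rather than routing to the wrong region.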
[jira] [Resolved] (HBASE-4488) Store could miss rows during flush
[ https://issues.apache.org/jira/browse/HBASE-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4488.
--
Resolution: Fixed

Store could miss rows during flush

Key: HBASE-4488
URL: https://issues.apache.org/jira/browse/HBASE-4488
Project: HBase
Issue Type: Sub-task
Components: regionserver
Affects Versions: 0.92.0, 0.94.0
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Fix For: 0.92.0
Attachments: 4488-add.txt, 4488.txt

While looking at HBASE-4344 I found that my change for HBASE-4241 contains a critical mistake: the while(scanner.next(kvs)) loop is incorrect and might miss the last edits.