[jira] [Commented] (HBASE-9249) Add cp hook before setting PONR in split
[ https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756037#comment-13756037 ]

Anoop Sam John commented on HBASE-9249:
---------------------------------------

In SimpleRegionObserver:
{quote}
+  public int getCtPreSplitAfterPONR() {
+    return ctPreSplitBeforePONR.get();
+  }
{quote}
Have to use ctPreSplitAfterPONR here.

{quote}
+    if (this.parent.getCoprocessorHost() != null) {
+      this.parent.getCoprocessorHost().preSplitBeforePONR(this.splitrow);
+    }
{quote}
This hook returns SplitInfo. Where will it get used?


Add cp hook before setting PONR in split
----------------------------------------

                Key: HBASE-9249
                URL: https://issues.apache.org/jira/browse/HBASE-9249
            Project: HBase
         Issue Type: Sub-task
           Reporter: rajeshbabu
           Assignee: rajeshbabu
        Attachments: HBASE-9249.patch, HBASE-9249_v2.patch, HBASE-9249_v3.patch, HBASE-9249_v4.patch

This hook helps to perform the split on a user region and its corresponding index region such that either both are split or neither is. With this hook, the split for the user and index regions proceeds as follows:

user region
===
1) Create the splitting znode for the user region split
2) Close the parent user region
3) Split the user region storefiles
4) Instantiate the child regions of the user region

Through the new hook we can drive the index region transitions as below:

index region
===
5) Create the splitting znode for the index region split
6) Close the parent index region
7) Split the storefiles of the index region
8) Instantiate the child regions of the index region

If any of steps 5-8 fails, roll those steps back and return null; on a null return, throw an exception so that steps 1-4 are rolled back as well.

9) Set PONR
10) Do a batch put of the offline and split entries for the user and index regions

index region
===
11) Open the daughters of the index regions and transition the znode to split. This step will be done through the preSplitAfterPONR hook. Opening index regions before opening user regions helps to avoid put failures when there is a colocation mismatch (this can happen if the user regions have finished opening but the index regions are still opening).

user region
===
12) Open the daughters of the user regions and transition the znode to split.

Even if the region server crashes, at the end both user and index regions will be split, or neither will be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
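The all-or-nothing ordering described in the steps above (index-region steps 5-8 run inside the pre-PONR hook; a failure rolls back the hook's own steps and then, via a null return, triggers rollback of user-region steps 1-4) can be sketched in plain Java. This is a hypothetical illustration of the pattern, not HBase code; all names here are invented:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch (not the HBase API) of the rollback protocol: run the
// hook's steps in order; if one fails, undo the completed ones in reverse
// order and return null so the caller rolls back its own earlier steps too.
class TwoPhaseSplitSketch {
    interface Step {
        void run() throws Exception;
        void rollback();
    }

    /** Runs steps in order; on failure rolls back completed steps and returns null. */
    static Deque<Step> runOrRollback(Step... steps) {
        Deque<Step> done = new ArrayDeque<>();
        for (Step s : steps) {
            try {
                s.run();
                done.push(s);
            } catch (Exception e) {
                while (!done.isEmpty()) {
                    done.pop().rollback();   // undo in reverse order
                }
                return null;                 // signals the caller to roll back steps 1-4
            }
        }
        return done;
    }
}
```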
[jira] [Updated] (HBASE-6581) Build with hadoop.profile=3.0
[ https://issues.apache.org/jira/browse/HBASE-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Charles updated HBASE-6581:

    Attachment: HBASE-6581-5.patch

New patch with the hsync name + lazy instantiation of the method + caching of the method. No additional synchronization, as the existing FSDataOutputStream field is not synchronized, relying on the callers' synchronization.


Build with hadoop.profile=3.0
-----------------------------

                Key: HBASE-6581
                URL: https://issues.apache.org/jira/browse/HBASE-6581
            Project: HBase
         Issue Type: Bug
           Reporter: Eric Charles
           Assignee: Eric Charles
        Attachments: HBASE-6581-1.patch, HBASE-6581-20130821.patch, HBASE-6581-2.patch, HBASE-6581-3.patch, HBASE-6581-4.patch, HBASE-6581-5.patch, HBASE-6581.diff, HBASE-6581.diff

Building trunk with hadoop.profile=3.0 gives exceptions (see [1]) due to a change in the hadoop maven module naming (and also the usage of 3.0-SNAPSHOT instead of 3.0.0-SNAPSHOT in hbase-common). I can provide a patch that moves most of the hadoop dependencies into their respective profiles and defines the correct hadoop deps in the 3.0 profile. Please tell me if it's ok to go this way. Thx, Eric

[1]
$ mvn clean install -Dhadoop.profile=3.0
[INFO] Scanning for projects...
[ERROR] The build could not read 3 projects - [Help 1]
[ERROR]
[ERROR] The project org.apache.hbase:hbase-server:0.95-SNAPSHOT (/d/hbase.svn/hbase-server/pom.xml) has 3 errors
[ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-common:jar is missing. @ line 655, column 21
[ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-annotations:jar is missing. @ line 659, column 21
[ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-minicluster:jar is missing. @ line 663, column 21
[ERROR]
[ERROR] The project org.apache.hbase:hbase-common:0.95-SNAPSHOT (/d/hbase.svn/hbase-common/pom.xml) has 3 errors
[ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-common:jar is missing. @ line 170, column 21
[ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-annotations:jar is missing. @ line 174, column 21
[ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-minicluster:jar is missing. @ line 178, column 21
[ERROR]
[ERROR] The project org.apache.hbase:hbase-it:0.95-SNAPSHOT (/d/hbase.svn/hbase-it/pom.xml) has 3 errors
[ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-common:jar is missing. @ line 220, column 18
[ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-annotations:jar is missing. @ line 224, column 21
[ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-minicluster:jar is missing. @ line 228, column 21
[ERROR]
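The fix the reporter proposes amounts to giving each hadoop profile its own dependency block with explicit versions, so the version is no longer missing when hadoop.profile=3.0 is selected. A minimal sketch of what such a profile could look like; the profile id, property names, and version are illustrative, not the committed patch:

```xml
<profile>
  <id>hadoop-3.0</id>
  <activation>
    <property>
      <name>hadoop.profile</name>
      <value>3.0</value>
    </property>
  </activation>
  <properties>
    <hadoop.version>3.0.0-SNAPSHOT</hadoop.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-annotations</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-minicluster</artifactId>
      <version>${hadoop.version}</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</profile>
```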
[jira] [Commented] (HBASE-9249) Add cp hook before setting PONR in split
[ https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756055#comment-13756055 ]

rajeshbabu commented on HBASE-9249:
-----------------------------------

bq. Have to use ctPreSplitAfterPONR
My bad. I will change this.

bq. This hook returns SplitInfo. Where this will get used?
Here some more changes are needed for the secondary index, Anoop. Presently we are not using the SplitInfo.
[jira] [Created] (HBASE-9411) Increment / decrement of rpcCount in RpcServer#Connection is not protected by synchronization
Ted Yu created HBASE-9411:
--------------------------

            Summary: Increment / decrement of rpcCount in RpcServer#Connection is not protected by synchronization
                Key: HBASE-9411
                URL: https://issues.apache.org/jira/browse/HBASE-9411
            Project: HBase
         Issue Type: Bug
           Reporter: Ted Yu
           Priority: Minor

Here is the related code:
{code}
    /* Decrement the outstanding RPC count */
    protected void decRpcCount() {
      rpcCount--;
    }

    /* Increment the outstanding RPC count */
    protected void incRpcCount() {
      rpcCount++;
    }
{code}
Even though rpcCount is volatile, increment and decrement are not atomic operations (each is a separate read, modify, and write), so concurrent threads can interleave and lose updates. See http://stackoverflow.com/questions/7805192/is-a-volatile-int-in-java-thread-safe
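One common fix for this class of bug (a sketch, not the patch attached to this issue) is to replace the volatile int with java.util.concurrent.atomic.AtomicInteger, whose incrementAndGet/decrementAndGet perform the read-modify-write as a single atomic operation:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the thread-safe alternative: AtomicInteger makes the
// read-modify-write atomic, so no update is lost even when many
// threads call inc/dec concurrently. Class name is illustrative.
class RpcCountSketch {
    private final AtomicInteger rpcCount = new AtomicInteger();

    /* Decrement the outstanding RPC count */
    protected void decRpcCount() {
        rpcCount.decrementAndGet();
    }

    /* Increment the outstanding RPC count */
    protected void incRpcCount() {
        rpcCount.incrementAndGet();
    }

    protected int getRpcCount() {
        return rpcCount.get();
    }
}
```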
[jira] [Created] (HBASE-9412) Startup scripts create 2 .out files.
Jean-Marc Spaggiari created HBASE-9412:
---------------------------------------

            Summary: Startup scripts create 2 .out files.
                Key: HBASE-9412
                URL: https://issues.apache.org/jira/browse/HBASE-9412
            Project: HBase
         Issue Type: Bug
         Components: scripts
   Affects Versions: 0.96.0
           Reporter: Jean-Marc Spaggiari
           Priority: Minor

When starting HBase with bin/start-hbase.sh, the script creates 2 .out files:
{code}
-rw-r--r-- 1 jmspaggiari jmspaggiari 0 Aug 31 15:38 hbase-jmspaggiari-master-t430s.out
-rw-r--r-- 1 jmspaggiari jmspaggiari 0 Aug 31 15:38 hbase-jmspaggiari-master-t430s.out.1
{code}
It should create only one.
[jira] [Created] (HBASE-9413) WebUI says .META. table but the table got renamed to hbase:meta. Need to update the UI.
Jean-Marc Spaggiari created HBASE-9413:
---------------------------------------

            Summary: WebUI says .META. table but the table got renamed to hbase:meta. Need to update the UI.
                Key: HBASE-9413
                URL: https://issues.apache.org/jira/browse/HBASE-9413
            Project: HBase
         Issue Type: Bug
   Affects Versions: 0.96.0
           Reporter: Jean-Marc Spaggiari
           Priority: Minor

In the UI, we say "The .META. table holds references to all User Table regions", but the table name is now hbase:meta, not .META. We need to update this.
[jira] [Commented] (HBASE-6581) Build with hadoop.profile=3.0
[ https://issues.apache.org/jira/browse/HBASE-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756187#comment-13756187 ]

Eric Charles commented on HBASE-6581:
-------------------------------------

[~saint@gmail.com] Don't commit for now, I have an issue running hserver with that last patch (not sure if I messed up my env with all those various hadoop/hbase versions, or if the 2 weeks of changes I just pulled made things break here...). Anyway, any feedback on the last patch is always welcome.
[jira] [Created] (HBASE-9414) start-hbase.cmd doesn't need the execute flag.
Jean-Marc Spaggiari created HBASE-9414:
---------------------------------------

            Summary: start-hbase.cmd doesn't need the execute flag.
                Key: HBASE-9414
                URL: https://issues.apache.org/jira/browse/HBASE-9414
            Project: HBase
         Issue Type: Bug
   Affects Versions: 0.96.0
           Reporter: Jean-Marc Spaggiari
           Priority: Trivial

When you type start- and hit Tab, since start-hbase.cmd has the execute flag set, completion only goes up to start-hbase. We should remove the execute flag from this script.
[jira] [Commented] (HBASE-9249) Add cp hook before setting PONR in split
[ https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756199#comment-13756199 ]

Anoop Sam John commented on HBASE-9249:
---------------------------------------

bq. Here some more changes are needed for secondary index Anoop. presently not using the SplitInfo.

Going through the changes done in HIndex:

1. Some ops are done in the before-PONR hook, and if they do not succeed we don't want to continue with the split but to roll back. Can this be done using bypass() on the CP? In case of bypass, the core code can throw an IOE so as to allow the upper layer to do the rollback.

2. Both the actual and the index region get split. We would like to write both sets of daughter region entries to META as one Put. For that, this before-PONR or after-PONR CP hook should provide some way to change what gets written to the META table. This is how we handle the batch put case: we pass the WALEdit to the hook, the hook can change it, and finally the core writes the resultant edit. Can we follow the same approach here?
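The approach in point 2 above (the core builds the pending edits, each hook may rewrite them, and the core writes whatever comes back) can be sketched in plain Java. This is a hypothetical illustration of the pattern, not the HBase coprocessor API; all names are invented:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch (not HBase API) of the pass-the-edit-to-the-hook
// pattern: the core hands the pending META edits to each registered hook,
// the hook may add or change entries, and the core writes the result.
class MetaEditHookSketch {
    interface PreMetaWriteHook {
        void preMetaWrite(List<String> edits);   // hook mutates the edit list in place
    }

    private final List<PreMetaWriteHook> hooks = new ArrayList<>();

    void register(PreMetaWriteHook hook) {
        hooks.add(hook);
    }

    /** Core side: run every hook over the edits, then "write" the result. */
    List<String> writeMeta(List<String> edits) {
        for (PreMetaWriteHook h : hooks) {
            h.preMetaWrite(edits);
        }
        return edits;   // stand-in for the actual write of the batched Put
    }
}
```

With this shape, an index-region coprocessor could append its daughter-region entries to the same batch, so both sets of entries reach META in one Put.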
[jira] [Updated] (HBASE-4811) Support reverse Scan
[ https://issues.apache.org/jira/browse/HBASE-4811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chunhui shen updated HBASE-4811:

    Attachment: hbase-4811-trunkv20.patch

Patch v20 removes the config in reversed Scan, and now it's not necessary to set the start row in a reversed Scan.


Support reverse Scan
--------------------

                Key: HBASE-4811
                URL: https://issues.apache.org/jira/browse/HBASE-4811
            Project: HBase
         Issue Type: New Feature
         Components: Client
   Affects Versions: 0.20.6, 0.94.7
           Reporter: John Carrino
           Assignee: chunhui shen
            Fix For: 0.98.0
        Attachments: 4811-0.94-v3.txt, 4811-trunk-v10.txt, 4811-trunk-v5.patch, HBase-4811-0.94.3modified.txt, HBase-4811-0.94-v2.txt, hbase-4811-trunkv11.patch, hbase-4811-trunkv12.patch, hbase-4811-trunkv13.patch, hbase-4811-trunkv14.patch, hbase-4811-trunkv15.patch, hbase-4811-trunkv16.patch, hbase-4811-trunkv17.patch, hbase-4811-trunkv18.patch, hbase-4811-trunkv19.patch, hbase-4811-trunkv1.patch, hbase-4811-trunkv20.patch, hbase-4811-trunkv4.patch, hbase-4811-trunkv6.patch, hbase-4811-trunkv7.patch, hbase-4811-trunkv8.patch, hbase-4811-trunkv9.patch

A reversed scan means scanning the rows backward, with StartRow bigger than StopRow. For example, for the following rows:

aaa/c1:q1/value1
aaa/c1:q2/value2
bbb/c1:q1/value1
bbb/c1:q2/value2
ccc/c1:q1/value1
ccc/c1:q2/value2
ddd/c1:q1/value1
ddd/c1:q2/value2
eee/c1:q1/value1
eee/c1:q2/value2

you could do a reversed scan from 'ddd' to 'bbb' (exclusive) like this:
{code}
Scan scan = new Scan();
scan.setStartRow(Bytes.toBytes("ddd"));
scan.setStopRow(Bytes.toBytes("bbb"));
scan.setReversed(true);
for (Result result : htable.getScanner(scan)) {
  System.out.println(result);
}
{code}
Also you could do the reversed scan from the shell like this:
{code}
hbase> scan 'table', {REVERSED => true, STARTROW => 'ddd', STOPROW => 'bbb'}
{code}
And the output is:

ddd/c1:q1/value1
ddd/c1:q2/value2
ccc/c1:q1/value1
ccc/c1:q2/value2

NOTE: when setting reversed as true for a client scan, you must set the start row, else an exception will be thrown. Through {@link Scan#createBiggestByteArray(int)} you can get a big enough byte array to use as the start row.

All the documentation I find about HBase says that if you want forward and reverse scans you should just build 2 tables, one ascending and one descending. Is there a fundamental reason that HBase only supports forward Scan? It seems like a lot of extra space overhead and coding overhead (to keep them in sync) to support 2 tables. I am assuming this has been discussed before, but I can't find the discussions anywhere about why it would be infeasible.
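For intuition, the reversed-scan semantics above (start row greater than stop row, stop row exclusive) mirror reverse iteration over a sorted key space. A plain-JDK analogy using TreeMap, illustrative only and not HBase code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Plain-JDK analogy for reversed-scan semantics: walk a sorted key space
// backward from startRow (inclusive) down to stopRow (exclusive).
class ReverseScanAnalogy {
    static List<String> reverseScan(TreeMap<String, String> rows,
                                    String startRow, String stopRow) {
        // headMap(startRow, true) = keys <= startRow; descendingKeySet()
        // reverses the order; stop once we reach the exclusive stop row.
        List<String> out = new ArrayList<>();
        for (String key : rows.headMap(startRow, true).descendingKeySet()) {
            if (key.compareTo(stopRow) <= 0) break;   // stopRow is exclusive
            out.add(key);
        }
        return out;
    }
}
```

With the example rows above, scanning from "ddd" down to "bbb" visits "ddd" then "ccc", matching the output shown in the description.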
[jira] [Updated] (HBASE-4811) Support reverse Scan
[ https://issues.apache.org/jira/browse/HBASE-4811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chunhui shen updated HBASE-4811:

    Release Note: Do a reversed client scan by setting 'reversed' as true in Scan.java.
(was: Do a reversed client scan by setting 'reversed' as true in Scan.java. A new configuration hbase.client.reversedscanner.maxbyte.length is introduced for reversed scan. Its value represents at most how many consecutive (byte)0xff bytes a region's endRow may end with. E.g. the default value of 9 means that, for all regions, the number of consecutive (byte)0xff bytes the endRow ends with is less than nine.)
[jira] [Commented] (HBASE-9383) Have Cell interface extend HeapSize interface
[ https://issues.apache.org/jira/browse/HBASE-9383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756361#comment-13756361 ]

Matt Corgan commented on HBASE-9383:
------------------------------------

Not sure I understand where it would be used, but when data block encoding is applied to a group of cells, they lose their individual heap sizes because they are composed of diffs of previous cells or references into a dictionary of common byte strings. Would this return the heap size of the cell as if it were a KeyValue?


Have Cell interface extend HeapSize interface
---------------------------------------------

                Key: HBASE-9383
                URL: https://issues.apache.org/jira/browse/HBASE-9383
            Project: HBase
         Issue Type: Sub-task
   Affects Versions: 0.96.0
           Reporter: Jonathan Hsieh

From the review of HBASE-9359:

bq. Stack: Cell should implement HeapSize? That sounds right.
bq. Ram: +1 for Cell extending HeapSize.
bq. Jon: I'll look into Cell extending HeapSize as a follow on. It doesn't interfere with the signature, and if an older 0.96 client talks to a 0.96.1 rs that has the change, it won't matter.
[jira] [Commented] (HBASE-4811) Support reverse Scan
[ https://issues.apache.org/jira/browse/HBASE-4811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756371#comment-13756371 ]

Hadoop QA commented on HBASE-4811:
----------------------------------

{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12601109/hbase-4811-trunkv20.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 18 new or modified tests.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:green}+1 core tests{color}. The patch passed unit tests in .

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7006//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7006//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7006//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7006//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7006//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7006//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7006//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7006//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7006//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7006//console

This message is automatically generated.