[jira] [Commented] (HBASE-10855) Enable hfilev3 by default
[ https://issues.apache.org/jira/browse/HBASE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956147#comment-13956147 ]

ramkrishna.s.vasudevan commented on HBASE-10855:
------------------------------------------------

For me too it passes locally after changing the version to V3. But just looking at the test case, one guess would be that because we use V3, once we flush we would at least write the tag length (of type short). Here we only make two puts for two different families. The size of the HFile may be bigger by 2 bytes now. Will that be the reason here?

----
Enable hfilev3 by default

Key: HBASE-10855
URL: https://issues.apache.org/jira/browse/HBASE-10855
Project: HBase
Issue Type: Sub-task
Components: HFile
Reporter: stack
Assignee: stack
Fix For: 0.99.0
Attachments: 10855.txt, 10855.txt, 10855.txt, 10855.txt

Distributed log replay needs this. Should be on by default in 1.0/0.99.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
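The guess above can be made concrete with a small back-of-the-envelope sketch. This is not HBase's actual serialization code; the class and method names are illustrative, and it only captures the arithmetic the comment describes: if HFile v3 writes a 2-byte tags-length short per cell even when no tags are present, two cells grow the file by 4 bytes.

```java
public class CellSizeSketch {
    // Hypothetical size arithmetic, not HBase's serializer: under v2 each
    // KeyValue is written as int keyLength, int valueLength, key, value.
    static int v2Size(int keyLen, int valueLen) {
        return 4 + 4 + keyLen + valueLen;
    }

    // Under v3, per the comment, a short tagsLength (2 bytes) is also
    // written, even when the cell carries no tags.
    static int v3Size(int keyLen, int valueLen) {
        return v2Size(keyLen, valueLen) + 2;
    }

    public static void main(String[] args) {
        // Two puts to two different families -> two cells -> 4 extra bytes.
        int perCell = v3Size(30, 10) - v2Size(30, 10);
        System.out.println("extra bytes for two cells: " + 2 * perCell);
    }
}
```

If the failing assertion in the test compares an expected HFile size against the actual one, this per-cell delta would explain an off-by-2-per-cell mismatch.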
[jira] [Commented] (HBASE-10855) Enable hfilev3 by default
[ https://issues.apache.org/jira/browse/HBASE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956150#comment-13956150 ]

stack commented on HBASE-10855:
-------------------------------

Let me look at it [~ram_krish]. I have a little rig here so I can dig in. Will bug you fellows if I can't figure it out. It seems like a good test, but a little fragile anyway.

----
Enable hfilev3 by default

Key: HBASE-10855
URL: https://issues.apache.org/jira/browse/HBASE-10855
Project: HBase
Issue Type: Sub-task
Components: HFile
Reporter: stack
Assignee: stack
Fix For: 0.99.0
Attachments: 10855.txt, 10855.txt, 10855.txt, 10855.txt

Distributed log replay needs this. Should be on by default in 1.0/0.99.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails
[ https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956153#comment-13956153 ]

Hudson commented on HBASE-10867:
--------------------------------

FAILURE: Integrated in HBase-TRUNK #5053 (See [https://builds.apache.org/job/HBase-TRUNK/5053/])
HBASE-10867 TestRegionPlacement#testRegionPlacement occasionally fails (tedyu: rev 1583515)
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement.java

----
TestRegionPlacement#testRegionPlacement occasionally fails

Key: HBASE-10867
URL: https://issues.apache.org/jira/browse/HBASE-10867
Project: HBase
Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
Fix For: 0.99.0
Attachments: 10867-v1.txt, 10867-v2.txt

From https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/ :
{code}
java.lang.ArrayIndexOutOfBoundsException: 10
	at java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368)
	at java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377)
	at org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
	at org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
	at org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
	at org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
{code}
In the setup:
{code}
TEST_UTIL.startMiniCluster(SLAVES);
{code}
where SLAVES is 10. So when 10 was used in TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), we would get ArrayIndexOutOfBoundsException.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
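The failure mode is a kill index reaching the list size (index 10 into a 10-element server list). The actual fix is in the attached 10867-v1/v2 patches; the sketch below (class and method names are illustrative, not from the patch) only shows why bounding the random index with Random.nextInt(size) keeps the lookup in range:

```java
import java.util.Random;
import java.util.concurrent.CopyOnWriteArrayList;

public class KillIndexSketch {
    // Random.nextInt(n) returns a value in [0, n), so using the live list
    // size as the bound means get(killIndex) can never see index == size.
    static int pickKillIndex(Random rnd, int clusterSize) {
        return rnd.nextInt(clusterSize);
    }

    public static void main(String[] args) {
        CopyOnWriteArrayList<String> servers = new CopyOnWriteArrayList<>();
        for (int i = 0; i < 10; i++) servers.add("rs-" + i); // SLAVES = 10
        Random rnd = new Random();
        for (int trial = 0; trial < 1000; trial++) {
            // Never throws ArrayIndexOutOfBoundsException: index is in [0, 9].
            servers.get(pickKillIndex(rnd, servers.size()));
        }
        System.out.println("no ArrayIndexOutOfBoundsException in 1000 trials");
    }
}
```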
[jira] [Commented] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work
[ https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956152#comment-13956152 ]

Hudson commented on HBASE-10848:
--------------------------------

FAILURE: Integrated in HBase-TRUNK #5053 (See [https://builds.apache.org/job/HBase-TRUNK/5053/])
HBASE-10848 Filter SingleColumnValueFilter combined with NullComparator does not work (Fabien) (tedyu: rev 1583511)
* /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/NullComparator.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestNullComparator.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java

----
Filter SingleColumnValueFilter combined with NullComparator does not work

Key: HBASE-10848
URL: https://issues.apache.org/jira/browse/HBASE-10848
Project: HBase
Issue Type: Bug
Components: Filters
Affects Versions: 0.96.1.1
Reporter: Fabien Le Gallo
Assignee: Fabien Le Gallo
Fix For: 0.99.0, 0.98.2
Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, HBASE_10848-v3.patch, HBASE_10848-v4.patch, HBaseRegression.java, TestScanWithNullComparable.java

I want to filter out from the scan the rows that do not have a specific column qualifier. For this purpose I use the filter SingleColumnValueFilter combined with the NullComparator. But every time I use this in a scan, I get the following exception:
{code}
java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
	at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:47)
	at com.xxx.xxx.test.HBaseRegression.nullComparator(HBaseRegression.java:92)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
	at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:391)
	at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:44)
	... 25 more
Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 7998309028985532303 number_of_rows: 100 close_scanner: false next_call_seq: 0
	at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3011)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
	at
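The report relies on NullComparator "matching" only when a cell value is absent or empty. The following is a toy stand-in for that comparison, not the patched HBase NullComparator code (the class and method names here are illustrative); it only pins down the semantics the fix is meant to provide:

```java
public class NullComparatorSketch {
    // Illustrative comparator intent: a "match" (return 0) only when the
    // compared value is null or empty; any real value compares as non-zero.
    // Combined with SingleColumnValueFilter, a match on null would keep
    // rows where the target qualifier holds no value.
    static int compareToNull(byte[] value) {
        return (value == null || value.length == 0) ? 0 : 1;
    }

    public static void main(String[] args) {
        System.out.println(compareToNull(null));            // match
        System.out.println(compareToNull(new byte[0]));     // match
        System.out.println(compareToNull("v1".getBytes())); // no match
    }
}
```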
[jira] [Commented] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper
[ https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956155#comment-13956155 ]

Mikhail Antonov commented on HBASE-10866:
-----------------------------------------

[~lhofhansl] thanks for the feedback! Yeah, I totally understand this abstraction effort needs to be finished everywhere (and also the fact that it's quite a bit of work, affecting many places in the codebase). The piecemeal approach would provide for easier review/feedback of patches to ensure they're in line with the goal (as [~stack] and [~cos] noted), and for better work structuring and parallelization.

----
Decouple HLogSplitterHandler from ZooKeeper

Key: HBASE-10866
URL: https://issues.apache.org/jira/browse/HBASE-10866
Project: HBase
Issue Type: Improvement
Components: regionserver, Zookeeper
Reporter: Mikhail Antonov
Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, HBaseConsensus.pdf

As some sort of follow-up or initial step towards HBASE-10296... Whatever consensus algorithm/library may be chosen, perhaps one of the first practical steps towards this goal would be to better abstract the ZK-related API and details, which are now spread throughout the codebase (mostly leaked through ZkUtil, ZooKeeperWatcher and listeners). I'd like to propose a series of patches to help better abstract out ZooKeeper (and then help develop consensus APIs). Here is the first version of the patch for initial review (then I'm planning to work on other handlers in the regionserver, and then perhaps start working on abstracting listeners).

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (HBASE-10881) Support reverse scan in thrift2
[ https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956159#comment-13956159 ]

Hadoop QA commented on HBASE-10881:
-----------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12637993/HBASE-10881-trunk-v1.diff
against trunk revision .

ATTACHMENT ID: 12637993

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests.

{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100:
+ private static final org.apache.thrift.protocol.TField REVERSED_FIELD_DESC = new org.apache.thrift.protocol.TField("reversed", org.apache.thrift.protocol.TType.BOOL, (short)11);
+ private _Fields optionals[] = {_Fields.START_ROW,_Fields.STOP_ROW,_Fields.COLUMNS,_Fields.CACHING,_Fields.MAX_VERSIONS,_Fields.TIME_RANGE,_Fields.FILTER_STRING,_Fields.BATCH_SIZE,_Fields.ATTRIBUTES,_Fields.AUTHORIZATIONS,_Fields.REVERSED};
+ tmpMap.put(_Fields.REVERSED, new org.apache.thrift.meta_data.FieldMetaData("reversed", org.apache.thrift.TFieldRequirementType.OPTIONAL,
+ org.apache.thrift.protocol.TList _list117 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+ org.apache.thrift.protocol.TMap _map120 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());

{color:green}+1 site{color}. The mvn site goal succeeds with this patch.

{color:red}-1 core tests{color}. The patch failed these unit tests:

{color:red}-1 core zombie tests{color}. There are 1 zombie test(s):
	at org.apache.hadoop.hbase.mapreduce.TestTableMapReduceBase.testMultiRegionTable(TestTableMapReduceBase.java:96)

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9152//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9152//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9152//console

This message is automatically generated.

----
Support reverse scan in thrift2

Key: HBASE-10881
URL: https://issues.apache.org/jira/browse/HBASE-10881
Project: HBase
Issue Type: New Feature
Affects Versions: 0.99.0
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
Attachments: HBASE-10881-trunk-v1.diff

Support reverse scan in thrift2.
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work
[ https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956164#comment-13956164 ]

Hudson commented on HBASE-10848:
--------------------------------

SUCCESS: Integrated in HBase-0.98 #258 (See [https://builds.apache.org/job/HBase-0.98/258/])
HBASE-10848 Filter SingleColumnValueFilter combined with NullComparator does not work (Fabien) (tedyu: rev 1583510)
* /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/NullComparator.java
* /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestNullComparator.java
* /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java

----
Filter SingleColumnValueFilter combined with NullComparator does not work

Key: HBASE-10848
URL: https://issues.apache.org/jira/browse/HBASE-10848
Project: HBase
Issue Type: Bug
Components: Filters
Affects Versions: 0.96.1.1
Reporter: Fabien Le Gallo
Assignee: Fabien Le Gallo
Fix For: 0.99.0, 0.98.2
Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, HBASE_10848-v3.patch, HBASE_10848-v4.patch, HBaseRegression.java, TestScanWithNullComparable.java

(Issue description and client stack trace identical to the earlier HBASE-10848 message above.)
[jira] [Commented] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work
[ https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956165#comment-13956165 ]

Hudson commented on HBASE-10848:
--------------------------------

FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #242 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/242/])
HBASE-10848 Filter SingleColumnValueFilter combined with NullComparator does not work (Fabien) (tedyu: rev 1583510)
* /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/NullComparator.java
* /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestNullComparator.java
* /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java

----
Filter SingleColumnValueFilter combined with NullComparator does not work

Key: HBASE-10848
URL: https://issues.apache.org/jira/browse/HBASE-10848
Project: HBase
Issue Type: Bug
Components: Filters
Affects Versions: 0.96.1.1
Reporter: Fabien Le Gallo
Assignee: Fabien Le Gallo
Fix For: 0.99.0, 0.98.2
Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, HBASE_10848-v3.patch, HBASE_10848-v4.patch, HBaseRegression.java, TestScanWithNullComparable.java

(Issue description and client stack trace identical to the earlier HBASE-10848 message above.)
[jira] [Commented] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
[ https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956168#comment-13956168 ]

ramkrishna.s.vasudevan commented on HBASE-10531:
------------------------------------------------

The test failures
{code}
org.apache.hadoop.hbase.master.TestMasterFailover.testSimpleMasterFailover
org.apache.hadoop.hbase.regionserver.TestHRegionBusyWait.testBatchPut
{code}
did not occur locally in my test runs, nor in the HadoopQA run. The subsequent runs also do not have these failures. JFYI.

----
Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

Key: HBASE-10531
URL: https://issues.apache.org/jira/browse/HBASE-10531
Project: HBase
Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Fix For: 0.99.0
Attachments: HBASE-10531.patch, HBASE-10531_1.patch, HBASE-10531_12.patch, HBASE-10531_13.patch, HBASE-10531_13.patch, HBASE-10531_2.patch, HBASE-10531_3.patch, HBASE-10531_4.patch, HBASE-10531_5.patch, HBASE-10531_6.patch, HBASE-10531_7.patch, HBASE-10531_8.patch, HBASE-10531_9.patch

Currently the byte[] key passed to HFileScanner.seekTo and HFileScanner.reseekTo is a combination of row, cf, qual, type and ts. The caller forms this by using kv.getBuffer, which is actually deprecated. So we should see how this can be achieved once kv.getBuffer is removed.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
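The issue describes the seek key as a flat combination of row, cf, qualifier, timestamp and type. A self-contained sketch of one such flat layout (the class and helper names are illustrative; the exact layout here is a plausible rendering of KeyValue's key format, not code from the patches) shows what callers currently assemble via kv.getBuffer:

```java
import java.nio.ByteBuffer;

public class FlatKeySketch {
    // Flat key: 2-byte row length, row, 1-byte family length, family,
    // qualifier, 8-byte timestamp, 1-byte type - the components the issue
    // says are combined into the byte[] passed to seekTo/reseekTo.
    static byte[] flatKey(byte[] row, byte[] family, byte[] qualifier,
                          long ts, byte type) {
        ByteBuffer buf = ByteBuffer.allocate(
            2 + row.length + 1 + family.length + qualifier.length + 8 + 1);
        buf.putShort((short) row.length).put(row);
        buf.put((byte) family.length).put(family);
        buf.put(qualifier);
        buf.putLong(ts).put(type);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] key = flatKey("r1".getBytes(), "f".getBytes(),
                             "q".getBytes(), 42L, (byte) 4);
        System.out.println(key.length); // 2 + 2 + 1 + 1 + 1 + 8 + 1 = 16
    }
}
```

The sub-task is about replacing this raw byte[] handoff with something that does not require reaching into the KeyValue's backing buffer.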
[jira] [Commented] (HBASE-10878) Operator | for visibility label doesn't work
[ https://issues.apache.org/jira/browse/HBASE-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956181#comment-13956181 ]

Andrew Purtell commented on HBASE-10878:
----------------------------------------

Thanks for trying this stuff out [~tedyu]. I filed HBASE-10883 as follow up to this issue.

----
Operator | for visibility label doesn't work

Key: HBASE-10878
URL: https://issues.apache.org/jira/browse/HBASE-10878
Project: HBase
Issue Type: Bug
Reporter: Ted Yu

I used setup similar to that from HBASE-10863, with fix for HBASE-10863:
{code}
hbase(main):003:0> scan 'hbase:labels'
ROW                COLUMN+CELL
 \x00\x00\x00\x01  column=f:\x00, timestamp=1395944796030, value=system
 \x00\x00\x00\x01  column=f:hbase, timestamp=1395944796030, value=
 \x00\x00\x00\x02  column=f:\x00, timestamp=1395951045442, value=TOP_SECRET
 \x00\x00\x00\x02  column=f:hrt_qa, timestamp=1395951229682, value=
 \x00\x00\x00\x02  column=f:hrt_qa1, timestamp=1395951270297, value=
 \x00\x00\x00\x02  column=f:mapred, timestamp=1395958442326, value=
 \x00\x00\x00\x03  column=f:\x00, timestamp=1395952069731, value=TOP_TOP_SECRET
 \x00\x00\x00\x03  column=f:mapred, timestamp=1395956032141, value=
 \x00\x00\x00\x04  column=f:\x00, timestamp=1395971516605, value=A
 \x00\x00\x00\x04  column=f:oozie, timestamp=1395971647859, value=
 \x00\x00\x00\x05  column=f:\x00, timestamp=1395971520327, value=B
5 row(s) in 0.0580 seconds
{code}
I did the following as user oozie using hbase shell:
{code}
hbase(main):001:0> scan 'tb', { AUTHORIZATIONS => ['A'] }
ROW                COLUMN+CELL
 row               column=f1:q, timestamp=1395971660859, value=v1
 row2              column=f1:q, timestamp=1395972271343, value=v2
 row3              column=f1:q, timestamp=1396067477702, value=v3
3 row(s) in 0.2050 seconds
hbase(main):002:0> scan 'tb', { AUTHORIZATIONS => ['A|B'] }
ROW                COLUMN+CELL
 row2              column=f1:q, timestamp=1395972271343, value=v2
1 row(s) in 0.0150 seconds
hbase(main):003:0> scan 'tb', { AUTHORIZATIONS => ['B|A'] }
ROW                COLUMN+CELL
 row2              column=f1:q, timestamp=1395972271343, value=v2
1 row(s) in 0.0260 seconds
{code}
Rows 'row' and 'row3' were inserted with label 'A'. Row 'row2' was inserted without a label. Row 'row1' was inserted with label 'B'. I would expect row1 to also be returned.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
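The scan output above is consistent with each authorization string being treated as one literal label, so 'A|B' names a single (undefined) label rather than an OR of A and B. A minimal, HBase-free sketch of that matching rule (the class and method names are illustrative, and this simplifies away expression evaluation on cells):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class AuthMatchSketch {
    // Toy model: a cell labeled with cellLabel is visible only if the
    // user's authorization set contains that exact label string.
    static boolean visible(String cellLabel, Set<String> auths) {
        return auths.contains(cellLabel);
    }

    public static void main(String[] args) {
        // "A|B" as a single auth string is not the label "A" or "B",
        // so a cell labeled "A" stays hidden.
        Set<String> exprAsAuth = new HashSet<>(Arrays.asList("A|B"));
        System.out.println(visible("A", exprAsAuth));           // false

        // Passing the labels separately grants visibility to either.
        Set<String> separateAuths = new HashSet<>(Arrays.asList("A", "B"));
        System.out.println(visible("A", separateAuths));        // true
        System.out.println(visible("B", separateAuths));        // true
    }
}
```

Under this reading, requesting the labels individually (AUTHORIZATIONS => ['A', 'B']) rather than as an expression string would be the way to get OR-like behavior from the shell.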
[jira] [Created] (HBASE-10883) Restrict the universe of labels and authorizations to [A-Za-z0-9\_][A-Za-z0-9\_\-]*
Andrew Purtell created HBASE-10883:
-----------------------------------

Summary: Restrict the universe of labels and authorizations to [A-Za-z0-9\_][A-Za-z0-9\_\-]*
Key: HBASE-10883
URL: https://issues.apache.org/jira/browse/HBASE-10883
Project: HBase
Issue Type: Improvement
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Fix For: 0.99.0, 0.98.2

Currently we allow any string as a visibility label or request authorization. However, as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including the visibility expression operators '&', '|', '!', '(', ')'.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (HBASE-10883) Restrict the universe of labels and authorizations to [A-Za-z0-9\_][A-Za-z0-9\_\-]*
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956184#comment-13956184 ]

Andrew Purtell commented on HBASE-10883:
----------------------------------------

I think we should use the common pattern for identifiers in various programming languages, [A-Za-z\_][A-Za-z0-9\_\-]*

----
Restrict the universe of labels and authorizations to [A-Za-z0-9\_][A-Za-z0-9\_\-]*

Key: HBASE-10883
URL: https://issues.apache.org/jira/browse/HBASE-10883
Project: HBase
Issue Type: Improvement
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Fix For: 0.99.0, 0.98.2

Currently we allow any string as a visibility label or request authorization. However, as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including the visibility expression operators '&', '|', '!', '(', ')'.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
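The identifier pattern proposed in the comment can be checked with a plain java.util.regex sketch (the class and method names are illustrative, not from an HBase patch):

```java
import java.util.regex.Pattern;

public class LabelPatternSketch {
    // The common identifier pattern from the comment: a label must start
    // with a letter or underscore, then letters, digits, '_' or '-'.
    static final Pattern LABEL = Pattern.compile("[A-Za-z_][A-Za-z0-9_\\-]*");

    static boolean isValidLabel(String s) {
        return LABEL.matcher(s).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidLabel("TOP_SECRET")); // true
        System.out.println(isValidLabel("hrt_qa1"));    // true
        System.out.println(isValidLabel("A|B"));        // false: '|' is an operator
        System.out.println(isValidLabel("(A)"));        // false: parens are operators
    }
}
```

Rejecting strings that fail this pattern at label-definition and authorization-request time would make expressions like 'A|B' unambiguous: they could only ever be expressions, never label names.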
[jira] [Updated] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Purtell updated HBASE-10883:
-----------------------------------

Summary: Restrict the universe of labels and authorizations (was: Restrict the universe of labels and authorizations to [A-Za-z0-9\_][A-Za-z0-9\_\-]*)

----
Restrict the universe of labels and authorizations

Key: HBASE-10883
URL: https://issues.apache.org/jira/browse/HBASE-10883
Project: HBase
Issue Type: Improvement
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Fix For: 0.99.0, 0.98.2

Currently we allow any string as a visibility label or request authorization. However, as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including the visibility expression operators '&', '|', '!', '(', ')'.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Created] (HBASE-10884) [REST] Do not disable block caching when scanning
Andrew Purtell created HBASE-10884:
-----------------------------------

Summary: [REST] Do not disable block caching when scanning
Key: HBASE-10884
URL: https://issues.apache.org/jira/browse/HBASE-10884
Project: HBase
Issue Type: Improvement
Affects Versions: 0.94.18, 0.96.1.1, 0.98.1
Reporter: Andrew Purtell

The REST gateway pessimistically disables block caching when issuing Scans to the cluster, using Scan#setCacheBlocks(false) in ScannerResultGenerator. It does not do this when issuing Gets on behalf of HTTP clients in RowResultGenerator. This is an old idea now, the reasons for doing so lost sometime back in the era when HBase walked the earth with dinosaurs (< 0.20). We probably should not be penalizing REST scans in this way.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Assigned] (HBASE-10884) [REST] Do not disable block caching when scanning
[ https://issues.apache.org/jira/browse/HBASE-10884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Purtell reassigned HBASE-10884:
--------------------------------------

Assignee: Andrew Purtell

----
[REST] Do not disable block caching when scanning

Key: HBASE-10884
URL: https://issues.apache.org/jira/browse/HBASE-10884
Project: HBase
Issue Type: Improvement
Affects Versions: 0.98.1, 0.96.1.1, 0.94.18
Reporter: Andrew Purtell
Assignee: Andrew Purtell

The REST gateway pessimistically disables block caching when issuing Scans to the cluster, using Scan#setCacheBlocks(false) in ScannerResultGenerator. It does not do this when issuing Gets on behalf of HTTP clients in RowResultGenerator. This is an old idea now, the reasons for doing so lost sometime back in the era when HBase walked the earth with dinosaurs (< 0.20). We probably should not be penalizing REST scans in this way.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (HBASE-10879) user_permission shell command on namespace doesn't work
[ https://issues.apache.org/jira/browse/HBASE-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956199#comment-13956199 ] Andrew Purtell commented on HBASE-10879: +1 for trunk and 0.98 Ping [~stack] - should go into 0.96 too, another namespace nit user_permission shell command on namespace doesn't work --- Key: HBASE-10879 URL: https://issues.apache.org/jira/browse/HBASE-10879 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Ted Yu Attachments: 10879-v1.txt, 10879-v2.txt Currently user_permission command on namespace, e.g. {code} user_permission '@ns' {code} would result in the following exception: {code} Exception `NameError' at /usr/lib/hbase/lib/ruby/hbase/security.rb:170 - no method 'getUserPermissions' for arguments (org.apache.hadoop.hbase.protobuf.generated. AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy) on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil ERROR: no method 'getUserPermissions' for arguments (org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java. proxies.ArrayJavaProxy) on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil Backtrace: /usr/lib/hbase/lib/ruby/hbase/security.rb:170:in `user_permission' /usr/lib/hbase/lib/ruby/shell/commands/user_permission.rb:39:in `command' org/jruby/RubyKernel.java:2109:in `send' /usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe' /usr/lib/hbase/lib/ruby/shell/commands.rb:91:in `translate_hbase_exceptions' /usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe' /usr/lib/hbase/lib/ruby/shell.rb:127:in `internal_command' /usr/lib/hbase/lib/ruby/shell.rb:119:in `command' (eval):2:in `user_permission' (hbase):1:in `evaluate' org/jruby/RubyKernel.java:1112:in `eval' {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HBASE-10885) Support visibility expressions on Deletes
Andrew Purtell created HBASE-10885: -- Summary: Support visibility expressions on Deletes Key: HBASE-10885 URL: https://issues.apache.org/jira/browse/HBASE-10885 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Accumulo can specify visibility expressions for delete markers. During compaction the cells covered by the tombstone are determined in part by matching the visibility expression. This is useful for the use case of data set coalescing, where entries from multiple data sets carrying different labels are combined into one common large table. Later, a subset of entries can be conveniently removed using visibility expressions. Currently doing the same in HBase would only be possible with a custom coprocessor. Otherwise, a Delete will affect all cells covered by the tombstone regardless if they are visible to the user issuing the delete or not. This is correct behavior in that no data spill is possible, but certainly could be surprising, and is only meant to be transitional. We decided not to support visibility expressions on Deletes to control the complexity of the initial implementation. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10885) Support visibility expressions on Deletes
[ https://issues.apache.org/jira/browse/HBASE-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956211#comment-13956211 ] Andrew Purtell commented on HBASE-10885: How we handle deletes in the AccessController is to only allow the delete if it has covering permission. By that I mean the CF ACLs, and any ACLs in cells that would be covered by the tombstone and are not already covered by an earlier tombstone, must grant the subject access. If any do not, the delete is denied. We run an internal RegionScanner to enumerate the cells that would be affected by the operation. One way to do this for the VisibilityController is to similarly run an internal RegionScanner with the parameters of the submitted Delete op, filter cells by effective visibility, and issue deletes scoped only to those cells visible to the subject. This would be easier than trying to hook compaction scanners and evaluating visibility expression tags there, because we will have the effective label set for the user in the RPC context, not at compaction time. Support visibility expressions on Deletes - Key: HBASE-10885 URL: https://issues.apache.org/jira/browse/HBASE-10885 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Accumulo can specify visibility expressions for delete markers. During compaction the cells covered by the tombstone are determined in part by matching the visibility expression. This is useful for the use case of data set coalescing, where entries from multiple data sets carrying different labels are combined into one common large table. Later, a subset of entries can be conveniently removed using visibility expressions. Currently doing the same in HBase would only be possible with a custom coprocessor. Otherwise, a Delete will affect all cells covered by the tombstone regardless if they are visible to the user issuing the delete or not. 
This is correct behavior in that no data spill is possible, but certainly could be surprising, and is only meant to be transitional. We decided not to support visibility expressions on Deletes to control the complexity of the initial implementation. -- This message was sent by Atlassian JIRA (v6.2#6252)
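The internal-scanner approach described in the comment above can be sketched roughly as follows. The CellStub record and the visibility check are simplified stand-ins, not real HBase types, and the label match is reduced to set membership for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Simplified sketch of the proposed VisibilityController behavior: enumerate
// the cells a Delete would cover, then only tombstone the ones the issuing
// user can actually see. CellStub and isVisible are illustrative stand-ins,
// not HBase's real Cell or visibility-evaluation machinery.
public class ScopedDeleteSketch {
    record CellStub(String qualifier, String visibilityLabel) {}

    // Stand-in for evaluating a cell's visibility expression against the
    // user's effective label set (available in the RPC context).
    static boolean isVisible(CellStub cell, Set<String> userLabels) {
        return cell.visibilityLabel() == null
            || userLabels.contains(cell.visibilityLabel());
    }

    // Stand-in for the internal RegionScanner pass: from all cells the
    // submitted Delete op would cover, keep only those visible to the
    // subject; deletes would then be issued scoped to just these cells.
    static List<CellStub> cellsToDelete(List<CellStub> covered, Set<String> userLabels) {
        List<CellStub> visible = new ArrayList<>();
        for (CellStub c : covered) {
            if (isVisible(c, userLabels)) {
                visible.add(c);
            }
        }
        return visible;
    }
}
```

As the comment notes, doing this at RPC time is simpler than hooking the compaction scanner, because the user's effective label set is only available in the RPC context.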
[jira] [Commented] (HBASE-10881) Support reverse scan in thrift2
[ https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956248#comment-13956248 ] Liang Xie commented on HBASE-10881: --- [~tedyu] [~ram_krish] it seems there is still a complaint even after HBASE-10824 was committed. Mind opening a new jira to track it? Support reverse scan in thrift2 --- Key: HBASE-10881 URL: https://issues.apache.org/jira/browse/HBASE-10881 Project: HBase Issue Type: New Feature Affects Versions: 0.99.0 Reporter: Liu Shaohui Assignee: Liu Shaohui Priority: Minor Attachments: HBASE-10881-trunk-v1.diff Support reverse scan in thrift2. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10881) Support reverse scan in thrift2
[ https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-10881: -- Component/s: Thrift Support reverse scan in thrift2 --- Key: HBASE-10881 URL: https://issues.apache.org/jira/browse/HBASE-10881 Project: HBase Issue Type: New Feature Components: Thrift Affects Versions: 0.99.0 Reporter: Liu Shaohui Assignee: Liu Shaohui Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10881-trunk-v1.diff Support reverse scan in thrift2. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10881) Support reverse scan in thrift2
[ https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-10881: -- Fix Version/s: 0.99.0 Support reverse scan in thrift2 --- Key: HBASE-10881 URL: https://issues.apache.org/jira/browse/HBASE-10881 Project: HBase Issue Type: New Feature Components: Thrift Affects Versions: 0.99.0 Reporter: Liu Shaohui Assignee: Liu Shaohui Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10881-trunk-v1.diff Support reverse scan in thrift2. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10881) Support reverse scan in thrift2
[ https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956249#comment-13956249 ] Liang Xie commented on HBASE-10881: --- +1 for attached patch Support reverse scan in thrift2 --- Key: HBASE-10881 URL: https://issues.apache.org/jira/browse/HBASE-10881 Project: HBase Issue Type: New Feature Components: Thrift Affects Versions: 0.99.0 Reporter: Liu Shaohui Assignee: Liu Shaohui Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10881-trunk-v1.diff Support reverse scan in thrift2. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10881) Support reverse scan in thrift2
[ https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956252#comment-13956252 ] ramkrishna.s.vasudevan commented on HBASE-10881: bq. mind opening a new jira to track it? Okie. We might have to include {code} hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated {code} package also. Support reverse scan in thrift2 --- Key: HBASE-10881 URL: https://issues.apache.org/jira/browse/HBASE-10881 Project: HBase Issue Type: New Feature Components: Thrift Affects Versions: 0.99.0 Reporter: Liu Shaohui Assignee: Liu Shaohui Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10881-trunk-v1.diff Support reverse scan in thrift2. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HBASE-10886) add htrace-zipkin to the runtime dependencies again
Masatake Iwasaki created HBASE-10886: Summary: add htrace-zipkin to the runtime dependencies again Key: HBASE-10886 URL: https://issues.apache.org/jira/browse/HBASE-10886 Project: HBase Issue Type: Improvement Reporter: Masatake Iwasaki Assignee: Masatake Iwasaki Priority: Minor htrace-zipkin was once removed from the dependencies in HBASE-9700. Because all of the dependencies of htrace-zipkin are bundled with HBase now, it would be good to add it back for ease of use. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10886) add htrace-zipkin to the runtime dependencies again
[ https://issues.apache.org/jira/browse/HBASE-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HBASE-10886: - Attachment: HBASE-10886-0.patch add htrace-zipkin to the runtime dependencies again --- Key: HBASE-10886 URL: https://issues.apache.org/jira/browse/HBASE-10886 Project: HBase Issue Type: Improvement Reporter: Masatake Iwasaki Assignee: Masatake Iwasaki Priority: Minor Attachments: HBASE-10886-0.patch Once htrace-zipkin was removed from depencencies in HBASE-9700. Because all of the depencencies of htrace-zipkin is bundled with HBase now, it is good to add it for the ease of use. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10886) add htrace-zipkin to the runtime dependencies again
[ https://issues.apache.org/jira/browse/HBASE-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HBASE-10886: - Status: Patch Available (was: Open) add htrace-zipkin to the runtime dependencies again --- Key: HBASE-10886 URL: https://issues.apache.org/jira/browse/HBASE-10886 Project: HBase Issue Type: Improvement Reporter: Masatake Iwasaki Assignee: Masatake Iwasaki Priority: Minor Attachments: HBASE-10886-0.patch Once htrace-zipkin was removed from depencencies in HBASE-9700. Because all of the depencencies of htrace-zipkin is bundled with HBase now, it is good to add it for the ease of use. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10881) Support reverse scan in thrift2
[ https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956263#comment-13956263 ] Liang Xie commented on HBASE-10881: --- [~liushaohui], just realized the old thrift (maybe we can say thrift1?) also doesn't support reverse scan in the current codebase; could you enhance it too if you get a chance? Thanks! Support reverse scan in thrift2 --- Key: HBASE-10881 URL: https://issues.apache.org/jira/browse/HBASE-10881 Project: HBase Issue Type: New Feature Components: Thrift Affects Versions: 0.99.0 Reporter: Liu Shaohui Assignee: Liu Shaohui Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10881-trunk-v1.diff Support reverse scan in thrift2. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-10883: --- Attachment: HBASE-10883.patch Patch for trunk. Should apply on 0.98 also. Tested with shell also. Throws exception if any special characters are used. Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch Currently we allow any string as visibility label or request authorization. However as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including visibility expression operators '', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-10883: --- Status: Patch Available (was: Open) Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch Currently we allow any string as visibility label or request authorization. However as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including visibility expression operators '', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Assigned] (HBASE-7781) Update security unit tests to use a KDC if available
[ https://issues.apache.org/jira/browse/HBASE-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan reassigned HBASE-7781: - Assignee: (was: ramkrishna.s.vasudevan) Update security unit tests to use a KDC if available Key: HBASE-7781 URL: https://issues.apache.org/jira/browse/HBASE-7781 Project: HBase Issue Type: Test Components: security, test Reporter: Gary Helmling Priority: Blocker We currently have large holes in the test coverage of HBase with security enabled. Two recent examples of bugs which really should have been caught with testing are HBASE-7771 and HBASE-7772. The long standing problem with testing with security enabled has been the requirement for supporting kerberos infrastructure. We need to close this gap and provide some automated testing with security enabled, if necessary standing up and provisioning a temporary KDC as an option for running integration tests, see HADOOP-8078 and HADOOP-9004 where a similar approach was taken. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location
[ https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956292#comment-13956292 ] Andrew Purtell commented on HBASE-10830: +1 thanks [~ndimiduk]! Integration test MR jobs attempt to load htrace jars from the wrong location Key: HBASE-10830 URL: https://issues.apache.org/jira/browse/HBASE-10830 Project: HBase Issue Type: Bug Affects Versions: 0.98.1 Reporter: Andrew Purtell Priority: Minor Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10830.00.patch, HBASE-10830.01.patch The MapReduce jobs submitted by IntegrationTestImportTsv want to load the htrace JAR from the local Maven cache but get confused and use a HDFS URI. {noformat} Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec FAILURE! testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv) Time elapsed: 0.488 sec ERROR! java.io.FileNotFoundException: File does not exist: hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110) at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102) at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288) at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224) at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93) at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57) at 
org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264) at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286) at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:603) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:270) at org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:232) at org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad(IntegrationTestImportTsv.java:206) {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
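The failure above comes from a scheme-less local path being resolved against the cluster's default filesystem: a jar sitting in the local Maven cache ends up being looked up on HDFS. A small stdlib-only illustration of that resolution behavior (not HBase code; the paths and port are made up to mirror the stack trace):

```java
import java.net.URI;

// Illustration of the failure mode above: resolving a bare local path
// against an hdfs:// default filesystem produces an HDFS URI, while an
// explicit file:// scheme keeps the jar on the local filesystem.
public class JarUriResolution {
    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://localhost:37548/");

        // Scheme-less path: inherits hdfs:// from the default filesystem,
        // which is exactly the FileNotFoundException seen in the test.
        URI wrong = defaultFs.resolve("/home/user/.m2/repository/htrace-core-2.04.jar");
        System.out.println(wrong); // hdfs://localhost:37548/home/user/...

        // Explicit scheme: the base URI is ignored, the jar stays local.
        URI right = defaultFs.resolve("file:///home/user/.m2/repository/htrace-core-2.04.jar");
        System.out.println(right); // file:///home/user/...
    }
}
```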
[jira] [Commented] (HBASE-10885) Support visibility expressions on Deletes
[ https://issues.apache.org/jira/browse/HBASE-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956297#comment-13956297 ] ramkrishna.s.vasudevan commented on HBASE-10885: Delete.setCellVisibility() should be supported now. And these labels passed here will be only a list of labels and not visibility expressions like A|B&!C? Which means if an existing put had A&B now the delete should pass A&B or it could be A,B like how we pass for Authorization? Support visibility expressions on Deletes - Key: HBASE-10885 URL: https://issues.apache.org/jira/browse/HBASE-10885 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Accumulo can specify visibility expressions for delete markers. During compaction the cells covered by the tombstone are determined in part by matching the visibility expression. This is useful for the use case of data set coalescing, where entries from multiple data sets carrying different labels are combined into one common large table. Later, a subset of entries can be conveniently removed using visibility expressions. Currently doing the same in HBase would only be possible with a custom coprocessor. Otherwise, a Delete will affect all cells covered by the tombstone regardless if they are visible to the user issuing the delete or not. This is correct behavior in that no data spill is possible, but certainly could be surprising, and is only meant to be transitional. We decided not to support visibility expressions on Deletes to control the complexity of the initial implementation. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HBASE-10887) tidy ThriftUtilities format
Liang Xie created HBASE-10887: - Summary: tidy ThriftUtilities format Key: HBASE-10887 URL: https://issues.apache.org/jira/browse/HBASE-10887 Project: HBase Issue Type: Improvement Components: Thrift Affects Versions: 0.99.0 Reporter: Liang Xie Assignee: Liang Xie Priority: Trivial Fix For: 0.99.0 Attachments: HBASE-10887.txt Just found this weird code format while reviewing another patch; let's remove the unnecessary tab -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10887) tidy ThriftUtilities format
[ https://issues.apache.org/jira/browse/HBASE-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-10887: -- Attachment: HBASE-10887.txt will commit it shortly, since it's just a trivial change tidy ThriftUtilities format --- Key: HBASE-10887 URL: https://issues.apache.org/jira/browse/HBASE-10887 Project: HBase Issue Type: Improvement Components: Thrift Affects Versions: 0.99.0 Reporter: Liang Xie Assignee: Liang Xie Priority: Trivial Fix For: 0.99.0 Attachments: HBASE-10887.txt Just found this weird code format while reviewing another patch; let's remove the unnecessary tab -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10887) tidy ThriftUtilities format
[ https://issues.apache.org/jira/browse/HBASE-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-10887: -- Fix Version/s: 0.98.1 Affects Version/s: 0.98.1 Status: Patch Available (was: Open) tidy ThriftUtilities format --- Key: HBASE-10887 URL: https://issues.apache.org/jira/browse/HBASE-10887 Project: HBase Issue Type: Improvement Components: Thrift Affects Versions: 0.98.1, 0.99.0 Reporter: Liang Xie Assignee: Liang Xie Priority: Trivial Fix For: 0.99.0, 0.98.1 Attachments: HBASE-10887.txt Just found the weird code format during reviewing another patch, let's remove the unneccessary tab -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10887) tidy ThriftUtilities format
[ https://issues.apache.org/jira/browse/HBASE-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-10887: -- Resolution: Fixed Status: Resolved (was: Patch Available) tidy ThriftUtilities format --- Key: HBASE-10887 URL: https://issues.apache.org/jira/browse/HBASE-10887 Project: HBase Issue Type: Improvement Components: Thrift Affects Versions: 0.98.1, 0.99.0 Reporter: Liang Xie Assignee: Liang Xie Priority: Trivial Fix For: 0.99.0, 0.98.1 Attachments: HBASE-10887.txt Just found the weird code format during reviewing another patch, let's remove the unneccessary tab -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10885) Support visibility expressions on Deletes
[ https://issues.apache.org/jira/browse/HBASE-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956348#comment-13956348 ] Andrew Purtell commented on HBASE-10885: bq. Delete.setCellVisibility() should be supported now. Yes. bq. And these labels passed here will be only a list of labels and not visibility expressions like A|B&!C? No. Deletes should support visibility expressions just like Put, etc. The supplied visibility expression is then associated with the delete marker(s). Actually, scratch what I said above in the first comment. I think we can check that the supplied expression does not exceed the maximal authorization set for the user submitting the Delete in the preDelete hook and then store the delete marker as a cell with a visibility expression tag and do the rest of the work later, by hooking the compaction scanner. The big change then would be checking for visibility expressions in tags on delete markers at compaction time. If we find one, then we have to filter only the cells covered by the tombstone that have a matching expression. If we are not storing visibility expression terminals (LeafExpressionNodes) in sorted order by ordinal we probably should consider it. (I don't think we are.) Because e.g. A|B == B|A. It would be most efficient if we can simply do byte comparison of serialized visibility expressions. Support visibility expressions on Deletes - Key: HBASE-10885 URL: https://issues.apache.org/jira/browse/HBASE-10885 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Accumulo can specify visibility expressions for delete markers. During compaction the cells covered by the tombstone are determined in part by matching the visibility expression. This is useful for the use case of data set coalescing, where entries from multiple data sets carrying different labels are combined into one common large table. 
Later, a subset of entries can be conveniently removed using visibility expressions. Currently doing the same in HBase would only be possible with a custom coprocessor. Otherwise, a Delete will affect all cells covered by the tombstone regardless if they are visible to the user issuing the delete or not. This is correct behavior in that no data spill is possible, but certainly could be surprising, and is only meant to be transitional. We decided not to support visibility expressions on Deletes to control the complexity of the initial implementation. -- This message was sent by Atlassian JIRA (v6.2#6252)
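The canonicalization idea in the comment above (store expression terminals sorted by ordinal so that equivalent expressions like A|B and B|A serialize to identical bytes) can be sketched as follows. The ordinal map and one-byte-per-ordinal format are illustrative, not HBase's actual tag encoding:

```java
import java.util.Arrays;
import java.util.Map;

// Sketch: serialize the label ordinals of a flat OR expression in sorted
// order so A|B and B|A produce the same bytes, letting delete-marker
// matching at compaction time be a plain byte comparison. The ordinal
// dictionary and serialization format here are made up for illustration.
public class CanonicalExpression {
    static final Map<String, Integer> ORDINALS = Map.of("A", 1, "B", 2, "C", 3);

    static byte[] serialize(String... labels) {
        int[] ords = new int[labels.length];
        for (int i = 0; i < labels.length; i++) {
            ords[i] = ORDINALS.get(labels[i]);
        }
        Arrays.sort(ords); // canonical order: A|B == B|A
        byte[] out = new byte[ords.length];
        for (int i = 0; i < ords.length; i++) {
            out[i] = (byte) ords[i];
        }
        return out;
    }

    public static void main(String[] args) {
        // Equivalent expressions compare equal byte-for-byte.
        System.out.println(Arrays.equals(serialize("A", "B"), serialize("B", "A"))); // true
    }
}
```

With this property, comparing a delete marker's expression to a covered cell's expression never requires re-parsing either one.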
[jira] [Updated] (HBASE-10881) Support reverse scan in thrift2
[ https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liu Shaohui updated HBASE-10881: Attachment: HBASE-10881-trunk-v2.diff Support reverse scan in thrift(1) too. Support reverse scan in thrift2 --- Key: HBASE-10881 URL: https://issues.apache.org/jira/browse/HBASE-10881 Project: HBase Issue Type: New Feature Components: Thrift Affects Versions: 0.99.0 Reporter: Liu Shaohui Assignee: Liu Shaohui Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10881-trunk-v1.diff, HBASE-10881-trunk-v2.diff Support reverse scan in thrift2. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (HBASE-10885) Support visibility expressions on Deletes
[ https://issues.apache.org/jira/browse/HBASE-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956348#comment-13956348 ] Andrew Purtell edited comment on HBASE-10885 at 4/1/14 10:25 AM: - bq. Delete.setCellVisibility() should be supported now. Yes. bq. And these labels passed here will be only a list of labels and not visibility expressions like A|B!C? No. Deletes should support visibility expressions just like Put, etc. The supplied visibility expression is then associated with the delete marker(s). Actually, scratch what I said above in the first comment. I think we can check that the supplied expression does not exceed the maximal authorization set for the user submitting the Delete in the preDelete hook and then store the delete marker as a cell with a visibility expression tag and do the rest of the work later, by hooking the compaction scanner. The big change then would be checking for visibility expressions in tags on delete markers at compaction time. If we find one, then we have to filter only the cells covered by the tombstone that have a matching expression. If we are not storing visibility expression terminals (LeafExpressionNodes) in sorted order by ordinal we probably should consider it. (I don't think we are.) Because e.g. A|B == B|A. It would be most efficient if we can simply do byte comparison of serialized visibility expressions on the delete marker and any found while enumerating cells covered by it. If a delete marker has a visibility expression, then we only apply it to cells with matching visibility expressions. If a cell has no visibility tag then it does not match. (A|B != nil) was (Author: apurtell): bq. Delete.setCellVisibility() should be supported now. Yes. bq. And these labels passed here will be only a list of labels and not visibility expressions like A|B!C? No. Deletes should support visibility expressions just like Put, etc. 
The supplied visibility expression is then associated with the delete marker(s). Actually, scratch what I said above in the first comment. I think we can check that the supplied expression does not exceed the maximal authorization set for the user submitting the Delete in the preDelete hook and then store the delete marker as a cell with a visibility expression tag and do the rest of the work later, by hooking the compaction scanner. The big change then would be checking for visibility expressions in tags on delete markers at compaction time. If we find one, then we have to filter only the cells covered by the tombstone that have a matching expression. If we are not storing visibility expression terminals (LeafExpressionNodes) in sorted order by ordinal we probably should consider it. (I don't think we are.) Because e.g. A|B == B|A. It would be most efficient if we can simply do byte comparison of serialized visibility expressions. Support visibility expressions on Deletes - Key: HBASE-10885 URL: https://issues.apache.org/jira/browse/HBASE-10885 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Accumulo can specify visibility expressions for delete markers. During compaction the cells covered by the tombstone are determined in part by matching the visibility expression. This is useful for the use case of data set coalescing, where entries from multiple data sets carrying different labels are combined into one common large table. Later, a subset of entries can be conveniently removed using visibility expressions. Currently doing the same in HBase would only be possible with a custom coprocessor. Otherwise, a Delete will affect all cells covered by the tombstone regardless if they are visible to the user issuing the delete or not. This is correct behavior in that no data spill is possible, but certainly could be surprising, and is only meant to be transitional. 
We decided not to support visibility expressions on Deletes to control the complexity of the initial implementation. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10886) add htrace-zipkin to the runtime dependencies again
[ https://issues.apache.org/jira/browse/HBASE-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956352#comment-13956352 ] Hadoop QA commented on HBASE-10886: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638025/HBASE-10886-0.patch against trunk revision . ATTACHMENT ID: 12638025 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFilesSplitRecovery Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9154//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9154//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9154//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9154//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9154//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9154//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9154//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9154//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9154//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9154//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9154//console This message is automatically generated. 
add htrace-zipkin to the runtime dependencies again --- Key: HBASE-10886 URL: https://issues.apache.org/jira/browse/HBASE-10886 Project: HBase Issue Type: Improvement Reporter: Masatake Iwasaki Assignee: Masatake Iwasaki Priority: Minor Attachments: HBASE-10886-0.patch htrace-zipkin was once removed from the dependencies in HBASE-9700. Because all of the dependencies of htrace-zipkin are bundled with HBase now, it is good to add it back for ease of use. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10885) Support visibility expressions on Deletes
[ https://issues.apache.org/jira/browse/HBASE-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-10885: --- Description: Accumulo can specify visibility expressions for delete markers. During compaction the cells covered by the tombstone are determined in part by matching the visibility expression. This is useful for the use case of data set coalescing, where entries from multiple data sets carrying different labels are combined into one common large table. Later, a subset of entries can be conveniently removed using visibility expressions. Currently doing the same in HBase would only be possible with a custom coprocessor. Otherwise, a Delete will affect all cells covered by the tombstone regardless of any visibility expression scoping. This is correct behavior in that no data spill is possible, but certainly could be surprising, and is only meant to be transitional. We decided not to support visibility expressions on Deletes to control the complexity of the initial implementation. was: Accumulo can specify visibility expressions for delete markers. During compaction the cells covered by the tombstone are determined in part by matching the visibility expression. This is useful for the use case of data set coalescing, where entries from multiple data sets carrying different labels are combined into one common large table. Later, a subset of entries can be conveniently removed using visibility expressions. Currently doing the same in HBase would only be possible with a custom coprocessor. Otherwise, a Delete will affect all cells covered by the tombstone regardless if they are visible to the user issuing the delete or not. This is correct behavior in that no data spill is possible, but certainly could be surprising, and is only meant to be transitional. We decided not to support visibility expressions on Deletes to control the complexity of the initial implementation. 
Support visibility expressions on Deletes - Key: HBASE-10885 URL: https://issues.apache.org/jira/browse/HBASE-10885 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Accumulo can specify visibility expressions for delete markers. During compaction the cells covered by the tombstone are determined in part by matching the visibility expression. This is useful for the use case of data set coalescing, where entries from multiple data sets carrying different labels are combined into one common large table. Later, a subset of entries can be conveniently removed using visibility expressions. Currently doing the same in HBase would only be possible with a custom coprocessor. Otherwise, a Delete will affect all cells covered by the tombstone regardless of any visibility expression scoping. This is correct behavior in that no data spill is possible, but certainly could be surprising, and is only meant to be transitional. We decided not to support visibility expressions on Deletes to control the complexity of the initial implementation. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956365#comment-13956365 ] Anoop Sam John commented on HBASE-10883: Referring to VisibilityLabelsValidator, we allow some special characters to be part of labels, so this check should include those as well. Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch Currently we allow any string as visibility label or request authorization. However as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including visibility expression operators '&', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (HBASE-10885) Support visibility expressions on Deletes
[ https://issues.apache.org/jira/browse/HBASE-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956348#comment-13956348 ] Andrew Purtell edited comment on HBASE-10885 at 4/1/14 10:40 AM: - bq. Delete.setCellVisibility() should be supported now. Yes. bq. And these labels passed here will be only a list of labels and not visibility expressions like A|B!C? No. Deletes should support visibility expressions just like Put, etc. The supplied visibility expression is then associated with the delete marker(s). Actually, scratch what I said above in the first comment. We can store the delete marker as a cell with a visibility expression tag and do the work later, by hooking the compaction scanner. We would check for visibility expressions in tags on delete markers at compaction time. If we find one, then we have to filter only the cells covered by the tombstone that have a matching expression. If we are not storing visibility expression terminals (LeafExpressionNodes) in sorted order by ordinal we probably should consider it. (I don't think we are.) Because e.g. A|B == B|A. It would be most efficient if we can simply do byte comparison of serialized visibility expressions on the delete marker and any found while enumerating cells covered by it. If a delete marker has a visibility expression, then we only apply it to cells with matching visibility expressions. If a cell has no visibility tag then it does not match. (A|B != nil) Should we check that the supplied expression does not exceed the maximal authorization set for the user submitting the Delete in the preDelete hook? In other words, should we allow a user only granted authorization A to submit a delete with visibility expression A|B? We should not, in my opinion. It is different for the delete case than others because delete is a destructive operation. Recommend we answer this question for other op types on another JIRA, should there be any. was (Author: apurtell): bq.
Delete.setCellVisibility() should be supported now. Yes. bq. And these labels passed here will be only a list of labels and not visibility expressions like A|B!C? No. Deletes should support visibility expressions just like Put, etc. The supplied visibility expression is then associated with the delete marker(s). Actually, scratch what I said above in the first comment. I think we can check that the supplied expression does not exceed the maximal authorization set for the user submitting the Delete in the preDelete hook and then store the delete marker as a cell with a visibility expression tag and do the rest of the work later, by hooking the compaction scanner. The big change then would be checking for visibility expressions in tags on delete markers at compaction time. If we find one, then we have to filter only the cells covered by the tombstone that have a matching expression. If we are not storing visibility expression terminals (LeafExpressionNodes) in sorted order by ordinal we probably should consider it. (I don't think we are.) Because e.g. A|B == B|A. It would be most efficient if we can simply do byte comparison of serialized visibility expressions on the delete marker and any found while enumerating cells covered by it. If a delete marker has a visibility expression, then we only apply it to cells with matching visibility expressions. If a cell has no visibility tag then it does not match. (A|B != nil) Support visibility expressions on Deletes - Key: HBASE-10885 URL: https://issues.apache.org/jira/browse/HBASE-10885 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Accumulo can specify visibility expressions for delete markers. During compaction the cells covered by the tombstone are determined in part by matching the visibility expression. 
This is useful for the use case of data set coalescing, where entries from multiple data sets carrying different labels are combined into one common large table. Later, a subset of entries can be conveniently removed using visibility expressions. Currently doing the same in HBase would only be possible with a custom coprocessor. Otherwise, a Delete will affect all cells covered by the tombstone regardless of any visibility expression scoping. This is correct behavior in that no data spill is possible, but certainly could be surprising, and is only meant to be transitional. We decided not to support visibility expressions on Deletes to control the complexity of the initial implementation. -- This message was sent by Atlassian JIRA (v6.2#6252)
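The covering rule described in the comment above (a delete marker carrying a visibility expression covers only cells with a matching expression, and a cell with no visibility tag never matches) can be sketched like this. The class and method are hypothetical, and byte-identical serialized expressions stand in for "matching" per the sorted-serialization idea:

```java
import java.util.Arrays;

// Hypothetical sketch of the covering rule discussed above. null stands in
// for "no visibility expression / no visibility tag".
public class DeleteMatchSketch {

  static boolean coveredByDelete(byte[] deleteExpr, byte[] cellExpr) {
    if (deleteExpr == null) {
      return true;          // a plain delete covers every cell it spans
    }
    if (cellExpr == null) {
      return false;         // A|B != nil: untagged cells are not covered
    }
    return Arrays.equals(deleteExpr, cellExpr); // byte-identical expressions match
  }

  public static void main(String[] args) {
    byte[] aOrB = {1, 2};
    System.out.println(coveredByDelete(aOrB, aOrB)); // true
    System.out.println(coveredByDelete(aOrB, null)); // false
    System.out.println(coveredByDelete(null, aOrB)); // true
  }
}
```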
[jira] [Comment Edited] (HBASE-10885) Support visibility expressions on Deletes
[ https://issues.apache.org/jira/browse/HBASE-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956348#comment-13956348 ] Andrew Purtell edited comment on HBASE-10885 at 4/1/14 10:42 AM: - bq. Delete.setCellVisibility() should be supported now. Yes. bq. And these labels passed here will be only a list of labels and not visibility expressions like A|B!C? No. Deletes should support visibility expressions just like Put, etc. The supplied visibility expression is then associated with the delete marker(s). Actually, scratch what I said above in the first comment. We can store the delete marker as a cell with a visibility expression tag and do the work later, by hooking the compaction scanner. We would check for visibility expressions in tags on delete markers at compaction time. If we find one, then we have to filter only the cells covered by the tombstone that have a matching expression. If we are not storing visibility expression terminals (LeafExpressionNodes) in sorted order by ordinal we probably should consider it. (I don't think we are.) Because e.g. A|B == B|A. It would be most efficient if we can simply do byte comparison of serialized visibility expressions on the delete marker and any found while enumerating cells covered by it. If a delete marker has a visibility expression, then we only apply it to cells with matching visibility expressions. If a cell has no visibility tag then it does not match. (A|B != nil) Should we check that the supplied expression does not exceed the maximal authorization set for the user submitting the Delete in the preDelete hook? In other words, should we allow a user only granted authorization A to submit a delete with visibility expression A|B? We should not, in my opinion. Recommend we answer this question for other op types on another JIRA, should there be any. was (Author: apurtell): bq. Delete.setCellVisibility() should be supported now. Yes. bq.
And these labels passed here will be only a list of labels and not visibility expressions like A|B!C? No. Deletes should support visibility expressions just like Put, etc. The supplied visibility expression is then associated with the delete marker(s). Actually, scratch what I said above in the first comment. We can store the delete marker as a cell with a visibility expression tag and do the work later, by hooking the compaction scanner. We would check for visibility expressions in tags on delete markers at compaction time. If we find one, then we have to filter only the cells covered by the tombstone that have a matching expression. If we are not storing visibility expression terminals (LeafExpressionNodes) in sorted order by ordinal we probably should consider it. (I don't think we are.) Because e.g. A|B == B|A. It would be most efficient if we can simply do byte comparison of serialized visibility expressions on the delete marker and any found while enumerating cells covered by it. If a delete marker has a visibility expression, then we only apply it to cells with matching visibility expressions. If a cell has no visibility tag then it does not match. (A|B != nil) Should we check that the supplied expression does not exceed the maximal authorization set for the user submitting the Delete in the preDelete hook? In other words, should we allow a user only granted authorization A to submit a delete with visibility expression A|B? We should not, in my opinion. It is different for the delete case than others because delete is a destructive operation. Recommend we answer this question for other op types on another JIRA, should there be any. Support visibility expressions on Deletes - Key: HBASE-10885 URL: https://issues.apache.org/jira/browse/HBASE-10885 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Accumulo can specify visibility expressions for delete markers.
During compaction the cells covered by the tombstone are determined in part by matching the visibility expression. This is useful for the use case of data set coalescing, where entries from multiple data sets carrying different labels are combined into one common large table. Later, a subset of entries can be conveniently removed using visibility expressions. Currently doing the same in HBase would only be possible with a custom coprocessor. Otherwise, a Delete will affect all cells covered by the tombstone regardless of any visibility expression scoping. This is correct behavior in that no data spill is possible, but certainly could be surprising, and is only meant to be transitional. We decided not to support visibility expressions on Deletes to control the complexity of the initial implementation.
[jira] [Commented] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956382#comment-13956382 ] Andrew Purtell commented on HBASE-10883: {code} + private final String regex = "[A-Za-z_0-9]*"; {code} I think this should be {code} + private final String regex = "[A-Za-z\\_][A-Za-z0-9\\_\\-]*"; {code} but this is a minor thing, if you want the other then no problem. The regex should be precompiled since it may be applied often. Like Anoop says, we need to validate authorizations and labels the same way now. Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch Currently we allow any string as visibility label or request authorization. However as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including visibility expression operators '&', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
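A minimal sketch of the suggestion above, assuming the revised character class (the class name and helper method are invented for illustration, not the committed code): the pattern is compiled once and reused, and the mandatory leading character also rejects the empty string.

```java
import java.util.regex.Pattern;

// Sketch: precompile the validation pattern once instead of recompiling it
// for every label/authorization check. The character class follows the
// suggestion above and is an assumption, not the committed regex.
public class LabelValidatorSketch {

  private static final Pattern LABEL =
      Pattern.compile("[A-Za-z_][A-Za-z0-9_-]*");

  static boolean isValid(String label) {
    return LABEL.matcher(label).matches();
  }

  public static void main(String[] args) {
    System.out.println(isValid("SECRET")); // true
    System.out.println(isValid("A|B"));    // false: '|' is an operator
    System.out.println(isValid(""));       // false: empty string rejected
  }
}
```

Compiling the `Pattern` into a `private static final` field avoids repeating the compilation cost on every check, which matters if validation runs per label on every request.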
[jira] [Commented] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956383#comment-13956383 ] Andrew Purtell commented on HBASE-10883: Well, you do have to change the current regexp somehow because it will match the empty string and that's not valid. Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch Currently we allow any string as visibility label or request authorization. However as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including visibility expression operators '&', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
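To illustrate the empty-string point (a quick standalone check, not HBase code): a character class followed by `*` matches zero characters, so validation built on it would accept an empty label, while `+` requires at least one character.

```java
import java.util.regex.Pattern;

// A Kleene-star-only pattern accepts the empty string; '+' does not.
public class EmptyMatchCheck {
  public static void main(String[] args) {
    System.out.println(Pattern.matches("[A-Za-z_0-9]*", "")); // true
    System.out.println(Pattern.matches("[A-Za-z_0-9]+", "")); // false
  }
}
```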
[jira] [Commented] (HBASE-10882) Bulkload process hangs on regions randomly and finally throws RegionTooBusyException
[ https://issues.apache.org/jira/browse/HBASE-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956398#comment-13956398 ] Liang Xie commented on HBASE-10882: --- It would be better to ask on the mailing list in future :) Why do you have so many table pool instances? bq. The most ridiculous thing is NO ONE OWNED THE LOCK! I searched the jstack output carefully, but cannot find any process who claimed to own the lock. The thread dump doesn't show the lock holder when using a Lock, but you could see it when using synchronized. :) Bulkload process hangs on regions randomly and finally throws RegionTooBusyException Key: HBASE-10882 URL: https://issues.apache.org/jira/browse/HBASE-10882 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.94.10 Environment: rhel 5.6, jdk1.7.0_45, hadoop-2.2.0-cdh5.0.0 Reporter: Victor Xu Attachments: jstack_5105.log I came across the problem in the early morning several days ago. It happened when I used the hadoop completebulkload command to bulk load some hdfs files into an hbase table. Several regions hung, and after three retries they all threw RegionTooBusyExceptions. Fortunately, I caught one of the exceptional region’s HRegionServer process’s jstack info just in time. I found that the bulkload process was waiting for a write lock: at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115) The lock id is 0x0004054ecbf0. In the meantime, many other Get/Scan operations were also waiting for the same lock id. And, of course, they were waiting for the read lock: at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:873) The most ridiculous thing is NO ONE OWNED THE LOCK! I searched the jstack output carefully, but could not find any thread that claimed to own the lock. When I restarted the bulk load process, it failed at different regions but with the same RegionTooBusyExceptions.
I guess maybe the region was doing some compactions at that time and owned the lock, but I couldn’t find compaction info in the hbase logs. Finally, after several days’ hard work, the only temporary solution to this problem was found: TRIGGERING A MAJOR COMPACTION BEFORE THE BULKLOAD. So which process owned the lock? Has anyone come across the same problem before? -- This message was sent by Atlassian JIRA (v6.2#6252)
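One plausible reading of the "no one owned the lock" observation, sketched below as an illustration rather than a diagnosis of this particular jstack: ReentrantReadWriteLock records an owner thread only for the write lock, while read-lock holders are tracked only as a count, so a stuck or leaked read lock blocks writers without any thread appearing to own the lock. The subclass here exists only to expose the protected getOwner().

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Read-lock holders are not recorded as owners; only the write lock has a
// single recorded owner thread. A held read lock therefore blocks writers
// while showing no owner anywhere.
public class RwLockOwnerSketch extends ReentrantReadWriteLock {

  public Thread owner() {
    return getOwner(); // protected in ReentrantReadWriteLock
  }

  public static void main(String[] args) {
    RwLockOwnerSketch lock = new RwLockOwnerSketch();
    lock.readLock().lock();
    System.out.println(lock.getReadLockCount());    // 1: someone holds a read lock
    System.out.println(lock.owner());               // null: but no owner is recorded
    System.out.println(lock.writeLock().tryLock()); // false: writers are blocked
  }
}
```

This is consistent with `jstack -l` output, which lists "locked ownable synchronizers" for exclusively held locks: a shared read lock has no exclusive owner to report, unlike a `synchronized` monitor, whose holder always appears in the dump.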
[jira] [Updated] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-10883: --- Attachment: HBASE-10883_1.patch Now validation is in sync with VisibilityLabelsValidator. It also checks for empty or whitespace-only strings. Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch, HBASE-10883_1.patch Currently we allow any string as visibility label or request authorization. However as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including visibility expression operators '&', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-10883: --- Status: Patch Available (was: Open) Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch, HBASE-10883_1.patch Currently we allow any string as visibility label or request authorization. However as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including visibility expression operators '&', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-10883: --- Status: Open (was: Patch Available) Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch, HBASE-10883_1.patch Currently we allow any string as visibility label or request authorization. However as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including visibility expression operators '&', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956436#comment-13956436 ] Ashish Singhi commented on HBASE-10883: --- Minor nit. {code} } catch (IllegalArgumentException e) { + fail("Should have failed for -B"); +} {code} Message should be something like "should not fail". Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch, HBASE-10883_1.patch Currently we allow any string as visibility label or request authorization. However as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including visibility expression operators '&', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956450#comment-13956450 ] Hadoop QA commented on HBASE-10883: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638028/HBASE-10883.patch against trunk revision . ATTACHMENT ID: 12638028 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9155//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9155//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9155//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9155//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9155//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9155//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9155//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9155//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9155//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9155//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9155//console This message is automatically generated. Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch, HBASE-10883_1.patch Currently we allow any string as visibility label or request authorization. 
However as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including visibility expression operators '&', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10881) Support reverse scan in thrift2
[ https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956466#comment-13956466 ] Hadoop QA commented on HBASE-10881: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638043/HBASE-10881-trunk-v2.diff against trunk revision . ATTACHMENT ID: 12638043 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified tests. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 lineLengths{color}. 
The patch introduces the following lines longer than 100: + private static final org.apache.thrift.protocol.TField REVERSED_FIELD_DESC = new org.apache.thrift.protocol.TField("reversed", org.apache.thrift.protocol.TType.BOOL, (short)9); + private _Fields optionals[] = {_Fields.START_ROW,_Fields.STOP_ROW,_Fields.TIMESTAMP,_Fields.COLUMNS,_Fields.CACHING,_Fields.FILTER_STRING,_Fields.BATCH_SIZE,_Fields.SORT_COLUMNS,_Fields.REVERSED}; +tmpMap.put(_Fields.REVERSED, new org.apache.thrift.meta_data.FieldMetaData("reversed", org.apache.thrift.TFieldRequirementType.OPTIONAL, + private static final org.apache.thrift.protocol.TField REVERSED_FIELD_DESC = new org.apache.thrift.protocol.TField("reversed", org.apache.thrift.protocol.TType.BOOL, (short)11); + private _Fields optionals[] = {_Fields.START_ROW,_Fields.STOP_ROW,_Fields.COLUMNS,_Fields.CACHING,_Fields.MAX_VERSIONS,_Fields.TIME_RANGE,_Fields.FILTER_STRING,_Fields.BATCH_SIZE,_Fields.ATTRIBUTES,_Fields.AUTHORIZATIONS,_Fields.REVERSED}; +tmpMap.put(_Fields.REVERSED, new org.apache.thrift.meta_data.FieldMetaData("reversed", org.apache.thrift.TFieldRequirementType.OPTIONAL, + org.apache.thrift.protocol.TList _list117 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32()); + org.apache.thrift.protocol.TMap _map120 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32()); {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.regionserver.wal.TestLogRolling Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9156//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9156//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9156//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9156//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9156//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9156//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9156//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9156//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9156//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9156//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9156//console This message is automatically generated. 
Support reverse scan in thrift2 --- Key: HBASE-10881 URL: https://issues.apache.org/jira/browse/HBASE-10881 Project: HBase Issue Type: New Feature Components: Thrift Affects Versions: 0.99.0 Reporter: Liu Shaohui Assignee: Liu Shaohui Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10881-trunk-v1.diff, HBASE-10881-trunk-v2.diff Support reverse scan in thrift2. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10887) tidy ThriftUtilities format
[ https://issues.apache.org/jira/browse/HBASE-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956477#comment-13956477 ] Hudson commented on HBASE-10887: SUCCESS: Integrated in HBase-0.98 #259 (See [https://builds.apache.org/job/HBase-0.98/259/]) HBASE-10887 tidy ThriftUtilities format (liangxie: rev 1583595) * /hbase/branches/0.98/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java tidy ThriftUtilities format --- Key: HBASE-10887 URL: https://issues.apache.org/jira/browse/HBASE-10887 Project: HBase Issue Type: Improvement Components: Thrift Affects Versions: 0.98.1, 0.99.0 Reporter: Liang Xie Assignee: Liang Xie Priority: Trivial Fix For: 0.98.1, 0.99.0 Attachments: HBASE-10887.txt Just found this weird code format while reviewing another patch; let's remove the unnecessary tab -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10887) tidy ThriftUtilities format
[ https://issues.apache.org/jira/browse/HBASE-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956509#comment-13956509 ] Hudson commented on HBASE-10887: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #243 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/243/]) HBASE-10887 tidy ThriftUtilities format (liangxie: rev 1583595) * /hbase/branches/0.98/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956525#comment-13956525 ] Hadoop QA commented on HBASE-10883: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638051/HBASE-10883_1.patch against trunk revision . ATTACHMENT ID: 12638051 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9157//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9157//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9157//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9157//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9157//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9157//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9157//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9157//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9157//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9157//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9157//console This message is automatically generated. Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch, HBASE-10883_1.patch Currently we allow any string as visibility label or request authorization. 
However, as seen on HBASE-10878, we accept authorization strings that would not work if provided as labels in visibility expressions. We should throw an exception at least when someone tries to define or use a label or authorization that includes the visibility expression operators '&', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
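For illustration only — the class and method names below are hypothetical, not HBase's actual validator API — a check that rejects labels containing the reserved operator characters could look like this:

```java
public class LabelCheck {
    // Visibility-expression operator characters; a label containing any of
    // these could not safely be used inside an expression later.
    private static final String RESERVED = "&|!()";

    // Returns true only for a non-empty label free of reserved characters.
    static boolean isValidLabel(String label) {
        if (label == null || label.isEmpty()) {
            return false;
        }
        for (int i = 0; i < label.length(); i++) {
            if (RESERVED.indexOf(label.charAt(i)) >= 0) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidLabel("secret")); // plain label passes
        System.out.println(isValidLabel("a|b"));    // operator character fails
    }
}
```

Where such a check should live (HConstants, client, or a server-side validator) is exactly the question debated in the later comments; only the character set is taken from this issue.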
[jira] [Commented] (HBASE-10887) tidy ThriftUtilities format
[ https://issues.apache.org/jira/browse/HBASE-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956526#comment-13956526 ] Hudson commented on HBASE-10887: FAILURE: Integrated in HBase-TRUNK #5054 (See [https://builds.apache.org/job/HBase-TRUNK/5054/]) HBASE-10887 tidy ThriftUtilities format (liangxie: rev 1583593) * /hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location
[ https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956547#comment-13956547 ] Nick Dimiduk commented on HBASE-10830: -- I ran with v01 last night, this is how far the suite made it before my lappy ran out of disk space. I guess these tests don't clean up after themselves by design. {noformat} --- T E S T S --- Running org.apache.hadoop.hbase.IntegrationTestIngest Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 751.134 sec Running org.apache.hadoop.hbase.IntegrationTestIngestStripeCompactions Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 836.726 sec Running org.apache.hadoop.hbase.IntegrationTestIngestWithACL Running org.apache.hadoop.hbase.IntegrationTestIngestWithEncryption Running org.apache.hadoop.hbase.IntegrationTestIngestWithTags Running org.apache.hadoop.hbase.IntegrationTestIngestWithVisibilityLabels Running org.apache.hadoop.hbase.IntegrationTestLazyCfLoading Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 366.592 sec FAILURE! Running org.apache.hadoop.hbase.IntegrationTestManyRegions Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 350.176 sec FAILURE! Running org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad Running org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv Running org.apache.hadoop.hbase.mapreduce.IntegrationTestTableMapReduceUtil Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 1.977 sec Running org.apache.hadoop.hbase.mttr.IntegrationTestMTTR Running org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 17.081 sec FAILURE! 
Results : Tests in error: testReadersAndWriters(org.apache.hadoop.hbase.IntegrationTestLazyCfLoading): Shutting down testReadersAndWriters(org.apache.hadoop.hbase.IntegrationTestLazyCfLoading): callTimeout=6, callDuration=69647: row 'IntegrationTestLazyCfLoading,,' on table 'hbase:meta testCreateTableWithRegions(org.apache.hadoop.hbase.IntegrationTestManyRegions): Shutting down testCreateTableWithRegions(org.apache.hadoop.hbase.IntegrationTestManyRegions): callTimeout=6, callDuration=69858: row 'IntegrationTestManyRegions,,' on table 'hbase:meta testContinuousIngest(org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList): Cannot create directory /Users/ndimiduk/repos/hbase/hbase-it/target/test-data/61da49bb-d076-4d2f-a1bc-8d8c5cb5fff3/dfscluster_8c91e743-a7be-4fb4-ae07-142554f3379a/dfs/name1/current {noformat} Integration test MR jobs attempt to load htrace jars from the wrong location Key: HBASE-10830 URL: https://issues.apache.org/jira/browse/HBASE-10830 Project: HBase Issue Type: Bug Affects Versions: 0.98.1 Reporter: Andrew Purtell Priority: Minor Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10830.00.patch, HBASE-10830.01.patch The MapReduce jobs submitted by IntegrationTestImportTsv want to load the htrace JAR from the local Maven cache but get confused and use an HDFS URI. {noformat} Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec FAILURE! testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv) Time elapsed: 0.488 sec ERROR! 
java.io.FileNotFoundException: File does not exist: hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110) at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102) at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288) at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224) at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93) at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57) at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264) at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300) at
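A minimal sketch of the suspected failure mode, using only java.net.URI (no Hadoop classes; the host/port and path are taken from the log above): a scheme-less absolute path resolved against the cluster's default filesystem inherits the hdfs scheme, while an explicit file: URI stays local.

```java
import java.net.URI;

public class SchemeResolutionDemo {
    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://localhost:37548/");

        // A bare absolute path picks up the default filesystem's scheme
        // and authority when resolved against it.
        URI resolved = defaultFs.resolve(
            "/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar");
        System.out.println(resolved);          // hdfs://localhost:37548/home/...

        // An explicit file: URI is unambiguous and stays local.
        URI local = URI.create(
            "file:///home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar");
        System.out.println(local.getScheme()); // file
    }
}
```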
[jira] [Commented] (HBASE-10854) Multiple Row/VisibilityLabels visible while in the memstore
[ https://issues.apache.org/jira/browse/HBASE-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956550#comment-13956550 ] Anoop Sam John commented on HBASE-10854: How do we proceed with this issue? Ping [~mbertozzi] Multiple Row/VisibilityLabels visible while in the memstore --- Key: HBASE-10854 URL: https://issues.apache.org/jira/browse/HBASE-10854 Project: HBase Issue Type: Bug Components: security Affects Versions: 0.98.1 Reporter: Matteo Bertozzi If we update the row multiple times with different visibility labels, we are able to get the old version of the row until it is flushed {code} $ sudo -u hbase hbase shell hbase add_labels 'A' hbase add_labels 'B' hbase create 'tb', 'f1' hbase put 'tb', 'row', 'f1:q', 'v1', {VISIBILITY=>'A'} hbase put 'tb', 'row', 'f1:q', 'v1all' hbase put 'tb', 'row', 'f1:q', 'v1aOrB', {VISIBILITY=>'A|B'} hbase put 'tb', 'row', 'f1:q', 'v1aAndB', {VISIBILITY=>'A&B'} hbase scan 'tb' row column=f1:q, timestamp=1395948168154, value=v1aAndB 1 row $ sudo -u testuser hbase shell hbase scan 'tb' row column=f1:q, timestamp=1395948168102, value=v1all 1 row {code} When we flush the memstore we get a single row (the last one inserted), so the testuser gets 0 rows now. {code} $ sudo -u hbase hbase shell hbase flush 'tb' hbase scan 'tb' row column=f1:q, timestamp=1395948168154, value=v1aAndB 1 row $ sudo -u testuser hbase shell hbase scan 'tb' 0 row {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956541#comment-13956541 ] Anoop Sam John commented on HBASE-10883: Why add the pattern and regex to HConstants? Better to keep the validation on the server side and move it into VisibilityLabelsValidator: VisibilityLabelsValidator#isValidLabel(String) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10879) user_permission shell command on namespace doesn't work
[ https://issues.apache.org/jira/browse/HBASE-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956583#comment-13956583 ] stack commented on HBASE-10879: --- +1 for 0.96 user_permission shell command on namespace doesn't work --- Key: HBASE-10879 URL: https://issues.apache.org/jira/browse/HBASE-10879 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Ted Yu Attachments: 10879-v1.txt, 10879-v2.txt Currently the user_permission command on a namespace, e.g. {code} user_permission '@ns' {code} results in the following exception: {code} Exception `NameError' at /usr/lib/hbase/lib/ruby/hbase/security.rb:170 - no method 'getUserPermissions' for arguments (org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy) on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil ERROR: no method 'getUserPermissions' for arguments (org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService.BlockingStub,org.jruby.java.proxies.ArrayJavaProxy) on Java::OrgApacheHadoopHbaseProtobuf::ProtobufUtil Backtrace: /usr/lib/hbase/lib/ruby/hbase/security.rb:170:in `user_permission' /usr/lib/hbase/lib/ruby/shell/commands/user_permission.rb:39:in `command' org/jruby/RubyKernel.java:2109:in `send' /usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe' /usr/lib/hbase/lib/ruby/shell/commands.rb:91:in `translate_hbase_exceptions' /usr/lib/hbase/lib/ruby/shell/commands.rb:34:in `command_safe' /usr/lib/hbase/lib/ruby/shell.rb:127:in `internal_command' /usr/lib/hbase/lib/ruby/shell.rb:119:in `command' (eval):2:in `user_permission' (hbase):1:in `evaluate' org/jruby/RubyKernel.java:1112:in `eval' {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956654#comment-13956654 ] Andrew Purtell commented on HBASE-10883: Why not re-use the code for validating labels? Checking on the client is fine, but the check has to happen on the server side as well, since we can't guarantee a validating client (REST? Thrift?) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HBASE-10882) Bulkload process hangs on regions randomly and finally throws RegionTooBusyException
[ https://issues.apache.org/jira/browse/HBASE-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-10882. Resolution: Invalid Please ask for assistance on the u...@hbase.apache.org mailing list. Bulkload process hangs on regions randomly and finally throws RegionTooBusyException Key: HBASE-10882 URL: https://issues.apache.org/jira/browse/HBASE-10882 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.94.10 Environment: rhel 5.6, jdk1.7.0_45, hadoop-2.2.0-cdh5.0.0 Reporter: Victor Xu Attachments: jstack_5105.log I came across the problem in the early morning several days ago. It happened when I used the hadoop completebulkload command to bulk load some HDFS files into an HBase table. Several regions hung and, after three retries, they all threw RegionTooBusyExceptions. Fortunately, I caught the jstack info of one exceptional region's HRegionServer process just in time. I found that the bulkload process was waiting for a write lock: at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115) The lock id is 0x0004054ecbf0. In the meantime, many other Get/Scan operations were also waiting for the same lock id. And, of course, they were waiting for the read lock: at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:873) The most ridiculous thing is NO ONE OWNED THE LOCK! I searched the jstack output carefully but could not find any thread that claimed to own the lock. When I restarted the bulkload process, it failed at different regions but with the same RegionTooBusyExceptions. I guess maybe the region was doing some compactions at that time and owned the lock, but I couldn't find compaction info in the hbase logs. Finally, after several days' hard work, the only temporary solution found was TRIGGERING A MAJOR COMPACTION BEFORE THE BULKLOAD. So which process owned the lock? Has anyone come across the same problem before? -- This message was sent by Atlassian JIRA (v6.2#6252)
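This is not HBase code — just a self-contained sketch of the tryLock contention pattern visible in the jstack above. One possible explanation for the "no one owned the lock" symptom, offered here as a guess, is a leaked read hold: thread dumps attribute an owner only to the write side of a ReentrantReadWriteLock, so read holds are invisible in jstack.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RegionLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        // Simulate a read hold that is never released (e.g. leaked on an
        // exception path). A thread dump would show no owner for this lock.
        lock.readLock().lock();

        // A writer (the bulkload) cannot acquire while any read hold exists;
        // tryLock with a timeout fails instead of blocking forever.
        boolean acquired = lock.writeLock().tryLock(50, TimeUnit.MILLISECONDS);
        System.out.println("write lock acquired: " + acquired); // false

        // Once the read hold is released, the writer succeeds.
        lock.readLock().unlock();
        acquired = lock.writeLock().tryLock(50, TimeUnit.MILLISECONDS);
        System.out.println("write lock acquired: " + acquired); // true
    }
}
```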
[jira] [Updated] (HBASE-10855) Enable hfilev3 by default
[ https://issues.apache.org/jira/browse/HBASE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-10855: -- Attachment: 10855v2.txt If we don't set the block size, the premise we are asserting holds. Setting the block size too small spreads block content evenly across the three datanodes, which causes us to violate the assertion most of the time this test runs. hfilev3 has bigger kvs, so it is putting us over the block size. Enable hfilev3 by default - Key: HBASE-10855 URL: https://issues.apache.org/jira/browse/HBASE-10855 Project: HBase Issue Type: Sub-task Components: HFile Reporter: stack Assignee: stack Fix For: 0.99.0 Attachments: 10855.txt, 10855.txt, 10855.txt, 10855.txt, 10855v2.txt Distributed log replay needs this. Should be on by default in 1.0/0.99. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10851) Wait for regionservers to join the cluster
[ https://issues.apache.org/jira/browse/HBASE-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-10851: Status: Open (was: Patch Available) Wait for regionservers to join the cluster -- Key: HBASE-10851 URL: https://issues.apache.org/jira/browse/HBASE-10851 Project: HBase Issue Type: Bug Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Critical Attachments: hbase-10851.patch With HBASE-10569, if regionservers are started a while after the master, all regions will be assigned to the master. That may not be what users expect. A work-around is to always start regionservers before masters. I was wondering if the master can wait a little for other regionservers to join. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10851) Wait for regionservers to join the cluster
[ https://issues.apache.org/jira/browse/HBASE-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956673#comment-13956673 ] Jimmy Xiang commented on HBASE-10851: - Good question. I was thinking about this last night too. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10859) HStore.openStoreFiles() should pass the StoreFileInfo object to createStoreFileAndReader()
[ https://issues.apache.org/jira/browse/HBASE-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956675#comment-13956675 ] stack commented on HBASE-10859: --- Patch looks great [~enis] +1 HStore.openStoreFiles() should pass the StoreFileInfo object to createStoreFileAndReader() -- Key: HBASE-10859 URL: https://issues.apache.org/jira/browse/HBASE-10859 Project: HBase Issue Type: Sub-task Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: hbase-10070 Attachments: hbase-10859_v1.patch, hbase-10859_v2.patch We sometimes see the following stack trace on test logs (TestReplicasClient), but this is not test-specific: {code} 2014-03-26 21:44:18,662 ERROR [RS_OPEN_REGION-c64-s12:35852-2] handler.OpenRegionHandler(481): Failed open of region=TestReplicasClient,,1395895445056_0001.5f8b8db27e36d2dde781193d92a05730., starting to roll back the global memstore size. java.io.IOException: java.io.IOException: java.io.FileNotFoundException: File does not exist: hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90 at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:739) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:646) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:617) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4447) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4417) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4389) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4345) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4296) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465) at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) Caused by: java.io.IOException: java.io.FileNotFoundException: File does not exist: hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90 at org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:531) at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:486) at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:254) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3357) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:710) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:707) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) ... 
3 more Caused by: java.io.FileNotFoundException: File does not exist: hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90 at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1128) at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.<init>(StoreFileInfo.java:95) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:600) at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:121) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:506) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:503) ... 8 more {code} The region fails to open for the region replica because, at this time, the primary region is performing a compaction. The file is moved to the archive directory in between listing of store
[jira] [Commented] (HBASE-8234) Introducing recovering region state in AM to mark a region in recovering status used in distributedLogReplay
[ https://issues.apache.org/jira/browse/HBASE-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956682#comment-13956682 ] stack commented on HBASE-8234: -- [~jeffreyz] Is this nice-to-have or a requirement when we have distributed log replay enabled? If a table is disabling, will we lose data without it? Thanks [~jeffreyz] Introducing recovering region state in AM to mark a region in recovering status used in distributedLogReplay -- Key: HBASE-8234 URL: https://issues.apache.org/jira/browse/HBASE-8234 Project: HBase Issue Type: Sub-task Components: MTTR Reporter: Jeffrey Zhong There are two advantages to having this new recovering state in the Assignment Manager for a region: 1) Instead of marking a region recovering in ZK, we can consolidate all region states in one place, visible to the assignment manager 2) When handling a disabling table, we need this new state so that regions of the disabling table can be transitioned into it for recovering. Notes: In the initial release of distributed log replay, we may not do this subtask, for simplicity. Without the new state, we still need to create recovered-edits files for regions of a disabling table. With the new state, we can retire recovered-edits file creation entirely. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10879) user_permission shell command on namespace doesn't work
[ https://issues.apache.org/jira/browse/HBASE-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-10879: --- Fix Version/s: 0.96.3 0.98.2 Hadoop Flags: Reviewed -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10879) user_permission shell command on namespace doesn't work
[ https://issues.apache.org/jira/browse/HBASE-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-10879: --- Resolution: Fixed Status: Resolved (was: Patch Available) Thanks for the reviews. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails
[ https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956705#comment-13956705 ] Ted Yu commented on HBASE-10867: In 0.98 and prior releases, servers[] is of size 10. So there is no chance of encountering ArrayIndexOutOfBoundsException. TestRegionPlacement#testRegionPlacement occasionally fails -- Key: HBASE-10867 URL: https://issues.apache.org/jira/browse/HBASE-10867 Project: HBase Issue Type: Test Reporter: Ted Yu Assignee: Ted Yu Priority: Minor Fix For: 0.99.0 Attachments: 10867-v1.txt, 10867-v2.txt From https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/ : {code} java.lang.ArrayIndexOutOfBoundsException: 10 at java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368) at java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377) at org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224) at org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609) at org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303) at org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270) {code} In the setup: {code} TEST_UTIL.startMiniCluster(SLAVES); {code} where SLAVES is 10. So when 10 was used in TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), we would get ArrayIndexOutOfBoundsException. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HBASE-10888) Enable distributed log replay as default
stack created HBASE-10888: - Summary: Enable distributed log replay as default Key: HBASE-10888 URL: https://issues.apache.org/jira/browse/HBASE-10888 Project: HBase Issue Type: Sub-task Reporter: stack Enable 'distributed log replay' by default. Depends on hfilev3 being enabled. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Assigned] (HBASE-10888) Enable distributed log replay as default
[ https://issues.apache.org/jira/browse/HBASE-10888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack reassigned HBASE-10888: - Assignee: stack
[jira] [Updated] (HBASE-10888) Enable distributed log replay as default
[ https://issues.apache.org/jira/browse/HBASE-10888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-10888: -- Attachment: 10888.txt Enable distributed log replay as default. Checks that hfile is at least version 3 also.
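In configuration terms, the change under discussion amounts to something like the sketch below. The property names (hfile.format.version, hbase.master.distributed.log.replay) are believed correct for this 0.99-era work but should be verified against your release:

```xml
<!-- Sketch only: enable distributed log replay, which depends on HFile v3.
     Property names are assumptions based on this era of HBase. -->
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>
<property>
  <name>hbase.master.distributed.log.replay</name>
  <value>true</value>
</property>
```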
[jira] [Commented] (HBASE-10888) Enable distributed log replay as default
[ https://issues.apache.org/jira/browse/HBASE-10888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956728#comment-13956728 ] stack commented on HBASE-10888: --- FYI [~jeffreyz] You seen any issues w/ this sir?
[jira] [Commented] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails
[ https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956731#comment-13956731 ] stack commented on HBASE-10867: --- bq. In 0.98 and prior releases, servers[] is of size 10. I don't follow. I see servers set to 10 in all versions. In 0.98: {code}
private final static int SLAVES = 10;
...
TEST_UTIL.startMiniCluster(SLAVES);
ServerName servers[] = oldStatus.getServers().toArray(new ServerName[10]);
ServerName serverToKill = null;
int killIndex = 0;
Random random = new Random(System.currentTimeMillis());
ServerName metaServer = TEST_UTIL.getHBaseCluster().getServerHoldingMeta();
LOG.debug("Server holding meta " + metaServer);
boolean isNamespaceServer = false;
do {
  // kill a random non-meta server carrying at least one region
  killIndex = random.nextInt(servers.length);
{code} In trunk the same.
[jira] [Updated] (HBASE-10851) Wait for regionservers to join the cluster
[ https://issues.apache.org/jira/browse/HBASE-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-10851: Fix Version/s: 0.99.0 Status: Patch Available (was: Open) Wait for regionservers to join the cluster -- Key: HBASE-10851 URL: https://issues.apache.org/jira/browse/HBASE-10851 Project: HBase Issue Type: Bug Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Critical Fix For: 0.99.0 Attachments: hbase-10851.patch, hbase-10851_v2.patch With HBASE-10569, if regionservers are started a while after the master, all regions will be assigned to the master. That may not be what users expect. A work-around is to always start regionservers before masters. I was wondering if the master can wait a little for other regionservers to join. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10851) Wait for regionservers to join the cluster
[ https://issues.apache.org/jira/browse/HBASE-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-10851: Attachment: hbase-10851_v2.patch Attached v2: countRegionServers ignores backup masters only if they are configured not to host any region.
[jira] [Commented] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location
[ https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956736#comment-13956736 ] stack commented on HBASE-10830: --- Mine is still running but looks better than yours: {code} --- T E S T S --- Running org.apache.hadoop.hbase.IntegrationTestIngest 2014-03-31 21:51:02.256 java[92055:1903] Unable to load realm info from SCDynamicStore Running org.apache.hadoop.hbase.IntegrationTestIngestStripeCompactions 2014-03-31 22:21:02.560 java[93207:1303] Unable to load realm info from SCDynamicStore Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 710.048 sec Running org.apache.hadoop.hbase.IntegrationTestIngestWithACL 2014-03-31 22:32:54.059 java[93367:1903] Unable to load realm info from SCDynamicStore Running org.apache.hadoop.hbase.IntegrationTestIngestWithEncryption 2014-03-31 23:02:53.716 java[94539:1903] Unable to load realm info from SCDynamicStore Running org.apache.hadoop.hbase.IntegrationTestIngestWithTags 2014-04-01 07:44:28.460 java[94813:1903] Unable to load realm info from SCDynamicStore Running org.apache.hadoop.hbase.IntegrationTestIngestWithVisibilityLabels 2014-04-01 08:14:26.307 java[95753:1903] Unable to load realm info from SCDynamicStore Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 813.749 sec Running org.apache.hadoop.hbase.IntegrationTestLazyCfLoading 2014-04-01 08:28:02.137 java[95997:1903] Unable to load realm info from SCDynamicStore Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 361.083 sec Running org.apache.hadoop.hbase.IntegrationTestManyRegions 2014-04-01 08:34:03.947 java[96122:1903] Unable to load realm info from SCDynamicStore Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 127.061 sec Running org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad 2014-04-01 08:36:11.439 java[96202:1303] Unable to load realm info from SCDynamicStore Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 238.66 sec FAILURE! Running org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv 2014-04-01 08:40:10.650 java[97633:1303] Unable to load realm info from SCDynamicStore Running org.apache.hadoop.hbase.mapreduce.IntegrationTestTableMapReduceUtil Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 1.166 sec Running org.apache.hadoop.hbase.mttr.IntegrationTestMTTR 2014-04-01 09:10:12.759 java[99159:1303] Unable to load realm info from SCDynamicStore Running org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList 2014-04-01 09:40:13.860 java[99543:1903] Unable to load realm info from SCDynamicStore {code} Integration test MR jobs attempt to load htrace jars from the wrong location Key: HBASE-10830 URL: https://issues.apache.org/jira/browse/HBASE-10830 Project: HBase Issue Type: Bug Affects Versions: 0.98.1 Reporter: Andrew Purtell Priority: Minor Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10830.00.patch, HBASE-10830.01.patch The MapReduce jobs submitted by IntegrationTestImportTsv want to load the htrace JAR from the local Maven cache but get confused and use a HDFS URI. {noformat} Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec FAILURE! testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv) Time elapsed: 0.488 sec ERROR! 
java.io.FileNotFoundException: File does not exist: hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110) at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102) at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288) at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224) at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93) at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57) at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264) at
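The failure mode above, where a local Maven-repository path surfaces as an hdfs:// URI, is consistent with a schemeless path being resolved against fs.defaultFS. A minimal illustration using plain java.net.URI (Hadoop's Path qualification behaves analogously; the values below are lifted from the stack trace, and the helper name is hypothetical):

```java
import java.net.URI;

public class SchemelessPathDemo {
    // A path with no scheme borrows scheme and authority from the default
    // filesystem URI when resolved. This is plain RFC 3986 behavior, shown
    // here with java.net.URI rather than Hadoop's own Path class.
    static URI qualify(String defaultFs, String schemelessPath) {
        return URI.create(defaultFs).resolve(schemelessPath);
    }

    public static void main(String[] args) {
        URI jar = qualify("hdfs://localhost:37548/",
            "/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar");
        // The local repo path now points into HDFS, which does not contain it,
        // hence the FileNotFoundException seen in the test run.
        System.out.println(jar);
    }
}
```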
[jira] [Commented] (HBASE-10851) Wait for regionservers to join the cluster
[ https://issues.apache.org/jira/browse/HBASE-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956742#comment-13956742 ] stack commented on HBASE-10851: --- This looks better Jimmy. The evaluation of excluded servers is done in one place only, in the balancer. +1
[jira] [Commented] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails
[ https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956743#comment-13956743 ] Ted Yu commented on HBASE-10867: In trunk, if you place a breakpoint at the second assignment below: {code}
ServerName servers[] = oldStatus.getServers().toArray(new ServerName[10]);
ServerName serverToKill = null;
{code} you can see that the servers array is of size 11. Previously the size was 10. As Shaohui pointed out: bq. the Master is also a RegionServer since HBASE-10569
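The size-11 result follows from the semantics of Collection.toArray(T[]): the size-10 array passed in is only a hint, and a collection larger than the hint gets a freshly allocated array of the collection's own size. A self-contained illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class ToArrayDemo {
    // Mimics TestRegionPlacement: dump a server list into a size-10 array
    // hint. When the collection has more than 10 elements, toArray(T[])
    // ignores the hint and allocates a new array sized to the collection,
    // which is why servers[] silently grew to 11 once the master also
    // registered as a regionserver.
    static int arrayLengthFor(int liveServers) {
        List<String> servers = new ArrayList<>();
        for (int i = 0; i < liveServers; i++) {
            servers.add("server-" + i);
        }
        String[] arr = servers.toArray(new String[10]);
        return arr.length;
    }

    public static void main(String[] args) {
        System.out.println(arrayLengthFor(10)); // prints 10
        System.out.println(arrayLengthFor(11)); // prints 11
    }
}
```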
[jira] [Commented] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails
[ https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956756#comment-13956756 ] stack commented on HBASE-10867: --- That makes sense. Thanks. I missed [~liushaohui]'s comment (was focused on your remarks looking for explanation...)
[jira] [Updated] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails
[ https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-10867: --- Attachment: 10867.addendum Addendum removes two unused local variables.
[jira] [Commented] (HBASE-8234) Introducing recovering region state in AM to mark a region in recovering status used in distributedLogReplay
[ https://issues.apache.org/jira/browse/HBASE-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956776#comment-13956776 ] Jeffrey Zhong commented on HBASE-8234: -- This is a nice-to-have thing. We won't lose data when recovering a disabling table because the recovery logic automatically switches back to recovered-edits files for regions of a disabling table. Introducing the new state will complicate AM logic, and we still can't get rid of the recovered-edits implementation as it's also used in snapshots. Therefore, the benefit of having a new state isn't that much. Thanks. Introducing recovering region state in AM to mark a region in recovering status used in distributedLogReplay -- Key: HBASE-8234 URL: https://issues.apache.org/jira/browse/HBASE-8234 Project: HBase Issue Type: Sub-task Components: MTTR Reporter: Jeffrey Zhong There are two advantages to having this new recovering state in the Assignment Manager for a region: 1) Instead of marking a region recovering in ZK, we can consolidate all region states in one place, visible to the assignment manager 2) When handling a disabling table, we need this new state so that regions of a disabling table can be transitioned into it for recovery. Notes: In the initial release of distributed log replay, we may not do this subtask, for simplicity. Without the new state, we still need to create recovered-edits files for regions of a disabling table. With the new state, we can retire recovered-edits file creation entirely. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10888) Enable distributed log replay as default
[ https://issues.apache.org/jira/browse/HBASE-10888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956778#comment-13956778 ] Jeffrey Zhong commented on HBASE-10888: --- No, let's go for it. Cheers!
[jira] [Commented] (HBASE-10091) Exposing HBase DataTypes to non-Java interfaces
[ https://issues.apache.org/jira/browse/HBASE-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956775#comment-13956775 ] Nick Dimiduk commented on HBASE-10091: -- Looks like [~navis] has been thinking about how to specify a composite as well. Exposing HBase DataTypes to non-Java interfaces --- Key: HBASE-10091 URL: https://issues.apache.org/jira/browse/HBASE-10091 Project: HBase Issue Type: Sub-task Components: Client Reporter: Nick Dimiduk Access to the DataType implementations introduced in HBASE-8693 is currently limited to consumers of the Java API. It is not easy to specify a data type in non-Java environments, such as the HBase shell, REST or Thrift Gateways, command-line arguments to our utility MapReduce jobs, or in integration points such as a (hypothetical extension to) Hive's HBaseStorageHandler. See examples where this limitation impedes in HBASE-8593 and HBASE-10071. I propose the implementation of a type definition DSL, similar to the language defined for Filters in HBASE-4176. By implementing this in core HBase, it can be reused in all of the situations described previously. The parser for this DSL must support arbitrary type extensions, just as the Filter parser allows for new Filter types to be registered at runtime. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails
[ https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956779#comment-13956779 ] stack commented on HBASE-10867: --- +1
[jira] [Commented] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956787#comment-13956787 ] ramkrishna.s.vasudevan commented on HBASE-10883: bq. We better keep the validation at server side? I thought HConstants would help in client-side validation. I thought of unifying but decided not to, as I thought it better to validate client side, right on creation of the Authorization object. Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch, HBASE-10883_1.patch Currently we allow any string as a visibility label or request authorization. However, as seen in HBASE-10878, we accept authorization strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including the visibility expression operators '&', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
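The restriction under discussion could be sketched as a client-side check like the one below. The method name and exact operator set are assumptions drawn from the issue description, not HBase's actual implementation:

```java
public class LabelSyntaxCheck {
    // Characters reserved by visibility expressions, per the issue
    // description above; a label containing any of them could not later
    // be used inside an expression unambiguously.
    private static final String OPERATORS = "&|!()";

    // Hypothetical validator: a label is acceptable only if non-empty
    // and free of expression operators.
    static boolean isValidLabel(String label) {
        for (char c : label.toCharArray()) {
            if (OPERATORS.indexOf(c) >= 0) {
                return false;
            }
        }
        return !label.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isValidLabel("secret")); // true
        System.out.println(isValidLabel("a|b"));    // false
    }
}
```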
[jira] [Commented] (HBASE-10859) HStore.openStoreFiles() should pass the StoreFileInfo object to createStoreFileAndReader()
[ https://issues.apache.org/jira/browse/HBASE-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956783#comment-13956783 ] Nick Dimiduk commented on HBASE-10859: -- Nice. Only one question/nit: for these loops, does it make sense/matter to throw the first exception instead of the last one? {noformat}
+for (int i = 0; i < this.link.getLocations().length; i++) {
+  // HFileLink Reference
+  try {
+    status = link.getFileStatus(fs);
+    return computeRefFileHDFSBlockDistribution(fs, reference, status);
+  } catch (FileNotFoundException ex) {
+    // try the other location
+    exToThrow = ex;
{noformat} +1 HStore.openStoreFiles() should pass the StoreFileInfo object to createStoreFileAndReader() -- Key: HBASE-10859 URL: https://issues.apache.org/jira/browse/HBASE-10859 Project: HBase Issue Type: Sub-task Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: hbase-10070 Attachments: hbase-10859_v1.patch, hbase-10859_v2.patch We sometimes see the following stack trace in test logs (TestReplicasClient), but this is not test-specific: {code} 2014-03-26 21:44:18,662 ERROR [RS_OPEN_REGION-c64-s12:35852-2] handler.OpenRegionHandler(481): Failed open of region=TestReplicasClient,,1395895445056_0001.5f8b8db27e36d2dde781193d92a05730., starting to roll back the global memstore size.
java.io.IOException: java.io.IOException: java.io.FileNotFoundException: File does not exist: hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90 at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:739) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:646) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:617) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4447) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4417) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4389) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4345) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4296) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) Caused by: java.io.IOException: java.io.FileNotFoundException: File does not exist: hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90 at org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:531) at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:486) at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:254) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3357) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:710) at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:707) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) ... 3 more Caused by: java.io.FileNotFoundException: File does not exist: hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90 at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1128) at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397) at org.apache.hadoop.hbase.regionserver.StoreFileInfo.init(StoreFileInfo.java:95) at
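The variant Nick asks about, remembering the first FileNotFoundException while still probing the remaining locations, can be sketched as follows. The interface and method names here are hypothetical, not the actual HStore/HFileLink API:

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class FirstExceptionRetry {
    // Hypothetical stand-in for "read the file status at location i".
    interface Probe {
        String read(int location) throws FileNotFoundException;
    }

    // Try each location in order; if every one fails, rethrow the FIRST
    // failure rather than the last, since later failures usually just
    // repeat the same root cause with less useful context.
    static String readAnyLocation(Probe probe, int locations) throws IOException {
        FileNotFoundException firstFailure = null;
        for (int i = 0; i < locations; i++) {
            try {
                return probe.read(i);
            } catch (FileNotFoundException ex) {
                if (firstFailure == null) {
                    firstFailure = ex; // remember the first, keep trying
                }
            }
        }
        if (firstFailure != null) {
            throw firstFailure;
        }
        throw new IOException("no locations to read");
    }

    public static void main(String[] args) {
        try {
            readAnyLocation(loc -> {
                throw new FileNotFoundException("location " + loc + " missing");
            }, 3);
        } catch (IOException e) {
            System.out.println(e.getMessage()); // location 0 missing
        }
    }
}
```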
[jira] [Created] (HBASE-10889) test-patch.sh should exclude thrift generated code from long line detection
Ted Yu created HBASE-10889: -- Summary: test-patch.sh should exclude thrift generated code from long line detection Key: HBASE-10889 URL: https://issues.apache.org/jira/browse/HBASE-10889 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu As can be seen from HBASE-10881 : {code} {color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100: + private static final org.apache.thrift.protocol.TField REVERSED_FIELD_DESC = new org.apache.thrift.protocol.TField("reversed", org.apache.thrift.protocol.TType.BOOL, (short)11); {code} test-patch.sh should exclude thrift generated code from long line detection. -- This message was sent by Atlassian JIRA (v6.2#6252)
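One way the exclusion could work, shown as a sketch: flag added lines over 100 characters unless the file path looks thrift-generated. The path pattern is an assumption for illustration, not the rule actually adopted in test-patch.sh:

```java
public class GeneratedCodeLineFilter {
    // Decide whether an added line should trigger the lineLengths warning.
    // Generated thrift sources are exempt because their line lengths are
    // outside the contributor's control.
    static boolean shouldFlag(String path, String addedLine) {
        boolean generated = path.contains("/thrift/generated/");
        return !generated && addedLine.length() > 100;
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 150; i++) {
            sb.append('x');
        }
        String longLine = sb.toString();
        System.out.println(shouldFlag(
            "hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TScan.java",
            longLine)); // false
        System.out.println(shouldFlag(
            "hbase-server/src/main/java/Foo.java", longLine)); // true
    }
}
```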
[jira] [Commented] (HBASE-10881) Support reverse scan in thrift2
[ https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956801#comment-13956801 ] Ted Yu commented on HBASE-10881: Logged HBASE-10889 for long line detection enhancement. Support reverse scan in thrift2 --- Key: HBASE-10881 URL: https://issues.apache.org/jira/browse/HBASE-10881 Project: HBase Issue Type: New Feature Components: Thrift Affects Versions: 0.99.0 Reporter: Liu Shaohui Assignee: Liu Shaohui Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10881-trunk-v1.diff, HBASE-10881-trunk-v2.diff Support reverse scan in thrift2. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10889) test-patch.sh should exclude thrift generated code from long line detection
[ https://issues.apache.org/jira/browse/HBASE-10889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-10889: --- Priority: Minor (was: Major)
[jira] [Updated] (HBASE-10889) test-patch.sh should exclude thrift generated code from long line detection
[ https://issues.apache.org/jira/browse/HBASE-10889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-10889: --- Status: Patch Available (was: Open) test-patch.sh should exclude thrift generated code from long line detection --- Key: HBASE-10889 URL: https://issues.apache.org/jira/browse/HBASE-10889 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Priority: Minor Attachments: 10889-v1.txt As can be seen from HBASE-10881 : {code} {color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100: + private static final org.apache.thrift.protocol.TField REVERSED_FIELD_DESC = new org.apache.thrift.protocol.TField(reversed, org.apache.thrift.protocol.TType.BOOL, (short)11); {code} test-patch.sh should exclude thrift generated code from long line detection. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10889) test-patch.sh should exclude thrift generated code from long line detection
[ https://issues.apache.org/jira/browse/HBASE-10889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-10889: --- Attachment: 10889-v1.txt test-patch.sh should exclude thrift generated code from long line detection --- Key: HBASE-10889 URL: https://issues.apache.org/jira/browse/HBASE-10889 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Attachments: 10889-v1.txt As can be seen from HBASE-10881 : {code} {color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100: + private static final org.apache.thrift.protocol.TField REVERSED_FIELD_DESC = new org.apache.thrift.protocol.TField(reversed, org.apache.thrift.protocol.TType.BOOL, (short)11); {code} test-patch.sh should exclude thrift generated code from long line detection. -- This message was sent by Atlassian JIRA (v6.2#6252)
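The exclusion Ted proposes can be sketched as a small awk filter over a patch file. This is a hedged illustration, not the contents of 10889-v1.txt; the file paths, the '/generated/' pattern, and the demo patch are invented for the example.

```shell
# Hedged sketch of a long-line check that skips Thrift-generated files.
# Build a tiny demo patch: one generated file with a long line, one
# hand-written file with a long line.
cat > /tmp/demo.patch <<'EOF'
+++ b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TScan.java
+  private static final org.apache.thrift.protocol.TField REVERSED_FIELD_DESC = new org.apache.thrift.protocol.TField("reversed", org.apache.thrift.protocol.TType.BOOL, (short)11);
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/Foo.java
+    private static final String VERY_LONG_CONSTANT_FOR_DEMO = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
EOF
# Track the current file from '+++' headers; flag added lines over 100
# characters unless the file path contains '/generated/'. The threshold is
# 101 because the diff's leading '+' adds one character.
awk '/^\+\+\+ /{skip = ($0 ~ /\/generated\//)}
     /^\+/ && !/^\+\+\+/ && !skip && length($0) > 101' /tmp/demo.patch
```

Only the long line in Foo.java is reported; the equally long Thrift-generated line is skipped.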
[jira] [Commented] (HBASE-10886) add htrace-zipkin to the runtime dependencies again
[ https://issues.apache.org/jira/browse/HBASE-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956804#comment-13956804 ] Nick Dimiduk commented on HBASE-10886: -- Please correct me if I'm wrong, but the zipkin integration should only be needed where SpanReceiverHost is used, right? {noformat} $ grep -riIn --exclude '*.xml' SpanReceiverHost * | cut -d/ -f1 | sort | uniq hbase-it hbase-server hbase-shell {noformat} add htrace-zipkin to the runtime dependencies again --- Key: HBASE-10886 URL: https://issues.apache.org/jira/browse/HBASE-10886 Project: HBase Issue Type: Improvement Reporter: Masatake Iwasaki Assignee: Masatake Iwasaki Priority: Minor Attachments: HBASE-10886-0.patch htrace-zipkin was once removed from the dependencies in HBASE-9700. Because all of htrace-zipkin's dependencies are bundled with HBase now, it is good to add it back for ease of use. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-10883: --- Status: Open (was: Patch Available) Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch, HBASE-10883_1.patch Currently we allow any string as visibility label or request authorization. However as seen on HBASE-10878, we accept for authorizations strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including visibility expression operators '&', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
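The restriction the issue asks for amounts to rejecting any label or authorization containing a visibility expression operator. A minimal sketch, assuming a simple character-class check suffices (the real patch validates in Java inside the VisibilityController; the labels below are invented):

```shell
# Reject labels containing visibility expression operators: & | ! ( )
is_valid_label() {
  ! printf '%s' "$1" | grep -q '[&|!()]'
}

is_valid_label 'topsecret'      && echo "topsecret: ok"
is_valid_label 'secret|private' || echo "secret|private: rejected"
```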
[jira] [Commented] (HBASE-10855) Enable hfilev3 by default
[ https://issues.apache.org/jira/browse/HBASE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956813#comment-13956813 ] Hadoop QA commented on HBASE-10855: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638081/10855v2.txt against trunk revision . ATTACHMENT ID: 12638081 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: org.apache.hadoop.hbase.procedure.TestZKProcedure org.apache.hadoop.hbase.master.TestMasterNoCluster {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9158//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9158//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9158//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9158//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9158//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9158//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9158//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9158//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9158//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9158//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9158//console This message is automatically generated. Enable hfilev3 by default - Key: HBASE-10855 URL: https://issues.apache.org/jira/browse/HBASE-10855 Project: HBase Issue Type: Sub-task Components: HFile Reporter: stack Assignee: stack Fix For: 0.99.0 Attachments: 10855.txt, 10855.txt, 10855.txt, 10855.txt, 10855v2.txt Distributed log replay needs this. Should be on by default in 1.0/0.99. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10851) Wait for regionservers to join the cluster
[ https://issues.apache.org/jira/browse/HBASE-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956861#comment-13956861 ] Hadoop QA commented on HBASE-10851: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638087/hbase-10851_v2.patch against trunk revision . ATTACHMENT ID: 12638087 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9159//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9159//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9159//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9159//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9159//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9159//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9159//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9159//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9159//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9159//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9159//console This message is automatically generated. Wait for regionservers to join the cluster -- Key: HBASE-10851 URL: https://issues.apache.org/jira/browse/HBASE-10851 Project: HBase Issue Type: Bug Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Critical Fix For: 0.99.0 Attachments: hbase-10851.patch, hbase-10851_v2.patch With HBASE-10569, if regionservers are started a while after the master, all regions will be assigned to the master. That may not be what users expect. 
A work-around is to always start regionservers before masters. I was wondering if the master can wait a little for other regionservers to join. -- This message was sent by Atlassian JIRA (v6.2#6252)
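The wait Jimmy asks about partly exists as startup knobs on the master. A hedged hbase-site.xml sketch; the property names follow the master's regionserver-wait settings of that era, and the values are illustrative only:

```xml
<!-- Hedged illustration: make the master wait for at least 3 regionservers
     (up to the timeout, in milliseconds) before finishing startup. -->
<property>
  <name>hbase.master.wait.on.regionservers.mintostart</name>
  <value>3</value>
</property>
<property>
  <name>hbase.master.wait.on.regionservers.timeout</name>
  <value>4500</value>
</property>
```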
[jira] [Updated] (HBASE-10888) Enable distributed log replay as default
[ https://issues.apache.org/jira/browse/HBASE-10888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-10888: -- Attachment: 10888v2.txt Missing terminator in hbase-default.xml. This patch depends on hfile v3 patch being in first. Enable distributed log replay as default Key: HBASE-10888 URL: https://issues.apache.org/jira/browse/HBASE-10888 Project: HBase Issue Type: Sub-task Reporter: stack Assignee: stack Attachments: 10888.txt, 10888v2.txt Enable 'distributed log replay' by default. Depends on hfilev3 being enabled. -- This message was sent by Atlassian JIRA (v6.2#6252)
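The "missing terminator" stack mentions is an unclosed tag in hbase-default.xml. For orientation, a well-formed entry for the switch this subtask flips might look like the following; the description text is paraphrased, not quoted from the patch:

```xml
<property>
  <name>hbase.master.distributed.log.replay</name>
  <value>true</value>
  <description>Enable 'distributed log replay' by default (HBASE-10888).
  Requires hfile v3 (HBASE-10855) to be enabled first.</description>
</property>
```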
[jira] [Updated] (HBASE-10888) Enable distributed log replay as default
[ https://issues.apache.org/jira/browse/HBASE-10888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-10888: -- Status: Patch Available (was: Open) Trying hadoopqa anyways. Enable distributed log replay as default Key: HBASE-10888 URL: https://issues.apache.org/jira/browse/HBASE-10888 Project: HBase Issue Type: Sub-task Reporter: stack Assignee: stack Attachments: 10888.txt, 10888v2.txt Enable 'distributed log replay' by default. Depends on hfilev3 being enabled. -- This message was sent by Atlassian JIRA (v6.2#6252)