[jira] [Updated] (HBASE-6470) SingleColumnValueFilter with private fields and methods
[ https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Benjamin Kim updated HBASE-6470:
    Attachment: SingleColumnValueFilter_HBASE_6470-trunk.patch

SingleColumnValueFilter with private fields and methods
---
    Key: HBASE-6470
    URL: https://issues.apache.org/jira/browse/HBASE-6470
    Project: HBase
    Issue Type: Improvement
    Components: Filters
    Affects Versions: 0.94.0
    Reporter: Benjamin Kim
    Assignee: Benjamin Kim
    Labels: patch
    Fix For: 0.96.0
    Attachments: SingleColumnValueFilter_HBASE_6470-trunk.patch

Why are most fields and methods declared private in SingleColumnValueFilter? I'm trying to extend SingleColumnValueFilter to support complex column types such as JSON, Array, and CSV, but subclassing it brings no benefit because I have to rewrite the code anyway. I think all the private fields and methods could be made protected.

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6470) SingleColumnValueFilter with private fields and methods
[ https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Benjamin Kim updated HBASE-6470:
    Status: Open (was: Patch Available)
[jira] [Commented] (HBASE-6470) SingleColumnValueFilter with private fields and methods
[ https://issues.apache.org/jira/browse/HBASE-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496036#comment-13496036 ]

Benjamin Kim commented on HBASE-6470:
    oops, I just did
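The subclassing scenario this issue describes can be sketched with simplified stand-in classes (illustrative only, not the real HBase Filter API): with protected rather than private members, a subclass can reuse the base filter's state and override only the value-matching step, e.g. to match a field inside a CSV-encoded cell.

```java
// Illustrative stand-ins, not the real HBase classes: shows how protected
// (rather than private) members let a subclass reuse the base filter's state.
class BaseValueFilter {
    protected byte[] comparisonValue;  // protected: visible to subclasses

    BaseValueFilter(byte[] value) {
        this.comparisonValue = value;
    }

    // Base behavior: exact match against the configured value.
    protected boolean filterMatch(byte[] cellValue) {
        return java.util.Arrays.equals(cellValue, comparisonValue);
    }
}

class CsvValueFilter extends BaseValueFilter {
    CsvValueFilter(byte[] value) {
        super(value);
    }

    // Overrides matching to look for the value in any CSV field, reusing
    // the inherited comparisonValue instead of duplicating the base class.
    @Override
    protected boolean filterMatch(byte[] cellValue) {
        for (String field : new String(cellValue).split(",")) {
            if (java.util.Arrays.equals(field.getBytes(), comparisonValue)) {
                return true;
            }
        }
        return false;
    }
}
```

With private fields, CsvValueFilter would have to re-declare the base class's state and constructors, which is exactly the rewriting the reporter describes.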
[jira] [Commented] (HBASE-7104) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
[ https://issues.apache.org/jira/browse/HBASE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496040#comment-13496040 ]

nkeywal commented on HBASE-7104:
    I suppose it works when you have previously built hadoop 1.1 locally. I get the same error if I delete my repository; I don't know why it happens this way. With this, I have:

    [INFO] +- org.apache.hbase:hbase-hadoop1-compat:test-jar:tests:0.95-SNAPSHOT:test
    [INFO] +- io.netty:netty:jar:3.5.9.Final:compile
    [INFO] +- org.apache.hadoop:hadoop-client:jar:2.0.2-alpha:compile
    [INFO] |  +- org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.0.2-alpha:compile
    [INFO] |  |  \- org.jboss.netty:netty:jar:3.2.4.Final:compile

    So we still have 3.2.4 when we build HBase, but since it comes only from the hadoop 2 compat module, let's say it's acceptable for now (I hope it will be changed in hadoop itself)... Sorry for the mess...

HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
--
    Key: HBASE-7104
    URL: https://issues.apache.org/jira/browse/HBASE-7104
    Project: HBase
    Issue Type: Bug
    Components: build
    Affects Versions: 0.96.0
    Reporter: nkeywal
    Assignee: nkeywal
    Priority: Minor
    Fix For: 0.96.0
    Attachments: 7104.v1.patch

We've got 3 of them on trunk:

    [INFO] org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT
    [INFO] +- io.netty:netty:jar:3.5.0.Final:compile
    [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.3:compile
    [INFO] |  \- org.jboss.netty:netty:jar:3.2.2.Final:compile

    [INFO] org.apache.hbase:hbase-hadoop2-compat:jar:0.95-SNAPSHOT
    [INFO] +- org.apache.hadoop:hadoop-client:jar:2.0.2-alpha:compile
    [INFO] |  +- org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.0.2-alpha:compile
    [INFO] |  |  \- org.jboss.netty:netty:jar:3.2.4.Final:compile

The patch attached:
- fixes this for the hadoop 1 profile
- bumps the netty version to 3.5.9
- does not fix it for hadoop 2. I don't know why, but I haven't investigated: as it's still alpha, maybe they will change the version on the hadoop side anyway.

Tests are ok. I haven't really investigated the differences between netty 3.2 and 3.5. A quick search seems to say it's ok, but don't hesitate to raise a warning...
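For readers hitting the same duplicate-netty problem: a standard Maven technique is to exclude the transitive org.jboss.netty artifact and declare io.netty directly. The fragment below is a sketch of that technique only, not the patch attached to this issue; the coordinates mirror the dependency trees quoted above.

```xml
<!-- Sketch only, not the 7104.v1.patch from this issue: exclude the
     transitive org.jboss.netty pulled in by zookeeper, then pin io.netty. -->
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>3.4.3</version>
  <exclusions>
    <exclusion>
      <groupId>org.jboss.netty</groupId>
      <artifactId>netty</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty</artifactId>
  <version>3.5.9.Final</version>
</dependency>
```

Re-running `mvn dependency:tree` afterwards should show a single netty artifact for that subtree.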
[jira] [Commented] (HBASE-5984) TestLogRolling.testLogRollOnPipelineRestart failed with HADOOP 2.0.0
[ https://issues.apache.org/jira/browse/HBASE-5984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496043#comment-13496043 ]

Hudson commented on HBASE-5984:
    Integrated in HBase-0.94 #582 (See [https://builds.apache.org/job/HBase-0.94/582/])
    HBASE-5984 TestLogRolling.testLogRollOnPipelineRestart failed with HADOOP 2.0.0 (Revision 1408574)
    Result = FAILURE
    stack :
    Files :
    * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java

TestLogRolling.testLogRollOnPipelineRestart failed with HADOOP 2.0.0
--
    Key: HBASE-5984
    URL: https://issues.apache.org/jira/browse/HBASE-5984
    Project: HBase
    Issue Type: Test
    Components: test
    Affects Versions: 0.96.0
    Reporter: Jimmy Xiang
    Assignee: Jimmy Xiang
    Fix For: 0.94.3, 0.96.0
    Attachments: hbase_5984.patch

java.io.IOException: Cannot obtain block length for LocatedBlock{BP-1455809779-127.0.0.1-1336670196362:blk_-6960847342982670493_1028; getBlockSize()=1474; corrupt=false; offset=0; locs=[127.0.0.1:58343, 127.0.0.1:48427]}
    at org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:232)
    at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:177)
    at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:119)
    at org.apache.hadoop.hdfs.DFSInputStream.init(DFSInputStream.java:112)
    at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:928)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
    at org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1768)
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.openFile(SequenceFileLogReader.java:66)
    at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1688)
    at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1709)
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.init(SequenceFileLogReader.java:58)
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:166)
    at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:659)
    at org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnPipelineRestart(TestLogRolling.java:498)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock
[ https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496044#comment-13496044 ]

Hudson commented on HBASE-5898:
    Integrated in HBase-0.94 #582 (See [https://builds.apache.org/job/HBase-0.94/582/])
    HBASE-5898 Consider double-checked locking for block cache lock (Todd, Elliot, LarsH) (Revision 1408621)
    Result = FAILURE
    larsh :
    Files :
    * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
    * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java
    * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java
    * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java
    * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
    * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
    * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java
    * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java
    * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabCache.java
    * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
    * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
    * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
    * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
    * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java

Consider double-checked locking for block cache lock
--
    Key: HBASE-5898
    URL: https://issues.apache.org/jira/browse/HBASE-5898
    Project: HBase
    Issue Type: Improvement
    Components: Performance
    Affects Versions: 0.94.1
    Reporter: Todd Lipcon
    Assignee: Todd Lipcon
    Priority: Critical
    Fix For: 0.94.3, 0.96.0
    Attachments: 5898-0.94.txt, 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 5898-v4.txt, 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, HBASE-5898-1.patch, hbase-5898.txt

Running a workload with a high query rate against a dataset that fits in cache, I saw a lot of CPU being used in IdLock.getLockEntry, called by HFileReaderV2.readBlock. Even though it was all cache hits, a lot of CPU was wasted on lock management. I wrote a quick patch to switch to double-checked locking and it improved throughput substantially for this workload.
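The pattern Todd describes can be sketched in isolation (a simplified stand-in, not the actual LruBlockCache or IdLock code): probe the cache without any lock first, so hits pay no locking cost, and only on a miss take the lock and re-check before loading.

```java
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch of double-checked locking for a block cache; the real
// HBase code uses a per-key IdLock rather than a single monitor.
class BlockCacheSketch {
    private final ConcurrentHashMap<String, byte[]> cache = new ConcurrentHashMap<>();
    private final Object loadLock = new Object(); // stand-in for a per-key lock

    byte[] readBlock(String key) {
        byte[] block = cache.get(key);   // first check: lock-free, hits skip locking
        if (block != null) {
            return block;
        }
        synchronized (loadLock) {
            block = cache.get(key);      // second check: another thread may have loaded it
            if (block == null) {
                block = loadFromDisk(key);
                cache.put(key, block);
            }
            return block;
        }
    }

    // Placeholder for the expensive read the lock is protecting.
    private byte[] loadFromDisk(String key) {
        return key.getBytes();
    }
}
```

The first, unsynchronized check is what removes the lock-management CPU cost for the all-hits workload described in the issue; the second check under the lock prevents two threads from loading the same block twice.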
[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock
[ https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496049#comment-13496049 ]

Hudson commented on HBASE-5898:
    Integrated in HBase-TRUNK #3534 (See [https://builds.apache.org/job/HBase-TRUNK/3534/])
    HBASE-5898 Consider double-checked locking for block cache lock (Todd, Elliot, LarsH) (Revision 1408620)
    Result = FAILURE
    larsh :
    Files :
    * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
    * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java
    * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java
    * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java
    * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
    * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
    * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java
    * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java
    * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabCache.java
    * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
    * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
    * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
    * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
    * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java
[jira] [Commented] (HBASE-7104) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
[ https://issues.apache.org/jira/browse/HBASE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496050#comment-13496050 ]

Hudson commented on HBASE-7104:
    Integrated in HBase-TRUNK #3534 (See [https://builds.apache.org/job/HBase-TRUNK/3534/])
    HBASE-7104 Partial revert, due to build issues. (Revision 1408575)
    Result = FAILURE
    larsh :
    Files :
    * /hbase/trunk/pom.xml
[jira] [Created] (HBASE-7155) Excessive usage of InterruptedException where it can't be thrown
Daniel Gómez Ferro created HBASE-7155:
    Summary: Excessive usage of InterruptedException where it can't be thrown
    Key: HBASE-7155
    URL: https://issues.apache.org/jira/browse/HBASE-7155
    Project: HBase
    Issue Type: Bug
    Reporter: Daniel Gómez Ferro

RootRegionTracker.getRootRegionLocation() declares that it can throw an InterruptedException, but it never does. The declaration is propagated through many other methods all the way up to the HBaseAdmin API. If we remove the throws clause from the HBaseAdmin API, already-compiled libraries will keep working, but code that tries to catch InterruptedException around one of those methods will no longer compile. Should we clean this up?
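The compatibility concern can be illustrated with hypothetical stand-in classes (these names are invented for the example, not the real HBase API). Because InterruptedException is a checked exception, a caller's catch block compiles only while the callee declares it:

```java
// Hypothetical stand-ins illustrating the issue: a checked exception
// declared in a throws clause but never actually thrown.
class TrackerSketch {
    // Declares InterruptedException but never throws it.
    static String getLocation() throws InterruptedException {
        return "region-location";
    }
}

class CallerSketch {
    static String call() {
        try {
            return TrackerSketch.getLocation();
        } catch (InterruptedException e) {
            // This catch compiles only while getLocation() keeps its throws
            // clause; remove the clause and javac rejects it as unreachable.
            return null;
        }
    }
}
```

If the throws clause is removed from getLocation(), already-compiled callers keep working (the throws clause does not affect binary compatibility), but recompiling CallerSketch fails with "exception InterruptedException is never thrown in the corresponding try statement", which is exactly the trade-off raised in the issue.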
[jira] [Created] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
Matteo Bertozzi created HBASE-7156:
    Summary: Add Data Block Encoding and -D opts to Performance Evaluation
    Key: HBASE-7156
    URL: https://issues.apache.org/jira/browse/HBASE-7156
    Project: HBase
    Issue Type: Improvement
    Components: test
    Reporter: Matteo Bertozzi
    Assignee: Matteo Bertozzi
    Priority: Minor
    Attachments: HBASE-7156-v0.patch

Add the ability to specify Data Block Encoding and other configuration options:

    --blockEncoding=TYPE
    -D property=value

Example:

    hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1
[jira] [Updated] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matteo Bertozzi updated HBASE-7156:
    Status: Patch Available (was: Open)
[jira] [Updated] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matteo Bertozzi updated HBASE-7156:
    Attachment: HBASE-7156-v0.patch
[jira] [Updated] (HBASE-7155) Excessive usage of InterruptedException where it can't be thrown
[ https://issues.apache.org/jira/browse/HBASE-7155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Gómez Ferro updated HBASE-7155:
    Attachment: HBASE-7155.patch

    I did a quick cleanup of InterruptedExceptions. If you think this should be merged, I'll do an in-depth pass to check I didn't miss anything.
[jira] [Updated] (HBASE-7155) Excessive usage of InterruptedException where it can't be thrown
[ https://issues.apache.org/jira/browse/HBASE-7155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Gómez Ferro updated HBASE-7155:
    Assignee: Daniel Gómez Ferro
    Release Note: Some HBaseAdmin methods no longer throw InterruptedException; this could break existing code that tries to catch it.
    Status: Patch Available (was: Open)
[jira] [Commented] (HBASE-5984) TestLogRolling.testLogRollOnPipelineRestart failed with HADOOP 2.0.0
[ https://issues.apache.org/jira/browse/HBASE-5984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496078#comment-13496078 ]

Hudson commented on HBASE-5984:
    Integrated in HBase-0.94-security #83 (See [https://builds.apache.org/job/HBase-0.94-security/83/])
    HBASE-5984 TestLogRolling.testLogRollOnPipelineRestart failed with HADOOP 2.0.0 (Revision 1408574)
    Result = SUCCESS
    stack :
    Files :
    * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java
[jira] [Commented] (HBASE-6958) TestAssignmentManager sometimes fails
[ https://issues.apache.org/jira/browse/HBASE-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496079#comment-13496079 ] Hudson commented on HBASE-6958: --- Integrated in HBase-0.94-security #83 (See [https://builds.apache.org/job/HBase-0.94-security/83/]) HBASE-6958 TestAssignmentManager sometimes fails (Revision 1406700) Result = SUCCESS jxiang : Files : * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java TestAssignmentManager sometimes fails - Key: HBASE-6958 URL: https://issues.apache.org/jira/browse/HBASE-6958 Project: HBase Issue Type: Bug Components: test Reporter: Ted Yu Assignee: Jimmy Xiang Fix For: 0.94.3, 0.96.0 Attachments: 6958_0.94.patch, trunk-6958.patch From https://builds.apache.org/job/HBase-TRUNK/3432/testReport/junit/org.apache.hadoop.hbase.master/TestAssignmentManager/testBalanceOnMasterFailoverScenarioWithOpenedNode/ : {code} Stacktrace java.lang.Exception: test timed out after 5000 milliseconds at java.lang.System.arraycopy(Native Method) at java.lang.ThreadGroup.remove(ThreadGroup.java:969) at java.lang.ThreadGroup.threadTerminated(ThreadGroup.java:942) at java.lang.Thread.exit(Thread.java:732) ... 
2012-10-06 00:46:12,521 DEBUG [MASTER_CLOSE_REGION-mockedAMExecutor-0] zookeeper.ZKUtil(1141): mockedServer-0x13a33892de7000e Retrieved 81 byte(s) of data from znode /hbase/unassigned/dc01abf9cd7fd0ea256af4df02811640 and set watcher; region=t,,1349484359011.dc01abf9cd7fd0ea256af4df02811640., state=M_ZK_REGION_OFFLINE, servername=master,1,1, createTime=1349484372509, payload.length=0 2012-10-06 00:46:12,522 ERROR [MASTER_CLOSE_REGION-mockedAMExecutor-0] executor.EventHandler(205): Caught throwable while processing event RS_ZK_REGION_CLOSED java.lang.NullPointerException at org.apache.hadoop.hbase.master.TestAssignmentManager$MockedLoadBalancer.randomAssignment(TestAssignmentManager.java:773) at org.apache.hadoop.hbase.master.AssignmentManager.getRegionPlan(AssignmentManager.java:1709) at org.apache.hadoop.hbase.master.AssignmentManager.getRegionPlan(AssignmentManager.java:1666) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1435) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1155) at org.apache.hadoop.hbase.master.TestAssignmentManager$AssignmentManagerWithExtrasForTesting.assign(TestAssignmentManager.java:1035) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1130) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1125) at org.apache.hadoop.hbase.master.handler.ClosedRegionHandler.process(ClosedRegionHandler.java:106) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) 2012-10-06 00:46:12,522 DEBUG [pool-1-thread-1-EventThread] master.AssignmentManager(670): Handling transition=M_ZK_REGION_OFFLINE, server=master,1,1, region=dc01abf9cd7fd0ea256af4df02811640, current state from region state map 
={t,,1349484359011.dc01abf9cd7fd0ea256af4df02811640. state=OFFLINE, ts=1349484372508, server=null} {code} Looks like NPE happened on this line: {code} this.gate.set(true); {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
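The stack trace above points at `this.gate.set(true)` inside `MockedLoadBalancer.randomAssignment`, i.e. the `gate` field was still null when a background event handler invoked the balancer. A minimal sketch of that failure mode and the usual fix (initialize the field at construction instead of injecting it later); all class and field names here are illustrative, not the test's actual code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: a test helper whose 'gate' is injected after
// construction can be called by a background event handler before the
// gate exists, producing exactly this kind of NPE.
public class GateSketch {
    static class MockedBalancer {
        AtomicBoolean gate;                      // null until the test injects it
        String randomAssignment() {
            gate.set(true);                      // NPE if invoked before injection
            return "server-0";
        }
    }
    static class SafeMockedBalancer {
        final AtomicBoolean gate = new AtomicBoolean(false);  // fix: initialize eagerly
        String randomAssignment() {
            gate.set(true);
            return "server-0";
        }
    }
    public static boolean unsafeThrowsNpe() {
        try {
            new MockedBalancer().randomAssignment();
            return false;
        } catch (NullPointerException expected) {
            return true;                         // the race observed in the test
        }
    }
    public static boolean safeWorks() {
        SafeMockedBalancer b = new SafeMockedBalancer();
        return "server-0".equals(b.randomAssignment()) && b.gate.get();
    }
    public static void main(String[] args) {
        System.out.println(unsafeThrowsNpe() && safeWorks());  // true
    }
}
```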
[jira] [Commented] (HBASE-6665) ROOT region should not be splitted even with META row as explicit split key
[ https://issues.apache.org/jira/browse/HBASE-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496080#comment-13496080 ] Hudson commented on HBASE-6665: --- Integrated in HBase-0.94-security #83 (See [https://builds.apache.org/job/HBase-0.94-security/83/]) HBASE-6665 ROOT region should not be splitted even with META row as explicit split key (Rajesh) (Revision 1407727) Result = SUCCESS ramkrishna : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java ROOT region should not be splitted even with META row as explicit split key --- Key: HBASE-6665 URL: https://issues.apache.org/jira/browse/HBASE-6665 Project: HBase Issue Type: Bug Reporter: Y. SREENIVASULU REDDY Assignee: rajeshbabu Fix For: 0.92.3, 0.94.3, 0.96.0 Attachments: HBASE-6665_92.patch, HBASE-6665_94.patch, HBASE-6665_trunk.patch A split operation on the ROOT table with .META. as an explicit split key closes the ROOT region and takes a long time to fail the split before rolling back. I think we can skip splits for the ROOT table, just as we do for the META region. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock
[ https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496081#comment-13496081 ] Hudson commented on HBASE-5898: --- Integrated in HBase-0.94-security #83 (See [https://builds.apache.org/job/HBase-0.94-security/83/]) HBASE-5898 Consider double-checked locking for block cache lock (Todd, Elliot, LarsH) (Revision 1408621) Result = SUCCESS larsh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabCache.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java Consider double-checked locking for block cache lock Key: HBASE-5898 URL: https://issues.apache.org/jira/browse/HBASE-5898 Project: HBase Issue Type: Improvement Components: Performance Affects Versions: 0.94.1 Reporter: Todd Lipcon Assignee: Todd Lipcon 
Priority: Critical Fix For: 0.94.3, 0.96.0 Attachments: 5898-0.94.txt, 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 5898-v4.txt, 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, HBASE-5898-1.patch, hbase-5898.txt Running a workload with a high query rate against a dataset that fits in cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a lot of CPU doing lock management here. I wrote a quick patch to switch to a double-checked locking and it improved throughput substantially for this workload. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
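The double-checked pattern described above avoids taking the per-block lock on a cache hit and only serializes readers on a miss. A minimal sketch of the idea, with illustrative names rather than HBase's actual `IdLock`/`LruBlockCache` classes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of double-checked locking for a block cache: probe the cache
// first without a lock, and only lock (then re-check) on a miss, so the
// all-cache-hits workload never pays for lock management.
public class DoubleCheckedCacheSketch {
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();
    private final Object lock = new Object();    // stands in for a per-key IdLock entry
    public int loads = 0;                        // counts expensive block reads

    public byte[] readBlock(String key) {
        byte[] block = cache.get(key);           // first check: lock-free on a hit
        if (block != null) return block;
        synchronized (lock) {
            block = cache.get(key);              // second check: another reader may have loaded it
            if (block == null) {
                loads++;
                block = new byte[] {42};         // stand-in for the disk read
                cache.put(key, block);
            }
        }
        return block;
    }

    public static void main(String[] args) {
        DoubleCheckedCacheSketch c = new DoubleCheckedCacheSketch();
        c.readBlock("b1");
        c.readBlock("b1");                       // hit: no lock taken, no second load
        System.out.println(c.loads);             // 1
    }
}
```

The second check inside the lock is what keeps the pattern correct: without it, two readers missing concurrently would both perform the expensive load.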
[jira] [Commented] (HBASE-7143) TestMetaMigrationRemovingHTD fails when used with Hadoop 0.23/2.x
[ https://issues.apache.org/jira/browse/HBASE-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496082#comment-13496082 ] Hudson commented on HBASE-7143: --- Integrated in HBase-0.94-security #83 (See [https://builds.apache.org/job/HBase-0.94-security/83/]) HBASE-7143 TestMetaMigrationRemovingHTD fails when used with Hadoop 0.23/2.x (Andrey Klochlov) (Revision 1408012) Result = SUCCESS tedyu : Files : * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestMetaMigrationRemovingHTD.java TestMetaMigrationRemovingHTD fails when used with Hadoop 0.23/2.x - Key: HBASE-7143 URL: https://issues.apache.org/jira/browse/HBASE-7143 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.2 Reporter: Andrey Klochkov Assignee: Andrey Klochkov Fix For: 0.94.3, 0.96.0 Attachments: 7143-trunk-v2.txt, HBASE-7143-0.94.patch, HBASE-7143-0.94.patch, HBASE-7143-trunk.patch, HBASE-7143-trunk.patch TestMetaMigrationRemovingHTD fails when the build is done with the -Dhadoop.profile=23 option. The reason is a change of defaults in the -mkdir CLI call: in 0.23/2.x it no longer creates parent directories by default. The patch will be submitted shortly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
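The behavior change described above parallels `java.io.File` in plain Java: `mkdir()` fails when the parent is missing, while `mkdirs()` creates the whole chain, the equivalent of the `-p` flag a `hadoop fs -mkdir` call needs on 0.23/2.x. A small sketch of the distinction:

```java
import java.io.File;

// mkdir() vs mkdirs(): the same single-level / recursive split that bit
// TestMetaMigrationRemovingHTD when Hadoop's -mkdir default changed.
public class MkdirSketch {
    public static boolean singleLevelFails(File base) {
        return !new File(base, "a/b/c").mkdir();   // parent a/b does not exist
    }
    public static boolean recursiveSucceeds(File base) {
        return new File(base, "a/b/c").mkdirs();   // creates a, a/b, a/b/c
    }
    public static void main(String[] args) {
        File base = new File(System.getProperty("java.io.tmpdir"),
                "mkdir-sketch-" + System.nanoTime());
        base.mkdirs();
        System.out.println(singleLevelFails(base) && recursiveSucceeds(base));  // true
    }
}
```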
[jira] [Commented] (HBASE-7097) Log message in SecureServer.class uses wrong class name
[ https://issues.apache.org/jira/browse/HBASE-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496083#comment-13496083 ] Hudson commented on HBASE-7097: --- Integrated in HBase-0.94-security #83 (See [https://builds.apache.org/job/HBase-0.94-security/83/]) Amend HBASE-7097. Change per-request logging in SecureServer to TRACE level (Revision 1406420) HBASE-7097. Log message in SecureServer.class uses wrong class name (Y. Sreenivasulu Reddy) (Revision 1405906) Result = SUCCESS apurtell : Files : * /hbase/branches/0.94/security/src/main/java/org/apache/hadoop/hbase/ipc/SecureServer.java apurtell : Files : * /hbase/branches/0.94/security/src/main/java/org/apache/hadoop/hbase/ipc/SecureServer.java Log message in SecureServer.class uses wrong class name --- Key: HBASE-7097 URL: https://issues.apache.org/jira/browse/HBASE-7097 Project: HBase Issue Type: Improvement Components: security Affects Versions: 0.92.2, 0.94.2 Reporter: Y. SREENIVASULU REDDY Priority: Minor Fix For: 0.92.3, 0.94.3 Attachments: HBASE-7097_94.patch, HBASE-7097-addendum.patch, HBASE-7097-addendum.patch Log messages are printed with the wrong class name in org.apache.hadoop.hbase.ipc.SecureServer: {code} public static final Log LOG = LogFactory.getLog("org.apache.hadoop.ipc.SecureServer"); private static final Log AUDITLOG = LogFactory.getLog("SecurityLogger.org.apache.hadoop.ipc.SecureServer"); {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
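Deriving a logger name from a class literal instead of a hand-copied string avoids this whole class of bug: the pasted name above points at `org.apache.hadoop.ipc` rather than `org.apache.hadoop.hbase.ipc`. A sketch using the JDK's own logging API (commons-logging's `LogFactory` works the same way):

```java
import java.util.logging.Logger;

// Why class literals beat pasted strings for logger names: the name is
// derived from the class itself, so it cannot drift out of sync the way
// the copied org.apache.hadoop.ipc.SecureServer name did.
public class LoggerNameSketch {
    // Fragile: a pasted string can silently name the wrong package.
    static final Logger BAD = Logger.getLogger("org.apache.hadoop.ipc.SecureServer");
    // Robust: the class literal always yields this class's real name.
    static final Logger GOOD = Logger.getLogger(LoggerNameSketch.class.getName());

    public static String goodName() { return GOOD.getName(); }

    public static void main(String[] args) {
        System.out.println(goodName());          // LoggerNameSketch
    }
}
```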
[jira] [Commented] (HBASE-7142) TestSplitLogManager#testDeadWorker may fail because of hard limit on the TimeoutMonitor's timeout period
[ https://issues.apache.org/jira/browse/HBASE-7142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496084#comment-13496084 ] Hudson commented on HBASE-7142: --- Integrated in HBase-0.94-security #83 (See [https://builds.apache.org/job/HBase-0.94-security/83/]) HBASE-7142 TestSplitLogManager#testDeadWorker may fail because of hard limit on the TimeoutMonitor's timeout period (Himanshu) (Revision 1408119) Result = SUCCESS larsh : Files : * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java TestSplitLogManager#testDeadWorker may fail because of hard limit on the TimeoutMonitor's timeout period Key: HBASE-7142 URL: https://issues.apache.org/jira/browse/HBASE-7142 Project: HBase Issue Type: Test Components: test Affects Versions: 0.94.2 Reporter: Himanshu Vashishtha Assignee: Himanshu Vashishtha Priority: Minor Fix For: 0.94.3 Attachments: HBASE-7142.patch The timeout in testDeadWorker is set to 1 second, the same as the TimeoutMonitor thread's timeout. In some cases, this may fail: {code} java.lang.AssertionError at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertTrue(Assert.java:54) at org.apache.hadoop.hbase.master.TestSplitLogManager.waitForCounter(TestSplitLogManager.java:147) at org.apache.hadoop.hbase.master.TestSplitLogManager.waitForCounter(TestSplitLogManager.java:127) at org.apache.hadoop.hbase.master.TestSplitLogManager.testDeadWorker(TestSplitLogManager.java:433) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) {code} The fix is to increase the timeout for this test. It's not needed in trunk, as the timeout there is 3 seconds. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
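The race described above comes from a `waitForCounter`-style poll whose deadline equals the period of the monitor it observes; the poll can only be reliable if the deadline comfortably exceeds that period. A sketch of the pattern; the names and the 100 ms / 3x numbers are illustrative, not the test's actual values:

```java
import java.util.concurrent.atomic.AtomicLong;

// A polling wait must give the monitored event headroom: waiting exactly
// one monitor period races the monitor, which is why the test's timeout
// had to be raised above the TimeoutMonitor's period.
public class WaitForCounterSketch {
    public static boolean waitForCounter(AtomicLong counter, long expected, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (counter.get() == expected) return true;
            try {
                Thread.sleep(10);                // poll interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return counter.get() == expected;        // one last check at the deadline
    }
    public static void main(String[] args) {
        AtomicLong resubmits = new AtomicLong();
        // A stand-in "TimeoutMonitor" that fires after 100 ms.
        new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            resubmits.incrementAndGet();
        }).start();
        // Waiting only 100 ms would race the monitor; 3x headroom is safe.
        System.out.println(waitForCounter(resubmits, 1, 300));  // true
    }
}
```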
[jira] [Commented] (HBASE-7089) Allow filter to be specified for Get from HBase shell
[ https://issues.apache.org/jira/browse/HBASE-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496085#comment-13496085 ] Hudson commented on HBASE-7089: --- Integrated in HBase-0.94-security #83 (See [https://builds.apache.org/job/HBase-0.94-security/83/]) HBASE-7089 Allow filter to be specified for Get from HBase shell (Aditya Kishore) (Revision 1405697) Result = SUCCESS larsh : Files : * /hbase/branches/0.94/src/main/ruby/hbase/table.rb * /hbase/branches/0.94/src/main/ruby/shell/commands/get.rb * /hbase/branches/0.94/src/test/ruby/hbase/table_test.rb Allow filter to be specified for Get from HBase shell - Key: HBASE-7089 URL: https://issues.apache.org/jira/browse/HBASE-7089 Project: HBase Issue Type: Improvement Components: shell Affects Versions: 0.96.0 Reporter: Aditya Kishore Assignee: Aditya Kishore Priority: Minor Fix For: 0.94.3, 0.96.0 Attachments: HBASE-7089_94.patch, HBASE-7089_trunk.patch, HBASE-7089_trunk_v2.patch, HBASE-7089_trunk_v3.patch, HBASE-7089_trunk_v4.patch Unlike scan, get in HBase shell does not accept FILTER as an argument. {noformat} hbase(main):001:0> get 'table', 'row3', {FILTER => ValueFilter (=, 'binary:valueX')} COLUMN CELL ERROR: Failed parse of {FILTER => ValueFilter (=, 'binary:valueX')}, Hash Here is some help for this command: ... {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7103) Need to fail split if SPLIT znode is deleted even before the split is completed.
[ https://issues.apache.org/jira/browse/HBASE-7103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496086#comment-13496086 ] Hudson commented on HBASE-7103: --- Integrated in HBase-0.94-security #83 (See [https://builds.apache.org/job/HBase-0.94-security/83/]) HBASE-7103 Need to fail split if SPLIT znode is deleted even before the split is completed. (Ram) (Revision 1408421) Result = SUCCESS larsh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java Need to fail split if SPLIT znode is deleted even before the split is completed. Key: HBASE-7103 URL: https://issues.apache.org/jira/browse/HBASE-7103 Project: HBase Issue Type: Bug Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.94.3, 0.96.0 Attachments: 7103-6088-revert.txt, HBASE-7103_0.94.patch, HBASE-7103_0.94.patch, HBASE-7103_testcase.patch, HBASE-7103_trunk.patch This came up after the following mail on the dev list: 'infinite loop of RS_ZK_REGION_SPLIT on .94.2'. The following steps are the reason for the problem: - Initially the parent region P1 starts splitting. - The split is going on normally. - Another split starts at the same time for the same region P1. (Not sure why this started.) - Rollback happens on seeing an already existing node. - This node gets deleted in rollback and the nodeDeleted event starts. - In the nodeDeleted event the RIT for the region P1 gets deleted. - Because of this there is no region in RIT. - Now the first split gets over. Here the problem is that we try to transition the node from SPLITTING to SPLIT. But the node does not even exist, yet we don't take any action on this and think it is successful. - Because of this, SplitRegionHandler never gets invoked. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
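The fix described above amounts to checking the result of the SPLITTING-to-SPLIT transition and failing the split when the znode is already gone, instead of assuming success. A minimal sketch with a map standing in for ZooKeeper; all names are illustrative, not the `SplitTransaction` code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a ZK-style versioned transition returns a failure marker when
// the znode no longer exists, and the split must treat that as a hard
// error rather than "thinking it is successful".
public class SplitZnodeSketch {
    public final Map<String, String> znodes = new HashMap<>();  // path -> state

    /** Returns the new "version", or -1 if the node is gone or in the wrong state. */
    public int transition(String path, String from, String to) {
        String state = znodes.get(path);
        if (state == null || !state.equals(from)) return -1;
        znodes.put(path, to);
        return 1;
    }

    /** The fix: propagate the -1 as a failed split instead of ignoring it. */
    public boolean completeSplit(String path) {
        return transition(path, "SPLITTING", "SPLIT") != -1;
    }

    public static void main(String[] args) {
        SplitZnodeSketch zk = new SplitZnodeSketch();
        zk.znodes.put("/hbase/unassigned/p1", "SPLITTING");
        System.out.println(zk.completeSplit("/hbase/unassigned/p1"));   // true
        zk.znodes.remove("/hbase/unassigned/p1");  // concurrent rollback deleted it
        System.out.println(zk.completeSplit("/hbase/unassigned/p1"));   // false -> fail the split
    }
}
```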
[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell
[ https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496087#comment-13496087 ] Hudson commented on HBASE-4913: --- Integrated in HBase-0.94-security #83 (See [https://builds.apache.org/job/HBase-0.94-security/83/]) HBASE-4913 Per-CF compaction Via the Shell (Revision 1408435) Result = SUCCESS gchanan : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/branches/0.94/src/main/ruby/hbase/admin.rb * /hbase/branches/0.94/src/main/ruby/shell/commands/compact.rb * /hbase/branches/0.94/src/main/ruby/shell/commands/major_compact.rb * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionState.java Per-CF compaction Via the Shell --- Key: HBASE-4913 URL: https://issues.apache.org/jira/browse/HBASE-4913 Project: HBase Issue Type: Sub-task Components: Client, regionserver Reporter: Nicolas Spiegelberg Assignee: Mubarak Seyed Fix For: 0.94.3, 0.96.0 Attachments: HBASE-4913-94.patch, HBASE-4913-addendum.patch, HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7151) Better log message for Per-CF compactions
[ https://issues.apache.org/jira/browse/HBASE-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496088#comment-13496088 ] Hudson commented on HBASE-7151: --- Integrated in HBase-0.94-security #83 (See [https://builds.apache.org/job/HBase-0.94-security/83/]) HBASE-7151 Better log message for Per-CF compactions (Revision 1408501) Result = SUCCESS gchanan : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java Better log message for Per-CF compactions - Key: HBASE-7151 URL: https://issues.apache.org/jira/browse/HBASE-7151 Project: HBase Issue Type: Improvement Components: Compaction Reporter: Gregory Chanan Assignee: Gregory Chanan Priority: Trivial Fix For: 0.94.3, 0.96.0 Attachments: HBASE-7151-94.patch, HBASE-7151-trunk.patch A coworker pointed out that in HBASE-4913 it would be nice to include the column family in the log message for a per-CF compaction. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
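The improvement above boils down to including the column family in the compaction log line when one was requested. A sketch of such a message builder; the exact wording is illustrative, not the patched `HRegionServer` code:

```java
// Hypothetical message builder: mention the column family when the
// compaction request targets a single CF, per the improvement above.
public class CompactionLogSketch {
    public static String message(String region, String family, boolean major) {
        String kind = major ? "major compaction" : "compaction";
        return family == null
            ? "Triggering " + kind + " of region " + region
            : "Triggering " + kind + " of column family " + family + " in region " + region;
    }
    public static void main(String[] args) {
        System.out.println(message("t1,,123.abc.", "cf1", false));
    }
}
```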
[jira] [Commented] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496091#comment-13496091 ] Hadoop QA commented on HBASE-7156: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12553292/HBASE-7156-v0.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 93 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 17 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.client.TestMultiParallel org.apache.hadoop.hbase.client.TestShell org.apache.hadoop.hbase.io.hfile.TestForceCacheImportantBlocks org.apache.hadoop.hbase.TestDrainingServer Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3325//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3325//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3325//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3325//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3325//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3325//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3325//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3325//console This message is automatically generated. Add Data Block Encoding and -D opts to Performance Evaluation - Key: HBASE-7156 URL: https://issues.apache.org/jira/browse/HBASE-7156 Project: HBase Issue Type: Improvement Components: test Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Minor Attachments: HBASE-7156-v0.patch Add the ability to specify Data Block Encoding and other configuration options. --blockEncoding=TYPE -D property=value Example: hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1 -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7155) Excessive usage of InterruptedException where it can't be thrown
[ https://issues.apache.org/jira/browse/HBASE-7155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496106#comment-13496106 ] Hadoop QA commented on HBASE-7155: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12553295/HBASE-7155.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 93 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 16 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.client.TestShell Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3326//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3326//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3326//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3326//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3326//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3326//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3326//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3326//console This message is automatically generated. Excessive usage of InterruptedException where it can't be thrown Key: HBASE-7155 URL: https://issues.apache.org/jira/browse/HBASE-7155 Project: HBase Issue Type: Bug Reporter: Daniel Gómez Ferro Assignee: Daniel Gómez Ferro Attachments: HBASE-7155.patch RootRegionTracker.getRootRegionLocation() declares that it can throw an InterruptedException, but it can't. This exception is rethrown by many other functions, reaching the HBaseAdmin API. If we remove the throws clause from the HBaseAdmin API, already compiled libraries will keep working, but if the user is trying to catch an InterruptedException around one of those methods, the compiler will complain. Should we clean this up? -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
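The compatibility point above is that a `throws InterruptedException` clause forces every caller to write a handler even when the exception can never fire; removing the clause later keeps binary compatibility but turns those now-unreachable catch blocks into compile errors. A sketch of the caller burden; the names are illustrative, not the `RootRegionTracker` code:

```java
// A method that declares a checked exception it never throws still forces
// every caller to handle it: the shape HBASE-7155 wants to clean up.
public class ThrowsClauseSketch {
    // Declares InterruptedException but can never actually throw it.
    static String getLocation() throws InterruptedException {
        return "root-region-server";
    }
    public static String callerBurden() {
        try {
            return getLocation();
        } catch (InterruptedException e) {       // dead handler forced on every caller
            Thread.currentThread().interrupt();
            return null;
        }
    }
    public static void main(String[] args) {
        System.out.println(callerBurden());      // root-region-server
    }
}
```

If `getLocation()` later dropped the `throws` clause, this caller would no longer compile, since catching a checked exception that cannot be thrown is itself an error.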
[jira] [Commented] (HBASE-7104) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
[ https://issues.apache.org/jira/browse/HBASE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496194#comment-13496194 ] Hudson commented on HBASE-7104: --- Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #258 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/258/]) HBASE-7104 Partial revert, due to build issues. (Revision 1408575) Result = FAILURE larsh : Files : * /hbase/trunk/pom.xml HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2 -- Key: HBASE-7104 URL: https://issues.apache.org/jira/browse/HBASE-7104 Project: HBase Issue Type: Bug Components: build Affects Versions: 0.96.0 Reporter: nkeywal Assignee: nkeywal Priority: Minor Fix For: 0.96.0 Attachments: 7104.v1.patch We've got 3 of them on trunk. [INFO] org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT [INFO] +- io.netty:netty:jar:3.5.0.Final:compile [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.3:compile [INFO] | \- org.jboss.netty:netty:jar:3.2.2.Final:compile [INFO] org.apache.hbase:hbase-hadoop2-compat:jar:0.95-SNAPSHOT [INFO] +- org.apache.hadoop:hadoop-client:jar:2.0.2-alpha:compile [INFO] | +- org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.0.2-alpha:compile [INFO] | | \- org.jboss.netty:netty:jar:3.2.4.Final:compile The attached patch: - fixes this for the hadoop 1 profile - bumps the netty version to 3.5.9 - does not fix it for hadoop 2. I don't know why, but I haven't investigated: as it's still alpha, maybe they will change the version on the hadoop side anyway. Tests are ok. I haven't really investigated the differences between netty 3.2 and 3.5. A quick search seems to say it's ok, but don't hesitate to raise a warning... -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
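One common Maven approach to converging on a single netty is to exclude the transitive copy pulled in by zookeeper and declare the desired version directly. This is an illustrative sketch only, not the contents of the actual 7104.v1.patch:

```xml
<!-- Hypothetical pom.xml fragment: drop zookeeper's transitive
     org.jboss.netty and pin a single io.netty version. -->
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>3.4.3</version>
  <exclusions>
    <exclusion>
      <groupId>org.jboss.netty</groupId>
      <artifactId>netty</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty</artifactId>
  <version>3.5.9.Final</version>
</dependency>
```

Running `mvn dependency:tree` again after such a change is the usual way to verify that only one netty artifact remains on the classpath.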
[jira] [Commented] (HBASE-5898) Consider double-checked locking for block cache lock
[ https://issues.apache.org/jira/browse/HBASE-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496192#comment-13496192 ] Hudson commented on HBASE-5898: --- Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #258 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/258/]) HBASE-5898 Consider double-checked locking for block cache lock (Todd, Elliot, LarsH) (Revision 1408620) Result = FAILURE larsh : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabCache.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java Consider double-checked locking for block cache lock Key: HBASE-5898 URL: https://issues.apache.org/jira/browse/HBASE-5898 Project: HBase Issue Type: Improvement Components: 
Performance Affects Versions: 0.94.1 Reporter: Todd Lipcon Assignee: Todd Lipcon Priority: Critical Fix For: 0.94.3, 0.96.0 Attachments: 5898-0.94.txt, 5898-TestBlocksRead.txt, 5898-v2.txt, 5898-v3.txt, 5898-v4.txt, 5898-v4.txt, HBASE-5898-0.patch, HBASE-5898-1.patch, HBASE-5898-1.patch, hbase-5898.txt Running a workload with a high query rate against a dataset that fits in cache, I saw a lot of CPU being used in IdLock.getLockEntry, being called by HFileReaderV2.readBlock. Even though it was all cache hits, it was wasting a lot of CPU doing lock management here. I wrote a quick patch to switch to a double-checked locking and it improved throughput substantially for this workload. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7151) Better log message for Per-CF compactions
[ https://issues.apache.org/jira/browse/HBASE-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496193#comment-13496193 ] Hudson commented on HBASE-7151: --- Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #258 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/258/]) HBASE-7151 Better log message for Per-CF compactions (Revision 1408502) Result = FAILURE gchanan : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java Better log message for Per-CF compactions - Key: HBASE-7151 URL: https://issues.apache.org/jira/browse/HBASE-7151 Project: HBase Issue Type: Improvement Components: Compaction Reporter: Gregory Chanan Assignee: Gregory Chanan Priority: Trivial Fix For: 0.94.3, 0.96.0 Attachments: HBASE-7151-94.patch, HBASE-7151-trunk.patch A coworker pointed out that in HBASE-4913 it would be nice to include the column family in the log message for a per-CF compaction. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7128) Reduce annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8
[ https://issues.apache.org/jira/browse/HBASE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496234#comment-13496234 ] Ted Yu commented on HBASE-7128: --- Integrated to trunk. Thanks for the patch, Hiroshi. Thanks for the review, Stack. Reduce annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8 --- Key: HBASE-7128 URL: https://issues.apache.org/jira/browse/HBASE-7128 Project: HBase Issue Type: Improvement Reporter: Hiroshi Ikeda Priority: Trivial Fix For: 0.96.0 Attachments: HBASE-7128.patch, HBASE-7128-V2.patch There is some code that catches UnsupportedEncodingException and logs or ignores it, because Java always supports UTF-8 (see the javadoc of Charset). The catch clauses are annoying and should be replaced by methods of Bytes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
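The annoyance above comes from `String.getBytes(String)` declaring a checked `UnsupportedEncodingException` even for UTF-8, which the platform guarantees; the `Charset` overload (the kind of call a `Bytes.toBytes`-style helper wraps) declares nothing. A sketch of both shapes:

```java
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// The catch clause that can never fire for UTF-8, versus the Charset
// overload that needs no catch at all.
public class Utf8Sketch {
    // The annoying shape: a checked exception for a charset Java must support.
    public static byte[] noisy(String s) {
        try {
            return s.getBytes("UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is guaranteed by the platform", e);
        }
    }
    // The clean shape: no checked exception, same bytes.
    public static byte[] clean(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }
    public static void main(String[] args) {
        System.out.println(Arrays.equals(noisy("hbase"), clean("hbase")));  // true
    }
}
```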
[jira] [Commented] (HBASE-7104) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2
[ https://issues.apache.org/jira/browse/HBASE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496260#comment-13496260 ] Lars Hofhansl commented on HBASE-7104: -- Thanks N. This is a mess, though not because of your changes but because of the Maven dependency hell :) HBase includes multiple versions of netty: 3.5.0; 3.2.4; 3.2.2 -- Key: HBASE-7104 URL: https://issues.apache.org/jira/browse/HBASE-7104 Project: HBase Issue Type: Bug Components: build Affects Versions: 0.96.0 Reporter: nkeywal Assignee: nkeywal Priority: Minor Fix For: 0.96.0 Attachments: 7104.v1.patch We've got 3 of them on trunk. [INFO] org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT [INFO] +- io.netty:netty:jar:3.5.0.Final:compile [INFO] +- org.apache.zookeeper:zookeeper:jar:3.4.3:compile [INFO] | \- org.jboss.netty:netty:jar:3.2.2.Final:compile [INFO] org.apache.hbase:hbase-hadoop2-compat:jar:0.95-SNAPSHOT [INFO] +- org.apache.hadoop:hadoop-client:jar:2.0.2-alpha:compile [INFO] | +- org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.0.2-alpha:compile [INFO] | | \- org.jboss.netty:netty:jar:3.2.4.Final:compile The attached patch: - fixes this for the hadoop 1 profile - bumps the netty version to 3.5.9 - does not fix it for hadoop 2. I don't know why, but I haven't investigated: as it's still alpha, maybe they will change the version on the hadoop side anyway. Tests are ok. I haven't really investigated the differences between netty 3.2 and 3.5. A quick search seems to say it's ok, but don't hesitate to raise a warning... -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7124) typo in pom.xml with exlude, no definition of test.exclude.pattern
[ https://issues.apache.org/jira/browse/HBASE-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496264#comment-13496264 ] Li Ping Zhang commented on HBASE-7124: -- Yes, Stack, I can attach it here. I have generated a diff patch for 0.94 branch and trunk, need I provide it? By the way, can I have the honor to be the assignee to fix this issue? Thanks! typo in pom.xml with exlude, no definition of test.exclude.pattern -- Key: HBASE-7124 URL: https://issues.apache.org/jira/browse/HBASE-7124 Project: HBase Issue Type: Bug Affects Versions: 0.94.0 Reporter: Li Ping Zhang Priority: Minor Labels: patch Original Estimate: 4h Remaining Estimate: 4h There is a typo in pom.xml with exlude, and there is no definition of test.exclude.pattern. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7119) org.apache.hadoop.hbase.io.TestHeapSize failed with testNativeSizes unit test
[ https://issues.apache.org/jira/browse/HBASE-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496274#comment-13496274 ] Li Ping Zhang commented on HBASE-7119: -- Yes, Stack and Jimmy, it is a failure when running the UT with a non-SUN JDK (like IBM's). This issue is due to different implementations of ConcurrentHashMap and ArrayList across JVM vendors (e.g. IBM JDK vs. SUN JDK). The solution is to modify src/main/java/org/apache/hadoop/hbase/util/ClassSize.java, setting CONCURRENT_HASHMAP and ARRAYLIST to the right size for the detected JVM vendor. I have fixed it with a patch, tested it with the UT successfully, and also run the full UT suite to ensure the patch doesn't cause any new issues. I can work on fixing this JIRA if needed. org.apache.hadoop.hbase.io.TestHeapSize failed with testNativeSizes unit test - Key: HBASE-7119 URL: https://issues.apache.org/jira/browse/HBASE-7119 Project: HBase Issue Type: Bug Affects Versions: 0.90.4, 0.90.5, 0.92.0, 0.94.0 Environment: RHEL 5.3, open JDK 1.6 Reporter: Li Ping Zhang Labels: patch Original Estimate: 24h Remaining Estimate: 24h Running org.apache.hadoop.hbase.io.TestHeapSize Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.068 sec FAILURE! testNativeSizes(org.apache.hadoop.hbase.io.TestHeapSize) Time elapsed: 0.01 sec FAILURE! junit.framework.AssertionFailedError: expected:64 but was:56 at junit.framework.Assert.fail(Assert.java:47) at junit.framework.Assert.failNotEquals(Assert.java:283) at junit.framework.Assert.assertEquals(Assert.java:64) at junit.framework.Assert.assertEquals(Assert.java:130) at junit.framework.Assert.assertEquals(Assert.java:136) at org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes(TestHeapSize.java:75) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
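The fix direction described in the comment can be sketched like this; the vendor check and the size constants are illustrative stand-ins, not the values from the actual patch:

```java
public class ClassSizeSketch {
    // Detect a non-Sun JVM through the vendor system property; IBM's JDK
    // reports a vendor string containing "IBM".
    private static final boolean IBM_VM =
        System.getProperty("java.vm.vendor", "").contains("IBM");

    // Illustrative fixed overheads: a real ClassSize would derive or measure
    // these per JVM rather than hard-code a single vendor's layout, which is
    // what makes TestHeapSize fail on other JDKs.
    public static final int CONCURRENT_HASHMAP = IBM_VM ? 56 : 64;
    public static final int ARRAYLIST = IBM_VM ? 40 : 44;

    public static void main(String[] args) {
        System.out.println("vm.vendor=" + System.getProperty("java.vm.vendor"));
        System.out.println("CONCURRENT_HASHMAP=" + CONCURRENT_HASHMAP);
        System.out.println("ARRAYLIST=" + ARRAYLIST);
    }
}
```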
[jira] [Commented] (HBASE-7124) typo in pom.xml with exlude, no definition of test.exclude.pattern
[ https://issues.apache.org/jira/browse/HBASE-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496302#comment-13496302 ] Jesse Yates commented on HBASE-7124: [~michelle] yeah, you can attach them here. You should be able to make yourself the assignee ([~saint@gmail.com] - I can't seem to get Li Ping Zhang's name to come up here, you have any luck?) typo in pom.xml with exlude, no definition of test.exclude.pattern -- Key: HBASE-7124 URL: https://issues.apache.org/jira/browse/HBASE-7124 Project: HBase Issue Type: Bug Affects Versions: 0.94.0 Reporter: Li Ping Zhang Priority: Minor Labels: patch Original Estimate: 4h Remaining Estimate: 4h There is a typo in pom.xml with exlude, and there is no definition of test.exclude.pattern. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7124) typo in pom.xml with exlude, no definition of test.exclude.pattern
[ https://issues.apache.org/jira/browse/HBASE-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496311#comment-13496311 ] Li Ping Zhang commented on HBASE-7124: -- That's cool, Jesse! Stack, can you help me out with the assignee? Thanks in advance!:-) typo in pom.xml with exlude, no definition of test.exclude.pattern -- Key: HBASE-7124 URL: https://issues.apache.org/jira/browse/HBASE-7124 Project: HBase Issue Type: Bug Affects Versions: 0.94.0 Reporter: Li Ping Zhang Priority: Minor Labels: patch Original Estimate: 4h Remaining Estimate: 4h There is a typo in pom.xml with exlude, and there is no definition of test.exclude.pattern. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7120) hbase-daemon.sh (start) missing necessary check when writing pid and log files
[ https://issues.apache.org/jira/browse/HBASE-7120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Li Ping Zhang updated HBASE-7120: - Description: $HBASE_HOME/bin/hbase-daemon.sh exit code is Zero, when running hbase-daemon.sh failed with start, which doesn’t do required command exit code check, it's better to do necessary check when writing pid and log files. was: $HBASE_HOME/bin/hbase-daemon.sh exit code is Zero, when runing hbase-daemon.sh failed with start, which doesn’t do required command exit code chek, it's better to do necessary check when writing pid and log files. hbase-daemon.sh (start) missing necessary check when writing pid and log files -- Key: HBASE-7120 URL: https://issues.apache.org/jira/browse/HBASE-7120 Project: HBase Issue Type: Bug Affects Versions: 0.94.0 Environment: RHEL 5.3, open JDK 1.6 Reporter: Li Ping Zhang Labels: patch Original Estimate: 48h Remaining Estimate: 48h $HBASE_HOME/bin/hbase-daemon.sh exit code is Zero, when running hbase-daemon.sh failed with start, which doesn’t do required command exit code check, it's better to do necessary check when writing pid and log files. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
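A minimal sketch of the check the issue asks for, using a simplified stand-in for the real script (the variable names, paths, and the trivial start command are placeholders, not the actual hbase-daemon.sh internals): write the pid file only after the start command reports success, and propagate its exit code otherwise.

```shell
#!/bin/sh
# Placeholder start command; in the real script this is the java launch.
start_cmd="true"
pidfile="${TMPDIR:-/tmp}/demo-daemon.pid"
logfile="${TMPDIR:-/tmp}/demo-daemon.log"

if $start_cmd > "$logfile" 2>&1; then
  # Only record the pid once the command has actually succeeded.
  echo $$ > "$pidfile"
  echo "started, pid file: $pidfile"
else
  status=$?
  echo "start failed with exit code $status; pid file not written" >&2
  exit "$status"
fi
```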
[jira] [Commented] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496323#comment-13496323 ] Ted Yu commented on HBASE-7156: --- {code} +if (cmd.equals("-D")) { + String[] keyval = args[++i].split("=", 2); {code} Consider checking that keyval.length is 2. Otherwise patch looks good. Add Data Block Encoding and -D opts to Performance Evaluation - Key: HBASE-7156 URL: https://issues.apache.org/jira/browse/HBASE-7156 Project: HBase Issue Type: Improvement Components: test Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Minor Attachments: HBASE-7156-v0.patch Add the ability to specify Data Block Encoding and other configuration options. --blockEncoding=TYPE -D property=value Example: hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
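Ted's suggestion amounts to validating the split before indexing into it. A hedged sketch (the method and message here are made up for illustration, not code from the patch):

```java
public class DashDParse {
    // Validate that a "-D" argument really has the key=value shape.
    // The limit of 2 keeps any further '=' characters inside the value.
    static String[] parseProperty(String arg) {
        String[] keyval = arg.split("=", 2);
        if (keyval.length != 2) {
            throw new IllegalArgumentException(
                "-D expects key=value but got: " + arg);
        }
        return keyval;
    }

    public static void main(String[] args) {
        String[] kv = parseProperty("hbase.client.scanner.caching=100");
        System.out.println(kv[0] + " = " + kv[1]);
        try {
            parseProperty("noEqualsSign"); // length would be 1 without the check
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```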
[jira] [Commented] (HBASE-7120) hbase-daemon.sh (start) missing necessary check when writing pid and log files
[ https://issues.apache.org/jira/browse/HBASE-7120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496342#comment-13496342 ] Li Ping Zhang commented on HBASE-7120: -- Yes, Stack, I have the patch generated from 0.94 branch and trunk, can I attach it? hbase-daemon.sh (start) missing necessary check when writing pid and log files -- Key: HBASE-7120 URL: https://issues.apache.org/jira/browse/HBASE-7120 Project: HBase Issue Type: Bug Affects Versions: 0.94.0 Environment: RHEL 5.3, open JDK 1.6 Reporter: Li Ping Zhang Labels: patch Original Estimate: 48h Remaining Estimate: 48h $HBASE_HOME/bin/hbase-daemon.sh exit code is Zero, when running hbase-daemon.sh failed with start, which doesn’t do required command exit code check, it's better to do necessary check when writing pid and log files. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6356) printStackTrace in FSUtils
[ https://issues.apache.org/jira/browse/HBASE-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] nkeywal updated HBASE-6356: --- Assignee: nkeywal printStackTrace in FSUtils -- Key: HBASE-6356 URL: https://issues.apache.org/jira/browse/HBASE-6356 Project: HBase Issue Type: Bug Components: Client, master, regionserver Affects Versions: 0.96.0 Reporter: nkeywal Assignee: nkeywal Priority: Trivial Labels: noob Attachments: HBASE-6356.patch This is bad... {noformat} public boolean accept(Path p) { boolean isValid = false; try { if (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(p.toString())) { isValid = false; } else { isValid = this.fs.getFileStatus(p).isDir(); } } catch (IOException e) { e.printStackTrace(); } return isValid; } } {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6356) printStackTrace in FSUtils
[ https://issues.apache.org/jira/browse/HBASE-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] nkeywal updated HBASE-6356: --- Assignee: Gustavo Anatoly (was: nkeywal) printStackTrace in FSUtils -- Key: HBASE-6356 URL: https://issues.apache.org/jira/browse/HBASE-6356 Project: HBase Issue Type: Bug Components: Client, master, regionserver Affects Versions: 0.96.0 Reporter: nkeywal Assignee: Gustavo Anatoly Priority: Trivial Labels: noob Attachments: HBASE-6356.patch This is bad... {noformat} public boolean accept(Path p) { boolean isValid = false; try { if (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(p.toString())) { isValid = false; } else { isValid = this.fs.getFileStatus(p).isDir(); } } catch (IOException e) { e.printStackTrace(); } return isValid; } } {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6356) printStackTrace in FSUtils
[ https://issues.apache.org/jira/browse/HBASE-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496361#comment-13496361 ] nkeywal commented on HBASE-6356: Committed and jira assigned to you, Gustavo. Thanks for the patch!. printStackTrace in FSUtils -- Key: HBASE-6356 URL: https://issues.apache.org/jira/browse/HBASE-6356 Project: HBase Issue Type: Bug Components: Client, master, regionserver Affects Versions: 0.96.0 Reporter: nkeywal Assignee: Gustavo Anatoly Priority: Trivial Labels: noob Attachments: HBASE-6356.patch This is bad... {noformat} public boolean accept(Path p) { boolean isValid = false; try { if (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(p.toString())) { isValid = false; } else { isValid = this.fs.getFileStatus(p).isDir(); } } catch (IOException e) { e.printStackTrace(); } return isValid; } } {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
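The "this is bad" part of the snippet above is the bare e.printStackTrace() call, which bypasses the logging framework. The usual fix is to route the exception through a logger; the sketch below uses java.util.logging to stay self-contained (HBase itself uses commons-logging), and a boolean flag stands in for the filesystem lookup:

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class AcceptFilterSketch {
    private static final Logger LOG = Logger.getLogger("FSUtilsSketch");

    // lookupThrows simulates fs.getFileStatus(p) failing with an IOException.
    static boolean accept(boolean lookupThrows) {
        try {
            if (lookupThrows) {
                throw new IOException("simulated getFileStatus failure");
            }
            return true; // stands in for fs.getFileStatus(p).isDir()
        } catch (IOException e) {
            // Log with context and the stack trace instead of printStackTrace().
            LOG.log(Level.WARNING, "skipping path after filesystem error", e);
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(accept(false)); // accepted
        System.out.println(accept(true));  // rejected, with a logged warning
    }
}
```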
[jira] [Updated] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-7156: --- Attachment: HBASE-7156-v1.patch Add Data Block Encoding and -D opts to Performance Evaluation - Key: HBASE-7156 URL: https://issues.apache.org/jira/browse/HBASE-7156 Project: HBase Issue Type: Improvement Components: test Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Minor Attachments: HBASE-7156-v0.patch, HBASE-7156-v1.patch Add the ability to specify Data Block Encoding and other configuration options. --blockEncoding=TYPE -D property=value Example: hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7152) testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally
[ https://issues.apache.org/jira/browse/HBASE-7152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7152: --- Resolution: Fixed Fix Version/s: 0.96.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Integrated into trunk. Thanks Stack for the review. testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally --- Key: HBASE-7152 URL: https://issues.apache.org/jira/browse/HBASE-7152 Project: HBase Issue Type: Test Components: test Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Minor Fix For: 0.96.0 Attachments: trunk-7152.patch {noformat} java.lang.Exception: test timed out after 18 milliseconds at java.lang.Throwable.fillInStackTrace(Native Method) at java.lang.Throwable.init(Throwable.java:181) at org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:253) at org.apache.log4j.helpers.PatternParser$ClassNamePatternConverter.getFullyQualifiedName(PatternParser.java:555) at org.apache.log4j.helpers.PatternParser$NamedPatternConverter.convert(PatternParser.java:528) at org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65) at org.apache.log4j.PatternLayout.format(PatternLayout.java:506) at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310) at org.apache.log4j.WriterAppender.append(WriterAppender.java:162) at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251) at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66) at org.apache.log4j.Category.callAppenders(Category.java:206) at org.apache.log4j.Category.forcedLog(Category.java:391) at org.apache.log4j.Category.log(Category.java:856) at org.apache.commons.logging.impl.Log4JLogger.debug(Log4JLogger.java:188) at org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:407) at org.apache.hadoop.hbase.MiniHBaseCluster.join(MiniHBaseCluster.java:408) at 
org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:599) at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:573) at org.apache.hadoop.hbase.master.TestMasterFailover.testShouldCheckMasterFailOverWhenMETAIsInOpenedState(TestMasterFailover.java:113) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62) {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-3869) RegionServer metrics - add read and write byte-transfer statistics
[ https://issues.apache.org/jira/browse/HBASE-3869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-3869: - Component/s: metrics RegionServer metrics - add read and write byte-transfer statistics -- Key: HBASE-3869 URL: https://issues.apache.org/jira/browse/HBASE-3869 Project: HBase Issue Type: Improvement Components: metrics Reporter: Doug Meil Priority: Minor It would be beneficial to have the data transfer weight of reads and writes per region server. HBASE-3647 split out the read/write metric requests from the uber-request metric - which is great. But there isn't a notion of data transfer weight and this is why it's important: the read metrics are effectively RPC-based. Thus, with a scan caching of 500, there is 1 RPC call every 500 rows read (and 1 'read' metric increment). And this metric doesn't indicate how much data is being transferred (e.g., a read with 50 attributes will probably cost a lot more than a read with 5 attributes). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6787) Convert RowProcessorProtocol to protocol buffer service
[ https://issues.apache.org/jira/browse/HBASE-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496394#comment-13496394 ] Gary Helmling commented on HBASE-6787: -- Sorry for the delay in the review. Overall this looks good to me, but I'm still trying to think through the client usage. The one thing eating at me is that between this and the ColumnInterpreter stuff for AggregateService, we seem to be working towards a common interface for the user code serialization/deserialization bits: Request ser/de: {code:java} public ByteString rowProcessorSpecificData() throws IOException {} public void initialize(ByteString bytes) throws IOException {} {code} Response ser/de: {code:java} public ByteString getProtoForResult(T t) {} public T parseResponseAsResultType(byte[] response) {} {code} Given the very similar pattern here, does it make sense to try to factor these out into a common interface that can be shared? I'm not sure it's really necessary to remain serialization agnostic for the user code, either. Would it be simpler if we just required PB serialization for these bits? RowProcessor could represent its serialization as a message: {code:java} Message getRequestData() throws IOException; void initialize(Message request) throws IOException; {code} (These could also be parameterized for additional type safety.) And the response type could similarly be a message: {code:java} T getResult(); // where T extends Message {code} Clients would still get a typed response that they could extract values from without the additional {{getProtoForResult(T t)}} and {{parseResponseAsResultType(byte[] bytes)}} methods. Does this make sense? Of course you could still do the PB-conversions yourself for PB serialization with the original patch. But I think we're adding some complexity to remain serialization agnostic in this case. Is the need there to make the trade off worth it? 
Convert RowProcessorProtocol to protocol buffer service --- Key: HBASE-6787 URL: https://issues.apache.org/jira/browse/HBASE-6787 Project: HBase Issue Type: Sub-task Components: Coprocessors Reporter: Gary Helmling Assignee: Devaraj Das Fix For: 0.96.0 Attachments: 6787-1.patch, 6787-2.patch With coprocessor endpoints now exposed as protobuf defined services, we should convert over all of our built-in endpoints to PB services. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496396#comment-13496396 ] Hadoop QA commented on HBASE-7156: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12553336/HBASE-7156-v1.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 93 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 17 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.client.TestShell Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3327//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3327//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3327//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3327//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3327//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3327//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3327//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3327//console This message is automatically generated. Add Data Block Encoding and -D opts to Performance Evaluation - Key: HBASE-7156 URL: https://issues.apache.org/jira/browse/HBASE-7156 Project: HBase Issue Type: Improvement Components: test Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Minor Attachments: HBASE-7156-v0.patch, HBASE-7156-v1.patch Add the ability to specify Data Block Encoding and other configuration options. --blockEncoding=TYPE -D property=value Example: hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496404#comment-13496404 ] Ted Yu commented on HBASE-7156: --- +1 on second patch. Add Data Block Encoding and -D opts to Performance Evaluation - Key: HBASE-7156 URL: https://issues.apache.org/jira/browse/HBASE-7156 Project: HBase Issue Type: Improvement Components: test Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Minor Attachments: HBASE-7156-v0.patch, HBASE-7156-v1.patch Add the ability to specify Data Block Encoding and other configuration options. --blockEncoding=TYPE -D property=value Example: hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7128) Reduce annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8
[ https://issues.apache.org/jira/browse/HBASE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496405#comment-13496405 ] Hudson commented on HBASE-7128: --- Integrated in HBase-TRUNK #3535 (See [https://builds.apache.org/job/HBase-TRUNK/3535/]) HBASE-7128 Reduce annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8 (Hiroshi) (Revision 1408758) Result = FAILURE tedyu : Files : * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/GroupingTableMap.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/GroupingTableMapper.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogUtil.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestPrefixFilter.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduce.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStore.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanner.java Reduce annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8 --- Key: HBASE-7128 URL: https://issues.apache.org/jira/browse/HBASE-7128 Project: HBase Issue Type: Improvement Reporter: Hiroshi Ikeda Priority: Trivial Fix For: 0.96.0 Attachments: 
HBASE-7128.patch, HBASE-7128-V2.patch There are some codes that catch UnsupportedEncodingException, and log or ignore it because Java always supports UTF-8 (see the javadoc of Charset). The catch clauses are annoying, and they should be replaced by methods of Bytes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-7157) Report Metrics into an HBase table
Elliott Clark created HBASE-7157: Summary: Report Metrics into an HBase table Key: HBASE-7157 URL: https://issues.apache.org/jira/browse/HBASE-7157 Project: HBase Issue Type: Improvement Reporter: Elliott Clark Assignee: Elliott Clark Right now metrics are sent using ServerLoad and RegionLoad to the master. We should store those(and other) metrics in a table. Storing these metrics in a table will allow the LoadBalancer to have more context. In addition those metrics will be useful for the ui. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7157) Report Metrics into an HBase table
[ https://issues.apache.org/jira/browse/HBASE-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-7157: - Component/s: metrics Report Metrics into an HBase table -- Key: HBASE-7157 URL: https://issues.apache.org/jira/browse/HBASE-7157 Project: HBase Issue Type: Improvement Components: metrics Reporter: Elliott Clark Assignee: Elliott Clark Right now metrics are sent using ServerLoad and RegionLoad to the master. We should store those(and other) metrics in a table. Storing these metrics in a table will allow the LoadBalancer to have more context. In addition those metrics will be useful for the ui. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7157) Report Metrics into an HBase table
[ https://issues.apache.org/jira/browse/HBASE-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496431#comment-13496431 ] Jimmy Xiang commented on HBASE-7157: Could this table hook up with TSDB? Report Metrics into an HBase table -- Key: HBASE-7157 URL: https://issues.apache.org/jira/browse/HBASE-7157 Project: HBase Issue Type: Improvement Components: metrics Reporter: Elliott Clark Assignee: Elliott Clark Right now metrics are sent using ServerLoad and RegionLoad to the master. We should store those(and other) metrics in a table. Storing these metrics in a table will allow the LoadBalancer to have more context. In addition those metrics will be useful for the ui. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496442#comment-13496442 ] Himanshu Vashishtha commented on HBASE-7156: nit: the -D option is usually prefixed to the option key, for example: -Dhbase.client.scanner.caching=100, etc. Why not be consistent here too? Otherwise looks good to me. Add Data Block Encoding and -D opts to Performance Evaluation - Key: HBASE-7156 URL: https://issues.apache.org/jira/browse/HBASE-7156 Project: HBase Issue Type: Improvement Components: test Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Minor Attachments: HBASE-7156-v0.patch, HBASE-7156-v1.patch Add the ability to specify Data Block Encoding and other configuration options. --blockEncoding=TYPE -D property=value Example: hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-4913) Per-CF compaction Via the Shell
[ https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-4913: - Attachment: 4913-addendum2.txt The addendum broke TestShell. Here's a 2nd addendum fixing that. Simple fix, will commit in the next few minutes unless I hear objections. Per-CF compaction Via the Shell --- Key: HBASE-4913 URL: https://issues.apache.org/jira/browse/HBASE-4913 Project: HBase Issue Type: Sub-task Components: Client, regionserver Reporter: Nicolas Spiegelberg Assignee: Mubarak Seyed Fix For: 0.94.3, 0.96.0 Attachments: 4913-addendum2.txt, HBASE-4913-94.patch, HBASE-4913-addendum.patch, HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell
[ https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496454#comment-13496454 ] Gregory Chanan commented on HBASE-4913: --- addendumv2 looks good. Necessary on 0.94 and 0.96? Per-CF compaction Via the Shell --- Key: HBASE-4913 URL: https://issues.apache.org/jira/browse/HBASE-4913 Project: HBase Issue Type: Sub-task Components: Client, regionserver Reporter: Nicolas Spiegelberg Assignee: Mubarak Seyed Fix For: 0.94.3, 0.96.0 Attachments: 4913-addendum2.txt, HBASE-4913-94.patch, HBASE-4913-addendum.patch, HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell
[ https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496457#comment-13496457 ] Lars Hofhansl commented on HBASE-4913: -- Yep. Committing now. Per-CF compaction Via the Shell --- Key: HBASE-4913 URL: https://issues.apache.org/jira/browse/HBASE-4913 Project: HBase Issue Type: Sub-task Components: Client, regionserver Reporter: Nicolas Spiegelberg Assignee: Mubarak Seyed Fix For: 0.94.3, 0.96.0 Attachments: 4913-addendum2.txt, HBASE-4913-94.patch, HBASE-4913-addendum.patch, HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell
[ https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496461#comment-13496461 ] Lars Hofhansl commented on HBASE-4913: -- Done. Found this when I tried to get a successful jenkins build for 0.94.3rc0 :) Per-CF compaction Via the Shell --- Key: HBASE-4913 URL: https://issues.apache.org/jira/browse/HBASE-4913 Project: HBase Issue Type: Sub-task Components: Client, regionserver Reporter: Nicolas Spiegelberg Assignee: Mubarak Seyed Fix For: 0.94.3, 0.96.0 Attachments: 4913-addendum2.txt, HBASE-4913-94.patch, HBASE-4913-addendum.patch, HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell
[ https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496463#comment-13496463 ] Gregory Chanan commented on HBASE-4913: --- Sorry about that. Per-CF compaction Via the Shell --- Key: HBASE-4913 URL: https://issues.apache.org/jira/browse/HBASE-4913 Project: HBase Issue Type: Sub-task Components: Client, regionserver Reporter: Nicolas Spiegelberg Assignee: Mubarak Seyed Fix For: 0.94.3, 0.96.0 Attachments: 4913-addendum2.txt, HBASE-4913-94.patch, HBASE-4913-addendum.patch, HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell
[ https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496468#comment-13496468 ] Lars Hofhansl commented on HBASE-4913: -- Heh. No problem. The shell tests are a bit obtuse. Per-CF compaction Via the Shell --- Key: HBASE-4913 URL: https://issues.apache.org/jira/browse/HBASE-4913 Project: HBase Issue Type: Sub-task Components: Client, regionserver Reporter: Nicolas Spiegelberg Assignee: Mubarak Seyed Fix For: 0.94.3, 0.96.0 Attachments: 4913-addendum2.txt, HBASE-4913-94.patch, HBASE-4913-addendum.patch, HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496499#comment-13496499 ] stack commented on HBASE-7156: -- [~mbertozzi] If PE implemented http://hadoop.apache.org/docs/current/api/org/apache/hadoop/util/Tool.html, you'd get the -D stuff and some other bits and pieces for free? As Himanshu says, this might not work the way folks expect: {code} +if (cmd.equals("-D")) { {code} Often they won't have the space between the '-D' and config. Add Data Block Encoding and -D opts to Performance Evaluation - Key: HBASE-7156 URL: https://issues.apache.org/jira/browse/HBASE-7156 Project: HBase Issue Type: Improvement Components: test Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Minor Attachments: HBASE-7156-v0.patch, HBASE-7156-v1.patch Add the ability to specify Data Block Encoding and other configuration options. --blockEncoding=TYPE -D property=value Example: hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
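The parsing concern above can be sketched in isolation. This is a hand-rolled, illustrative stand-in (class and method names are made up, not from the PE patch) that accepts both the "-D key=value" and "-Dkey=value" forms; implementing Hadoop's Tool interface would give PE equivalent handling for free via GenericOptionsParser instead of code like this:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: accept -D properties whether or not a space
// separates "-D" from "key=value".
public class DashDParser {
  public static Map<String, String> parse(String[] args) {
    Map<String, String> props = new HashMap<>();
    for (int i = 0; i < args.length; i++) {
      String kv = null;
      if (args[i].equals("-D") && i + 1 < args.length) {
        kv = args[++i];            // "-D key=value" form
      } else if (args[i].startsWith("-D")) {
        kv = args[i].substring(2); // "-Dkey=value" form
      }
      if (kv != null) {
        int eq = kv.indexOf('=');
        if (eq > 0) {
          props.put(kv.substring(0, eq), kv.substring(eq + 1));
        }
      }
    }
    return props;
  }

  public static void main(String[] args) {
    Map<String, String> p = parse(new String[] {
        "-D", "mapreduce.task.timeout=60000",
        "-Dhbase.client.scanner.caching=100" });
    System.out.println(p); // both forms end up in the same map
  }
}
```

Only the equals/startsWith ordering matters: checking the bare "-D" first keeps the spaced form from being misread as an empty key.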
[jira] [Commented] (HBASE-7157) Report Metrics into an HBase table
[ https://issues.apache.org/jira/browse/HBASE-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496502#comment-13496502 ] stack commented on HBASE-7157: -- [~jxiang] Yeah, would be cool if the tsdb-reader could run against this table...so could do nice queries. Benoit said it'd be cool having the same format. TSDB has nice compaction of the content every hour to make the metrics more compact but maybe we just TTL the stuff out before we need to do this -- unless the user changes the config to keep the metrics. Report Metrics into an HBase table -- Key: HBASE-7157 URL: https://issues.apache.org/jira/browse/HBASE-7157 Project: HBase Issue Type: Improvement Components: metrics Reporter: Elliott Clark Assignee: Elliott Clark Right now metrics are sent using ServerLoad and RegionLoad to the master. We should store those (and other) metrics in a table. Storing these metrics in a table will allow the LoadBalancer to have more context. In addition those metrics will be useful for the ui. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6356) printStackTrace in FSUtils
[ https://issues.apache.org/jira/browse/HBASE-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496511#comment-13496511 ] Gustavo Anatoly commented on HBASE-6356: Thank you, Nicolas, for your patience and your reviews. printStackTrace in FSUtils -- Key: HBASE-6356 URL: https://issues.apache.org/jira/browse/HBASE-6356 Project: HBase Issue Type: Bug Components: Client, master, regionserver Affects Versions: 0.96.0 Reporter: nkeywal Assignee: Gustavo Anatoly Priority: Trivial Labels: noob Attachments: HBASE-6356.patch This is bad...
{noformat}
public boolean accept(Path p) {
  boolean isValid = false;
  try {
    if (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(p.toString())) {
      isValid = false;
    } else {
      isValid = this.fs.getFileStatus(p).isDir();
    }
  } catch (IOException e) {
    e.printStackTrace();
  }
  return isValid;
}
}
{noformat}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
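The objectionable part of the filter above is the bare e.printStackTrace(). A hedged sketch of the replacement pattern, using java.util.logging in place of the commons-logging LOG that FSUtils actually uses, and a made-up lookup interface standing in for FileSystem.getFileStatus:

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative only: route the exception to the component's logger
// (with context) instead of dumping it to stderr.
public class DirFilterExample {
  private static final Logger LOG =
      Logger.getLogger(DirFilterExample.class.getName());

  // Stand-in for the FileSystem lookup done in the real filter.
  interface StatusLookup {
    boolean isDir(String path) throws IOException;
  }

  public static boolean accept(String path, StatusLookup fs) {
    try {
      return fs.isDir(path);
    } catch (IOException e) {
      // Logged with the path that failed; the caller still gets a
      // well-defined "not valid" answer.
      LOG.log(Level.WARNING, "An exception occurred while filtering " + path, e);
      return false;
    }
  }
}
```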
[jira] [Commented] (HBASE-6356) printStackTrace in FSUtils
[ https://issues.apache.org/jira/browse/HBASE-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496551#comment-13496551 ] Hudson commented on HBASE-6356: --- Integrated in HBase-TRUNK #3536 (See [https://builds.apache.org/job/HBase-TRUNK/3536/]) HBASE-6356 printStackTrace in FSUtils (Gustavo Anatoly) (Revision 1408851) Result = FAILURE nkeywal : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java printStackTrace in FSUtils -- Key: HBASE-6356 URL: https://issues.apache.org/jira/browse/HBASE-6356 Project: HBase Issue Type: Bug Components: Client, master, regionserver Affects Versions: 0.96.0 Reporter: nkeywal Assignee: Gustavo Anatoly Priority: Trivial Labels: noob Attachments: HBASE-6356.patch This is bad... {noformat} public boolean accept(Path p) { boolean isValid = false; try { if (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(p.toString())) { isValid = false; } else { isValid = this.fs.getFileStatus(p).isDir(); } } catch (IOException e) { e.printStackTrace(); } return isValid; } } {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7152) testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally
[ https://issues.apache.org/jira/browse/HBASE-7152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496553#comment-13496553 ] Hudson commented on HBASE-7152: --- Integrated in HBase-TRUNK #3536 (See [https://builds.apache.org/job/HBase-TRUNK/3536/]) HBASE-7152 testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally (Revision 1408854) Result = FAILURE jxiang : Files : * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java testShouldCheckMasterFailOverWhenMETAIsInOpenedState times out occasionally --- Key: HBASE-7152 URL: https://issues.apache.org/jira/browse/HBASE-7152 Project: HBase Issue Type: Test Components: test Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Minor Fix For: 0.96.0 Attachments: trunk-7152.patch {noformat} java.lang.Exception: test timed out after 18 milliseconds at java.lang.Throwable.fillInStackTrace(Native Method) at java.lang.Throwable.init(Throwable.java:181) at org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:253) at org.apache.log4j.helpers.PatternParser$ClassNamePatternConverter.getFullyQualifiedName(PatternParser.java:555) at org.apache.log4j.helpers.PatternParser$NamedPatternConverter.convert(PatternParser.java:528) at org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65) at org.apache.log4j.PatternLayout.format(PatternLayout.java:506) at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310) at org.apache.log4j.WriterAppender.append(WriterAppender.java:162) at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251) at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66) at org.apache.log4j.Category.callAppenders(Category.java:206) at org.apache.log4j.Category.forcedLog(Category.java:391) at org.apache.log4j.Category.log(Category.java:856) at 
org.apache.commons.logging.impl.Log4JLogger.debug(Log4JLogger.java:188) at org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:407) at org.apache.hadoop.hbase.MiniHBaseCluster.join(MiniHBaseCluster.java:408) at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:599) at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:573) at org.apache.hadoop.hbase.master.TestMasterFailover.testShouldCheckMasterFailOverWhenMETAIsInOpenedState(TestMasterFailover.java:113) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62) {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell
[ https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496552#comment-13496552 ] Hudson commented on HBASE-4913: --- Integrated in HBase-TRUNK #3536 (See [https://builds.apache.org/job/HBase-TRUNK/3536/]) HBASE-4913 Addendum: Fix TestShell (Revision 1408902) Result = FAILURE larsh : Files : * /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb Per-CF compaction Via the Shell --- Key: HBASE-4913 URL: https://issues.apache.org/jira/browse/HBASE-4913 Project: HBase Issue Type: Sub-task Components: Client, regionserver Reporter: Nicolas Spiegelberg Assignee: Mubarak Seyed Fix For: 0.94.3, 0.96.0 Attachments: 4913-addendum2.txt, HBASE-4913-94.patch, HBASE-4913-addendum.patch, HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7062) Move HLog stats to metrics 2
[ https://issues.apache.org/jira/browse/HBASE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-7062: - Attachment: HBASE-7062-4.patch Version with the small changes that Ted found. I'll commit soon unless anyone raises a flag. Move HLog stats to metrics 2 Key: HBASE-7062 URL: https://issues.apache.org/jira/browse/HBASE-7062 Project: HBase Issue Type: Sub-task Components: metrics Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: Elliott Clark Priority: Critical Fix For: 0.96.0 Attachments: HBASE-7062-1.patch, HBASE-7062-2.patch, HBASE-7062-3.patch, HBASE-7062-4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496569#comment-13496569 ] Ted Yu commented on HBASE-7156: --- The space after '-D' should be removed, right ? Add Data Block Encoding and -D opts to Performance Evaluation - Key: HBASE-7156 URL: https://issues.apache.org/jira/browse/HBASE-7156 Project: HBase Issue Type: Improvement Components: test Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Minor Attachments: HBASE-7156-v0.patch, HBASE-7156-v1.patch, HBASE-7156-v2.patch Add the ability to specify Data Block Encoding and other configuration options. --blockEncoding=TYPE -D property=value Example: hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-7156: --- Attachment: HBASE-7156-v3.patch Add Data Block Encoding and -D opts to Performance Evaluation - Key: HBASE-7156 URL: https://issues.apache.org/jira/browse/HBASE-7156 Project: HBase Issue Type: Improvement Components: test Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Minor Attachments: HBASE-7156-v0.patch, HBASE-7156-v1.patch, HBASE-7156-v2.patch, HBASE-7156-v3.patch Add the ability to specify Data Block Encoding and other configuration options. --blockEncoding=TYPE -D property=value Example: hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496571#comment-13496571 ] Matteo Bertozzi commented on HBASE-7156: {quote}The space after '-D' should be removed, right?{quote} both syntaxes are valid... -D property=value or -Dproperty=value if you look at the Export tool, the help is mixed with space and not. I find the space one much cleaner to read, but if we go for the non-space one as default I'm ok with it. Add Data Block Encoding and -D opts to Performance Evaluation - Key: HBASE-7156 URL: https://issues.apache.org/jira/browse/HBASE-7156 Project: HBase Issue Type: Improvement Components: test Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Minor Attachments: HBASE-7156-v0.patch, HBASE-7156-v1.patch, HBASE-7156-v2.patch, HBASE-7156-v3.patch Add the ability to specify Data Block Encoding and other configuration options. --blockEncoding=TYPE -D property=value Example: hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6055) Snapshots in HBase 0.96
[ https://issues.apache.org/jira/browse/HBASE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496573#comment-13496573 ] Matteo Bertozzi commented on HBASE-6055: Current offline snapshot status: the code has been up for review for a while now, and everything is at least +1; we're missing a couple of reviews to merge it into the snapshot branch.
|| Jira || Description || Status || Review Link ||
| HBASE-5547 | HFile Archiver | trunk | |
| HBASE-6610 | HFileLink hardlink alternative | trunk | |
| HBASE-6571 | Error handling framework | snapshot branch | [review board|https://reviews.apache.org/r/6589/] |
| HBASE-6765 | Take a Snapshot Interface | snapshot branch | [review board|https://reviews.apache.org/r/7072/] |
| HBASE-6230 | Snapshot Reference Utils | snapshot branch | [review board|https://reviews.apache.org/r/7788/] |
| HBASE-6353 | Snapshot Shell | snapshot-branch | [review board|https://reviews.apache.org/r/7583/] |
| HBASE-6863 | Offline Snapshot | review +2 | [review board|https://reviews.apache.org/r/7608/] |
| HBASE-6865 | Snapshot cleaner | review +2 | [review board|https://reviews.apache.org/r/7627] |
| HBASE-6777 | Restore Interface | review +1 | [review board|https://reviews.apache.org/r/7096] |
| HBASE-6230 | Restore Snapshot | review +1 | [review board|https://reviews.apache.org/r/5963/] |
| HBASE-6802 | Export Snapshot | review +1 | [review board|https://reviews.apache.org/r/7137/] |
The *reference snapshot branch* is: https://github.com/jyates/hbase/tree/snapshots The complete dev branch with all commits above is: https://github.com/matteobertozzi/hbase/commits/offline-snapshot-review-v3 Snapshots in HBase 0.96 --- Key: HBASE-6055 URL: https://issues.apache.org/jira/browse/HBASE-6055 Project: HBase Issue Type: New Feature Components: Client, master, regionserver, snapshots, Zookeeper Reporter: Jesse Yates Assignee: Jesse Yates Fix For: hbase-6055, 0.96.0 Attachments: Snapshots in HBase.docx Continuation of
HBASE-50 for the current trunk. Since the implementation has drastically changed, opening as a new ticket. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6055) Snapshots in HBase 0.96
[ https://issues.apache.org/jira/browse/HBASE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496582#comment-13496582 ] Jonathan Hsieh commented on HBASE-6055: --- I think we need to get all those committed to the branch, and then put up a doc about the features, what it provides, how it works, and why we chose particular semantics. We also need to document its current caveats. We'll do some testing and probably need to do a little rebasing before we can consider a trunk merge. We'll put a flag up here when we are ready but as a heads up, we'd like 1-2 more committers to review. (Currently it is Me, Jesse, and Ted in some places). Snapshots in HBase 0.96 --- Key: HBASE-6055 URL: https://issues.apache.org/jira/browse/HBASE-6055 Project: HBase Issue Type: New Feature Components: Client, master, regionserver, snapshots, Zookeeper Reporter: Jesse Yates Assignee: Jesse Yates Fix For: hbase-6055, 0.96.0 Attachments: Snapshots in HBase.docx Continuation of HBASE-50 for the current trunk. Since the implementation has drastically changed, opening as a new ticket. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell
[ https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496585#comment-13496585 ] Hudson commented on HBASE-4913: --- Integrated in HBase-0.94 #585 (See [https://builds.apache.org/job/HBase-0.94/585/]) HBASE-4913 Addendum: Fix TestShell (Revision 1408904) Result = FAILURE larsh : Files : * /hbase/branches/0.94/src/main/ruby/hbase/admin.rb Per-CF compaction Via the Shell --- Key: HBASE-4913 URL: https://issues.apache.org/jira/browse/HBASE-4913 Project: HBase Issue Type: Sub-task Components: Client, regionserver Reporter: Nicolas Spiegelberg Assignee: Mubarak Seyed Fix For: 0.94.3, 0.96.0 Attachments: 4913-addendum2.txt, HBASE-4913-94.patch, HBASE-4913-addendum.patch, HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7062) Move HLog stats to metrics 2
[ https://issues.apache.org/jira/browse/HBASE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496592#comment-13496592 ] stack commented on HBASE-7062: -- So, a new bean for hlog. Nice that it slots under regionserver. Call the bean 'wal' instead? (Change this too: Metrics about HBase RegionServer HLog;) Looks like you should rename some classes too... MetricsSourceHLogImpl, MetricsHLog... etc. HLog is an abomination of a name. WAL tells you more what it is about where HLog says nought, worse, is misleading even. Are those bytes or megs for size or what? If I hover over the metric will it tell me -- it doesn't look like the description identifies the unit size? They are kinda beautiful. Your BaseSource in wal class should reference the other BaseSource stuff so it can be seen that there is a pattern going on here. slow log time should be configurable? i.e. +if (time > 1000) {... can do that in another issue. Patch looks good to me otherwise. Move HLog stats to metrics 2 Key: HBASE-7062 URL: https://issues.apache.org/jira/browse/HBASE-7062 Project: HBase Issue Type: Sub-task Components: metrics Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: Elliott Clark Priority: Critical Fix For: 0.96.0 Attachments: HBASE-7062-1.patch, HBASE-7062-2.patch, HBASE-7062-3.patch, HBASE-7062-4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
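The "slow log time should be configurable" suggestion amounts to lifting the hardcoded 1000 into a config-driven threshold. A hedged sketch, with a made-up property name (the real key, if one was ever added, may differ) and java.util.Properties standing in for the Hadoop Configuration object:

```java
import java.util.Properties;

// Hypothetical sketch: replace a hardcoded "time > 1000" slow-op check
// with a threshold read from configuration, falling back to the old value.
public class SlowOpThreshold {
  // Property name is invented for illustration.
  public static final String KEY = "hbase.regionserver.hlog.slowsync.ms";
  public static final long DEFAULT_MS = 1000L;

  private final long thresholdMs;

  public SlowOpThreshold(Properties conf) {
    this.thresholdMs =
        Long.parseLong(conf.getProperty(KEY, String.valueOf(DEFAULT_MS)));
  }

  /** True when an operation took long enough to be reported as slow. */
  public boolean isSlow(long elapsedMs) {
    return elapsedMs > thresholdMs; // strict >, matching the original check
  }
}
```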
[jira] [Commented] (HBASE-7120) hbase-daemon.sh (start) missing necessary check when writing pid and log files
[ https://issues.apache.org/jira/browse/HBASE-7120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496593#comment-13496593 ] stack commented on HBASE-7120: -- Add your patch here [~michelle]? hbase-daemon.sh (start) missing necessary check when writing pid and log files -- Key: HBASE-7120 URL: https://issues.apache.org/jira/browse/HBASE-7120 Project: HBase Issue Type: Bug Affects Versions: 0.94.0 Environment: RHEL 5.3, open JDK 1.6 Reporter: Li Ping Zhang Labels: patch Original Estimate: 48h Remaining Estimate: 48h $HBASE_HOME/bin/hbase-daemon.sh exit code is Zero, when runing hbase-daemon.sh failed with start, which doesn’t do required command exit code check, it's better to do necessary check when writing pid and log files. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (HBASE-7124) typo in pom.xml with exlude, no definition of test.exclude.pattern
[ https://issues.apache.org/jira/browse/HBASE-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack reassigned HBASE-7124: Assignee: Li Ping Zhang Assigning Li Ping Zhang. Added you as an HBase Contributor. Jesse, added you as an administrator so you should be able to do this going forward. typo in pom.xml with exlude, no definition of test.exclude.pattern -- Key: HBASE-7124 URL: https://issues.apache.org/jira/browse/HBASE-7124 Project: HBase Issue Type: Bug Affects Versions: 0.94.0 Reporter: Li Ping Zhang Assignee: Li Ping Zhang Priority: Minor Labels: patch Original Estimate: 4h Remaining Estimate: 4h There is a typo in pom.xml with exlude, and there is no definition of test.exclude.pattern. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7155) Excessive usage of InterruptedException where it can't be thrown
[ https://issues.apache.org/jira/browse/HBASE-7155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496597#comment-13496597 ] stack commented on HBASE-7155: -- For sure we are not suppressing the IE at a lower level? If not, this change would be good for hbase 0.96/trunk. Thanks Daniel. Excessive usage of InterruptedException where it can't be thrown Key: HBASE-7155 URL: https://issues.apache.org/jira/browse/HBASE-7155 Project: HBase Issue Type: Bug Reporter: Daniel Gómez Ferro Assignee: Daniel Gómez Ferro Attachments: HBASE-7155.patch RootRegionTracker.getRootRegionLocation() declares that it can throw an InterruptedException, but it can't. This exception is rethrown by many other functions reaching the HBaseAdmin API. If we remove the throws statement from the HBaseAdmin API, libraries already compiled will work fine, but if the user is trying to catch an InterruptedException around one of those methods the compiler will complain. Should we clean this up? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
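The compatibility point in the description can be shown with a tiny sketch (all names here are invented, not the real RootRegionTracker API): a method may declare a checked exception it never actually throws, and dropping that clause changes what callers must write.

```java
// Illustrative before/after of removing a spurious "throws" clause.
public class ThrowsCleanup {
  // Before: every caller must catch an InterruptedException that
  // can never occur.
  public static String getLocationBefore() throws InterruptedException {
    return "root-region-location";
  }

  // After: already-compiled callers keep working (the throws clause is
  // not part of the JVM-level call linkage), but source that catches
  // InterruptedException around this call stops compiling, since the
  // compiler now rejects catching an exception that cannot be thrown.
  public static String getLocationAfter() {
    return "root-region-location";
  }
}
```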
[jira] [Commented] (HBASE-6929) Publish Hbase 0.94 artifacts build against hadoop-2.0
[ https://issues.apache.org/jira/browse/HBASE-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496598#comment-13496598 ] Hari Shreedharan commented on HBASE-6929: - What is the status of this issue? Right now, even Flume requires that the user builds hbase locally before building Flume. We'd like to avoid this, so it'd be great if this is committed soon. Thanks! Publish Hbase 0.94 artifacts build against hadoop-2.0 - Key: HBASE-6929 URL: https://issues.apache.org/jira/browse/HBASE-6929 Project: HBase Issue Type: Task Components: build Affects Versions: 0.94.2 Reporter: Enis Soztutar Attachments: 6929.txt, hbase-6929_v2.patch Downstream projects (flume, hive, pig, etc) depend on hbase, but since the hbase binaries built with hadoop-2.0 are not pushed to maven, they cannot depend on them. AFAIK, hadoop 1 and 2 are not binary compatible, so we should also push hbase jars built with the hadoop2.0 profile into maven, possibly with a version string like 0.94.2-hadoop2.0. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
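Once such artifacts were published, a downstream project could pin the hadoop-2.0 build explicitly. A hedged sketch of what a consumer's pom.xml dependency might look like, assuming the 0.94.2-hadoop2.0 version-string scheme proposed in the description is what actually gets pushed:

```xml
<!-- Hypothetical: assumes an hbase artifact published under the
     proposed hadoop2-suffixed version string. -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.2-hadoop2.0</version>
</dependency>
```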
[jira] [Commented] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496600#comment-13496600 ] Hadoop QA commented on HBASE-7156: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12553366/HBASE-7156-v2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 93 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 17 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3328//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3328//console This message is automatically generated. Add Data Block Encoding and -D opts to Performance Evaluation - Key: HBASE-7156 URL: https://issues.apache.org/jira/browse/HBASE-7156 Project: HBase Issue Type: Improvement Components: test Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Minor Attachments: HBASE-7156-v0.patch, HBASE-7156-v1.patch, HBASE-7156-v2.patch, HBASE-7156-v3.patch Add the ability to specify Data Block Encoding and other configuration options. --blockEncoding=TYPE -D property=value Example: hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7062) Move HLog stats to metrics 2
[ https://issues.apache.org/jira/browse/HBASE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496613#comment-13496613 ] Hadoop QA commented on HBASE-7062: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12553370/HBASE-7062-4.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 9 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 93 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 17 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.util.TestHBaseFsck Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3329//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3329//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3329//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3329//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3329//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3329//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3329//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3329//console This message is automatically generated. Move HLog stats to metrics 2 Key: HBASE-7062 URL: https://issues.apache.org/jira/browse/HBASE-7062 Project: HBase Issue Type: Sub-task Components: metrics Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: Elliott Clark Priority: Critical Fix For: 0.96.0 Attachments: HBASE-7062-1.patch, HBASE-7062-2.patch, HBASE-7062-3.patch, HBASE-7062-4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7062) Move HLog stats to metrics 2
[ https://issues.apache.org/jira/browse/HBASE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496617#comment-13496617 ] Elliott Clark commented on HBASE-7062: -- bq.Your BaseSource in wal class should reference the other BaseSource stuff so it can be seen that there is a pattern going on here. The BaseSource is an interface that is shared between all sources. The BaseSourceImpl is a class that all the sources in a single compat jar derive from. Is there something else that it needs? Move HLog stats to metrics 2 Key: HBASE-7062 URL: https://issues.apache.org/jira/browse/HBASE-7062 Project: HBase Issue Type: Sub-task Components: metrics Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: Elliott Clark Priority: Critical Fix For: 0.96.0 Attachments: HBASE-7062-1.patch, HBASE-7062-2.patch, HBASE-7062-3.patch, HBASE-7062-4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
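The BaseSource/BaseSourceImpl arrangement Elliott describes can be sketched in a few lines of Java. This is an illustration of the pattern only: the method names and the WalMetricsSourceSketch class are made up for the example and are not the actual HBase metrics classes.

```java
// Sketch (not the actual HBase code) of the shared-interface pattern:
// BaseSource is the contract every metrics source exposes; BaseSourceImpl is
// the per-compat-jar base class; concrete sources such as a WAL metrics
// source derive from it. All method names here are illustrative.
interface BaseSource {
  void init();
  String getMetricsName();
}

class BaseSourceImpl implements BaseSource {
  private final String metricsName;
  BaseSourceImpl(String metricsName) { this.metricsName = metricsName; }
  @Override public void init() { /* register with the metrics system here */ }
  @Override public String getMetricsName() { return metricsName; }
}

// A concrete source for HLog/WAL stats just extends the shared base class.
class WalMetricsSourceSketch extends BaseSourceImpl {
  WalMetricsSourceSketch() { super("WAL"); }
}

public class MetricsPatternDemo {
  public static String describe(BaseSource source) {
    source.init();
    return source.getMetricsName();
  }
  public static void main(String[] args) {
    System.out.println(describe(new WalMetricsSourceSketch())); // prints "WAL"
  }
}
```

The point of the split is that each compat jar (hadoop1, hadoop2) can supply its own BaseSourceImpl against its metrics system, while callers only see the shared interface.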
[jira] [Commented] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496623#comment-13496623 ] Hadoop QA commented on HBASE-7156: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12553372/HBASE-7156-v3.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 93 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 17 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.client.TestMultiParallel Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3330//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3330//console This message is automatically generated. Add Data Block Encoding and -D opts to Performance Evaluation - Key: HBASE-7156 URL: https://issues.apache.org/jira/browse/HBASE-7156 Project: HBase Issue Type: Improvement Components: test Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Minor Attachments: HBASE-7156-v0.patch, HBASE-7156-v1.patch, HBASE-7156-v2.patch, HBASE-7156-v3.patch Add the ability to specify Data Block Encoding and other configuration options. --blockEncoding=TYPE -D property=value Example: hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1 -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7156) Add Data Block Encoding and -D opts to Performance Evaluation
[ https://issues.apache.org/jira/browse/HBASE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-7156: - Resolution: Fixed Fix Version/s: 0.96.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to trunk. Thanks for the patch Matteo. Add Data Block Encoding and -D opts to Performance Evaluation - Key: HBASE-7156 URL: https://issues.apache.org/jira/browse/HBASE-7156 Project: HBase Issue Type: Improvement Components: test Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7156-v0.patch, HBASE-7156-v1.patch, HBASE-7156-v2.patch, HBASE-7156-v3.patch Add the ability to specify Data Block Encoding and other configuration options. --blockEncoding=TYPE -D property=value Example: hbase org.apache.hadoop.hbase.PerformanceEvaluation -D mapreduce.task.timeout=6 --blockEncoding=DIFF sequentialWrite 1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
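The option syntax this issue adds (a generic -D property=value plus --blockEncoding=TYPE ahead of the positional command arguments) can be illustrated with a small stand-alone parser. This is a sketch of the syntax only, not PerformanceEvaluation's actual argument handling.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of pulling "-D property=value" and "--blockEncoding=TYPE" options
// out of an argument list before the remaining positional arguments
// (e.g. "sequentialWrite 1") are handed to the tool.
public class OptParseSketch {
  public static Map<String, String> parse(String[] args) {
    Map<String, String> opts = new HashMap<>();
    for (int i = 0; i < args.length; i++) {
      String a = args[i];
      if (a.equals("-D") && i + 1 < args.length) {
        String[] kv = args[++i].split("=", 2);   // -D property=value
        opts.put(kv[0], kv.length > 1 ? kv[1] : "");
      } else if (a.startsWith("--blockEncoding=")) {
        opts.put("blockEncoding", a.substring("--blockEncoding=".length()));
      }
      // anything else is a positional argument and is left alone
    }
    return opts;
  }
}
```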
[jira] [Resolved] (HBASE-7133) svn:ignore on module directories
[ https://issues.apache.org/jira/browse/HBASE-7133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar resolved HBASE-7133. -- Resolution: Fixed Fix Version/s: 0.96.0 Hadoop Flags: Reviewed Committed this. Thanks Stack for taking a look. svn:ignore on module directories Key: HBASE-7133 URL: https://issues.apache.org/jira/browse/HBASE-7133 Project: HBase Issue Type: Improvement Components: build Affects Versions: 0.96.0 Reporter: Enis Soztutar Assignee: Enis Soztutar Priority: Trivial Fix For: 0.96.0 Attachments: hbase-7133.patch This has been bothering me whenever I go back to svn to commit smt. We have to set svn:ignore on module directories hbase-common,hbase-server,etc. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7148) Some files in hbase-examples module miss license header
[ https://issues.apache.org/jira/browse/HBASE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-7148: - Resolution: Fixed Fix Version/s: 0.96.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed this. Thanks for the reviews. Some files in hbase-examples module miss license header --- Key: HBASE-7148 URL: https://issues.apache.org/jira/browse/HBASE-7148 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Enis Soztutar Fix For: 0.96.0 Attachments: hbase-7148.patch Trunk build 3530 got to building hbase-examples module but failed: {code} [INFO] HBase - Examples .. FAILURE [3.222s] [INFO] [INFO] BUILD FAILURE [INFO] [INFO] Total time: 29:21.569s [INFO] Finished at: Sun Nov 11 15:17:35 UTC 2012 [INFO] Final Memory: 68M/642M [INFO] [ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.8:check (default) on project hbase-examples: Too many unapproved licenses: 20 - [Help 1] {code} Looks like license headers are missing in some of the files in hbase-examples module -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6945) Compilation errors when using non-Sun JDKs to build HBase-0.94
[ https://issues.apache.org/jira/browse/HBASE-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kumar Ravi updated HBASE-6945: -- Attachment: (was: HBASE-6945_ResourceCheckerJUnitListener.patch) Compilation errors when using non-Sun JDKs to build HBase-0.94 -- Key: HBASE-6945 URL: https://issues.apache.org/jira/browse/HBASE-6945 Project: HBase Issue Type: Sub-task Components: build Affects Versions: 0.94.1 Environment: RHEL 6.3, IBM Java 7 Reporter: Kumar Ravi Assignee: Kumar Ravi Labels: patch Fix For: 0.94.4 When using IBM Java 7 to build HBase-0.94.1, the following compilation error is seen. [INFO] - [ERROR] COMPILATION ERROR : [INFO] - [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[23,25] error: package com.sun.management does not exist [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[46,25] error: cannot find symbol [ERROR] symbol: class UnixOperatingSystemMXBean location: class ResourceAnalyzer /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[75,29] error: cannot find symbol [ERROR] symbol: class UnixOperatingSystemMXBean location: class ResourceAnalyzer /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[76,23] error: cannot find symbol [INFO] 4 errors [INFO] - [INFO] [INFO] BUILD FAILURE [INFO] I have a patch available which should work for all JDKs including Sun. I am in the process of testing this patch. Preliminary tests indicate the build is working fine with this patch. I will post this patch when I am done testing. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6945) Compilation errors when using non-Sun JDKs to build HBase-0.94
[ https://issues.apache.org/jira/browse/HBASE-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kumar Ravi updated HBASE-6945: -- Attachment: HBASE-6945.patch As per stack's recommendation, merging HBASE-7150 patch (New class JVM.java) with patch for ResourceCheckerJUnitListener (This JIRA). Note that JVM.java cannot use hadoop's ShellCommandExecutor as intended due to differences in IBM JDK's implementation of the Long class. Compilation errors when using non-Sun JDKs to build HBase-0.94 -- Key: HBASE-6945 URL: https://issues.apache.org/jira/browse/HBASE-6945 Project: HBase Issue Type: Sub-task Components: build Affects Versions: 0.94.1 Environment: RHEL 6.3, IBM Java 7 Reporter: Kumar Ravi Assignee: Kumar Ravi Labels: patch Fix For: 0.94.4 Attachments: HBASE-6945.patch When using IBM Java 7 to build HBase-0.94.1, the following compilation error is seen. [INFO] - [ERROR] COMPILATION ERROR : [INFO] - [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[23,25] error: package com.sun.management does not exist [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[46,25] error: cannot find symbol [ERROR] symbol: class UnixOperatingSystemMXBean location: class ResourceAnalyzer /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[75,29] error: cannot find symbol [ERROR] symbol: class UnixOperatingSystemMXBean location: class ResourceAnalyzer /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[76,23] error: cannot find symbol [INFO] 4 errors [INFO] - [INFO] [INFO] BUILD FAILURE [INFO] I have a patch available which should work for all JDKs including Sun. I am in the process of testing this patch. Preliminary tests indicate the build is working fine with this patch. I will post this patch when I am done testing. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
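The compile errors above come from a hard dependency on the non-portable com.sun.management package. The usual remedy, and the idea behind the JVM.java helper mentioned in the patch comment, is to resolve the vendor class reflectively at runtime so the code compiles everywhere and degrades gracefully on JVMs (such as IBM J9) that don't provide UnixOperatingSystemMXBean. A hedged sketch of that approach, not the actual patch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.reflect.Method;

// Sketch: look the vendor-specific interface up by name instead of importing
// it, so this class still compiles and loads on JVMs that don't ship
// com.sun.management.UnixOperatingSystemMXBean.
public class OpenFdSketch {
  public static long openFileDescriptorCount() {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    try {
      Class<?> unix = Class.forName("com.sun.management.UnixOperatingSystemMXBean");
      if (unix.isInstance(os)) {
        Method m = unix.getMethod("getOpenFileDescriptorCount");
        return (Long) m.invoke(os);
      }
    } catch (ReflectiveOperationException e) {
      // fall through: the vendor class is not available on this JVM
    }
    return -1L; // signal "unknown" instead of failing to compile or load
  }
}
```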
[jira] [Commented] (HBASE-6945) Compilation errors when using non-Sun JDKs to build HBase-0.94
[ https://issues.apache.org/jira/browse/HBASE-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496675#comment-13496675 ] Kumar Ravi commented on HBASE-6945: --- A request to the person who will be committing this patch - After review when this patch is committed, please delete the file (svn rm?) hbase-common/src/main/java/org/apache/hadoop/hbase/util/OSMXBean.java that was included with HBASE-6965 as the file JVM.java included in the patch above replaces that file. Compilation errors when using non-Sun JDKs to build HBase-0.94 -- Key: HBASE-6945 URL: https://issues.apache.org/jira/browse/HBASE-6945 Project: HBase Issue Type: Sub-task Components: build Affects Versions: 0.94.1 Environment: RHEL 6.3, IBM Java 7 Reporter: Kumar Ravi Assignee: Kumar Ravi Labels: patch Fix For: 0.94.4 Attachments: HBASE-6945.patch When using IBM Java 7 to build HBase-0.94.1, the following compilation error is seen. [INFO] - [ERROR] COMPILATION ERROR : [INFO] - [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[23,25] error: package com.sun.management does not exist [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[46,25] error: cannot find symbol [ERROR] symbol: class UnixOperatingSystemMXBean location: class ResourceAnalyzer /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[75,29] error: cannot find symbol [ERROR] symbol: class UnixOperatingSystemMXBean location: class ResourceAnalyzer /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[76,23] error: cannot find symbol [INFO] 4 errors [INFO] - [INFO] [INFO] BUILD FAILURE [INFO] I have a patch available which should work for all JDKs including Sun. I am in the process of testing this patch. Preliminary tests indicate the build is working fine with this patch. I will post this patch when I am done testing. 
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-7158) Allow CopyTable to identify the source cluster (for replication scenarios)
Lars Hofhansl created HBASE-7158: Summary: Allow CopyTable to identify the source cluster (for replication scenarios) Key: HBASE-7158 URL: https://issues.apache.org/jira/browse/HBASE-7158 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7158) Allow CopyTable to identify the source cluster (for replication scenarios)
[ https://issues.apache.org/jira/browse/HBASE-7158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496683#comment-13496683 ] Lars Hofhansl commented on HBASE-7158: -- When I worked on HBASE-2195 I added a mechanism for an edit to identify its source cluster, so that replication would not bounce it back to the source. See: {{this.clusterId = zkHelper.getUUIDForCluster(zkHelper.getZookeeperWatcher());}} in ReplicationSource, and {{put.setClusterId(entry.getKey().getClusterId());}} in ReplicationSink. In master-master replication scenarios, it would be very useful if CopyTable would identify the source cluster (by tagging each Put/Delete with the source clusterId before applying it). Allow CopyTable to identify the source cluster (for replication scenarios) -- Key: HBASE-7158 URL: https://issues.apache.org/jira/browse/HBASE-7158 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7158) Allow CopyTable to identify the source cluster (for replication scenarios)
[ https://issues.apache.org/jira/browse/HBASE-7158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-7158: - Comment: was deleted (was: When I worked on HBASE-2195 I added a mechanism for an edit to identify its source cluster, so that replication would not bounce it back to the source. See: {{this.clusterId = zkHelper.getUUIDForCluster(zkHelper.getZookeeperWatcher());}} in ReplicationSource, and {{put.setClusterId(entry.getKey().getClusterId());}} in ReplicationSink. In master-master replication scenarios, it would be very useful if CopyTable would identify the source cluster (by tagging each Put/Delete with the source clusterId before applying it).) Allow CopyTable to identify the source cluster (for replication scenarios) -- Key: HBASE-7158 URL: https://issues.apache.org/jira/browse/HBASE-7158 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl When I worked on HBASE-2195 I added a mechanism for an edit to identify its source cluster, so that replication would not bounce it back to the source. See: {{this.clusterId = zkHelper.getUUIDForCluster(zkHelper.getZookeeperWatcher());}} in ReplicationSource, and {{put.setClusterId(entry.getKey().getClusterId());}} in ReplicationSink. In master-master replication scenarios, it would be very useful if CopyTable would identify the source cluster (by tagging each Put/Delete with the source clusterId before applying it). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7158) Allow CopyTable to identify the source cluster (for replication scenarios)
[ https://issues.apache.org/jira/browse/HBASE-7158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-7158: - Description: When I worked on HBASE-2195 I added a mechanism for an edit to identify its source cluster, so that replication would not bounce it back to the source. See: {{this.clusterId = zkHelper.getUUIDForCluster(zkHelper.getZookeeperWatcher());}} in ReplicationSource, and {{put.setClusterId(entry.getKey().getClusterId());}} in ReplicationSink. In master-master replication scenarios, it would be very useful if CopyTable would identify the source cluster (by tagging each Put/Delete with the source clusterId before applying it). Allow CopyTable to identify the source cluster (for replication scenarios) -- Key: HBASE-7158 URL: https://issues.apache.org/jira/browse/HBASE-7158 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl When I worked on HBASE-2195 I added a mechanism for an edit to identify its source cluster, so that replication would not bounce it back to the source. See: {{this.clusterId = zkHelper.getUUIDForCluster(zkHelper.getZookeeperWatcher());}} in ReplicationSource, and {{put.setClusterId(entry.getKey().getClusterId());}} in ReplicationSink. In master-master replication scenarios, it would be very useful if CopyTable would identify the source cluster (by tagging each Put/Delete with the source clusterId before applying it). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
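The loop-prevention mechanism Lars refers to can be sketched independently of HBase: each edit carries the UUID of its source cluster, and a sink refuses to re-apply edits that originated locally. The Edit class and shouldApply method below are illustrative stand-ins, not HBase's ReplicationSource/ReplicationSink.

```java
import java.util.UUID;

// Sketch of the idea behind HBASE-2195's cluster-id tagging: every edit is
// stamped with the UUID of the cluster it originated on, and a replication
// sink drops edits whose clusterId matches the local cluster, so
// master-master replication does not bounce them back to the source.
public class ClusterIdSketch {
  public static final class Edit {
    public final UUID clusterId; // cluster the edit originated on
    public final String row;
    public Edit(UUID clusterId, String row) {
      this.clusterId = clusterId;
      this.row = row;
    }
  }

  /** Returns true if the sink should apply this edit locally. */
  public static boolean shouldApply(Edit edit, UUID localClusterId) {
    return !localClusterId.equals(edit.clusterId);
  }
}
```

A CopyTable doing what the issue asks would stamp each Put/Delete with the source cluster's UUID before writing, so that replication running on the target cluster can recognize where the data came from and avoid bouncing it back.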
[jira] [Created] (HBASE-7159) Upgrade zookeeper dependency to 3.4.5
Ted Yu created HBASE-7159: - Summary: Upgrade zookeeper dependency to 3.4.5 Key: HBASE-7159 URL: https://issues.apache.org/jira/browse/HBASE-7159 Project: HBase Issue Type: Bug Reporter: Ted Yu zookeeper 3.4.5 works with Oracle JDK 1.7 We should upgrade to zookeeper 3.4.5 in trunk -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell
[ https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496717#comment-13496717 ] Hudson commented on HBASE-4913: --- Integrated in HBase-0.94-security #84 (See [https://builds.apache.org/job/HBase-0.94-security/84/]) HBASE-4913 Addendum: Fix TestShell (Revision 1408904) Result = SUCCESS larsh : Files : * /hbase/branches/0.94/src/main/ruby/hbase/admin.rb Per-CF compaction Via the Shell --- Key: HBASE-4913 URL: https://issues.apache.org/jira/browse/HBASE-4913 Project: HBase Issue Type: Sub-task Components: Client, regionserver Reporter: Nicolas Spiegelberg Assignee: Mubarak Seyed Fix For: 0.94.3, 0.96.0 Attachments: 4913-addendum2.txt, HBASE-4913-94.patch, HBASE-4913-addendum.patch, HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6962) Upgrade hadoop 1 dependency to hadoop 1.1
[ https://issues.apache.org/jira/browse/HBASE-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496718#comment-13496718 ] Varun Sharma commented on HBASE-6962: - Hey folks, I saw the hadoop 1.1.0 release notes and wanted to check on HADOOP-8230 in particular. I am looking to try 1.1 with hbase 0.94 so as to get the hdfs stale node patches into the system. However, I wanted to check on the sync/append changes. Does this change mean that hsync is enabled by default and is going to be used for the HBase append operation instead of the previous hflush implementation? From my understanding, this would be a performance cost (persisting to disk vs writing to OS buffers)? Thanks Varun Upgrade hadoop 1 dependency to hadoop 1.1 - Key: HBASE-6962 URL: https://issues.apache.org/jira/browse/HBASE-6962 Project: HBase Issue Type: Bug Environment: hadoop 1.1 contains multiple important fixes, including HDFS-3703 Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.96.0 Attachments: 6962.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6962) Upgrade hadoop 1 dependency to hadoop 1.1
[ https://issues.apache.org/jira/browse/HBASE-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496736#comment-13496736 ] Lars Hofhansl commented on HBASE-6962: -- While 0.94 can be build against Hadoop 1.1.0 the behavior will be the same. There is a lot of confusion about append, sync, hsync, hflush. Let me try to clarify. # HBase never needed append, but only the sync part of the 0.20 append branch. # Until HDFS-744 Hadoop did not have any durable sync. hsync was identical to hflush # When we talk about sync in HBase we always mean hflush (until HBASE-5954 is done, that is) That means as far as this issue is concerned, you can safely switch to Hadoop 1.1.0. (I tried to summarize this here: http://hadoop-hbase.blogspot.com/2012/05/hbase-hdfs-and-durable-sync.html) Upgrade hadoop 1 dependency to hadoop 1.1 - Key: HBASE-6962 URL: https://issues.apache.org/jira/browse/HBASE-6962 Project: HBase Issue Type: Bug Environment: hadoop 1.1 contains multiple important fixes, including HDFS-3703 Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.96.0 Attachments: 6962.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
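The hflush-vs-hsync distinction Lars draws has a familiar local-filesystem analogue: a plain write hands data to the OS, while FileChannel.force() asks for it to be pushed to durable storage. The sketch below only illustrates those two levels of "sync"; it is not HDFS code, and the mapping to hflush/hsync is an analogy, not an equivalence.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Local-filesystem analogy: write() makes the data visible (roughly what
// hflush does by pushing to the DataNode pipeline), while force(true) asks
// for durability on disk (roughly what a true hsync, per HDFS-744, does).
public class FlushVsSync {
  public static long writeAndSync(Path file, byte[] data) {
    try (FileChannel ch = FileChannel.open(file,
        StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
      ch.write(ByteBuffer.wrap(data)); // handed to the OS: the hflush analogue
      ch.force(true);                  // persisted to disk: the hsync analogue
      return ch.size();
    } catch (IOException e) {
      return -1;
    }
  }

  // Self-contained demo: write three bytes to a temp file and sync them.
  public static long demo() {
    try {
      Path p = Files.createTempFile("flush-vs-sync", ".bin");
      p.toFile().deleteOnExit();
      return writeAndSync(p, new byte[] {1, 2, 3});
    } catch (IOException e) {
      return -1;
    }
  }
}
```

The performance concern in the question maps to the same trade-off here: force(true) waits on physical storage, while a bare write returns as soon as the OS has the data.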
[jira] [Commented] (HBASE-6863) Offline snapshots
[ https://issues.apache.org/jira/browse/HBASE-6863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496738#comment-13496738 ] Jonathan Hsieh commented on HBASE-6863: --- [~jesse_yates] Ping. Is this going to get committed to the branch? Offline snapshots - Key: HBASE-6863 URL: https://issues.apache.org/jira/browse/HBASE-6863 Project: HBase Issue Type: Sub-task Reporter: Jesse Yates Assignee: Jesse Yates Fix For: hbase-6055 Attachments: hbase-6863-v3.patch Create a snapshot of a table while the table is offline. This also should handle a lot of the common utils/scaffolding for taking snapshots (HBASE-6055) with minimal overhead as the code itself is pretty simple. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7157) Report Metrics into an HBase table
[ https://issues.apache.org/jira/browse/HBASE-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496764#comment-13496764 ] Enis Soztutar commented on HBASE-7157: -- Do we have to resurface the system-tables discussion? This looks like a system table to me. Could not find the issue. Report Metrics into an HBase table -- Key: HBASE-7157 URL: https://issues.apache.org/jira/browse/HBASE-7157 Project: HBase Issue Type: Improvement Components: metrics Reporter: Elliott Clark Assignee: Elliott Clark Right now metrics are sent using ServerLoad and RegionLoad to the master. We should store those (and other) metrics in a table. Storing these metrics in a table will allow the LoadBalancer to have more context. In addition those metrics will be useful for the ui. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7138) RegionSplitter's rollingSplit terminated with / by zero, and the _balancedSplit file was not deleted properly
[ https://issues.apache.org/jira/browse/HBASE-7138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13496779#comment-13496779 ] Davey Yan commented on HBASE-7138: -- [~ram_krish] The splitCount will not be 0 with option '--risky' with or without this patch. I have tested. RegionSplitter's rollingSplit terminated with / by zero, and the _balancedSplit file was not deleted properly --- Key: HBASE-7138 URL: https://issues.apache.org/jira/browse/HBASE-7138 Project: HBase Issue Type: Bug Components: util Affects Versions: 0.94.1 Environment: Ubuntu Server 10.04, Hadoop 1.0.3 Reporter: Davey Yan Priority: Minor Attachments: RegionSplitter_HBASE-7138-0.94.patch, RegionSplitter_HBASE-7138.patch The 'splitCount' in this line is zero in some scenarios, which then throws ArithmeticException: / by zero, and the '_balancedSplit' file was not deleted:
{code:java}
LOG.debug("Avg Time / Split = " + org.apache.hadoop.util.StringUtils.formatTime(tDiff / splitCount));
{code}
Steps to reproduce:
{code}
shell create 'test2', 'i'
shell for i in 'a'..'z' do for j in 'a'..'z' do put 'test2', "#{i}#{j}", "i:#{j}", "#{j}" end end
{code}
{noformat}
$ bin/hbase org.apache.hadoop.hbase.util.RegionSplitter -r -o 2 test2 HexStringSplit
12/11/08 19:20:40 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT 12/11/08 19:20:40 INFO zookeeper.ZooKeeper: Client environment:host.name=dev-vm0 12/11/08 19:20:40 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_29 12/11/08 19:20:40 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc. 
[jira] [Updated] (HBASE-7137) Improve Bytes to accept byte buffers which don't allow us to directly access their backing arrays
[ https://issues.apache.org/jira/browse/HBASE-7137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hiroshi Ikeda updated HBASE-7137: - Attachment: HBASE-7137-V2.patch bq. In toBytes, we do dup.position(0) but we don't do this when we do getBytes. Should we? Should getBytes make use of toBytes? At first I thought that Bytes.toBytes(ByteBuffer) and toStringBinary(ByteBuffer) were meant for error logs, because writing out the whole contents of a byte buffer after an exception is thrown in the middle of reading it requires resetting the position to 0. But the methods do not seem to be used for that purpose. I am not sure why the two similar methods, Bytes.toBytes(ByteBuffer) and getBytes(ByteBuffer), both exist; it might be possible to replace calls to toBytes(ByteBuffer) with calls to getBytes(ByteBuffer) inside HBase. In any case, the class Bytes is annotated as stable, so we can only extend its specification while keeping compatibility. Added a revised patch; in the previous patch the implementations of Bytes.toBytes(ByteBuffer) and getBytes(ByteBuffer) were quite similar, so in the new patch I extracted the shared logic into a new private method. Improve Bytes to accept byte buffers which don't allow us to directly access their backing arrays Key: HBASE-7137 URL: https://issues.apache.org/jira/browse/HBASE-7137 Project: HBase Issue Type: Improvement Reporter: Hiroshi Ikeda Priority: Minor Attachments: HBASE-7137.patch, HBASE-7137-V2.patch Inside HBase there seems to be an implicit assumption that byte buffers have backing arrays and are not read-only, so that ByteBuffer.array() and arrayOffset() can be called freely without runtime exceptions. But some classes, including Bytes, are meant to be used from outside HBase, and we should consider the possibility that their methods receive byte buffers for which that assumption does not hold.
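The issue can be illustrated with a minimal sketch (not the actual patch; the class and method names here are hypothetical): a read-only or direct ByteBuffer throws on array()/arrayOffset(), but a relative bulk get on a duplicate copies its contents safely and leaves the caller's position untouched.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class ByteBufferCopySketch {
    // Copy the remaining bytes of a buffer without calling array()/arrayOffset(),
    // so it also works for read-only and direct buffers. Duplicating first gives
    // an independent position/limit, so the caller's buffer is left unchanged.
    public static byte[] toBytes(ByteBuffer buf) {
        ByteBuffer dup = buf.duplicate();
        byte[] out = new byte[dup.remaining()];
        dup.get(out); // relative bulk get works on any ByteBuffer
        return out;
    }

    public static void main(String[] args) {
        // array() on this buffer would throw ReadOnlyBufferException
        ByteBuffer readOnly = ByteBuffer.wrap("hello".getBytes()).asReadOnlyBuffer();
        byte[] copy = toBytes(readOnly);
        System.out.println(Arrays.equals(copy, "hello".getBytes())); // true
        System.out.println(readOnly.position()); // 0, caller's position unaffected
    }
}
```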
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-4676) Prefix Compression - Trie data block encoding
[ https://issues.apache.org/jira/browse/HBASE-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Corgan updated HBASE-4676: --- Attachment: HBASE-4676-prefix-tree-trunk-v7.patch Attaching the latest patch, v7, which addresses review comments and contains more docs. Prefix Compression - Trie data block encoding - Key: HBASE-4676 URL: https://issues.apache.org/jira/browse/HBASE-4676 Project: HBase Issue Type: New Feature Components: io, Performance, regionserver Affects Versions: 0.96.0 Reporter: Matt Corgan Assignee: Matt Corgan Attachments: HBASE-4676-0.94-v1.patch, HBASE-4676-prefix-tree-trunk-v1.patch, HBASE-4676-prefix-tree-trunk-v2.patch, HBASE-4676-prefix-tree-trunk-v3.patch, HBASE-4676-prefix-tree-trunk-v4.patch, HBASE-4676-prefix-tree-trunk-v5.patch, HBASE-4676-prefix-tree-trunk-v6.patch, HBASE-4676-prefix-tree-trunk-v7.patch, hbase-prefix-trie-0.1.jar, PrefixTrie_Format_v1.pdf, PrefixTrie_Performance_v1.pdf, SeeksPerSec by blockSize.png The HBase data block format has room for two significant improvements for applications with high block cache hit ratios. First, there is no prefix compression, and the current KeyValue format is somewhat metadata-heavy, so there can be tremendous memory bloat for many common data layouts, specifically those with long keys and short values. Second, there is no random access to KeyValues inside data blocks. This means that every time you double the data block size, average seek time (or average CPU consumption) goes up by a factor of 2. The standard 64KB block size is ~10x slower for random seeks than a 4KB block size, but block sizes as small as 4KB cause problems elsewhere. Block sizes of 256KB, 1MB, or more may be more efficient from a disk-access and block-cache perspective in many big-data applications, but are infeasible from a random-seek perspective. The PrefixTrie block encoding format attempts to solve both of these problems.
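The redundancy that prefix compression exploits can be shown with a toy measurement (illustrative only, unrelated to the actual encoder): consecutive sorted row keys often share a long common byte prefix, which a trie stores only once.

```java
public class PrefixShareSketch {
    // Length of the byte prefix two sorted row keys share; a trie encoder
    // would store that prefix once instead of repeating it per KeyValue.
    public static int commonPrefix(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length), i = 0;
        while (i < n && a[i] == b[i]) i++;
        return i;
    }

    public static void main(String[] args) {
        // Hypothetical row keys, chosen only to show the shared prefix
        byte[] k1 = "user123/profile".getBytes();
        byte[] k2 = "user123/settings".getBytes();
        System.out.println(commonPrefix(k1, k2)); // 8 -- "user123/" stored only once
    }
}
```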
Some features: * trie format for row key encoding completely eliminates duplicate row keys and encodes similar row keys into a standard trie structure, which also saves a lot of space * the column family is currently stored once at the beginning of each block; this could easily be modified to allow multiple family names per block * all qualifiers in the block are stored in their own trie format, which caters nicely to wide rows; duplicate qualifiers between rows are eliminated, and the size of this trie determines the width of the block's qualifier fixed-width-int * the minimum timestamp is stored at the beginning of the block and deltas are calculated from it; the maximum delta determines the width of the block's timestamp fixed-width-int The block is structured with metadata at the beginning, then a section for the row trie, then the column trie, then the timestamp deltas, and then all the values. Most work is done in the row trie, where every leaf node (corresponding to a row) contains a list of offsets/references to the cells in that row. Each cell is fixed-width to enable binary searching and is represented by [1 byte operationType, X bytes qualifier offset, X bytes timestamp delta offset]. If all operation types in a block are the same, there will be zero per-cell overhead for them; the same holds for timestamps, and will for qualifiers when I get a chance. So the compression aspect is very strong, but it makes a few small sacrifices in VarInt size to enable faster binary searches in trie fan-out nodes. A more compressed but slower version might build on this by also applying further (suffix, etc.) compression on the trie nodes at the cost of slower write speed. Even further compression could be obtained by using all VInts instead of FInts, with a sacrifice in random seek speed (though not a huge one). One current drawback is the write speed.
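The timestamp scheme above — store the block's minimum timestamp once and encode each cell's delta in the smallest fixed width that fits the largest delta — can be sketched as follows (a toy illustration of the described scheme, not HBase's implementation):

```java
import java.util.Arrays;

public class TimestampDeltaSketch {
    // Smallest number of bytes that can hold maxDelta as an unsigned value;
    // this becomes the block's fixed-width-int for timestamp deltas.
    public static int widthFor(long maxDelta) {
        int width = 1;
        while ((maxDelta >>> (8 * width)) != 0) width++;
        return width;
    }

    public static void main(String[] args) {
        // Hypothetical cell timestamps within one block
        long[] ts = {1352361640000L, 1352361640123L, 1352361641000L};
        long min = Arrays.stream(ts).min().getAsLong();
        long maxDelta = Arrays.stream(ts).map(t -> t - min).max().getAsLong();
        // maxDelta is 1000, which fits in 2 bytes instead of an 8-byte long
        System.out.println(widthFor(maxDelta)); // 2
    }
}
```

If every cell in the block has the same timestamp, the maximum delta is 0 and a single byte (or, with an extra flag, no per-cell bytes at all) suffices, matching the "zero per-cell overhead" point above.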
While written with good constructs like TreeMaps, ByteBuffers, and binary searches, it is not optimized to the same level as the read path. Work will need to be done on the data structures used for encoding, which could probably yield a 10x improvement. It will still be slower than delta encoding, but with a much higher decode speed. I have not yet created a thorough benchmark for write speed or sequential read speed. Though the trie is reaching a point where it is internally very efficient (probably within half or a quarter of its max read speed), the way that HBase currently uses it is far from optimal. The KeyValueScanner and related classes that iterate through the trie will eventually need to be smarter and have methods to do things like skipping to the next row of results without scanning every