[jira] [Commented] (HBASE-14810) Update Hadoop support description to explain "not tested" vs "not supported"
[ https://issues.apache.org/jira/browse/HBASE-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124773#comment-15124773 ] Hadoop QA commented on HBASE-14810:
---
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 5m 48s | master passed |
| +1 | mvneclipse | 1m 43s | master passed |
| +1 | javadoc | 5m 22s | master passed with JDK v1.8.0_66 |
| +1 | javadoc | 4m 35s | master passed with JDK v1.7.0_91 |
| +1 | mvninstall | 4m 48s | the patch passed |
| +1 | mvneclipse | 1m 35s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | hadoopcheck | 45m 28s | Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. |
| +1 | javadoc | 6m 6s | the patch passed with JDK v1.8.0_66 |
| +1 | javadoc | 5m 20s | the patch passed with JDK v1.7.0_91 |
| -1 | unit | 226m 31s | root in the patch failed with JDK v1.8.0_66. |
| -1 | unit | 245m 5s | root in the patch failed with JDK v1.7.0_91. |
| +1 | asflicense | 0m 29s | Patch does not generate ASF License warnings. |
| | | 554m 15s | |

|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hbase.master.balancer.TestDefaultLoadBalancer |
| | hadoop.hbase.regionserver.TestRegionServerHostname |
| | hadoop.hbase.regionserver.throttle.TestFlushWithThroughputController |
| | hadoop.hbase.regionserver.TestEncryptionKeyRotation |
| | hadoop.hbase.regionserver.TestPerColumnFamilyFlush |
| | hadoop.hbase.regionserver.TestRegionReplicaFailover |
| | hadoop.hbase.master.procedure.TestWALProcedureStoreOnHDFS |
| JDK v1.8.0_66 Timed out junit tests | org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster |
| | org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer |
| | org.apache.hadoop.hbase.regionserver.wal.TestLogRolling |
| | org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot |
| | org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot |
| | org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot |
| | org.apache.hadoop.hbase.regionserver.TestHRegionOnCluster |
| | org.apache.hadoop.hbase.regionserver.TestCompoundBloomFilter |
| | org.apache.hadoop.hbase.snapshot.TestExportSnapshot |
| | org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite |
| | org.apache.hadoop.hbase.regionserver.TestCacheOnWriteInSchema |
| | org.apache.hadoop.hbase.regionserver.TestParallelPut |
| | org.apache.hadoop.hbase.io.encoding.TestEncodedSeekers |
| JDK v1.7.0_91 Failed junit tests | hadoop.hbase.master.balancer.TestStochasticLoadBalancer |
| JDK v1.7.0_91 Timed out junit tests | org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster |
| | org.apache.hadoop.hbase.io.encoding.TestDataBlockEncoders |
| | org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot |
| | org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot |
| | org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot |
| | org.apache.hadoop.hbase.snapshot.TestExportSnapshot |
| | org.apache.hadoop.hbase.io.hfile.TestCacheOnWri
[jira] [Commented] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124771#comment-15124771 ] ramkrishna.s.vasudevan commented on HBASE-15180:

Renaming the interface is better, +1 on that. G1GC with MSLAB is interesting to see. We need a handoff from the incoming request handlers to the Server. Deciding based on whether MSLAB is on or off makes sense, but if it is 'off', then creating a BB pool on the RpcServer request-processing side may still have issues while the BBs are being flushed to an hfile?

> Reduce garbage created while reading Cells from Codec Decoder
> -------------------------------------------------------------
>
> Key: HBASE-15180
> URL: https://issues.apache.org/jira/browse/HBASE-15180
> Project: HBase
> Issue Type: Sub-task
> Components: regionserver
> Reporter: Anoop Sam John
> Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15180.patch, HBASE-15180_V2.patch
>
> In KeyValueDecoder#parseCell (the default Codec decoder) we use KeyValueUtil#iscreate to read cells from the InputStream. Here we first create a byte[] of length 4 and read the cell length, then an array of the cell's length, read the cell bytes into it, and create a KV.
> Actually in the server we read the requests into a byte[], and the CellScanner is created on top of a ByteArrayInputStream on top of this. By default in the write path we have MSLAB usage ON. So while adding Cells to the memstore, we copy the Cell bytes to MSLAB memory chunks (default 2 MB size) and recreate Cells over those bytes. So there is no issue if we create Cells over the RPC read byte[] directly here in the Decoder. No need for 2 byte[] creations and copies for every Cell in a request.
> My plan is to make a Cell-aware ByteArrayInputStream which can read Cells directly from it.
> The same Codec path is used on the client side also. There it is better to avoid this direct Cell creation and continue with the copy-to-smaller-byte[]s path. The plan is to introduce something like a CodecContext associated with every Codec instance which can indicate the server/client context. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
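The allocation pattern the issue describes (a 4-byte length array plus a cell-sized array per cell, versus views over the shared RPC buffer) can be sketched as follows. This is an illustrative toy, not the actual HBASE-15180 patch; the class and method names here are hypothetical.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of a "cell-aware" reader: instead of copying each
// length-prefixed cell out of the RPC request byte[], it hands out
// (offset, length) views over the shared backing array.
public class CellSliceReader {
    private final byte[] buf;   // the RPC request bytes, read once
    private int pos;

    public CellSliceReader(byte[] buf) { this.buf = buf; }

    public boolean hasNext() { return pos < buf.length; }

    // Each cell is assumed serialized as a 4-byte big-endian length followed
    // by the cell bytes. Rather than allocating a 4-byte array plus a
    // cell-sized array per cell, return a slice over the shared buffer.
    public ByteBuffer nextCellSlice() {
        int len = ((buf[pos] & 0xff) << 24) | ((buf[pos + 1] & 0xff) << 16)
                | ((buf[pos + 2] & 0xff) << 8) | (buf[pos + 3] & 0xff);
        ByteBuffer slice = ByteBuffer.wrap(buf, pos + 4, len).slice();
        pos += 4 + len;
        return slice;
    }
}
```

On the client side, as the description notes, copying to smaller byte[]s can remain the better trade-off, since the client may hold on to Cells long after the response buffer should be released.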
[jira] [Commented] (HBASE-15192) TestRegionMergeTransactionOnCluster#testCleanMergeReference is flaky
[ https://issues.apache.org/jira/browse/HBASE-15192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124751#comment-15124751 ] ramkrishna.s.vasudevan commented on HBASE-15192:

I can change the sleep to something else so that we can assert the store file count after the chore is run.

> TestRegionMergeTransactionOnCluster#testCleanMergeReference is flaky
> --------------------------------------------------------------------
>
> Key: HBASE-15192
> URL: https://issues.apache.org/jira/browse/HBASE-15192
> Project: HBase
> Issue Type: Test
> Reporter: Ted Yu
> Assignee: Ted Yu
> Priority: Minor
> Attachments: HBASE-15192.v1.patch
>
> TestRegionMergeTransactionOnCluster#testCleanMergeReference fails intermittently due to a failed assertion on the cleaned merge region count:
> {code}
> testCleanMergeReference(org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster) Time elapsed: 64.183 sec <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster.testCleanMergeReference(TestRegionMergeTransactionOnCluster.java:284)
> {code}
> Before calling CatalogJanitor#scan(), the test does:
> {code}
> int newcount1 = 0;
> while (System.currentTimeMillis() < timeout) {
>   for (HColumnDescriptor colFamily : columnFamilies) {
>     newcount1 += hrfs.getStoreFiles(colFamily.getName()).size();
>   }
>   if (newcount1 <= 1) {
>     break;
>   }
>   Thread.sleep(50);
> }
> {code}
> newcount1 is not cleared at the beginning of the loop. This means that if the check for newcount1 <= 1 doesn't pass on the first iteration, it cannot pass on any subsequent iteration.
> After the timeout is exhausted, admin.runCatalogScan() is called. However, there is a chance that CatalogJanitor#scan() has already been called by the Chore (during the wait period), leaving the cleaned count 0 and failing the test.
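One way to repair the accumulation bug described above is to recompute the count inside each iteration. This is an illustrative sketch, not necessarily what HBASE-15192.v1.patch does; StoreFileCounter is a stand-in for the hrfs.getStoreFiles(...) calls.

```java
// Illustrative fix for the wait loop above: newcount1 must be recomputed at
// the top of each iteration. In the original it only ever accumulates, so
// the "newcount1 <= 1" exit condition can never become true after the first
// pass. StoreFileCounter is a placeholder for the real HBase calls.
public class MergeReferenceWait {
    interface StoreFileCounter { int count(); }

    static int waitForCleanup(StoreFileCounter counter, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        int newcount1 = counter.count();
        while (System.currentTimeMillis() < deadline) {
            newcount1 = counter.count();   // reset each iteration, not accumulated
            if (newcount1 <= 1) {
                break;
            }
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return newcount1;
    }
}
```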
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15193) Rename ByteBufferInputStream in master
[ https://issues.apache.org/jira/browse/HBASE-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124750#comment-15124750 ] ramkrishna.s.vasudevan commented on HBASE-15193:

[~enis] Yes, this ByteBuffInputStream is for working with ByteBuff, which is an abstraction over a single ByteBuffer or multiple ByteBuffers, used in the read path.

> Rename ByteBufferInputStream in master
> --------------------------------------
>
> Key: HBASE-15193
> URL: https://issues.apache.org/jira/browse/HBASE-15193
> Project: HBase
> Issue Type: Bug
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: hbase-15193.patch
>
> master has ByteBuffInputStream while branch-1 has ByteBufferInputStream.
> cc. [~ram_krish], [~anoopsharma]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
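The "abstraction over single or multiple ByteBuffers" idea can be sketched minimally as below. The real org.apache.hadoop.hbase.nio.ByteBuff has a much larger surface; MiniByteBuff here is a toy name for illustration only.

```java
import java.nio.ByteBuffer;

// Toy sketch of the idea behind ByteBuff: one read API over either a single
// ByteBuffer or several chained ByteBuffers, so read-path callers need not
// care which they were handed.
public abstract class MiniByteBuff {
    public abstract byte get(int index);
    public abstract int capacity();

    public static MiniByteBuff wrap(final ByteBuffer... buffers) {
        if (buffers.length == 1) {
            final ByteBuffer b = buffers[0];
            return new MiniByteBuff() {           // single-buffer fast path
                public byte get(int index) { return b.get(index); }
                public int capacity() { return b.capacity(); }
            };
        }
        return new MiniByteBuff() {               // multi-buffer path
            public byte get(int index) {
                for (ByteBuffer b : buffers) {    // walk chunks to locate index
                    if (index < b.capacity()) return b.get(index);
                    index -= b.capacity();
                }
                throw new IndexOutOfBoundsException();
            }
            public int capacity() {
                int total = 0;
                for (ByteBuffer b : buffers) total += b.capacity();
                return total;
            }
        };
    }
}
```

The single-buffer fast path matters because in the common case the read hits one HFile block buffer; only reads spanning chunk boundaries pay for the walk.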
[jira] [Commented] (HBASE-15193) Rename ByteBufferInputStream in master
[ https://issues.apache.org/jira/browse/HBASE-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124713#comment-15124713 ] Hadoop QA commented on HBASE-15193:
---
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 3m 3s | master passed |
| +1 | compile | 0m 58s | master passed with JDK v1.8.0_66 |
| +1 | compile | 0m 51s | master passed with JDK v1.7.0_91 |
| +1 | checkstyle | 5m 9s | master passed |
| +1 | mvneclipse | 0m 26s | master passed |
| -1 | findbugs | 0m 44s | hbase-common in master has 1 extant Findbugs warnings. |
| -1 | findbugs | 1m 53s | hbase-server in master has 1 extant Findbugs warnings. |
| +1 | javadoc | 0m 47s | master passed with JDK v1.8.0_66 |
| +1 | javadoc | 0m 49s | master passed with JDK v1.7.0_91 |
| +1 | mvninstall | 1m 4s | the patch passed |
| +1 | compile | 0m 56s | the patch passed with JDK v1.8.0_66 |
| +1 | javac | 0m 56s | the patch passed |
| +1 | compile | 1m 1s | the patch passed with JDK v1.7.0_91 |
| +1 | javac | 1m 1s | the patch passed |
| +1 | checkstyle | 6m 5s | the patch passed |
| +1 | mvneclipse | 0m 33s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | hadoopcheck | 23m 54s | Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. |
| +1 | findbugs | 2m 51s | the patch passed |
| +1 | javadoc | 0m 46s | the patch passed with JDK v1.8.0_66 |
| +1 | javadoc | 0m 52s | the patch passed with JDK v1.7.0_91 |
| +1 | unit | 1m 44s | hbase-common in the patch passed with JDK v1.8.0_66. |
| -1 | unit | 124m 40s | hbase-server in the patch failed with JDK v1.8.0_66. |
| +1 | unit | 2m 7s | hbase-common in the patch passed with JDK v1.7.0_91. |
| -1 | unit | 0m 19s | hbase-server in the patch failed with JDK v1.7.0_91. |
| +1 | asflicense | 0m 19s | Patch does not generate ASF License warnings. |
| | | 182m 51s | |

|| Reason || Tests ||
| JDK v1
[jira] [Commented] (HBASE-15192) TestRegionMergeTransactionOnCluster#testCleanMergeReference is flaky
[ https://issues.apache.org/jira/browse/HBASE-15192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124691#comment-15124691 ] Hadoop QA commented on HBASE-15192:
---
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 2m 39s | master passed |
| +1 | compile | 0m 30s | master passed with JDK v1.8.0_72 |
| +1 | compile | 0m 35s | master passed with JDK v1.7.0_91 |
| +1 | checkstyle | 3m 59s | master passed |
| +1 | mvneclipse | 0m 16s | master passed |
| -1 | findbugs | 1m 55s | hbase-server in master has 1 extant Findbugs warnings. |
| +1 | javadoc | 0m 25s | master passed with JDK v1.8.0_72 |
| +1 | javadoc | 0m 35s | master passed with JDK v1.7.0_91 |
| +1 | mvninstall | 0m 46s | the patch passed |
| +1 | compile | 0m 33s | the patch passed with JDK v1.8.0_72 |
| +1 | javac | 0m 33s | the patch passed |
| +1 | compile | 0m 33s | the patch passed with JDK v1.7.0_91 |
| +1 | javac | 0m 33s | the patch passed |
| +1 | checkstyle | 4m 20s | the patch passed |
| +1 | mvneclipse | 0m 17s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | hadoopcheck | 22m 10s | Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. |
| +1 | findbugs | 1m 58s | the patch passed |
| +1 | javadoc | 0m 24s | the patch passed with JDK v1.8.0_72 |
| +1 | javadoc | 0m 33s | the patch passed with JDK v1.7.0_91 |
| +1 | unit | 80m 11s | hbase-server in the patch passed with JDK v1.8.0_72. |
| -1 | unit | 83m 26s | hbase-server in the patch failed with JDK v1.7.0_91. |
| +1 | asflicense | 0m 17s | Patch does not generate ASF License warnings. |
| | | 206m 47s | |

|| Reason || Tests ||
| JDK v1.7.0_91 Timed out junit tests | org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient |

|| Subsystem || Report/Notes ||
| Docker | Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-30 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12785299/HBASE-15192.v1.patch |
| JIRA Issue | HBASE-15192 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux d94aecd37bae 3.13
[jira] [Commented] (HBASE-14810) Update Hadoop support description to explain "not tested" vs "not supported"
[ https://issues.apache.org/jira/browse/HBASE-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124684#comment-15124684 ] Hudson commented on HBASE-14810:

FAILURE: Integrated in HBase-Trunk_matrix #668 (See [https://builds.apache.org/job/HBase-Trunk_matrix/668/])
HBASE-14810 Update Hadoop support description to explain "not tested" vs (mstanleyjones: rev 9cd487129d5a0048216ff00ef15fdb8effc525ae)
* src/main/asciidoc/_chapters/preface.adoc
* src/main/asciidoc/_chapters/configuration.adoc
* src/main/asciidoc/_chapters/upgrading.adoc
* src/main/asciidoc/_chapters/getting_started.adoc

> Update Hadoop support description to explain "not tested" vs "not supported"
> ----------------------------------------------------------------------------
>
> Key: HBASE-14810
> URL: https://issues.apache.org/jira/browse/HBASE-14810
> Project: HBase
> Issue Type: Bug
> Components: documentation
> Reporter: Sean Busbey
> Assignee: Misty Stanley-Jones
> Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-14810.patch
>
> from [~ndimiduk] in thread about hadoop 2.6.1+:
> {quote}
> While we're in there, we should also clarify the meaning of "Not Supported" vs "Not Tested". It seems we don't say what we mean by these distinctions.
> {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15177) Reduce garbage created under high load
[ https://issues.apache.org/jira/browse/HBASE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124679#comment-15124679 ] Hadoop QA commented on HBASE-15177:
---
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 2m 35s | master passed |
| +1 | compile | 0m 57s | master passed with JDK v1.8.0_72 |
| +1 | compile | 1m 5s | master passed with JDK v1.7.0_91 |
| +1 | checkstyle | 6m 59s | master passed |
| +1 | mvneclipse | 0m 37s | master passed |
| -1 | findbugs | 0m 43s | hbase-common in master has 1 extant Findbugs warnings. |
| -1 | findbugs | 1m 48s | hbase-server in master has 1 extant Findbugs warnings. |
| +1 | javadoc | 0m 52s | master passed with JDK v1.8.0_72 |
| +1 | javadoc | 1m 10s | master passed with JDK v1.7.0_91 |
| -1 | mvninstall | 0m 15s | hbase-client in the patch failed. |
| -1 | mvninstall | 0m 25s | hbase-server in the patch failed. |
| -1 | compile | 0m 19s | hbase-server in the patch failed with JDK v1.8.0_72. |
| -1 | javac | 0m 19s | hbase-server in the patch failed with JDK v1.8.0_72. |
| -1 | compile | 0m 25s | hbase-server in the patch failed with JDK v1.7.0_91. |
| -1 | javac | 0m 25s | hbase-server in the patch failed with JDK v1.7.0_91. |
| -1 | checkstyle | 1m 4s | Patch generated 3 new checkstyle issues in hbase-common (total was 7, now 10). |
| +1 | mvneclipse | 0m 36s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | hadoopcheck | 21m 40s | Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. |
| -1 | findbugs | 0m 51s | hbase-common introduced 1 new FindBugs issues. |
| +1 | javadoc | 0m 52s | the patch passed with JDK v1.8.0_72 |
| +1 | javadoc | 1m 7s | the patch passed with JDK v1.7.0_91 |
| -1 | unit | 0m 45s | hbase-client in the patch failed with JDK v1.8.0_72. |
| +1 | unit | 1m 34s | hbase-common in the patch passed with JDK v1.8.0_72. |
| -1 | unit | 0m 18s | hbase-server in the patch failed with JDK v1.8.0_72. |
| -1 | unit | 0m 54s | hbase-client in the patch failed with JDK v1.7.0_91. |
[jira] [Updated] (HBASE-15177) Reduce garbage created under high load
[ https://issues.apache.org/jira/browse/HBASE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-15177:

Status: Patch Available (was: Open)

> Reduce garbage created under high load
> --------------------------------------
>
> Key: HBASE-15177
> URL: https://issues.apache.org/jira/browse/HBASE-15177
> Project: HBase
> Issue Type: Improvement
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0
>
> Attachments: Screen Shot 2016-01-26 at 10.03.48 PM.png, Screen Shot 2016-01-26 at 10.03.56 PM.png, Screen Shot 2016-01-26 at 10.06.16 PM.png, Screen Shot 2016-01-26 at 10.15.15 PM.png, hbase-15177_v0.patch, hbase-15177_v1.patch
>
> I have been doing some profiling of the garbage being created. The idea was to follow up on HBASE-14490 and experiment with offheap IPC byte buffers and byte buffer re-use. However, without changing the IPC byte buffers for now, there are a couple of (easy) improvements that I've identified from profiling:
> 1. RPCServer.Connection.processRequest() should work with ByteBuffer instead of byte[] and not re-create CodedInputStream a few times.
> 2. RSRpcServices.getRegion() allocates two byte arrays for the region, while only 1 is needed.
> 3. AnnotationReadingPriorityFunction is very expensive in allocations. Mainly it allocates the regionName byte[] to get the table name. We already set the priority for most of the operations (multi, get, increment, etc.) but we are only reading the priority in case of multi. We should use the priority from the client side.
> Let's do the simple improvements in this patch; we can get to IPC buffer re-use in HBASE-14490. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
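The allocation-avoidance idea in item 2 above can be sketched as follows: an HBase region name begins with "<tableName>,", so the table name can be located as an (offset, length) range inside the region-name bytes instead of being copied into a second byte[]. The class and method names here are placeholders for illustration, not the actual RSRpcServices code.

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch: find the table-name prefix of a region name without
// allocating an intermediate byte[]. A region name has the shape
// "<tableName>,<startKey>,<timestamp>.<encodedName>." so the table name ends
// at the first ',' delimiter.
public class TableNameRange {
    // Returns the length of the table-name prefix, i.e. the index of the
    // first ',', allocating nothing.
    public static int tableNameLength(byte[] regionName) {
        for (int i = 0; i < regionName.length; i++) {
            if (regionName[i] == (byte) ',') {
                return i;
            }
        }
        throw new IllegalArgumentException("Not a valid region name");
    }

    public static String tableNameString(byte[] regionName) {
        // A single String allocation over the range; no intermediate byte[].
        return new String(regionName, 0, tableNameLength(regionName),
                StandardCharsets.UTF_8);
    }
}
```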
[jira] [Updated] (HBASE-15177) Reduce garbage created under high load
[ https://issues.apache.org/jira/browse/HBASE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-15177:

Attachment: hbase-15177_v1.patch

Here is the updated patch. Brings back BBIS from branch-1.

> Reduce garbage created under high load

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15177) Reduce garbage created under high load
[ https://issues.apache.org/jira/browse/HBASE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124592#comment-15124592 ] Anoop Sam John commented on HBASE-15177:

Yes, in trunk we added ByteBufferIS and later changed it to ByteBuffIS, as we have ByteBuff in HFileBlock. Maybe we can add ByteBufferIS back in HBase also if needed. I think you will remove the need for it in your next patch.

> Reduce garbage created under high load

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124586#comment-15124586 ] Anoop Sam John commented on HBASE-15180: CellScanner - Yes, we made it follow our normal scanner pattern: call advance and then get the current Cell. Do you mean we should instead just get the next Cell from the Decoder? bq. If so, we have CellOutputStream, should your CellReadable be a CellInputStream with read methods that return Cells to mirror the write methods we have in CellOutputStream. Your CellReadableByteArrayInputStream would become CellByteArrayInputStream and would implement CellInputStream. So you suggest renaming the interface. That is fine and looks better. bq. I've asked this before I know but do we have to flag when tags and when without? Internally, when we read, the Cell will know if it has tags or not? This was done to avoid the overhead of parsing tagsLength every time. The old method we used to read cells from an InputStream was iscreate(final InputStream in, boolean withTags). KVCodecWithTags passes this as true and we make a KeyValue instance; by default we have KVCodec, which passes false, and we make a NoTagsKV whose getTagsLength just returns 0. But there is one more issue: when we copy the Cell to MSLAB before adding it to the Memstore, we copy and make a plain new KeyValue, so the NoTagsKV becomes a KV there! That needs handling, but it is a separate issue. Yes, as you said, we can avoid passing this boolean: after the Cell has been read out of the new CellInputStream (as a KeyValue object), we parse the tags length once, see whether it is 0 or > 0, and recreate a NoTagsKV if needed. That is one extra parsing op and one extra object creation; I believe that is why we went with passing the boolean at the time. What do you say? {quote} What is the length in the below? Cell readCell(int length, boolean withTags) throws IOException; Do we have to pass this in each time? 
{quote} This was needed because of the way we use the PushbackIS. In the Decoder impl we wrap the actual IS with this PBIS. The Decoder advance operation reads one byte to know whether the stream has ended, and the remaining 3 bytes then have to be combined with that already-read byte to get the Cell's total length. That means the read of the Cell's length has to happen in the PBIS impl (only there do we know the already-read single byte), while the read of the Cell (construction of the Cell) is done in CellByteArrayInputStream. That is why we have to pass the length. I agree it looks ugly; I had no other way. This PBIS thing was done to read Cells properly in the WAL decoder. When reading Cells from a byte[] from RPC we don't really have this issue. Let me see whether we can solve this; maybe the two cases need special treatment. bq. IPCUtil takes a Configuration? Can we not just read the Configuration on construction rather than pass this flag per call? IPCUtil#createCellScanner(final Codec codec, final CompressionCodec compressor, final byte [] cellBlock) is used from RpcClient; the new method is called from RpcServer alone. I don't want this direct Cell creation to happen on the client side, where we don't have an MSLAB copy. Still, as you said, if we read the conf property, set it in IPCUtil, and use it while constructing the Stream, I can avoid passing the boolean. My worry was that if the server's config xml were used on the client and it said to use MSLAB (even though the client has no Memstore), we would behave wrongly. But you now suggest adding a new config to decide this copy, rather than relying on the MSLAB setting; yes, that is good. The only downside is yet another new config, but it does look better. Let me do that. 
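The PushbackInputStream arrangement described above (probe one byte to detect end-of-stream, then fold it back into the 4-byte cell length) can be sketched as follows; the class and method are invented for illustration and are not the actual codec code.

```java
import java.io.IOException;
import java.io.PushbackInputStream;

// Invented miniature of the decoder's advance() probe: read one byte to see
// whether the stream ended; if not, unread it so the 4-byte big-endian cell
// length can be assembled whole. In the real code the already-read byte is
// only known inside the PBIS subclass, which is why the length read lives
// there and the length must then be passed to the cell-reading stream.
public class LengthReadSketch {

  // Returns -1 at end of stream, else the cell's 4-byte length.
  static int readLength(PushbackInputStream in) throws IOException {
    int probe = in.read();        // advance(): is the stream exhausted?
    if (probe == -1) {
      return -1;
    }
    in.unread(probe);             // put the probed byte back
    int b1 = in.read(), b2 = in.read(), b3 = in.read(), b4 = in.read();
    return (b1 << 24) | (b2 << 16) | (b3 << 8) | b4;
  }
}
```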
> Reduce garbage created while reading Cells from Codec Decoder > - > > Key: HBASE-15180 > URL: https://issues.apache.org/jira/browse/HBASE-15180 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 2.0.0 > > Attachments: HBASE-15180.patch, HBASE-15180_V2.patch > > > In KeyValueDecoder#parseCell (Default Codec decoder) we use > KeyValueUtil#iscreate to read cells from the InputStream. Here we 1st create > a byte[] of length 4 and read the cell length and then an array of Cell's > length and read in cell bytes into it and create a KV. > Actually in server we read the reqs into a byte[] and CellScanner is created > on top of a ByteArrayInputStream on top of this. By default in write path, we > have MSLAB usage ON. So while adding Cells to memstore, we will copy the Cell > bytes to MSLAB memory chunks (default 2 MB size) and recreate Cells over that > bytes. S
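The tags trade-off debated in this thread (one extra tagsLength parse and one extra object creation versus passing the withTags boolean) can be shown on a toy cell layout. The layout below is deliberately simplified and is not the real KeyValue serialization; all names here are invented.

```java
import java.nio.ByteBuffer;

// Toy cell layout (simplified, NOT the real KeyValue format):
//   [keyLen:int][valLen:int][key bytes][value bytes][tagsLen:short][tag bytes]
// with the trailing tags section optional.
public class TagsSketch {

  // The one extra parse the boolean flag avoids: find the tags length.
  static int tagsLength(byte[] cell) {
    ByteBuffer b = ByteBuffer.wrap(cell);
    int keyLen = b.getInt();
    int valLen = b.getInt();
    int tagsOffset = 8 + keyLen + valLen;
    if (tagsOffset + 2 > cell.length) {
      return 0;                 // serialized without a tags section
    }
    return b.getShort(tagsOffset) & 0xffff;
  }

  // The one extra object-creation decision: tags-free cells can use a
  // cheaper type whose getTagsLength() is a constant 0.
  static String chooseCellType(byte[] cell) {
    return tagsLength(cell) == 0 ? "NoTagsKeyValue" : "KeyValue";
  }
}
```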
[jira] [Resolved] (HBASE-15193) Rename ByteBufferInputStream in master
[ https://issues.apache.org/jira/browse/HBASE-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar resolved HBASE-15193. --- Resolution: Not A Problem Turns out in master we have ByteBuff that is abstracting java's ByteBuffer. > Rename ByteBufferInputStream in master > -- > > Key: HBASE-15193 > URL: https://issues.apache.org/jira/browse/HBASE-15193 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0 > > Attachments: hbase-15193.patch > > > master has ByteBuffInputStream while branch-1 has ByteBufferInputStream. > cc. [~ram_krish], [~anoopsharma]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15193) Rename ByteBufferInputStream in master
[ https://issues.apache.org/jira/browse/HBASE-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-15193: -- Status: Open (was: Patch Available) > Rename ByteBufferInputStream in master > -- > > Key: HBASE-15193 > URL: https://issues.apache.org/jira/browse/HBASE-15193 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0 > > Attachments: hbase-15193.patch > > > master has ByteBuffInputStream while branch-1 has ByteBufferInputStream. > cc. [~ram_krish], [~anoopsharma]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15177) Reduce garbage created under high load
[ https://issues.apache.org/jira/browse/HBASE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124572#comment-15124572 ] Enis Soztutar commented on HBASE-15177: --- bq. Will this be needed? Because the CIS would have operated on the buf array only right? This was an artifact of trying IS over BB in CIS. You are right, we do not need it. Removed. bq. On the patch, seems odd pulling BBIS from zookeeper especially given we have a BBOS in hbase itself. To fix one day. Turns out we have our own BBIS. I did not realize that we were depending on the zk one. See HBASE-15193. bq. Can we just remove AnnotationReadingPriorityFunction now you've done all the conversions Enis Soztutar? Can do in another issue. Client side operations like get/multi, etc are good now. We do not yet set the priorities for Admin or things like RegionServerReport. I think we need a follow-up issue. bq. I notice that pb2.6.0 says CIS supports BBs https://github.com/google/protobuf/blob/master/java/core/src/main/java/com/google/protobuf/CodedInputStream.java#L104 It still does a copy to byte[] for reading. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
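The AnnotationReadingPriorityFunction cost discussed above stems from allocating a regionName byte[] just to derive the table name. Since region names take the form table,startKey,timestamp.encoded, the table-name length can be found with a plain delimiter scan and no allocation; the helper below is an invented sketch, not the HBase code.

```java
// Hypothetical sketch: derive the table-name length from region-name bytes
// without copying them into a new array. Class and method names are
// invented for illustration only.
public class PrioritySketch {

  static int tableNameLength(byte[] regionName) {
    for (int i = 0; i < regionName.length; i++) {
      if (regionName[i] == ',') {
        return i;               // table name occupies bytes [0, i)
      }
    }
    return regionName.length;   // no delimiter: treat whole name as table
  }
}
```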
[jira] [Commented] (HBASE-15177) Reduce garbage created under high load
[ https://issues.apache.org/jira/browse/HBASE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124551#comment-15124551 ] Anoop Sam John commented on HBASE-15177: bq. I notice that pb2.6.0 says CIS supports BBs. Yes, I checked the PB code base (trunk) yesterday to see what it newly supports around BBs. There is newInstance(BB) in CIS, but the impl again expects an array: when the buffer is on-heap, it just refers to the BB's array() and avoids any sort of copy; when it is a DBB, it creates a byte[] and copies into it. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
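The on-heap versus direct-buffer behaviour described above can be made concrete with java.nio alone: a heap ByteBuffer exposes its backing array for zero-copy reads, while a direct one forces a copy into a fresh byte[]. This mirrors the described pb behaviour but is only a sketch, not the protobuf source.

```java
import java.nio.ByteBuffer;

// Sketch of the copy/no-copy fork that CodedInputStream.newInstance(BB) is
// described as taking above. Heap buffers have a backing array; direct
// (off-heap) buffers do not, so their contents must be copied out before an
// array-oriented reader can consume them.
public class BufferAccessSketch {

  static boolean needsCopy(ByteBuffer buf) {
    return !buf.hasArray();           // direct buffers expose no array
  }

  static byte[] toArray(ByteBuffer buf) {
    if (buf.hasArray() && buf.arrayOffset() == 0
        && buf.array().length == buf.remaining()) {
      return buf.array();             // zero-copy path for plain heap buffers
    }
    byte[] copy = new byte[buf.remaining()];
    buf.duplicate().get(copy);        // copy path, e.g. for direct buffers
    return copy;
  }
}
```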
[jira] [Updated] (HBASE-15193) Rename ByteBufferInputStream in master
[ https://issues.apache.org/jira/browse/HBASE-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-15193: -- Status: Patch Available (was: Open) > Rename ByteBufferInputStream in master > -- > > Key: HBASE-15193 > URL: https://issues.apache.org/jira/browse/HBASE-15193 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0 > > Attachments: hbase-15193.patch > > > master has ByteBuffInputStream while branch-1 has ByteBufferInputStream. > cc. [~ram_krish], [~anoopsharma]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15193) Rename ByteBufferInputStream in master
[ https://issues.apache.org/jira/browse/HBASE-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-15193: -- Attachment: hbase-15193.patch Simple patch. > Rename ByteBufferInputStream in master > -- > > Key: HBASE-15193 > URL: https://issues.apache.org/jira/browse/HBASE-15193 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0 > > Attachments: hbase-15193.patch > > > master has ByteBuffInputStream while branch-1 has ByteBufferInputStream. > cc. [~ram_krish], [~anoopsharma]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15193) Rename ByteBufferInputStream in master
Enis Soztutar created HBASE-15193: - Summary: Rename ByteBufferInputStream in master Key: HBASE-15193 URL: https://issues.apache.org/jira/browse/HBASE-15193 Project: HBase Issue Type: Bug Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 2.0.0 master has ByteBuffInputStream while branch-1 has ByteBufferInputStream. cc. [~ram_krish], [~anoopsharma]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15192) TestRegionMergeTransactionOnCluster#testCleanMergeReference is flaky
[ https://issues.apache.org/jira/browse/HBASE-15192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15192: --- Status: Patch Available (was: Open) > TestRegionMergeTransactionOnCluster#testCleanMergeReference is flaky > > > Key: HBASE-15192 > URL: https://issues.apache.org/jira/browse/HBASE-15192 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Minor > Attachments: HBASE-15192.v1.patch > > > TestRegionMergeTransactionOnCluster#testCleanMergeReference fails > intermittently due to failed assertion on cleaned merge region count: > {code} > testCleanMergeReference(org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster) > Time elapsed: 64.183 sec <<< FAILURE! > java.lang.AssertionError: null > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster.testCleanMergeReference(TestRegionMergeTransactionOnCluster.java:284) > {code} > Before calling CatalogJanitor#scan(), the test does: > {code} > int newcount1 = 0; > while (System.currentTimeMillis() < timeout) { > for(HColumnDescriptor colFamily : columnFamilies) { > newcount1 += hrfs.getStoreFiles(colFamily.getName()).size(); > } > if(newcount1 <= 1) { > break; > } > Thread.sleep(50); > } > {code} > newcount1 is not cleared at the beginning of the loop. > This means that if the check for newcount1 <= 1 doesn't pass the first > iteration, it wouldn't pass in subsequent iterations. > After timeout is exhausted, admin.runCatalogScan() is called. However, there > is a chance that CatalogJanitor#scan() has been called by the Chore already > (during the wait period), leaving the cleaned count 0 and failing the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15192) TestRegionMergeTransactionOnCluster#testCleanMergeReference is flaky
[ https://issues.apache.org/jira/browse/HBASE-15192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15192: --- Attachment: HBASE-15192.v1.patch First attempt for a fix. The log level change in CatalogJanitor is for collecting more information in case the test fails. It will be taken out in the final patch. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15192) TestRegionMergeTransactionOnCluster#testCleanMergeReference is flaky
Ted Yu created HBASE-15192: -- Summary: TestRegionMergeTransactionOnCluster#testCleanMergeReference is flaky Key: HBASE-15192 URL: https://issues.apache.org/jira/browse/HBASE-15192 Project: HBase Issue Type: Test Reporter: Ted Yu Assignee: Ted Yu Priority: Minor TestRegionMergeTransactionOnCluster#testCleanMergeReference fails intermittently due to failed assertion on cleaned merge region count: {code} testCleanMergeReference(org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster) Time elapsed: 64.183 sec <<< FAILURE! java.lang.AssertionError: null at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster.testCleanMergeReference(TestRegionMergeTransactionOnCluster.java:284) {code} Before calling CatalogJanitor#scan(), the test does: {code} int newcount1 = 0; while (System.currentTimeMillis() < timeout) { for(HColumnDescriptor colFamily : columnFamilies) { newcount1 += hrfs.getStoreFiles(colFamily.getName()).size(); } if(newcount1 <= 1) { break; } Thread.sleep(50); } {code} newcount1 is not cleared at the beginning of the loop. This means that if the check for newcount1 <= 1 doesn't pass the first iteration, it wouldn't pass in subsequent iterations. After timeout is exhausted, admin.runCatalogScan() is called. However, there is a chance that CatalogJanitor#scan() has been called by the Chore already (during the wait period), leaving the cleaned count 0 and failing the test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
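The accumulator bug described above has a one-line fix: reset newcount1 at the top of every polling iteration, so a high store-file count seen on the first pass cannot keep the check failing forever. The helper below models each polling attempt as an array of per-column-family store-file counts; it is an invented miniature, not the actual test code.

```java
// Sketch of the corrected polling loop from the flaky test: the reset of
// the accumulator inside the loop is the line the original code is missing.
public class MergePollSketch {

  static boolean waitForMergeCleanup(int[][] countsPerAttempt) {
    for (int[] familyCounts : countsPerAttempt) {
      int newcount = 0;               // reset every iteration (the missing line)
      for (int c : familyCounts) {
        newcount += c;                // sum store files across column families
      }
      if (newcount <= 1) {
        return true;                  // merge reference cleaned up
      }
      // the real test would Thread.sleep(50) here before re-checking
    }
    return false;                     // timed out
  }
}
```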
[jira] [Commented] (HBASE-15191) CopyTable and VerifyReplication - Option to specify batch size, versions
[ https://issues.apache.org/jira/browse/HBASE-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124389#comment-15124389 ] Heng Chen commented on HBASE-15191: --- {{hbase.mapreduce.scan.batchsize}} and {{hbase.mapreduce.scan.cachedrows}} meet your needs? > CopyTable and VerifyReplication - Option to specify batch size, versions > > > Key: HBASE-15191 > URL: https://issues.apache.org/jira/browse/HBASE-15191 > Project: HBase > Issue Type: Improvement > Components: Replication >Affects Versions: 0.98.16.1 >Reporter: Parth Shah >Priority: Minor > Attachments: HBASE_15191.patch > > > Need option to specify batch size for CopyTable and VerifyReplication. We > are working on patch for this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11792) Organize PerformanceEvaluation usage output
[ https://issues.apache.org/jira/browse/HBASE-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-11792: Status: Patch Available (was: Open) > Organize PerformanceEvaluation usage output > --- > > Key: HBASE-11792 > URL: https://issues.apache.org/jira/browse/HBASE-11792 > Project: HBase > Issue Type: Improvement > Components: Performance, test >Reporter: Nick Dimiduk >Assignee: Misty Stanley-Jones >Priority: Minor > Labels: beginner > Attachments: HBASE-11792-0.98.patch, HBASE-11792-branch-1.0.patch, > HBASE-11792-branch-1.1.patch, HBASE-11792-branch-1.2.patch, > HBASE-11792-branch-1.patch, HBASE-11792.patch > > > PerformanceEvaluation has enjoyed a good bit of attention recently. All the > new features are muddled together. It would be nice to organize the output of > the Options list according to some scheme. I was thinking you're group > entries by when they're used. For example > *General options* > - nomapred > - rows > - oneCon > - ... > *Table Creation/Write tests* > - compress > - flushCommits > - valueZipf > - ... > *Read tests* > - filterAll > - multiGet > - replicas > - ... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11792) Organize PerformanceEvaluation usage output
[ https://issues.apache.org/jira/browse/HBASE-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-11792: Status: Open (was: Patch Available) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14810) Update Hadoop support description to explain "not tested" vs "not supported"
[ https://issues.apache.org/jira/browse/HBASE-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-14810: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.0.0 Status: Resolved (was: Patch Available) Pushed to master. > Update Hadoop support description to explain "not tested" vs "not supported" > > > Key: HBASE-14810 > URL: https://issues.apache.org/jira/browse/HBASE-14810 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Sean Busbey >Assignee: Misty Stanley-Jones >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-14810.patch > > > from [~ndimiduk] in thread about hadoop 2.6.1+: > {quote} > While we're in there, we should also clarify the meaning of "Not Supported" > vs "Not Tested". It seems we don't say what we mean by these distinctions. > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14810) Update Hadoop support description to explain "not tested" vs "not supported"
[ https://issues.apache.org/jira/browse/HBASE-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124361#comment-15124361 ] stack commented on HBASE-14810: --- Man, we got a bunch of flakies again... on master branch at least. +1 on patch. Excellent writeup. > Update Hadoop support description to explain "not tested" vs "not supported" > > > Key: HBASE-14810 > URL: https://issues.apache.org/jira/browse/HBASE-14810 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Sean Busbey >Assignee: Misty Stanley-Jones >Priority: Critical > Attachments: HBASE-14810.patch > > > from [~ndimiduk] in thread about hadoop 2.6.1+: > {quote} > While we're in there, we should also clarify the meaning of "Not Supported" > vs "Not Tested". It seems we don't say what we mean by these distinctions. > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14770) RowCounter argument input parse error
[ https://issues.apache.org/jira/browse/HBASE-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124359#comment-15124359 ] Adrian Muraru commented on HBASE-14770: --- This doesn't seem to be related to this patch > RowCounter argument input parse error > - > > Key: HBASE-14770 > URL: https://issues.apache.org/jira/browse/HBASE-14770 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 2.0.0, 1.3.0, 1.2.1, 1.0.3 >Reporter: Frank Chang >Assignee: Adrian Muraru >Priority: Minor > Attachments: HBASE-14770-master-2.patch, HBASE-14770-master.patch > > > I tried to use the > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java > code, packaged a new jar, and then executed the following shell script: > {code:none} > hadoop jar test.jar --range=row001,row002 cf:c2 > {code} > Then I got "NoSuchColumnFamilyException". > It seems to be an input-argument parsing problem. > I tried to add > {code:java} > continue; > {code} > after #L123 to avoid the "--range=*" string being appended to the qualifier, > and the problem seems solved. > -- > data in table: > ||row||cf:c1||cf:c2||cf:c3||cf:c4|| > |row001|v1|v2| | | > |row002| |v2|v3| | > |row003| | |v3|v4| > |row004|v1| | |v4| > Exception Message: > {code:java} > org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: > org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column > family --range=row001,row002 does not exist in region > frank_rowcounttest1,,1446191360354.6c52c71a82f0fa041c467002a2bf433c. in table > 'frank_rowcounttest1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', > BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', > VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => > 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-15037) CopyTable and VerifyReplication - Option to specify batch size, versions
[ https://issues.apache.org/jira/browse/HBASE-15037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ramana Uppala resolved HBASE-15037. --- Resolution: Duplicate Closing as this is duplicate of https://issues.apache.org/jira/browse/HBASE-15191 > CopyTable and VerifyReplication - Option to specify batch size, versions > > > Key: HBASE-15037 > URL: https://issues.apache.org/jira/browse/HBASE-15037 > Project: HBase > Issue Type: Improvement > Components: Replication >Affects Versions: 0.98.16.1 >Reporter: Ramana Uppala >Priority: Minor > > Need option to specify batch size for CopyTable and VerifyReplication. We > are working on patch for this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14810) Update Hadoop support description to explain "not tested" vs "not supported"
[ https://issues.apache.org/jira/browse/HBASE-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124285#comment-15124285 ] Sean Busbey commented on HBASE-14810: - unit test failures safe to ignore, since asciidoc changes don't impact them (filed as YETUS-296) > Update Hadoop support description to explain "not tested" vs "not supported" > > > Key: HBASE-14810 > URL: https://issues.apache.org/jira/browse/HBASE-14810 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Sean Busbey >Assignee: Misty Stanley-Jones >Priority: Critical > Attachments: HBASE-14810.patch > > > from [~ndimiduk] in thread about hadoop 2.6.1+: > {quote} > While we're in there, we should also clarify the meaning of "Not Supported" > vs "Not Tested". It seems we don't say what we mean by these distinctions. > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15186) HBASE-15158 Preamble: fix findbugs, add javadoc and some util
[ https://issues.apache.org/jira/browse/HBASE-15186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124283#comment-15124283 ] Hadoop QA commented on HBASE-15186: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 58s {color} | {color:green} master passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 45s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 1s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 56s {color} | {color:red} hbase-common in master has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 19s {color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s {color} | {color:green} master passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 17s {color} | {color:red} hbase-examples in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 59s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 59s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 45s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 7s {color} | {color:red} Patch generated 4 new checkstyle issues in hbase-server (total was 1121, now 1100). {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 3s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 31m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 25s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s {color} | {color:green} hbase-client in the patch passed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s {color} | {color:green} hbase-common in the patch passed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 35s {color} | {color:green} hbase-examples in the patch passed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 141m 19s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 13s {color} | {color:green} hbase-client in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} unit {color} |
[jira] [Updated] (HBASE-15191) CopyTable and VerifyReplication - Option to specify batch size, versions
[ https://issues.apache.org/jira/browse/HBASE-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Parth Shah updated HBASE-15191: --- Attachment: HBASE_15191.patch > CopyTable and VerifyReplication - Option to specify batch size, versions > > > Key: HBASE-15191 > URL: https://issues.apache.org/jira/browse/HBASE-15191 > Project: HBase > Issue Type: Improvement > Components: Replication >Affects Versions: 0.98.16.1 >Reporter: Parth Shah >Priority: Minor > Attachments: HBASE_15191.patch > > > Need option to specify batch size for CopyTable and VerifyReplication. We > are working on patch for this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15191) CopyTable and VerifyReplication - Option to specify batch size, versions
Parth Shah created HBASE-15191: -- Summary: CopyTable and VerifyReplication - Option to specify batch size, versions Key: HBASE-15191 URL: https://issues.apache.org/jira/browse/HBASE-15191 Project: HBase Issue Type: Improvement Components: Replication Affects Versions: 0.98.16.1 Reporter: Parth Shah Priority: Minor Need option to specify batch size for CopyTable and VerifyReplication. We are working on patch for this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
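A change along these lines needs the two new options plumbed through the tool's argument parsing before they can be applied to the job's Scan (via the standard `Scan#setBatch(int)` and `Scan#setMaxVersions(int)` client calls). A minimal, hypothetical sketch of that plumbing — the `--batch`/`--versions` flag names and the class below are illustrative, not the actual HBASE-15191 patch:

```java
// Hypothetical sketch of the argument plumbing such a patch needs: parse
// --batch=N and --versions=N so they can later be applied to the job's Scan
// (scan.setBatch(n) / scan.setMaxVersions(n)). Illustrative only; not the
// actual CopyTable/VerifyReplication code.
public class CopyTableArgsSketch {
    public int batch = -1;      // columns returned per Result; -1 means "unset"
    public int versions = 1;    // max versions to copy

    public static CopyTableArgsSketch parse(String[] args) {
        CopyTableArgsSketch parsed = new CopyTableArgsSketch();
        for (String arg : args) {
            if (arg.startsWith("--batch=")) {
                parsed.batch = Integer.parseInt(arg.substring("--batch=".length()));
            } else if (arg.startsWith("--versions=")) {
                parsed.versions = Integer.parseInt(arg.substring("--versions=".length()));
            }
        }
        return parsed;
    }

    public static void main(String[] args) {
        CopyTableArgsSketch a = parse(new String[] {"--batch=500", "--versions=3"});
        System.out.println(a.batch + " " + a.versions);  // 500 3
    }
}
```

Unrecognized arguments are ignored here; a real patch would fold these flags into the tool's existing usage/help handling.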
[jira] [Commented] (HBASE-15177) Reduce garbage created under high load
[ https://issues.apache.org/jira/browse/HBASE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124228#comment-15124228 ] stack commented on HBASE-15177: --- bq. CodedInputStream is final! Missed that. I knew there was something to prevent our being able to extend/replace. I notice that pb2.6.0 says CIS supports BBs. See https://github.com/google/protobuf/releases/tag/v2.6.0 or this is easier to read: http://upstream.rosalinux.ru/changelogs/protobuf/2.6.0/changelog.html There is this nice doc on how compatible 2.5 and 2.6 are: http://upstream.rosalinux.ru/versions/protobuf.html I should try it out. See if we can use 2.6 to read from hdfs. On the patch, it seems odd pulling BBIS from zookeeper, especially given we have a BBOS in hbase itself. To fix one day. Can we just remove AnnotationReadingPriorityFunction now you've done all the conversions [~enis]? Can do in another issue. +1 > Reduce garbage created under high load > -- > > Key: HBASE-15177 > URL: https://issues.apache.org/jira/browse/HBASE-15177 > Project: HBase > Issue Type: Improvement >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.3.0 > > Attachments: Screen Shot 2016-01-26 at 10.03.48 PM.png, Screen Shot > 2016-01-26 at 10.03.56 PM.png, Screen Shot 2016-01-26 at 10.06.16 PM.png, > Screen Shot 2016-01-26 at 10.15.15 PM.png, hbase-15177_v0.patch > > > I have been doing some profiling of the garbage being created. The idea was > to follow up on HBASE-14490 and experiment with offheap IPC byte buffers and > byte buffer re-use. However, without changing the IPC byte buffers for now, > there are a couple of (easy) improvements that I've identified from > profiling: > 1. RPCServer.Connection.processRequest() should work with ByteBuffer instead > of byte[] and not recreate CodedInputStream a few times. > 2. RSRpcServices.getRegion() allocates two byte arrays for region, while only > 1 is needed. > 3.
AnnotationReadingPriorityFunction is very expensive in allocations. Mainly > it allocates the regionName byte[] to get the table name. We already set the > priority for most of the operations (multi, get, increment, etc) but we are > only reading the priority in case of multi. We should use the priority from > the client side. > Let's do the simple improvements in this patch, we can get to IPC buffer > re-use in HBASE-14490.
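Item 3 above — trusting the priority the client already computed instead of re-deriving it server side — might look like the following. These types are simplified stand-ins for illustration, not HBase's actual `RequestHeader` or `PriorityFunction` classes:

```java
// Illustrative sketch of item 3: if the client already set a priority in the
// RPC request header, use it and skip the allocation-heavy annotation/region
// name parsing. RequestHeader and PriorityFunction here are stand-ins, not
// HBase's real classes.
public class PrioritySketch {
    /** Minimal stand-in for a request header with an optional priority field. */
    public static class RequestHeader {
        private final Integer priority;  // null == client did not set one
        public RequestHeader(Integer priority) { this.priority = priority; }
        public boolean hasPriority() { return priority != null; }
        public int getPriority() { return priority; }
    }

    /** Stand-in for the expensive server-side path (parses regionName, allocates). */
    public interface PriorityFunction {
        int getPriority(RequestHeader header);
    }

    /** Prefer the client-supplied priority; fall back to the expensive path. */
    public static int resolvePriority(RequestHeader header, PriorityFunction fallback) {
        return header.hasPriority() ? header.getPriority() : fallback.getPriority(header);
    }
}
```

The point of the sketch is only the dispatch: the fallback lambda is never invoked when the client set a priority, so its allocations are avoided entirely on that path.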
[jira] [Resolved] (HBASE-15189) IT monkey fails if running on shared cluster; can't kill the other fellows job and gives up
[ https://issues.apache.org/jira/browse/HBASE-15189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-15189. --- Resolution: Duplicate HBASE-15190 > IT monkey fails if running on shared cluster; can't kill the other fellows > job and gives up > --- > > Key: HBASE-15189 > URL: https://issues.apache.org/jira/browse/HBASE-15189 > Project: HBase > Issue Type: Bug > Components: integration tests >Reporter: stack > > Trying to run IT on a cluster shared with an other, the monkeys give up > because they error out trying to kill the other fellows daemons: > Failure looks like this: > {code} > 16/01/29 09:07:09 WARN hbase.HBaseClusterManager: Remote command: ps aux | > grep proc_regionserver | grep -v grep | tr -s ' ' | cut -d ' ' -f2 | xargs > kill -s SIGKILL , hostname:ve0536.halxg.cloudera.com failed at attempt 3. > Retrying until maxAttempts: 5. Exception: stderr: kill 115040: Operation not > permitted > , stdout: > {code} > The operation is not permitted because there is a regionserver running that > is owned by someone else. We retry and then give up on the monkey. > Fix seems simple. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15190) Monkey dies when running on shared cluster (gives up when can't kill the other fellows processes)
[ https://issues.apache.org/jira/browse/HBASE-15190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124146#comment-15124146 ] stack commented on HBASE-15190: --- Yeah, in this case, the 'other fellow' is the hbase user; he won't let me kill his stuff. Thanks for the review [~enis] > Monkey dies when running on shared cluster (gives up when can't kill the > other fellows processes) > - > > Key: HBASE-15190 > URL: https://issues.apache.org/jira/browse/HBASE-15190 > Project: HBase > Issue Type: Bug > Components: integration tests >Reporter: stack >Assignee: stack > Attachments: shared.patch
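One shape the fix can take is to scope the remote `ps | kill` pipeline to a single unix user up front, so the monkey never tries to signal a process it cannot own. A hypothetical sketch of building such a command string (not the actual `HBaseClusterManager` code or the attached patch):

```java
// Hypothetical sketch: build the remote kill pipeline so that `ps` only lists
// processes owned by the given user, instead of every user's processes.
// Not the actual HBaseClusterManager implementation.
public class KillCommandSketch {
    public static String killCommand(String user, String procPattern, String signal) {
        // ps -u <user> restricts the listing to that user's processes, so the
        // trailing kill can never hit someone else's daemon and fail with
        // "Operation not permitted".
        return String.format(
            "ps -u %s -o pid= -o command= | grep %s | grep -v grep "
                + "| awk '{print $1}' | xargs kill -s %s",
            user, procPattern, signal);
    }
}
```

The alternative, which the comment above alludes to, is to leave the pipeline as-is but execute the whole remote command as the hbase user (e.g. via sudo), so the kill is permitted in the first place.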
[jira] [Resolved] (HBASE-15188) IT monkey fails if running on shared cluster; can't kill the other fellows job and gives up
[ https://issues.apache.org/jira/browse/HBASE-15188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-15188. --- Resolution: Duplicate Dup of HBASE-15189 > IT monkey fails if running on shared cluster; can't kill the other fellows > job and gives up > --- > > Key: HBASE-15188 > URL: https://issues.apache.org/jira/browse/HBASE-15188 > Project: HBase > Issue Type: Bug > Components: integration tests >Reporter: stack
[jira] [Commented] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124145#comment-15124145 ] stack commented on HBASE-15180: --- bq. Even with G1, this is unexpected. Is there a theoretical explanation? Why do you say that [~enis]? Here's a SWAG. MSLAB is to prevent fragmentation in CMS. Every Cell gets copied and there is the allocation of the SLABs themselves (no reuse). G1GC avoids fragmentation by copying to a new region when there is fragmentation. Many small copies and SLAB allocations cost more than the relatively macro copies G1GC does. To be verified... > Reduce garbage created while reading Cells from Codec Decoder > - > > Key: HBASE-15180 > URL: https://issues.apache.org/jira/browse/HBASE-15180 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 2.0.0 > > Attachments: HBASE-15180.patch, HBASE-15180_V2.patch > > > In KeyValueDecoder#parseCell (Default Codec decoder) we use > KeyValueUtil#iscreate to read cells from the InputStream. Here we 1st create > a byte[] of length 4 and read the cell length, and then an array of the Cell's > length and read the cell bytes into it and create a KV. > Actually in the server we read the reqs into a byte[] and a CellScanner is created > on top of a ByteArrayInputStream on top of this. By default in the write path, we > have MSLAB usage ON. So while adding Cells to the memstore, we will copy the Cell > bytes to MSLAB memory chunks (default 2 MB size) and recreate Cells over those > bytes. So there is no issue if we create Cells over the RPC read byte[] > directly here in the Decoder. No need for 2 byte[] creations and a copy for every > Cell in the request. > My plan is to make a Cell aware ByteArrayInputStream which can read Cells > directly from it. > The same Codec path is used on the client side also. There it is better to avoid this > direct Cell create and continue to do the copy to smaller byte[]s.
Plan > to introduce something like a CodecContext associated with every Codec > instance which can say the server/client context.
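The "Cell aware ByteArrayInputStream" idea can be sketched in miniature: expose the backing array and cursor so a decoder can hand out views over the request buffer instead of copying each cell into a fresh byte[]. The class and method names below are hypothetical, not the patch's actual code:

```java
import java.io.ByteArrayInputStream;

// Miniature, hypothetical sketch of a "Cell aware" stream: instead of copying
// `len` bytes into a fresh array per cell, return the offset into the backing
// array and let the caller build a Cell view over that [offset, offset+len)
// range. Relies on ByteArrayInputStream's protected `buf`/`pos` fields.
public class CellViewInputStream extends ByteArrayInputStream {
    public CellViewInputStream(byte[] buf) { super(buf); }

    /** Offset of the next `len` bytes handed out as a zero-copy view; advances the cursor. */
    public int readViewOffset(int len) {
        if (len > available()) {
            throw new IllegalArgumentException("not enough bytes for cell: " + len);
        }
        int offset = pos;   // protected cursor inherited from ByteArrayInputStream
        pos += len;         // consume without copying
        return offset;
    }

    /** The shared backing array the views point into (protected field `buf`). */
    public byte[] backingArray() { return buf; }
}
```

The safety argument is exactly the one in the description: this is fine server-side only because MSLAB copies the cell bytes out of the request buffer before the buffer is reused.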
[jira] [Commented] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124141#comment-15124141 ] Esteban Gutierrez commented on HBASE-15180: --- My guess at that time was that with the RSs flushing the memstores aggressively, MSLAB was only creating extra work for the collector. If I remember correctly, I started to notice the improvement from disabling MSLAB after going to higher throughputs (+100K puts/sec). At lower throughputs I don't think the contribution with or without it was noticeable.
[jira] [Commented] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124131#comment-15124131 ] stack commented on HBASE-15180: --- Why not use Cell Codec for CellReadableByteArrayInputStream? CellCodec.Decoded takes an InputStream and returns Cells via a CellScanner implementation. What is the difference between a CellReadable and a CellScanner? You have to do advance and then current each time, which is a little more awkward, but it is a common pattern we want to institute throughout. I suppose it is awkward. That'd be your argument. If so, we have CellOutputStream; should your CellReadable be a CellInputStream with read methods that return Cells, to mirror the write methods we have in CellOutputStream? Your CellReadableByteArrayInputStream would become CellByteArrayInputStream and would implement CellInputStream. I've asked this before, I know, but do we have to flag when tags and when without? Internally, when we read, the Cell will know if it has tags or not? What is the length in the below? Cell readCell(int length, boolean withTags) throws IOException; Do we have to pass this in each time? @param directCellRead Whether to make Cells directly from the cellBlock bytes or need to copy. Pass false while using from client side. IPCUtil takes a Configuration? Can we not just read the Configuration on construction rather than pass this flag per call? Seems like you want server-side and client-side to act differently. Having RPCServer 'know' about MSLAB doesn't seem right. It is pollution of rpc with internals of how we do memstore. Can we have another property for when we should lean on the rpc buffer (we 'know' it is safe when mslab is going on; perhaps a method that obscures the rationale for when to copy)? Patch looks great otherwise.
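The `CellInputStream` shape proposed in the review above, mirroring the existing `CellOutputStream`, could look roughly like this. The `Cell` type and the list-backed implementation are simplified stand-ins for illustration, not HBase's actual interfaces:

```java
import java.util.Iterator;
import java.util.List;

// Sketch of the CellInputStream idea from the review: read methods returning
// Cells, mirroring CellOutputStream's write methods. `Cell` and the list-backed
// implementation are stand-ins; a real one would decode from a byte[]/stream.
public class CellStreamSketch {
    public interface Cell { byte[] value(); }

    /** Mirrors CellOutputStream#write(Cell): each call returns the next Cell, or null at end. */
    public interface CellInputStream {
        Cell readCell();
    }

    /** Trivial in-memory implementation, standing in for a byte[]-backed decoder. */
    public static class ListCellInputStream implements CellInputStream {
        private final Iterator<? extends Cell> it;
        public ListCellInputStream(List<? extends Cell> cells) { it = cells.iterator(); }
        @Override public Cell readCell() { return it.hasNext() ? it.next() : null; }
    }
}
```

Compared with CellScanner's advance()/current() pair, a single readCell() call per cell is the less awkward API the review is weighing, at the cost of departing from the established scanner pattern.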
[jira] [Commented] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124091#comment-15124091 ] Enis Soztutar commented on HBASE-15180: --- bq. Yes, I noticed about 5-10% improvement on GC times and CPU utilization after disabling MSLAB only if using G1GC. Even with G1, this is unexpected. Is there a theoretical explanation?
[jira] [Commented] (HBASE-15190) Monkey dies when running on shared cluster (gives up when can't kill the other fellows processes)
[ https://issues.apache.org/jira/browse/HBASE-15190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124086#comment-15124086 ] Enis Soztutar commented on HBASE-15190: --- +1. In our env, we execute the commands as the hbase user.
[jira] [Commented] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124076#comment-15124076 ] Hadoop QA commented on HBASE-15180: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 1s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 40s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s {color} | {color:green} master passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 10s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 37s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 42s {color} | {color:red} hbase-common in master has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 50s {color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s {color} | {color:green} master passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 14s {color} | {color:red} hbase-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 24s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 20s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 20s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 25s {color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 25s {color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 10s {color} | {color:red} Patch generated 2 new checkstyle issues in hbase-common (total was 100, now 102). {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 21m 55s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s {color} | {color:green} hbase-client in the patch passed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 44s {color} | {color:green} hbase-common in the patch passed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 20s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s {color} | {color:green} hbase-client in the patch passed with JDK v1.7.0_91. {color} | | {color:gr
[jira] [Commented] (HBASE-15187) Integrate CSRF prevention filter to REST gateway
[ https://issues.apache.org/jira/browse/HBASE-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124073#comment-15124073 ] Hadoop QA commented on HBASE-15187: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 14s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s {color} | {color:green} master passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 48s {color} | {color:red} hbase-rest in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} master passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s {color} | {color:red} Patch generated 1 new checkstyle issues in hbase-rest (total was 30, now 30). {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 24m 38s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 13s {color} | {color:green} hbase-rest in the patch passed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 14s {color} | {color:green} hbase-rest in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 9s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 14s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-29 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12785215/HBASE-15187.v2.patch | | JIRA Issue | HBASE-15187 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux c7a430698073 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMP
[jira] [Commented] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124036#comment-15124036 ] Esteban Gutierrez commented on HBASE-15180: --- Yes, I noticed about 5-10% improvement on GC times and CPU utilization after disabling MSLAB only if using G1GC. Tuning MSLAB helps a little, but I don't see too much advantage to having it enabled when G1GC is there.
[jira] [Commented] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124024#comment-15124024 ] stack commented on HBASE-15180: --- On MSLAB on all the time, there is talk of turning it off in G1GC because there it does not help (this you [~esteban]?). TBD. On other hand, I can see need for a 'terminus', a 'handoff' from the reading handler so no longer a reference and so something like a copy into MSLAB makes some sense. MSLAB is like an HBASE-14918 Segment. Would it make sense to make it a Segment? > Reduce garbage created while reading Cells from Codec Decoder > - > > Key: HBASE-15180 > URL: https://issues.apache.org/jira/browse/HBASE-15180 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 2.0.0 > > Attachments: HBASE-15180.patch, HBASE-15180_V2.patch > > > In KeyValueDecoder#parseCell (Default Codec decoder) we use > KeyValueUtil#iscreate to read cells from the InputStream. Here we 1st create > a byte[] of length 4 and read the cell length and then an array of Cell's > length and read in cell bytes into it and create a KV. > Actually in server we read the reqs into a byte[] and CellScanner is created > on top of a ByteArrayInputStream on top of this. By default in write path, we > have MSLAB usage ON. So while adding Cells to memstore, we will copy the Cell > bytes to MSLAB memory chunks (default 2 MB size) and recreate Cells over that > bytes. So there is no issue if we create Cells over the RPC read byte[] > directly here in Decoder. No need for 2 byte[] creation and copy for every > Cell in request. > My plan is to make a Cell aware ByteArrayInputStream which can read Cells > directly from it. > Same Codec path is used in client side also. There better we can avoid this > direct Cell create and continue to do the copy to smaller byte[]s path. 
Plan > to introduce something like a CodecContext associated with every Codec > instance which can convey the server/client context. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
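The decoder change the description proposes can be pictured with a small sketch: a "Cell aware" stream that hands out offsets into the single RPC-read byte[] instead of allocating a 4-byte length array plus a per-cell byte[] copy. All names below are illustrative, not the actual HBASE-15180 patch:

```java
import java.util.Arrays;

// Hypothetical sketch: instead of a temporary byte[4] for the length and a
// per-cell byte[] copy, hand out offsets into the one RPC-read buffer.
public class CellSlicingStream {
    private final byte[] buf;   // the single RPC-read byte[]
    private int pos;

    public CellSlicingStream(byte[] buf) { this.buf = buf; }

    /** Reads a 4-byte big-endian length without allocating a byte[4]. */
    private int readInt() {
        int v = ((buf[pos] & 0xff) << 24) | ((buf[pos + 1] & 0xff) << 16)
              | ((buf[pos + 2] & 0xff) << 8) | (buf[pos + 3] & 0xff);
        pos += 4;
        return v;
    }

    /** Returns {offset, length} of the next cell inside the shared buffer. */
    public int[] nextCellSlice() {
        if (pos >= buf.length) return null;
        int len = readInt();
        int off = pos;
        pos += len;
        return new int[] { off, len };
    }

    public static void main(String[] args) {
        // Two "cells": lengths 3 and 2, encoded as [len][bytes].
        byte[] rpc = { 0, 0, 0, 3, 'a', 'b', 'c', 0, 0, 0, 2, 'x', 'y' };
        CellSlicingStream s = new CellSlicingStream(rpc);
        System.out.println(Arrays.toString(s.nextCellSlice())); // [4, 3]
        System.out.println(Arrays.toString(s.nextCellSlice())); // [11, 2]
    }
}
```

A real Cell implementation would wrap buf at the returned offset/length rather than returning the pair; the point is only that no second byte[] is created per cell.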
[jira] [Commented] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124005#comment-15124005 ] Enis Soztutar commented on HBASE-15180: --- bq. May be BB based then. Again am trying to experiment with reading the req not just into one single large BB. From the pool we might be getting fixed sized smaller BBs (Say 64 KB or so) And we can read in to those many BBs. And the CellScanner need to work on a set of BBs (Like the MultiByteBuff stuff) Then again even this BB based API is an issue.. Continue with an InputStream based API gives us the freedom of experimenting with this different data structures. We are still creating an IS per request. It is not the end of the world though. At the time of the request we do know the RPC buffer size required. I was trying to reuse the BBPool from Hadoop which can return a buffer at least as large as the request size. bq. Ya we have MSLAB enabled by default. I agree that doing the MSLAB check in RPC layer looks ugly. Wanted to avoid we refer to the req read byte[] (from memstore cells) when some one turns MSLAB off. So what do you say? Remove this check? We need to do a separate issue and remove the option for disabling MSLAB. Then we can assume in this patch that MSLAB is always enabled. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
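The pooled-buffer idea discussed above — read the request into several fixed-size ByteBuffers (say 64 KB each) and let the CellScanner walk across them — can be sketched minimally like this. MultiByteBuff is the real, far more complete structure in HBase; everything below is a toy with made-up names:

```java
import java.nio.ByteBuffer;

// Illustrative sketch of the "many small pooled buffers" idea: present several
// fixed-size ByteBuffers as one logical stream, so a request larger than one
// pooled chunk never forces a single big allocation.
public class MultiBufReader {
    private final ByteBuffer[] items;
    private int cur;

    public MultiBufReader(ByteBuffer... items) { this.items = items; }

    /** Reads one byte, stepping to the next buffer at a chunk boundary. */
    public int read() {
        while (cur < items.length && !items[cur].hasRemaining()) cur++;
        return cur < items.length ? (items[cur].get() & 0xff) : -1;
    }

    /** Reads a big-endian int even if it straddles two pooled chunks. */
    public int readInt() {
        return (read() << 24) | (read() << 16) | (read() << 8) | read();
    }

    public static void main(String[] args) {
        MultiBufReader r = new MultiBufReader(
            ByteBuffer.wrap(new byte[] {0, 0}),    // first pooled chunk
            ByteBuffer.wrap(new byte[] {0, 7}));   // second pooled chunk
        System.out.println(r.readInt());           // 7 — the int straddled both chunks
    }
}
```

The trade-off the comment describes is visible even here: callers must tolerate values split across chunk boundaries, which a single InputStream or flat byte[] hides.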
[jira] [Commented] (HBASE-15128) Disable region splits and merges in HBCK
[ https://issues.apache.org/jira/browse/HBASE-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123977#comment-15123977 ] Enis Soztutar commented on HBASE-15128: --- bq. this patch as it is is a -1 to me. because of that set_switch that wants to be generic. My understanding is that HBASE-13936 is for refactoring the configuration in code to be more manageable. It does not aim to do dynamic conf as we describe it above. The dynamic conf issue, HBASE-3909 is open for 4+ years. My point is that unless I see some progress on these, it does not make sense to hold this issue. Heng's proposal seems logical that once we have the dynamic conf framework, we can migrate this to using it. {{set_switch}} is not trying to be a generic conf framework. It does not allow you to change random config values. Maybe we can rename it to something more reflective, like set_master_process bq. if you want to solve ONLY the disable split/merge. Jon solution with the table lock is probably ok, and I also think we use that already in hbck. Although the original goal was to enable / disable splitting for HBCK, we should aim for getting a dynamic behavior for the operator or HBCK to control splitting. > Disable region splits and merges in HBCK > > > Key: HBASE-15128 > URL: https://issues.apache.org/jira/browse/HBASE-15128 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-15128.patch, HBASE-15128_v1.patch, > HBASE-15128_v3.patch > > > In large clusters where region splits are frequent, and HBCK runs take > longer, the concurrent splits cause further problems in HBCK since HBCK > assumes a static state for the region partition map. We have just seen a case > where HBCK undo's a concurrently splitting region causing number of > inconsistencies to go up. > We can have a mode in master where splits and merges are disabled like the > balancer and catalog janitor switches. 
Master will reject the split requests > if regionservers decide to split. This switch can be turned on / off by the > admins and also automatically by HBCK while it is running (similar to > balancer switch being disabled by HBCK). > HBCK should also disable the Catalog Janitor just in case. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
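The switch under discussion — master-side, like the balancer and catalog janitor switches — boils down to a gate the master consults before honoring split or merge requests. A minimal sketch with hypothetical names (the committed API may well differ):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch of the proposed master-side switch: split requests are
// rejected while the flag is off, the way balancer/catalog-janitor switches
// already work. Names are illustrative, not the real HBase classes.
public class SplitMergeSwitch {
    private final AtomicBoolean splitsEnabled = new AtomicBoolean(true);

    /** Flipped by an admin or by HBCK while it runs. */
    public void setSplitsEnabled(boolean on) { splitsEnabled.set(on); }

    /** Master-side gate: regionserver split requests bounce while disabled. */
    public boolean acceptSplitRequest(String regionName) {
        if (!splitsEnabled.get()) {
            System.out.println("Rejecting split of " + regionName + ": splits disabled");
            return false;
        }
        return true;
    }
}
```

HBCK would call setSplitsEnabled(false) at start and restore the previous value on exit, keeping the region partition map static for the duration of the check.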
[jira] [Updated] (HBASE-15187) Integrate CSRF prevention filter to REST gateway
[ https://issues.apache.org/jira/browse/HBASE-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15187: --- Attachment: HBASE-15187.v2.patch > Integrate CSRF prevention filter to REST gateway > > > Key: HBASE-15187 > URL: https://issues.apache.org/jira/browse/HBASE-15187 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: HBASE-15187.v1.patch, HBASE-15187.v2.patch > > > HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard > against cross-site request forgery attacks. > This issue tracks the integration of that filter into HBase REST gateway. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
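For background on what a HADOOP-12691-style filter enforces: state-changing REST calls must carry a custom header that a cross-site form post cannot set, so forged requests are rejected before they reach the gateway. A stripped-down sketch of that rule, with hypothetical names (the real class is Hadoop's RestCsrfPreventionFilter):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of header-based CSRF prevention: browsers cannot attach a custom
// header cross-site via a simple form submission, so requiring one on
// mutating HTTP methods blocks forged requests. Names are illustrative.
public class CsrfCheck {
    // Read-only methods are exempt, as in the Hadoop filter's defaults.
    static final Set<String> SAFE =
        new HashSet<>(Arrays.asList("GET", "HEAD", "OPTIONS"));

    /** Returns true if the request should be allowed through. */
    public static boolean allow(String method, boolean hasCustomHeader) {
        return SAFE.contains(method.toUpperCase()) || hasCustomHeader;
    }
}
```

The header name and the exempt-method list are configurable in the real filter; this sketch hard-codes a typical default.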
[jira] [Commented] (HBASE-15190) Monkey dies when running on shared cluster (gives up when can't kill the other fellows processes)
[ https://issues.apache.org/jira/browse/HBASE-15190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123966#comment-15123966 ] Hadoop QA commented on HBASE-15190: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 1s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 54s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s {color} | {color:green} master passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s {color} | {color:red} branch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s {color} | {color:green} master passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | 
{color:green} 0m 9s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 22m 10s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s {color} | {color:red} patch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s {color} | {color:green} hbase-it in the patch passed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s {color} | {color:green} hbase-it in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 8s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 53s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-29 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12785203/shared.patch | | JIRA Issue | HBASE-15190 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 015ae8d3d373 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_6
[jira] [Commented] (HBASE-15187) Integrate CSRF prevention filter to REST gateway
[ https://issues.apache.org/jira/browse/HBASE-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123965#comment-15123965 ] Ted Yu commented on HBASE-15187: The complaint about method length was due to the patch adding the following call in main(): addCSRFFilter(context, conf); The main() method was already exceeding the suggested length before the patch was applied. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15187) Integrate CSRF prevention filter to REST gateway
[ https://issues.apache.org/jira/browse/HBASE-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123953#comment-15123953 ] Hadoop QA commented on HBASE-15187: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 52s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s {color} | {color:green} master passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 42s {color} | {color:red} hbase-rest in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s {color} | {color:green} master passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s {color} | {color:red} Patch generated 5 new checkstyle issues in hbase-rest (total was 30, now 34). {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 22m 52s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 51s {color} | {color:red} hbase-rest introduced 1 new FindBugs issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 13s {color} | {color:green} hbase-rest in the patch passed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 30s {color} | {color:green} hbase-rest in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 8s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 46s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hbase-rest | | | Dead store to restCsrfCustomHeader in org.apache.hadoop.hbase.rest.RESTServer.addCSRFFilter(Context, Configuration) At RESTServer.java:org.apache.hadoop.hbase.rest.RESTServer.addCSRFFilter(Context, Configuration) At RESTServer.java:[line 110] | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.9.1 Server=1.9.1 Image:yetu
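The FindBugs complaint above ("Dead store to restCsrfCustomHeader") is the classic dead-store pattern: a value assigned to a local and then overwritten before any read, so the first assignment has no effect. A hypothetical illustration of the pattern (not the RESTServer code itself):

```java
// Illustration of the "dead store" pattern FindBugs flags: the first
// assignment to the local is never read before it is overwritten.
public class DeadStoreExample {
    static int deadStore(int[] values) {
        int total = values.length;   // dead store: overwritten before any read
        total = 0;
        for (int v : values) total += v;
        return total;
    }
}
```

The fix is simply to delete the useless first assignment (or use the value it computed).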
[jira] [Updated] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-15180: --- Attachment: (was: HBASE-15180_v2.patch) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-15180: --- Attachment: HBASE-15180_V2.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123913#comment-15123913 ] Hadoop QA commented on HBASE-15180: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 39s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 5s {color} | {color:green} master passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 8m 12s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 50s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 0s {color} | {color:red} hbase-common in master has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 47s {color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s {color} | {color:green} master passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 21s {color} | {color:red} hbase-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 37s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 43s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 43s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 35s {color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 35s {color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_91. {color} | | {color:black}{color} | {color:black} checkstyle {color} | {color:black} 1m 23s {color} | {color:black} Patch generated 2 new checkstyle issues in hbase-common (total was 100, now 102). {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 30m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 29s {color} | {color:red} hbase-common introduced 1 new FindBugs issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 15s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 37s {color} | {color:green} hbase-client in the patch passed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 53s {color} | {color:green} hbase-common in the patch passed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 49s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 26s {color} | {color:green} hbase-client in the patch passed with JDK v1.7.
[jira] [Commented] (HBASE-15122) Servlets generate XSS_REQUEST_PARAMETER_TO_SERVLET_WRITER findbugs warnings
[ https://issues.apache.org/jira/browse/HBASE-15122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123903#comment-15123903 ] Samir Ahmic commented on HBASE-15122: - Thanks for the tip [~busbey]. I'm trying to test this patch on the master branch. This is what I get after running:
{code}
mvn clean package assembly:single -Dlicense.debug.print.included=true -DskipTests -X
{code}
Debugging details:
{code}
[DEBUG] Building project for commons-collections:commons-collections:jar:3.2.2:compile
[DEBUG] Adding project with groupId [commons-collections]
[ERROR] Error invoking method 'get(java.lang.Integer)' in java.util.ArrayList at META-INF/NOTICE.vm[line 275, column 22]
java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedMethodAccessor151.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:395)
    at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:384)
    at org.apache.velocity.runtime.parser.node.ASTIndex.execute(ASTIndex.java:149)
    at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:280)
    at org.apache.velocity.runtime.parser.node.ASTReference.evaluate(ASTReference.java:530)
    at org.apache.velocity.runtime.parser.node.ASTOrNode.evaluate(ASTOrNode.java:98)
    at org.apache.velocity.runtime.parser.node.ASTExpression.evaluate(ASTExpression.java:62)
    at org.apache.velocity.runtime.parser.node.ASTNotNode.evaluate(ASTNotNode.java:63)
    at org.apache.velocity.runtime.parser.node.ASTExpression.evaluate(ASTExpression.java:62)
    at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:85)
    at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:72)
    at org.apache.velocity.runtime.directive.Foreach.render(Foreach.java:420)
    at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:207)
    at org.apache.velocity.runtime.parser.node.ASTBlock.render(ASTBlock.java:72)
    at org.apache.velocity.runtime.parser.node.ASTIfStatement.render(ASTIfStatement.java:87)
    at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:342)
    at org.apache.velocity.Template.merge(Template.java:356)
    at org.apache.velocity.Template.merge(Template.java:260)
    at org.apache.velocity.app.VelocityEngine.mergeTemplate(VelocityEngine.java:354)
    at org.apache.maven.plugin.resources.remote.ProcessRemoteResourcesMojo.processResourceBundles(ProcessRemoteResourcesMojo.java:1164)
    at org.apache.maven.plugin.resources.remote.ProcessRemoteResourcesMojo.execute(ProcessRemoteResourcesMojo.java:520)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
    at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
    at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
    at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
    at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
    at java.util.ArrayList.rangeCh
[jira] [Updated] (HBASE-15190) Monkey dies when running on shared cluster (gives up when can't kill the other fellows processes)
[ https://issues.apache.org/jira/browse/HBASE-15190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-15190: -- Attachment: shared.patch Just change the process listing to be my processes only: i.e. ps ux rather than ps aux. I've been running this patch on a cluster and it does the right thing > Monkey dies when running on shared cluster (gives up when can't kill the > other fellows processes) > - > > Key: HBASE-15190 > URL: https://issues.apache.org/jira/browse/HBASE-15190 > Project: HBase > Issue Type: Bug > Components: integration tests >Reporter: stack > Attachments: shared.patch > > > Trying to run IT on a cluster shared with another, the monkeys give up > because they error out trying to kill the other fellow's daemons: > Failure looks like this: > {code} > 16/01/29 09:07:09 WARN hbase.HBaseClusterManager: Remote command: ps aux | > grep proc_regionserver | grep -v grep | tr -s ' ' | cut -d ' ' -f2 | xargs > kill -s SIGKILL , hostname:ve0536.halxg.cloudera.com failed at attempt 3. > Retrying until maxAttempts: 5. Exception: stderr: kill 115040: Operation not > permitted > , stdout: > {code} > The operation is not permitted because there is a regionserver running that > is owned by someone else. We retry and then give up on the monkey. > Fix seems simple. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
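The patch's idea in miniature: build the remote kill pipeline from `ps ux` (current user's processes only) instead of `ps aux` (every user's), so the monkey never tries to kill another user's daemons. A hedged sketch of the command construction; the class and method names are made up, not HBaseClusterManager's actual code, though the pipeline shape follows the log in the description:

```java
// Sketch of the one-line fix: on a shared cluster, list only the current
// user's processes (ps ux) rather than everyone's (ps aux) before killing.
public class KillCommand {
    public static String killCommand(String daemon, boolean sharedCluster) {
        String ps = sharedCluster ? "ps ux" : "ps aux";
        return ps + " | grep " + daemon + " | grep -v grep"
             + " | tr -s ' ' | cut -d ' ' -f2 | xargs kill -s SIGKILL";
    }

    public static void main(String[] args) {
        System.out.println(killCommand("proc_regionserver", true));
    }
}
```

With `ps ux`, the pipeline simply never sees the other fellow's regionserver PID, so the "Operation not permitted" failure mode goes away.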
[jira] [Updated] (HBASE-15190) Monkey dies when running on shared cluster (gives up when can't kill the other fellows processes)
[ https://issues.apache.org/jira/browse/HBASE-15190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-15190: -- Assignee: stack Status: Patch Available (was: Open) > Monkey dies when running on shared cluster (gives up when can't kill the > other fellows processes) > - > > Key: HBASE-15190 > URL: https://issues.apache.org/jira/browse/HBASE-15190 > Project: HBase > Issue Type: Bug > Components: integration tests >Reporter: stack >Assignee: stack > Attachments: shared.patch > > > Trying to run IT on a cluster shared with another user, the monkeys give up > because they error out trying to kill the other fellow's daemons: > Failure looks like this: > {code} > 16/01/29 09:07:09 WARN hbase.HBaseClusterManager: Remote command: ps aux | > grep proc_regionserver | grep -v grep | tr -s ' ' | cut -d ' ' -f2 | xargs > kill -s SIGKILL , hostname:ve0536.halxg.cloudera.com failed at attempt 3. > Retrying until maxAttempts: 5. Exception: stderr: kill 115040: Operation not > permitted > , stdout: > {code} > The operation is not permitted because there is a regionserver running that > is owned by someone else. We retry and then give up on the monkey. > Fix seems simple. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15190) Monkey dies when running on shared cluster (gives up when can't kill the other fellows processes)
stack created HBASE-15190: - Summary: Monkey dies when running on shared cluster (gives up when can't kill the other fellows processes) Key: HBASE-15190 URL: https://issues.apache.org/jira/browse/HBASE-15190 Project: HBase Issue Type: Bug Components: integration tests Reporter: stack Trying to run IT on a cluster shared with another user, the monkeys give up because they error out trying to kill the other fellow's daemons: Failure looks like this: {code} 16/01/29 09:07:09 WARN hbase.HBaseClusterManager: Remote command: ps aux | grep proc_regionserver | grep -v grep | tr -s ' ' | cut -d ' ' -f2 | xargs kill -s SIGKILL , hostname:ve0536.halxg.cloudera.com failed at attempt 3. Retrying until maxAttempts: 5. Exception: stderr: kill 115040: Operation not permitted , stdout: {code} The operation is not permitted because there is a regionserver running that is owned by someone else. We retry and then give up on the monkey. Fix seems simple. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15188) IT monkey fails if running on shared cluster; can't kill the other fellows job and gives up
stack created HBASE-15188: - Summary: IT monkey fails if running on shared cluster; can't kill the other fellows job and gives up Key: HBASE-15188 URL: https://issues.apache.org/jira/browse/HBASE-15188 Project: HBase Issue Type: Bug Components: integration tests Reporter: stack Trying to run IT on a cluster shared with another user, the monkeys give up because they error out trying to kill the other fellow's daemons: Failure looks like this: {code} 16/01/29 09:07:09 WARN hbase.HBaseClusterManager: Remote command: ps aux | grep proc_regionserver | grep -v grep | tr -s ' ' | cut -d ' ' -f2 | xargs kill -s SIGKILL , hostname:ve0536.halxg.cloudera.com failed at attempt 3. Retrying until maxAttempts: 5. Exception: stderr: kill 115040: Operation not permitted , stdout: {code} The operation is not permitted because there is a regionserver running that is owned by someone else. We retry and then give up on the monkey. Fix seems simple. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15189) IT monkey fails if running on shared cluster; can't kill the other fellows job and gives up
stack created HBASE-15189: - Summary: IT monkey fails if running on shared cluster; can't kill the other fellows job and gives up Key: HBASE-15189 URL: https://issues.apache.org/jira/browse/HBASE-15189 Project: HBase Issue Type: Bug Components: integration tests Reporter: stack Trying to run IT on a cluster shared with another user, the monkeys give up because they error out trying to kill the other fellow's daemons: Failure looks like this: {code} 16/01/29 09:07:09 WARN hbase.HBaseClusterManager: Remote command: ps aux | grep proc_regionserver | grep -v grep | tr -s ' ' | cut -d ' ' -f2 | xargs kill -s SIGKILL , hostname:ve0536.halxg.cloudera.com failed at attempt 3. Retrying until maxAttempts: 5. Exception: stderr: kill 115040: Operation not permitted , stdout: {code} The operation is not permitted because there is a regionserver running that is owned by someone else. We retry and then give up on the monkey. Fix seems simple. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15186) HBASE-15158 Preamble: fix findbugs, add javadoc and some util
[ https://issues.apache.org/jira/browse/HBASE-15186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123889#comment-15123889 ] stack commented on HBASE-15186: --- The findbugs preamble warnings are addressed in this patch. The checkstyle warning is silly and wrong. The license warning is from a dumped JVM file not in the src tree. > HBASE-15158 Preamble: fix findbugs, add javadoc and some util > - > > Key: HBASE-15186 > URL: https://issues.apache.org/jira/browse/HBASE-15186 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack > Attachments: 15186v2.patch, subpatch.patch > > > Break up the HBASE-15158 patch. Here is the first piece. It's a bunch of > findbugs fixes, a bit of utility for tag-handling (to be exploited in later > patches), some clarifying comments and javadoc (and javadoc fixes), and cleanup > of some of the Region API. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15187) Integrate CSRF prevention filter to REST gateway
[ https://issues.apache.org/jira/browse/HBASE-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123880#comment-15123880 ] Ted Yu commented on HBASE-15187: RestCsrfPreventionFilter is included in the patch since there is no hadoop release with this Filter as of today. > Integrate CSRF prevention filter to REST gateway > > > Key: HBASE-15187 > URL: https://issues.apache.org/jira/browse/HBASE-15187 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: HBASE-15187.v1.patch > > > HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard > against cross-site request forgery attacks. > This issue tracks the integration of that filter into HBase REST gateway. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
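For context, the core idea behind such a filter can be shown with a much-simplified sketch: state-changing requests must carry a custom header that a cross-site HTML form cannot set. This is illustrative only, not the actual RestCsrfPreventionFilter from HADOOP-12691, whose configuration (browser user-agent matching, configurable methods to ignore) is richer:

```java
// Simplified, hypothetical sketch of a CSRF-prevention check.
public class CsrfCheckSketch {
    // Custom header legitimate REST clients send; cross-site forms cannot add
    // arbitrary headers, so requiring it defeats simple CSRF attacks.
    static final String CUSTOM_HEADER = "X-XSRF-HEADER";

    // Safe (read-only) methods pass; every state-changing method must carry
    // some value for the custom header.
    public static boolean isAllowed(String method, String customHeaderValue) {
        if ("GET".equals(method) || "HEAD".equals(method) || "OPTIONS".equals(method)) {
            return true;
        }
        return customHeaderValue != null;
    }
}
```

A servlet filter wrapping this check would reject disallowed requests with HTTP 400 before they reach the REST handlers.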
[jira] [Updated] (HBASE-15187) Integrate CSRF prevention filter to REST gateway
[ https://issues.apache.org/jira/browse/HBASE-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15187: --- Attachment: HBASE-15187.v1.patch > Integrate CSRF prevention filter to REST gateway > > > Key: HBASE-15187 > URL: https://issues.apache.org/jira/browse/HBASE-15187 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: HBASE-15187.v1.patch > > > HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard > against cross-site request forgery attacks. > This issue tracks the integration of that filter into HBase REST gateway. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15187) Integrate CSRF prevention filter to REST gateway
[ https://issues.apache.org/jira/browse/HBASE-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15187: --- Status: Patch Available (was: Open) > Integrate CSRF prevention filter to REST gateway > > > Key: HBASE-15187 > URL: https://issues.apache.org/jira/browse/HBASE-15187 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: HBASE-15187.v1.patch > > > HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard > against cross-site request forgery attacks. > This issue tracks the integration of that filter into HBase REST gateway. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15019) Replication stuck when HDFS is restarted
[ https://issues.apache.org/jira/browse/HBASE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123811#comment-15123811 ] Hudson commented on HBASE-15019: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1164 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1164/]) HBASE-15019 Replication stuck when HDFS is restarted. (matteo.bertozzi: rev 60c6b6df104030995754bb1470a0d5d3e20cf220) * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogFactory.java * hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java * hbase-server/src/main/java/org/apache/hadoop/hbase/util/LeaseNotRecoveredException.java > Replication stuck when HDFS is restarted > > > Key: HBASE-15019 > URL: https://issues.apache.org/jira/browse/HBASE-15019 > Project: HBase > Issue Type: Bug > Components: Replication, wal >Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi > Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4, 0.98.18 > > Attachments: HBASE-15019-v0_branch-1.2.patch, HBASE-15019-v1.patch, > HBASE-15019-v1_0.98.patch, HBASE-15019-v1_branch-1.2.patch, > HBASE-15019-v2.patch, HBASE-15019-v3.patch, HBASE-15019-v4.patch > > > RS is normally working and writing on the WAL. > HDFS is killed and restarted, and the RS try to do a roll. > The close fail, but the roll succeed (because hdfs is now up) and everything > works. > {noformat} > 2015-12-11 21:52:28,058 ERROR > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Got IOException > while writing trailer > java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting... 
> at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496) > 2015-12-11 21:52:28,059 ERROR > org.apache.hadoop.hbase.regionserver.wal.FSHLog: Failed close of HLog writer > java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting... > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496) > 2015-12-11 21:52:28,059 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: > Riding over HLog close failure! error count=1 > {noformat} > The problem is on the replication side. that log we rolled and we were not > able to close > is waiting for a lease recovery. > {noformat} > 2015-12-11 21:16:31,909 ERROR > org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Can't open after 267 > attempts and 301124ms > {noformat} > the WALFactory notify us about that, but there is nothing on the RS side that > perform the WAL recovery. > {noformat} > 2015-12-11 21:11:30,921 WARN > org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Lease should have > recovered. This is not expected. 
Will retry > java.io.IOException: Cannot obtain block length for > LocatedBlock{BP-1547065147-10.51.30.152-1446756937665:blk_1073801614_61243; > getBlockSize()=83; corrupt=false; offset=0; locs=[10.51.30.154:50010, > 10.51.30.152:50010, 10.51.30.155:50010]} > at > org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:358) > at > org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:300) > at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237) > at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:230) > at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448) > at > org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301) > at > org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:297) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:297) > at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766) > at > org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:116) > at > org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) > at > org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77) > at > org.apache.hadoop.hbase.replication.regionserver.Replication
[jira] [Commented] (HBASE-15019) Replication stuck when HDFS is restarted
[ https://issues.apache.org/jira/browse/HBASE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123768#comment-15123768 ] Hudson commented on HBASE-15019: FAILURE: Integrated in HBase-0.98-matrix #290 (See [https://builds.apache.org/job/HBase-0.98-matrix/290/]) HBASE-15019 Replication stuck when HDFS is restarted. (matteo.bertozzi: rev 60c6b6df104030995754bb1470a0d5d3e20cf220) * hbase-server/src/main/java/org/apache/hadoop/hbase/util/LeaseNotRecoveredException.java * hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogFactory.java > Replication stuck when HDFS is restarted > > > Key: HBASE-15019 > URL: https://issues.apache.org/jira/browse/HBASE-15019 > Project: HBase > Issue Type: Bug > Components: Replication, wal >Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi > Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4, 0.98.18 > > Attachments: HBASE-15019-v0_branch-1.2.patch, HBASE-15019-v1.patch, > HBASE-15019-v1_0.98.patch, HBASE-15019-v1_branch-1.2.patch, > HBASE-15019-v2.patch, HBASE-15019-v3.patch, HBASE-15019-v4.patch > > > RS is normally working and writing on the WAL. > HDFS is killed and restarted, and the RS try to do a roll. > The close fail, but the roll succeed (because hdfs is now up) and everything > works. > {noformat} > 2015-12-11 21:52:28,058 ERROR > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Got IOException > while writing trailer > java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting... 
> at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496) > 2015-12-11 21:52:28,059 ERROR > org.apache.hadoop.hbase.regionserver.wal.FSHLog: Failed close of HLog writer > java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting... > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496) > 2015-12-11 21:52:28,059 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: > Riding over HLog close failure! error count=1 > {noformat} > The problem is on the replication side. that log we rolled and we were not > able to close > is waiting for a lease recovery. > {noformat} > 2015-12-11 21:16:31,909 ERROR > org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Can't open after 267 > attempts and 301124ms > {noformat} > the WALFactory notify us about that, but there is nothing on the RS side that > perform the WAL recovery. > {noformat} > 2015-12-11 21:11:30,921 WARN > org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Lease should have > recovered. This is not expected. 
Will retry > java.io.IOException: Cannot obtain block length for > LocatedBlock{BP-1547065147-10.51.30.152-1446756937665:blk_1073801614_61243; > getBlockSize()=83; corrupt=false; offset=0; locs=[10.51.30.154:50010, > 10.51.30.152:50010, 10.51.30.155:50010]} > at > org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:358) > at > org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:300) > at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237) > at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:230) > at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448) > at > org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301) > at > org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:297) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:297) > at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766) > at > org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:116) > at > org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) > at > org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManage
[jira] [Commented] (HBASE-15019) Replication stuck when HDFS is restarted
[ https://issues.apache.org/jira/browse/HBASE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123743#comment-15123743 ] Hudson commented on HBASE-15019: FAILURE: Integrated in HBase-1.0 #1139 (See [https://builds.apache.org/job/HBase-1.0/1139/]) HBASE-15019 Replication stuck when HDFS is restarted. (matteo.bertozzi: rev 9c42beaa3423e1476aa87e56f59168ed5ce0f461) * hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java * hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java * hbase-server/src/main/java/org/apache/hadoop/hbase/util/LeaseNotRecoveredException.java > Replication stuck when HDFS is restarted > > > Key: HBASE-15019 > URL: https://issues.apache.org/jira/browse/HBASE-15019 > Project: HBase > Issue Type: Bug > Components: Replication, wal >Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1 >Reporter: Matteo Bertozzi >Assignee: Matteo Bertozzi > Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 1.0.4, 0.98.18 > > Attachments: HBASE-15019-v0_branch-1.2.patch, HBASE-15019-v1.patch, > HBASE-15019-v1_0.98.patch, HBASE-15019-v1_branch-1.2.patch, > HBASE-15019-v2.patch, HBASE-15019-v3.patch, HBASE-15019-v4.patch > > > RS is normally working and writing on the WAL. > HDFS is killed and restarted, and the RS try to do a roll. > The close fail, but the roll succeed (because hdfs is now up) and everything > works. > {noformat} > 2015-12-11 21:52:28,058 ERROR > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Got IOException > while writing trailer > java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting... 
> at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496) > 2015-12-11 21:52:28,059 ERROR > org.apache.hadoop.hbase.regionserver.wal.FSHLog: Failed close of HLog writer > java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting... > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496) > 2015-12-11 21:52:28,059 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: > Riding over HLog close failure! error count=1 > {noformat} > The problem is on the replication side. that log we rolled and we were not > able to close > is waiting for a lease recovery. > {noformat} > 2015-12-11 21:16:31,909 ERROR > org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Can't open after 267 > attempts and 301124ms > {noformat} > the WALFactory notify us about that, but there is nothing on the RS side that > perform the WAL recovery. > {noformat} > 2015-12-11 21:11:30,921 WARN > org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Lease should have > recovered. This is not expected. 
Will retry > java.io.IOException: Cannot obtain block length for > LocatedBlock{BP-1547065147-10.51.30.152-1446756937665:blk_1073801614_61243; > getBlockSize()=83; corrupt=false; offset=0; locs=[10.51.30.154:50010, > 10.51.30.152:50010, 10.51.30.155:50010]} > at > org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:358) > at > org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:300) > at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237) > at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:230) > at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448) > at > org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301) > at > org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:297) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:297) > at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766) > at > org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:116) > at > org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89) > at > org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogR
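The general shape of the fix for the scenario above is to stop retrying blindly and instead trigger lease recovery between open attempts. The sketch below is a hedged illustration of that retry-with-recovery pattern only; the method names and the recovery hook are assumptions, not the patch's exact code (the real fix works through WALFactory/HLogFactory and a LeaseNotRecoveredException, with HDFS lease recovery on the NameNode):

```java
import java.util.concurrent.Callable;

// Illustrative retry loop: attempt to open the rolled-but-unclosed WAL; on
// failure, run a recovery action (in HDFS terms, ask the NameNode to recover
// the file lease) before the next attempt. maxAttempts must be >= 1.
public class LeaseRecoverySketch {
    public static <T> T openWithRecovery(Callable<T> open, Runnable recoverLease,
                                         int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return open.call();
            } catch (Exception e) {  // e.g. "Cannot obtain block length for LocatedBlock..."
                last = e;
                recoverLease.run();  // actively recover instead of waiting forever
            }
        }
        throw last;
    }
}
```

The key difference from the broken behavior in the logs ("Can't open after 267 attempts and 301124ms") is that each failed attempt actively advances recovery rather than just sleeping.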
[jira] [Updated] (HBASE-15180) Reduce garbage created while reading Cells from Codec Decoder
[ https://issues.apache.org/jira/browse/HBASE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-15180: --- Attachment: HBASE-15180_v2.patch There was an issue in CellReadableByteArrayInputStream#readCell(int length, boolean withTags) where the position in the BAIS was not advanced after the Cell instance was created. Thanks [~ram_krish] for noticing this. > Reduce garbage created while reading Cells from Codec Decoder > - > > Key: HBASE-15180 > URL: https://issues.apache.org/jira/browse/HBASE-15180 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 2.0.0 > > Attachments: HBASE-15180.patch, HBASE-15180_v2.patch > > > In KeyValueDecoder#parseCell (Default Codec decoder) we use > KeyValueUtil#iscreate to read cells from the InputStream. Here we first create > a byte[] of length 4 and read the cell length, then an array of the Cell's > length, read the cell bytes into it, and create a KV. > Actually, in the server we read the requests into a byte[] and a CellScanner is created > on top of a ByteArrayInputStream on top of this. By default in the write path, we > have MSLAB usage ON. So while adding Cells to the memstore, we will copy the Cell > bytes to MSLAB memory chunks (default 2 MB size) and recreate Cells over those > bytes. So there is no issue if we create Cells over the RPC read byte[] > directly here in the Decoder. No need for two byte[] creations and copies for every > Cell in the request. > My plan is to make a Cell aware ByteArrayInputStream which can read Cells > directly from it. > The same Codec path is used on the client side also. There it is better to avoid this > direct Cell creation and continue with the copy-to-smaller-byte[]s path. The plan is > to introduce something like a CodecContext associated with every Codec > instance which can say the server/client context. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
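The "Cell aware ByteArrayInputStream" idea, including the position-advance bug the v2 patch fixes, can be illustrated with a minimal sketch. The class name and framing here are hypothetical, not the actual HBase CellReadableByteArrayInputStream; cells are modeled as length-prefixed byte slices:

```java
import java.io.ByteArrayInputStream;
import java.nio.ByteBuffer;

// Illustrative sketch: instead of copying each length-prefixed cell into a
// fresh byte[], hand out a zero-copy view over the stream's backing array.
public class CellSliceStream extends ByteArrayInputStream {
    public CellSliceStream(byte[] buf) { super(buf); }

    // Read a 4-byte big-endian length prefix and return a view of the cell
    // bytes without copying. Crucially, advance the stream position past the
    // cell -- without the skip(), the next read would return the same cell
    // again (the bug fixed in the v2 patch).
    public ByteBuffer readCellSlice() {
        int len = ByteBuffer.wrap(buf, pos, 4).getInt();
        ByteBuffer cell = ByteBuffer.wrap(buf, pos + 4, len).slice();
        skip(4 + len);  // move past length prefix + cell bytes
        return cell;
    }
}
```

This works because ByteArrayInputStream exposes its backing array and position (`buf`, `pos`) as protected fields to subclasses.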
[jira] [Updated] (HBASE-15184) SparkSQL Scan operation doesn't work on kerberos cluster
[ https://issues.apache.org/jira/browse/HBASE-15184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-15184: Priority: Critical (was: Major) > SparkSQL Scan operation doesn't work on kerberos cluster > > > Key: HBASE-15184 > URL: https://issues.apache.org/jira/browse/HBASE-15184 > Project: HBase > Issue Type: Bug > Components: spark >Reporter: Ted Malaska >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBaseSparkModule.zip > > > I was using the HBase Spark Module at a client with Kerberos and I ran into > an issue with the Scan. > I made a fix for the client but we need to put it back into HBase. I will > attach my solution, but it has a major problem: I had to override a > protected class in Spark. I will need help to discover a better approach. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15184) SparkSQL Scan operation doesn't work on kerberos cluster
[ https://issues.apache.org/jira/browse/HBASE-15184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-15184: Fix Version/s: 2.0.0 > SparkSQL Scan operation doesn't work on kerberos cluster > > > Key: HBASE-15184 > URL: https://issues.apache.org/jira/browse/HBASE-15184 > Project: HBase > Issue Type: Bug > Components: spark >Reporter: Ted Malaska > Fix For: 2.0.0 > > Attachments: HBaseSparkModule.zip > > > I was using the HBase Spark Module at a client with Kerberos and I ran into > an issue with the Scan. > I made a fix for the client but we need to put it back into HBase. I will > attach my solution, but it has a major problem: I had to override a > protected class in Spark. I will need help to discover a better approach. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15184) SparkSQL Scan operation doesn't work on kerberos cluster
[ https://issues.apache.org/jira/browse/HBASE-15184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-15184: Component/s: spark > SparkSQL Scan operation doesn't work on kerberos cluster > > > Key: HBASE-15184 > URL: https://issues.apache.org/jira/browse/HBASE-15184 > Project: HBase > Issue Type: Bug > Components: spark >Reporter: Ted Malaska >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBaseSparkModule.zip > > > I was using the HBase Spark Module at a client with Kerberos and I ran into > an issue with the Scan. > I made a fix for the client but we need to put it back into HBase. I will > attach my solution, but it has a major problem: I had to override a > protected class in Spark. I will need help to discover a better approach. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15122) Servlets generate XSS_REQUEST_PARAMETER_TO_SERVLET_WRITER findbugs warnings
[ https://issues.apache.org/jira/browse/HBASE-15122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123621#comment-15123621 ] Sean Busbey commented on HBASE-15122: - [~asamir], what branch are you attempting to build on? Can you turn on the debug license output and see what dependency changed? It'll be the last one listed in the LICENSE file. > Servlets generate XSS_REQUEST_PARAMETER_TO_SERVLET_WRITER findbugs warnings > --- > > Key: HBASE-15122 > URL: https://issues.apache.org/jira/browse/HBASE-15122 > Project: HBase > Issue Type: Bug >Reporter: stack >Priority: Critical > Attachments: HBASE-15122.patch > > > In our JMXJsonServlet we are doing this: > jsonpcb = request.getParameter(CALLBACK_PARAM); > if (jsonpcb != null) { > response.setContentType("application/javascript; charset=utf8"); > writer.write(jsonpcb + "("); > ... > Findbugs complains rightly. There are other instances in our servlets and > then there are the pages generated by jamon excluded from findbugs checking > (and findbugs volunteers that it is dumb in this regard finding only the most > egregious of violations). > We have no sanitizing tooling in hbase that I know of (correct me if I am > wrong). I started to pull on this thread and it runs deep. Our Jamon > templating (last updated in 2013 and before that, in 2011) engine doesn't > seem to have sanitizing means either and there seems to be outstanding XSS > complaint against jamon that goes unaddressed. > Could pull in something like > https://www.owasp.org/index.php/OWASP_Java_Encoder_Project and run all > emissions via it or get a templating engine that has sanitizing built in. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
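One common remedy for the unsanitized `jsonpcb` write described above is an allow-list check on the callback parameter before echoing it into the response. The sketch below is illustrative only, not the actual JMXJsonServlet code or a proposed patch; the class and method names are hypothetical:

```java
// Illustrative sketch: validate a JSONP callback parameter against an
// identifier-style allow-list so request-controlled markup or script can
// never reach the servlet writer (the pattern findbugs is complaining about).
public class JsonpCallbackCheck {
    // Accept only plain JS identifier chars plus '.' for namespaced callbacks;
    // reject anything else (including null and the empty string).
    public static boolean isSafeCallback(String cb) {
        return cb != null && cb.matches("[A-Za-z_$][A-Za-z0-9_$.]*");
    }
}
```

A servlet would call this before `writer.write(jsonpcb + "(")` and fall back to plain JSON (or an error) when the check fails; output encoding via something like the OWASP Java Encoder mentioned above is the complementary defense for values that must be echoed.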
[jira] [Created] (HBASE-15187) Integrate CSRF prevention filter to REST gateway
Ted Yu created HBASE-15187: -- Summary: Integrate CSRF prevention filter to REST gateway Key: HBASE-15187 URL: https://issues.apache.org/jira/browse/HBASE-15187 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Ted Yu HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard against cross-site request forgery attacks. This issue tracks the integration of that filter into HBase REST gateway. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15186) HBASE-15158 Preamble: fix findbugs, add javadoc and some util
[ https://issues.apache.org/jira/browse/HBASE-15186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-15186: -- Attachment: 15186v2.patch Addressed Anoop's suggestion. I tried all the failed tests locally. Going to start up a flakey test weeding project to address the unrelated failures. > HBASE-15158 Preamble: fix findbugs, add javadoc and some util > - > > Key: HBASE-15186 > URL: https://issues.apache.org/jira/browse/HBASE-15186 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack > Attachments: 15186v2.patch, subpatch.patch > > > Break up the HBASE-15158 patch. Here is the first piece. It's a bunch of > findbugs fixes, a bit of utility for tag-handling (to be exploited in later > patches), some clarifying comments and javadoc (and javadoc fixes), and cleanup > of some of the Region API. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15185) Fix jdk8 javadoc warnings for branch-1.1
[ https://issues.apache.org/jira/browse/HBASE-15185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123522#comment-15123522 ] Yu Li commented on HBASE-15185: --- From the report there are still javadoc warnings left; will update the patch soon. > Fix jdk8 javadoc warnings for branch-1.1 > > > Key: HBASE-15185 > URL: https://issues.apache.org/jira/browse/HBASE-15185 > Project: HBase > Issue Type: Task >Affects Versions: 1.1.3 >Reporter: Yu Li >Assignee: Yu Li > Fix For: 1.1.4 > > Attachments: HBASE-15185.branch-1.1.patch > > > [This > link|https://builds.apache.org/job/PreCommit-HBASE-Build/340/artifact/patchprocess/patch-javadoc-hbase-server-jdk1.8.0_66.txt] > shows jdk8 javadoc warnings for current branch-1.1 code base. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15167) Deadlock in TestNamespaceAuditor.testRegionOperations on 1.1
[ https://issues.apache.org/jira/browse/HBASE-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123501#comment-15123501 ] Heng Chen commented on HBASE-15167: --- relates test has passed. https://builds.apache.org/job/PreCommit-HBASE-Build/348/testReport/org.apache.hadoop.hbase.namespace/TestNamespaceAuditor/ > Deadlock in TestNamespaceAuditor.testRegionOperations on 1.1 > > > Key: HBASE-15167 > URL: https://issues.apache.org/jira/browse/HBASE-15167 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 1.1.3 >Reporter: Nick Dimiduk >Assignee: Heng Chen >Priority: Critical > Fix For: 1.1.4 > > Attachments: HBASE-15167-branch-1.1.patch, blocked.log > > > This was left as a zombie after one of my test runs this weekend. > {noformat} > "WALProcedureStoreSyncThread" daemon prio=10 tid=0x7f3ccc209000 > nid=0x3960 in Object.wait() [0x7f3c6b6b5000] >java.lang.Thread.State: BLOCKED (on object monitor) > at java.lang.Object.wait(Native Method) > at java.lang.Object.wait(Object.java:503) > at org.apache.hadoop.ipc.Client.call(Client.java:1397) > - locked <0x0007f2813390> (a org.apache.hadoop.ipc.Client$Call) > at org.apache.hadoop.ipc.Client.call(Client.java:1364) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) > at com.sun.proxy.$Proxy23.create(Unknown Source) > at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > at com.sun.proxy.$Proxy23.create(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:264) > at 
sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279) > at com.sun.proxy.$Proxy24.create(Unknown Source) > at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279) > at com.sun.proxy.$Proxy24.create(Unknown Source) > at > org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1612) > at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1488) > at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1413) > at > org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:387) > at > org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:383) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:383) > at > org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:327) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784) > at > org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:766) > at > org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:733) > at > org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.tryRollWriter(WALProcedureStore.java:668) > at > org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.periodicRoll(WALProcedureStore.java:711) > at > 
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncLoop(WALProcedureStore.java:531) > at > org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.access$000(WALProcedureStore.java:66) > at > org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$1.run(WALProcedureStore.java:180) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15186) HBASE-15158 Preamble: fix findbugs, add javadoc and some util
[ https://issues.apache.org/jira/browse/HBASE-15186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123496#comment-15123496 ] stack commented on HBASE-15186: --- Thanks for the review [~anoop.hbase]. Good one on the null iterator check. Doing as you suggest. Should TagUtil be inside CellUtil altogether? Tag utility is currently split between the two classes. Or should CellUtil be the only consumer of TagUtil? On removing javadoc params that have no value: checkstyle flags these empty javadocs as issues. Let me fix the above complaints and commit. > HBASE-15158 Preamble: fix findbugs, add javadoc and some util > - > > Key: HBASE-15186 > URL: https://issues.apache.org/jira/browse/HBASE-15186 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack > Attachments: subpatch.patch > > > Break up the HBASE-15158 patch. Here is the first piece. It's a bunch of > findbugs fixes, a bit of utility for tag-handling (to be exploited in later > patches), some clarifying comments and javadoc (and javadoc fixes), and > cleanup of some of the Region API.
[jira] [Commented] (HBASE-15128) Disable region splits and merges in HBCK
[ https://issues.apache.org/jira/browse/HBASE-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123486#comment-15123486 ] Matteo Bertozzi commented on HBASE-15128: - {quote}IMO we can do a tradeoff, firstly we go on this issue and patch, after it committed, we could disable region split and merge at least. And then, we create an issue as subtask of HBASE-13936 to refactor all switches based on dynamic configuration{quote} This patch as it is is a -1 for me, because of that set_switch that wants to be generic. If you want to solve ONLY disabling split/merge, Jon's solution with the table lock is probably OK, and I also think we already use that in hbck. > Disable region splits and merges in HBCK > > > Key: HBASE-15128 > URL: https://issues.apache.org/jira/browse/HBASE-15128 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-15128.patch, HBASE-15128_v1.patch, > HBASE-15128_v3.patch > > > In large clusters where region splits are frequent, and HBCK runs take > longer, the concurrent splits cause further problems in HBCK since HBCK > assumes a static state for the region partition map. We have just seen a case > where HBCK undoes a concurrently splitting region, causing the number of > inconsistencies to go up. > We can have a mode in master where splits and merges are disabled like the > balancer and catalog janitor switches. Master will reject the split requests > if regionservers decide to split. This switch can be turned on / off by the > admins and also automatically by HBCK while it is running (similar to > the balancer switch being disabled by HBCK). > HBCK should also disable the Catalog Janitor just in case.
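[Editor's note] The master-side switch discussed above (HBCK disables splits/merges, the master rejects requests until the switch is restored, analogous to the existing balancer switch) can be sketched as below. All class and method names here are hypothetical illustrations, not the actual HBase Master API that this issue eventually produced.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of a master-side split/merge switch, modeled on
// the balancer-switch pattern described in the issue.
public class SplitMergeSwitch {
    private final AtomicBoolean splitOrMergeEnabled = new AtomicBoolean(true);

    /** Called by a tool such as HBCK before repairing regions; returns the previous state. */
    public boolean setSplitOrMergeEnabled(boolean enabled) {
        return splitOrMergeEnabled.getAndSet(enabled);
    }

    /** Master-side guard: split/merge requests from region servers check this first. */
    public void checkSplitOrMergeAllowed() {
        if (!splitOrMergeEnabled.get()) {
            throw new IllegalStateException("splits/merges are currently disabled");
        }
    }

    public static void main(String[] args) {
        SplitMergeSwitch master = new SplitMergeSwitch();
        master.checkSplitOrMergeAllowed();                        // allowed by default
        boolean previous = master.setSplitOrMergeEnabled(false);  // HBCK disables on start
        try {
            master.checkSplitOrMergeAllowed();
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        master.setSplitOrMergeEnabled(previous);                  // HBCK restores on exit
    }
}
```

Saving and restoring the previous state (rather than unconditionally re-enabling) matters so that HBCK does not overwrite a switch an admin had already turned off.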
[jira] [Commented] (HBASE-15122) Servlets generate XSS_REQUEST_PARAMETER_TO_SERVLET_WRITER findbugs warnings
[ https://issues.apache.org/jira/browse/HBASE-15122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123465#comment-15123465 ] Samir Ahmic commented on HBASE-15122: - I was trying to test this patch, but after running mvn -DskipTests clean package assembly:single I see this: {code} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on project hbase-assembly: Error rendering velocity resource. Error invoking method 'get(java.lang.Integer)' in java.util.ArrayList at META-INF/NOTICE.vm[line 275, column 22]: InvocationTargetException: Index: 0, Size: 0 -> [Help 1] {code} Based on [-HBASE-14199- | https://issues.apache.org/jira/browse/HBASE-14199] we need to update supplemental-models. > Servlets generate XSS_REQUEST_PARAMETER_TO_SERVLET_WRITER findbugs warnings > --- > > Key: HBASE-15122 > URL: https://issues.apache.org/jira/browse/HBASE-15122 > Project: HBase > Issue Type: Bug >Reporter: stack >Priority: Critical > Attachments: HBASE-15122.patch > > > In our JMXJsonServlet we are doing this: > jsonpcb = request.getParameter(CALLBACK_PARAM); > if (jsonpcb != null) { > response.setContentType("application/javascript; charset=utf8"); > writer.write(jsonpcb + "("); > ... > Findbugs complains rightly. There are other instances in our servlets and > then there are the pages generated by jamon excluded from findbugs checking > (and findbugs volunteers that it is dumb in this regard, finding only the most > egregious of violations). > We have no sanitizing tooling in hbase that I know of (correct me if I am > wrong). I started to pull on this thread and it runs deep. Our Jamon > templating engine (last updated in 2013 and before that, in 2011) doesn't > seem to have sanitizing means either, and there seems to be an outstanding XSS > complaint against jamon that goes unaddressed. 
> Could pull in something like > https://www.owasp.org/index.php/OWASP_Java_Encoder_Project and run all > emissions via it or get a templating engine that has sanitizing built in.
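[Editor's note] Besides output-encoding libraries like the OWASP encoder mentioned above, a common mitigation for the JSONP pattern quoted in the description is to validate the callback parameter against a strict identifier allowlist before echoing it into the response. This is a sketch under assumptions: the class name, method name, and length limit below are illustrative, not part of HBase's JMXJsonServlet.

```java
import java.util.regex.Pattern;

// Illustrative allowlist validator for a JSONP "callback" parameter.
// Only plain (optionally dotted) JavaScript identifiers pass, so
// attacker-controlled payloads such as "<script>" or "alert(1)//"
// can never reach writer.write().
public final class JsonpCallbackValidator {
    private static final Pattern SAFE_CALLBACK =
        Pattern.compile("[A-Za-z_$][A-Za-z0-9_$]*(\\.[A-Za-z_$][A-Za-z0-9_$]*)*");

    public static boolean isSafeCallback(String cb) {
        // Length cap is an arbitrary illustrative limit.
        return cb != null && cb.length() <= 64 && SAFE_CALLBACK.matcher(cb).matches();
    }

    public static void main(String[] args) {
        System.out.println(isSafeCallback("jQuery17101_cb")); // true
        System.out.println(isSafeCallback("window.onData"));  // true
        System.out.println(isSafeCallback("alert(1)//"));     // false
    }
}
```

In the servlet, the guard would run before the write: reject the request (or drop the callback) when `isSafeCallback` returns false, instead of concatenating the raw parameter.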
[jira] [Commented] (HBASE-15167) Deadlock in TestNamespaceAuditor.testRegionOperations on 1.1
[ https://issues.apache.org/jira/browse/HBASE-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123425#comment-15123425 ] Hadoop QA commented on HBASE-15167: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 13s {color} | {color:green} branch-1.1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} branch-1.1 passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} branch-1.1 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s {color} | {color:green} branch-1.1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s {color} | {color:green} branch-1.1 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 0s {color} | {color:red} hbase-server in branch-1.1 has 80 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 42s {color} | {color:red} hbase-server in branch-1.1 failed with JDK v1.8.0_72. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s {color} | {color:green} branch-1.1 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 4m 52s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 35s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 104m 28s {color} | {color:green} hbase-server in the patch passed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 101m 49s {color} | {color:green} hbase-server in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 231m 53s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.7.1 Server=1.7.1 Image:yetus/hbase:date2016-01-29 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12785127/HBASE-15167-branch-1.1.patch | | JIRA Issue | HBASE-15167 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 1ddf79dbe6f2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Lin
[jira] [Commented] (HBASE-14025) Update CHANGES.txt for 1.2
[ https://issues.apache.org/jira/browse/HBASE-14025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123376#comment-15123376 ] Hudson commented on HBASE-14025: SUCCESS: Integrated in HBase-1.2-IT #414 (See [https://builds.apache.org/job/HBase-1.2-IT/414/]) HBASE-14025 update CHANGES.txt for 1.2 RC1 (busbey: rev 46fc1d876bd604f2f71f8692d79978055a095a7a) * CHANGES.txt > Update CHANGES.txt for 1.2 > -- > > Key: HBASE-14025 > URL: https://issues.apache.org/jira/browse/HBASE-14025 > Project: HBase > Issue Type: Sub-task > Components: documentation >Affects Versions: 1.2.0 >Reporter: Sean Busbey >Assignee: Sean Busbey > Fix For: 1.2.0 > > > Since it's more effort than I expected, making a ticket to track actually > updating CHANGES.txt so that new RMs have an idea what to expect. > Maybe will make doc changes if there's enough here.
[jira] [Commented] (HBASE-14025) Update CHANGES.txt for 1.2
[ https://issues.apache.org/jira/browse/HBASE-14025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123334#comment-15123334 ] Hudson commented on HBASE-14025: SUCCESS: Integrated in HBase-1.2 #524 (See [https://builds.apache.org/job/HBase-1.2/524/]) HBASE-14025 update CHANGES.txt for 1.2 RC1 (busbey: rev 46fc1d876bd604f2f71f8692d79978055a095a7a) * CHANGES.txt > Update CHANGES.txt for 1.2 > -- > > Key: HBASE-14025 > URL: https://issues.apache.org/jira/browse/HBASE-14025 > Project: HBase > Issue Type: Sub-task > Components: documentation >Affects Versions: 1.2.0 >Reporter: Sean Busbey >Assignee: Sean Busbey > Fix For: 1.2.0 > > > Since it's more effort than I expected, making a ticket to track actually > updating CHANGES.txt so that new RMs have an idea what to expect. > Maybe will make doc changes if there's enough here.
[jira] [Commented] (HBASE-15186) HBASE-15158 Preamble: fix findbugs, add javadoc and some util
[ https://issues.apache.org/jira/browse/HBASE-15186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123300#comment-15123300 ] Hadoop QA commented on HBASE-15186: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 22s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 8s {color} | {color:green} master passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 59s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 8m 14s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 7s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 57s {color} | {color:red} hbase-common in master has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 34s {color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s {color} | {color:green} master passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s {color} | {color:green} master passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 17s {color} | {color:red} hbase-examples in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 10s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 47s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 25s {color} | {color:red} Patch generated 1 new checkstyle issues in hbase-server (total was 1031, now 1011). {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 29m 21s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 30s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 9s {color} | {color:green} hbase-client in the patch passed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 12s {color} | {color:green} hbase-common in the patch passed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s {color} | {color:green} hbase-examples in the patch passed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 156m 56s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 21s {color} | {color:green} hbase-client in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} unit {color} | {co
[jira] [Commented] (HBASE-15185) Fix jdk8 javadoc warnings for branch-1.1
[ https://issues.apache.org/jira/browse/HBASE-15185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123252#comment-15123252 ] Hadoop QA commented on HBASE-15185: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 25s {color} | {color:green} branch-1.1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} branch-1.1 passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} branch-1.1 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} branch-1.1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s {color} | {color:green} branch-1.1 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 55s {color} | {color:red} hbase-server in branch-1.1 has 80 extant Findbugs warnings. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 31s {color} | {color:red} hbase-server in branch-1.1 failed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s {color} | {color:green} branch-1.1 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 4m 10s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_66. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 24s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 46s {color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 209m 26s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_66 Timed out junit tests | org.apache.hadoop.hbase.namespace.TestNamespaceAuditor | | JDK v1.7.0_91 Failed junit tests | hadoop.hbase.regionserver.TestWALLockup | | JDK v1.7.0_91 Timed out junit tests | org.apache.hadoop.hbase.namespace.TestNamespaceAuditor | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.9.1 Server=1.9.1 Imag
[jira] [Commented] (HBASE-9393) Hbase does not closing a closed socket resulting in many CLOSE_WAIT
[ https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123199#comment-15123199 ] Ashish Singhi commented on HBASE-9393: -- The {{TestFlushSnapshotFromClient}} failure is not related to the patch. I manually ran it 3 times locally and was not able to reproduce it. {noformat} --- T E S T S --- Running org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.841 sec - in org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient Results : Tests run: 9, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] --- maven-surefire-plugin:2.18.1:test (secondPartTestsExecution) @ hbase-server --- [INFO] Tests are skipped. [INFO] [INFO] BUILD SUCCESS [INFO] [INFO] Total time: 1:50.899s [INFO] Finished at: Fri Jan 29 14:07:38 GMT+05:30 2016 [INFO] Final Memory: 36M/96M [INFO] --- T E S T S --- Running org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.399 sec - in org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient Results : Tests run: 9, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] --- maven-surefire-plugin:2.18.1:test (secondPartTestsExecution) @ hbase-server --- [INFO] Tests are skipped. [INFO] [INFO] BUILD SUCCESS [INFO] [INFO] Total time: 1:48.177s [INFO] Finished at: Fri Jan 29 14:13:52 GMT+05:30 2016 [INFO] Final Memory: 35M/89M [INFO] --- T E S T S --- Running org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 94.072 sec - in org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient Results : Tests run: 9, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] --- maven-surefire-plugin:2.18.1:test (secondPartTestsExecution) @ hbase-server --- [INFO] Tests are skipped. 
[INFO] [INFO] BUILD SUCCESS [INFO] [INFO] Total time: 1:48.012s [INFO] Finished at: Fri Jan 29 14:16:28 GMT+05:30 2016 [INFO] Final Memory: 36M/100M [INFO] {noformat} [~saint@gmail.com], is the v6 patch OK to commit? Thanks. > Hbase does not closing a closed socket resulting in many CLOSE_WAIT > > > Key: HBASE-9393 > URL: https://issues.apache.org/jira/browse/HBASE-9393 > Project: HBase > Issue Type: Bug >Affects Versions: 0.94.2, 0.98.0 > Environment: Centos 6.4 - 7 regionservers/datanodes, 8 TB per node, > 7279 regions >Reporter: Avi Zrachya >Assignee: Ashish Singhi >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-9393.patch, HBASE-9393.v1.patch, > HBASE-9393.v2.patch, HBASE-9393.v3.patch, HBASE-9393.v4.patch, > HBASE-9393.v5.patch, HBASE-9393.v5.patch, HBASE-9393.v5.patch, > HBASE-9393.v6.patch, HBASE-9393.v6.patch, HBASE-9393.v6.patch > > > HBase does not close a dead connection with the datanode. > This results in over 60K CLOSE_WAIT sockets, and at some point HBase can not connect > to the datanode because of too many mapped sockets from one host to another on > the same port. > The example below is with a low CLOSE_WAIT count because we had to restart > hbase to solve the problem; later in time it will increase to 60-100K sockets > on CLOSE_WAIT > [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l > 13156 > [root@hd2-region3 ~]# ps -ef |grep 21592 > root 17255 17219 0 12:26 pts/0 00:00:00 grep 21592 > hbase 21592 1 17 Aug29 ? 03:29:06 > /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m > -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode > -Dhbase.log.dir=/var/log/hbase > -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...
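[Editor's note] The symptom described in this issue is the classic CLOSE_WAIT leak: the remote peer closes its end, but the local side never calls close(), so the socket lingers in CLOSE_WAIT until the process exits. A minimal, self-contained illustration of the fix pattern (closing deterministically via try-with-resources) is below; the class and method names are illustrative and unrelated to HBase's actual DFS client handling.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketCloseDemo {
    // Connect over loopback and let try-with-resources close the socket on
    // every exit path; returns true if the socket was open inside the block.
    public static boolean connectAndClose(int port) {
        try (Socket s = new Socket("127.0.0.1", port)) {
            return s.isConnected() && !s.isClosed();
        } catch (IOException e) {
            return false;
        }
    }

    // Listen on an ephemeral loopback port and exercise the connect/close cycle.
    public static boolean demo() {
        try (ServerSocket server = new ServerSocket(0)) {
            return connectAndClose(server.getLocalPort());
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("closed cleanly: " + demo());
    }
}
```

A connection missed by a pattern like this (e.g. closed only on the happy path) is exactly what accumulates as the CLOSE_WAIT entries shown in the netstat output above.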
[jira] [Updated] (HBASE-15167) Deadlock in TestNamespaceAuditor.testRegionOperations on 1.1
[ https://issues.apache.org/jira/browse/HBASE-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Heng Chen updated HBASE-15167:
--
Status: Patch Available (was: Open)

> Deadlock in TestNamespaceAuditor.testRegionOperations on 1.1
>
> Key: HBASE-15167
> URL: https://issues.apache.org/jira/browse/HBASE-15167
> Project: HBase
> Issue Type: Bug
> Components: test
> Affects Versions: 1.1.3
> Reporter: Nick Dimiduk
> Assignee: Heng Chen
> Priority: Critical
> Fix For: 1.1.4
>
> Attachments: HBASE-15167-branch-1.1.patch, blocked.log
>
> This was left as a zombie after one of my test runs this weekend.
> {noformat}
> "WALProcedureStoreSyncThread" daemon prio=10 tid=0x7f3ccc209000 nid=0x3960 in Object.wait() [0x7f3c6b6b5000]
>    java.lang.Thread.State: BLOCKED (on object monitor)
>         at java.lang.Object.wait(Native Method)
>         at java.lang.Object.wait(Object.java:503)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1397)
>         - locked <0x0007f2813390> (a org.apache.hadoop.ipc.Client$Call)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy23.create(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy23.create(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:264)
>         at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
>         at com.sun.proxy.$Proxy24.create(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
>         at com.sun.proxy.$Proxy24.create(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1612)
>         at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1488)
>         at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1413)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:387)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:383)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:383)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:327)
>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
>         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784)
>         at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:766)
>         at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:733)
>         at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.tryRollWriter(WALProcedureStore.java:668)
>         at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.periodicRoll(WALProcedureStore.java:711)
>         at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncLoop(WALProcedureStore.java:531)
>         at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.access$000(WALProcedureStore.java:66)
>         at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$1.run(WALProcedureStore.java:180)
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (HBASE-15167) Deadlock in TestNamespaceAuditor.testRegionOperations on 1.1
[ https://issues.apache.org/jira/browse/HBASE-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Heng Chen updated HBASE-15167:
--
Attachment: HBASE-15167-branch-1.1.patch

> Deadlock in TestNamespaceAuditor.testRegionOperations on 1.1

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HBASE-15097) When the scan operation covered two regions,sometimes the final results have duplicated rows.
[ https://issues.apache.org/jira/browse/HBASE-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123183#comment-15123183 ] chenrongwei commented on HBASE-15097:
---
Hi, any update on this issue? @Anoop Sam John @Ted Yu

> When the scan operation covered two regions,sometimes the final results have duplicated rows.
>
> Key: HBASE-15097
> URL: https://issues.apache.org/jira/browse/HBASE-15097
> Project: HBase
> Issue Type: Bug
> Components: regionserver
> Affects Versions: 1.1.2
> Environment: centos 6.5, hbase 1.1.2
> Reporter: chenrongwei
> Assignee: chenrongwei
> Attachments: HBASE-15097-v001.patch, HBASE-15097-v002.patch, HBASE-15097-v003.patch, HBASE-15097-v004.patch, output.log, rowkey.txt, snapshot2016-01-13 pm 8.42.37.png
>
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> When the scan operation's start key and end key cover two regions, the first region returns rows beyond its end key, which ultimately leads to duplicated rows in the results.
> To avoid this problem, we should add a check before setting the variable "stopRow" in the HRegion class, like the following:
> {code}
> if (Bytes.equals(scan.getStopRow(), HConstants.EMPTY_END_ROW) && !scan.isGetScan()) {
>   this.stopRow = null;
> } else {
>   if (Bytes.compareTo(scan.getStopRow(), this.getRegionInfo().getEndKey()) >= 0) {
>     this.stopRow = this.getRegionInfo().getEndKey();
>   } else {
>     this.stopRow = scan.getStopRow();
>   }
> }
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
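The fix proposed in the comment above clamps the scan's stop row to the region's end key so a region never emits rows past its own boundary. A minimal standalone sketch of that clamping logic is below; it is not the actual HRegion code. The class and method names (StopRowClamp, clampStopRow, compare) are illustrative, the comparison stands in for HBase's Bytes.compareTo (unsigned lexicographic order), and the empty-stop-row case is simplified to "cap at the region boundary" rather than HBase's internal null sentinel.

```java
public class StopRowClamp {
    // Unsigned lexicographic byte[] comparison, analogous to Bytes.compareTo.
    static int compare(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Clamp a scan's stop row to the region's end key: if the requested stop
    // row lies at or beyond the region boundary, the boundary wins, so the
    // region cannot return rows that belong to its neighbor.
    static byte[] clampStopRow(byte[] scanStopRow, byte[] regionEndKey) {
        if (scanStopRow.length == 0) {
            // Open-ended scan: simplified here to stop at the region boundary.
            return regionEndKey;
        }
        return compare(scanStopRow, regionEndKey) >= 0 ? regionEndKey : scanStopRow;
    }

    public static void main(String[] args) {
        byte[] endKey = "row500".getBytes();
        // Stop row past the region boundary gets clamped to the boundary.
        System.out.println(new String(clampStopRow("row900".getBytes(), endKey)));
        // Stop row inside the region is kept as requested.
        System.out.println(new String(clampStopRow("row200".getBytes(), endKey)));
    }
}
```

Without this clamp, a scan spanning two regions can read past the first region's end key and then read the same rows again from the second region, which is the duplication the reporter observed.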
[jira] [Commented] (HBASE-15163) Add sampling code and metrics for get/scan/multi/mutate count separately
[ https://issues.apache.org/jira/browse/HBASE-15163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123164#comment-15123164 ] Hadoop QA commented on HBASE-15163:
---
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 4s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s {color} | {color:green} master passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s {color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s {color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s {color} | {color:green} master passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s {color} | {color:green} master passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 11s {color} | {color:red} hbase-hadoop2-compat in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 3m 59s {color} | {color:red} Patch generated 1 new checkstyle issues in hbase-server (total was 233, now 233). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 22m 2s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s {color} | {color:green} hbase-hadoop2-compat in the patch passed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s {color} | {color:green} hbase-hadoop-compat in the patch passed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 11s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 39s {color} | {color:green} hbase-hadoop2-compat in the patch passed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s {color} | {color:green} hbase-hadoop-compat in the patch passed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 82m 4s {color} | {color:green} hbase-server in the patch passed with JDK v1.7.0_91. {color} |
| {color:green}+1{color
[jira] [Commented] (HBASE-15167) Deadlock in TestNamespaceAuditor.testRegionOperations on 1.1
[ https://issues.apache.org/jira/browse/HBASE-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123161#comment-15123161 ] Heng Chen commented on HBASE-15167:
---
OK. Let me take it.

> Deadlock in TestNamespaceAuditor.testRegionOperations on 1.1

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)