[jira] [Commented] (HBASE-19782) Reject the replication request when peer is DA or A state
[ https://issues.apache.org/jira/browse/HBASE-19782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349934#comment-16349934 ]

Zheng Hu commented on HBASE-19782:
----------------------------------

Retry..

> Reject the replication request when peer is DA or A state
> ---------------------------------------------------------
>
>                 Key: HBASE-19782
>                 URL: https://issues.apache.org/jira/browse/HBASE-19782
>             Project: HBase
>          Issue Type: Sub-task
>          Components: Replication
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>            Priority: Major
>             Fix For: 3.0.0
>
>         Attachments: HBASE-19064-HBASE-19782.v1.patch, HBASE-19064-HBASE-19782.v2.patch, HBASE-19064-HBASE-19782.v2.patch
>
>
> According to the design doc, we'll initialize both cluster states to DA after adding the bidirectional replication path, and a cluster in DA state will reject replication requests. So for clusters A and B in state DA, if a received replication entry's table or namespace matches the peer, the cluster will skip applying it to its local region servers.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
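The skip/reject rule quoted in the description can be sketched roughly as below. This is only an illustration of the state check: the enum and method names are hypothetical stand-ins, not the actual classes introduced under HBASE-19064/HBASE-19782.

```java
// Hedged sketch of the rule described above. SyncState and
// shouldRejectReplication are illustrative names, not HBase's real API.
public class ReplicationStateSketch {
  enum SyncState { A, DA, S }  // Active, Downgrade-Active, Standby

  // Per the issue title/description: a cluster in A or DA state refuses to
  // apply replicated entries for tables/namespaces matched by the peer.
  static boolean shouldRejectReplication(SyncState state) {
    return state == SyncState.A || state == SyncState.DA;
  }

  public static void main(String[] args) {
    System.out.println(shouldRejectReplication(SyncState.DA)); // true
    System.out.println(shouldRejectReplication(SyncState.S));  // false
  }
}
```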
[jira] [Updated] (HBASE-19782) Reject the replication request when peer is DA or A state
[ https://issues.apache.org/jira/browse/HBASE-19782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zheng Hu updated HBASE-19782:
-----------------------------
    Attachment: HBASE-19064-HBASE-19782.v2.patch
[jira] [Commented] (HBASE-19782) Reject the replication request when peer is DA or A state
[ https://issues.apache.org/jira/browse/HBASE-19782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349940#comment-16349940 ]

Hadoop QA commented on HBASE-19782:
-----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 12s | Docker mode activated. |
| -1 | patch | 0m 2s | HBASE-19782 does not apply to master. Rebase required? Wrong branch? See https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for help. |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19782 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908934/HBASE-19064-HBASE-19782.v2.patch |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/11354/console |
| Powered by | Apache Yetus 0.7.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HBASE-19884) BucketEntryGroup's equals, hashCode and compareTo methods are not consistent
[ https://issues.apache.org/jira/browse/HBASE-19884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349942#comment-16349942 ]

Peter Somogyi commented on HBASE-19884:
---------------------------------------

Thanks for the reviews!

> BucketEntryGroup's equals, hashCode and compareTo methods are not consistent
> ----------------------------------------------------------------------------
>
>                 Key: HBASE-19884
>                 URL: https://issues.apache.org/jira/browse/HBASE-19884
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.0.0-beta-1
>            Reporter: Peter Somogyi
>            Assignee: Peter Somogyi
>            Priority: Major
>             Fix For: 2.0.0-beta-2
>
>         Attachments: HBASE-19884.master.001.patch, HBASE-19884.master.001.patch, HBASE-19884.master.001.patch, HBASE-19884.master.002.patch, HBASE-19884.master.003.patch
>
>
> BucketEntryGroup currently uses different fields to calculate compareTo, equals and hashCode. In some cases !a.equals(b) but a.compareTo(b) == 0. The javadoc of Comparator recommends that natural orderings be consistent with equals.
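The contract issue described above can be illustrated with a minimal sketch; this is not the actual BucketEntryGroup code or the HBASE-19884 patch, just an assumed example of the fix's shape: deriving compareTo, equals and hashCode from the same field keeps the natural ordering consistent with equals.

```java
// Minimal illustration of the Comparable/equals consistency contract, not
// HBase's BucketEntryGroup: all three methods derive from the same field,
// so a.compareTo(b) == 0 exactly when a.equals(b).
public class Group implements Comparable<Group> {
  private final long totalSize;

  public Group(long totalSize) { this.totalSize = totalSize; }

  @Override public int compareTo(Group other) {
    return Long.compare(totalSize, other.totalSize);
  }

  @Override public boolean equals(Object o) {
    return o instanceof Group && totalSize == ((Group) o).totalSize;
  }

  @Override public int hashCode() {
    return Long.hashCode(totalSize);  // same field as compareTo/equals
  }
}
```

When this holds, sorted collections such as TreeSet agree with hash-based collections about which elements are duplicates.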
[jira] [Updated] (HBASE-19782) Reject the replication request when peer is DA or A state
[ https://issues.apache.org/jira/browse/HBASE-19782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zheng Hu updated HBASE-19782:
-----------------------------
    Attachment: (was: HBASE-19064-HBASE-19782.v2.patch)
[jira] [Updated] (HBASE-19782) Reject the replication request when peer is DA or A state
[ https://issues.apache.org/jira/browse/HBASE-19782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zheng Hu updated HBASE-19782:
-----------------------------
    Attachment: HBASE-19782.HBASE-19064.v2.patch
[jira] [Updated] (HBASE-19917) Improve RSGroupBasedLoadBalancer#filterServers() to be more efficient
[ https://issues.apache.org/jira/browse/HBASE-19917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiang Li updated HBASE-19917:
-----------------------------
    Description: 
{code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java|borderStyle=solid}
  private List<ServerName> filterServers(Collection<Address> servers,
      Collection<ServerName> onlineServers) {
    ArrayList<ServerName> finalList = new ArrayList<>();
    for (Address server : servers) {
      for (ServerName curr : onlineServers) {
        if (curr.getAddress().equals(server)) {
          finalList.add(curr);
        }
      }
    }
    return finalList;
  }
{code}
filterServers() is meant to return the intersection of servers and onlineServers. The current implementation has O(m * n) time complexity (two nested loops); it could be O(m + n) if a HashSet were used, at the cost of increased space complexity. Another point that could be improved: filterServers() is only called from filterOfflineServers(), which passes a Set and a List, so the current filterServers(Collection, Collection) signature could be tightened.
> Improve RSGroupBasedLoadBalancer#filterServers() to be more efficient
> ---------------------------------------------------------------------
>
>                 Key: HBASE-19917
>                 URL: https://issues.apache.org/jira/browse/HBASE-19917
>             Project: HBase
>          Issue Type: Improvement
>          Components: rsgroup
>            Reporter: Xiang Li
>            Assignee: Xiang Li
>            Priority: Minor
>
> {code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java|borderStyle=solid}
>   private List<ServerName> filterServers(Collection<Address> servers,
>       Collection<ServerName> onlineServers) {
>     ArrayList<ServerName> finalList = new ArrayList<>();
>     for (Address server : servers) {
>       for (ServerName curr : onlineServers) {
>         if (curr.getAddress().equals(server)) {
>           finalList.add(curr);
>         }
>       }
>     }
>     return finalList;
>   }
> {code}
> filterServers() is meant to return the intersection of servers and onlineServers. The current implementation has O(m * n) time complexity (two nested loops); it could be O(m + n) if a HashSet were used, at the cost of increased space complexity. Another point that could be improved: filterServers() is only called from filterOfflineServers(), which passes a Set and a List, so the current filterServers(Collection, Collection) signature could be tightened.
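The O(m + n) variant suggested above can be sketched as follows. To keep the sketch self-contained, a plain String stands in for HBase's Address and a minimal inner class stands in for ServerName; only the HashSet idea is the point.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FilterServersSketch {
  // Stand-in for org.apache.hadoop.hbase.ServerName: an address plus a start code.
  static final class ServerName {
    final String address;
    final long startCode;
    ServerName(String address, long startCode) {
      this.address = address;
      this.startCode = startCode;
    }
    String getAddress() { return address; }
  }

  // O(m + n): build a HashSet of requested addresses (O(m) extra space),
  // then make a single pass over the online servers.
  static List<ServerName> filterServers(Collection<String> servers,
      Collection<ServerName> onlineServers) {
    Set<String> wanted = new HashSet<>(servers);
    List<ServerName> finalList = new ArrayList<>();
    for (ServerName curr : onlineServers) {
      if (wanted.contains(curr.getAddress())) {  // O(1) expected lookup
        finalList.add(curr);
      }
    }
    return finalList;
  }

  public static void main(String[] args) {
    List<ServerName> online =
        Arrays.asList(new ServerName("a:1", 1L), new ServerName("b:1", 2L));
    // Only "a:1" is both requested and online.
    List<ServerName> out = filterServers(Arrays.asList("a:1", "c:1"), online);
    System.out.println(out.size()); // prints 1
  }
}
```

The trade-off is exactly the one the description names: the extra HashSet costs O(m) memory in exchange for dropping the inner loop.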
[jira] [Created] (HBASE-19919) Tidying up logging
stack created HBASE-19919:
--------------------------

             Summary: Tidying up logging
                 Key: HBASE-19919
                 URL: https://issues.apache.org/jira/browse/HBASE-19919
             Project: HBase
          Issue Type: Bug
            Reporter: stack
[jira] [Updated] (HBASE-19919) Tidying up logging
[ https://issues.apache.org/jira/browse/HBASE-19919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-19919:
--------------------------
    Description: Reading logs, there is a bunch of stuff we don't need, thread names are too long, etc. Doing a little tidying.

> Tidying up logging
> ------------------
>
>                 Key: HBASE-19919
>                 URL: https://issues.apache.org/jira/browse/HBASE-19919
>             Project: HBase
>          Issue Type: Bug
>            Reporter: stack
>            Priority: Major
>         Attachments: HBASE-19919.branch-2.001.patch
>
>
> Reading logs, there is a bunch of stuff we don't need, thread names are too long, etc. Doing a little tidying.
[jira] [Updated] (HBASE-19919) Tidying up logging
[ https://issues.apache.org/jira/browse/HBASE-19919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-19919:
--------------------------
    Attachment: HBASE-19919.branch-2.001.patch
[jira] [Commented] (HBASE-19919) Tidying up logging
[ https://issues.apache.org/jira/browse/HBASE-19919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349955#comment-16349955 ]

stack commented on HBASE-19919:
-------------------------------

.001 is first batch. Need to test it... Posting here so we don't lose what has been done so far.
[jira] [Commented] (HBASE-19918) Promote TestAsyncClusterAdminApi to LargeTests
[ https://issues.apache.org/jira/browse/HBASE-19918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349957#comment-16349957 ]

stack commented on HBASE-19918:
-------------------------------

+1

> Promote TestAsyncClusterAdminApi to LargeTests
> ----------------------------------------------
>
>                 Key: HBASE-19918
>                 URL: https://issues.apache.org/jira/browse/HBASE-19918
>             Project: HBase
>          Issue Type: Sub-task
>          Components: test
>    Affects Versions: 2.0.0-beta-1
>            Reporter: Guanghao Zhang
>            Assignee: Guanghao Zhang
>            Priority: Major
>
> https://builds.apache.org/job/HBase%20Nightly/job/branch-2/221/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncClusterAdminApi/org_apache_hadoop_hbase_client_TestAsyncClusterAdminApi/
> org.junit.runners.model.TestTimedOutException: test timed out after 180 seconds
> Found this timeout in our branch-2 nightly jobs, and this test runs for more than 110 seconds on my local computer.
[jira] [Updated] (HBASE-19918) Promote TestAsyncClusterAdminApi to LargeTests
[ https://issues.apache.org/jira/browse/HBASE-19918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guanghao Zhang updated HBASE-19918:
-----------------------------------
    Attachment: HBASE-19918.master.001.patch
[jira] [Updated] (HBASE-19918) Promote TestAsyncClusterAdminApi to LargeTests
[ https://issues.apache.org/jira/browse/HBASE-19918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guanghao Zhang updated HBASE-19918:
-----------------------------------
    Status: Patch Available  (was: Open)
[jira] [Commented] (HBASE-19855) Refactor RegionScannerImpl.nextInternal method
[ https://issues.apache.org/jira/browse/HBASE-19855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349976#comment-16349976 ]

Hadoop QA commented on HBASE-19855:
-----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 8s | Docker mode activated. |
|| || || || Prechecks ||
|  0 | findbugs | 0m 0s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || master Compile Tests ||
| +1 | mvninstall | 4m 16s | master passed |
| +1 | compile | 0m 41s | master passed |
| +1 | checkstyle | 1m 6s | master passed |
| +1 | shadedjars | 5m 46s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 0m 28s | master passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 4m 17s | the patch passed |
| +1 | compile | 0m 40s | the patch passed |
| +1 | javac | 0m 40s | the patch passed |
| +1 | checkstyle | 1m 7s | hbase-server: The patch generated 0 new + 210 unchanged - 2 fixed = 210 total (was 212) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 41s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 18m 16s | Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. |
| +1 | javadoc | 0m 31s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 112m 4s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
|    |  | 148m 48s |  |

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.TestPartialResultsFromClientSide |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19855 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908925/HBASE-19855.master.002.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 438810befc0c 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / a2bc19aa11 |
| maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | https://builds.apache.org/job/PreCommit-HBASE-Build/11350/artifact/patchprocess/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/11350/testReport/ |
| Max. process+thread count | 5361 (vs.
[jira] [Commented] (HBASE-19841) Tests against hadoop3 fail with StreamLacksCapabilityException
[ https://issues.apache.org/jira/browse/HBASE-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349986#comment-16349986 ]

Hudson commented on HBASE-19841:
--------------------------------

FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4512 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4512/])
HBASE-19841 Every HTU should be local until DFS starts (mdrob: rev 99b9fff07bb2669792f9c1c8a796605971d02592)
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverStacking.java
* (edit) hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseCommonTestingUtility.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHStore.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/wal/WALPerformanceEvaluation.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/asyncfs/TestLocalAsyncOutput.java
* (edit) hbase-server/src/test/resources/hbase-site.xml
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java

> Tests against hadoop3 fail with StreamLacksCapabilityException
> --------------------------------------------------------------
>
>                 Key: HBASE-19841
>                 URL: https://issues.apache.org/jira/browse/HBASE-19841
>             Project: HBase
>          Issue Type: Test
>            Reporter: Ted Yu
>            Assignee: Mike Drob
>            Priority: Major
>             Fix For: 2.0.0-beta-2
>
>         Attachments: 19841.007.patch, 19841.06.patch, 19841.v0.txt, 19841.v1.txt, HBASE-19841.v10.patch, HBASE-19841.v11.patch, HBASE-19841.v11.patch, HBASE-19841.v2.patch, HBASE-19841.v3.patch, HBASE-19841.v4.patch, HBASE-19841.v5.patch, HBASE-19841.v7.patch, HBASE-19841.v8.patch, HBASE-19841.v8.patch, HBASE-19841.v8.patch, HBASE-19841.v9.patch
>
>
> The following can be observed running against hadoop3:
> {code}
> java.io.IOException: cannot get log writer
>   at org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> Caused by: org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: hflush and hsync
>   at org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> {code}
> This was due to hbase-server/src/test/resources/hbase-site.xml not being picked up by the Configuration object. Among the configs from this file, the value for "hbase.unsafe.stream.capability.enforce" relaxes the check for presence of hflush and hsync. Without this config entry, StreamLacksCapabilityException is thrown.
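For reference, the relaxing entry named in the description above takes the usual Hadoop configuration XML shape in hbase-server/src/test/resources/hbase-site.xml. The key is the one quoted in the issue; the `false` value is the setting that disables the capability check (config fragment only, a sketch rather than the project's exact file):

```xml
<configuration>
  <!-- Relax the hflush/hsync capability check for tests running on
       filesystems (e.g. LocalFileSystem) that do not provide them. -->
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
</configuration>
```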
[jira] [Commented] (HBASE-19895) Add keepDeletedCells option in ScanOptions for customizing scanInfo in pre-hooks
[ https://issues.apache.org/jira/browse/HBASE-19895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349987#comment-16349987 ]

Hudson commented on HBASE-19895:
--------------------------------

FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4512 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4512/])
HBASE-19895 Add keepDeletedCells option in ScanOptions for customizing (tedyu: rev a11258599e7412ea4867ef850cd67bb9e7bd8e67)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanOptions.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanInfo.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultCompactSelection.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CustomizedScanInfoBuilder.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java

> Add keepDeletedCells option in ScanOptions for customizing scanInfo in pre-hooks
> --------------------------------------------------------------------------------
>
>                 Key: HBASE-19895
>                 URL: https://issues.apache.org/jira/browse/HBASE-19895
>             Project: HBase
>          Issue Type: Sub-task
>          Components: Coprocessors
>            Reporter: Ankit Singhal
>            Assignee: Ankit Singhal
>            Priority: Major
>             Fix For: 2.0.0-beta-2
>
>         Attachments: 19895.v2.txt, HBASE-19895.patch, HBASE-19895_v1.patch, HBASE-19895_v1.patch
[jira] [Commented] (HBASE-19901) Up yetus proclimit on nightlies
[ https://issues.apache.org/jira/browse/HBASE-19901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349984#comment-16349984 ]

Hudson commented on HBASE-19901:
--------------------------------

FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4512 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4512/])
Revert "HBASE-19901 HBASE-19901 Up yetus proclimit on nightlies" (stack: rev 18eec8c1a58574ae3ff529904f677288ff696765)
* (edit) dev-support/hbase_nightly_yetus.sh
* (edit) dev-support/hbase-personality.sh
HBASE-19901 HBASE-19901 Up yetus proclimit on nightlies; REAPPLY TO TEST (stack: rev cb7bfc21da4e2658602a9a2b2080c4d883287dd1)
* (edit) dev-support/hbase_nightly_yetus.sh
* (edit) dev-support/hbase-personality.sh
HBASE-19901 HBASE-19901 Up yetus proclimit on nightlies; AMENDMENT (stack: rev a2bc19aa112b8cd0697845d0825d277ff3a8bcfc)
* (edit) dev-support/hbase_nightly_yetus.sh

> Up yetus proclimit on nightlies
> -------------------------------
>
>                 Key: HBASE-19901
>                 URL: https://issues.apache.org/jira/browse/HBASE-19901
>             Project: HBase
>          Issue Type: Bug
>            Reporter: stack
>            Assignee: stack
>            Priority: Major
>         Attachments: HBASE-19901.master.001.patch, HBASE-19901.master.002.patch
>
>
> We're on 0.7.0 now, which enforces limits meant to protect against runaway processes. The default is 1000 procs; HBase test runs seem to consume almost 4k. Up our proclimit.
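The change described above amounts to passing a larger process limit to Yetus from the nightly script. A minimal sketch, assuming the `--proclimit` argument (Yetus 0.7.0) and using an illustrative variable name and value rather than the committed patch:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of raising the Yetus process limit in
# dev-support/hbase_nightly_yetus.sh. PROC_LIMIT and 10000 are
# illustrative assumptions, not the actual HBASE-19901 change.
PROC_LIMIT="${PROC_LIMIT:-10000}"

# Prepend the limit to whatever arguments the script already assembles.
YETUS_ARGS=("--proclimit=${PROC_LIMIT}" "${YETUS_ARGS[@]}")

echo "${YETUS_ARGS[0]}"
```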
[jira] [Updated] (HBASE-19886) Display maintenance mode in shell, web UI
[ https://issues.apache.org/jira/browse/HBASE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Balazs Meszaros updated HBASE-19886:
------------------------------------
    Description: 
Maintenance mode was introduced in HBASE-16008. This mode is controlled by hbck. Splitting and balancing are disabled in this mode. It would be useful to present this information to users through the shell and web UI.

  was:
Maintenance mode was introduced in HBASE-16008. This mode is controlled by hbck. Splitting and balancing are disabled in this mode. It would be useful to present this information to users through the shell, web UI and JMX.

> Display maintenance mode in shell, web UI
> -----------------------------------------
>
>                 Key: HBASE-19886
>                 URL: https://issues.apache.org/jira/browse/HBASE-19886
>             Project: HBase
>          Issue Type: New Feature
>    Affects Versions: 2.0.0, 3.0.0, 1.4.2
>            Reporter: Balazs Meszaros
>            Assignee: Balazs Meszaros
>            Priority: Major
>             Fix For: 2.0.0, 3.0.0, 1.4.2
>
>         Attachments: HBASE-19886.master.001.patch
>
>
> Maintenance mode was introduced in HBASE-16008. This mode is controlled by hbck. Splitting and balancing are disabled in this mode. It would be useful to present this information to users through the shell and web UI.
[jira] [Updated] (HBASE-19886) Display maintenance mode in shell, web UI
[ https://issues.apache.org/jira/browse/HBASE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Balazs Meszaros updated HBASE-19886:
------------------------------------
    Summary: Display maintenance mode in shell, web UI  (was: Display maintenance mode in shell, web UI, JMX)
[jira] [Updated] (HBASE-19886) Display maintenance mode in shell, web UI
[ https://issues.apache.org/jira/browse/HBASE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Balazs Meszaros updated HBASE-19886:
------------------------------------
    Attachment: HBASE-19886.master.002.patch
[jira] [Commented] (HBASE-19886) Display maintenance mode in shell, web UI
[ https://issues.apache.org/jira/browse/HBASE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349991#comment-16349991 ]

Balazs Meszaros commented on HBASE-19886:
-----------------------------------------

Ok, I removed it from metrics.
[jira] [Comment Edited] (HBASE-19886) Display maintenance mode in shell, web UI
[ https://issues.apache.org/jira/browse/HBASE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349991#comment-16349991 ]

Balazs Meszaros edited comment on HBASE-19886 at 2/2/18 8:50 AM:
-----------------------------------------------------------------

Thanks [~stack], I removed it from metrics, [~appy] also suggested it.

  was (Author: balazs.meszaros): Ok, I removed it from metrics.
[jira] [Commented] (HBASE-19082) Implement a procedure to convert RS from DA to S
[ https://issues.apache.org/jira/browse/HBASE-19082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349996#comment-16349996 ] Duo Zhang commented on HBASE-19082: --- Rebased HBASE-19064 so that I can start working on the new WALProvider API. > Implement a procedure to convert RS from DA to S > > > Key: HBASE-19082 > URL: https://issues.apache.org/jira/browse/HBASE-19082 > Project: HBase > Issue Type: Sub-task > Components: Replication >Reporter: Duo Zhang >Priority: Major > Labels: HBASE-19064 > Fix For: 3.0.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19855) Refactor RegionScannerImpl.nextInternal method
[ https://issues.apache.org/jira/browse/HBASE-19855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350024#comment-16350024 ] Hadoop QA commented on HBASE-19855: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 53s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s{color} | {color:green} hbase-server: The patch generated 0 new + 208 unchanged - 2 fixed = 208 total (was 210) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 41s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 20m 38s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 43s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}154m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.security.visibility.TestVisibilityLabelsWithDefaultVisLabelService | | | hadoop.hbase.master.TestMasterMetrics | | | hadoop.hbase.regionserver.TestMajorCompaction | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19855 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908929/HBASE-19855.master.003.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux f74a46a91c3a 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / fc6d140adf | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/11351/artifact/patchprocess/patch-unit-hbase-server.
[jira] [Commented] (HBASE-19904) Break dependency of WAL constructor on Replication
[ https://issues.apache.org/jira/browse/HBASE-19904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350027#comment-16350027 ] Hadoop QA commented on HBASE-19904: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 35 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 29s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 51s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 59s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} The patch hbase-replication passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 23s{color} | {color:green} hbase-server: The patch generated 0 new + 1151 unchanged - 27 fixed = 1151 total (was 1178) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} The patch hbase-mapreduce passed checkstyle {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 3s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 14m 32s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s{color} | {color:green} hbase-replication in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 28s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 1s{color} | {color:green} hbase-mapreduce in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 4s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}156m 16s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.client.TestAvoidCellReferencesIntoShippedBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db | | JIRA Issue | HBASE-19904 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908928/HBASE-19904-branch-2
[jira] [Updated] (HBASE-19904) Break dependency of WAL constructor on Replication
[ https://issues.apache.org/jira/browse/HBASE-19904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-19904: -- Attachment: HBASE-19904-branch-2.patch > Break dependency of WAL constructor on Replication > -- > > Key: HBASE-19904 > URL: https://issues.apache.org/jira/browse/HBASE-19904 > Project: HBase > Issue Type: Improvement > Components: Replication, wal >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19904-branch-2.patch, HBASE-19904-branch-2.patch, > HBASE-19904-v3.patch, HBASE-19904-v3.patch, HBASE-19904-v4.patch, > HBASE-19904-v4.patch, HBASE-19904-v5.patch > > > When implementing synchronous replication, I found that we need to depend > more on replication in WAL so it is even more pain... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
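The decoupling HBASE-19904 describes is, generically, replacing a constructor-time dependency with post-construction listener registration, so a WAL can be built without knowing about replication and replication can hook in afterwards. A minimal stand-alone Java sketch of that pattern (the class and method names here are simplified stand-ins, not the actual HBase WALProvider API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified stand-ins for the real WAL/listener classes.
interface WALActionsListener {
  void postAppend(String entry);
}

class WAL {
  // No replication reference in the constructor: listeners attach later.
  private final List<WALActionsListener> listeners = new ArrayList<>();

  void registerListener(WALActionsListener l) {
    listeners.add(l);
  }

  void append(String entry) {
    // ... write the entry to the log, then notify observers ...
    for (WALActionsListener l : listeners) {
      l.postAppend(entry);
    }
  }
}

public class WalDecouplingDemo {
  public static void main(String[] args) {
    WAL wal = new WAL();                 // constructed with no replication knowledge
    List<String> shipped = new ArrayList<>();
    wal.registerListener(shipped::add);  // replication-like observer hooks in afterwards
    wal.append("edit-1");
    System.out.println(shipped);
  }
}
```

With this shape, the WAL module compiles with no dependency on the replication module; only the wiring code needs to know both.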
[jira] [Updated] (HBASE-19855) Refactor RegionScannerImpl.nextInternal method
[ https://issues.apache.org/jira/browse/HBASE-19855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-19855: --- Attachment: HBASE-19855.master.004.patch > Refactor RegionScannerImpl.nextInternal method > -- > > Key: HBASE-19855 > URL: https://issues.apache.org/jira/browse/HBASE-19855 > Project: HBase > Issue Type: Bug >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Major > Attachments: HBASE-19855.master.002.patch, > HBASE-19855.master.003.patch, HBASE-19855.master.004.patch > > > Now this method is too complicated and confusing... > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19876) The exception happening in converting pb mutation to hbase.mutation messes up the CellScanner
[ https://issues.apache.org/jira/browse/HBASE-19876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19876: --- Attachment: HBASE-19876.v3.patch > The exception happening in converting pb mutation to hbase.mutation messes up > the CellScanner > - > > Key: HBASE-19876 > URL: https://issues.apache.org/jira/browse/HBASE-19876 > Project: HBase > Issue Type: Bug >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Critical > Fix For: 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-2, 1.4.2 > > Attachments: HBASE-19876.v0.patch, HBASE-19876.v1.patch, > HBASE-19876.v2.patch, HBASE-19876.v3.patch, HBASE-19876.v3.patch, > HBASE-19876.v3.patch, HBASE-19876.v3.patch > > > {code:java} > 2018-01-27 22:51:43,794 INFO [hconnection-0x3291b443-shared-pool11-t6] > client.AsyncRequestFutureImpl(778): id=5, table=testQuotaStatusFromMaster3, > attempt=6/16 failed=20ops, last > exception=org.apache.hadoop.hbase.client.WrongRowIOException: > org.apache.hadoop.hbase.client.WrongRowIOException: The row in xxx doesn't > match the original one aaa > at org.apache.hadoop.hbase.client.Mutation.add(Mutation.java:776) > at org.apache.hadoop.hbase.client.Put.add(Put.java:282) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toPut(ProtobufUtil.java:642) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:952) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:896) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2591) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41560) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:404) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304){code} > I noticed this bug when testing the table space quota. > When the RS converts a pb mutation to an hbase Mutation, a quota exception or > cell exception may be thrown. > {code:java} > for (ClientProtos.Action action: mutations) { > MutationProto m = action.getMutation(); > Mutation mutation; > if (m.getMutateType() == MutationType.PUT) { > mutation = ProtobufUtil.toPut(m, cells); > batchContainsPuts = true; > } else { > mutation = ProtobufUtil.toDelete(m, cells); > batchContainsDelete = true; > } > mutationActionMap.put(mutation, action); > mArray[i++] = mutation; > checkCellSizeLimit(region, mutation); > // Check if a space quota disallows this mutation > spaceQuotaEnforcement.getPolicyEnforcement(region).check(mutation); > quota.addMutation(mutation); > } > {code} > The RS catches the exception, but it does not make the CellScanner skip the > failed mutation's cells. > {code:java} > } catch (IOException ie) { > if (atomic) { > throw ie; > } > for (Action mutation : mutations) { > builder.addResultOrException(getResultOrException(ie, > mutation.getIndex())); > } > } > {code} > The bug results in a WrongRowIOException for the remaining mutations since they > refer to invalid cells. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
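The invariant this bug report points at: all mutations in a multi() batch share one CellScanner, so when conversion of one mutation fails, the server must still advance the scanner past that mutation's cells before handling the next action, or every later mutation is built from the wrong cells. A stand-alone Java sketch of that invariant using a plain iterator and hypothetical cell counts in place of HBase's real CellScanner and RSRpcServices code:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class CellScannerSkipDemo {
  // Simplified model: each "mutation" owns a fixed number of cells in one
  // shared stream, mirroring how multi() ships a batch's cells in a single
  // CellScanner.
  static class Op {
    final String row;
    final int cellCount;
    final boolean failsConversion;
    Op(String row, int cellCount, boolean failsConversion) {
      this.row = row;
      this.cellCount = cellCount;
      this.failsConversion = failsConversion;
    }
  }

  static List<String> apply(List<Op> ops, Iterator<String> cells) {
    List<String> applied = new ArrayList<>();
    for (Op op : ops) {
      if (op.failsConversion) {
        // The key step: drain this op's cells so the shared scanner stays
        // aligned for the mutations that follow. Skipping this drain is the
        // misalignment the issue describes.
        for (int i = 0; i < op.cellCount; i++) {
          cells.next();
        }
        continue;
      }
      for (int i = 0; i < op.cellCount; i++) {
        applied.add(op.row + ":" + cells.next());
      }
    }
    return applied;
  }

  public static void main(String[] args) {
    List<String> stream = List.of("a1", "a2", "b1", "c1", "c2");
    List<Op> ops = List.of(
        new Op("a", 2, false),
        new Op("b", 1, true),   // conversion fails; its cell b1 is drained
        new Op("c", 2, false));
    // Row "c" still gets its own cells c1, c2 because b1 was skipped.
    System.out.println(apply(ops, stream.iterator()));
  }
}
```

Without the drain, row "c" would be paired with "b1" and "c1", which is exactly the "row in xxx doesn't match the original one" failure in the stack trace above.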
[jira] [Commented] (HBASE-19876) The exception happening in converting pb mutation to hbase.mutation messes up the CellScanner
[ https://issues.apache.org/jira/browse/HBASE-19876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350174#comment-16350174 ] Hadoop QA commented on HBASE-19876: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s{color} | {color:red} HBASE-19876 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HBASE-19876 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908964/HBASE-19876.v3.patch | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/11360/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19918) Promote TestAsyncClusterAdminApi to LargeTests
[ https://issues.apache.org/jira/browse/HBASE-19918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350150#comment-16350150 ] Hadoop QA commented on HBASE-19918: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 0s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 9s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 16m 32s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 38s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}131m 0s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.TestFullLogReconstruction | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19918 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908944/HBASE-19918.master.001.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux a044b71e1fa2 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / fc6d140adf | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/11356/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/11356/testReport/ | | Max. process+thread count | 5642 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/11356/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.o
[jira] [Commented] (HBASE-19782) Reject the replication request when peer is DA or A state
[ https://issues.apache.org/jira/browse/HBASE-19782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350151#comment-16350151 ] Hadoop QA commented on HBASE-19782: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} HBASE-19064 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 30s{color} | {color:green} HBASE-19064 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} HBASE-19064 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 24s{color} | {color:green} HBASE-19064 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 37s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} HBASE-19064 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 19s{color} | {color:red} hbase-server: The patch generated 1 new + 54 unchanged - 1 fixed = 55 total (was 55) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 19s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 20m 49s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s{color} | {color:green} hbase-replication in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 35s{color} | {color:red} hbase-server in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}152m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.TestZooKeeper | | | hadoop.hbase.TestFullLogReconstruction | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19782 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908935/HBASE-19782.HBASE-19064.v2.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 75dc1f3f4945 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | HBASE-19064 / c40ddd6458 | | maven | version: Ap
[jira] [Commented] (HBASE-19886) Display maintenance mode in shell, web UI
[ https://issues.apache.org/jira/browse/HBASE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350166#comment-16350166 ] Hadoop QA commented on HBASE-19886: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 8s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} rubocop {color} | {color:red} 0m 27s{color} | {color:red} The patch generated 6 new + 407 unchanged - 1 fixed = 413 total (was 408) {color} | | {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red} 0m 5s{color} | {color:red} 
The patch generated 3 new + 730 unchanged - 0 fixed = 733 total (was 730) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 34s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 31s{color} | {color:green} hbase-shell in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}127m 15s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.TestFullLogReconstruction | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19886 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908947/HBASE-19886.master.002.patch | | Optional Tests | asflicense javac javadoc unit rubocop ruby_lint | | uname | Linux 6879c1a3bd89 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / fc6d140adf | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | rubocop | v0.52.1 | | rubocop | https://builds.apache.org/job/PreCommit-HBASE-Build/11357/artifact/patchprocess/diff-patch-rubocop.txt | | ruby-lint | v2.3.1 | | ruby-lint | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11357/artifact/patchprocess/diff-patch-ruby-lint.txt | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/11357/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/11357/testReport/ | | Max. process+thread count | 4970 (vs. ulimit of 1) | | modules | C: hbase-server hbase-shell U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/11357/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > Display maintenance mode in shell, web UI > - > > Key: HBASE-19886 > URL: https://issues.apache.org/jira/browse/HBASE-19886 > Project: HBase > Issue Type: New Feature >Affects Versions: 2.0.0, 3.0.0,
[jira] [Updated] (HBASE-19720) Rename WALKey#getTabnename to WALKey#getTableName
[ https://issues.apache.org/jira/browse/HBASE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19720: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks for the reviews. [~stack] > Rename WALKey#getTabnename to WALKey#getTableName > - > > Key: HBASE-19720 > URL: https://issues.apache.org/jira/browse/HBASE-19720 > Project: HBase > Issue Type: Task >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > Fix For: 2.0.0 > > Attachments: HBASE-19720.v0.patch > > > WALKey is denoted as LP so its naming should obey the common rule in our > codebase. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19848) Zookeeper thread leaks in hbase-spark bulkLoad method
[ https://issues.apache.org/jira/browse/HBASE-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Key Hutu updated HBASE-19848: - Attachment: HBASE-19848-V3.patch > Zookeeper thread leaks in hbase-spark bulkLoad method > - > > Key: HBASE-19848 > URL: https://issues.apache.org/jira/browse/HBASE-19848 > Project: HBase > Issue Type: Bug > Components: spark, Zookeeper >Affects Versions: 1.2.0 > Environment: hbase-spark-1.2.0-cdh5.12.1 version > spark 1.6 >Reporter: Key Hutu >Assignee: Key Hutu >Priority: Major > Labels: performance > Fix For: 1.2.0 > > Attachments: HBASE-19848-V2.patch, HBASE-19848-V3.patch, > HBaseContext.patch, HBaseContext.scala > > Original Estimate: 72h > Remaining Estimate: 72h > > In the hbase-spark project, HBaseContext provides a bulkLoad method for easily loading Spark RDD data into HBase. But when I use it frequently, the program throws a "cannot create native thread" exception. > Running pstack on the Spark driver process shows the thread count steadily increasing, and jstack shows many threads named "main-SendThread" and "main-EventThread". > It seems that a connection is created before the bulk load, but its close method is never invoked. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
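The description above boils down to a resource-lifetime bug: a connection (which owns the ZooKeeper "main-SendThread"/"main-EventThread" client threads) is opened on every bulk-load call but never closed. The shape of the bug and of the fix can be sketched with a stub resource; `StubConnection` below is illustrative only, not the real HBase `Connection` API:

```java
// Illustrative sketch of the leak pattern described above. StubConnection
// stands in for an HBase Connection, which owns ZooKeeper client threads.
public class BulkLoadSketch {

    static class StubConnection implements AutoCloseable {
        boolean closed = false;

        @Override
        public void close() {
            closed = true; // a real close() would shut down the ZK send/event threads
        }
    }

    // Leaky shape: the connection is opened and never closed, so its
    // background threads accumulate across repeated bulkLoad calls.
    static StubConnection leakyBulkLoad() {
        StubConnection conn = new StubConnection();
        // ... write HFiles, load them ...
        return conn; // nobody ever calls close()
    }

    // Fixed shape: try-with-resources guarantees close() runs on every
    // exit path, including exceptions thrown during the load.
    static StubConnection safeBulkLoad() {
        try (StubConnection conn = new StubConnection()) {
            // ... write HFiles, load them ...
            return conn; // close() still runs before control returns to the caller
        }
    }
}
```

The actual patch applies the same idea to the Scala code path in HBaseContext: the connection created before the bulk load is closed once loading finishes, regardless of how the method exits.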
[jira] [Commented] (HBASE-19848) Zookeeper thread leaks in hbase-spark bulkLoad method
[ https://issues.apache.org/jira/browse/HBASE-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350222#comment-16350222 ] Hadoop QA commented on HBASE-19848: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 52s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 0m 27s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} scalac {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 21s{color} | {color:green} hbase-spark in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 9s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19848 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908972/HBASE-19848-V3.patch | | Optional Tests | asflicense scalac scaladoc unit compile | | uname | Linux c6ceb0be5c72 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 2f4d0b94bc | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/11361/testReport/ | | Max. process+thread count | 1036 (vs. ulimit of 1) | | modules | C: hbase-spark U: hbase-spark | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/11361/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. 
> Zookeeper thread leaks in hbase-spark bulkLoad method > - > > Key: HBASE-19848 > URL: https://issues.apache.org/jira/browse/HBASE-19848 > Project: HBase > Issue Type: Bug > Components: spark, Zookeeper >Affects Versions: 1.2.0 > Environment: hbase-spark-1.2.0-cdh5.12.1 version > spark 1.6 >Reporter: Key Hutu >Assignee: Key Hutu >Priority: Major > Labels: performance > Fix For: 1.2.0 > > Attachments: HBASE-19848-V2.patch, HBASE-19848-V3.patch, > HBaseContext.patch, HBaseContext.scala > > Original Estimate: 72h > Remaining Estimate: 72h > > In the hbase-spark project, HBaseContext provides a bulkLoad method for easily loading Spark RDD data into HBase. But when I use it frequently, the program throws a "cannot create native thread" exception. > Running pstack on the Spark driver process shows the thread count steadily increasing, and jstack shows many threads named "main-SendThread" and "main-EventThread". > It seems that a connection is created before the bulk load, but its close method is never invoked. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19904) Break dependency of WAL constructor on Replication
[ https://issues.apache.org/jira/browse/HBASE-19904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350227#comment-16350227 ] Hudson commented on HBASE-19904: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4513 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4513/]) HBASE-19904 Break dependency of WAL constructor on Replication (zhangduo: rev fc6d140adf0b382e0b7bfef02ae96be7908036e1) * (edit) hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALSplit.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestReplicationHFileCleaner.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/wal/IOTestProvider.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/FSHLogProvider.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWALMonotonicallyIncreasingSeqId.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestSecureWAL.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFactory.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestWALObserver.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSyncUp.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestBoundedRegionGroupingStrategy.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALActionsListener.java * (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALReaderOnSecureWAL.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSmallTests.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/AbstractTestWALReplay.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHMobStore.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALRootDir.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionArchiveIOException.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ReplicationSourceService.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java * (edit) hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALRecordReader.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestDurability.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollingNoCluster.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollAbort.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestLogsCleaner.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileRefresherChore.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALMethods.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/AbstractTestProtobufLog.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionArchiveConcurrentClose.java * (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALProvider.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestWALEntryStream.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/AbstractTestLogRolling.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/Replication.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DisabledWALProvider.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationEmptyWALRecovery.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ReplicationService.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHStore.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestFSHLogPro
[jira] [Commented] (HBASE-19855) Refactor RegionScannerImpl.nextInternal method
[ https://issues.apache.org/jira/browse/HBASE-19855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350230#comment-16350230 ] Hadoop QA commented on HBASE-19855: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 22s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 45s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 14s{color} | {color:green} hbase-server: The patch generated 0 new + 253 unchanged - 2 fixed = 253 total (was 255) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 7s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 21m 48s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 47s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}145m 55s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19855 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908959/HBASE-19855.master.004.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 5fb2971731b3 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / fc6d140adf | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/11359/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/11359/testReport/ | | Max. process+thread count | 4730 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/11359/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This messag
[jira] [Commented] (HBASE-19904) Break dependency of WAL constructor on Replication
[ https://issues.apache.org/jira/browse/HBASE-19904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350228#comment-16350228 ] Hadoop QA commented on HBASE-19904: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 35 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 56s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 56s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 38s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} The patch hbase-replication passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 32s{color} | {color:green} hbase-server: The patch generated 0 new + 1151 unchanged - 27 fixed = 1151 total (was 1178) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} The patch hbase-mapreduce passed checkstyle {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 22s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 16m 44s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s{color} | {color:green} hbase-replication in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}102m 14s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 8s{color} | {color:green} hbase-mapreduce in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 59s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}155m 24s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db | | JIRA Issue | HBASE-19904 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12908958/HBASE-19904-branch-2.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbase
[jira] [Commented] (HBASE-19904) Break dependency of WAL constructor on Replication
[ https://issues.apache.org/jira/browse/HBASE-19904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350237#comment-16350237 ] Duo Zhang commented on HBASE-19904: --- OK, all green. Will commit tomorrow if no objections. Thanks. > Break dependency of WAL constructor on Replication > -- > > Key: HBASE-19904 > URL: https://issues.apache.org/jira/browse/HBASE-19904 > Project: HBase > Issue Type: Improvement > Components: Replication, wal >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19904-branch-2.patch, HBASE-19904-branch-2.patch, > HBASE-19904-v3.patch, HBASE-19904-v3.patch, HBASE-19904-v4.patch, > HBASE-19904-v4.patch, HBASE-19904-v5.patch > > > When implementing synchronous replication, I found that we need to depend > even more on replication in the WAL, so it is even more painful... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
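The dependency being broken in HBASE-19904 is the WAL constructor's compile-time knowledge of Replication. Judging from the touched files (e.g. TestWALActionsListener), this is the standard inversion: the WAL knows only an abstract listener interface, and Replication registers itself as one listener after construction. A minimal stub sketch of that pattern; the class names are illustrative, not the actual HBase classes:

```java
import java.util.ArrayList;
import java.util.List;

public class WALDependencySketch {

    // The only abstraction the WAL depends on.
    interface Listener {
        void postAppend(String entry);
    }

    // The WAL is constructed with no knowledge of replication at all.
    static class StubWAL {
        private final List<Listener> listeners = new ArrayList<>();

        void registerListener(Listener l) {
            listeners.add(l);
        }

        void append(String entry) {
            // write the entry, then notify whoever registered an interest
            for (Listener l : listeners) {
                l.postAppend(entry);
            }
        }
    }

    // Replication becomes just one listener, wired up after construction.
    static class StubReplication implements Listener {
        final List<String> shipped = new ArrayList<>();

        @Override
        public void postAppend(String entry) {
            shipped.add(entry);
        }
    }
}
```

With this shape, tests and other WAL users can construct a WAL without pulling in any replication machinery, which is exactly the pain point the issue describes.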
[jira] [Commented] (HBASE-19826) Provide a option to see rows behind a delete in a time range queries
[ https://issues.apache.org/jira/browse/HBASE-19826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350245#comment-16350245 ] Ankit Singhal commented on HBASE-19826: --- bq. HBase-2.0 is a big release that breaks things, so I think it may also be a good chance for Phoenix to drop some legacy support when upgrading to HBase-2.0? Yes, we will be planning to remove some legacy stuff with Phoenix 5.0. bq. You can try using attribute to carry some Phoenix only logic.. [~Apache9], In HBase 2.0, we are not getting scan object in preStoreScannerOpen() , is it possible to add scan(at least in Immutable form) in preStoreScannerOpen() hook so that we can decide based on the attributes (like time range scan, raw etc) and set ScanOptions accordingly. > Provide a option to see rows behind a delete in a time range queries > > > Key: HBASE-19826 > URL: https://issues.apache.org/jira/browse/HBASE-19826 > Project: HBase > Issue Type: Improvement >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Major > Fix For: 2.0.0 > > > We can provide an option (something like seePastDeleteMarkers) in a scan to > let the user see the versions behind the delete marker even if > keepDeletedCells is set to false in the descriptor. > With the previous version, we workaround the same in preStoreScannerOpen > hook. 
For reference PHOENIX-4277 > {code} > @Override > public KeyValueScanner preStoreScannerOpen(final > ObserverContext<RegionCoprocessorEnvironment> c, > final Store store, final Scan scan, final NavigableSet<byte[]> > targetCols, > final KeyValueScanner s) throws IOException { > > if (scan.isRaw() || > ScanInfoUtil.isKeepDeletedCells(store.getScanInfo()) || > scan.getTimeRange().getMax() == HConstants.LATEST_TIMESTAMP || > TransactionUtil.isTransactionalTimestamp(scan.getTimeRange().getMax())) { > return s; > } > > ScanInfo scanInfo = > ScanInfoUtil.cloneScanInfoWithKeepDeletedCells(store.getScanInfo()); > return new StoreScanner(store, scanInfo, scan, targetCols, > > c.getEnvironment().getRegion().getReadpoint(scan.getIsolationLevel())); > } > {code} > Another way is to provide a way to set KEEP_DELETED_CELLS to true in > ScanOptions of preStoreScannerOpen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
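The suggestion in the comment above ("using attribute to carry some Phoenix only logic") refers to the per-request key/value side channel on Scan (the real API is Scan#setAttribute / Scan#getAttribute). The flow can be sketched with a plain map standing in for the scan's attribute store; the attribute key below is hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class ScanAttributeSketch {
    // Hypothetical attribute key; in real HBase code the map below would be
    // the Scan's attribute store, accessed via setAttribute/getAttribute.
    static final String SEE_PAST_DELETES = "phoenix.seePastDeleteMarkers";

    // Client side: tag the scan before sending it.
    static Map<String, byte[]> tagScan() {
        Map<String, byte[]> attrs = new HashMap<>();
        attrs.put(SEE_PAST_DELETES, "true".getBytes(StandardCharsets.UTF_8));
        return attrs;
    }

    // Server side: a coprocessor hook reads the tag and decides whether
    // deleted cells should stay visible for this particular scan.
    static boolean keepDeletedCells(Map<String, byte[]> attrs) {
        byte[] v = attrs.get(SEE_PAST_DELETES);
        return v != null && "true".equals(new String(v, StandardCharsets.UTF_8));
    }
}
```

The catch raised later in the thread is that the 2.0 preStoreScannerOpen hook does not receive the Scan at all, so there is nowhere server-side to consult such an attribute from that hook.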
[jira] [Commented] (HBASE-19826) Provide a option to see rows behind a delete in a time range queries
[ https://issues.apache.org/jira/browse/HBASE-19826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350256#comment-16350256 ] Duo Zhang commented on HBASE-19826: --- There is no Scan object for compaction and flush so we do not provide it. And in general, I do not think you can get a stable result if you reset the ScanOptions for some scans and not for others. > Provide a option to see rows behind a delete in a time range queries > > > Key: HBASE-19826 > URL: https://issues.apache.org/jira/browse/HBASE-19826 > Project: HBase > Issue Type: Improvement >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Major > Fix For: 2.0.0 > > > We can provide an option (something like seePastDeleteMarkers) in a scan to > let the user see the versions behind the delete marker even if > keepDeletedCells is set to false in the descriptor. > With the previous version, we workaround the same in preStoreScannerOpen > hook. For reference PHOENIX-4277 > {code} > @Override > public KeyValueScanner preStoreScannerOpen(final > ObserverContext<RegionCoprocessorEnvironment> c, > final Store store, final Scan scan, final NavigableSet<byte[]> > targetCols, > final KeyValueScanner s) throws IOException { > > if (scan.isRaw() || > ScanInfoUtil.isKeepDeletedCells(store.getScanInfo()) || > scan.getTimeRange().getMax() == HConstants.LATEST_TIMESTAMP || > TransactionUtil.isTransactionalTimestamp(scan.getTimeRange().getMax())) { > return s; > } > > ScanInfo scanInfo = > ScanInfoUtil.cloneScanInfoWithKeepDeletedCells(store.getScanInfo()); > return new StoreScanner(store, scanInfo, scan, targetCols, > > c.getEnvironment().getRegion().getReadpoint(scan.getIsolationLevel())); > } > {code} > Another way is to provide a way to set KEEP_DELETED_CELLS to true in > ScanOptions of preStoreScannerOpen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19826) Provide a option to see rows behind a delete in a time range queries
[ https://issues.apache.org/jira/browse/HBASE-19826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350272#comment-16350272 ] Duo Zhang commented on HBASE-19826: --- And could you please describe the usage in phoenix? We can see how to implement with the new CP hooks in 2.0. Thanks. > Provide a option to see rows behind a delete in a time range queries > > > Key: HBASE-19826 > URL: https://issues.apache.org/jira/browse/HBASE-19826 > Project: HBase > Issue Type: Improvement >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Major > Fix For: 2.0.0 > > > We can provide an option (something like seePastDeleteMarkers) in a scan to > let the user see the versions behind the delete marker even if > keepDeletedCells is set to false in the descriptor. > With the previous version, we workaround the same in preStoreScannerOpen > hook. For reference PHOENIX-4277 > {code} > @Override > public KeyValueScanner preStoreScannerOpen(final > ObserverContext<RegionCoprocessorEnvironment> c, > final Store store, final Scan scan, final NavigableSet<byte[]> > targetCols, > final KeyValueScanner s) throws IOException { > > if (scan.isRaw() || > ScanInfoUtil.isKeepDeletedCells(store.getScanInfo()) || > scan.getTimeRange().getMax() == HConstants.LATEST_TIMESTAMP || > TransactionUtil.isTransactionalTimestamp(scan.getTimeRange().getMax())) { > return s; > } > > ScanInfo scanInfo = > ScanInfoUtil.cloneScanInfoWithKeepDeletedCells(store.getScanInfo()); > return new StoreScanner(store, scanInfo, scan, targetCols, > > c.getEnvironment().getRegion().getReadpoint(scan.getIsolationLevel())); > } > {code} > Another way is to provide a way to set KEEP_DELETED_CELLS to true in > ScanOptions of preStoreScannerOpen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19848) Zookeeper thread leaks in hbase-spark bulkLoad method
[ https://issues.apache.org/jira/browse/HBASE-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-19848: --- Resolution: Fixed Fix Version/s: (was: 1.2.0) Status: Resolved (was: Patch Available) Thanks for the patch, Key. > Zookeeper thread leaks in hbase-spark bulkLoad method > - > > Key: HBASE-19848 > URL: https://issues.apache.org/jira/browse/HBASE-19848 > Project: HBase > Issue Type: Bug > Components: spark, Zookeeper >Affects Versions: 1.2.0 > Environment: hbase-spark-1.2.0-cdh5.12.1 version > spark 1.6 >Reporter: Key Hutu >Assignee: Key Hutu >Priority: Major > Labels: performance > Attachments: HBASE-19848-V2.patch, HBASE-19848-V3.patch, > HBaseContext.patch, HBaseContext.scala > > Original Estimate: 72h > Remaining Estimate: 72h > > In the hbase-spark project, HBaseContext provides a bulkLoad method for easily loading Spark RDD data into HBase. But when I use it frequently, the program throws a "cannot create native thread" exception. > Running pstack on the Spark driver process shows the thread count steadily increasing, and jstack shows many threads named "main-SendThread" and "main-EventThread". > It seems that a connection is created before the bulk load, but its close method is never invoked. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19826) Provide a option to see rows behind a delete in a time range queries
[ https://issues.apache.org/jira/browse/HBASE-19826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350362#comment-16350362 ] Ankit Singhal commented on HBASE-19826: --- We need the Scan object to check whether it's a time range query or not.
{code}
@Override
public void preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> ctx, Store store,
    ScanOptions options) throws IOException {
  // Set KEEP_DELETED_CELLS for a non-raw, time-range scan.
  // Note: 'scan' is not available from this hook's arguments, which is exactly the gap.
  if (scan.isRaw() || scan.getTimeRange().getMax() == HConstants.LATEST_TIMESTAMP) {
    return;
  }
  options.setKeepDeletedCells(KeepDeletedCells.TRUE);
}
{code}
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
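The gating condition in the hook above can be isolated as a pure predicate. A sketch with the hook's inputs reduced to plain values (`isRaw`, `timeRangeMax`); the real hook would read these from the Scan object, which is exactly what the comment asks to expose:

```java
// Sketch of the KEEP_DELETED_CELLS gating logic from the hook above, reduced to
// plain values. LATEST_TIMESTAMP mirrors HConstants.LATEST_TIMESTAMP (Long.MAX_VALUE).
class KeepDeletedCellsGate {
    static final long LATEST_TIMESTAMP = Long.MAX_VALUE;

    // True when a non-raw, bounded time-range scan should see cells
    // behind newer delete markers.
    static boolean shouldKeepDeletedCells(boolean isRaw, long timeRangeMax) {
        if (isRaw || timeRangeMax == LATEST_TIMESTAMP) {
            return false; // raw scans already see markers; unbounded scans keep the default
        }
        return true;
    }
}
```

Only the bounded, non-raw case flips KEEP_DELETED_CELLS on, matching the early-return structure of the hook.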
[jira] [Commented] (HBASE-19826) Provide a option to see rows behind a delete in a time range queries
[ https://issues.apache.org/jira/browse/HBASE-19826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350390#comment-16350390 ] Duo Zhang commented on HBASE-19826: --- More background please? Thanks.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19826) Provide a option to see rows behind a delete in a time range queries
[ https://issues.apache.org/jira/browse/HBASE-19826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350402#comment-16350402 ] Ankit Singhal commented on HBASE-19826: --- Sure. (This is the second use-case mentioned in my earlier [comment|https://issues.apache.org/jira/browse/HBASE-19826?focusedCommentId=16344850&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16344850]): "While doing index scrutiny on a live table, a time range scan wants to see PUTs not eclipsed by newer DELETE markers. (A raw scan cannot be used here, as it returns all cells even if there are delete markers within the time range.)" To achieve this, we were previously updating the store scanner by setting KeepDeletedCells to true in the preStoreScannerOpen hook, so that our time range queries see puts which are deleted at a newer timestamp. Let me know if you need more details. Thanks.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HBASE-19826) Provide a option to see rows behind a delete in a time range queries
[ https://issues.apache.org/jira/browse/HBASE-19826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350402#comment-16350402 ] Ankit Singhal edited comment on HBASE-19826 at 2/2/18 2:34 PM: --- sure, (Second use-case mentioned in my earlier comment)(For reference PHOENIX-4277) "While doing Index scrutiny on a live table, time range scan wants to see PUTs not eclipsed by newer DELETE markers.(raw scan cannot be utilized here as it will give all cells even if we have delete markers within the time range)" To achieve this, we were earlier updating the store scanner by setting KeepDeletedCells to true in preStoreScannerOpen hook so that our time range queries will see puts which are deleted at the newer timestamp. Let me know if you need more details. Thanks. was (Author: an...@apache.org): sure, (Second use-case mentioned in my earlier [comment|https://issues.apache.org/jira/browse/HBASE-19826?focusedCommentId=16344850&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16344850]):- "While doing Index scrutiny on a live table, time range scan wants to see PUTs not eclipsed by newer DELETE markers.(raw scan cannot be utilized here as it will give all cells even if we have delete markers within the time range)" To achieve this, we were earlier updating the store scanner by setting KeepDeletedCells to true in preStoreScannerOpen hook so that our time range queries will see puts which are deleted at the newer timestamp. Let me know if you need more details. Thanks. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19826) Provide a option to see rows behind a delete in a time range queries
[ https://issues.apache.org/jira/browse/HBASE-19826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350441#comment-16350441 ] Duo Zhang commented on HBASE-19826: --- What is an index scrutiny? When do you need to do this?
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19826) Provide a option to see rows behind a delete in a time range queries
[ https://issues.apache.org/jira/browse/HBASE-19826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350493#comment-16350493 ] Ankit Singhal commented on HBASE-19826: --- {quote}What is a index scrutiny? When do you need to do this? {quote} It's a MapReduce tool which does a time range scan on the data table and a SKIP SCAN on the index table, to verify whether the index table is in sync with the data table.
[1] https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexScrutinyTool.java
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory
Rohini Palaniswamy created HBASE-19920: -- Summary: TokenUtil.obtainToken unnecessarily creates a local directory Key: HBASE-19920 URL: https://issues.apache.org/jira/browse/HBASE-19920 Project: HBase Issue Type: Bug Reporter: Rohini Palaniswamy In client code, when one calls TokenUtil.obtainToken, it loads ProtobufUtil, which in its static block initializes DynamicClassLoader, and that creates the directory ${hbase.rootdir}/lib https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L115-L127 Since this is region-server-specific code, this should not happen when one accesses HBase as a client. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
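One way to avoid the side effect described above is to defer the directory creation until a dynamic class is actually requested, instead of performing it in a static initializer. A sketch of that lazy-initialization pattern with stand-in types (the real DynamicClassLoader works with Hadoop Path/FileSystem APIs):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: create ${hbase.rootdir}/lib lazily, on first dynamic class load,
// rather than eagerly in a static block. The mkdir side effect is injected
// as a Runnable stand-in for the real filesystem call.
class LazyLibDir {
    private final AtomicBoolean created = new AtomicBoolean(false);
    private final Runnable mkdir; // the actual directory-creation side effect

    LazyLibDir(Runnable mkdir) { this.mkdir = mkdir; }

    // Called only when a class actually needs to be loaded dynamically;
    // compareAndSet guarantees the side effect runs exactly once.
    void ensureDir() {
        if (created.compareAndSet(false, true)) {
            mkdir.run();
        }
    }

    boolean isCreated() { return created.get(); }
}
```

A pure client that never triggers a dynamic class load would then never touch the local filesystem.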
[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory
[ https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350569#comment-16350569 ] Mike Drob commented on HBASE-19920: --- I don't see any directory creation code there, can you be more specific?
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19133) Transfer big cells or upserted/appended cells into MSLAB upon flattening to CellChunkMap
[ https://issues.apache.org/jira/browse/HBASE-19133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350621#comment-16350621 ] Ted Yu commented on HBASE-19133: In CellChunkImmutableSegment#copyCellIntoMSLAB, we shouldn't pass true for forceCloneOfBigCell:
{code}
long oldHeapSize = heapSizeChange(cell, true);
long oldCellSize = getCellLength(cell);
cell = maybeCloneWithAllocator(cell, true);
{code}
We can lift maybeCloneWithAllocator() to be the first call in copyCellIntoMSLAB. maybeCloneWithAllocator() should check whether clone is supported by this.memStoreLAB. If not, it just returns the Cell. copyCellIntoMSLAB() would then determine the forceCloneOfBigCell flag based on whether cloning happened or not. [~anastas] [~galish]: What do you think?
> Transfer big cells or upserted/appended cells into MSLAB upon flattening to CellChunkMap
> -
>
> Key: HBASE-19133
> URL: https://issues.apache.org/jira/browse/HBASE-19133
> Project: HBase
> Issue Type: Sub-task
> Reporter: Anastasia Braginsky
> Assignee: Gali Sheffi
> Priority: Major
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19133-V01.patch, HBASE-19133-V02.patch, HBASE-19133-V03.patch, HBASE-19133.01.patch, HBASE-19133.02.patch, HBASE-19133.03.patch, HBASE-19133.04.patch, HBASE-19133.05.patch, HBASE-19133.06.patch, HBASE-19133.07.patch, HBASE-19133.08.patch, HBASE-19133.09.patch, HBASE-19133.10.patch, HBASE-19133.11.patch
>
> CellChunkMap Segment index requires all cell data to be written in the MSLAB Chunks. Even though MSLAB is enabled, cells bigger than the chunk size or upserted/incremented/appended cells are still allocated on the JVM heap. If such cells are found in the process of flattening into CellChunkMap (in-memory-flush), they need to be copied into MSLAB.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
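The refactor suggested in the comment above can be sketched with stand-in types: maybeCloneWithAllocator returns the same object when the MSLAB cannot clone, and the caller infers whether a clone happened by identity comparison. "Cell" and "Mslab" below are placeholders, not the real HBase interfaces:

```java
// Sketch of the suggested refactor: return the original Cell when cloning is
// unsupported, and let the caller derive the forceCloneOfBigCell flag from
// whether a clone actually happened. Types are stand-ins for the HBase ones.
class CloneSketch {
    interface Cell {}
    interface Mslab { Cell copy(Cell c); } // stand-in for MemStoreLAB

    // First call in copyCellIntoMSLAB: clone only if the LAB supports it.
    static Cell maybeCloneWithAllocator(Mslab lab, Cell cell) {
        if (lab == null) {
            return cell; // cloning unsupported: hand back the original
        }
        return lab.copy(cell);
    }

    // Caller determines the flag by identity, as the comment proposes.
    static boolean cloneHappened(Cell original, Cell maybeCloned) {
        return original != maybeCloned;
    }
}
```

With this shape, copyCellIntoMSLAB never has to guess the flag up front; it observes what the allocator actually did.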
[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory
[ https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350623#comment-16350623 ] Mike Drob commented on HBASE-19920: --- Ah, OK, a few lines up there is the mkdir call.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19720) Rename WALKey#getTabnename to WALKey#getTableName
[ https://issues.apache.org/jira/browse/HBASE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350650#comment-16350650 ] Hudson commented on HBASE-19720: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4514 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4514/]) HBASE-19720 Rename WALKey#getTabnename to WALKey#getTableName (chia7712: rev 2f4d0b94bc61b00f1d7c549e8dafb4cc420fab18) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALKey.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALKeyImpl.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/AbstractTestProtobufLog.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtbufUtil.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestRegionReplicaReplicationEndpointNoMaster.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/replication/NamespaceTableCfWALEntryFilter.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/SimpleRegionObserver.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWALLockup.java * (edit) hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALPlayer.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ReaderBase.java * (edit) hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java * (edit) hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/replication/SystemTableWALEntryFilter.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFactory.java 
> Rename WALKey#getTabnename to WALKey#getTableName > - > > Key: HBASE-19720 > URL: https://issues.apache.org/jira/browse/HBASE-19720 > Project: HBase > Issue Type: Task >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > Fix For: 2.0.0 > > Attachments: HBASE-19720.v0.patch > > > WALKey is denoted as LP so its naming should obey the common rule in our > codebase. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory
[ https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350662#comment-16350662 ] Ted Yu commented on HBASE-19920: Rohini: Do you want to submit a patch?
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19900) The failed op is added to RetriesExhaustedWithDetailsException repeatedly
[ https://issues.apache.org/jira/browse/HBASE-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19900: --- Description: The inconsistency includes the following bug.
3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly.
AsyncRequestFutureImpl#receiveMultiAction processes the action-level error first, and then adds the region-level exception to each action. Hence, a user may get various exceptions for the same action (row op) from the RetriesExhaustedWithDetailsException.
In fact, if both an action-level exception and a region-level exception exist, they always have the same context. I'm not sure whether that is what RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have duplicate ops in RetriesExhaustedWithDetailsException, since that may confuse users who catch the RetriesExhaustedWithDetailsException to check the invalid operations.
was: AsyncRequestFutureImpl#receiveMultiAction processes the action-level error first, and then adds the region-level exception to each action. Hence, a user may get various exceptions for the same action (row op) from the RetriesExhaustedWithDetailsException. In fact, if both an action-level exception and a region-level exception exist, they always have the same context. I'm not sure whether that is what RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have duplicate ops in RetriesExhaustedWithDetailsException, since that may confuse users who catch the RetriesExhaustedWithDetailsException to check the invalid operations.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19900) The failed op is added to RetriesExhaustedWithDetailsException repeatedly
[ https://issues.apache.org/jira/browse/HBASE-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19900: --- Description: The inconsistency includes the following bugs.
1) The action count is decreased repeatedly.
If AsyncRequestFuture#waitUntilDone returns prematurely, the user will get incorrect results. Or the user will be blocked by AsyncRequestFuture#waitUntilDone, as the count never reaches 0.
2) The successive result will be overwritten.
3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly.
AsyncRequestFutureImpl#receiveMultiAction processes the action-level error first, and then adds the region-level exception to each action. Hence, a user may get various exceptions for the same action (row op) from the RetriesExhaustedWithDetailsException.
In fact, if both an action-level exception and a region-level exception exist, they always have the same context. I'm not sure whether that is what RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have duplicate ops in RetriesExhaustedWithDetailsException, since that may confuse users who catch the RetriesExhaustedWithDetailsException to check the invalid operations.
was: The inconsistency includes the following bug.
3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly.
AsyncRequestFutureImpl#receiveMultiAction processes the action-level error first, and then adds the region-level exception to each action. Hence, a user may get various exceptions for the same action (row op) from the RetriesExhaustedWithDetailsException. In fact, if both an action-level exception and a region-level exception exist, they always have the same context. I'm not sure whether that is what RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have duplicate ops in RetriesExhaustedWithDetailsException, since that may confuse users who catch the RetriesExhaustedWithDetailsException to check the invalid operations.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
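The third bug above, the same op collected twice (once with its action-level error and once with the region-level one), can be avoided by recording only the first exception per op. A sketch with stand-in types, not the actual AsyncRequestFutureImpl code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of de-duplicating failed ops: record only the first exception seen
// for each op, so a later region-level exception cannot add the same op to
// RetriesExhaustedWithDetailsException twice. Types here are stand-ins.
class FailureCollector {
    private final Map<String, Throwable> failures = new LinkedHashMap<>();

    // Action-level errors are processed first, region-level errors later;
    // putIfAbsent keeps whichever arrived first for a given op.
    void recordFailure(String opId, Throwable error) {
        failures.putIfAbsent(opId, error);
    }

    int failedOpCount() { return failures.size(); }
    Throwable errorFor(String opId) { return failures.get(opId); }
}
```

Since both exceptions carry the same context anyway, keeping only the first preserves all information the caller needs while keeping the op list free of duplicates.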
[jira] [Updated] (HBASE-19900) Region-level exception destroy the result of batch
[ https://issues.apache.org/jira/browse/HBASE-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19900: --- Summary: Region-level exception destroy the result of batch (was: The failed op is added to RetriesExhaustedWithDetailsException repeatedly)
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19900) Region-level exception destroy the result of batch
[ https://issues.apache.org/jira/browse/HBASE-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19900: --- Priority: Critical (was: Minor)
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19900) Region-level exception destroy the result of batch
[ https://issues.apache.org/jira/browse/HBASE-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19900: --- Description: 1) the action count is decremented repeatedly If AsyncRequestFuture#waitUntilDone returns prematurely, the user gets incorrect results. Alternatively, the user is blocked in AsyncRequestFuture#waitUntilDone forever because the count never reaches 0. 2) the subsequent result is overwritten 3) the failed op is added to RetriesExhaustedWithDetailsException repeatedly AsyncRequestFutureImpl#receiveMultiAction processes the action-level error first, and then adds the region-level exception to each action. Hence, the user may get different exceptions for the same action (row op) from the RetriesExhaustedWithDetailsException. In fact, if both an action-level exception and a region-level exception exist, they always have the same context. I'm not sure whether that is what RetriesExhaustedWithDetailsException intends. As I see it, we shouldn't have duplicate ops in RetriesExhaustedWithDetailsException, since that may confuse users who catch the RetriesExhaustedWithDetailsException to check the invalid operations. was: The inconsistency includes the following bugs. 1) the action count is decremented repeatedly If AsyncRequestFuture#waitUntilDone returns prematurely, the user gets incorrect results. Alternatively, the user is blocked in AsyncRequestFuture#waitUntilDone forever because the count never reaches 0. 2) the subsequent result is overwritten 3) the failed op is added to RetriesExhaustedWithDetailsException repeatedly AsyncRequestFutureImpl#receiveMultiAction processes the action-level error first, and then adds the region-level exception to each action. Hence, the user may get different exceptions for the same action (row op) from the RetriesExhaustedWithDetailsException. In fact, if both an action-level exception and a region-level exception exist, they always have the same context.
I'm not sure whether that is what RetriesExhaustedWithDetailsException intends. As I see it, we shouldn't have duplicate ops in RetriesExhaustedWithDetailsException, since that may confuse users who catch the RetriesExhaustedWithDetailsException to check the invalid operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
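The double-decrement described in item 1 can be illustrated with a plain-Java sketch. This is a hypothetical model, not the actual AsyncRequestFutureImpl code: if each action in a failed region is settled once for its action-level error and again for the region-level exception, the outstanding counter overshoots zero, so a concurrent waitUntilDone either returns before all actions settle or (if a decrement is lost instead) never returns.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the counting problem: waiters in the style of
// AsyncRequestFuture#waitUntilDone block until this counter reaches zero.
class ActionCounter {
    private final AtomicLong outstanding;

    ActionCounter(long actions) { this.outstanding = new AtomicLong(actions); }

    // Correct accounting: each action must be settled exactly once.
    void settleOnce() { outstanding.decrementAndGet(); }

    long remaining() { return outstanding.get(); }
}

public class CounterDemo {
    // Simulates 3 actions in a region where a region-level exception is
    // reported in addition to per-action failures: each action is
    // decremented twice, the counter goes negative, and a concurrent
    // waitUntilDone would already have returned prematurely.
    public static long doubleDecrement() {
        ActionCounter c = new ActionCounter(3);
        for (int i = 0; i < 3; i++) {
            c.settleOnce();   // action-level error path
            c.settleOnce();   // region-level error path (the bug)
        }
        return c.remaining(); // -3 instead of 0
    }

    public static void main(String[] args) {
        System.out.println("remaining after double decrement: " + doubleDecrement());
    }
}
```

The sketch also shows why the symptom is bimodal: any settle path that fires twice makes waitUntilDone return early, while a path that never fires leaves the waiter blocked forever.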
[jira] [Updated] (HBASE-19876) The exception happening in converting pb mutation to hbase.mutation messes up the CellScanner
[ https://issues.apache.org/jira/browse/HBASE-19876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19876: -- Attachment: HBASE-19876.master.001.patch > The exception happening in converting pb mutation to hbase.mutation messes up > the CellScanner > - > > Key: HBASE-19876 > URL: https://issues.apache.org/jira/browse/HBASE-19876 > Project: HBase > Issue Type: Bug >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Critical > Fix For: 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-2, 1.4.2 > > Attachments: HBASE-19876.master.001.patch, HBASE-19876.v0.patch, > HBASE-19876.v1.patch, HBASE-19876.v2.patch, HBASE-19876.v3.patch, > HBASE-19876.v3.patch, HBASE-19876.v3.patch, HBASE-19876.v3.patch > > > {code:java} > 2018-01-27 22:51:43,794 INFO [hconnection-0x3291b443-shared-pool11-t6] > client.AsyncRequestFutureImpl(778): id=5, table=testQuotaStatusFromMaster3, > attempt=6/16 failed=20ops, last > exception=org.apache.hadoop.hbase.client.WrongRowIOException: > org.apache.hadoop.hbase.client.WrongRowIOException: The row in xxx doesn't > match the original one aaa > at org.apache.hadoop.hbase.client.Mutation.add(Mutation.java:776) > at org.apache.hadoop.hbase.client.Put.add(Put.java:282) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toPut(ProtobufUtil.java:642) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:952) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:896) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2591) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41560) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:404) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304){code} > I noticed this bug when testing the table space quota. > While the RegionServer is converting a pb mutation to an hbase mutation, a quota exception or cell exception may be thrown. > {code:java} > for (ClientProtos.Action action: mutations) { > MutationProto m = action.getMutation(); > Mutation mutation; > if (m.getMutateType() == MutationType.PUT) { > mutation = ProtobufUtil.toPut(m, cells); > batchContainsPuts = true; > } else { > mutation = ProtobufUtil.toDelete(m, cells); > batchContainsDelete = true; > } > mutationActionMap.put(mutation, action); > mArray[i++] = mutation; > checkCellSizeLimit(region, mutation); > // Check if a space quota disallows this mutation > spaceQuotaEnforcement.getPolicyEnforcement(region).check(mutation); > quota.addMutation(mutation); > } > {code} > The RegionServer catches the exception, but it does not make the CellScanner skip the failed mutation's cells. > {code:java} > } catch (IOException ie) { > if (atomic) { > throw ie; > } > for (Action mutation : mutations) { > builder.addResultOrException(getResultOrException(ie, > mutation.getIndex())); > } > } > {code} > The bug causes WrongRowIOException for the remaining mutations, since they now refer to the wrong cells. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
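The misalignment can be modeled with a small self-contained sketch. This is a hypothetical simplification, not the real HBase code: all mutations in a multi() share one cell stream, so a conversion error that does not advance the stream past the failed mutation's cells leaves every later mutation reading another row's cells, which is exactly the "The row in xxx doesn't match the original one aaa" symptom. The `skipOnError` flag shows the repair direction: always advance the stream past a failed mutation's cells.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical model of the shared-CellScanner failure mode.
// Each "mutation" owns exactly one cell here; real requests carry counts.
public class CellStreamDemo {
    public static List<String> convert(List<String> rows, Iterator<String> cells,
                                       String poisonRow, boolean skipOnError) {
        List<String> assigned = new ArrayList<>();
        for (String row : rows) {
            if (row.equals(poisonRow)) {
                // Conversion "throws" before consuming this mutation's cell.
                if (skipOnError) {
                    cells.next(); // the fix: keep the stream aligned anyway
                }
                assigned.add("ERROR");
                continue;
            }
            assigned.add(row + "=" + cells.next());
        }
        return assigned;
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("r1", "r2", "r3");
        // Without skipping, r3 silently consumes r2's cell: the wrong-row symptom.
        System.out.println(convert(rows, Arrays.asList("c1", "c2", "c3").iterator(), "r2", false));
        // With skipping, r3 gets its own cell back.
        System.out.println(convert(rows, Arrays.asList("c1", "c2", "c3").iterator(), "r2", true));
    }
}
```

Running the first call yields `[r1=c1, ERROR, r3=c2]`, the second `[r1=c1, ERROR, r3=c3]`, which is why only the skip variant keeps later mutations valid.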
[jira] [Commented] (HBASE-19876) The exception happening in converting pb mutation to hbase.mutation messes up the CellScanner
[ https://issues.apache.org/jira/browse/HBASE-19876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350730#comment-16350730 ] stack commented on HBASE-19876: --- .001 rebase of [~chia7712]'s patch. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19528) Major Compaction Tool
[ https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19528: -- Fix Version/s: (was: 3.0.0) > Major Compaction Tool > -- > > Key: HBASE-19528 > URL: https://issues.apache.org/jira/browse/HBASE-19528 > Project: HBase > Issue Type: New Feature >Reporter: churro morales >Assignee: churro morales >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: 0001-HBASE-19528-Major-Compaction-Tool-ADDENDUM.patch, > HBASE-19528.branch-1.patch, HBASE-19528.patch, HBASE-19528.v1.branch-1.patch, > HBASE-19528.v1.patch, HBASE-19528.v2.branch-1.patch, > HBASE-19528.v2.branch-1.patch, HBASE-19528.v8.patch > > > The basic overview of how this tool works: > Parameters: > Table > Stores > ClusterConcurrency > Timestamp > You provide a table, the desired concurrency, and the list of stores you wish to > major compact. The tool first checks the filesystem to see which stores need > compaction based on the timestamp you provide (the default is the current time). It > takes the list of stores that require compaction and executes those requests > concurrently, with at most N distinct RegionServers compacting at any given > time. Each thread waits for a compaction to complete before moving to the > next item in its queue. If a region split, merge, or move happens, this tool ensures those > regions get major compacted as well. > This helps us in two ways: we can limit how much I/O bandwidth we use > for major compaction cluster-wide, and we are guaranteed that after the tool > completes, all requested compactions have completed regardless of moves, merges, > and splits. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
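The scheduling shape the description outlines (at most N RegionServers compacting at once, each worker draining one server's store queue sequentially) can be sketched with plain java.util.concurrent. This is a hedged illustration; the class and method names are hypothetical and the real tool's implementation differs:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of the tool's concurrency model: a fixed pool of
// clusterConcurrency workers, one queue of stores per RegionServer, and
// each worker finishes a store's compaction before taking the next.
public class CompactionScheduler {
    public static int runAll(Map<String, List<String>> storesByServer,
                             int clusterConcurrency) {
        ExecutorService pool = Executors.newFixedThreadPool(clusterConcurrency);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (Map.Entry<String, List<String>> e : storesByServer.entrySet()) {
                List<String> queue = e.getValue(); // effectively final for the lambda
                futures.add(pool.submit(() -> {
                    int done = 0;
                    for (String store : queue) {
                        // placeholder for: request a major compaction of `store`,
                        // then poll the RegionServer until it finishes
                        done++;
                    }
                    return done;
                }));
            }
            int total = 0;
            for (Future<Integer> f : futures) total += f.get();
            return total;
        } catch (InterruptedException | ExecutionException ex) {
            throw new RuntimeException(ex);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        Map<String, List<String>> work = new HashMap<>();
        work.put("rs1", Arrays.asList("t1/r1/f1", "t1/r2/f1"));
        work.put("rs2", Arrays.asList("t1/r3/f1"));
        System.out.println("compacted stores: " + runAll(work, 2));
    }
}
```

Keying the queues by RegionServer rather than by region is what bounds cluster-wide compaction I/O: only `clusterConcurrency` servers are ever busy at once, no matter how many stores are overdue.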
[jira] [Updated] (HBASE-19528) Major Compaction Tool
[ https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19528: -- Attachment: HBASE-19528.v2.branch-1.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19876) The exception happening in converting pb mutation to hbase.mutation messes up the CellScanner
[ https://issues.apache.org/jira/browse/HBASE-19876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350734#comment-16350734 ] Chia-Ping Tsai commented on HBASE-19876: Thanks [~stack]. I forgot to say that this issue is blocked by HBASE-19900. If HBASE-19900 is not resolved, the tests in the patch are unstable because of the region-level exception. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19900) Region-level exception destroy the result of batch
[ https://issues.apache.org/jira/browse/HBASE-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19900: --- Fix Version/s: 1.4.2 2.0.0-beta-2 1.2.7 1.5.0 1.3.2 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"
[ https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19663: -- Fix Version/s: (was: 2.0.0-beta-2) 2.0.0 > site build fails complaining "javadoc: error - class file for > javax.annotation.meta.TypeQualifierNickname not found" > > > Key: HBASE-19663 > URL: https://issues.apache.org/jira/browse/HBASE-19663 > Project: HBase > Issue Type: Bug > Components: site >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0, 1.4.2 > > Attachments: script.sh > > > Cryptic failure trying to build beta-1 RC. Fails like this: > {code} > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 03:54 min > [INFO] Finished at: 2017-12-29T01:13:15-08:00 > [INFO] Final Memory: 381M/9165M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate: > [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS > [ERROR] reason: class file for javax.annotation.meta.When not found > [ERROR] warning: unknown enum constant When.UNKNOWN > [ERROR] warning: unknown enum constant When.MAYBE > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))" > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: reference not found: #matchingRows(Cell, byte[])) > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: reference not found: #matchingRows(Cell, byte[])) > [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found. 
> [ERROR] javadoc: error - class file for > javax.annotation.meta.TypeQualifierNickname not found > [ERROR] > [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc > -J-Xmx2G @options @packages > [ERROR] > [ERROR] Refer to the generated Javadoc files in > '/home/stack/hbase.git/target/site/apidocs' dir. > [ERROR] -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException > {code} > javax.annotation.meta.TypeQualifierNickname is out of jsr305, but we don't > include this anywhere according to mvn dependency. > Happens building the User API, both test and main. > Excluding these lines gets us passing again: > {code} > 3511 > 3512 > org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet > 3513 > 3514 > 3515 org.apache.yetus > 3516 audience-annotations > 3517 ${audience-annotations.version} > 3518 > + 3519 true > {code} > Tried upgrading to a newer mvn site plugin (ours is three years old), but that hit a > different set of problems. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19528) Major Compaction Tool
[ https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350759#comment-16350759 ] Hadoop QA commented on HBASE-19528: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 15m 37s{color} | {color:red} Docker failed to build yetus/hbase:36a7029. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HBASE-19528 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12909024/HBASE-19528.v2.branch-1.patch | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/11362/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19528) Major Compaction Tool
[ https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350768#comment-16350768 ] churro morales commented on HBASE-19528: [~stack] I've retried a bunch of times, but I can't get that Docker container to stop failing. Any ideas, or should I just wait and try again some other time? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HBASE-19915) From split/ merge procedures daughter/ merged regions get created in OFFLINE state
[ https://issues.apache.org/jira/browse/HBASE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-19915 started by Umesh Agashe. > From split/ merge procedures daughter/ merged regions get created in OFFLINE > state > -- > > Key: HBASE-19915 > URL: https://issues.apache.org/jira/browse/HBASE-19915 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0-beta-1 >Reporter: Umesh Agashe >Assignee: Umesh Agashe >Priority: Major > Fix For: 2.0.0-beta-2 > > > See HBASE-19530. When regions are created, their initial state should be CLOSED. The bug > was discovered while debugging the flaky test > TestSplitTableRegionProcedure#testRollbackAndDoubleExecution with numOfSteps > set to 4. If the master is restarted after the daughter regions have been updated in meta, the > master's startup sequence assigns all OFFLINE regions. Because the daughter regions > are stored with OFFLINE state, they are assigned at startup. This is > followed by re-assignment of the daughter regions from the resumed > SplitTableRegionProcedure. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
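The double-assignment mechanism the description walks through can be modeled in a few lines. This is a hypothetical simplification of the master's startup pass, not the actual AssignmentManager code: startup assigns every region recorded as OFFLINE, so daughters persisted as OFFLINE get assigned once by startup and again by the resumed split procedure, while daughters persisted as CLOSED are left for the procedure alone.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical model of why the initial region state matters at master restart.
public class StartupAssignDemo {
    public enum State { OFFLINE, CLOSED, OPEN }

    // Simplified startup pass: assign everything recorded as OFFLINE.
    public static List<String> startupAssign(Map<String, State> regionsInMeta) {
        List<String> assigned = new ArrayList<>();
        for (Map.Entry<String, State> e : regionsInMeta.entrySet()) {
            if (e.getValue() == State.OFFLINE) {
                assigned.add(e.getKey());
            }
        }
        return assigned;
    }

    public static void main(String[] args) {
        Map<String, State> meta = new LinkedHashMap<>();
        meta.put("daughterA", State.OFFLINE); // buggy initial state: picked up here
        meta.put("daughterB", State.CLOSED);  // proposed initial state: skipped here
        // daughterA is assigned by startup AND later by the resumed procedure;
        // daughterB is assigned exactly once, by the procedure.
        System.out.println(startupAssign(meta));
    }
}
```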
[jira] [Updated] (HBASE-19915) From split/ merge procedures daughter/ merged regions get created in OFFLINE state
[ https://issues.apache.org/jira/browse/HBASE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Umesh Agashe updated HBASE-19915: - Status: Patch Available (was: In Progress) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19915) From split/ merge procedures daughter/ merged regions get created in OFFLINE state
[ https://issues.apache.org/jira/browse/HBASE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Umesh Agashe updated HBASE-19915: - Attachment: hbase-19915.master.001.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19805) NPE in HMaster while issuing a sequence of table splits
[ https://issues.apache.org/jira/browse/HBASE-19805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350786#comment-16350786 ] stack commented on HBASE-19805: --- Any luck [~sergey.soldatov] ? > NPE in HMaster while issuing a sequence of table splits > --- > > Key: HBASE-19805 > URL: https://issues.apache.org/jira/browse/HBASE-19805 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 2.0.0-beta-1 >Reporter: Josh Elser >Assignee: Sergey Soldatov >Priority: Critical > Fix For: 2.0.0-beta-2 > > > I wrote a toy program to test the client tarball in HBASE-19735. After the > first few region splits, I see the following error in the Master log. > {noformat} > 2018-01-16 14:07:52,797 INFO > [RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=16000] master.HMaster: > Client=jelser//192.168.1.23 split > myTestTable,1,1516129669054.8313b755f74092118f9dd30a4190ee23. > 2018-01-16 14:07:52,797 ERROR > [RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=16000] ipc.RpcServer: > Unexpected throwable object > java.lang.NullPointerException > at > org.apache.hadoop.hbase.client.ConnectionUtils.getStubKey(ConnectionUtils.java:229) > at > org.apache.hadoop.hbase.client.ConnectionImplementation.getAdmin(ConnectionImplementation.java:1175) > at > org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.getAdmin(ConnectionUtils.java:149) > at > org.apache.hadoop.hbase.master.assignment.Util.getRegionInfoResponse(Util.java:59) > at > org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.checkSplittable(SplitTableRegionProcedure.java:146) > at > org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.(SplitTableRegionProcedure.java:103) > at > org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:761) > at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1626) > at > 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:134) > at org.apache.hadoop.hbase.master.HMaster.splitRegion(HMaster.java:1618) > at > org.apache.hadoop.hbase.master.MasterRpcServices.splitRegion(MasterRpcServices.java:778) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:404) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {noformat} > {code:java} > public static void main(String[] args) throws Exception { > Configuration conf = HBaseConfiguration.create(); > try (Connection conn = ConnectionFactory.createConnection(conf); > Admin admin = conn.getAdmin()) { > final TableName tn = TableName.valueOf("myTestTable"); > if (admin.tableExists(tn)) { > admin.disableTable(tn); > admin.deleteTable(tn); > } > final TableDescriptor desc = TableDescriptorBuilder.newBuilder(tn) > > .addColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f1")).build()) > .build(); > admin.createTable(desc); > List<String> splitPoints = new ArrayList<>(16); > for (int i = 1; i <= 16; i++) { > splitPoints.add(Integer.toString(i, 16)); > } > > System.out.println("Splits: " + splitPoints); > int numRegions = admin.getRegions(tn).size(); > for (String splitPoint : splitPoints) { > System.out.println("Splitting on " + splitPoint); > admin.split(tn, Bytes.toBytes(splitPoint)); > Thread.sleep(200); > int newRegionSize = admin.getRegions(tn).size(); > while (numRegions == newRegionSize) { > Thread.sleep(50); > newRegionSize = admin.getRegions(tn).size(); > } > } > } > } > {code} > At a quick glance, it looks like {{Util.getRegionInfoResponse}} is to blame.
> {code} > static GetRegionInfoResponse getRegionInfoResponse(final MasterProcedureEnv > env, > final ServerName regionLocation, final RegionInfo hri, boolean > includeBestSplitRow) > throws IOException { > // TODO: There is no timeout on this controller. Set one! > HBaseRpcController controller = > env.getMasterServices().getClusterConnection(). > getRpcControllerFactory().newController(); > final AdminService.BlockingInterface admin = > > env.getMasterServices().getClusterConnection().getAdmin(regionLocation); > {code} > We don't validate that we have a non-null
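The description cuts off while noting the missing non-null validation. A minimal sketch of the kind of guard being asked for is below; `RegionLocationGuard` and `checkLocation` are hypothetical names, and the real fix would check the `regionLocation` inside `Util.getRegionInfoResponse` before calling `getClusterConnection().getAdmin(...)`:

```java
/**
 * Hypothetical guard illustrating the missing check: fail with a descriptive
 * exception instead of letting a null region location reach getStubKey() and NPE.
 */
public class RegionLocationGuard {
    public static String checkLocation(String regionLocation, String regionName) {
        if (regionLocation == null) {
            throw new IllegalStateException(
                "No server location known for region " + regionName
                    + "; cannot create an admin stub for it");
        }
        return regionLocation;
    }
}
```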
[jira] [Updated] (HBASE-19528) Major Compaction Tool
[ https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] churro morales updated HBASE-19528: --- Status: Open (was: Patch Available) > Major Compaction Tool > -- > > Key: HBASE-19528 > URL: https://issues.apache.org/jira/browse/HBASE-19528 > Project: HBase > Issue Type: New Feature >Reporter: churro morales >Assignee: churro morales >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: 0001-HBASE-19528-Major-Compaction-Tool-ADDENDUM.patch, > HBASE-19528.branch-1.patch, HBASE-19528.patch, HBASE-19528.v1.branch-1.patch, > HBASE-19528.v1.patch, HBASE-19528.v2.branch-1.patch, > HBASE-19528.v2.branch-1.patch, HBASE-19528.v2.branch-1.patch, > HBASE-19528.v8.patch > > > The basic overview of how this tool works is: > Parameters: > Table > Stores > ClusterConcurrency > Timestamp > So you input a table, desired concurrency and the list of stores you wish to > major compact. The tool first checks the filesystem to see which stores need > compaction based on the timestamp you provide (default is current time). It > takes that list of stores that require compaction and executes those requests > concurrently with at most N distinct RegionServers compacting at a given > time. Each thread waits for the compaction to complete before moving to the > next queue. If a region split, merge or move happens this tool ensures those > regions get major compacted as well. > This helps us in two ways, we can limit how much I/O bandwidth we are using > for major compaction cluster wide and we are guaranteed after the tool > completes that all requested compactions complete regardless of moves, merges > and splits. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19528) Major Compaction Tool
[ https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] churro morales updated HBASE-19528: --- Status: Patch Available (was: Open) > Major Compaction Tool > -- > > Key: HBASE-19528 > URL: https://issues.apache.org/jira/browse/HBASE-19528 > Project: HBase > Issue Type: New Feature >Reporter: churro morales >Assignee: churro morales >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: 0001-HBASE-19528-Major-Compaction-Tool-ADDENDUM.patch, > HBASE-19528.branch-1.patch, HBASE-19528.patch, HBASE-19528.v1.branch-1.patch, > HBASE-19528.v1.patch, HBASE-19528.v2.branch-1.patch, > HBASE-19528.v2.branch-1.patch, HBASE-19528.v2.branch-1.patch, > HBASE-19528.v8.patch > > > The basic overview of how this tool works is: > Parameters: > Table > Stores > ClusterConcurrency > Timestamp > So you input a table, desired concurrency and the list of stores you wish to > major compact. The tool first checks the filesystem to see which stores need > compaction based on the timestamp you provide (default is current time). It > takes that list of stores that require compaction and executes those requests > concurrently with at most N distinct RegionServers compacting at a given > time. Each thread waits for the compaction to complete before moving to the > next queue. If a region split, merge or move happens this tool ensures those > regions get major compacted as well. > This helps us in two ways, we can limit how much I/O bandwidth we are using > for major compaction cluster wide and we are guaranteed after the tool > completes that all requested compactions complete regardless of moves, merges > and splits. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
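The throttling the tool description talks about (at most ClusterConcurrency servers compacting at once, each worker finishing its compaction before taking the next queue) can be modeled with a fixed-size thread pool. This is an illustrative sketch, not the tool's actual implementation:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Illustrative model: run one compaction request per server, at most N concurrently. */
public class ThrottledCompactions {
    public static void runAll(List<Runnable> perServerRequests, int clusterConcurrency)
            throws InterruptedException {
        // the pool size is the cluster-wide concurrency cap
        ExecutorService pool = Executors.newFixedThreadPool(clusterConcurrency);
        CountDownLatch done = new CountDownLatch(perServerRequests.size());
        for (Runnable request : perServerRequests) {
            pool.submit(() -> {
                try {
                    request.run(); // blocks until this server's compaction completes
                } finally {
                    done.countDown();
                }
            });
        }
        done.await(); // all requested compactions have completed
        pool.shutdown();
    }
}
```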
[jira] [Updated] (HBASE-19901) Up yetus proclimit on nightlies
[ https://issues.apache.org/jira/browse/HBASE-19901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19901: -- Resolution: Fixed Fix Version/s: 1.1.13 1.4.2 1.2.8 2.0.0-beta-2 1.5.0 1.3.2 Release Note: Pass to yetus a dockermemlimit of 20G and a proclimit of 10000. Defaults are 4G and 1000 respectively. Status: Resolved (was: Patch Available) Pushed change to 1.1+ All now have hardcoded docker memlimit of 20G and proclimit of 10k. TODO: How to get hbase_nightly_yetus to use defined globals in hbase-personality instead of hardcoding. > Up yetus proclimit on nightlies > --- > > Key: HBASE-19901 > URL: https://issues.apache.org/jira/browse/HBASE-19901 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 1.3.2, 1.5.0, 2.0.0-beta-2, 1.2.8, 1.4.2, 1.1.13 > > Attachments: HBASE-19901.master.001.patch, > HBASE-19901.master.002.patch > > > We're on 0.7.0 now which enforces limits meant to protect against runaway > processes. Default is 1000 procs. HBase test runs seem to consume almost 4k. > Up our proclimit. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-19921) Disable 1.1 nightly builds.
stack created HBASE-19921: - Summary: Disable 1.1 nightly builds. Key: HBASE-19921 URL: https://issues.apache.org/jira/browse/HBASE-19921 Project: HBase Issue Type: Sub-task Reporter: stack As suggested by [~mdrob], removing the JenkinsFile is the trick (Nightlies run for all branches in hbase). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19921) Disable 1.1 nightly builds.
[ https://issues.apache.org/jira/browse/HBASE-19921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19921: -- Attachment: 0001-HBASE-19921-Disable-1.1-nightly-builds.patch > Disable 1.1 nightly builds. > --- > > Key: HBASE-19921 > URL: https://issues.apache.org/jira/browse/HBASE-19921 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Priority: Major > Attachments: 0001-HBASE-19921-Disable-1.1-nightly-builds.patch > > > As suggested by [~mdrob], remove the JenkinsFile is the trick (Nightlies run > for all branches in hbase). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"
[ https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-19663: --- Fix Version/s: 1.5.0 > site build fails complaining "javadoc: error - class file for > javax.annotation.meta.TypeQualifierNickname not found" > > > Key: HBASE-19663 > URL: https://issues.apache.org/jira/browse/HBASE-19663 > Project: HBase > Issue Type: Bug > Components: site >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0, 1.5.0, 1.4.2 > > Attachments: script.sh > > > Cryptic failure trying to build beta-1 RC. Fails like this: > {code} > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 03:54 min > [INFO] Finished at: 2017-12-29T01:13:15-08:00 > [INFO] Final Memory: 381M/9165M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate: > [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS > [ERROR] reason: class file for javax.annotation.meta.When not found > [ERROR] warning: unknown enum constant When.UNKNOWN > [ERROR] warning: unknown enum constant When.MAYBE > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))" > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: reference not found: #matchingRows(Cell, byte[])) > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: reference not found: #matchingRows(Cell, byte[])) > [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found. 
> [ERROR] javadoc: error - class file for > javax.annotation.meta.TypeQualifierNickname not found > [ERROR] > [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc > -J-Xmx2G @options @packages > [ERROR] > [ERROR] Refer to the generated Javadoc files in > '/home/stack/hbase.git/target/site/apidocs' dir. > [ERROR] -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException > {code} > javax.annotation.meta.TypeQualifierNickname is out of jsr305 but we don't > include this anywhere according to mvn dependency. > Happens building the User API both test and main. > Excluding these lines gets us passing again: > {code} > 3511 > 3512 > org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet > 3513 > 3514 > 3515 org.apache.yetus > 3516 audience-annotations > 3517 ${audience-annotations.version} > 3518 > + 3519 true > {code} > Tried upgrading to newer mvn site (ours is three years old) but that a > different set of problems. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-19921) Disable 1.1 nightly builds.
[ https://issues.apache.org/jira/browse/HBASE-19921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-19921. --- Resolution: Fixed Assignee: stack Fix Version/s: 1.1.13 Release Note: Disabled nightly build on branch-1.1 since it EOL'd. Removed JenkinsFile from under dev-support. Pushed to branch-1.1. > Disable 1.1 nightly builds. > --- > > Key: HBASE-19921 > URL: https://issues.apache.org/jira/browse/HBASE-19921 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 1.1.13 > > Attachments: 0001-HBASE-19921-Disable-1.1-nightly-builds.patch > > > As suggested by [~mdrob], remove the JenkinsFile is the trick (Nightlies run > for all branches in hbase). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19921) Disable 1.1 nightly builds.
[ https://issues.apache.org/jira/browse/HBASE-19921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350874#comment-16350874 ] stack commented on HBASE-19921: --- Turned off test runs on jenkins too. > Disable 1.1 nightly builds. > --- > > Key: HBASE-19921 > URL: https://issues.apache.org/jira/browse/HBASE-19921 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 1.1.13 > > Attachments: 0001-HBASE-19921-Disable-1.1-nightly-builds.patch > > > As suggested by [~mdrob], remove the JenkinsFile is the trick (Nightlies run > for all branches in hbase). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19921) Disable 1.1 nightly builds.
[ https://issues.apache.org/jira/browse/HBASE-19921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350877#comment-16350877 ] Mike Drob commented on HBASE-19921: --- Would it have been better to delete the whole branch? > Disable 1.1 nightly builds. > --- > > Key: HBASE-19921 > URL: https://issues.apache.org/jira/browse/HBASE-19921 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 1.1.13 > > Attachments: 0001-HBASE-19921-Disable-1.1-nightly-builds.patch > > > As suggested by [~mdrob], remove the JenkinsFile is the trick (Nightlies run > for all branches in hbase). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19904) Break dependency of WAL constructor on Replication
[ https://issues.apache.org/jira/browse/HBASE-19904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350884#comment-16350884 ] Appy commented on HBASE-19904: -- Wohoo..since QA is happy...commit it. There's one more improvement suggestion up in RB, but can be done in addendum/separate jira. > Break dependency of WAL constructor on Replication > -- > > Key: HBASE-19904 > URL: https://issues.apache.org/jira/browse/HBASE-19904 > Project: HBase > Issue Type: Improvement > Components: Replication, wal >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19904-branch-2.patch, HBASE-19904-branch-2.patch, > HBASE-19904-v3.patch, HBASE-19904-v3.patch, HBASE-19904-v4.patch, > HBASE-19904-v4.patch, HBASE-19904-v5.patch > > > When implementing synchronous replication, I found that we need to depend > more on replication in WAL so it is even more pain... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19786) acl table is created by coprocessor inside Master start procedure; broke TestJMXConnectorServer
[ https://issues.apache.org/jira/browse/HBASE-19786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350890#comment-16350890 ] stack commented on HBASE-19786: --- Moved out of beta-2. Not happening. > acl table is created by coprocessor inside Master start procedure; broke > TestJMXConnectorServer > --- > > Key: HBASE-19786 > URL: https://issues.apache.org/jira/browse/HBASE-19786 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.0 > > > Parent reordering of startup broke TestJMXConnectorServer. It's failing > because we start the cluster and then go down almost immediately. Meantime, the acl > table is trying to get created but the servers have been pulled out from > under it so it can't complete; the test gets stuck. > Creating tables inside the Master startup process is a bit dodgy. Fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19786) acl table is created by coprocessor inside Master start procedure; broke TestJMXConnectorServer
[ https://issues.apache.org/jira/browse/HBASE-19786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19786: -- Fix Version/s: (was: 2.0.0-beta-2) 2.0.0 > acl table is created by coprocessor inside Master start procedure; broke > TestJMXConnectorServer > --- > > Key: HBASE-19786 > URL: https://issues.apache.org/jira/browse/HBASE-19786 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.0 > > > Parent reordering of startup broke TestJMXConnectorServer. It's failing > because we start the cluster and then go down almost immediately. Meantime, the acl > table is trying to get created but the servers have been pulled out from > under it so it can't complete; the test gets stuck. > Creating tables inside the Master startup process is a bit dodgy. Fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19876) The exception happening in converting pb mutation to hbase.mutation messes up the CellScanner
[ https://issues.apache.org/jira/browse/HBASE-19876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350908#comment-16350908 ] Hadoop QA commented on HBASE-19876: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 5s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 7s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 47s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 44s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 18m 14s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 32s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}141m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.replication.TestReplicationDroppedTables | | | hadoop.hbase.TestFullLogReconstruction | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19876 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12909023/HBASE-19876.master.001.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux edd281aedb18 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / 8143d5afa4 | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/11363/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/11363/testReport/ | | Max. process+thread count | 4797 (vs. ulimit of 1)
[jira] [Commented] (HBASE-19767) Master web UI shows negative values for Remaining KVs
[ https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350915#comment-16350915 ] stack commented on HBASE-19767: --- Is the metric rubbish then? Should we just remove it? > Master web UI shows negative values for Remaining KVs > - > > Key: HBASE-19767 > URL: https://issues.apache.org/jira/browse/HBASE-19767 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-alpha-4 >Reporter: Jean-Marc Spaggiari >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png > > > In the Master Web UI, under the compaction tab, the Remaining KVs sometimes > shows negative values. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
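One plausible source of such negatives is the UI computing remaining as total minus compacted while the two counters are updated concurrently; clamping at zero would at least hide the race. A sketch under that assumption (the metric names here are assumptions, not the actual UI code):

```java
/** Sketch: clamp the "Remaining KVs" display so a racy read can't go negative. */
public class RemainingKvs {
    public static long remaining(long totalCompactingKVs, long currentCompactedKVs) {
        return Math.max(0L, totalCompactingKVs - currentCompactedKVs);
    }
}
```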
[jira] [Updated] (HBASE-19726) Failed to start HMaster due to infinite retrying on meta assign
[ https://issues.apache.org/jira/browse/HBASE-19726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19726: -- Attachment: 19726.patch > Failed to start HMaster due to infinite retrying on meta assign > --- > > Key: HBASE-19726 > URL: https://issues.apache.org/jira/browse/HBASE-19726 > Project: HBase > Issue Type: Bug >Reporter: Duo Zhang >Assignee: stack >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: 19726.patch > > > This is what I got at first, an exception when trying to write something to > meta when meta has not been onlined yet. > {noformat} > 2018-01-07,21:03:14,389 INFO org.apache.hadoop.hbase.master.HMaster: Running > RecoverMetaProcedure to ensure proper hbase:meta deploy. > 2018-01-07,21:03:14,637 INFO > org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: Start pid=1, > state=RUNNABLE:RECOVER_META_SPLIT_LOGS; RecoverMetaProcedure > failedMetaServer=null, splitWal=true > 2018-01-07,21:03:14,645 INFO org.apache.hadoop.hbase.master.MasterWalManager: > Log folder > hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st27.bj,38900,1515330173896 > belongs to an existing region server > 2018-01-07,21:03:14,646 INFO org.apache.hadoop.hbase.master.MasterWalManager: > Log folder > hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st29.bj,38900,1515330177232 > belongs to an existing region server > 2018-01-07,21:03:14,648 INFO > org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: pid=1, > state=RUNNABLE:RECOVER_META_ASSIGN_REGIONS; RecoverMetaProcedure > failedMetaServer=null, splitWal=true; Retaining meta assignment to server=null > 2018-01-07,21:03:14,653 INFO > org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized > subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; > AssignProcedure table=hbase:meta, region=1588230740}] > 2018-01-07,21:03:14,660 INFO > org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: pid=2, 
> ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure > table=hbase:meta, region=1588230740 hbase:meta hbase:meta,,1.1588230740 > 2018-01-07,21:03:14,663 INFO > org.apache.hadoop.hbase.master.assignment.AssignProcedure: Start pid=2, > ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure > table=hbase:meta, region=1588230740; rit=OFFLINE, location=null; > forceNewPlan=false, retain=false > 2018-01-07,21:03:14,831 INFO > org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta > (replicaId=0) location in ZooKeeper as > c4-hadoop-tst-st27.bj,38900,1515330173896 > 2018-01-07,21:03:14,841 INFO > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: Dispatch > pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure > table=hbase:meta, region=1588230740; rit=OPENING, > location=c4-hadoop-tst-st27.bj,38900,1515330173896 > 2018-01-07,21:03:14,992 INFO > org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher: Using > procedure batch rpc execution for > serverName=c4-hadoop-tst-st27.bj,38900,1515330173896 version=3145728 > 2018-01-07,21:03:15,593 ERROR > org.apache.hadoop.hbase.client.AsyncRequestFutureImpl: Cannot get replica 0 > location for > {"totalColumns":1,"row":"hbase:meta","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1515330195514}]},"ts":1515330195514} > 2018-01-07,21:03:15,594 WARN > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: > Retryable error trying to transition: pid=2, ppid=1, > state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:meta, > region=1588230740; rit=OPEN, > location=c4-hadoop-tst-st27.bj,38900,1515330173896 > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 > action: IOException: 1 time, servers with issues: null > at > org.apache.hadoop.hbase.client.BatchErrors.makeException(BatchErrors.java:54) > at > 
org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.getErrors(AsyncRequestFutureImpl.java:1250) > at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:457) > at org.apache.hadoop.hbase.client.HTable.put(HTable.java:570) > at > org.apache.hadoop.hbase.MetaTableAccessor.put(MetaTableAccessor.java:1450) > at > org.apache.hadoop.hbase.MetaTableAccessor.putToMetaTable(MetaTableAccessor.java:1439) > at > org.apache.hadoop.hbase.MetaTableAccessor.updateTableState(MetaTableAccessor.java:1785) > at > org.apache.hadoop.hbase.MetaTableAccessor.updateTableState(MetaTableAccessor.java:1151) > at > org.apache.hadoop.hbase.master.TableStateManager.udpateMetaState(TableStateManager.java:183) > at > org.apache.hadoop.hbase.master.TableS
[jira] [Updated] (HBASE-19726) Failed to start HMaster due to infinite retrying on meta assign
[ https://issues.apache.org/jira/browse/HBASE-19726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19726: -- Status: Patch Available (was: Open) As suggested above by [~Apache9], no need of setting hbase:meta as ENABLED. It is always ENABLED. Short-circuit all calls to ENABLE hbase:meta. This saves on RPC and possible deadlock. The other issue in here where we were stuck in an RPC has been addressed elsewhere; shutdown now closes the Master connection which breaks the Connection shown hung in the original thread dump here in the description. > Failed to start HMaster due to infinite retrying on meta assign > --- > > Key: HBASE-19726 > URL: https://issues.apache.org/jira/browse/HBASE-19726 > Project: HBase > Issue Type: Bug >Reporter: Duo Zhang >Assignee: stack >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: 19726.patch > > > This is what I got at first, an exception when trying to write something to > meta when meta has not been onlined yet. > {noformat} > 2018-01-07,21:03:14,389 INFO org.apache.hadoop.hbase.master.HMaster: Running > RecoverMetaProcedure to ensure proper hbase:meta deploy. 
> 2018-01-07,21:03:14,637 INFO > org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: Start pid=1, > state=RUNNABLE:RECOVER_META_SPLIT_LOGS; RecoverMetaProcedure > failedMetaServer=null, splitWal=true > 2018-01-07,21:03:14,645 INFO org.apache.hadoop.hbase.master.MasterWalManager: > Log folder > hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st27.bj,38900,1515330173896 > belongs to an existing region server > 2018-01-07,21:03:14,646 INFO org.apache.hadoop.hbase.master.MasterWalManager: > Log folder > hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st29.bj,38900,1515330177232 > belongs to an existing region server > 2018-01-07,21:03:14,648 INFO > org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: pid=1, > state=RUNNABLE:RECOVER_META_ASSIGN_REGIONS; RecoverMetaProcedure > failedMetaServer=null, splitWal=true; Retaining meta assignment to server=null > 2018-01-07,21:03:14,653 INFO > org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized > subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; > AssignProcedure table=hbase:meta, region=1588230740}] > 2018-01-07,21:03:14,660 INFO > org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: pid=2, > ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure > table=hbase:meta, region=1588230740 hbase:meta hbase:meta,,1.1588230740 > 2018-01-07,21:03:14,663 INFO > org.apache.hadoop.hbase.master.assignment.AssignProcedure: Start pid=2, > ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure > table=hbase:meta, region=1588230740; rit=OFFLINE, location=null; > forceNewPlan=false, retain=false > 2018-01-07,21:03:14,831 INFO > org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta > (replicaId=0) location in ZooKeeper as > c4-hadoop-tst-st27.bj,38900,1515330173896 > 2018-01-07,21:03:14,841 INFO > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: Dispatch > pid=2, ppid=1, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure > table=hbase:meta, region=1588230740; rit=OPENING, > location=c4-hadoop-tst-st27.bj,38900,1515330173896 > 2018-01-07,21:03:14,992 INFO > org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher: Using > procedure batch rpc execution for > serverName=c4-hadoop-tst-st27.bj,38900,1515330173896 version=3145728 > 2018-01-07,21:03:15,593 ERROR > org.apache.hadoop.hbase.client.AsyncRequestFutureImpl: Cannot get replica 0 > location for > {"totalColumns":1,"row":"hbase:meta","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1515330195514}]},"ts":1515330195514} > 2018-01-07,21:03:15,594 WARN > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: > Retryable error trying to transition: pid=2, ppid=1, > state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:meta, > region=1588230740; rit=OPEN, > location=c4-hadoop-tst-st27.bj,38900,1515330173896 > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 > action: IOException: 1 time, servers with issues: null > at > org.apache.hadoop.hbase.client.BatchErrors.makeException(BatchErrors.java:54) > at > org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.getErrors(AsyncRequestFutureImpl.java:1250) > at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:457) > at org.apache.hadoop.hbase.client.HTable.put(HTable.java:570) > at > org.apache.hadoop.hbase.MetaTableAccessor.put(MetaTableAccessor.java:1450) > at > org.apache.hadoop.hbase.MetaTableA
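The short-circuit described in the patch comment can be sketched as follows: since hbase:meta is always ENABLED, state lookups for it never need to touch the meta table itself, avoiding the RPC (and potential deadlock) against a not-yet-assigned meta. The names below are illustrative, not the actual TableStateManager code:

```java
/** Sketch of short-circuiting table-state reads for hbase:meta. */
public class MetaStateShortCircuit {
    public static String getTableState(String tableName) {
        if ("hbase:meta".equals(tableName)) {
            // always ENABLED; skips an RPC to a possibly-unassigned meta region
            return "ENABLED";
        }
        return lookupStateInMeta(tableName);
    }

    // placeholder for the real meta-table read
    private static String lookupStateInMeta(String tableName) {
        return "ENABLED";
    }
}
```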
[jira] [Commented] (HBASE-19848) Zookeeper thread leaks in hbase-spark bulkLoad method
[ https://issues.apache.org/jira/browse/HBASE-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350936#comment-16350936 ] Hudson commented on HBASE-19848: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4515 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4515/]) HBASE-19848 Zookeeper thread leaks in hbase-spark bulkLoad method (Key (tedyu: rev 8143d5afa4a34c5f06a22e30b5017958b8c3f60c) * (edit) hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala > Zookeeper thread leaks in hbase-spark bulkLoad method > - > > Key: HBASE-19848 > URL: https://issues.apache.org/jira/browse/HBASE-19848 > Project: HBase > Issue Type: Bug > Components: spark, Zookeeper >Affects Versions: 1.2.0 > Environment: hbase-spark-1.2.0-cdh5.12.1 version > spark 1.6 >Reporter: Key Hutu >Assignee: Key Hutu >Priority: Major > Labels: performance > Attachments: HBASE-19848-V2.patch, HBASE-19848-V3.patch, > HBaseContext.patch, HBaseContext.scala > > Original Estimate: 72h > Remaining Estimate: 72h > > In the hbase-spark project, HBaseContext provides a bulkLoad method for loading > Spark RDD data into HBase easily. But when I use it frequently, the program > throws a "cannot create native thread" exception. > Using the pstack command on the Spark driver process, the thread count keeps increasing; > using jstack, there are many threads named "main-SendThread" and "main-EventThread". > It seems the connection is created before bulkLoad, but the close method > is never invoked. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
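The leak pattern and its fix reduce to closing the connection in a finally block, so its ZooKeeper client (the main-SendThread/main-EventThread pair) is torn down even when the load fails. A sketch of the shape of the fix in Java (the actual change is in HBaseContext.scala, and the Conn interface here is hypothetical):

```java
/** Sketch: guarantee close() runs so ZooKeeper client threads are reaped. */
public class BulkLoadCleanup {
    interface Conn extends AutoCloseable {
        void bulkLoad();
        @Override void close(); // narrowed: no checked exception in this sketch
    }

    public static void bulkLoadAndClose(Conn conn) {
        try {
            conn.bulkLoad();
        } finally {
            conn.close(); // the missing call that leaked threads
        }
    }
}
```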
[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory
[ https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350943#comment-16350943 ]

Mike Drob commented on HBASE-19920:
---

It looks like there are a couple of subtle things going on here.
# Clients could use dynamic jars for coprocessors, for example if they are making a request that involves endpoint coprocessors. So I think I disagree with the implied solution.
# Dynamic jars shouldn't be getting loaded from the local file system; the intended use is to load them from a shared file system like HDFS. This might break in use cases where HBase is running on LocalFS instead of HDFS, which I suspect is mostly seen in test environments.
# Maybe we shouldn't be creating this directory, but limit ourselves to checking whether it exists and is readable. Not having the directory there shouldn't be a fatal error; it is probably sufficient to log a warning and move on.

> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
> Issue Type: Bug
> Reporter: Rohini Palaniswamy
> Priority: Major
>
> On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil which in its static block initializes DynamicClassLoader and that creates the directory ${hbase.rootdir}/lib
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L115-L127
> Since this is region server specific code, not expecting this to happen when one accesses hbase as a client.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
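The third point (check the directory instead of creating it, and warn rather than fail) could look roughly like the sketch below. The method and log message are illustrative, not the actual DynamicClassLoader code.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch: verify the dynamic-jars directory exists and is readable instead
// of creating it; a missing directory is a warning, not a fatal error.
public class DynamicJarsDirCheck {

    static boolean usableJarsDir(Path dir) {
        if (!Files.isDirectory(dir) || !Files.isReadable(dir)) {
            // Non-fatal: most clients never need dynamic jar loading.
            System.err.println("WARN: dynamic jars dir " + dir
                + " is missing or unreadable; dynamic class loading disabled");
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // A path that should not exist: logs a warning and returns false.
        System.out.println(usableJarsDir(Paths.get("/no/such/dynamic/jars/dir")));
        // The working directory exists and is readable: returns true.
        System.out.println(usableJarsDir(Paths.get(".")));
    }
}
```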
[jira] [Created] (HBASE-19922) ProtobufUtils::PRIMITIVES is unused
Mike Drob created HBASE-19922:
-

Summary: ProtobufUtils::PRIMITIVES is unused
Key: HBASE-19922
URL: https://issues.apache.org/jira/browse/HBASE-19922
Project: HBase
Issue Type: Task
Components: Protobufs
Reporter: Mike Drob

It looks like ProtobufUtils::PRIMITIVES is never read in either the shaded or the non-shaded version of the class. Is it safe to remove?
https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java#L128
We populate the map in a static initializer but never read any values from it...

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HBASE-19922) ProtobufUtils::PRIMITIVES is unused
[ https://issues.apache.org/jira/browse/HBASE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-19922: -- Attachment: HBASE-19922.patch > ProtobufUtils::PRIMITIVES is unused > --- > > Key: HBASE-19922 > URL: https://issues.apache.org/jira/browse/HBASE-19922 > Project: HBase > Issue Type: Task > Components: Protobufs >Reporter: Mike Drob >Priority: Major > Fix For: 2.0 > > Attachments: HBASE-19922.patch > > > It looks like ProtobufUtils::PRIMITIVES is never read in both the shaded and > non-shaded versions of the class. Is it safe to remove? > https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java#L128 > We populate the map in a static initializer but never read any values from > it... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19922) ProtobufUtils::PRIMITIVES is unused
[ https://issues.apache.org/jira/browse/HBASE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-19922: -- Assignee: Mike Drob Fix Version/s: 2.0 Status: Patch Available (was: Open) > ProtobufUtils::PRIMITIVES is unused > --- > > Key: HBASE-19922 > URL: https://issues.apache.org/jira/browse/HBASE-19922 > Project: HBase > Issue Type: Task > Components: Protobufs >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: 2.0 > > Attachments: HBASE-19922.patch > > > It looks like ProtobufUtils::PRIMITIVES is never read in both the shaded and > non-shaded versions of the class. Is it safe to remove? > https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java#L128 > We populate the map in a static initializer but never read any values from > it... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory
[ https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350991#comment-16350991 ]

Rohini Palaniswamy commented on HBASE-19920:

bq. Do you want to submit a patch ?
No

From my perspective, a call to get a delegation token should not be
1) Creating a local directory
2) Instantiating a filesystem class, be it local or remote. It is worse when it is remote because of the overhead involved in instantiating a DFSClient (opening sockets, etc).

I do not have a problem if DynamicClassLoader actually does those things when the client intends to use coprocessors. I would just prefer it to be taken out of the code path of getting delegation tokens.

> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
> Issue Type: Bug
> Reporter: Rohini Palaniswamy
> Priority: Major
>
> On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil which in its static block initializes DynamicClassLoader and that creates the directory ${hbase.rootdir}/lib
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L115-L127
> Since this is region server specific code, not expecting this to happen when one accesses hbase as a client.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
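One way to keep the expensive setup out of the token path, as requested above, is to defer DynamicClassLoader construction to first actual use instead of running it in a static initializer that fires as soon as ProtobufUtil is loaded. The sketch below uses the standard lazy-holder idiom with stand-in class names; it is not the real HBase code.

```java
// Sketch: lazy-holder idiom so an expensive loader is built only on first
// use, not when the enclosing class is loaded. ExpensiveLoader stands in
// for DynamicClassLoader (whose constructor does mkdirs + filesystem setup).
public class LazyLoaderSketch {

    static int constructions = 0;

    static final class ExpensiveLoader {
        ExpensiveLoader() {
            constructions++; // stands in for mkdirs + DFSClient instantiation
        }
    }

    // The JVM initializes Holder only when loader() is first called.
    private static final class Holder {
        static final ExpensiveLoader INSTANCE = new ExpensiveLoader();
    }

    static ExpensiveLoader loader() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // A token-fetch-style call path never touches loader():
        System.out.println("before first use: " + constructions);
        loader(); // first coprocessor-style use triggers construction
        System.out.println("after first use: " + constructions);
    }
}
```

A plain token-fetching client would never call loader(), so it would pay none of the directory-creation or DFSClient cost.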
[jira] [Updated] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master
[ https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-19703:
--

Attachment: HBASE-19703.branch-2.001.patch

> Functionality added as part of HBASE-12583 is not working after moving the split code to master
> -
>
> Key: HBASE-19703
> URL: https://issues.apache.org/jira/browse/HBASE-19703
> Project: HBase
> Issue Type: Bug
> Reporter: Rajeshbabu Chintaguntla
> Assignee: Rajeshbabu Chintaguntla
> Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19703-WIP.patch, HBASE-19703.branch-2.001.patch, HBASE-19703_v2.patch, HBASE-19703_v3.patch, HBASE-19703_v4.patch, HBASE-19703_v5.patch
>
> As part of HBASE-12583 we are passing split policy to HRegionFileSystem#splitStoreFile so that we can allow to create reference files even the split key is out of HFile key range. This is needed for Local Indexing implementation in Phoenix. But now after moving the split code to master just passing null for split policy.
> {noformat}
> final String familyName = Bytes.toString(family);
> final Path path_first = regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, false, null);
> final Path path_second = regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, true, null);
> {noformat}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master
[ https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350993#comment-16350993 ]

stack commented on HBASE-19703:
---

.001 Adds more doc explaining that this is doubling-down on a hack for Phoenix local indices. It's the only user. Made that clear.

Yeah, this needs cleanup. Did you get a chance to file an issue [~rajeshbabu]? Thanks. Will push this after a hadoopqa run.

> Functionality added as part of HBASE-12583 is not working after moving the split code to master
> -
>
> Key: HBASE-19703
> URL: https://issues.apache.org/jira/browse/HBASE-19703
> Project: HBase
> Issue Type: Bug
> Reporter: Rajeshbabu Chintaguntla
> Assignee: Rajeshbabu Chintaguntla
> Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19703-WIP.patch, HBASE-19703.branch-2.001.patch, HBASE-19703_v2.patch, HBASE-19703_v3.patch, HBASE-19703_v4.patch, HBASE-19703_v5.patch
>
> As part of HBASE-12583 we are passing split policy to HRegionFileSystem#splitStoreFile so that we can allow to create reference files even the split key is out of HFile key range. This is needed for Local Indexing implementation in Phoenix. But now after moving the split code to master just passing null for split policy.
> {noformat}
> final String familyName = Bytes.toString(family);
> final Path path_first = regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, false, null);
> final Path path_second = regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, true, null);
> {noformat}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
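The effect of passing null for the split policy can be sketched as below: with a policy in hand, a table can opt out of the store-file range check (what Phoenix local indices rely on via the skipStoreFileRangeCheck hook that HBASE-12583 introduced), while a null policy silently drops the out-of-range case. All types here are simplified stand-ins, not the real HBase classes.

```java
// Sketch: why threading the split policy through matters. SplitPolicy is a
// stand-in modeled loosely on RegionSplitPolicy#skipStoreFileRangeCheck.
public class SplitPolicySketch {

    interface SplitPolicy {
        boolean skipStoreFileRangeCheck(String familyName);
    }

    // Decide whether a reference file should be created for a store file
    // covering [firstKey, lastKey] when splitting at splitRow.
    static boolean shouldCreateReference(String splitRow, String firstKey,
            String lastKey, String family, SplitPolicy policy) {
        boolean inRange = firstKey.compareTo(splitRow) <= 0
            && splitRow.compareTo(lastKey) <= 0;
        // With a null policy (what the master-side code passes today) this
        // is always false, so out-of-range split keys never get references.
        boolean skipCheck = policy != null && policy.skipStoreFileRangeCheck(family);
        return inRange || skipCheck;
    }

    public static void main(String[] args) {
        // Split row "z" lies outside the store file's range ["a", "m"]:
        System.out.println(shouldCreateReference("z", "a", "m", "cf", null));
        System.out.println(shouldCreateReference("z", "a", "m", "cf", f -> true));
    }
}
```

With null the out-of-range split is rejected; with a policy that skips the range check (as a local-index-aware policy would), the reference file is still created.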
[jira] [Updated] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master
[ https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19703: -- Status: Patch Available (was: Open) > Functionality added as part of HBASE-12583 is not working after moving the > split code to master > --- > > Key: HBASE-19703 > URL: https://issues.apache.org/jira/browse/HBASE-19703 > Project: HBase > Issue Type: Bug >Reporter: Rajeshbabu Chintaguntla >Assignee: Rajeshbabu Chintaguntla >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19703-WIP.patch, HBASE-19703.branch-2.001.patch, > HBASE-19703_v2.patch, HBASE-19703_v3.patch, HBASE-19703_v4.patch, > HBASE-19703_v5.patch > > > As part of HBASE-12583 we are passing split policy to > HRegionFileSystem#splitStoreFile so that we can allow to create reference > files even the split key is out of HFile key range. This is needed for Local > Indexing implementation in Phoenix. But now after moving the split code to > master just passing null for split policy. > {noformat} > final String familyName = Bytes.toString(family); > final Path path_first = > regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, > false, null); > final Path path_second = > regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, > true, null); > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory
[ https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohini Palaniswamy updated HBASE-19920: --- Description: On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil which in its static block initializes DynamicClassLoader and that creates the directory ${hbase.local.dir}/jars/ and also instantiates a filesystem class to access hbase.dynamic.jars.dir. https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L109-L127 Since this is region server specific code, not expecting this to happen when one accesses hbase as a client. was: On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil which in its static block initializes DynamicClassLoader and that creates the directory ${hbase.rootdir}/lib https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L115-L127 Since this is region server specific code, not expecting this to happen when one accesses hbase as a client. > TokenUtil.obtainToken unnecessarily creates a local directory > - > > Key: HBASE-19920 > URL: https://issues.apache.org/jira/browse/HBASE-19920 > Project: HBase > Issue Type: Bug >Reporter: Rohini Palaniswamy >Priority: Major > > On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil > which in its static block initializes DynamicClassLoader and that creates the > directory ${hbase.local.dir}/jars/ and also instantiates a filesystem class > to access hbase.dynamic.jars.dir. > https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L109-L127 > Since this is region server specific code, not expecting this to happen when > one accesses hbase as a client. -- This message was sent by Atlassian JIRA (v7.6.3#76005)