[jira] [Commented] (HBASE-20307) LoadTestTool prints too much zookeeper logging
[ https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607917#comment-16607917 ]

Hudson commented on HBASE-20307:

Results for branch branch-2.1 [build #294 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/294/]:

(/) +1 overall

details (if available):

(/) +1 general checks -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/294//General_Nightly_Build_Report/]
(/) +1 jdk8 hadoop2 checks -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/294//JDK8_Nightly_Build_Report_(Hadoop2)/]
(/) +1 jdk8 hadoop3 checks -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/294//JDK8_Nightly_Build_Report_(Hadoop3)/]
(/) +1 source release artifact -- See build output for details.
(/) +1 client integration test

> LoadTestTool prints too much zookeeper logging
> --
>
> Key: HBASE-20307
> URL: https://issues.apache.org/jira/browse/HBASE-20307
> Project: HBase
> Issue Type: Bug
> Components: tooling
> Reporter: Mike Drob
> Assignee: Colin Garcia
> Priority: Major
> Labels: beginner
> Fix For: 3.0.0, 1.5.0, 1.3.3, 1.2.8, 2.2.0, 1.4.8, 2.1.1, 2.0.3
>
> Attachments: HBASE-20307.000.patch, HBASE-20307.001.patch
>
> When running LoadTestTool (ltt) there is a ton of ZooKeeper-related cruft that I probably don't care about. Hide it behind a -verbose flag or point people at the log4j configuration, but don't print it by default.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
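For anyone hitting this before picking up the attached patches, the log4j route mentioned in the issue description can be sketched as a couple of overrides in log4j.properties. The logger names below are assumptions (the usual ZooKeeper and HBase ZK packages), not taken from the patches themselves:

```properties
# Quiet ZooKeeper client chatter during LoadTestTool runs.
# Logger names are assumptions (standard ZK packages), not from the patch.
log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.apache.hadoop.hbase.zookeeper=WARN
```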
[jira] [Commented] (HBASE-20307) LoadTestTool prints too much zookeeper logging
[ https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607910#comment-16607910 ]

Hudson commented on HBASE-20307:

Results for branch branch-2 [build #1217 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1217/]:

(x) -1 overall

details (if available):

(/) +1 general checks -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1217//General_Nightly_Build_Report/]
(x) -1 jdk8 hadoop2 checks -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1217//JDK8_Nightly_Build_Report_(Hadoop2)/]
(/) +1 jdk8 hadoop3 checks -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1217//JDK8_Nightly_Build_Report_(Hadoop3)/]
(/) +1 source release artifact -- See build output for details.
(/) +1 client integration test

> LoadTestTool prints too much zookeeper logging
> --
>
> Key: HBASE-20307
> URL: https://issues.apache.org/jira/browse/HBASE-20307
[jira] [Commented] (HBASE-15666) shaded dependencies for hbase-testing-util
[ https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607903#comment-16607903 ]

Vrushali C commented on HBASE-15666:

Thank you!

> shaded dependencies for hbase-testing-util
> --
>
> Key: HBASE-15666
> URL: https://issues.apache.org/jira/browse/HBASE-15666
> Project: HBase
> Issue Type: New Feature
> Components: test
> Affects Versions: 1.1.0, 1.2.0
> Reporter: Sean Busbey
> Priority: Critical
>
> Folks that make use of our shaded client but then want to test things using the hbase-testing-util end up getting all of our dependencies again in the test scope.
[jira] [Commented] (HBASE-15666) shaded dependencies for hbase-testing-util
[ https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607900#comment-16607900 ]

Sean Busbey commented on HBASE-15666:

yeah that sounds reasonable.

> shaded dependencies for hbase-testing-util
> Key: HBASE-15666
> URL: https://issues.apache.org/jira/browse/HBASE-15666
[jira] [Commented] (HBASE-16458) Shorten backup / restore test execution time
[ https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607893#comment-16607893 ]

Ted Yu commented on HBASE-16458:

Vlad:
Can you take a look at 16458.v2.txt?
This is based on your patch, using a shutdown hook to tear down the mini-cluster at the end of the last test that is a subclass of TestBackupBase.

> Shorten backup / restore test execution time
> --
>
> Key: HBASE-16458
> URL: https://issues.apache.org/jira/browse/HBASE-16458
> Project: HBase
> Issue Type: Test
> Reporter: Ted Yu
> Assignee: Vladimir Rodionov
> Priority: Major
> Labels: backup
> Attachments: 16458-v1.patch, 16458.HBASE-7912.v3.txt, 16458.HBASE-7912.v4.txt, 16458.HBASE-7912.v5.txt, 16458.v1.txt, 16458.v2.txt, 16458.v2.txt, 16458.v3.txt, HBASE-16458-v1.patch, HBASE-16458-v2.patch
>
> Below was timing information for all the backup / restore tests (today's result):
> {code}
> (every run below passed: Failures: 0, Errors: 0, Skipped: 0)
> Tests  Time (sec)  Class
>    1    576.273   org.apache.hadoop.hbase.backup.TestIncrementalBackup
>    5    124.67    org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
>    2    102.34    org.apache.hadoop.hbase.backup.TestBackupStatusProgress
>    1    490.251   org.apache.hadoop.hbase.backup.TestBackupAdmin
>    5     84.323   org.apache.hadoop.hbase.backup.TestHFileArchiving
>    1     65.492   org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
>    3     93.758   org.apache.hadoop.hbase.backup.TestBackupDescribe
>    1    109.187   org.apache.hadoop.hbase.backup.TestBackupLogCleaner
>    1    330.539   org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
>    1     84.371   org.apache.hadoop.hbase.backup.TestRemoteBackup
>   15     67.893   org.apache.hadoop.hbase.backup.TestBackupSystemTable
>    2    120.779   org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
>    2    117.815   org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
>    3    136.517   org.apache.hadoop.hbase.backup.TestBackupShowHistory
>    1     91.799   org.apache.hadoop.hbase.backup.TestRemoteRestore
>   12    317.711   org.apache.hadoop.hbase.backup.TestFullRestore
>    2     87.045   org.apache.hadoop.hbase.backup.TestFullBackupSet
>    2     86.214   org.apache.hadoop.hbase.backup.TestBackupDelete
>    1     77.631   org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
> Running
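The shutdown-hook teardown described in the comment above can be sketched in plain Java. This is an illustration of the mechanism only; SharedClusterTeardown, startClusterOnce, and the printed messages are hypothetical stand-ins for the actual HBaseTestingUtility wiring in TestBackupBase:

```java
// Sketch (assumption): start a shared mini-cluster once for all test
// classes, and tear it down once via a JVM shutdown hook instead of in
// each class's @AfterClass, so the cluster survives across subclasses.
import java.util.concurrent.atomic.AtomicBoolean;

public class SharedClusterTeardown {
    private static final AtomicBoolean started = new AtomicBoolean(false);

    // Stand-in for HBaseTestingUtility#startMiniCluster (hypothetical).
    static void startClusterOnce() {
        if (started.compareAndSet(false, true)) {
            System.out.println("mini-cluster started");
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                // Runs exactly once, when the test JVM exits, i.e. after
                // the last TestBackupBase subclass has finished.
                System.out.println("mini-cluster stopped");
            }));
        }
    }

    public static void main(String[] args) {
        startClusterOnce();
        startClusterOnce(); // a second test class reuses the running cluster
    }
}
```

The AtomicBoolean guard keeps the cluster from being started (or the hook from being registered) more than once, which is the property that lets every subclass share one cluster.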
[jira] [Updated] (HBASE-16458) Shorten backup / restore test execution time
[ https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated HBASE-16458:
---
Attachment: 16458.v2.txt

> Shorten backup / restore test execution time
> Key: HBASE-16458
> URL: https://issues.apache.org/jira/browse/HBASE-16458
[jira] [Commented] (HBASE-15666) shaded dependencies for hbase-testing-util
[ https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607892#comment-16607892 ]

Vrushali C commented on HBASE-15666:

Thanks [~busbey], appreciate the response!

So, just thinking out loud here about my options. We happen to have a separate module for the tests that use the mini-cluster, called 'hadoop-yarn-server-timelineservice-hbase-tests'. This module only has test code and is not required for cluster deployment. So, perhaps in the pom for that module, I can use the non-shaded client (and its dependencies) in test scope and exclude the shaded jars inherited from other dependent modules. Do you think that might be worth a try?

> shaded dependencies for hbase-testing-util
> Key: HBASE-15666
> URL: https://issues.apache.org/jira/browse/HBASE-15666
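The pom arrangement being proposed above might look roughly like the following fragment for the tests module. The hbase-testing-util and hbase-shaded-client artifact IDs are real HBase coordinates, but the depending module and version properties are illustrative assumptions, not verified configuration:

```xml
<!-- Test-only module: depend on the unshaded testing util directly, and
     keep the shaded client off the test classpath. The org.example
     dependency and version properties are illustrative assumptions. -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-testing-util</artifactId>
  <version>${hbase.version}</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.example</groupId>
  <artifactId>module-that-uses-shaded-client</artifactId>
  <version>${project.version}</version>
  <scope>test</scope>
  <exclusions>
    <exclusion>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-shaded-client</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```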
[jira] [Commented] (HBASE-18276) Release 1.2.7
[ https://issues.apache.org/jira/browse/HBASE-18276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607891#comment-16607891 ]

Sean Busbey commented on HBASE-18276:

RC0 thread is up: https://s.apache.org/hbase-1.2.7-rc0-vote

> Release 1.2.7
> --
>
> Key: HBASE-18276
> URL: https://issues.apache.org/jira/browse/HBASE-18276
> Project: HBase
> Issue Type: Task
> Components: community
> Reporter: Sean Busbey
> Assignee: Sean Busbey
> Priority: Major
> Fix For: 1.2.7
>
> about time to get rolling on 1.2.7 for ~monthly
[jira] [Commented] (HBASE-20307) LoadTestTool prints too much zookeeper logging
[ https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607889#comment-16607889 ]

Hudson commented on HBASE-20307:

Results for branch branch-2.0 [build #783 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/783/]:

(x) -1 overall

details (if available):

(/) +1 general checks -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/783//General_Nightly_Build_Report/]
(x) -1 jdk8 hadoop2 checks -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/783//JDK8_Nightly_Build_Report_(Hadoop2)/]
(/) +1 jdk8 hadoop3 checks -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/783//JDK8_Nightly_Build_Report_(Hadoop3)/]
(/) +1 source release artifact -- See build output for details.

> LoadTestTool prints too much zookeeper logging
> --
>
> Key: HBASE-20307
> URL: https://issues.apache.org/jira/browse/HBASE-20307
[jira] [Commented] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default
[ https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607885#comment-16607885 ]

Hadoop QA commented on HBASE-21162:

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 27s | Docker mode activated. |
|| Prechecks ||
| 0 | findbugs | 0m 0s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -0 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| branch-1 Compile Tests ||
| 0 | mvndep | 6m 29s | Maven dependency ordering for branch |
| +1 | mvninstall | 3m 23s | branch-1 passed |
| +1 | compile | 0m 46s | branch-1 passed with JDK v1.8.0_181 |
| +1 | compile | 0m 51s | branch-1 passed with JDK v1.7.0_191 |
| +1 | checkstyle | 1m 33s | branch-1 passed |
| +1 | shadedjars | 2m 23s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 0m 41s | branch-1 passed with JDK v1.8.0_181 |
| +1 | javadoc | 0m 47s | branch-1 passed with JDK v1.7.0_191 |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 28s | the patch passed |
| +1 | compile | 0m 45s | the patch passed with JDK v1.8.0_181 |
| +1 | javac | 0m 45s | the patch passed |
| +1 | compile | 0m 51s | the patch passed with JDK v1.7.0_191 |
| +1 | javac | 0m 51s | the patch passed |
| +1 | checkstyle | 1m 29s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 2m 21s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 1m 29s | Patch does not cause any errors with Hadoop 2.7.4. |
| +1 | javadoc | 0m 37s | the patch passed with JDK v1.8.0_181 |
| +1 | javadoc | 0m 48s | the patch passed with JDK v1.7.0_191 |
|| Other Tests ||
| +1 | unit | 2m 23s | hbase-common in the patch passed. |
| +1 | unit | 133m 52s | hbase-server in the patch passed. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 164m 32s |
[jira] [Updated] (HBASE-9393) Region Server fails to properly close socket resulting in many CLOSE_WAIT to Data Nodes
[ https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey updated HBASE-9393:
---
Summary: Region Server fails to properly close socket resulting in many CLOSE_WAIT to Data Nodes (was: Hbase does not closing a closed socket resulting in many CLOSE_WAIT )

> Region Server fails to properly close socket resulting in many CLOSE_WAIT to Data Nodes
> --
>
> Key: HBASE-9393
> URL: https://issues.apache.org/jira/browse/HBASE-9393
> Project: HBase
> Issue Type: Bug
> Affects Versions: 0.94.2, 0.98.0, 1.0.1.1, 1.1.2
> Environment: Centos 6.4 - 7 regionservers/datanodes, 8 TB per node, 7279 regions
> Reporter: Avi Zrachya
> Assignee: Ashish Singhi
> Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.1.12, 2.0.0, 1.2.7
>
> Attachments: HBASE-9393-branch-1.patch, HBASE-9393.patch, HBASE-9393.v1.patch, HBASE-9393.v10.patch, HBASE-9393.v11.patch, HBASE-9393.v12.patch, HBASE-9393.v13.patch, HBASE-9393.v14.patch, HBASE-9393.v15.patch, HBASE-9393.v15.patch, HBASE-9393.v16.patch, HBASE-9393.v16.patch, HBASE-9393.v17.patch, HBASE-9393.v18.patch, HBASE-9393.v2.patch, HBASE-9393.v3.patch, HBASE-9393.v4.patch, HBASE-9393.v5.patch, HBASE-9393.v5.patch, HBASE-9393.v5.patch, HBASE-9393.v6.patch, HBASE-9393.v6.patch, HBASE-9393.v6.patch, HBASE-9393.v7.patch, HBASE-9393.v8.patch, HBASE-9393.v9.patch
>
> HBase does not close a dead connection with the datanode. This results in over 60K sockets in CLOSE_WAIT, and at some point HBase cannot connect to the datanode because there are too many mapped sockets from one host to another on the same port.
> The example below shows a low CLOSE_WAIT count because we had to restart HBase to solve the problem; over time it will increase to 60-100K sockets in CLOSE_WAIT.
> [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
> 13156
> [root@hd2-region3 ~]# ps -ef |grep 21592
> root 17255 17219 0 12:26 pts/0 00:00:00 grep 21592
> hbase 21592 1 17 Aug29 ? 03:29:06 /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -Dhbase.log.dir=/var/log/hbase -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (HBASE-15666) shaded dependencies for hbase-testing-util
[ https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607867#comment-16607867 ]

Sean Busbey commented on HBASE-15666:

AFAIK there's been no progress on this and no one is currently actively looking at it. I am reasonably certain you can't use the testing module and the shaded client modules at the same time as they currently are.

> shaded dependencies for hbase-testing-util
> Key: HBASE-15666
> URL: https://issues.apache.org/jira/browse/HBASE-15666
[jira] [Commented] (HBASE-21164) reportForDuty should do (exponential) backoff rather than retry every 3 seconds (default).
[ https://issues.apache.org/jira/browse/HBASE-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607865#comment-16607865 ]

Hadoop QA commented on HBASE-21164:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
|| Prechecks ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| branch-2.1 Compile Tests ||
| 0 | mvndep | 1m 29s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 53s | branch-2.1 passed |
| +1 | compile | 2m 18s | branch-2.1 passed |
| +1 | checkstyle | 1m 33s | branch-2.1 passed |
| +1 | shadedjars | 3m 25s | branch has no errors when building our shaded downstream artifacts. |
| +1 | findbugs | 2m 28s | branch-2.1 passed |
| +1 | javadoc | 0m 52s | branch-2.1 passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 56s | the patch passed |
| +1 | compile | 2m 14s | the patch passed |
| +1 | javac | 2m 14s | the patch passed |
| +1 | checkstyle | 0m 23s | hbase-common: The patch generated 0 new + 2 unchanged - 1 fixed = 2 total (was 3) |
| -1 | checkstyle | 1m 6s | hbase-server: The patch generated 1 new + 233 unchanged - 0 fixed = 234 total (was 233) |
| +1 | whitespace | 0m 1s | The patch has no whitespace issues. |
| +1 | shadedjars | 3m 23s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 8m 23s | Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. |
| +1 | findbugs | 2m 47s | the patch passed |
| +1 | javadoc | 0m 44s | the patch passed |
|| Other Tests ||
| +1 | unit | 2m 39s | hbase-common in the patch passed. |
| -1 | unit | 181m 31s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
| | | 227m 58s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.throttle.TestFlushWithThroughputController |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:42ca976 |
| JIRA Issue | HBASE-21164 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12938906/HBASE-21164.branch-2.1.003.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 16d0aac6ad8a
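The bounded exponential backoff the issue title asks for can be sketched independently of the patch. ReportForDutyBackoff, BASE_MS, CAP_MS, and backoffMillis are illustrative names and values chosen for this sketch (base = the current 3-second default, cap = one minute), not the API or constants from HBASE-21164:

```java
// Sketch (assumption): bounded exponential backoff for reportForDuty
// retries, replacing a fixed 3-second sleep between attempts.
public class ReportForDutyBackoff {
    static final long BASE_MS = 3_000;   // initial retry interval (current default)
    static final long CAP_MS = 60_000;   // never wait longer than this (assumed cap)

    // Delay before retry number `attempt`, counting from 0: doubles each
    // time, clamped at CAP_MS. The shift is bounded to avoid overflow.
    static long backoffMillis(int attempt) {
        long delay = BASE_MS << Math.min(attempt, 30);
        return Math.min(delay, CAP_MS);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 7; i++) {
            System.out.println(backoffMillis(i));
        }
    }
}
```

With these constants the retry schedule is 3s, 6s, 12s, 24s, 48s, then 60s for every later attempt, so a master that is slow to come up is no longer polled every 3 seconds forever.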
[jira] [Commented] (HBASE-21164) reportForDuty should do (exponential) backoff rather than retry every 3 seconds (default).
[ https://issues.apache.org/jira/browse/HBASE-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607863#comment-16607863 ] Hadoop QA commented on HBASE-21164:
---
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 13s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || branch-2.1 Compile Tests ||
| 0 | mvndep | 0m 27s | Maven dependency ordering for branch |
| +1 | mvninstall | 4m 10s | branch-2.1 passed |
| +1 | compile | 2m 14s | branch-2.1 passed |
| +1 | checkstyle | 1m 39s | branch-2.1 passed |
| +1 | shadedjars | 3m 43s | branch has no errors when building our shaded downstream artifacts. |
| +1 | findbugs | 2m 30s | branch-2.1 passed |
| +1 | javadoc | 0m 49s | branch-2.1 passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 2s | the patch passed |
| +1 | compile | 2m 17s | the patch passed |
| +1 | javac | 2m 17s | the patch passed |
| +1 | checkstyle | 0m 24s | hbase-common: The patch generated 0 new + 2 unchanged - 1 fixed = 2 total (was 3) |
| +1 | checkstyle | 1m 15s | The patch hbase-server passed checkstyle |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 3m 29s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 8m 34s | Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. |
| +1 | findbugs | 2m 35s | the patch passed |
| +1 | javadoc | 0m 45s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 2m 23s | hbase-common in the patch passed. |
| -1 | unit | 123m 33s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 41s | The patch does not generate ASF License warnings. |
| | | 166m 41s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.throttle.TestFlushWithThroughputController |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:42ca976 |
| JIRA Issue | HBASE-21164 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12938913/HBASE-21164.branch-2.1.004.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 56d4ca8e7859 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27
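The issue above proposes replacing reportForDuty's fixed 3-second retry with exponential backoff. A minimal sketch of such a retry schedule — the class name, method, and the 5-minute cap are illustrative assumptions for this example, not the actual HBase implementation:

```java
// Illustrative only: names and the cap are assumptions, not HBase's real code.
public class ReportForDutyBackoff {
    static final long INITIAL_MS = 3_000L;   // the old fixed retry interval
    static final long MAX_MS = 5 * 60_000L;  // cap so the regionserver keeps retrying

    // Delay before retry attempt n (0-based): 3s, 6s, 12s, ... capped at 5 min.
    static long delayMs(int attempt) {
        long delay = INITIAL_MS << Math.min(attempt, 20); // cap the shift; fits in a long
        return Math.min(delay, MAX_MS);
    }
}
```

Doubling with a cap keeps early retries fast while preventing a master that is slow to start from being hammered every 3 seconds by every regionserver.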
[jira] [Commented] (HBASE-16458) Shorten backup / restore test execution time
[ https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607860#comment-16607860 ] Hadoop QA commented on HBASE-16458:
---
(/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 24s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || master Compile Tests ||
| +1 | mvninstall | 4m 46s | master passed |
| +1 | compile | 0m 27s | master passed |
| +1 | checkstyle | 0m 15s | master passed |
| +1 | shadedjars | 3m 59s | branch has no errors when building our shaded downstream artifacts. |
| +1 | findbugs | 0m 36s | master passed |
| +1 | javadoc | 0m 11s | master passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 4m 30s | the patch passed |
| +1 | compile | 0m 24s | the patch passed |
| +1 | javac | 0m 24s | the patch passed |
| +1 | checkstyle | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedjars | 3m 51s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 9m 42s | Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. |
| +1 | findbugs | 0m 40s | the patch passed |
| +1 | javadoc | 0m 11s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 18m 48s | hbase-backup in the patch passed. |
| +1 | asflicense | 0m 17s | The patch does not generate ASF License warnings. |
| | | 49m 41s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-16458 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12938927/HBASE-16458-v2.patch |
| Optional Tests | asflicense javac javadoc unit shadedjars hadoopcheck xml compile findbugs hbaseanti checkstyle |
| uname | Linux 481fb4c9fb9e 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 9af7bc6204 |
| maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/14359/testReport/ |
| Max. process+thread count | 3693 (vs. ulimit of 1) |
| modules | C: hbase-backup U: hbase-backup |
| Console output |
[jira] [Commented] (HBASE-21144) AssignmentManager.waitForAssignment is not stable
[ https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607838#comment-16607838 ] Duo Zhang commented on HBASE-21144:
---
Tried locally, TestDisableTableProcedure passed. I think there are other problems which cause the test to hang. Let me commit the addendum to master and see how it works.

> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
> Issue Type: Bug
> Components: amv2, test
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21144-addendum.patch, HBASE-21144-v1.patch, HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for meta table are on the same machine
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO [Time-limited test] client.TestMetaWithReplicas(113): HBASE:META DEPLOY: hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO [Time-limited test] client.TestMetaWithReplicas(113): HBASE:META DEPLOY: hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas and cause TestMetaWithReplicas to hang forever...

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
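The instability above is the classic failure mode of a bounded wait: the poll loop can give up (or a lookup can race the region open) and hand back a null location. A minimal sketch of that polling pattern — the class and method names here are illustrative, not the actual AssignmentManager API:

```java
import java.util.function.Supplier;

// Illustrative polling loop; not the real AssignmentManager.waitForAssignment.
public class AssignmentWait {
    // Poll until the supplier yields a non-null location or the timeout elapses;
    // returns null on timeout, which is exactly what the test observed.
    static String waitForLocation(Supplier<String> location,
                                  long timeoutMs, long intervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        String loc = location.get();
        while (loc == null && System.currentTimeMillis() < deadline) {
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            loc = location.get();
        }
        return loc;
    }
}
```

A caller that treats the return value as always non-null will then act on a missing location, which is how the replica balancing step got skipped.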
[jira] [Commented] (HBASE-21171) [amv2] Tool to parse a directory of MasterProcWALs standalone
[ https://issues.apache.org/jira/browse/HBASE-21171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607827#comment-16607827 ] Hadoop QA commented on HBASE-21171:
---
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -0 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || branch-2.1 Compile Tests ||
| +1 | mvninstall | 5m 9s | branch-2.1 passed |
| +1 | compile | 0m 28s | branch-2.1 passed |
| +1 | checkstyle | 0m 17s | branch-2.1 passed |
| +1 | shadedjars | 4m 2s | branch has no errors when building our shaded downstream artifacts. |
| +1 | findbugs | 0m 30s | branch-2.1 passed |
| +1 | javadoc | 0m 15s | branch-2.1 passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 4m 33s | the patch passed |
| +1 | compile | 0m 25s | the patch passed |
| +1 | javac | 0m 25s | the patch passed |
| -1 | checkstyle | 0m 16s | hbase-procedure: The patch generated 5 new + 12 unchanged - 0 fixed = 17 total (was 12) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 6s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 10m 5s | Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. |
| -1 | findbugs | 0m 40s | hbase-procedure generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) |
| +1 | javadoc | 0m 15s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 3m 15s | hbase-procedure in the patch passed. |
| +1 | asflicense | 0m 11s | The patch does not generate ASF License warnings. |
| | | 35m 13s | |

|| Reason || Tests ||
| FindBugs | module:hbase-procedure |
| | Unread public/protected field:At WALProcedureStore.java:[line 1331] |
| | Unread public/protected field:At WALProcedureStore.java:[line 1330] |
| | Unread public/protected field:At WALProcedureStore.java:[line 1332] |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:42ca976 |
| JIRA Issue | HBASE-21171 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12938925/HBASE-21171.branch-2.1.001.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 47ddb8a492fb 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh |
| git revision | branch-2.1 / f85fba4a54 |
[jira] [Commented] (HBASE-20874) Sending compaction descriptions from all regionservers to master.
[ https://issues.apache.org/jira/browse/HBASE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607826#comment-16607826 ] Sakthi commented on HBASE-20874:
---
[~stack], could you please review the patch?

> Sending compaction descriptions from all regionservers to master.
> -
>
> Key: HBASE-20874
> URL: https://issues.apache.org/jira/browse/HBASE-20874
> Project: HBase
> Issue Type: Sub-task
> Reporter: Mohit Goel
> Assignee: Mohit Goel
> Priority: Minor
> Attachments: HBASE-20874.master.004.patch, HBASE-20874.master.005.patch, HBASE-20874.master.006.patch, HBASE-20874.master.007.patch, HBASE-20874.master.008.patch, hbase-20874.master.009.patch, hbase-20874.master.010.patch
>
>
> Need to send the compaction descriptions from region servers to the master, so that the master knows the entire compaction state of the cluster. The implementation of client-side APIs such as getCompactionState also needs to change, so that they consult the master for the result instead of sending individual requests to the regionservers.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20874) Sending compaction descriptions from all regionservers to master.
[ https://issues.apache.org/jira/browse/HBASE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607820#comment-16607820 ] Hadoop QA commented on HBASE-20874:
---
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 14s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || || || master Compile Tests ||
| 0 | mvndep | 0m 13s | Maven dependency ordering for branch |
| +1 | mvninstall | 4m 59s | master passed |
| +1 | compile | 3m 36s | master passed |
| +1 | checkstyle | 2m 15s | master passed |
| +1 | shadedjars | 4m 20s | branch has no errors when building our shaded downstream artifacts. |
| +1 | findbugs | 5m 22s | master passed |
| +1 | javadoc | 1m 15s | master passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 5m 14s | the patch passed |
| +1 | compile | 3m 50s | the patch passed |
| +1 | cc | 3m 50s | the patch passed |
| +1 | javac | 3m 50s | the patch passed |
| +1 | checkstyle | 2m 16s | the patch passed |
| +1 | rubocop | 0m 13s | There were no new rubocop issues. |
| -0 | ruby-lint | 0m 5s | The patch generated 13 new + 749 unchanged - 0 fixed = 762 total (was 749) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 18s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 10m 48s | Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. |
| +1 | hbaseprotoc | 1m 52s | the patch passed |
| +1 | findbugs | 5m 50s | the patch passed |
| +1 | javadoc | 1m 13s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 0m 34s | hbase-protocol-shaded in the patch passed. |
| +1 | unit | 3m 13s | hbase-client in the patch passed. |
| -1 | unit | 118m 18s | hbase-server in the patch failed. |
| +1 | unit | 7m 57s | hbase-shell in the patch passed. |
| +1 | asflicense | 1m 17s | The patch does not generate ASF License warnings. |
[jira] [Commented] (HBASE-16458) Shorten backup / restore test execution time
[ https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607807#comment-16607807 ] Vladimir Rodionov commented on HBASE-16458:
---
Patch v2 addresses checkstyle warnings.

> Shorten backup / restore test execution time
>
>
> Key: HBASE-16458
> URL: https://issues.apache.org/jira/browse/HBASE-16458
> Project: HBase
> Issue Type: Test
> Reporter: Ted Yu
> Assignee: Vladimir Rodionov
> Priority: Major
> Labels: backup
> Attachments: 16458-v1.patch, 16458.HBASE-7912.v3.txt, 16458.HBASE-7912.v4.txt, 16458.HBASE-7912.v5.txt, 16458.v1.txt, 16458.v2.txt, 16458.v3.txt, HBASE-16458-v1.patch, HBASE-16458-v2.patch
>
>
> Below was timing information for all the backup / restore tests (today's result):
> {code}
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 576.273 sec - in org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Running org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.67 sec - in org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.34 sec - in org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Running org.apache.hadoop.hbase.backup.TestBackupAdmin
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 490.251 sec - in org.apache.hadoop.hbase.backup.TestBackupAdmin
> Running org.apache.hadoop.hbase.backup.TestHFileArchiving
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.323 sec - in org.apache.hadoop.hbase.backup.TestHFileArchiving
> Running org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.492 sec - in org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Running org.apache.hadoop.hbase.backup.TestBackupDescribe
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.758 sec - in org.apache.hadoop.hbase.backup.TestBackupDescribe
> Running org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.187 sec - in org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 330.539 sec - in org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Running org.apache.hadoop.hbase.backup.TestRemoteBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.371 sec - in org.apache.hadoop.hbase.backup.TestRemoteBackup
> Running org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.893 sec - in org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Running org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.779 sec - in org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.815 sec - in org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Running org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.517 sec - in org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Running org.apache.hadoop.hbase.backup.TestRemoteRestore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.799 sec - in org.apache.hadoop.hbase.backup.TestRemoteRestore
> Running org.apache.hadoop.hbase.backup.TestFullRestore
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 317.711 sec - in org.apache.hadoop.hbase.backup.TestFullRestore
> Running org.apache.hadoop.hbase.backup.TestFullBackupSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.045 sec - in org.apache.hadoop.hbase.backup.TestFullBackupSet
> Running org.apache.hadoop.hbase.backup.TestBackupDelete
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.214 sec - in org.apache.hadoop.hbase.backup.TestBackupDelete
> Running org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.631 sec - in org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 190.358 sec - in
[jira] [Updated] (HBASE-16458) Shorten backup / restore test execution time
[ https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-16458:
--
Attachment: HBASE-16458-v2.patch

> Shorten backup / restore test execution time
>
>
> Key: HBASE-16458
> URL: https://issues.apache.org/jira/browse/HBASE-16458
> Project: HBase
> Issue Type: Test
> Reporter: Ted Yu
> Assignee: Vladimir Rodionov
> Priority: Major
> Labels: backup
> Attachments: 16458-v1.patch, 16458.HBASE-7912.v3.txt, 16458.HBASE-7912.v4.txt, 16458.HBASE-7912.v5.txt, 16458.v1.txt, 16458.v2.txt, 16458.v3.txt, HBASE-16458-v1.patch, HBASE-16458-v2.patch
>
>
> Below was timing information for all the backup / restore tests (today's result):
> {code}
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 576.273 sec - in org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Running org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.67 sec - in org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.34 sec - in org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Running org.apache.hadoop.hbase.backup.TestBackupAdmin
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 490.251 sec - in org.apache.hadoop.hbase.backup.TestBackupAdmin
> Running org.apache.hadoop.hbase.backup.TestHFileArchiving
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.323 sec - in org.apache.hadoop.hbase.backup.TestHFileArchiving
> Running org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.492 sec - in org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Running org.apache.hadoop.hbase.backup.TestBackupDescribe
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.758 sec - in org.apache.hadoop.hbase.backup.TestBackupDescribe
> Running org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.187 sec - in org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 330.539 sec - in org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Running org.apache.hadoop.hbase.backup.TestRemoteBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.371 sec - in org.apache.hadoop.hbase.backup.TestRemoteBackup
> Running org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.893 sec - in org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Running org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.779 sec - in org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.815 sec - in org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Running org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.517 sec - in org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Running org.apache.hadoop.hbase.backup.TestRemoteRestore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.799 sec - in org.apache.hadoop.hbase.backup.TestRemoteRestore
> Running org.apache.hadoop.hbase.backup.TestFullRestore
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 317.711 sec - in org.apache.hadoop.hbase.backup.TestFullRestore
> Running org.apache.hadoop.hbase.backup.TestFullBackupSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.045 sec - in org.apache.hadoop.hbase.backup.TestFullBackupSet
> Running org.apache.hadoop.hbase.backup.TestBackupDelete
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.214 sec - in org.apache.hadoop.hbase.backup.TestBackupDelete
> Running org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.631 sec - in org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 190.358 sec - in org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable
> Running
[jira] [Commented] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default
[ https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607805#comment-16607805 ] Andrew Purtell commented on HBASE-21162: Latest patch fixes checkstyle nit > Revert suspicious change to BoundedByteBufferPool and disable use of direct > buffers for IPC reservoir by default > > > Key: HBASE-21162 > URL: https://issues.apache.org/jira/browse/HBASE-21162 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.7 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 1.5.0, 1.4.8 > > Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, > HBASE-21162-branch-1.patch > > > We had a production incident where we traced the issue to a direct buffer > leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false > and after that no native memory leak could be observed in any regionserver > process under the triggering load. > On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to > BoundedByteBufferPool that is suspicious given this finding. It was committed > to branch-1.4 and branch-1. I'm going to revert this change. > In addition the allocation of direct memory for the server RPC reservoir is a > bit problematic in that tracing native memory or direct buffer leaks to a > particular class or compilation unit is difficult, so I also propose > allocating the reservoir on the heap by default instead. Should there be a > leak it is much easier to do an analysis of a heap dump with familiar tools > to find it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default
[ https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-21162: --- Attachment: HBASE-21162-branch-1.patch > Revert suspicious change to BoundedByteBufferPool and disable use of direct > buffers for IPC reservoir by default > > > Key: HBASE-21162 > URL: https://issues.apache.org/jira/browse/HBASE-21162 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.7 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 1.5.0, 1.4.8 > > Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, > HBASE-21162-branch-1.patch > > > We had a production incident where we traced the issue to a direct buffer > leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false > and after that no native memory leak could be observed in any regionserver > process under the triggering load. > On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to > BoundedByteBufferPool that is suspicious given this finding. It was committed > to branch-1.4 and branch-1. I'm going to revert this change. > In addition the allocation of direct memory for the server RPC reservoir is a > bit problematic in that tracing native memory or direct buffer leaks to a > particular class or compilation unit is difficult, so I also propose > allocating the reservoir on the heap by default instead. Should there be a > leak it is much easier to do an analysis of a heap dump with familiar tools > to find it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default
[ https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-21162: --- Attachment: (was: HBASE-21162-branch-1.patch) > Revert suspicious change to BoundedByteBufferPool and disable use of direct > buffers for IPC reservoir by default > > > Key: HBASE-21162 > URL: https://issues.apache.org/jira/browse/HBASE-21162 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.7 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 1.5.0, 1.4.8 > > Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, > HBASE-21162-branch-1.patch > > > We had a production incident where we traced the issue to a direct buffer > leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false > and after that no native memory leak could be observed in any regionserver > process under the triggering load. > On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to > BoundedByteBufferPool that is suspicious given this finding. It was committed > to branch-1.4 and branch-1. I'm going to revert this change. > In addition the allocation of direct memory for the server RPC reservoir is a > bit problematic in that tracing native memory or direct buffer leaks to a > particular class or compilation unit is difficult, so I also propose > allocating the reservoir on the heap by default instead. Should there be a > leak it is much easier to do an analysis of a heap dump with familiar tools > to find it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default
[ https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607803#comment-16607803 ] Andrew Purtell commented on HBASE-21162: TestLoadIncrementalHFilesUseSecurityEndPoint failure is unrelated. Checked local results just in case > Revert suspicious change to BoundedByteBufferPool and disable use of direct > buffers for IPC reservoir by default > > > Key: HBASE-21162 > URL: https://issues.apache.org/jira/browse/HBASE-21162 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.7 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 1.5.0, 1.4.8 > > Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, > HBASE-21162-branch-1.patch > > > We had a production incident where we traced the issue to a direct buffer > leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false > and after that no native memory leak could be observed in any regionserver > process under the triggering load. > On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to > BoundedByteBufferPool that is suspicious given this finding. It was committed > to branch-1.4 and branch-1. I'm going to revert this change. > In addition the allocation of direct memory for the server RPC reservoir is a > bit problematic in that tracing native memory or direct buffer leaks to a > particular class or compilation unit is difficult, so I also propose > allocating the reservoir on the heap by default instead. Should there be a > leak it is much easier to do an analysis of a heap dump with familiar tools > to find it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
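[Editor's note] The proposal above hinges on the difference between direct and heap ByteBuffers. The following is a minimal, self-contained Java illustration (not HBase code) of why a heap-backed reservoir is easier to audit with ordinary heap-dump tooling:

```java
import java.nio.ByteBuffer;

// A heap buffer is backed by a byte[] that shows up in a heap dump and is
// reclaimed by the GC; a direct buffer's storage lives in native memory,
// which is exactly why leaks there are hard to attribute to a class.
public class BufferKinds {
    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(1024);         // byte[]-backed
        ByteBuffer direct = ByteBuffer.allocateDirect(1024); // native memory

        System.out.println(heap.isDirect());    // false
        System.out.println(heap.hasArray());    // true
        System.out.println(direct.isDirect());  // true
        System.out.println(direct.hasArray());  // false
    }
}
```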
[jira] [Commented] (HBASE-20307) LoadTestTool prints too much zookeeper logging
[ https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607800#comment-16607800 ] Hudson commented on HBASE-20307: SUCCESS: Integrated in Jenkins build HBase-1.3-IT #472 (See [https://builds.apache.org/job/HBase-1.3-IT/472/]) HBASE-20307 LoadTestTool prints too much zookeeper logging (Colin (apurtell: rev d4440e6bce37401e16ecbdde95cc214ad97f5548) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java > LoadTestTool prints too much zookeeper logging > -- > > Key: HBASE-20307 > URL: https://issues.apache.org/jira/browse/HBASE-20307 > Project: HBase > Issue Type: Bug > Components: tooling >Reporter: Mike Drob >Assignee: Colin Garcia >Priority: Major > Labels: beginner > Fix For: 3.0.0, 1.5.0, 1.3.3, 1.2.8, 2.2.0, 1.4.8, 2.1.1, 2.0.3 > > Attachments: HBASE-20307.000.patch, HBASE-20307.001.patch > > > When running ltt there is a ton of ZK related cruft that I probably don't > care about. Hide it behind -verbose flag or point people at log4j > configuration but don't print it by default. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20307) LoadTestTool prints too much zookeeper logging
[ https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607801#comment-16607801 ] Hudson commented on HBASE-20307: SUCCESS: Integrated in Jenkins build HBase-1.2-IT #1159 (See [https://builds.apache.org/job/HBase-1.2-IT/1159/]) HBASE-20307 LoadTestTool prints too much zookeeper logging (Colin (apurtell: rev 30fe214aa1775d7dc7635634ede776fc0f39b824) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java > LoadTestTool prints too much zookeeper logging > -- > > Key: HBASE-20307 > URL: https://issues.apache.org/jira/browse/HBASE-20307 > Project: HBase > Issue Type: Bug > Components: tooling >Reporter: Mike Drob >Assignee: Colin Garcia >Priority: Major > Labels: beginner > Fix For: 3.0.0, 1.5.0, 1.3.3, 1.2.8, 2.2.0, 1.4.8, 2.1.1, 2.0.3 > > Attachments: HBASE-20307.000.patch, HBASE-20307.001.patch > > > When running ltt there is a ton of ZK related cruft that I probably don't > care about. Hide it behind -verbose flag or point people at log4j > configuration but don't print it by default. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21171) [amv2] Tool to parse a directory of MasterProcWALs standalone
[ https://issues.apache.org/jira/browse/HBASE-21171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21171: -- Release Note: Make it so one can run the WAL parse and load system in isolation. Here is an example: {code}$ HBASE_OPTS=" -XX:+UnlockDiagnosticVMOptions -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:+DebugNonSafepoints" ./bin/hbase org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore ~/big_set_of_masterprocwals/ {code} Status: Patch Available (was: Open) > [amv2] Tool to parse a directory of MasterProcWALs standalone > - > > Key: HBASE-21171 > URL: https://issues.apache.org/jira/browse/HBASE-21171 > Project: HBase > Issue Type: Bug > Components: amv2, test >Reporter: stack >Assignee: stack >Priority: Major > Attachments: HBASE-21171.branch-2.1.001.patch > > > I want to be able to test parsing and be able to profile a standalone parse > and WALProcedureStore load of procedures. Adding a simple main on > WALProcedureStore seems to be enough. I tested parsing it a dir of hundreds > of WALs to see what is going on when we try to load. Good for figuring how to > log, where the memory is going, etc., in this subsystem. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21171) [amv2] Tool to parse a directory of MasterProcWALs standalone
[ https://issues.apache.org/jira/browse/HBASE-21171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21171: -- Attachment: HBASE-21171.branch-2.1.001.patch > [amv2] Tool to parse a directory of MasterProcWALs standalone > - > > Key: HBASE-21171 > URL: https://issues.apache.org/jira/browse/HBASE-21171 > Project: HBase > Issue Type: Bug > Components: amv2, test >Reporter: stack >Assignee: stack >Priority: Major > Attachments: HBASE-21171.branch-2.1.001.patch > > > I want to be able to test parsing and be able to profile a standalone parse > and WALProcedureStore load of procedures. Adding a simple main on > WALProcedureStore seems to be enough. I tested parsing it a dir of hundreds > of WALs to see what is going on when we try to load. Good for figuring how to > log, where the memory is going, etc., in this subsystem. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21171) [amv2] Tool to parse a directory of MasterProcWALs standalone
stack created HBASE-21171: - Summary: [amv2] Tool to parse a directory of MasterProcWALs standalone Key: HBASE-21171 URL: https://issues.apache.org/jira/browse/HBASE-21171 Project: HBase Issue Type: Bug Components: amv2, test Reporter: stack Assignee: stack I want to be able to test parsing and be able to profile a standalone parse and WALProcedureStore load of procedures. Adding a simple main on WALProcedureStore seems to be enough. I tested parsing it a dir of hundreds of WALs to see what is going on when we try to load. Good for figuring how to log, where the memory is going, etc., in this subsystem. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21168) BloomFilterUtil uses hardcoded randomness
[ https://issues.apache.org/jira/browse/HBASE-21168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607794#comment-16607794 ] Hadoop QA commented on HBASE-21168: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 2s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 4s{color} | {color:green} hbase-server: The patch generated 0 new + 31 unchanged - 1 fixed = 31 total (was 32) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 2s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 10m 3s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}213m 57s{color} | {color:green} hbase-server in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}253m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21168 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12938887/HBASE-21168.master.002.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux b66ffebaae55 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / c3419be003 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC3 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/14351/testReport/ | | Max. process+thread count | 5000 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/14351/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This
[jira] [Created] (HBASE-21170) Backport HBASE-20642 to branch-1: Clients should re-use the same nonce across DDL operations
Mingliang Liu created HBASE-21170: - Summary: Backport HBASE-20642 to branch-1: Clients should re-use the same nonce across DDL operations Key: HBASE-21170 URL: https://issues.apache.org/jira/browse/HBASE-21170 Project: HBase Issue Type: Bug Components: rpc Reporter: Mingliang Liu Assignee: Ankit Singhal Per discussion in [HBASE-20642], its nonce changes are applicable because the handling of retry with nonce is also not correct in {{branch-1}}. The patch does not apply cleanly to {{branch-1}} and we need some coding and review work to land it on {{branch-1}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
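[Editor's note] For illustration, the retry-with-nonce idea behind this backport can be sketched as below. This is a hypothetical standalone sketch, not the HBase client or its nonce-manager implementation: the client picks one nonce per logical DDL operation and reuses the same nonce on every retry, so the server can recognize and suppress duplicate submissions.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ThreadLocalRandom;

// Sketch: the server remembers nonces it has executed; a retry that reuses
// the original nonce is detected as a duplicate instead of re-running the DDL.
public class NonceRetrySketch {
    // Stand-in for a server-side nonce manager.
    static final Set<Long> executed = new HashSet<>();

    // Returns true only the first time a given nonce is seen.
    static boolean execute(long nonce) {
        return executed.add(nonce);
    }

    public static void main(String[] args) {
        long nonce = ThreadLocalRandom.current().nextLong(); // chosen ONCE per operation
        boolean firstAttempt = execute(nonce); // runs the operation
        boolean retry = execute(nonce);        // same nonce: duplicate, not re-run
        System.out.println(firstAttempt + " " + retry); // true false
    }
}
```

Generating a fresh nonce on each retry (the bug being fixed) would defeat this check, since every attempt would look like a new operation.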
[jira] [Resolved] (HBASE-20307) LoadTestTool prints too much zookeeper logging
[ https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-20307. Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.0.3 2.1.1 1.4.8 2.2.0 1.2.8 1.3.3 1.5.0 3.0.0 > LoadTestTool prints too much zookeeper logging > -- > > Key: HBASE-20307 > URL: https://issues.apache.org/jira/browse/HBASE-20307 > Project: HBase > Issue Type: Bug > Components: tooling >Reporter: Mike Drob >Assignee: Colin Garcia >Priority: Major > Labels: beginner > Fix For: 3.0.0, 1.5.0, 1.3.3, 1.2.8, 2.2.0, 1.4.8, 2.1.1, 2.0.3 > > Attachments: HBASE-20307.000.patch, HBASE-20307.001.patch > > > When running ltt there is a ton of ZK related cruft that I probably don't > care about. Hide it behind -verbose flag or point people at log4j > configuration but don't print it by default. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
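[Editor's note] The log4j route mentioned in the description would look roughly like the fragment below. The logger names are assumptions for illustration and may not match what the committed patch actually does:

```properties
# Hypothetical log4j.properties fragment: quiet ZooKeeper client chatter
# during LoadTestTool runs unless verbose output is wanted.
log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.apache.hadoop.hbase.zookeeper=WARN
```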
[jira] [Commented] (HBASE-16458) Shorten backup / restore test execution time
[ https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607779#comment-16607779 ] Hadoop QA commented on HBASE-16458: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 28s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s{color} | {color:red} hbase-backup: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 52s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 12m 23s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 36s{color} | {color:green} hbase-backup in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 54m 55s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-16458 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12938910/16458-v1.patch | | Optional Tests | asflicense javac javadoc unit shadedjars hadoopcheck xml compile findbugs hbaseanti checkstyle | | uname | Linux b9e3bc4c36bc 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 17:03:53 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / c3419be003 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC3 | | checkstyle | https://builds.apache.org/job/PreCommit-HBASE-Build/14356/artifact/patchprocess/diff-checkstyle-hbase-backup.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/14356/testReport/ | |
[jira] [Commented] (HBASE-21138) Close HRegion instance at the end of every test in TestHRegion
[ https://issues.apache.org/jira/browse/HBASE-21138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607774#comment-16607774 ] Hudson commented on HBASE-21138: Results for branch branch-1.3 [build #457 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/457/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/457//General_Nightly_Build_Report/] (/) {color:green}+1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/457//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/457//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Close HRegion instance at the end of every test in TestHRegion > -- > > Key: HBASE-21138 > URL: https://issues.apache.org/jira/browse/HBASE-21138 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Mingliang Liu >Priority: Major > Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 1.4.8 > > Attachments: HBASE-21138.000.patch, HBASE-21138.001.patch, > HBASE-21138.002.patch, HBASE-21138.003.patch, HBASE-21138.004.patch, > HBASE-21138.branch-1.004.patch, HBASE-21138.branch-1.004.patch, > HBASE-21138.branch-2.004.patch > > > TestHRegion has over 100 tests. > The following is from one subtest: > {code} > public void testCompactionAffectedByScanners() throws Exception { > byte[] family = Bytes.toBytes("family"); > this.region = initHRegion(tableName, method, CONF, family); > {code} > this.region is not closed at the end of the subtest. > testToShowNPEOnRegionScannerReseek is another example. 
> Every subtest should use the following construct toward the end: > {code} > } finally { > HBaseTestingUtility.closeRegionAndWAL(this.region); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
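[Editor's note] The try/finally construct quoted above can be shown as a runnable sketch. The Region class below is a stand-in; the real tests would close via HBaseTestingUtility.closeRegionAndWAL(this.region) as the issue says:

```java
// Sketch of the cleanup discipline the issue asks every TestHRegion subtest
// to follow: the finally block guarantees the region is closed even when the
// test body throws, so no HRegion instance leaks between subtests.
public class CloseRegionPattern {
    static class Region {               // stand-in for an HRegion
        boolean closed;
        void close() { closed = true; }
    }

    public static void main(String[] args) {
        Region region = new Region();
        try {
            // ... test body; an exception here would skip trailing cleanup code ...
        } finally {
            region.close();             // runs whether or not the body threw
        }
        System.out.println(region.closed); // true
    }
}
```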
[jira] [Commented] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607762#comment-16607762 ] Hudson commented on HBASE-21166: SUCCESS: Integrated in Jenkins build HBase-1.2-IT #1158 (See [https://builds.apache.org/job/HBase-1.2-IT/1158/]) HBASE-21166 Creating a CoprocessorHConnection re-retrieves the cluster (larsh: rev ca858499309df7c12145095eabfbb0f6d076c942) * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/client/CoprocessorHConnection.java > Creating a CoprocessorHConnection re-retrieves the cluster id from ZK > - > > Key: HBASE-21166 > URL: https://issues.apache.org/jira/browse/HBASE-21166 > Project: HBase > Issue Type: Bug >Affects Versions: 1.5.0 >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Major > Fix For: 1.5.0, 1.3.3, 1.4.8, 1.2.7 > > Attachments: HBASE-21166.branch-1.001.patch > > > CoprocessorHConnections are created for example during a call of > CoprocessorHost$Environment.getTable(...). The region server already knows the > cluster id, yet we're resolving it over and over again. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21164) reportForDuty should do (exponential) backoff rather than retry every 3 seconds (default).
[ https://issues.apache.org/jira/browse/HBASE-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607757#comment-16607757 ] Mingliang Liu commented on HBASE-21164: --- Patch v4 to fix checkstyle warning. Failing tests are not related and can pass locally. > reportForDuty should do (expotential) backoff rather than retry every 3 > seconds (default). > -- > > Key: HBASE-21164 > URL: https://issues.apache.org/jira/browse/HBASE-21164 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: stack >Assignee: Mingliang Liu >Priority: Minor > Attachments: HBASE-21164.branch-2.1.001.patch, > HBASE-21164.branch-2.1.002.patch, HBASE-21164.branch-2.1.003.patch, > HBASE-21164.branch-2.1.004.patch > > > RegionServers do reportForDuty on startup to tell Master they are available. > If Master is initializing, and especially on a big cluster when it can take a > while particularly if something is amiss, the log every three seconds is > annoying and doesn't do anything of use. Do backoff if fails up to a > reasonable maximum period. Here is example: > {code} > 2018-09-06 14:01:39,312 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: reportForDuty to > master=vc0207.halxg.cloudera.com,22001,1536266763109 with port=22001, > startcode=1536266763109 > 2018-09-06 14:01:39,312 WARN > org.apache.hadoop.hbase.regionserver.HRegionServer: reportForDuty failed; > sleeping and then retrying. > > {code} > For example, I am looking at a large cluster now that had a backlog of > procedure WALs. It is taking a couple of hours recreating the procedure-state > because there are millions of procedures outstanding. Meantime, the Master > log is just full of the above message -- every three seconds... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21164) reportForDuty should do (exponential) backoff rather than retry every 3 seconds (default).
[ https://issues.apache.org/jira/browse/HBASE-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HBASE-21164: -- Attachment: HBASE-21164.branch-2.1.004.patch > reportForDuty should do (expotential) backoff rather than retry every 3 > seconds (default). > -- > > Key: HBASE-21164 > URL: https://issues.apache.org/jira/browse/HBASE-21164 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: stack >Assignee: Mingliang Liu >Priority: Minor > Attachments: HBASE-21164.branch-2.1.001.patch, > HBASE-21164.branch-2.1.002.patch, HBASE-21164.branch-2.1.003.patch, > HBASE-21164.branch-2.1.004.patch > > > RegionServers do reportForDuty on startup to tell Master they are available. > If Master is initializing, and especially on a big cluster when it can take a > while particularly if something is amiss, the log every three seconds is > annoying and doesn't do anything of use. Do backoff if fails up to a > reasonable maximum period. Here is example: > {code} > 2018-09-06 14:01:39,312 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: reportForDuty to > master=vc0207.halxg.cloudera.com,22001,1536266763109 with port=22001, > startcode=1536266763109 > 2018-09-06 14:01:39,312 WARN > org.apache.hadoop.hbase.regionserver.HRegionServer: reportForDuty failed; > sleeping and then retrying. > > {code} > For example, I am looking at a large cluster now that had a backlog of > procedure WALs. It is taking a couple of hours recreating the procedure-state > because there are millions of procedures outstanding. Meantime, the Master > log is just full of the above message -- every three seconds... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
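[Editor's note] A capped exponential backoff of the kind this issue asks for can be sketched as follows. The 3-second base matches the default retry interval mentioned in the report; the 60-second cap is illustrative only, not a value taken from the patch:

```java
// Sketch of capped exponential backoff for reportForDuty-style retries:
// double the delay on each failed attempt up to a maximum, instead of
// logging a retry every fixed 3 seconds.
public class BackoffSketch {
    static long backoffMillis(int attempt, long baseMillis, long capMillis) {
        // Clamp the shift so the multiplier cannot overflow a long.
        long delay = baseMillis * (1L << Math.min(attempt, 30));
        return Math.min(delay, capMillis);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 8; attempt++) {
            System.out.println(backoffMillis(attempt, 3000L, 60000L));
        }
        // prints: 3000, 6000, 12000, 24000, 48000, then 60000 from there on
    }
}
```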
[jira] [Commented] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607754#comment-16607754 ] Hudson commented on HBASE-21166: SUCCESS: Integrated in Jenkins build HBase-1.3-IT #471 (See [https://builds.apache.org/job/HBase-1.3-IT/471/]) HBASE-21166 Creating a CoprocessorHConnection re-retrieves the cluster (larsh: rev 386fdb2c3b8712e7f1df179a2deb939bd93cbc5e) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/client/CoprocessorHConnection.java * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java > Creating a CoprocessorHConnection re-retrieves the cluster id from ZK > - > > Key: HBASE-21166 > URL: https://issues.apache.org/jira/browse/HBASE-21166 > Project: HBase > Issue Type: Bug >Affects Versions: 1.5.0 >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Major > Fix For: 1.5.0, 1.3.3, 1.4.8, 1.2.7 > > Attachments: HBASE-21166.branch-1.001.patch > > > CoprocessorHConnections are created, for example, during a call of > CoprocessorHost$Environment.getTable(...). The region server already knows the > cluster id, yet we're resolving it over and over again. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
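The fix described in HBASE-21166 amounts to reusing a value the region server already has instead of re-resolving it from ZooKeeper on every connection. A minimal sketch of that idea, assuming a hypothetical `ClusterIdCache` wrapper (the actual patch instead threads the known cluster id through `CoprocessorHConnection`; the `Supplier` below stands in for the real ZK lookup):

```java
import java.util.function.Supplier;

// Illustrative memoization of a cluster id so the (expensive) lookup runs
// once. Not the HBASE-21166 patch; names here are hypothetical.
public class ClusterIdCache {
    private final Supplier<String> lookup;
    private volatile String cached;

    public ClusterIdCache(Supplier<String> lookup) {
        this.lookup = lookup;
    }

    public String get() {
        String id = cached;
        if (id == null) {
            synchronized (this) {
                if (cached == null) {
                    cached = lookup.get(); // single remote resolution
                }
                id = cached;
            }
        }
        return id;
    }
}
```

Repeated `get()` calls return the cached id without touching the underlying lookup again, which is precisely the redundant work the bug report complains about.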
[jira] [Commented] (HBASE-16458) Shorten backup / restore test execution time
[ https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607755#comment-16607755 ] Hadoop QA commented on HBASE-16458: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 22s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s{color} | {color:red} hbase-backup: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 15s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 10m 44s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 57s{color} | {color:green} hbase-backup in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 51m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-16458 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12938904/HBASE-16458-v1.patch | | Optional Tests | asflicense javac javadoc unit shadedjars hadoopcheck xml compile findbugs hbaseanti checkstyle | | uname | Linux fc7d12345598 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 17:03:53 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / c3419be003 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC3 | | checkstyle | https://builds.apache.org/job/PreCommit-HBASE-Build/14354/artifact/patchprocess/diff-checkstyle-hbase-backup.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/14354/testReport/ |
[jira] [Commented] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607753#comment-16607753 ] Andrew Purtell commented on HBASE-21166: Thanks! RMs everywhere thank you in advance... > Creating a CoprocessorHConnection re-retrieves the cluster id from ZK > - > > Key: HBASE-21166 > URL: https://issues.apache.org/jira/browse/HBASE-21166 > Project: HBase > Issue Type: Bug >Affects Versions: 1.5.0 >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Major > Fix For: 1.5.0, 1.3.3, 1.4.8, 1.2.7 > > Attachments: HBASE-21166.branch-1.001.patch > > > CoprocessorHConnections are created, for example, during a call of > CoprocessorHost$Environment.getTable(...). The region server already knows the > cluster id, yet we're resolving it over and over again. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-21166: -- Fix Version/s: 1.2.7 1.4.8 1.3.3 > Creating a CoprocessorHConnection re-retrieves the cluster id from ZK > - > > Key: HBASE-21166 > URL: https://issues.apache.org/jira/browse/HBASE-21166 > Project: HBase > Issue Type: Bug >Affects Versions: 1.5.0 >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Major > Fix For: 1.5.0, 1.3.3, 1.4.8, 1.2.7 > > Attachments: HBASE-21166.branch-1.001.patch > > > CoprocessorHConnections are created, for example, during a call of > CoprocessorHost$Environment.getTable(...). The region server already knows the > cluster id, yet we're resolving it over and over again. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607752#comment-16607752 ] Lars Hofhansl commented on HBASE-21166: --- Pushed to branch-1.2, branch-1.3, and branch-1.4 as well. Apologies [~apurtell] > Creating a CoprocessorHConnection re-retrieves the cluster id from ZK > - > > Key: HBASE-21166 > URL: https://issues.apache.org/jira/browse/HBASE-21166 > Project: HBase > Issue Type: Bug >Affects Versions: 1.5.0 >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Major > Fix For: 1.5.0, 1.3.3, 1.4.8, 1.2.7 > > Attachments: HBASE-21166.branch-1.001.patch > > > CoprocessorHConnections are created, for example, during a call of > CoprocessorHost$Environment.getTable(...). The region server already knows the > cluster id, yet we're resolving it over and over again. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21164) reportForDuty should do (exponential) backoff rather than retry every 3 seconds (default).
[ https://issues.apache.org/jira/browse/HBASE-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607745#comment-16607745 ] Hadoop QA commented on HBASE-21164: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.1 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 13s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 28s{color} | {color:green} branch-2.1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s{color} | {color:green} branch-2.1 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 32s{color} | {color:green} branch-2.1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 31s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 33s{color} | {color:green} branch-2.1 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} branch-2.1 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} hbase-common: The patch generated 0 new + 2 unchanged - 1 fixed = 2 total (was 3) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 10s{color} | {color:red} hbase-server: The patch generated 1 new + 233 unchanged - 0 fixed = 234 total (was 233) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 28s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 29s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 37s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}173m 28s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}219m 43s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.regionserver.throttle.TestFlushWithThroughputController | | | hadoop.hbase.client.TestAsyncTableGetMultiThreaded | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:42ca976 | | JIRA Issue | HBASE-21164 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12938762/HBASE-21164.branch-2.1.002.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti
[jira] [Commented] (HBASE-21001) ReplicationObserver fails to load in HBase 2.0.0
[ https://issues.apache.org/jira/browse/HBASE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607742#comment-16607742 ] Hudson commented on HBASE-21001: Results for branch branch-2.1 [build #293 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/293/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/293//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/293//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/293//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > ReplicationObserver fails to load in HBase 2.0.0 > > > Key: HBASE-21001 > URL: https://issues.apache.org/jira/browse/HBASE-21001 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-alpha-4, 2.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Guangxu Cheng >Priority: Major > Labels: replication > Attachments: HBASE-21001.branch-2.0.001.patch, > HBASE-21001.master.001.patch, HBASE-21001.master.001.patch, > HBASE-21001.master.002.patch, HBASE-21001.master.003.patch, > HBASE-21001.master.004.patch > > > ReplicationObserver was added in HBASE-17290 to prevent "Potential loss of > data for replication of bulk loaded hfiles". 
> I tried to enable the bulk loading replication feature > (setting hbase.replication.bulkload.enabled=true and configuring > hbase.replication.cluster.id) on an HBase 2.0.0 cluster, but the RegionServer > started with the following error: > {quote} > 2018-08-02 18:20:36,365 INFO > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: System > coprocessor loading is enabled > 2018-08-02 18:20:36,365 INFO > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: Table > coprocessor loading is enabled > 2018-08-02 18:20:36,365 ERROR > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: > org.apache.hadoop.hbase.replication.regionserver.ReplicationObserver is not > of type > RegionServerCoprocessor. Check the configuration of > hbase.coprocessor.regionserver.classes > 2018-08-02 18:20:36,366 ERROR > org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Cannot load coprocessor > ReplicationObserver > {quote} > It looks like this was broken by HBASE-17732 to me, but I could be wrong. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
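The ERROR quoted above comes from an instance-type check at coprocessor load time: the host refuses any class that does not implement the expected coprocessor interface. A toy sketch of that kind of gate (illustrative names only, not HBase's actual `CoprocessorHost` code):

```java
// Minimal sketch of a load-time type check like the one that rejects
// ReplicationObserver in the log above. All types here are stand-ins.
public class CoprocSketch {
    interface RegionServerCoprocessor {}

    static class GoodObserver implements RegionServerCoprocessor {}

    // Compiles fine, but would be rejected at load time because it does not
    // implement the required interface.
    static class BadObserver {}

    public static boolean canLoad(Class<?> clazz) {
        return RegionServerCoprocessor.class.isAssignableFrom(clazz);
    }
}
```

A class configured via something like hbase.coprocessor.regionserver.classes passes only if `isAssignableFrom` succeeds, which is why a refactor that changes the required interface (as suspected of HBASE-17732 here) can silently break previously working observers.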
[jira] [Commented] (HBASE-21001) ReplicationObserver fails to load in HBase 2.0.0
[ https://issues.apache.org/jira/browse/HBASE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607740#comment-16607740 ] Hudson commented on HBASE-21001: Results for branch branch-2 [build #1216 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1216/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1216//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1216//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1216//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > ReplicationObserver fails to load in HBase 2.0.0 > > > Key: HBASE-21001 > URL: https://issues.apache.org/jira/browse/HBASE-21001 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-alpha-4, 2.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Guangxu Cheng >Priority: Major > Labels: replication > Attachments: HBASE-21001.branch-2.0.001.patch, > HBASE-21001.master.001.patch, HBASE-21001.master.001.patch, > HBASE-21001.master.002.patch, HBASE-21001.master.003.patch, > HBASE-21001.master.004.patch > > > ReplicationObserver was added in HBASE-17290 to prevent "Potential loss of > data for replication of bulk loaded hfiles". 
> I tried to enable the bulk loading replication feature > (setting hbase.replication.bulkload.enabled=true and configuring > hbase.replication.cluster.id) on an HBase 2.0.0 cluster, but the RegionServer > started with the following error: > {quote} > 2018-08-02 18:20:36,365 INFO > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: System > coprocessor loading is enabled > 2018-08-02 18:20:36,365 INFO > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: Table > coprocessor loading is enabled > 2018-08-02 18:20:36,365 ERROR > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: > org.apache.hadoop.hbase.replication.regionserver.ReplicationObserver is not > of type > RegionServerCoprocessor. Check the configuration of > hbase.coprocessor.regionserver.classes > 2018-08-02 18:20:36,366 ERROR > org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Cannot load coprocessor > ReplicationObserver > {quote} > It looks like this was broken by HBASE-17732 to me, but I could be wrong. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-16458) Shorten backup / restore test execution time
[ https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-16458: --- Attachment: 16458-v1.patch > Shorten backup / restore test execution time > > > Key: HBASE-16458 > URL: https://issues.apache.org/jira/browse/HBASE-16458 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Vladimir Rodionov >Priority: Major > Labels: backup > Attachments: 16458-v1.patch, 16458.HBASE-7912.v3.txt, > 16458.HBASE-7912.v4.txt, 16458.HBASE-7912.v5.txt, 16458.v1.txt, 16458.v2.txt, > 16458.v3.txt, HBASE-16458-v1.patch > > > Below was timing information for all the backup / restore tests (today's > result): > {code} > Running org.apache.hadoop.hbase.backup.TestIncrementalBackup > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 576.273 sec - > in org.apache.hadoop.hbase.backup.TestIncrementalBackup > Running org.apache.hadoop.hbase.backup.TestBackupBoundaryTests > Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.67 sec - > in org.apache.hadoop.hbase.backup.TestBackupBoundaryTests > Running org.apache.hadoop.hbase.backup.TestBackupStatusProgress > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.34 sec - > in org.apache.hadoop.hbase.backup.TestBackupStatusProgress > Running org.apache.hadoop.hbase.backup.TestBackupAdmin > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 490.251 sec - > in org.apache.hadoop.hbase.backup.TestBackupAdmin > Running org.apache.hadoop.hbase.backup.TestHFileArchiving > Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.323 sec - > in org.apache.hadoop.hbase.backup.TestHFileArchiving > Running org.apache.hadoop.hbase.backup.TestSystemTableSnapshot > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.492 sec - > in org.apache.hadoop.hbase.backup.TestSystemTableSnapshot > Running org.apache.hadoop.hbase.backup.TestBackupDescribe > Tests run: 3, Failures: 0, Errors: 0, 
Skipped: 0, Time elapsed: 93.758 sec - > in org.apache.hadoop.hbase.backup.TestBackupDescribe > Running org.apache.hadoop.hbase.backup.TestBackupLogCleaner > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.187 sec - > in org.apache.hadoop.hbase.backup.TestBackupLogCleaner > Running org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 330.539 sec - > in org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss > Running org.apache.hadoop.hbase.backup.TestRemoteBackup > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.371 sec - > in org.apache.hadoop.hbase.backup.TestRemoteBackup > Running org.apache.hadoop.hbase.backup.TestBackupSystemTable > Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.893 sec - > in org.apache.hadoop.hbase.backup.TestBackupSystemTable > Running org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.779 sec - > in org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests > Running org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.815 sec - > in org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet > Running org.apache.hadoop.hbase.backup.TestBackupShowHistory > Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.517 sec - > in org.apache.hadoop.hbase.backup.TestBackupShowHistory > Running org.apache.hadoop.hbase.backup.TestRemoteRestore > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.799 sec - > in org.apache.hadoop.hbase.backup.TestRemoteRestore > Running org.apache.hadoop.hbase.backup.TestFullRestore > Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 317.711 sec > - in org.apache.hadoop.hbase.backup.TestFullRestore > Running org.apache.hadoop.hbase.backup.TestFullBackupSet > Tests run: 2, Failures: 0, 
Errors: 0, Skipped: 0, Time elapsed: 87.045 sec - > in org.apache.hadoop.hbase.backup.TestFullBackupSet > Running org.apache.hadoop.hbase.backup.TestBackupDelete > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.214 sec - > in org.apache.hadoop.hbase.backup.TestBackupDelete > Running org.apache.hadoop.hbase.backup.TestBackupDeleteRestore > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.631 sec - > in org.apache.hadoop.hbase.backup.TestBackupDeleteRestore > Running org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 190.358 sec - > in org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable > Running >
[jira] [Commented] (HBASE-16458) Shorten backup / restore test execution time
[ https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607737#comment-16607737 ] Ted Yu commented on HBASE-16458: On Linux, with patch, from first test output: {code} 2018-09-07 22:06:50,491 INFO [Time-limited test] hbase.ResourceChecker(148): before: backup.TestBackupUtils#TestGetBulkOutputDir Thread=8, OpenFileDescriptor=179, MaxFileDescriptor=32000, SystemLoadAverage=242, ProcessCount=363, AvailableMemoryMB=56614 {code} to last: {code} 2018-09-07 22:23:48,010 INFO [Block report processor] blockmanagement.BlockManager(2645): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:36058 is added to blk_1073741829_1005{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-453ccfd4-ec24-490b-a51e-2b75f5b1da9f:NORMAL:127.0.0.1:36058|RBW]]} size 146414 2018-09-07 22:23:48,413 INFO [Thread-3] regionserver.ShutdownHook$ShutdownHookThread(135): Shutdown hook finished. {code} That was ~17 minutes. 
> Shorten backup / restore test execution time > > > Key: HBASE-16458 > URL: https://issues.apache.org/jira/browse/HBASE-16458 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Vladimir Rodionov >Priority: Major > Labels: backup > Attachments: 16458.HBASE-7912.v3.txt, 16458.HBASE-7912.v4.txt, > 16458.HBASE-7912.v5.txt, 16458.v1.txt, 16458.v2.txt, 16458.v3.txt, > HBASE-16458-v1.patch > > > Below was timing information for all the backup / restore tests (today's > result): > {code} > Running org.apache.hadoop.hbase.backup.TestIncrementalBackup > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 576.273 sec - > in org.apache.hadoop.hbase.backup.TestIncrementalBackup > Running org.apache.hadoop.hbase.backup.TestBackupBoundaryTests > Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.67 sec - > in org.apache.hadoop.hbase.backup.TestBackupBoundaryTests > Running org.apache.hadoop.hbase.backup.TestBackupStatusProgress > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.34 sec - > in org.apache.hadoop.hbase.backup.TestBackupStatusProgress > Running org.apache.hadoop.hbase.backup.TestBackupAdmin > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 490.251 sec - > in org.apache.hadoop.hbase.backup.TestBackupAdmin > Running org.apache.hadoop.hbase.backup.TestHFileArchiving > Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.323 sec - > in org.apache.hadoop.hbase.backup.TestHFileArchiving > Running org.apache.hadoop.hbase.backup.TestSystemTableSnapshot > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.492 sec - > in org.apache.hadoop.hbase.backup.TestSystemTableSnapshot > Running org.apache.hadoop.hbase.backup.TestBackupDescribe > Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.758 sec - > in org.apache.hadoop.hbase.backup.TestBackupDescribe > Running org.apache.hadoop.hbase.backup.TestBackupLogCleaner > Tests run: 1, Failures: 0, Errors: 0, 
Skipped: 0, Time elapsed: 109.187 sec - > in org.apache.hadoop.hbase.backup.TestBackupLogCleaner > Running org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 330.539 sec - > in org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss > Running org.apache.hadoop.hbase.backup.TestRemoteBackup > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.371 sec - > in org.apache.hadoop.hbase.backup.TestRemoteBackup > Running org.apache.hadoop.hbase.backup.TestBackupSystemTable > Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.893 sec - > in org.apache.hadoop.hbase.backup.TestBackupSystemTable > Running org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.779 sec - > in org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests > Running org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.815 sec - > in org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet > Running org.apache.hadoop.hbase.backup.TestBackupShowHistory > Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.517 sec - > in org.apache.hadoop.hbase.backup.TestBackupShowHistory > Running org.apache.hadoop.hbase.backup.TestRemoteRestore > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.799 sec - > in org.apache.hadoop.hbase.backup.TestRemoteRestore > Running org.apache.hadoop.hbase.backup.TestFullRestore > Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 317.711 sec > - in org.apache.hadoop.hbase.backup.TestFullRestore > Running org.apache.hadoop.hbase.backup.TestFullBackupSet
[jira] [Commented] (HBASE-20952) Re-visit the WAL API
[ https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607728#comment-16607728 ] Josh Elser commented on HBASE-20952: FYI, had the pleasure of spending some time talking this through with Ted today. He has a new draft doc he's working on, incorporating some of the already given feedback, and working on splitting up the current patch to make the "core changes" to the API more clear (leaving the rest of the changes in a subsequent patch). The plan is to get something up the first half of next week. Reviewers, hold tight, please! Appreciate the interest from all. > Re-visit the WAL API > > > Key: HBASE-20952 > URL: https://issues.apache.org/jira/browse/HBASE-20952 > Project: HBase > Issue Type: Sub-task > Components: wal >Reporter: Josh Elser >Priority: Major > Attachments: 20952.v1.txt > > > Take a step back from the current WAL implementations and think about what an > HBase WAL API should look like. What are the primitive calls that we require > to guarantee durability of writes with a high degree of performance? > The API needs to take the current implementations into consideration. We > should also have a mind for what is happening in the Ratis LogService (but > the LogService should not dictate what HBase's WAL API looks like; see RATIS-272). > Other "systems" inside of HBase that use WALs are replication and > backup. Replication has the use case for "tail"ing the WAL, which we > should provide via our new API. Backup doesn't do anything fancy (IIRC). We > should make sure all consumers are generally going to be OK with the API we > create. > The API may be "OK" (or OK in part). We also need to consider other methods > which were "bolted" on, such as {{AbstractFSWAL}} and > {{WALFileLengthProvider}}. Other corners of "WAL use" (like > {{WALSplitter}}) should also be looked at to use WAL APIs only. > We also need to make sure that adequate interface audience and stability > annotations are chosen. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
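To make the WAL API discussion above concrete, here is a hedged sketch of what the primitives might look like: `append`/`sync` as the durability calls, plus a tailing reader for the replication use case. Every name below is an illustrative assumption, not a proposal from the attached doc or the Ratis LogService:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical WAL primitives: append/sync for durable writes and a tailer
// for replication. Purely a sketch; HBase's real WAL interfaces differ.
public class WalSketch {
    interface WAL {
        long append(byte[] entry);   // returns the entry's sequence id
        void sync(long seqId);       // blocks until seqId is durable
    }

    interface WALTailer {
        List<byte[]> readFrom(long seqId); // entries at or after seqId
    }

    // Trivial in-memory stand-in that makes the shape concrete.
    static class InMemoryWal implements WAL, WALTailer {
        private final List<byte[]> entries = new ArrayList<>();

        public synchronized long append(byte[] entry) {
            entries.add(entry);
            return entries.size() - 1;
        }

        public void sync(long seqId) {
            // no-op: memory is "durable" for this sketch
        }

        public synchronized List<byte[]> readFrom(long seqId) {
            return new ArrayList<>(entries.subList((int) seqId, entries.size()));
        }
    }
}
```

The point of such a split is that replication only needs the `WALTailer` view, while region writes need only `WAL`, so consumers like `WALSplitter` could be written against the narrow interface rather than a concrete class like `AbstractFSWAL`.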
[jira] [Commented] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default
[ https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607730#comment-16607730 ] Hadoop QA commented on HBASE-21162: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange} 0m 0s{color} | {color:orange} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | || || || || {color:brown} branch-1 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 46s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 46s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 38s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed with JDK 
v1.7.0_191 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 26s{color} | {color:red} hbase-common: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 52s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 1m 45s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 11s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}124m 53s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} |
[jira] [Commented] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607725#comment-16607725 ] Andrew Purtell commented on HBASE-21166: [~lhofhansl] this is a bug, please commit to release branches. Should be no need to backport; that just makes work for others. > Creating a CoprocessorHConnection re-retrieves the cluster id from ZK > - > > Key: HBASE-21166 > URL: https://issues.apache.org/jira/browse/HBASE-21166 > Project: HBase > Issue Type: Bug >Affects Versions: 1.5.0 >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Major > Fix For: 1.5.0 > > Attachments: HBASE-21166.branch-1.001.patch > > > CoprocessorHConnections are created, for example, during a call of > CoprocessorHost$Environment.getTable(...). The region server already knows the > cluster id, yet we're resolving it over and over again. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
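The pattern the fix implies — resolve the cluster id once and reuse it instead of asking ZooKeeper on every connection — is ordinary memoization. A hypothetical sketch (class and method names are invented for illustration; this is not the actual patch):

```java
import java.util.function.Supplier;

/** Illustrative: cache an expensive lookup (standing in for a ZK round-trip for the cluster id). */
class CachedClusterId {
  private final Supplier<String> expensiveLookup; // e.g. a ZooKeeper read
  private volatile String cached;

  CachedClusterId(Supplier<String> expensiveLookup) {
    this.expensiveLookup = expensiveLookup;
  }

  String get() {
    String id = cached;
    if (id == null) {
      synchronized (this) {
        if (cached == null) {
          cached = expensiveLookup.get(); // only the first caller pays the cost
        }
        id = cached;
      }
    }
    return id;
  }
}
```

In the region server case the value is already known at the time the connection is created, so the "lookup" degenerates to handing over the known id rather than re-reading it from ZK.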
[jira] [Updated] (HBASE-20743) ASF License warnings for branch-1
[ https://issues.apache.org/jira/browse/HBASE-20743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-20743: --- Description: >From >https://builds.apache.org/job/HBase%20Nightly/job/branch-1/450/artifact/output-general/patch-asflicense-problems.txt > : {code} Lines that start with ? in the ASF License report indicate files that do not have an Apache license header: !? hbase-error-prone/target/checkstyle-result.xml !? hbase-error-prone/target/classes/META-INF/services/com.google.errorprone.bugpatterns.BugChecker !? hbase-error-prone/target/maven-status/maven-compiler-plugin/compile/default-compile/inputFiles.lst !? hbase-error-prone/target/maven-status/maven-compiler-plugin/compile/default-compile/createdFiles.lst {code} Looks like they should be excluded. was: >From >https://builds.apache.org/job/HBase%20Nightly/job/branch-1/350/artifact/output-general/patch-asflicense-problems.txt > : {code} Lines that start with ? in the ASF License report indicate files that do not have an Apache license header: !? hbase-error-prone/target/checkstyle-result.xml !? hbase-error-prone/target/classes/META-INF/services/com.google.errorprone.bugpatterns.BugChecker !? hbase-error-prone/target/maven-status/maven-compiler-plugin/compile/default-compile/inputFiles.lst !? hbase-error-prone/target/maven-status/maven-compiler-plugin/compile/default-compile/createdFiles.lst {code} Looks like they should be excluded. > ASF License warnings for branch-1 > - > > Key: HBASE-20743 > URL: https://issues.apache.org/jira/browse/HBASE-20743 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > > From > https://builds.apache.org/job/HBase%20Nightly/job/branch-1/450/artifact/output-general/patch-asflicense-problems.txt > : > {code} > Lines that start with ? in the ASF License report indicate files that do > not have an Apache license header: > !? hbase-error-prone/target/checkstyle-result.xml > !? 
> hbase-error-prone/target/classes/META-INF/services/com.google.errorprone.bugpatterns.BugChecker > !? > hbase-error-prone/target/maven-status/maven-compiler-plugin/compile/default-compile/inputFiles.lst > !? > hbase-error-prone/target/maven-status/maven-compiler-plugin/compile/default-compile/createdFiles.lst > {code} > Looks like they should be excluded. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
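If the exclusion route is taken, it would typically be an apache-rat-plugin configuration change along these lines (a sketch only; the plugin version and any excludes already present in branch-1's pom are assumptions here):

```xml
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- generated build output should not need Apache license headers -->
      <exclude>**/target/**</exclude>
    </excludes>
  </configuration>
</plugin>
```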
[jira] [Updated] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-21166: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Creating a CoprocessorHConnection re-retrieves the cluster id from ZK > - > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607723#comment-16607723 ] Lars Hofhansl commented on HBASE-21166: --- Tests are unrelated. Committing. branch-1 only. Can backport to 1.2, 1.3, and 1.4 of course. > Creating a CoprocessorHConnection re-retrieves the cluster id from ZK > - > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21091) Update Hadoop compatibility table
[ https://issues.apache.org/jira/browse/HBASE-21091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607719#comment-16607719 ] Josh Elser commented on HBASE-21091: {quote}[~elserj] mind if I take a crack at the font thing this weekend? {quote} Please do! > Update Hadoop compatibility table > - > > Key: HBASE-21091 > URL: https://issues.apache.org/jira/browse/HBASE-21091 > Project: HBase > Issue Type: Improvement > Components: documentation >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > Attachments: HBASE-20264.001.patch, HBASE-20264.002.patch, > hbase-20264-emoji.png, hbase-20264-html.png > > > [https://lists.apache.org/thread.html/7016d322a07e96dccdb071041c37238e43d3df4f93e9515d52ccfafc@%3Cdev.hbase.apache.org%3E] > covers some discussion around our Hadoop Version Compatibility table. > > A "leading" suggestion to make this more clear is to use a green/yellow/red > (traffic-signal) style marking, instead of using specifics words/phrases (as > they're often dependent on the interpretation of the reader). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-15666) shaded dependencies for hbase-testing-util
[ https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607711#comment-16607711 ] Vrushali C commented on HBASE-15666: Hi, checking in to see if there are any updates on this. In YARN we are using HBase as the backend for the Timeline Service v2 feature, and we would like to start using the shaded HBase client. But for the unit tests, I am running into issues with the mini cluster not coming up because the master is not initialized. I also see a bunch of errors like {code} java.lang.NoSuchMethodError: org.apache.hbase.thirdparty.io.netty.handler.codec.protobuf.ProtobufDecoder.<init>(Lcom/google/protobuf/MessageLite;)V {code} > shaded dependencies for hbase-testing-util > -- > > Key: HBASE-15666 > URL: https://issues.apache.org/jira/browse/HBASE-15666 > Project: HBase > Issue Type: New Feature > Components: test >Affects Versions: 1.1.0, 1.2.0 >Reporter: Sean Busbey >Priority: Critical > > Folks that make use of our shaded client but then want to test things using > the hbase-testing-util end up getting all of our dependencies again in the > test scope. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21164) reportForDuty should do (exponential) backoff rather than retry every 3 seconds (default).
[ https://issues.apache.org/jira/browse/HBASE-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HBASE-21164: -- Attachment: HBASE-21164.branch-2.1.003.patch > reportForDuty should do (exponential) backoff rather than retry every 3 > seconds (default). > -- > > Key: HBASE-21164 > URL: https://issues.apache.org/jira/browse/HBASE-21164 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: stack >Assignee: Mingliang Liu >Priority: Minor > Attachments: HBASE-21164.branch-2.1.001.patch, > HBASE-21164.branch-2.1.002.patch, HBASE-21164.branch-2.1.003.patch > > > RegionServers do reportForDuty on startup to tell Master they are available. > If Master is initializing, and especially on a big cluster when it can take a > while particularly if something is amiss, the log every three seconds is > annoying and doesn't do anything of use. Do backoff on failure, up to a > reasonable maximum period. Here is an example: > {code} > 2018-09-06 14:01:39,312 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: reportForDuty to > master=vc0207.halxg.cloudera.com,22001,1536266763109 with port=22001, > startcode=1536266763109 > 2018-09-06 14:01:39,312 WARN > org.apache.hadoop.hbase.regionserver.HRegionServer: reportForDuty failed; > sleeping and then retrying. > > {code} > For example, I am looking at a large cluster now that had a backlog of > procedure WALs. It is taking a couple of hours recreating the procedure-state > because there are millions of procedures outstanding. Meantime, the Master > log is just full of the above message -- every three seconds... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21164) reportForDuty should do (exponential) backoff rather than retry every 3 seconds (default).
[ https://issues.apache.org/jira/browse/HBASE-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607706#comment-16607706 ] Mingliang Liu commented on HBASE-21164: --- Thanks [~stack]. I revised the test a little by relaxing the assertion to tolerate thread contention, and by making sure the log capturer actually captured the failure messages. In the v3 patch, the assertion is basically as follows: {code} int count = StringUtils.countMatches(output, failMsg); // Following asserts the actual retry number is in range (expectedRetry/2, expectedRetry*2). // Ideally we can assert the exact retry count. We relax here to tolerate contention error. int expectedRetry = (int)Math.ceil(Math.log(interval - 100)); assertTrue(String.format("reportForDuty retries %d times, less than expected min %d", count, expectedRetry / 2), count > expectedRetry / 2); assertTrue(String.format("reportForDuty retries %d times, more than expected max %d", count, expectedRetry * 2), count < expectedRetry * 2); {code} I think it makes sense to do this for subsequent heartbeats as well. I found that {{tryRegionServerReport()}} does not return its result or log a message, and its test should be in a separate class, so I think a new issue is good. Shall we move the test util class LogCapture out of the current class so that it can be used elsewhere? > reportForDuty should do (exponential) backoff rather than retry every 3 > seconds (default). > -- > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
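The requested behavior — double the sleep after each failed reportForDuty attempt, capped at a maximum — can be sketched as follows (class and parameter names are illustrative, not taken from the patch):

```java
/** Illustrative exponential backoff: 3s, 6s, 12s, ... capped at maxSleepMs. */
class ReportBackoff {
  private final long initialSleepMs;
  private final long maxSleepMs;
  private int attempts;

  ReportBackoff(long initialSleepMs, long maxSleepMs) {
    this.initialSleepMs = initialSleepMs;
    this.maxSleepMs = maxSleepMs;
  }

  /** Sleep duration for the next retry; grows as initial * 2^n until the cap. */
  long nextSleepMs() {
    long sleep = initialSleepMs * (1L << Math.min(attempts, 30)); // clamp shift to avoid overflow
    attempts++;
    return Math.min(sleep, maxSleepMs);
  }
}
```

With a 3-second initial sleep and a cap of, say, one minute, the Master log during a long initialization would see a handful of messages rather than one every three seconds.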
[jira] [Updated] (HBASE-16458) Shorten backup / restore test execution time
[ https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-16458: --- Status: Patch Available (was: Reopened) > Shorten backup / restore test execution time > > > Key: HBASE-16458 > URL: https://issues.apache.org/jira/browse/HBASE-16458 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Vladimir Rodionov >Priority: Major > Labels: backup > Attachments: 16458.HBASE-7912.v3.txt, 16458.HBASE-7912.v4.txt, > 16458.HBASE-7912.v5.txt, 16458.v1.txt, 16458.v2.txt, 16458.v3.txt, > HBASE-16458-v1.patch > > > Below was timing information for all the backup / restore tests (today's > result): > {code} > Running org.apache.hadoop.hbase.backup.TestIncrementalBackup > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 576.273 sec - > in org.apache.hadoop.hbase.backup.TestIncrementalBackup > Running org.apache.hadoop.hbase.backup.TestBackupBoundaryTests > Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.67 sec - > in org.apache.hadoop.hbase.backup.TestBackupBoundaryTests > Running org.apache.hadoop.hbase.backup.TestBackupStatusProgress > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.34 sec - > in org.apache.hadoop.hbase.backup.TestBackupStatusProgress > Running org.apache.hadoop.hbase.backup.TestBackupAdmin > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 490.251 sec - > in org.apache.hadoop.hbase.backup.TestBackupAdmin > Running org.apache.hadoop.hbase.backup.TestHFileArchiving > Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.323 sec - > in org.apache.hadoop.hbase.backup.TestHFileArchiving > Running org.apache.hadoop.hbase.backup.TestSystemTableSnapshot > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.492 sec - > in org.apache.hadoop.hbase.backup.TestSystemTableSnapshot > Running org.apache.hadoop.hbase.backup.TestBackupDescribe > Tests run: 3, Failures: 0, Errors: 0, Skipped: 
0, Time elapsed: 93.758 sec - > in org.apache.hadoop.hbase.backup.TestBackupDescribe > Running org.apache.hadoop.hbase.backup.TestBackupLogCleaner > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.187 sec - > in org.apache.hadoop.hbase.backup.TestBackupLogCleaner > Running org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 330.539 sec - > in org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss > Running org.apache.hadoop.hbase.backup.TestRemoteBackup > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.371 sec - > in org.apache.hadoop.hbase.backup.TestRemoteBackup > Running org.apache.hadoop.hbase.backup.TestBackupSystemTable > Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.893 sec - > in org.apache.hadoop.hbase.backup.TestBackupSystemTable > Running org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.779 sec - > in org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests > Running org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.815 sec - > in org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet > Running org.apache.hadoop.hbase.backup.TestBackupShowHistory > Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.517 sec - > in org.apache.hadoop.hbase.backup.TestBackupShowHistory > Running org.apache.hadoop.hbase.backup.TestRemoteRestore > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.799 sec - > in org.apache.hadoop.hbase.backup.TestRemoteRestore > Running org.apache.hadoop.hbase.backup.TestFullRestore > Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 317.711 sec > - in org.apache.hadoop.hbase.backup.TestFullRestore > Running org.apache.hadoop.hbase.backup.TestFullBackupSet > Tests run: 2, Failures: 0, Errors: 0, 
Skipped: 0, Time elapsed: 87.045 sec - > in org.apache.hadoop.hbase.backup.TestFullBackupSet > Running org.apache.hadoop.hbase.backup.TestBackupDelete > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.214 sec - > in org.apache.hadoop.hbase.backup.TestBackupDelete > Running org.apache.hadoop.hbase.backup.TestBackupDeleteRestore > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.631 sec - > in org.apache.hadoop.hbase.backup.TestBackupDeleteRestore > Running org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 190.358 sec - > in org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable > Running >
[jira] [Commented] (HBASE-16458) Shorten backup / restore test execution time
[ https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607699#comment-16607699 ] Vladimir Rodionov commented on HBASE-16458: --- With some tweaks to TestBackupBase and pom.xml, we can now run the tests 5x faster. cc: [~yuzhih...@gmail.com]. > Shorten backup / restore test execution time > > > Key: HBASE-16458 > URL: https://issues.apache.org/jira/browse/HBASE-16458 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Vladimir Rodionov >Priority: Major > Labels: backup >
[jira] [Updated] (HBASE-16458) Shorten backup / restore test execution time
[ https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-16458: -- Attachment: HBASE-16458-v1.patch > Shorten backup / restore test execution time > > > Key: HBASE-16458 > URL: https://issues.apache.org/jira/browse/HBASE-16458 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Vladimir Rodionov >Priority: Major > Labels: backup >
[jira] [Created] (HBASE-21169) Initiate hbck2 tool in hbase-operator-tools repo
Umesh Agashe created HBASE-21169: Summary: Initiate hbck2 tool in hbase-operator-tools repo Key: HBASE-21169 URL: https://issues.apache.org/jira/browse/HBASE-21169 Project: HBase Issue Type: Sub-task Components: hbck2 Affects Versions: 2.1.0 Reporter: Umesh Agashe Assignee: Umesh Agashe Create the hbck2 tool in the hbase-operator-tools (https://github.com/apache/hbase-operator-tools.git) repo. This is not intended to be a complete tool, but the initial changes: usage, the ability to connect to a server, logging, use of the newly added HbckService, etc. Code changes addressing specific use cases can be added later, and the tool will evolve. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607697#comment-16607697 ] Hadoop QA commented on HBASE-21166: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange} 0m 0s{color} | {color:orange} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | || || || || {color:brown} branch-1 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 6m 40s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 50s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 40s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 26s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed with JDK 
v1.7.0_191 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} hbase-client: The patch generated 0 new + 66 unchanged - 2 fixed = 66 total (was 68) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 9s{color} | {color:green} The patch hbase-server passed checkstyle {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 19s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 1m 30s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 22s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}171m 5s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} |
[jira] [Updated] (HBASE-20874) Sending compaction descriptions from all regionservers to master.
[ https://issues.apache.org/jira/browse/HBASE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sakthi updated HBASE-20874: --- Attachment: (was: hbase-20874.master.010.patch) > Sending compaction descriptions from all regionservers to master. > - > > Key: HBASE-20874 > URL: https://issues.apache.org/jira/browse/HBASE-20874 > Project: HBase > Issue Type: Sub-task >Reporter: Mohit Goel >Assignee: Mohit Goel >Priority: Minor > Attachments: HBASE-20874.master.004.patch, > HBASE-20874.master.005.patch, HBASE-20874.master.006.patch, > HBASE-20874.master.007.patch, HBASE-20874.master.008.patch, > hbase-20874.master.009.patch, hbase-20874.master.010.patch > > > Need to send the compaction descriptions from region servers to the master, to > let the master know the entire compaction state of the cluster. Further, the > implementation of client-side APIs such as getCompactionState needs to change so > that they consult the master for the result instead of sending individual > requests to regionservers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20874) Sending compaction descriptions from all regionservers to master.
[ https://issues.apache.org/jira/browse/HBASE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sakthi updated HBASE-20874: --- Attachment: hbase-20874.master.010.patch > Sending compaction descriptions from all regionservers to master. > - > > Key: HBASE-20874 > URL: https://issues.apache.org/jira/browse/HBASE-20874 > Project: HBase > Issue Type: Sub-task >Reporter: Mohit Goel >Assignee: Mohit Goel >Priority: Minor > Attachments: HBASE-20874.master.004.patch, > HBASE-20874.master.005.patch, HBASE-20874.master.006.patch, > HBASE-20874.master.007.patch, HBASE-20874.master.008.patch, > hbase-20874.master.009.patch, hbase-20874.master.010.patch > > > Need to send the compaction descriptions from region servers to the master, to > let the master know the entire compaction state of the cluster. Further, the > implementation of client-side APIs such as getCompactionState needs to change so > that they consult the master for the result instead of sending individual > requests to regionservers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20874) Sending compaction descriptions from all regionservers to master.
[ https://issues.apache.org/jira/browse/HBASE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607677#comment-16607677 ] Sakthi commented on HBASE-20874: I feel the hbase-server errors reported here are unrelated to the changes made in the .008+ patches. I see that there is still 1 rubocop issue being reported; will fix it. Also, I tried running both of the failed tests on my local machine and they passed. Resubmitting another patch for the rubocop fix. > Sending compaction descriptions from all regionservers to master. > - > > Key: HBASE-20874 > URL: https://issues.apache.org/jira/browse/HBASE-20874 > Project: HBase > Issue Type: Sub-task >Reporter: Mohit Goel >Assignee: Mohit Goel >Priority: Minor > Attachments: HBASE-20874.master.004.patch, > HBASE-20874.master.005.patch, HBASE-20874.master.006.patch, > HBASE-20874.master.007.patch, HBASE-20874.master.008.patch, > hbase-20874.master.009.patch, hbase-20874.master.010.patch > > > Need to send the compaction descriptions from region servers to the master, to > let the master know the entire compaction state of the cluster. Further, the > implementation of client-side APIs such as getCompactionState needs to change so > that they consult the master for the result instead of sending individual > requests to regionservers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21001) ReplicationObserver fails to load in HBase 2.0.0
[ https://issues.apache.org/jira/browse/HBASE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607648#comment-16607648 ] Mingliang Liu commented on HBASE-21001: --- Nit: we don't need to call {{region.close}} explicitly in UT after [HBASE-21138]. > ReplicationObserver fails to load in HBase 2.0.0 > > > Key: HBASE-21001 > URL: https://issues.apache.org/jira/browse/HBASE-21001 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-alpha-4, 2.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Guangxu Cheng >Priority: Major > Labels: replication > Attachments: HBASE-21001.branch-2.0.001.patch, > HBASE-21001.master.001.patch, HBASE-21001.master.001.patch, > HBASE-21001.master.002.patch, HBASE-21001.master.003.patch, > HBASE-21001.master.004.patch > > > ReplicationObserver was added in HBASE-17290 to prevent "Potential loss of > data for replication of bulk loaded hfiles". > I tried to enable bulk loading replication feature > (hbase.replication.bulkload.enabled=true and configure > hbase.replication.cluster.id) on a HBase 2.0.0 cluster, but the RegionServer > started with the following error: > {quote} > 2018-08-02 18:20:36,365 INFO > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: System > coprocessor loading is enabled > 2018-08-02 18:20:36,365 INFO > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: Table > coprocessor loading is enabled > 2018-08-02 18:20:36,365 ERROR > org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: > org.apache.hadoop.hbase.replication.regionserver.ReplicationObserver is not > of type > RegionServerCoprocessor. Check the configuration of > hbase.coprocessor.regionserver.classes > 2018-08-02 18:20:36,366 ERROR > org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Cannot load coprocessor > ReplicationObserver > {quote} > It looks like this was broken by HBASE-17732 to me, but I could be wrong. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
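The "is not of type RegionServerCoprocessor" error quoted above comes from a runtime type check on the configured coprocessor class. The following is a hedged, self-contained sketch of that kind of check, not HBase's actual CoprocessorHost code; all class names below are invented for illustration.

```java
// Hypothetical sketch of a coprocessor-host type check: the host loads the
// configured class and rejects it unless it implements the expected interface,
// which is the failure mode quoted in the log above.
public class CoprocessorCheckDemo {
    interface RegionServerCoprocessor {}          // stand-in for the marker the host expects
    static class GoodObserver implements RegionServerCoprocessor {}
    static class BadObserver {}                   // compiles fine, but is the wrong type

    /** Returns true only if the class can serve as a RegionServerCoprocessor. */
    static boolean isLoadable(Class<?> clazz) {
        return RegionServerCoprocessor.class.isAssignableFrom(clazz);
    }

    public static void main(String[] args) {
        System.out.println(isLoadable(GoodObserver.class)); // true
        System.out.println(isLoadable(BadObserver.class));  // false -> "Cannot load coprocessor"
    }
}
```

A class that stops implementing the interface the host checks for (for example after an interface hierarchy refactor like the one suspected here) fails this check even though it still compiles.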
[jira] [Commented] (HBASE-21168) BloomFilterUtil uses hardcoded randomness
[ https://issues.apache.org/jira/browse/HBASE-21168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607638#comment-16607638 ] Mingliang Liu commented on HBASE-21168: --- Perfect! > BloomFilterUtil uses hardcoded randomness > - > > Key: HBASE-21168 > URL: https://issues.apache.org/jira/browse/HBASE-21168 > Project: HBase > Issue Type: Task >Affects Versions: 2.0.0 >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-21168.master.001.patch, > HBASE-21168.master.002.patch > > > This was flagged by a Fortify scan and while it doesn't appear to be a real > issue, it's pretty easy to take care of anyway. > The hard coded rand can be moved to the test class that actually needs it to > make the static analysis happy. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
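The refactor described above, moving a hardcoded Random out of the utility class and into the test that needs it, can be sketched as follows. Method names are invented for illustration and are not BloomFilterUtil's real API.

```java
import java.util.Arrays;
import java.util.Random;

// Hedged sketch of injecting randomness instead of hardcoding it: the utility
// takes a caller-supplied Random, so production code carries no fixed seed
// (which is what static analyzers like Fortify flag), while tests stay
// deterministic by seeding their own instance.
public class RandomInjectionDemo {
    /** Fills a byte[] using caller-provided randomness. */
    static byte[] randomBytes(Random rand, int len) {
        byte[] b = new byte[len];
        rand.nextBytes(b);
        return b;
    }

    public static void main(String[] args) {
        // Test code supplies a seeded Random; same seed yields the same bytes.
        byte[] a = randomBytes(new Random(42), 8);
        byte[] b = randomBytes(new Random(42), 8);
        System.out.println(Arrays.equals(a, b)); // true
    }
}
```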
[jira] [Commented] (HBASE-21168) BloomFilterUtil uses hardcoded randomness
[ https://issues.apache.org/jira/browse/HBASE-21168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607604#comment-16607604 ] Hadoop QA commented on HBASE-21168: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 55s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 22s{color} | {color:green} hbase-server: The patch generated 0 new + 31 unchanged - 1 fixed = 31 total (was 32) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 5s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 12m 27s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}128m 59s{color} | {color:red} hbase-server in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}177m 0s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.replication.TestSyncReplicationStandbyKillRS | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21168 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12938853/HBASE-21168.master.001.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 4eab68156705 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / c3419be003 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC3 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/14346/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/14346/testReport/ | | Max. process+thread count | 5382 (vs.
[jira] [Commented] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default
[ https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607585#comment-16607585 ] Andrew Purtell commented on HBASE-21162: Updated patch mentions this JIRA in the comment > Revert suspicious change to BoundedByteBufferPool and disable use of direct > buffers for IPC reservoir by default > > > Key: HBASE-21162 > URL: https://issues.apache.org/jira/browse/HBASE-21162 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.7 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 1.5.0, 1.4.8 > > Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, > HBASE-21162-branch-1.patch > > > We had a production incident where we traced the issue to a direct buffer > leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false > and after that no native memory leak could be observed in any regionserver > process under the triggering load. > On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to > BoundedByteBufferPool that is suspicious given this finding. It was committed > to branch-1.4 and branch-1. I'm going to revert this change. > In addition the allocation of direct memory for the server RPC reservoir is a > bit problematic in that tracing native memory or direct buffer leaks to a > particular class or compilation unit is difficult, so I also propose > allocating the reservoir on the heap by default instead. Should there be a > leak it is much easier to do an analysis of a heap dump with familiar tools > to find it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
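The heap-by-default proposal above in miniature: heap buffers show up in a heap dump with familiar tools, while direct buffers live in native memory and are much harder to trace. This sketch uses a plain boolean where the real reservoir is driven by configuration; the flag name and wiring are invented here.

```java
import java.nio.ByteBuffer;

// Minimal illustration of the allocation choice being discussed: a single
// switch between heap (dump-friendly) and direct (native-memory) buffers.
public class ReservoirAllocDemo {
    static ByteBuffer allocate(int size, boolean useDirect) {
        return useDirect ? ByteBuffer.allocateDirect(size) : ByteBuffer.allocate(size);
    }

    public static void main(String[] args) {
        System.out.println(allocate(1024, false).isDirect()); // false: on-heap
        System.out.println(allocate(1024, true).isDirect());  // true: native memory
    }
}
```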
[jira] [Updated] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default
[ https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-21162: --- Attachment: HBASE-21162-branch-1.patch > Revert suspicious change to BoundedByteBufferPool and disable use of direct > buffers for IPC reservoir by default > > > Key: HBASE-21162 > URL: https://issues.apache.org/jira/browse/HBASE-21162 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.7 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 1.5.0, 1.4.8 > > Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, > HBASE-21162-branch-1.patch > > > We had a production incident where we traced the issue to a direct buffer > leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false > and after that no native memory leak could be observed in any regionserver > process under the triggering load. > On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to > BoundedByteBufferPool that is suspicious given this finding. It was committed > to branch-1.4 and branch-1. I'm going to revert this change. > In addition the allocation of direct memory for the server RPC reservoir is a > bit problematic in that tracing native memory or direct buffer leaks to a > particular class or compilation unit is difficult, so I also propose > allocating the reservoir on the heap by default instead. Should there be a > leak it is much easier to do an analysis of a heap dump with familiar tools > to find it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default
[ https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607580#comment-16607580 ] Andrew Purtell commented on HBASE-21162: [~mdrob] Latest patch suppresses warning for http://errorprone.info/bugpattern/NonAtomicVolatileUpdate in BoundedByteBufferPool; otherwise unchanged > Revert suspicious change to BoundedByteBufferPool and disable use of direct > buffers for IPC reservoir by default > > > Key: HBASE-21162 > URL: https://issues.apache.org/jira/browse/HBASE-21162 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.7 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 1.5.0, 1.4.8 > > Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch > > > We had a production incident where we traced the issue to a direct buffer > leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false > and after that no native memory leak could be observed in any regionserver > process under the triggering load. > On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to > BoundedByteBufferPool that is suspicious given this finding. It was committed > to branch-1.4 and branch-1. I'm going to revert this change. > In addition the allocation of direct memory for the server RPC reservoir is a > bit problematic in that tracing native memory or direct buffer leaks to a > particular class or compilation unit is difficult, so I also propose > allocating the reservoir on the heap by default instead. Should there be a > leak it is much easier to do an analysis of a heap dump with familiar tools > to find it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
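The NonAtomicVolatileUpdate suppression mentioned above concerns the pattern sketched below. The field names are hypothetical, not BoundedByteBufferPool's actual fields: `volatile` guarantees visibility, but `count++` is still a read-modify-write, so concurrent increments can be lost; AtomicLong is the usual fix when the count must be exact.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of the error-prone warning being suppressed in the patch,
// plus the atomic alternative for comparison.
public class VolatileUpdateDemo {
    @SuppressWarnings("NonAtomicVolatileUpdate") // acknowledged, not fixed -- e.g. a stats counter
    static volatile long statCount = 0;

    static final AtomicLong safeCount = new AtomicLong();

    static void bump() {
        statCount++;                  // flagged: read-modify-write on a volatile
        safeCount.incrementAndGet();  // atomic alternative
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) bump();
        // Single-threaded, both counters agree; under contention statCount could undercount.
        System.out.println(statCount + " " + safeCount.get());
    }
}
```

Suppressing rather than converting can be a deliberate choice when the counter is advisory and the extra atomic traffic is unwanted, which appears to be the trade-off here.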
[jira] [Comment Edited] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default
[ https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607580#comment-16607580 ] Andrew Purtell edited comment on HBASE-21162 at 9/7/18 7:39 PM: [~mdrob] Latest patch suppresses warning for http://errorprone.info/bugpattern/NonAtomicVolatileUpdate in BoundedByteBufferPool; otherwise unchanged Edit: Confirmed with a compile of hbase-common with -PerrorProne was (Author: apurtell): [~mdrob] Latest patch suppresses warning for http://errorprone.info/bugpattern/NonAtomicVolatileUpdate in BoundedByteBufferPool; otherwise unchanged > Revert suspicious change to BoundedByteBufferPool and disable use of direct > buffers for IPC reservoir by default > > > Key: HBASE-21162 > URL: https://issues.apache.org/jira/browse/HBASE-21162 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.7 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 1.5.0, 1.4.8 > > Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch > > > We had a production incident where we traced the issue to a direct buffer > leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false > and after that no native memory leak could be observed in any regionserver > process under the triggering load. > On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to > BoundedByteBufferPool that is suspicious given this finding. It was committed > to branch-1.4 and branch-1. I'm going to revert this change. > In addition the allocation of direct memory for the server RPC reservoir is a > bit problematic in that tracing native memory or direct buffer leaks to a > particular class or compilation unit is difficult, so I also propose > allocating the reservoir on the heap by default instead. Should there be a > leak it is much easier to do an analysis of a heap dump with familiar tools > to find it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default
[ https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-21162: --- Attachment: HBASE-21162-branch-1.patch > Revert suspicious change to BoundedByteBufferPool and disable use of direct > buffers for IPC reservoir by default > > > Key: HBASE-21162 > URL: https://issues.apache.org/jira/browse/HBASE-21162 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.7 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 1.5.0, 1.4.8 > > Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch > > > We had a production incident where we traced the issue to a direct buffer > leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false > and after that no native memory leak could be observed in any regionserver > process under the triggering load. > On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to > BoundedByteBufferPool that is suspicious given this finding. It was committed > to branch-1.4 and branch-1. I'm going to revert this change. > In addition the allocation of direct memory for the server RPC reservoir is a > bit problematic in that tracing native memory or direct buffer leaks to a > particular class or compilation unit is difficult, so I also propose > allocating the reservoir on the heap by default instead. Should there be a > leak it is much easier to do an analysis of a heap dump with familiar tools > to find it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21168) BloomFilterUtil uses hardcoded randomness
[ https://issues.apache.org/jira/browse/HBASE-21168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607573#comment-16607573 ] Mike Drob commented on HBASE-21168: --- v2: add javadoc and rename the method based on [~liuml07]'s feedback > BloomFilterUtil uses hardcoded randomness > - > > Key: HBASE-21168 > URL: https://issues.apache.org/jira/browse/HBASE-21168 > Project: HBase > Issue Type: Task >Affects Versions: 2.0.0 >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-21168.master.001.patch, > HBASE-21168.master.002.patch > > > This was flagged by a Fortify scan and while it doesn't appear to be a real > issue, it's pretty easy to take care of anyway. > The hard coded rand can be moved to the test class that actually needs it to > make the static analysis happy. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default
[ https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607570#comment-16607570 ] Mike Drob commented on HBASE-21162: --- What was the error-prone complaint? Can still add a SuppressWarnings annotation, since at some point I do plan to go through the warnings and deal with those. > Revert suspicious change to BoundedByteBufferPool and disable use of direct > buffers for IPC reservoir by default > > > Key: HBASE-21162 > URL: https://issues.apache.org/jira/browse/HBASE-21162 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.7 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 1.5.0, 1.4.8 > > Attachments: HBASE-21162-branch-1.patch > > > We had a production incident where we traced the issue to a direct buffer > leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false > and after that no native memory leak could be observed in any regionserver > process under the triggering load. > On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to > BoundedByteBufferPool that is suspicious given this finding. It was committed > to branch-1.4 and branch-1. I'm going to revert this change. > In addition the allocation of direct memory for the server RPC reservoir is a > bit problematic in that tracing native memory or direct buffer leaks to a > particular class or compilation unit is difficult, so I also propose > allocating the reservoir on the heap by default instead. Should there be a > leak it is much easier to do an analysis of a heap dump with familiar tools > to find it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21168) BloomFilterUtil uses hardcoded randomness
[ https://issues.apache.org/jira/browse/HBASE-21168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-21168: -- Attachment: HBASE-21168.master.002.patch > BloomFilterUtil uses hardcoded randomness > - > > Key: HBASE-21168 > URL: https://issues.apache.org/jira/browse/HBASE-21168 > Project: HBase > Issue Type: Task >Affects Versions: 2.0.0 >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-21168.master.001.patch, > HBASE-21168.master.002.patch > > > This was flagged by a Fortify scan and while it doesn't appear to be a real > issue, it's pretty easy to take care of anyway. > The hard coded rand can be moved to the test class that actually needs it to > make the static analysis happy. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20307) LoadTestTool prints too much zookeeper logging
[ https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607568#comment-16607568 ] Mike Drob commented on HBASE-20307: --- We could probably check that the ZooKeeper class log level is not already at ERROR or OFF, so that we don't inadvertently set it to something more chatty, but for real use cases this is probably fine. Go ahead and commit, Andrew. > LoadTestTool prints too much zookeeper logging > -- > > Key: HBASE-20307 > URL: https://issues.apache.org/jira/browse/HBASE-20307 > Project: HBase > Issue Type: Bug > Components: tooling >Reporter: Mike Drob >Assignee: Colin Garcia >Priority: Major > Labels: beginner > Attachments: HBASE-20307.000.patch, HBASE-20307.001.patch > > > When running ltt there is a ton of ZK related cruft that I probably don't > care about. Hide it behind -verbose flag or point people at log4j > configuration but don't print it by default. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
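The guard suggested in the comment above, never raising the verbosity of a logger someone has already quieted, can be sketched as follows. The actual patch manipulates log4j; this sketch uses java.util.logging purely for illustration, and the method name is invented.

```java
import java.util.logging.Level;

// Sketch of "quiet down, never chatter up": only lower a logger to WARNING if
// its current level is chattier; a logger already at SEVERE or OFF stays put.
public class QuietLoggerDemo {
    static Level quieted(Level current) {
        // Level.OFF has the highest intValue; WARNING or above is already quiet enough.
        if (current != null && current.intValue() >= Level.WARNING.intValue()) {
            return current;       // do not inadvertently make it more chatty
        }
        return Level.WARNING;     // suppress the default INFO/FINE noise
    }

    public static void main(String[] args) {
        System.out.println(quieted(Level.INFO));    // WARNING
        System.out.println(quieted(Level.SEVERE));  // SEVERE (left alone)
        System.out.println(quieted(Level.OFF));     // OFF (left alone)
    }
}
```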
[jira] [Commented] (HBASE-20307) LoadTestTool prints too much zookeeper logging
[ https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607549#comment-16607549 ] Andrew Purtell commented on HBASE-20307: +1 on the new patch. Will commit shortly unless there's an objection. > LoadTestTool prints too much zookeeper logging > -- > > Key: HBASE-20307 > URL: https://issues.apache.org/jira/browse/HBASE-20307 > Project: HBase > Issue Type: Bug > Components: tooling >Reporter: Mike Drob >Assignee: Colin Garcia >Priority: Major > Labels: beginner > Attachments: HBASE-20307.000.patch, HBASE-20307.001.patch > > > When running ltt there is a ton of ZK related cruft that I probably don't > care about. Hide it behind -verbose flag or point people at log4j > configuration but don't print it by default. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default
[ https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607519#comment-16607519 ] Andrew Purtell commented on HBASE-21162: My bad, no findbugs warnings here, they were/are error-prone warnings (not errors) - we are all good > Revert suspicious change to BoundedByteBufferPool and disable use of direct > buffers for IPC reservoir by default > > > Key: HBASE-21162 > URL: https://issues.apache.org/jira/browse/HBASE-21162 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.7 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Critical > Fix For: 1.5.0, 1.4.8 > > Attachments: HBASE-21162-branch-1.patch > > > We had a production incident where we traced the issue to a direct buffer > leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false > and after that no native memory leak could be observed in any regionserver > process under the triggering load. > On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to > BoundedByteBufferPool that is suspicious given this finding. It was committed > to branch-1.4 and branch-1. I'm going to revert this change. > In addition the allocation of direct memory for the server RPC reservoir is a > bit problematic in that tracing native memory or direct buffer leaks to a > particular class or compilation unit is difficult, so I also propose > allocating the reservoir on the heap by default instead. Should there be a > leak it is much easier to do an analysis of a heap dump with familiar tools > to find it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21138) Close HRegion instance at the end of every test in TestHRegion
[ https://issues.apache.org/jira/browse/HBASE-21138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607515#comment-16607515 ] Hudson commented on HBASE-21138: FAILURE: Integrated in Jenkins build HBase-1.3-IT #470 (See [https://builds.apache.org/job/HBase-1.3-IT/470/]) HBASE-21138 Close HRegion instance at the end of every test in (apurtell: rev d3c9723cf454383a2094a31297bf449dfc2b3a68) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java > Close HRegion instance at the end of every test in TestHRegion > -- > > Key: HBASE-21138 > URL: https://issues.apache.org/jira/browse/HBASE-21138 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Mingliang Liu >Priority: Major > Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 1.4.8 > > Attachments: HBASE-21138.000.patch, HBASE-21138.001.patch, > HBASE-21138.002.patch, HBASE-21138.003.patch, HBASE-21138.004.patch, > HBASE-21138.branch-1.004.patch, HBASE-21138.branch-1.004.patch, > HBASE-21138.branch-2.004.patch > > > TestHRegion has over 100 tests. > The following is from one subtest: > {code} > public void testCompactionAffectedByScanners() throws Exception { > byte[] family = Bytes.toBytes("family"); > this.region = initHRegion(tableName, method, CONF, family); > {code} > this.region is not closed at the end of the subtest. > testToShowNPEOnRegionScannerReseek is another example. > Every subtest should use the following construct toward the end: > {code} > } finally { > HBaseTestingUtility.closeRegionAndWAL(this.region); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
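The try/finally construct the description asks every subtest to use can be sketched in isolation. This is an illustrative stand-in, not the patch itself: FakeRegion substitutes for HRegion, and the finally block stands in for the real HBaseTestingUtility.closeRegionAndWAL(this.region) call.

```java
// Minimal sketch of the close-in-finally pattern requested for each TestHRegion
// subtest. FakeRegion is a hypothetical stand-in for HRegion.
public class CloseInFinallySketch {
    static class FakeRegion implements AutoCloseable {
        boolean closed = false;
        @Override
        public void close() {
            closed = true;
        }
    }

    // Runs a "subtest" body and guarantees the region is closed even when the
    // body throws, mirroring HBaseTestingUtility.closeRegionAndWAL(this.region).
    static FakeRegion runSubtest(Runnable body) {
        FakeRegion region = new FakeRegion();
        try {
            body.run();
        } catch (RuntimeException expectedTestFailure) {
            // a failing subtest still reaches the finally block below
        } finally {
            region.close();
        }
        return region;
    }
}
```

Whether the body passes or throws, the region ends up closed, which is exactly the leak the issue is fixing.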
[jira] [Commented] (HBASE-21091) Update Hadoop compatibility table
[ https://issues.apache.org/jira/browse/HBASE-21091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607514#comment-16607514 ] Sean Busbey commented on HBASE-21091: - [~elserj] mind if I take a crack at the font thing this weekend? > Update Hadoop compatibility table > - > > Key: HBASE-21091 > URL: https://issues.apache.org/jira/browse/HBASE-21091 > Project: HBase > Issue Type: Improvement > Components: documentation >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > Attachments: HBASE-20264.001.patch, HBASE-20264.002.patch, > hbase-20264-emoji.png, hbase-20264-html.png > > > [https://lists.apache.org/thread.html/7016d322a07e96dccdb071041c37238e43d3df4f93e9515d52ccfafc@%3Cdev.hbase.apache.org%3E] > covers some discussion around our Hadoop Version Compatibility table. > > A "leading" suggestion to make this more clear is to use a green/yellow/red > (traffic-signal) style marking, instead of using specifics words/phrases (as > they're often dependent on the interpretation of the reader). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21164) reportForDuty should do (expotential) backoff rather than retry every 3 seconds (default).
[ https://issues.apache.org/jira/browse/HBASE-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21164: -- Status: Patch Available (was: Open) > reportForDuty should do (expotential) backoff rather than retry every 3 > seconds (default). > -- > > Key: HBASE-21164 > URL: https://issues.apache.org/jira/browse/HBASE-21164 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: stack >Assignee: Mingliang Liu >Priority: Minor > Attachments: HBASE-21164.branch-2.1.001.patch, > HBASE-21164.branch-2.1.002.patch > > > RegionServers do reportForDuty on startup to tell Master they are available. > If Master is initializing, and especially on a big cluster when it can take a > while particularly if something is amiss, the log every three seconds is > annoying and doesn't do anything of use. Do backoff if fails up to a > reasonable maximum period. Here is example: > {code} > 2018-09-06 14:01:39,312 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: reportForDuty to > master=vc0207.halxg.cloudera.com,22001,1536266763109 with port=22001, > startcode=1536266763109 > 2018-09-06 14:01:39,312 WARN > org.apache.hadoop.hbase.regionserver.HRegionServer: reportForDuty failed; > sleeping and then retrying. > > {code} > For example, I am looking at a large cluster now that had a backlog of > procedure WALs. It is taking a couple of hours recreating the procedure-state > because there are millions of procedures outstanding. Meantime, the Master > log is just full of the above message -- every three seconds... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21164) reportForDuty should do (expotential) backoff rather than retry every 3 seconds (default).
[ https://issues.apache.org/jira/browse/HBASE-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607511#comment-16607511 ] stack commented on HBASE-21164: --- Patch looks great as does test. Nice. I was going to suggest we do this for all heartbeats, not just the first. Can do in a new issue. > reportForDuty should do (expotential) backoff rather than retry every 3 > seconds (default). > -- > > Key: HBASE-21164 > URL: https://issues.apache.org/jira/browse/HBASE-21164 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: stack >Assignee: Mingliang Liu >Priority: Minor > Attachments: HBASE-21164.branch-2.1.001.patch, > HBASE-21164.branch-2.1.002.patch > > > RegionServers do reportForDuty on startup to tell Master they are available. > If Master is initializing, and especially on a big cluster when it can take a > while particularly if something is amiss, the log every three seconds is > annoying and doesn't do anything of use. Do backoff if fails up to a > reasonable maximum period. Here is example: > {code} > 2018-09-06 14:01:39,312 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: reportForDuty to > master=vc0207.halxg.cloudera.com,22001,1536266763109 with port=22001, > startcode=1536266763109 > 2018-09-06 14:01:39,312 WARN > org.apache.hadoop.hbase.regionserver.HRegionServer: reportForDuty failed; > sleeping and then retrying. > > {code} > For example, I am looking at a large cluster now that had a backlog of > procedure WALs. It is taking a couple of hours recreating the procedure-state > because there are millions of procedures outstanding. Meantime, the Master > log is just full of the above message -- every three seconds... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
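The capped exponential backoff discussed above can be sketched as follows. The 3-second base matches the default retry interval mentioned in the description, but the 5-minute cap and the method name are illustrative assumptions, not the constants from the actual patch.

```java
// Hedged sketch of capped exponential backoff for reportForDuty retries.
// BASE_MS reflects the 3s default from the issue; MAX_MS is an assumed cap.
public class ReportForDutyBackoff {
    static final long BASE_MS = 3_000L;   // current fixed retry interval
    static final long MAX_MS = 300_000L;  // assumed "reasonable maximum period"

    // attempt 0 -> 3s, 1 -> 6s, 2 -> 12s, ... capped at MAX_MS.
    static long backoffMillis(int attempt) {
        long delay = BASE_MS << Math.min(attempt, 20); // clamp shift to avoid overflow
        return Math.min(delay, MAX_MS);
    }
}
```

With these values the Master log quiets down after a handful of retries instead of printing the same warning every three seconds for hours.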
[jira] [Updated] (HBASE-20307) LoadTestTool prints too much zookeeper logging
[ https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Garcia updated HBASE-20307: - Attachment: HBASE-20307.001.patch > LoadTestTool prints too much zookeeper logging > -- > > Key: HBASE-20307 > URL: https://issues.apache.org/jira/browse/HBASE-20307 > Project: HBase > Issue Type: Bug > Components: tooling >Reporter: Mike Drob >Assignee: Colin Garcia >Priority: Major > Labels: beginner > Attachments: HBASE-20307.000.patch, HBASE-20307.001.patch > > > When running ltt there is a ton of ZK related cruft that I probably don't > care about. Hide it behind -verbose flag or point people at log4j > configuration but don't print it by default. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
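For users who hit this before the patch lands, the description's other suggestion (point people at log4j configuration) amounts to an override like the following. This is a hypothetical snippet; the logger names are the usual ZooKeeper and HBase client packages and may need adjusting for a given deployment.

```properties
# Hypothetical log4j overrides to quiet ZooKeeper chatter from LoadTestTool runs.
log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.apache.hadoop.hbase.zookeeper=WARN
```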
[jira] [Updated] (HBASE-21165) During ProcedureStore load, there is no listing of progress...
[ https://issues.apache.org/jira/browse/HBASE-21165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21165: -- Description: I have a Master that crashed on a large cluster with hundreds of outstanding Procedure WALs and (probably --TBD) a few million Procedures to load. It is taking a long time (two hours)... There were STUCK procedures that were preventing clean-up of the old WALs. I can tell we are making progress by enabling TRACE on the Procedure Store. Better would be an emission as we made progress through the files with an output after every so many procedures loaded. Then, post-load, there is a long time spent sorting-out the Procedure image... We are in finish for ages doing stuff like: {code} "master/vc0207:22001:becomeActiveMaster" #98 daemon prio=5 os_prio=0 tid=0x00d31800 nid=0x1efc0 runnable [0x7f0a3c17d000] java.lang.Thread.State: RUNNABLE at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader$WalProcedureMap.removeFromMap(ProcedureWALFormatReader.java:837) at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader$WalProcedureMap.fetchReady(ProcedureWALFormatReader.java:614) at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader.finish(ProcedureWALFormatReader.java:201) at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.load(ProcedureWALFormat.java:94) at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.load(WALProcedureStore.java:426) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.load(ProcedureExecutor.java:382) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:663) at org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1335) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:878) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2119) at 
org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:567) at org.apache.hadoop.hbase.master.HMaster$$Lambda$42/1930759883.run(Unknown Source) at java.lang.Thread.run(Thread.java:748) {code} and {code} "master/vc0207:22001:becomeActiveMaster" #98 daemon prio=5 os_prio=0 tid=0x00d31800 nid=0x1efc0 runnable [0x7f0a3c17d000] java.lang.Thread.State: RUNNABLE at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at java.security.Provider$Service.newInstance(Provider.java:1595) at sun.security.jca.GetInstance.getInstance(GetInstance.java:236) at sun.security.jca.GetInstance.getInstance(GetInstance.java:164) at java.security.Security.getImpl(Security.java:695) at java.security.MessageDigest.getInstance(MessageDigest.java:167) at org.apache.hadoop.hbase.util.MD5Hash.getMD5AsHex(MD5Hash.java:59) at org.apache.hadoop.hbase.client.RegionInfo.createRegionName(RegionInfo.java:560) at org.apache.hadoop.hbase.client.RegionInfo.createRegionName(RegionInfo.java:490) at org.apache.hadoop.hbase.client.RegionInfoBuilder$MutableRegionInfo.(RegionInfoBuilder.java:243) at org.apache.hadoop.hbase.client.RegionInfoBuilder.build(RegionInfoBuilder.java:120) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toRegionInfo(ProtobufUtil.java:3132) at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.deserializeStateData(ServerCrashProcedure.java:335) at org.apache.hadoop.hbase.procedure2.ProcedureUtil.convertToProcedure(ProcedureUtil.java:283) at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader$Entry.convert(ProcedureWALFormatReader.java:359) at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader$EntryIterator.next(ProcedureWALFormatReader.java:410) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.loadProcedures(ProcedureExecutor.java:460) at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:76) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.load(ProcedureExecutor.java:391) at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$2.load(WALProcedureStore.java:441) at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader.finish(ProcedureWALFormatReader.java:202) at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.load(ProcedureWALFormat.java:94) at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.load(WALProcedureStore.java:426) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.load(ProcedureExecutor.java:382) at
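The progress emission the issue asks for (an output after every so many procedures loaded, instead of requiring TRACE) can be sketched like this. The interval, message format, and method names are assumptions for illustration, not the committed behavior.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: emit a progress line every N-th loaded procedure so a
// long MasterProcWAL replay does not look hung. REPORT_INTERVAL is assumed.
public class LoadProgressSketch {
    static final int REPORT_INTERVAL = 100_000;

    static List<String> loadProcedures(int total) {
        List<String> progressLines = new ArrayList<>();
        for (int loaded = 1; loaded <= total; loaded++) {
            // ... deserialize one procedure from the WAL here ...
            if (loaded % REPORT_INTERVAL == 0) {
                progressLines.add("Loaded " + loaded + " of " + total + " procedures");
            }
        }
        return progressLines;
    }
}
```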
[jira] [Assigned] (HBASE-18451) PeriodicMemstoreFlusher should inspect the queue before adding a delayed flush request
[ https://issues.apache.org/jira/browse/HBASE-18451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell reassigned HBASE-18451: -- Assignee: Ramie Raufdeen > PeriodicMemstoreFlusher should inspect the queue before adding a delayed > flush request > -- > > Key: HBASE-18451 > URL: https://issues.apache.org/jira/browse/HBASE-18451 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 2.0.0-alpha-1 >Reporter: Jean-Marc Spaggiari >Assignee: Ramie Raufdeen >Priority: Major > Attachments: HBASE-18451.master.patch > > > If you run a big job every 4 hours, impacting many tables (they have 150 > regions per server), at the end all the regions might have some data to be > flushed, and we want, after one hour, to trigger a periodic flush. That's > totally fine. > Now, to avoid a flush storm, when we detect a region to be flushed, we add a > "randomDelay" to the delayed flush; that way we spread them out. > RANGE_OF_DELAY is 5 minutes. So we spread the flushes over the next 5 minutes, > which is very good. > However, because we don't check if there is already a request in the queue, > 10 seconds later, we create a new request with a new randomDelay. > If you generate a randomDelay every 10 seconds, at some point you will end > up having a small one, and the flush will be triggered almost immediately. > As a result, instead of spreading all the flushes within the next 5 minutes, > you end up getting them all much more quickly, like within the first minute. > This not only floods the queue with too many flush requests, but also defeats the > purpose of the randomDelay. 
> {code} > @Override > protected void chore() { > final StringBuffer whyFlush = new StringBuffer(); > for (Region r : this.server.onlineRegions.values()) { > if (r == null) continue; > if (((HRegion)r).shouldFlush(whyFlush)) { > FlushRequester requester = server.getFlushRequester(); > if (requester != null) { > long randomDelay = RandomUtils.nextInt(RANGE_OF_DELAY) + > MIN_DELAY_TIME; > LOG.info(getName() + " requesting flush of " + > r.getRegionInfo().getRegionNameAsString() + " because " + > whyFlush.toString() + > " after random delay " + randomDelay + "ms"); > //Throttle the flushes by putting a delay. If we don't throttle, > and there > //is a balanced write-load on the regions in a table, we might > end up > //overwhelming the filesystem with too many flushes at once. > requester.requestDelayedFlush(r, randomDelay, false); > } > } > } > } > {code} > {code} > 2017-07-24 18:44:33,338 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: > hbasetest2.domainname.com,60020,1500916375517-MemstoreFlusherChore requesting > flush of testflush,,1500932649126.578c27d2eb7ef0ad437bf2ff38c053ae. because f > has an old edit so flush to free WALs after random delay 270785ms > 2017-07-24 18:44:43,328 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: > hbasetest2.domainname.com,60020,1500916375517-MemstoreFlusherChore requesting > flush of testflush,,1500932649126.578c27d2eb7ef0ad437bf2ff38c053ae. because f > has an old edit so flush to free WALs after random delay 200143ms > 2017-07-24 18:44:53,954 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: > hbasetest2.domainname.com,60020,1500916375517-MemstoreFlusherChore requesting > flush of testflush,,1500932649126.578c27d2eb7ef0ad437bf2ff38c053ae. 
because f > has an old edit so flush to free WALs after random delay 191082ms > 2017-07-24 18:45:03,528 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: > hbasetest2.domainname.com,60020,1500916375517-MemstoreFlusherChore requesting > flush of testflush,,1500932649126.578c27d2eb7ef0ad437bf2ff38c053ae. because f > has an old edit so flush to free WALs after random delay 92532ms > 2017-07-24 18:45:14,201 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: > hbasetest2.domainname.com,60020,1500916375517-MemstoreFlusherChore requesting > flush of testflush,,1500932649126.578c27d2eb7ef0ad437bf2ff38c053ae. because f > has an old edit so flush to free WALs after random delay 238780ms > 2017-07-24 18:45:24,195 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: > hbasetest2.domainname.com,60020,1500916375517-MemstoreFlusherChore requesting > flush of testflush,,1500932649126.578c27d2eb7ef0ad437bf2ff38c053ae. because f > has an old edit so flush to free WALs after random delay 35390ms > 2017-07-24 18:45:33,362 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: >
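The fix the issue proposes, inspecting for an existing request before enqueueing a new one, can be sketched with a simple pending-set. All names here are illustrative; the real chore works with Region and FlushRequester, not strings.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch: remember which regions already have a delayed flush queued so
// the 10-second chore cannot re-enqueue them with a fresh (possibly much
// smaller) random delay, defeating the spread.
public class PendingFlushTracker {
    private final Set<String> pendingRegions = new HashSet<>();

    // Returns true only the first time; later calls are no-ops until the flush
    // actually fires and onFlushCompleted() clears the entry.
    public synchronized boolean requestDelayedFlush(String regionName) {
        return pendingRegions.add(regionName);
    }

    public synchronized void onFlushCompleted(String regionName) {
        pendingRegions.remove(regionName);
    }
}
```

With this guard in place, a region keeps its first randomDelay instead of repeatedly drawing a new one until a small value triggers the flush early.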
[jira] [Commented] (HBASE-19121) HBCK for AMv2 (A.K.A HBCK2)
[ https://issues.apache.org/jira/browse/HBASE-19121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607502#comment-16607502 ] stack commented on HBASE-19121: --- h2. Horror Story Big cluster. Lots of regions. A couple of STUCK procedures that prevent clean-up of old WALs. A backlog builds. Master crashes (for some unrelated reason). New Master tries to become active Master. It reads outstanding MasterProcWAL logs to reconstruct assignment. If a large backlog, this can take hours. HBASE-21165 describes an instance with 700 servers and 420k regions. The Master is taking hours to put together assignment again from backed-up logs (~300 and I think a few million procedures). HBASE-21165 is adding emission of loading state because otherwise it looks like we are hung. Need to support removal of all MasterProcWALs and coming up anyway, as per notes above. > HBCK for AMv2 (A.K.A HBCK2) > --- > > Key: HBASE-19121 > URL: https://issues.apache.org/jira/browse/HBASE-19121 > Project: HBase > Issue Type: Bug > Components: hbck >Reporter: stack >Assignee: Umesh Agashe >Priority: Major > Attachments: hbase-19121.master.001.patch > > > We don't have an hbck for the new AM. Old hbck may actually do damage going > against AMv2. > Fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21001) ReplicationObserver fails to load in HBase 2.0.0
[ https://issues.apache.org/jira/browse/HBASE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607498#comment-16607498 ] Hadoop QA commented on HBASE-21001: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.0 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 42s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 16s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 56s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} branch-2.0 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 52s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 10s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}115m 2s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}148m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 | | JIRA Issue | HBASE-21001 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12938846/HBASE-21001.branch-2.0.001.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux f3b9c6c9c1e1 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | branch-2.0 / fb311bb88f | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC3 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/14345/testReport/ | | Max. process+thread count | 4063 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/14345/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically
[jira] [Updated] (HBASE-21138) Close HRegion instance at the end of every test in TestHRegion
[ https://issues.apache.org/jira/browse/HBASE-21138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-21138: --- Fix Version/s: 1.4.8 1.3.3 This applies cleanly to branch-1.4 and to branch-1.3 with minor fuzz, picked it to respective branches. > Close HRegion instance at the end of every test in TestHRegion > -- > > Key: HBASE-21138 > URL: https://issues.apache.org/jira/browse/HBASE-21138 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Mingliang Liu >Priority: Major > Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 1.4.8 > > Attachments: HBASE-21138.000.patch, HBASE-21138.001.patch, > HBASE-21138.002.patch, HBASE-21138.003.patch, HBASE-21138.004.patch, > HBASE-21138.branch-1.004.patch, HBASE-21138.branch-1.004.patch, > HBASE-21138.branch-2.004.patch > > > TestHRegion has over 100 tests. > The following is from one subtest: > {code} > public void testCompactionAffectedByScanners() throws Exception { > byte[] family = Bytes.toBytes("family"); > this.region = initHRegion(tableName, method, CONF, family); > {code} > this.region is not closed at the end of the subtest. > testToShowNPEOnRegionScannerReseek is another example. > Every subtest should use the following construct toward the end: > {code} > } finally { > HBaseTestingUtility.closeRegionAndWAL(this.region); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-21166: -- Attachment: HBASE-21166.branch-1.001.patch > Creating a CoprocessorHConnection re-retrieves the cluster id from ZK > - > > Key: HBASE-21166 > URL: https://issues.apache.org/jira/browse/HBASE-21166 > Project: HBase > Issue Type: Bug >Affects Versions: 1.5.0 >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Major > Fix For: 1.5.0 > > Attachments: HBASE-21166.branch-1.001.patch > > > CoprocessorHConnections are created, for example, during a call of > CoprocessorHost$Environment.getTable(...). The region server already knows the > cluster id, yet we're resolving it over and over again. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607469#comment-16607469 ] Lars Hofhansl commented on HBASE-21166: --- I'll get the test-run to execute eventually. > Creating a CoprocessorHConnection re-retrieves the cluster id from ZK > - > > Key: HBASE-21166 > URL: https://issues.apache.org/jira/browse/HBASE-21166 > Project: HBase > Issue Type: Bug >Affects Versions: 1.5.0 >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Major > Fix For: 1.5.0 > > Attachments: HBASE-21166.branch-1.001.patch > > > CoprocessorHConnections are created, for example, during a call of > CoprocessorHost$Environment.getTable(...). The region server already knows the > cluster id, yet we're resolving it over and over again. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-21166: -- Attachment: (was: 21166-branch-1.patch) > Creating a CoprocessorHConnection re-retrieves the cluster id from ZK > - > > Key: HBASE-21166 > URL: https://issues.apache.org/jira/browse/HBASE-21166 > Project: HBase > Issue Type: Bug >Affects Versions: 1.5.0 >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Major > Fix For: 1.5.0 > > Attachments: HBASE-21166.branch-1.001.patch > > > CoprocessorHConnections are created, for example, during a call of > CoprocessorHost$Environment.getTable(...). The region server already knows the > cluster id, yet we're resolving it over and over again. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
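The optimization being discussed, caching the already-known cluster id instead of re-resolving it from ZooKeeper on every connection creation, can be sketched as follows. fetchFromZooKeeper() is a stand-in for the real ZK read; all names here are illustrative, not the patch's actual API.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: resolve the cluster id once, then serve every later request from the
// cache, so each CoprocessorHConnection avoids a ZooKeeper round trip.
public class ClusterIdCache {
    private final AtomicReference<String> cachedId = new AtomicReference<>();
    private int zkReads = 0; // counts how often we actually hit ZooKeeper

    // Hypothetical stand-in for reading the id from ZK (e.g. /hbase/hbaseid).
    private String fetchFromZooKeeper() {
        zkReads++;
        return "fake-cluster-id";
    }

    public synchronized String getClusterId() {
        if (cachedId.get() == null) {
            cachedId.set(fetchFromZooKeeper());
        }
        return cachedId.get();
    }

    public synchronized int zkReadCount() {
        return zkReads;
    }
}
```

Repeated getClusterId() calls cost one ZK read total, which is the point of the issue: the region server already holds the id, so new connections should reuse it.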
[jira] [Assigned] (HBASE-16458) Shorten backup / restore test execution time
[ https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reassigned HBASE-16458: -- Assignee: Vladimir Rodionov (was: Ted Yu) Assigning to Vlad who has done experiments. > Shorten backup / restore test execution time > > > Key: HBASE-16458 > URL: https://issues.apache.org/jira/browse/HBASE-16458 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Vladimir Rodionov >Priority: Major > Labels: backup > Attachments: 16458.HBASE-7912.v3.txt, 16458.HBASE-7912.v4.txt, > 16458.HBASE-7912.v5.txt, 16458.v1.txt, 16458.v2.txt, 16458.v3.txt > > > Below was timing information for all the backup / restore tests (today's > result): > {code} > Running org.apache.hadoop.hbase.backup.TestIncrementalBackup > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 576.273 sec - > in org.apache.hadoop.hbase.backup.TestIncrementalBackup > Running org.apache.hadoop.hbase.backup.TestBackupBoundaryTests > Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.67 sec - > in org.apache.hadoop.hbase.backup.TestBackupBoundaryTests > Running org.apache.hadoop.hbase.backup.TestBackupStatusProgress > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.34 sec - > in org.apache.hadoop.hbase.backup.TestBackupStatusProgress > Running org.apache.hadoop.hbase.backup.TestBackupAdmin > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 490.251 sec - > in org.apache.hadoop.hbase.backup.TestBackupAdmin > Running org.apache.hadoop.hbase.backup.TestHFileArchiving > Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.323 sec - > in org.apache.hadoop.hbase.backup.TestHFileArchiving > Running org.apache.hadoop.hbase.backup.TestSystemTableSnapshot > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.492 sec - > in org.apache.hadoop.hbase.backup.TestSystemTableSnapshot > Running org.apache.hadoop.hbase.backup.TestBackupDescribe > Tests run: 3, Failures: 
0, Errors: 0, Skipped: 0, Time elapsed: 93.758 sec - > in org.apache.hadoop.hbase.backup.TestBackupDescribe > Running org.apache.hadoop.hbase.backup.TestBackupLogCleaner > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.187 sec - > in org.apache.hadoop.hbase.backup.TestBackupLogCleaner > Running org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 330.539 sec - > in org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss > Running org.apache.hadoop.hbase.backup.TestRemoteBackup > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.371 sec - > in org.apache.hadoop.hbase.backup.TestRemoteBackup > Running org.apache.hadoop.hbase.backup.TestBackupSystemTable > Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.893 sec - > in org.apache.hadoop.hbase.backup.TestBackupSystemTable > Running org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.779 sec - > in org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests > Running org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.815 sec - > in org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet > Running org.apache.hadoop.hbase.backup.TestBackupShowHistory > Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.517 sec - > in org.apache.hadoop.hbase.backup.TestBackupShowHistory > Running org.apache.hadoop.hbase.backup.TestRemoteRestore > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.799 sec - > in org.apache.hadoop.hbase.backup.TestRemoteRestore > Running org.apache.hadoop.hbase.backup.TestFullRestore > Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 317.711 sec > - in org.apache.hadoop.hbase.backup.TestFullRestore > Running org.apache.hadoop.hbase.backup.TestFullBackupSet > Tests run: 2, 
Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.045 sec - > in org.apache.hadoop.hbase.backup.TestFullBackupSet > Running org.apache.hadoop.hbase.backup.TestBackupDelete > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.214 sec - > in org.apache.hadoop.hbase.backup.TestBackupDelete > Running org.apache.hadoop.hbase.backup.TestBackupDeleteRestore > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.631 sec - > in org.apache.hadoop.hbase.backup.TestBackupDeleteRestore > Running org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 190.358 sec - > in org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable > Running >
[jira] [Commented] (HBASE-21149) TestIncrementalBackupWithBulkLoad may fail due to file copy failure
[ https://issues.apache.org/jira/browse/HBASE-21149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607459#comment-16607459 ] Vladimir Rodionov commented on HBASE-21149:
---
We can configure only the backup tests (hbase-backup), so other modules won't be affected. I tried reusing the JVM for the backup tests, but it did not yield any substantial improvement: we now have to truncate 9 tables before starting each new test class, and that turned out to be as expensive as starting a new cluster. Instead of reusing JVMs I tried increasing the fork count, which worked out much better: with 8 parallel forks the execution time decreased from 63 min to 9 min. But that belongs in a different JIRA: https://issues.apache.org/jira/browse/HBASE-16458

> TestIncrementalBackupWithBulkLoad may fail due to file copy failure
> ---
>
> Key: HBASE-21149
> URL: https://issues.apache.org/jira/browse/HBASE-21149
> Project: HBase
> Issue Type: Test
> Components: backuprestore
> Reporter: Ted Yu
> Assignee: Vladimir Rodionov
> Priority: Major
> Attachments: HBASE-21149-v1.patch
>
> From https://builds.apache.org/job/HBase%20Nightly/job/master/471/testReport/junit/org.apache.hadoop.hbase.backup/TestIncrementalBackupWithBulkLoad/TestIncBackupDeleteTable/ :
> {code}
> 2018-09-03 11:54:30,526 ERROR [Time-limited test] impl.TableBackupClient(235): Unexpected Exception : Failed copy from hdfs://localhost:53075/user/jenkins/test-data/ecd40bd0-cb93-91e0-90b5-7bfd5bb2c566/data/default/test-1535975627781/773f5709b645b46bd3840f9cfb549c5a/f/0f626c66493649daaf84057b8dd71a30_SeqId_205_,hdfs://localhost:53075/user/jenkins/test-data/ecd40bd0-cb93-91e0-90b5-7bfd5bb2c566/data/default/test-1535975627781/773f5709b645b46bd3840f9cfb549c5a/f/ad8df6415bd9459d9b3df76c588d79df_SeqId_205_ to hdfs://localhost:53075/backupUT/backup_1535975655488
> java.io.IOException: Failed copy from
> hdfs://localhost:53075/user/jenkins/test-data/ecd40bd0-cb93-91e0-90b5-7bfd5bb2c566/data/default/test-1535975627781/773f5709b645b46bd3840f9cfb549c5a/f/0f626c66493649daaf84057b8dd71a30_SeqId_205_,hdfs://localhost:53075/user/jenkins/test-data/ecd40bd0-cb93-91e0-90b5-7bfd5bb2c566/data/default/test-1535975627781/773f5709b645b46bd3840f9cfb549c5a/f/ad8df6415bd9459d9b3df76c588d79df_SeqId_205_ to hdfs://localhost:53075/backupUT/backup_1535975655488
> at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.incrementalCopyHFiles(IncrementalTableBackupClient.java:351)
> at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.copyBulkLoadedFiles(IncrementalTableBackupClient.java:219)
> at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.handleBulkLoad(IncrementalTableBackupClient.java:198)
> at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.execute(IncrementalTableBackupClient.java:320)
> at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:605)
> at org.apache.hadoop.hbase.backup.TestIncrementalBackupWithBulkLoad.TestIncBackupDeleteTable(TestIncrementalBackupWithBulkLoad.java:104)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> {code}
> However, some part of the test output was lost:
> {code}
> 2018-09-03 11:53:36,793 DEBUG [RS:0;765c9ca5ea28:36357] regions
> ...[truncated 398396 chars]...
> 8)
> at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}
--
This message was sent by Atlassian JIRA (v7.6.3#76005)
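The fork-count approach Vladimir describes is a standard maven-surefire-plugin setting. A minimal sketch of what it could look like for the hbase-backup module (illustrative only; the property names and profile structure in HBase's actual pom.xml may differ):

```xml
<!-- Hypothetical surefire configuration for hbase-backup/pom.xml:
     run test classes in 8 parallel forked JVMs, each with a fresh
     mini-cluster, instead of reusing one JVM and truncating tables. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkCount>8</forkCount>
    <reuseForks>false</reuseForks>
  </configuration>
</plugin>
```

Because the setting lives in the module's own pom, other modules keep their existing fork behavior, matching the "so other modules won't be affected" point above.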
[jira] [Commented] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607456#comment-16607456 ] Hadoop QA commented on HBASE-21166:
---
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HBASE-21166 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for help. {color} |
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-21166 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12938870/21166-branch-1.patch |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/14348/console |
| Powered by | Apache Yetus 0.7.0 http://yetus.apache.org |
This message was automatically generated.

> Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
> ---
>
> Key: HBASE-21166
> URL: https://issues.apache.org/jira/browse/HBASE-21166
> Project: HBase
> Issue Type: Bug
> Affects Versions: 1.5.0
> Reporter: Lars Hofhansl
> Assignee: Lars Hofhansl
> Priority: Major
> Fix For: 1.5.0
>
> Attachments: 21166-branch-1.patch
>
> CoprocessorHConnections are created, for example, during a call of CoprocessorHost$Environment.getTable(...). The region server already knows the cluster id, yet we resolve it over and over again.
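The fix direction the issue describes, passing along a cluster id that is already known instead of re-resolving it from ZooKeeper on every connection, amounts to simple memoization. A hedged sketch (hypothetical class and method names, not HBase's actual CoprocessorHConnection code):

```java
import java.util.function.Supplier;

// Hypothetical sketch: cache the cluster id so only the first connection
// pays the ZooKeeper round trip; later callers reuse the cached value.
public class ClusterIdCache {
    private volatile String clusterId; // resolved at most once

    /** Returns the cluster id, consulting ZK (via zkLookup) only on first use. */
    public String getClusterId(Supplier<String> zkLookup) {
        String id = clusterId;
        if (id == null) {
            synchronized (this) {
                if (clusterId == null) {
                    clusterId = zkLookup.get(); // the single ZK round trip
                }
                id = clusterId;
            }
        }
        return id;
    }

    public static void main(String[] args) {
        ClusterIdCache cache = new ClusterIdCache();
        int[] zkCalls = {0};
        Supplier<String> zk = () -> { zkCalls[0]++; return "cluster-uuid"; };
        cache.getClusterId(zk);
        cache.getClusterId(zk); // served from cache, no second lookup
        System.out.println("ZK lookups: " + zkCalls[0]); // prints 1
    }
}
```

In the region-server case the id would not even need a first lookup: the server already holds it and could seed the cache directly.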
[jira] [Commented] (HBASE-21168) BloomFilterUtil uses hardcoded randomness
[ https://issues.apache.org/jira/browse/HBASE-21168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607454#comment-16607454 ] Mingliang Liu commented on HBASE-21168:
---
+1 (non-binding)
Nits: {{setFakeLookupMode}} can be renamed to {{setRandomGeneratorForTest}}, as the "mode" is no longer accurate. Or add javadoc to indicate the null behavior. Better to have clarifying parentheses, as in {{int hashLoc = (randomGeneratorForTest == null)}}.

> BloomFilterUtil uses hardcoded randomness
> ---
>
> Key: HBASE-21168
> URL: https://issues.apache.org/jira/browse/HBASE-21168
> Project: HBase
> Issue Type: Task
> Affects Versions: 2.0.0
> Reporter: Mike Drob
> Assignee: Mike Drob
> Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-21168.master.001.patch
>
> This was flagged by a Fortify scan, and while it doesn't appear to be a real issue, it's pretty easy to take care of anyway.
> The hard-coded rand can be moved to the test class that actually needs it, which makes the static analysis happy.
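The two review nits above, renaming the setter and parenthesizing the null check, can be illustrated with a small sketch. This is a paraphrase of the idea, not the actual BloomFilterUtil source; the field, method, and hashing details are simplified assumptions:

```java
import java.util.Random;

// Hedged sketch of the pattern under review: production code uses real
// hashing; tests may inject a Random to fake lookup locations.
public class BloomFilterUtilSketch {
    // Null in production; tests inject a seeded Random for determinism.
    private static Random randomGeneratorForTest = null;

    /** Renamed from setFakeLookupMode; passing null restores real hashing. */
    public static void setRandomGeneratorForTest(Random generator) {
        randomGeneratorForTest = generator;
    }

    public static int hashLocation(int hash, int bloomBitSize) {
        // Parentheses around the condition make the ternary's scope obvious.
        int hashLoc = (randomGeneratorForTest == null)
                ? Math.abs(hash % bloomBitSize)
                : randomGeneratorForTest.nextInt(bloomBitSize);
        return hashLoc;
    }

    public static void main(String[] args) {
        System.out.println(hashLocation(1234567, 1024)); // real hashing path
        setRandomGeneratorForTest(new Random(42L));
        System.out.println(hashLocation(1234567, 1024)); // injected-Random path
    }
}
```

Moving the injected Random behind a test-only setter like this is exactly what lets the hard-coded randomness leave the production class, which is the point of the issue.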
[jira] [Updated] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
[ https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-21166:
---
Attachment: 21166-branch-1.patch

> Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
> ---
>
> Key: HBASE-21166
> URL: https://issues.apache.org/jira/browse/HBASE-21166
> Project: HBase
> Issue Type: Bug
> Affects Versions: 1.5.0
> Reporter: Lars Hofhansl
> Assignee: Lars Hofhansl
> Priority: Major
> Fix For: 1.5.0
>
> Attachments: 21166-branch-1.patch
>
> CoprocessorHConnections are created, for example, during a call of CoprocessorHost$Environment.getTable(...). The region server already knows the cluster id, yet we resolve it over and over again.