[jira] [Updated] (HBASE-21010) HBase Quickstart Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jack Bearden updated HBASE-21010:
---------------------------------
    Description: 
Quickstart dev environment for HBase. For those that are familiar with the Hadoop start-build-env.sh, this is a port of that code but for HBase.

Usage is simple: just run the script and it will build and run a docker container with your maven cache and hbase dependencies already set up. From there, you can execute your maven goals as usual.

  was:
Quickstart dev environment for HBase. For those that are familiar with the Hadoop start-build-env.sh, this is a port of that code but for HBase.

Usage is simple: just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual.


> HBase Quickstart Development Environment
> ----------------------------------------
>
>                 Key: HBASE-21010
>                 URL: https://issues.apache.org/jira/browse/HBASE-21010
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21010.001.patch, HBASE-21010.master.002.patch
>
> Quickstart dev environment for HBase. For those that are familiar with the Hadoop start-build-env.sh, this is a port of that code but for HBase.
> Usage is simple: just run the script and it will build and run a docker container with your maven cache and hbase dependencies already set up. From there, you can execute your maven goals as usual.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
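The workflow the description outlines (build an image, then run a container with the Maven cache and the HBase checkout wired in) can be sketched as a shell script. This is a hedged approximation of what a start-build-env.sh-style helper does, not the contents of the attached patch; the image tag, Dockerfile path, and mount points are all assumptions. The sketch only composes and echoes the docker commands, so it can be inspected without Docker installed:

```shell
#!/usr/bin/env bash
# Sketch of a start-build-env.sh-style helper (hypothetical names throughout).
# A real script would execute the composed commands; this one echoes them.
set -euo pipefail

IMAGE="hbase-build-env"        # assumed image tag
SRC_DIR="${PWD}"               # your hbase checkout
M2_DIR="${HOME}/.m2"           # local maven cache, reused across container runs

# Build the dev image from an assumed dev-support/docker context, then run a
# container that mounts the maven cache and the source tree so `mvn` goals
# inside the container operate on your local checkout.
build_cmd="docker build -t ${IMAGE} dev-support/docker"
run_cmd="docker run -it --rm -v ${M2_DIR}:/root/.m2 -v ${SRC_DIR}:/hbase -w /hbase ${IMAGE} bash"

echo "${build_cmd}"
echo "${run_cmd}"
```

Mounting `~/.m2` is what makes rebuilds fast: the container reuses the host's dependency cache instead of re-downloading artifacts on every run.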
[jira] [Updated] (HBASE-21010) HBase Quickstart Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jack Bearden updated HBASE-21010:
---------------------------------
    Description: 
Quickstart dev environment for HBase. For those that are familiar with the Hadoop start-build-env.sh, this is a port of that code but for HBase.

Usage is simple: just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual.

  was:
Hi all,

I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resemble Jenkins.

Usage is simple: just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual.

As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests with maybe less docker overhead.


> HBase Quickstart Development Environment
> ----------------------------------------
>
>                 Key: HBASE-21010
>                 URL: https://issues.apache.org/jira/browse/HBASE-21010
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21010.001.patch, HBASE-21010.master.002.patch
>
> Quickstart dev environment for HBase. For those that are familiar with the Hadoop start-build-env.sh, this is a port of that code but for HBase.
> Usage is simple: just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HBASE-21010) HBase Quickstart Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jack Bearden updated HBASE-21010:
---------------------------------
    Summary: HBase Quickstart Development Environment  (was: HBase Docker Development Environment)

> HBase Quickstart Development Environment
> ----------------------------------------
>
>                 Key: HBASE-21010
>                 URL: https://issues.apache.org/jira/browse/HBASE-21010
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21010.001.patch, HBASE-21010.master.002.patch
>
> Hi all,
> I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resemble Jenkins.
> Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual.
> As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests with maybe less docker overhead.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592479#comment-16592479 ]

Jack Bearden commented on HBASE-21010:
--------------------------------------

Also could be useful for new users or people trying out the product

> HBase Docker Development Environment
> ------------------------------------
>
>                 Key: HBASE-21010
>                 URL: https://issues.apache.org/jira/browse/HBASE-21010
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21010.001.patch, HBASE-21010.master.002.patch
>
> Hi all,
> I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resemble Jenkins.
> Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual.
> As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests with maybe less docker overhead.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592478#comment-16592478 ]

Jack Bearden commented on HBASE-21010:
--------------------------------------

Any takers for a review? :)

> HBase Docker Development Environment
> ------------------------------------
>
>                 Key: HBASE-21010
>                 URL: https://issues.apache.org/jira/browse/HBASE-21010
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21010.001.patch, HBASE-21010.master.002.patch
>
> Hi all,
> I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resemble Jenkins.
> Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual.
> As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests with maybe less docker overhead.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jack Bearden updated HBASE-21010:
---------------------------------
    Issue Type: New Feature  (was: Improvement)

> HBase Docker Development Environment
> ------------------------------------
>
>                 Key: HBASE-21010
>                 URL: https://issues.apache.org/jira/browse/HBASE-21010
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21010.001.patch, HBASE-21010.master.002.patch
>
> Hi all,
> I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resemble Jenkins.
> Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual.
> As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests with maybe less docker overhead.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (HBASE-21118) Add Yetus Smart Apply Patch
[ https://issues.apache.org/jira/browse/HBASE-21118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592477#comment-16592477 ]

Jack Bearden commented on HBASE-21118:
--------------------------------------

[~busbey] what are your thoughts on this?

> Add Yetus Smart Apply Patch
> ---------------------------
>
>                 Key: HBASE-21118
>                 URL: https://issues.apache.org/jira/browse/HBASE-21118
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21118.master.001.patch
>
> Hadoop trunk has a smart-apply-patch routine contributed by [~aw] and the Yetus team. It is convenient for quickly pulling down the latest patch tagged with "Patch Available" and applying it locally.
> I think it would be a great addition to HBase and would like to port it over.
> Usage:
> {code:java}
> ./dev-support/yetus-smart-apply-patch.sh --plugins=jira HBASE-123{code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
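A wrapper like the one proposed above essentially delegates to Yetus' smart-apply-patch binary with the jira plugin enabled, so it can resolve the newest "Patch Available" attachment for an issue key. The sketch below is a guess at the shape of such a wrapper, not the contents of the attached patch; the Yetus version and cache directory are assumptions, and the script only composes and echoes the delegated command:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a yetus-smart-apply-patch.sh-style wrapper.
# A real wrapper would also download/unpack a Yetus release if missing,
# then exec the command; this sketch only builds and prints it.
set -euo pipefail

ISSUE="${1:-HBASE-123}"                              # issue key, e.g. HBASE-123
YETUS_VERSION="0.7.0"                                # assumed pinned version
YETUS_HOME="${HOME}/.cache/yetus-${YETUS_VERSION}"   # assumed cache location

# The jira plugin makes smart-apply-patch look up the issue's latest
# "Patch Available" attachment and apply it to the current checkout.
cmd="${YETUS_HOME}/bin/smart-apply-patch --plugins=jira ${ISSUE}"
echo "${cmd}"
```

Invoked as `./yetus-smart-apply-patch.sh HBASE-21118`, this would print the Yetus command that fetches and applies that issue's newest patch.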
[jira] [Commented] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592476#comment-16592476 ]

Hadoop QA commented on HBASE-21010:
-----------------------------------

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 5m 35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 6m 10s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21010 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937124/HBASE-21010.master.002.patch |
| Optional Tests | asflicense shellcheck shelldocs |
| uname | Linux d2d8cba96f7d 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 86b35b2687 |
| maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| shellcheck | v0.4.4 |
| Max. process+thread count | 48 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/14203/console |
| Powered by | Apache Yetus 0.7.0 http://yetus.apache.org |

This message was automatically generated.

> HBase Docker Development Environment
> ------------------------------------
>
>                 Key: HBASE-21010
>                 URL: https://issues.apache.org/jira/browse/HBASE-21010
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21010.001.patch, HBASE-21010.master.002.patch
>
> Hi all,
> I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resemble Jenkins.
> Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual.
> As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests with maybe less docker overhead.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (HBASE-21118) Add Yetus Smart Apply Patch
[ https://issues.apache.org/jira/browse/HBASE-21118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592474#comment-16592474 ]

Hadoop QA commented on HBASE-21118:
-----------------------------------

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 49s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21118 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937123/HBASE-21118.master.001.patch |
| Optional Tests | asflicense shellcheck shelldocs |
| uname | Linux 10d45915d347 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 86b35b2687 |
| maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| shellcheck | v0.4.4 |
| Max. process+thread count | 43 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/14202/console |
| Powered by | Apache Yetus 0.7.0 http://yetus.apache.org |

This message was automatically generated.

> Add Yetus Smart Apply Patch
> ---------------------------
>
>                 Key: HBASE-21118
>                 URL: https://issues.apache.org/jira/browse/HBASE-21118
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21118.master.001.patch
>
> Hadoop trunk has a smart-apply-patch routine contributed by [~aw] and the Yetus team. It is convenient for quickly pulling down the latest patch tagged with "Patch Available" and applying it locally.
> I think it would be a great addition to HBase and would like to port it over.
> Usage:
> {code:java}
> ./dev-support/yetus-smart-apply-patch.sh --plugins=jira HBASE-123{code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592472#comment-16592472 ]

Hadoop QA commented on HBASE-21010:
-----------------------------------

(!) A patch to the testing environment has been detected.
Re-executing against the patched versions to perform further tests.
The console is at https://builds.apache.org/job/PreCommit-HBASE-Build/14203/console in case of problems.

> HBase Docker Development Environment
> ------------------------------------
>
>                 Key: HBASE-21010
>                 URL: https://issues.apache.org/jira/browse/HBASE-21010
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21010.001.patch, HBASE-21010.master.002.patch
>
> Hi all,
> I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resemble Jenkins.
> Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual.
> As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests with maybe less docker overhead.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jack Bearden updated HBASE-21010:
---------------------------------
    Attachment: HBASE-21010.master.002.patch

> HBase Docker Development Environment
> ------------------------------------
>
>                 Key: HBASE-21010
>                 URL: https://issues.apache.org/jira/browse/HBASE-21010
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21010.001.patch, HBASE-21010.master.002.patch
>
> Hi all,
> I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resemble Jenkins.
> Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual.
> As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests with maybe less docker overhead.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HBASE-21118) Add Yetus Smart Apply Patch
[ https://issues.apache.org/jira/browse/HBASE-21118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jack Bearden updated HBASE-21118:
---------------------------------
    Attachment: HBASE-21118.master.001.patch

> Add Yetus Smart Apply Patch
> ---------------------------
>
>                 Key: HBASE-21118
>                 URL: https://issues.apache.org/jira/browse/HBASE-21118
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21118.master.001.patch
>
> Hadoop trunk has a smart-apply-patch routine contributed by [~aw] and the Yetus team. It is convenient for quickly pulling down the latest patch tagged with "Patch Available" and applying it locally.
> I think it would be a great addition to HBase and would like to port it over.
> Usage:
> {code:java}
> ./dev-support/yetus-smart-apply-patch.sh --plugins=jira HBASE-123{code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HBASE-21118) Add Yetus Smart Apply Patch
[ https://issues.apache.org/jira/browse/HBASE-21118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jack Bearden updated HBASE-21118:
---------------------------------
    Attachment: (was: HBASE-21118.master.001.patch)

> Add Yetus Smart Apply Patch
> ---------------------------
>
>                 Key: HBASE-21118
>                 URL: https://issues.apache.org/jira/browse/HBASE-21118
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21118.master.001.patch
>
> Hadoop trunk has a smart-apply-patch routine contributed by [~aw] and the Yetus team. It is convenient for quickly pulling down the latest patch tagged with "Patch Available" and applying it locally.
> I think it would be a great addition to HBase and would like to port it over.
> Usage:
> {code:java}
> ./dev-support/yetus-smart-apply-patch.sh --plugins=jira HBASE-123{code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HBASE-21118) Add Yetus Smart Apply Patch
[ https://issues.apache.org/jira/browse/HBASE-21118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jack Bearden updated HBASE-21118:
---------------------------------
    Priority: Major  (was: Minor)

> Add Yetus Smart Apply Patch
> ---------------------------
>
>                 Key: HBASE-21118
>                 URL: https://issues.apache.org/jira/browse/HBASE-21118
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Major
>         Attachments: HBASE-21118.master.001.patch
>
> Hadoop trunk has a smart-apply-patch routine contributed by [~aw] and the Yetus team. It is convenient for quickly pulling down the latest patch tagged with "Patch Available" and applying it locally.
> I think it would be a great addition to HBase and would like to port it over.
> Usage:
> {code:java}
> ./dev-support/yetus-smart-apply-patch.sh --plugins=jira HBASE-123{code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HBASE-21118) Add Yetus Smart Apply Patch
[ https://issues.apache.org/jira/browse/HBASE-21118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jack Bearden updated HBASE-21118:
---------------------------------
    Status: Patch Available  (was: Open)

> Add Yetus Smart Apply Patch
> ---------------------------
>
>                 Key: HBASE-21118
>                 URL: https://issues.apache.org/jira/browse/HBASE-21118
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Minor
>         Attachments: HBASE-21118.master.001.patch
>
> Hadoop trunk has a smart-apply-patch routine contributed by [~aw] and the Yetus team. It is convenient for quickly pulling down the latest patch tagged with "Patch Available" and applying it locally.
> I think it would be a great addition to HBase and would like to port it over.
> Usage:
> {code:java}
> ./dev-support/yetus-smart-apply-patch.sh --plugins=jira HBASE-123{code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HBASE-21118) Add Yetus Smart Apply Patch
[ https://issues.apache.org/jira/browse/HBASE-21118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jack Bearden updated HBASE-21118:
---------------------------------
    Attachment: HBASE-21118.master.001.patch

> Add Yetus Smart Apply Patch
> ---------------------------
>
>                 Key: HBASE-21118
>                 URL: https://issues.apache.org/jira/browse/HBASE-21118
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Jack Bearden
>            Assignee: Jack Bearden
>            Priority: Minor
>         Attachments: HBASE-21118.master.001.patch
>
> Hadoop trunk has a smart-apply-patch routine contributed by [~aw] and the Yetus team. It is convenient for quickly pulling down the latest patch tagged with "Patch Available" and applying it locally.
> I think it would be a great addition to HBase and would like to port it over.
> Usage:
> {code:java}
> ./dev-support/yetus-smart-apply-patch.sh --plugins=jira HBASE-123{code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592467#comment-16592467 ]

Jack Bearden commented on HBASE-20993:
--------------------------------------

-009
* Corrected the test I broke while fixing the style issues.
* Added suppression for VisibilityModifier for tests only

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --------------------------------------------------------------
>
>                 Key: HBASE-20993
>                 URL: https://issues.apache.org/jira/browse/HBASE-20993
>             Project: HBase
>          Issue Type: Bug
>          Components: Client, security
>    Affects Versions: 1.2.6
>            Reporter: Reid Chan
>            Assignee: Jack Bearden
>            Priority: Critical
>             Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.8
>
>         Attachments: HBASE-20993.001.patch, HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.008.patch, HBASE-20993.branch-1.009.patch, HBASE-20993.branch-1.2.001.patch, HBASE-20993.branch-1.wip.002.patch, HBASE-20993.branch-1.wip.patch
>
> It is easily reproducible.
> client's hbase-site.xml: hadoop.security.authentication:kerberos, hbase.security.authentication:kerberos, hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal are right set
> A simple auth hbase cluster, a kerberized hbase client application.
> application trying to r/w/c/d table will have following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> 	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
> 	at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
> 	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
> 	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
> 	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
> 	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
> 	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
> 	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
> 	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
> 	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
> 	at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
> 	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
> 	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
> 	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
> 	at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753)
> 	at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674)
> 	at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607)
> 	at
[jira] [Updated] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jack Bearden updated HBASE-20993:
---------------------------------
    Attachment: HBASE-20993.branch-1.009.patch

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --------------------------------------------------------------
>
>                 Key: HBASE-20993
>                 URL: https://issues.apache.org/jira/browse/HBASE-20993
>             Project: HBase
>          Issue Type: Bug
>          Components: Client, security
>    Affects Versions: 1.2.6
>            Reporter: Reid Chan
>            Assignee: Jack Bearden
>            Priority: Critical
>             Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.8
>
>         Attachments: HBASE-20993.001.patch, HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.008.patch, HBASE-20993.branch-1.009.patch, HBASE-20993.branch-1.2.001.patch, HBASE-20993.branch-1.wip.002.patch, HBASE-20993.branch-1.wip.patch
>
> It is easily reproducible.
> client's hbase-site.xml: hadoop.security.authentication:kerberos, hbase.security.authentication:kerberos, hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal are right set
> A simple auth hbase cluster, a kerberized hbase client application.
> application trying to r/w/c/d table will have following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> 	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
> 	at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
> 	at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
> 	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
> 	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
> 	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
> 	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
> 	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
> 	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
> 	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
> 	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
> 	at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
> 	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
> 	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
> 	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
> 	at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753)
> 	at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674)
> 	at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607)
> 	at org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55)
> Caused by: GSSException: No valid credentials provided (Mechanism level: Failed
[jira] [Commented] (HBASE-20940) HStore.cansplit should not allow split to happen if it has references
[ https://issues.apache.org/jira/browse/HBASE-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592466#comment-16592466 ] Hadoop QA commented on HBASE-20940: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-1 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 56s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 17s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 35s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 9s{color} | {color:green} hbase-server: The patch generated 0 new + 69 unchanged - 4 fixed = 69 total (was 73) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 21s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 1m 30s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}137m 7s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}155m 24s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles | | | hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint | | | hadoop.hbase.mapreduce.TestLoadIncrementalHFiles | | | hadoop.hbase.regionserver.TestZKLessSplitOnCluster | | | hadoop.hbase.security.visibility.TestVisibilityLabelsWithACL | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:61288f8 | | JIRA Issue
[jira] [Commented] (HBASE-21078) [amv2] CODE-BUG NPE in RTP doing Unassign
[ https://issues.apache.org/jira/browse/HBASE-21078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592465#comment-16592465 ] Hudson commented on HBASE-21078: Results for branch branch-2 [build #1160 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1160/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1160//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1160//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1160//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > [amv2] CODE-BUG NPE in RTP doing Unassign > - > > Key: HBASE-21078 > URL: https://issues.apache.org/jira/browse/HBASE-21078 > Project: HBase > Issue Type: Bug > Components: amv2 >Affects Versions: 2.0.1 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.1.1, 2.0.2 > > Attachments: HBASE-21078.branch-2.0.001.patch, > HBASE-21078.branch-2.0.002.patch, HBASE-21078.branch-2.0.003.patch, > HBASE-21078.branch-2.0.004.patch, HBASE-21078.branch-2.0.004.patch, > HBASE-21078.branch-2.0.004.patch > > > Saw this in a run against the tip of branch-2.0. The region had just finished > being split when the move went to run. 
> {code} > 2018-08-18 16:55:14,908 INFO [PEWorker-2] procedure2.ProcedureExecutor: > Finished pid=2028, state=SUCCESS, hasLock=false; SplitTableRegionProcedure > table=IntegrationTestBigLinkedList, parent=c3f199b5af62ae2ff8f8b6426b21d95d, > daughterA=31ccbf098ae615ce30f28ec84c956b8f, > daughterB=1890b4c96736f223f31efef11c817c90 in 9.0090sec > 2018-08-18 16:55:14,908 INFO [PEWorker-16] > procedure.MasterProcedureScheduler: pid=2038, ppid=2030, > state=RUNNABLE:MOVE_REGION_UNASSIGN, hasLock=false; MoveRegionProcedure > hri=c3f199b5af62ae2ff8f8b6426b21d95d, > source=ve0540.halxg.cloudera.com,16020,1534632630737, > destination=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:14,958 INFO [PEWorker-16] procedure2.ProcedureExecutor: > Initialized subprocedures=[{pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737}] > 2018-08-18 16:55:15,008 INFO [PEWorker-3] > procedure.MasterProcedureScheduler: pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:15,085 ERROR [PEWorker-3] procedure2.ProcedureExecutor: > CODE-BUG: Uncaught runtime exception: pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=true; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 > java.lang.NullPointerException > at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936) > at > org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:1097) > 
at > org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:1125) > at > org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1477) > at > org.apache.hadoop.hbase.master.assignment.UnassignProcedure.updateTransition(UnassignProcedure.java:204) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:345) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:97) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:873) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1556) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1344) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76) > at >
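The bottom frames of the trace above pinpoint the failure mode: the NPE comes out of `ConcurrentHashMap.get`, which (unlike `HashMap`) rejects null keys outright. So a null key (here, presumably a null server name reaching `RegionStates.getOrCreateServer` after the split retired the region) is enough to produce exactly this exception. A minimal self-contained demonstration of that map behavior:

```java
import java.util.concurrent.ConcurrentHashMap;

// Minimal reproduction of the failure mode in the stack trace above:
// ConcurrentHashMap does not permit null keys, so get(null) throws
// NullPointerException rather than returning null as HashMap would.
public class NullKeyDemo {
    public static boolean throwsNpeOnNullKey() {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        try {
            map.get(null); // throws NullPointerException per the class contract
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("NPE on null key: " + throwsNpeOnNullKey());
    }
}
```

This suggests the fix is a null guard (or an earlier check that the region is still in a movable state) before the map lookup, rather than anything wrong with the map itself.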
[jira] [Commented] (HBASE-21113) Apply the branch-2 version of HBASE-21095, The timeout retry logic for several procedures are broken after master restarts
[ https://issues.apache.org/jira/browse/HBASE-21113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592464#comment-16592464 ] Hudson commented on HBASE-21113: Results for branch branch-2 [build #1160 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1160/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1160//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1160//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1160//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Apply the branch-2 version of HBASE-21095, The timeout retry logic for > several procedures are broken after master restarts > -- > > Key: HBASE-21113 > URL: https://issues.apache.org/jira/browse/HBASE-21113 > Project: HBase > Issue Type: Bug > Components: amv2 >Reporter: stack >Assignee: Allan Yang >Priority: Major > Fix For: 2.1.1, 2.0.2 > > > This issue is for applying branch-2 version of the HBASE-21095 patch. The > patch applied here is the HBASE-21095.branch-2.0.001.patch patch from > HBASE-21095 written by [~allan163]. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21095) The timeout retry logic for several procedures are broken after master restarts
[ https://issues.apache.org/jira/browse/HBASE-21095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592463#comment-16592463 ] Hudson commented on HBASE-21095: Results for branch branch-2 [build #1160 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1160/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1160//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1160//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1160//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > The timeout retry logic for several procedures are broken after master > restarts > --- > > Key: HBASE-21095 > URL: https://issues.apache.org/jira/browse/HBASE-21095 > Project: HBase > Issue Type: Sub-task > Components: amv2, proc-v2 >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Critical > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21095-branch-2.0.patch, HBASE-21095-v1.patch, > HBASE-21095-v2.patch, HBASE-21095.branch-2.0.001.patch, HBASE-21095.patch > > > For TRSP, and also RTP in branch-2.0 and branch-2.1, if we fail to assign or > unassign a region, we will set the procedure to WAITING_TIMEOUT state, and > rely on the ProcedureEvent in RegionStateNode to wake us up later. But after > restarting, we do not suspend the ProcedureEvent in RSN, and also do not add > the procedure to the ProcedureEvent's suspending queue, so we will hang there > forever as no one will wake us up. 
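The suspend/wake pattern the description relies on can be sketched with a toy model (this is not HBase's real ProcedureEvent API, just an illustration of the mechanism and of why the restart path hangs: if the event is rebuilt in the ready state and the waiting procedure is never re-queued, a later wake has nothing to hand back to the scheduler):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy sketch (NOT the actual ProcedureEvent class) of the mechanism described
// above: a procedure parks itself on the event's suspend queue and is returned
// to the scheduler only when wake() fires. The reported bug corresponds to
// recreating the event after a master restart without re-suspending it or
// re-queuing the waiting procedure, so nothing ever reschedules it.
public class ToyProcedureEvent {
    private boolean ready;
    private final Queue<Long> suspendQueue = new ArrayDeque<>();

    public ToyProcedureEvent(boolean ready) {
        this.ready = ready;
    }

    // Returns true if the procedure was parked; the caller must then stop
    // executing it and wait to be woken.
    public boolean suspendIfNotReady(long procId) {
        if (ready) {
            return false;
        }
        suspendQueue.add(procId);
        return true;
    }

    // Marks the event ready and drains every parked procedure for rescheduling.
    public List<Long> wake() {
        ready = true;
        List<Long> toSchedule = new ArrayList<>(suspendQueue);
        suspendQueue.clear();
        return toSchedule;
    }
}
```

In the healthy path a procedure that fails to assign suspends on the not-ready event and is drained by a later wake; in the buggy restart path the event starts out ready with an empty queue, so the WAITING_TIMEOUT procedure is never rescheduled.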
[jira] [Commented] (HBASE-21095) The timeout retry logic for several procedures are broken after master restarts
[ https://issues.apache.org/jira/browse/HBASE-21095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592457#comment-16592457 ] Hudson commented on HBASE-21095: Results for branch branch-2.1 [build #238 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/238/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/238//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/238//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/238//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > The timeout retry logic for several procedures are broken after master > restarts > --- > > Key: HBASE-21095 > URL: https://issues.apache.org/jira/browse/HBASE-21095 > Project: HBase > Issue Type: Sub-task > Components: amv2, proc-v2 >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Critical > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21095-branch-2.0.patch, HBASE-21095-v1.patch, > HBASE-21095-v2.patch, HBASE-21095.branch-2.0.001.patch, HBASE-21095.patch > > > For TRSP, and also RTP in branch-2.0 and branch-2.1, if we fail to assign or > unassign a region, we will set the procedure to WAITING_TIMEOUT state, and > rely on the ProcedureEvent in RegionStateNode to wake us up later. But after > restarting, we do not suspend the ProcedureEvent in RSN, and also do not add > the procedure to the ProcedureEvent's suspending queue, so we will hang there > forever as no one will wake us up. 
[jira] [Commented] (HBASE-21078) [amv2] CODE-BUG NPE in RTP doing Unassign
[ https://issues.apache.org/jira/browse/HBASE-21078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592458#comment-16592458 ] Hudson commented on HBASE-21078: Results for branch branch-2.1 [build #238 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/238/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/238//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/238//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/238//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > [amv2] CODE-BUG NPE in RTP doing Unassign > - > > Key: HBASE-21078 > URL: https://issues.apache.org/jira/browse/HBASE-21078 > Project: HBase > Issue Type: Bug > Components: amv2 >Affects Versions: 2.0.1 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.1.1, 2.0.2 > > Attachments: HBASE-21078.branch-2.0.001.patch, > HBASE-21078.branch-2.0.002.patch, HBASE-21078.branch-2.0.003.patch, > HBASE-21078.branch-2.0.004.patch, HBASE-21078.branch-2.0.004.patch, > HBASE-21078.branch-2.0.004.patch > > > Saw this in a run against the tip of branch-2.0. The region had just finished > being split when the move went to run. 
> {code} > 2018-08-18 16:55:14,908 INFO [PEWorker-2] procedure2.ProcedureExecutor: > Finished pid=2028, state=SUCCESS, hasLock=false; SplitTableRegionProcedure > table=IntegrationTestBigLinkedList, parent=c3f199b5af62ae2ff8f8b6426b21d95d, > daughterA=31ccbf098ae615ce30f28ec84c956b8f, > daughterB=1890b4c96736f223f31efef11c817c90 in 9.0090sec > 2018-08-18 16:55:14,908 INFO [PEWorker-16] > procedure.MasterProcedureScheduler: pid=2038, ppid=2030, > state=RUNNABLE:MOVE_REGION_UNASSIGN, hasLock=false; MoveRegionProcedure > hri=c3f199b5af62ae2ff8f8b6426b21d95d, > source=ve0540.halxg.cloudera.com,16020,1534632630737, > destination=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:14,958 INFO [PEWorker-16] procedure2.ProcedureExecutor: > Initialized subprocedures=[{pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737}] > 2018-08-18 16:55:15,008 INFO [PEWorker-3] > procedure.MasterProcedureScheduler: pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:15,085 ERROR [PEWorker-3] procedure2.ProcedureExecutor: > CODE-BUG: Uncaught runtime exception: pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=true; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 > java.lang.NullPointerException > at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936) > at > org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:1097) > 
at > org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:1125) > at > org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1477) > at > org.apache.hadoop.hbase.master.assignment.UnassignProcedure.updateTransition(UnassignProcedure.java:204) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:345) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:97) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:873) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1556) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1344) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76) > at >
[jira] [Commented] (HBASE-21095) The timeout retry logic for several procedures are broken after master restarts
[ https://issues.apache.org/jira/browse/HBASE-21095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592450#comment-16592450 ] Hudson commented on HBASE-21095: Results for branch branch-2.0 [build #727 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/727/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/727//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/727//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/727//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > The timeout retry logic for several procedures are broken after master > restarts > --- > > Key: HBASE-21095 > URL: https://issues.apache.org/jira/browse/HBASE-21095 > Project: HBase > Issue Type: Sub-task > Components: amv2, proc-v2 >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Critical > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21095-branch-2.0.patch, HBASE-21095-v1.patch, > HBASE-21095-v2.patch, HBASE-21095.branch-2.0.001.patch, HBASE-21095.patch > > > For TRSP, and also RTP in branch-2.0 and branch-2.1, if we fail to assign or > unassign a region, we will set the procedure to WAITING_TIMEOUT state, and > rely on the ProcedureEvent in RegionStateNode to wake us up later. But after > restarting, we do not suspend the ProcedureEvent in RSN, and also do not add > the procedure to the ProcedureEvent's suspending queue, so we will hang there > forever as no one will wake us up. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21078) [amv2] CODE-BUG NPE in RTP doing Unassign
[ https://issues.apache.org/jira/browse/HBASE-21078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592452#comment-16592452 ] Hudson commented on HBASE-21078: Results for branch branch-2.0 [build #727 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/727/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/727//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/727//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/727//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > [amv2] CODE-BUG NPE in RTP doing Unassign > - > > Key: HBASE-21078 > URL: https://issues.apache.org/jira/browse/HBASE-21078 > Project: HBase > Issue Type: Bug > Components: amv2 >Affects Versions: 2.0.1 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.1.1, 2.0.2 > > Attachments: HBASE-21078.branch-2.0.001.patch, > HBASE-21078.branch-2.0.002.patch, HBASE-21078.branch-2.0.003.patch, > HBASE-21078.branch-2.0.004.patch, HBASE-21078.branch-2.0.004.patch, > HBASE-21078.branch-2.0.004.patch > > > Saw this in a run against the tip of branch-2.0. The region had just finished > being split when the move went to run. 
> {code} > 2018-08-18 16:55:14,908 INFO [PEWorker-2] procedure2.ProcedureExecutor: > Finished pid=2028, state=SUCCESS, hasLock=false; SplitTableRegionProcedure > table=IntegrationTestBigLinkedList, parent=c3f199b5af62ae2ff8f8b6426b21d95d, > daughterA=31ccbf098ae615ce30f28ec84c956b8f, > daughterB=1890b4c96736f223f31efef11c817c90 in 9.0090sec > 2018-08-18 16:55:14,908 INFO [PEWorker-16] > procedure.MasterProcedureScheduler: pid=2038, ppid=2030, > state=RUNNABLE:MOVE_REGION_UNASSIGN, hasLock=false; MoveRegionProcedure > hri=c3f199b5af62ae2ff8f8b6426b21d95d, > source=ve0540.halxg.cloudera.com,16020,1534632630737, > destination=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:14,958 INFO [PEWorker-16] procedure2.ProcedureExecutor: > Initialized subprocedures=[{pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737}] > 2018-08-18 16:55:15,008 INFO [PEWorker-3] > procedure.MasterProcedureScheduler: pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:15,085 ERROR [PEWorker-3] procedure2.ProcedureExecutor: > CODE-BUG: Uncaught runtime exception: pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=true; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 > java.lang.NullPointerException > at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936) > at > org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:1097) > 
at > org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:1125) > at > org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1477) > at > org.apache.hadoop.hbase.master.assignment.UnassignProcedure.updateTransition(UnassignProcedure.java:204) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:345) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:97) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:873) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1556) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1344) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76) > at >
[jira] [Commented] (HBASE-21113) Apply the branch-2 version of HBASE-21095, The timeout retry logic for several procedures are broken after master restarts
[ https://issues.apache.org/jira/browse/HBASE-21113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592451#comment-16592451 ] Hudson commented on HBASE-21113: Results for branch branch-2.0 [build #727 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/727/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/727//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/727//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/727//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Apply the branch-2 version of HBASE-21095, The timeout retry logic for > several procedures are broken after master restarts > -- > > Key: HBASE-21113 > URL: https://issues.apache.org/jira/browse/HBASE-21113 > Project: HBase > Issue Type: Bug > Components: amv2 >Reporter: stack >Assignee: Allan Yang >Priority: Major > Fix For: 2.1.1, 2.0.2 > > > This issue is for applying branch-2 version of the HBASE-21095 patch. The > patch applied here is the HBASE-21095.branch-2.0.001.patch patch from > HBASE-21095 written by [~allan163]. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20942) Improve RpcServer TRACE logging
[ https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krish Dey updated HBASE-20942: -- Status: Open (was: Patch Available) > Improve RpcServer TRACE logging > --- > > Key: HBASE-20942 > URL: https://issues.apache.org/jira/browse/HBASE-20942 > Project: HBase > Issue Type: Task >Reporter: Esteban Gutierrez >Assignee: Krish Dey >Priority: Major > Attachments: HBASE-20942.002.patch, HBASE-20942.003.patch, > HBASE-20942.004.patch > > > Two things: > * We truncate RpcServer output to 1000 characters for TRACE logging. It would > be better if that value were configurable. > * There is a chance of an ArrayIndexOutOfBoundsException when truncating the TRACE > log message. > Esteban mentioned this to me earlier, so I'm crediting him as the reporter. > cc: [~elserj] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
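A bounds-safe truncation helper of the kind the second bullet calls for could look like this (a sketch only, not the actual RpcServer code; the method name is made up here, and the 1000-character limit comes from the description above):

```java
// Sketch of bounds-safe truncation for TRACE logging: clamping to the string
// length means substring() is never asked for an index past the end, which
// removes the out-of-bounds risk described in the issue.
public class TraceTruncator {
    public static String truncateForTrace(String msg, int maxLen) {
        if (msg == null || msg.length() <= maxLen) {
            return msg; // nothing to cut: short messages pass through unchanged
        }
        return msg.substring(0, maxLen) + "...";
    }
}
```

Making `maxLen` come from a configuration key instead of a hard-coded 1000 would address the first bullet as well.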
[jira] [Updated] (HBASE-20942) Improve RpcServer TRACE logging
[ https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krish Dey updated HBASE-20942: -- Attachment: HBASE-20942.004.patch Status: Patch Available (was: Open) > Improve RpcServer TRACE logging > --- > > Key: HBASE-20942 > URL: https://issues.apache.org/jira/browse/HBASE-20942 > Project: HBase > Issue Type: Task >Reporter: Esteban Gutierrez >Assignee: Krish Dey >Priority: Major > Attachments: HBASE-20942.002.patch, HBASE-20942.003.patch, > HBASE-20942.004.patch > > > Two things: > * We truncate RpcServer output to 1000 characters for TRACE logging. It would > be better if that value were configurable. > * There is a chance of an ArrayIndexOutOfBoundsException when truncating the TRACE > log message. > Esteban mentioned this to me earlier, so I'm crediting him as the reporter. > cc: [~elserj] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20942) Improve RpcServer TRACE logging
[ https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krish Dey updated HBASE-20942: -- Attachment: (was: HBASE-20942.004.patch) > Improve RpcServer TRACE logging > --- > > Key: HBASE-20942 > URL: https://issues.apache.org/jira/browse/HBASE-20942 > Project: HBase > Issue Type: Task >Reporter: Esteban Gutierrez >Assignee: Krish Dey >Priority: Major > Attachments: HBASE-20942.002.patch, HBASE-20942.003.patch, > HBASE-20942.004.patch > > > Two things: > * We truncate RpcServer output to 1000 characters for trace logging. Would > be better if that value was configurable. > * There is the chance for an ArrayIndexOutOfBounds when truncating the TRACE > log message. > Esteban mentioned this to me earlier, so I'm crediting him as the reporter. > cc: [~elserj] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
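[Editor's sketch] The two points in HBASE-20942 above (a configurable truncation limit, and truncation that cannot go out of bounds) can be illustrated outside Java; the variable name TRACE_LOG_MAX_LENGTH is hypothetical, and the real change lives in RpcServer's Java code:

```shell
# Configurable truncation limit with a safe default (assumed name; in HBase
# this would be a hbase-site.xml property read by RpcServer).
TRACE_LOG_MAX_LENGTH="${TRACE_LOG_MAX_LENGTH:-1000}"

truncate_for_trace() {
  # printf's %.*s precision stops at the limit OR at end-of-string, so a
  # message shorter than the limit can never cause an out-of-range slice,
  # which is the analogue of the ArrayIndexOutOfBounds risk described above.
  printf '%.*s' "$TRACE_LOG_MAX_LENGTH" "$1"
}

truncate_for_trace "short message"   # shorter than the limit: passes through unchanged
```

The equivalent Java-side guard is simply clamping the slice length to Math.min(limit, message.length()) before substringing.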
[jira] [Commented] (HBASE-20942) Improve RpcServer TRACE logging
[ https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592444#comment-16592444 ] Hadoop QA commented on HBASE-20942: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 22s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 12s{color} | {color:red} hbase-server: The patch generated 3 new + 10 unchanged - 0 fixed = 13 total (was 10) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 17s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 3s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}132m 41s{color} | {color:red} hbase-server in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}171m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.util.TestHBaseFsckReplication | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-20942 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937100/HBASE-20942.004.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux e8dc13b23910 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 86b35b2687 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC3 | | checkstyle | https://builds.apache.org/job/PreCommit-HBASE-Build/14197/artifact/patchprocess/diff-checkstyle-hbase-server.txt | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/14197/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results |
[jira] [Commented] (HBASE-20941) Create and implement HbckService in master
[ https://issues.apache.org/jira/browse/HBASE-20941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592428#comment-16592428 ] Hadoop QA commented on HBASE-20941: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 31s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 40s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 30s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 46s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 38s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 7s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 43s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}188m 12s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.util.TestHBaseFsckReplication | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-20941 | | JIRA Patch URL |
[jira] [Updated] (HBASE-21118) Add Yetus Smart Apply Patch
[ https://issues.apache.org/jira/browse/HBASE-21118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jack Bearden updated HBASE-21118: - Description: Hadoop trunk has a smart-apply-patch routine contributed by [~aw] and the Yetus team. It is convenient for quickly pulling down the latest patch tagged with "Patch Available" and applying it locally. I think it would be a great addition to HBase and would like to port it over. Usage: {code:java} ./dev-support/yetus-smart-apply-patch.sh --plugins=jira HBASE-123{code} was: Hadoop trunk has a smart-apply-patch routine contributed by [~aw] and the Yetus team. It is convenient for quickly pulling down the latest patch tagged with "Patch Available" and applying it locally. I think it would be a great addition to HBase and would like to port it over. Usage: {code:java} ./dev-support/yetus-smarter-apply-patch.sh --plugins=jira HBASE-123{code} > Add Yetus Smart Apply Patch > --- > > Key: HBASE-21118 > URL: https://issues.apache.org/jira/browse/HBASE-21118 > Project: HBase > Issue Type: New Feature >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Minor > > Hadoop trunk has a smart-apply-patch routine contributed by [~aw] and the > Yetus team. It is convenient for quickly pulling down the latest patch tagged > with "Patch Available" and applying it locally. > I think it would be a great addition to HBase and would like to port it over. > Usage: > {code:java} > ./dev-support/yetus-smart-apply-patch.sh --plugins=jira HBASE-123{code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21118) Add Yetus Smart Apply Patch
Jack Bearden created HBASE-21118: Summary: Add Yetus Smart Apply Patch Key: HBASE-21118 URL: https://issues.apache.org/jira/browse/HBASE-21118 Project: HBase Issue Type: New Feature Reporter: Jack Bearden Assignee: Jack Bearden Hadoop trunk has a smart-apply-patch routine contributed by [~aw] and the Yetus team. It is convenient for quickly pulling down the latest patch tagged with "Patch Available" and applying it locally. I think it would be a great addition to HBase and would like to port it over. Usage: {code:java} ./dev-support/yetus-smarter-apply-patch.sh --plugins=jira HBASE-123{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
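[Editor's sketch] The workflow such a script automates can be outlined as follows; this is not the actual Yetus implementation (which also fetches the attachment list from the JIRA REST API and handles several patch-name conventions), and the attachment names and latest_patch helper are illustrative:

```shell
# Pick the attachment with the highest numeric revision from a list of patch
# names following the common JIRA convention HBASE-NNN.001.patch,
# HBASE-NNN.002.patch, ... (assumed naming; the real tool is more flexible).
latest_patch() {
  # field 2 (split on '.') is the zero-padded revision; sort it numerically
  printf '%s\n' "$@" | sort -t. -k2,2n | tail -n 1
}

# The real script would first download the attachment list for the issue,
# then verify and apply the winner, roughly:
#   git apply -p0 --check "$(latest_patch "${attachments[@]}")"
latest_patch HBASE-123.001.patch HBASE-123.003.patch HBASE-123.002.patch
```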
[jira] [Commented] (HBASE-21117) Backport HBASE-18350 (fix RSGroups) to branch-1 :
[ https://issues.apache.org/jira/browse/HBASE-21117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592423#comment-16592423 ] Xu Cang commented on HBASE-21117: - Thanks [~stack], noted. I will focus on fixing the rsgroups issues and try to keep the procedure part untouched. > Backport HBASE-18350 (fix RSGroups) to branch-1 : > --- > > Key: HBASE-21117 > URL: https://issues.apache.org/jira/browse/HBASE-21117 > Project: HBase > Issue Type: Bug > Components: backport, rsgroup, shell >Affects Versions: 1.3.2 >Reporter: Xu Cang >Assignee: Xu Cang >Priority: Major > Labels: backport > > When working on HBASE-20666, I found that HBASE-18350 did not get ported to > branch-1, which sometimes causes the procedure to hang when #moveTables is > called. After looking into the 18350 patch, it seems important since it > fixes 4 issues. This Jira is an attempt to backport it to branch-1. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21117) Backport HBASE-18350 (fix RSGroups) to branch-1 :
[ https://issues.apache.org/jira/browse/HBASE-21117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xu Cang updated HBASE-21117: Summary: Backport HBASE-18350 (fix RSGroups) to branch-1 : (was: Backport HBASE-18350 (RSGroups are broken underAMv2) to branch-1 :) > Backport HBASE-18350 (fix RSGroups) to branch-1 : > --- > > Key: HBASE-21117 > URL: https://issues.apache.org/jira/browse/HBASE-21117 > Project: HBase > Issue Type: Bug > Components: backport, rsgroup, shell >Affects Versions: 1.3.2 >Reporter: Xu Cang >Assignee: Xu Cang >Priority: Major > Labels: backport > > When working on HBASE-20666, I found that HBASE-18350 did not get ported to > branch-1, which sometimes causes the procedure to hang when #moveTables is > called. After looking into the 18350 patch, it seems important since it > fixes 4 issues. This Jira is an attempt to backport it to branch-1. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592409#comment-16592409 ] Hadoop QA commented on HBASE-20993: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. 
{color} | || || || || {color:brown} branch-1 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 5s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 3s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 12s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} the patch passed with JDK 
v1.7.0_191 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 52s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 1m 42s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 27s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m 45s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 54m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.security.TestInsecureIPC | \\ \\ || Subsystem || Report/Notes
[jira] [Commented] (HBASE-21105) TestHBaseFsck failing in branch-1, branch-1.4, branch-1.3 with NPE
[ https://issues.apache.org/jira/browse/HBASE-21105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592391#comment-16592391 ] Vishal Khandelwal commented on HBASE-21105: --- Thanks [~apurtell] and [~busbey]. I have fixed the dependency of TestEndToEndSplitTransaction on TestHBaseFsck and will combine this with HBASE-20940. > TestHBaseFsck failing in branch-1, branch-1.4, branch-1.3 with NPE > -- > > Key: HBASE-21105 > URL: https://issues.apache.org/jira/browse/HBASE-21105 > Project: HBase > Issue Type: Bug > Components: hbck, test >Affects Versions: 1.5.0, 1.3.3, 1.4.7 >Reporter: Sean Busbey >Assignee: Vishal Khandelwal >Priority: Major > Attachments: HBASE-21105.branch-1.v1.patch > > > TestHBaseFsck in the mentioned branches has two tests that rely on > TestEndToEndSplitTransaction for blocking in the same way TestTableResource > used to before HBASE-21076. > Both tests appear to specifically be testing that something happens after a > split, so we'll need a solution that removes the cross-test dependency but > still allows for "wait until this split has finished" > example failure from branch-1 > {code} > java.lang.NullPointerException > at > org.apache.hadoop.hbase.util.TestHBaseFsck.testSplitDaughtersNotInMeta(TestHBaseFsck.java:1985) > {code} > {code} > java.lang.NullPointerException > at > org.apache.hadoop.hbase.util.TestHBaseFsck.testValidLingeringSplitParent(TestHBaseFsck.java:1934) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
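[Editor's sketch] The "wait until this split has finished" requirement above amounts to a polling loop; the real fix would live in the Java test utilities, and the check_split_done probe named in the comment below is hypothetical:

```shell
# Generic poll-until-success helper: rerun a probe command until it succeeds
# or the timeout elapses. Usage: wait_until <timeout_seconds> <command...>
wait_until() {
  timeout=$1; shift
  start=$(date +%s)
  until "$@"; do
    if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
      return 1  # timed out before the condition held
    fi
    sleep 1
  done
}

# Hypothetical use for the tests above: poll until the parent region has been
# cleaned up after the split, instead of relying on another test running first:
#   wait_until 60 check_split_done "$parent_region_encoded_name"
```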
[jira] [Updated] (HBASE-20940) HStore.cansplit should not allow split to happen if it has references
[ https://issues.apache.org/jira/browse/HBASE-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vishal Khandelwal updated HBASE-20940: -- Attachment: (was: HBASE-20940.branch-1.v1.patch) > HStore.cansplit should not allow split to happen if it has references > - > > Key: HBASE-20940 > URL: https://issues.apache.org/jira/browse/HBASE-20940 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.2 >Reporter: Vishal Khandelwal >Assignee: Vishal Khandelwal >Priority: Major > Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 2.1.1, 2.0.2, 1.4.7 > > Attachments: HBASE-20940.branch-1.3.v1.patch, > HBASE-20940.branch-1.3.v2.patch, HBASE-20940.branch-1.v1.patch, > HBASE-20940.branch-1.v2.patch, HBASE-20940.branch-1.v3.patch, > HBASE-20940.v1.patch, HBASE-20940.v2.patch, HBASE-20940.v3.patch, > HBASE-20940.v4.patch, result_HBASE-20940.branch-1.v2.log > > > When a split happens and another split immediately follows, it may result in > a split of a region that still has references to its parent. More details > about the scenario can be found in HBASE-20933. > HStore.hasReferences should check the storefiles on the filesystem rather > than in-memory objects. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20940) HStore.cansplit should not allow split to happen if it has references
[ https://issues.apache.org/jira/browse/HBASE-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592387#comment-16592387 ] Vishal Khandelwal commented on HBASE-20940: --- I think you want me to combine both patches into one. I shall do that on Monday morning. > HStore.cansplit should not allow split to happen if it has references > - > > Key: HBASE-20940 > URL: https://issues.apache.org/jira/browse/HBASE-20940 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.2 >Reporter: Vishal Khandelwal >Assignee: Vishal Khandelwal >Priority: Major > Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 2.1.1, 2.0.2, 1.4.7 > > Attachments: HBASE-20940.branch-1.3.v1.patch, > HBASE-20940.branch-1.3.v2.patch, HBASE-20940.branch-1.v1.patch, > HBASE-20940.branch-1.v1.patch, HBASE-20940.branch-1.v2.patch, > HBASE-20940.branch-1.v3.patch, HBASE-20940.v1.patch, HBASE-20940.v2.patch, > HBASE-20940.v3.patch, HBASE-20940.v4.patch, result_HBASE-20940.branch-1.v2.log > > > When a split happens and another split immediately follows, it may result in > a split of a region that still has references to its parent. More details > about the scenario can be found in HBASE-20933. > HStore.hasReferences should check the storefiles on the filesystem rather > than in-memory objects. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3
[ https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592386#comment-16592386 ] Hadoop QA commented on HBASE-21098: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 34s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 20s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 6s{color} | {color:green} hbase-server: The patch generated 0 new + 283 unchanged - 1 fixed = 283 total (was 284) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} The patch hbase-mapreduce passed checkstyle {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 6s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 37s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}221m 24s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 12s{color} | {color:green} hbase-mapreduce in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}289m 37s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.util.TestHBaseFsckReplication | | | hadoop.hbase.coprocessor.TestMetaTableMetrics | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21098 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937067/HBASE-21098.master.004.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux c034510f1aa3 4.4.0-133-generic #159-Ubuntu SMP Fri
[jira] [Commented] (HBASE-20940) HStore.cansplit should not allow split to happen if it has references
[ https://issues.apache.org/jira/browse/HBASE-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592385#comment-16592385 ] Vishal Khandelwal commented on HBASE-20940: --- [~apurtell]: I have applied the same fix as in HBASE-21105 and it should fix the test blocker. The fix is only applicable to branch-1. Let me know if anything else needs to be taken care of. > HStore.cansplit should not allow split to happen if it has references > - > > Key: HBASE-20940 > URL: https://issues.apache.org/jira/browse/HBASE-20940 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.2 >Reporter: Vishal Khandelwal >Assignee: Vishal Khandelwal >Priority: Major > Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 2.1.1, 2.0.2, 1.4.7 > > Attachments: HBASE-20940.branch-1.3.v1.patch, > HBASE-20940.branch-1.3.v2.patch, HBASE-20940.branch-1.v1.patch, > HBASE-20940.branch-1.v1.patch, HBASE-20940.branch-1.v2.patch, > HBASE-20940.branch-1.v3.patch, HBASE-20940.v1.patch, HBASE-20940.v2.patch, > HBASE-20940.v3.patch, HBASE-20940.v4.patch, result_HBASE-20940.branch-1.v2.log > > > When a split happens and another split immediately follows, it may result in > a split of a region that still has references to its parent. More details > about the scenario can be found in HBASE-20933. > HStore.hasReferences should check the storefiles on the filesystem rather > than in-memory objects. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20940) HStore.cansplit should not allow split to happen if it has references
[ https://issues.apache.org/jira/browse/HBASE-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vishal Khandelwal updated HBASE-20940: -- Attachment: HBASE-20940.branch-1.v1.patch Status: Patch Available (was: Reopened) > HStore.cansplit should not allow split to happen if it has references > - > > Key: HBASE-20940 > URL: https://issues.apache.org/jira/browse/HBASE-20940 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.2 >Reporter: Vishal Khandelwal >Assignee: Vishal Khandelwal >Priority: Major > Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 2.1.1, 2.0.2, 1.4.7 > > Attachments: HBASE-20940.branch-1.3.v1.patch, > HBASE-20940.branch-1.3.v2.patch, HBASE-20940.branch-1.v1.patch, > HBASE-20940.branch-1.v1.patch, HBASE-20940.branch-1.v2.patch, > HBASE-20940.branch-1.v3.patch, HBASE-20940.v1.patch, HBASE-20940.v2.patch, > HBASE-20940.v3.patch, HBASE-20940.v4.patch, result_HBASE-20940.branch-1.v2.log > > > When a split happens and another split immediately follows, it may result in > a split of a region that still has references to its parent. More details > about the scenario can be found in HBASE-20933. > HStore.hasReferences should check the storefiles on the filesystem rather > than in-memory objects. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
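[Editor's sketch] The core idea of the fix above, deciding "has references" from what is actually on the filesystem rather than from cached in-memory StoreFile objects, can be sketched as follows; the "hfile.parent-encoded-region" reference-file naming is an assumption about HBase's split layout, and the file names are made up:

```shell
# Flag a store whose directory listing still contains reference files left by
# a split. By the naming assumption above, a reference file's name contains a
# '.' separating the parent hfile name from the parent region's encoded name,
# while plain hfiles have no dot.
has_references() {
  for f in "$@"; do
    case "$f" in
      *.*) return 0 ;;  # looks like a reference to a parent region's hfile
    esac
  done
  return 1
}

# In a live cluster the listing would come from HDFS, e.g.:
#   hdfs dfs -ls /hbase/data/default/<table>/<region-encoded-name>/<cf>/
if has_references d41d8cd98f00b204 a3f5b0c1.1588230740; then
  echo "store still has references to its parent; refuse the split"
fi
```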
[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592382#comment-16592382 ] Jack Bearden commented on HBASE-20993: -- -008: * Fixes remaining checkstyle warnings > [Auth] IPC client fallback to simple auth allowed doesn't work > -- > > Key: HBASE-20993 > URL: https://issues.apache.org/jira/browse/HBASE-20993 > Project: HBase > Issue Type: Bug > Components: Client, security >Affects Versions: 1.2.6 >Reporter: Reid Chan >Assignee: Jack Bearden >Priority: Critical > Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.8 > > Attachments: HBASE-20993.001.patch, > HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, > HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, > HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, > HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.008.patch, > HBASE-20993.branch-1.2.001.patch, HBASE-20993.branch-1.wip.002.patch, > HBASE-20993.branch-1.wip.patch > > > It is easily reproducible. > client's hbase-site.xml: hadoop.security.authentication:kerberos, > hbase.security.authentication:kerberos, > hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal > are right set > A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception: > {code} > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592) > at > 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738) > at > org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607) > at > org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55) > Caused by: GSSException: No valid credentials provided (Mechanism level: >
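The misconfiguration scenario above (kerberized client, simple-auth cluster, fallback flag set) boils down to one client-side decision. A hedged sketch with hypothetical names; the real logic lives in RpcClientImpl/AbstractRpcClient:

```java
// Hypothetical sketch of the decision at issue in HBASE-20993. The method
// and return values are illustrative only, not HBase API.
public class FallbackDecision {
    // serverUsesSasl: whether the server expects a SASL handshake.
    // fallbackAllowed: hbase.ipc.client.fallback-to-simple-auth-allowed.
    public static String chooseAuth(boolean serverUsesSasl,
                                    boolean fallbackAllowed) {
        if (serverUsesSasl) {
            return "KERBEROS"; // negotiate SASL/GSSAPI as usual
        }
        // Bug scenario: a kerberized client against a simple-auth cluster.
        // With the flag honored, the client should downgrade to simple auth
        // instead of attempting a doomed GSS handshake (the SaslException
        // stack trace above).
        return fallbackAllowed ? "SIMPLE" : "FAIL";
    }
}
```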
[jira] [Updated] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jack Bearden updated HBASE-20993: - Attachment: HBASE-20993.branch-1.008.patch > [Auth] IPC client fallback to simple auth allowed doesn't work > -- > > Key: HBASE-20993 > URL: https://issues.apache.org/jira/browse/HBASE-20993 > Project: HBase > Issue Type: Bug > Components: Client, security >Affects Versions: 1.2.6 >Reporter: Reid Chan >Assignee: Jack Bearden >Priority: Critical > Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.8 > > Attachments: HBASE-20993.001.patch, > HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, > HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, > HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, > HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.008.patch, > HBASE-20993.branch-1.2.001.patch, HBASE-20993.branch-1.wip.002.patch, > HBASE-20993.branch-1.wip.patch > > > It is easily reproducible. > client's hbase-site.xml: hadoop.security.authentication:kerberos, > hbase.security.authentication:kerberos, > hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal > are right set > A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception: > {code} > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592) > at > 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738) > at > org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607) > at > org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55) > Caused by: GSSException: No valid credentials provided (Mechanism level: > Failed to find any Kerberos tgt) >
[jira] [Updated] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3
[ https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-21098: --- Release Note: It is recommended to place the working directory on-cluster on HDFS as doing so has shown a strong performance increase due to data locality. It is important to note that the working directory should not overlap with any existing directories as the working directory will be cleaned out during the snapshot process. Beyond that, any well-named directory on HDFS should be sufficient. (was: I recommend storing the working directory on-cluster on HDFS as doing so has shown a strong performance increase due to data locality. It is important to note that the working directory should not overlap with any existing directories as the working directory will be cleaned out during the snapshot process. Beyond that, any well-named directory on HDFS should be sufficient.) > Improve Snapshot Performance with Temporary Snapshot Directory when rootDir > on S3 > - > > Key: HBASE-21098 > URL: https://issues.apache.org/jira/browse/HBASE-21098 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0, 2.1.1 >Reporter: Tyler Mi >Priority: Major > Attachments: HBASE-21098.master.001.patch, > HBASE-21098.master.002.patch, HBASE-21098.master.003.patch, > HBASE-21098.master.004.patch > > > When using Apache HBase, the snapshot feature can be used to make a point in > time recovery. To do this, HBase creates a manifest of all the files in all > of the Regions so that those files can be referenced again when a user > restores a snapshot. With HBase's S3 storage mode, developers can store their > data off-cluster on Amazon S3. However, utilizing S3 as a file system is > inefficient in some operations, namely renames. Most Hadoop ecosystem > applications use an atomic rename as a method of committing data. 
However, > with S3, a rename is a separate copy and then a delete of every file which is > no longer atomic and, in fact, quite costly. In addition, puts and deletes on > S3 have latency issues that traditional filesystems do not encounter when > manipulating the region snapshots to consolidate into a single manifest. When > HBase on S3 users have a significant amount of regions, puts, deletes, and > renames (the final commit stage of the snapshot) become the bottleneck > causing snapshots to take many minutes or even hours to complete. > The purpose of this patch is to increase the overall performance of snapshots > while utilizing HBase on S3 through the use of a temporary directory for the > snapshots that exists on a traditional filesystem like HDFS to circumvent the > bottlenecks. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
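For users wanting to try the approach described above, a hedged hbase-site.xml fragment follows. The property name tracks this patch's discussion and the HDFS URL is a placeholder; verify both against your HBase version before relying on them.

```xml
<!-- Assumption: property name as discussed in HBASE-21098; the namenode
     address below is a placeholder. Keep the working directory on HDFS
     while hbase.rootdir stays on S3, and do not reuse an existing
     directory, since it is cleaned out during snapshots. -->
<property>
  <name>hbase.snapshot.working.dir</name>
  <value>hdfs://namenode:8020/hbase-snapshot-working</value>
</property>
```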
[jira] [Updated] (HBASE-20942) Improve RpcServer TRACE logging
[ https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krish Dey updated HBASE-20942: -- Status: Open (was: Patch Available) > Improve RpcServer TRACE logging > --- > > Key: HBASE-20942 > URL: https://issues.apache.org/jira/browse/HBASE-20942 > Project: HBase > Issue Type: Task >Reporter: Esteban Gutierrez >Assignee: Krish Dey >Priority: Major > Attachments: HBASE-20942.002.patch, HBASE-20942.003.patch, > HBASE-20942.004.patch > > > Two things: > * We truncate RpcServer output to 1000 characters for trace logging. Would > be better if that value was configurable. > * There is the chance for an ArrayIndexOutOfBounds when truncating the TRACE > log message. > Esteban mentioned this to me earlier, so I'm crediting him as the reporter. > cc: [~elserj] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
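Both fixes asked for above can be sketched minimally: the truncation length becomes a parameter instead of a hard-coded 1000, and an early return guarantees substring() is never asked to cut past the end of the message. The config key name is hypothetical, not an existing HBase key.

```java
// Illustrative sketch for HBASE-20942: configurable, bounds-safe truncation
// of RpcServer TRACE log output.
public class TraceTruncate {
    public static final String CONF_KEY =
        "hbase.ipc.trace.log.max.length"; // hypothetical key name
    public static final int DEFAULT_MAX = 1000;

    public static String truncate(String msg, int maxLength) {
        if (msg == null || msg.length() <= maxLength) {
            return msg; // nothing to cut; also rules out any out-of-bounds cut
        }
        return msg.substring(0, maxLength) + "...";
    }
}
```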
[jira] [Updated] (HBASE-20942) Improve RpcServer TRACE logging
[ https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krish Dey updated HBASE-20942: -- Attachment: HBASE-20942.004.patch Status: Patch Available (was: Open) > Improve RpcServer TRACE logging > --- > > Key: HBASE-20942 > URL: https://issues.apache.org/jira/browse/HBASE-20942 > Project: HBase > Issue Type: Task >Reporter: Esteban Gutierrez >Assignee: Krish Dey >Priority: Major > Attachments: HBASE-20942.002.patch, HBASE-20942.003.patch, > HBASE-20942.004.patch > > > Two things: > * We truncate RpcServer output to 1000 characters for trace logging. Would > be better if that value was configurable. > * There is the chance for an ArrayIndexOutOfBounds when truncating the TRACE > log message. > Esteban mentioned this to me earlier, so I'm crediting him as the reporter. > cc: [~elserj] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21117) Backport HBASE-18350 (RSGroups are broken under AMv2) to branch-1
[ https://issues.apache.org/jira/browse/HBASE-21117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592346#comment-16592346 ] stack commented on HBASE-21117: --- Be careful [~xucang], AMv2 is for branch-2, not branch-1. That said, there are some Procedures in branch-1. > Backport HBASE-18350 (RSGroups are broken under AMv2) to branch-1 > > > Key: HBASE-21117 > URL: https://issues.apache.org/jira/browse/HBASE-21117 > Project: HBase > Issue Type: Bug > Components: backport, rsgroup, shell >Affects Versions: 1.3.2 >Reporter: Xu Cang >Assignee: Xu Cang >Priority: Major > Labels: backport > > When working on HBASE-20666, I found out HBASE-18350 did not get ported to > branch-1, which sometimes causes a procedure to hang when #moveTables is called. > After looking into the HBASE-18350 patch, it seems important since it fixes 4 > issues. This Jira is an attempt to backport it to branch-1. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20941) Create and implement HbckService in master
[ https://issues.apache.org/jira/browse/HBASE-20941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20941: -- Attachment: hbase-20941.master.004.patch > Create and implement HbckService in master > -- > > Key: HBASE-20941 > URL: https://issues.apache.org/jira/browse/HBASE-20941 > Project: HBase > Issue Type: Sub-task >Reporter: Umesh Agashe >Assignee: Umesh Agashe >Priority: Major > Attachments: hbase-20941.master.001.patch, > hbase-20941.master.002.patch, hbase-20941.master.003.patch, > hbase-20941.master.004.patch, hbase-20941.master.004.patch, > hbase-20941.master.004.patch > > > Create HbckService in master and implement the following methods: > # setTableState(): If table states are inconsistent with the actions/procedures > working on them, manipulating their states in meta sometimes fixes things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20941) Create and implement HbckService in master
[ https://issues.apache.org/jira/browse/HBASE-20941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592344#comment-16592344 ] stack commented on HBASE-20941: --- Retry. Failure seems unrelated. > Create and implement HbckService in master > -- > > Key: HBASE-20941 > URL: https://issues.apache.org/jira/browse/HBASE-20941 > Project: HBase > Issue Type: Sub-task >Reporter: Umesh Agashe >Assignee: Umesh Agashe >Priority: Major > Attachments: hbase-20941.master.001.patch, > hbase-20941.master.002.patch, hbase-20941.master.003.patch, > hbase-20941.master.004.patch, hbase-20941.master.004.patch, > hbase-20941.master.004.patch > > > Create HbckService in master and implement following methods: > # setTableState(): If table state are inconsistent with action/ procedures > working on them, sometimes manipulating their states in meta fix things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21117) Backport HBASE-18350 (RSGroups are broken under AMv2) to branch-1
Xu Cang created HBASE-21117: --- Summary: Backport HBASE-18350 (RSGroups are broken under AMv2) to branch-1 Key: HBASE-21117 URL: https://issues.apache.org/jira/browse/HBASE-21117 Project: HBase Issue Type: Bug Components: backport, rsgroup, shell Affects Versions: 1.3.2 Reporter: Xu Cang Assignee: Xu Cang When working on HBASE-20666, I found out HBASE-18350 did not get ported to branch-1, which sometimes causes a procedure to hang when #moveTables is called. After looking into the HBASE-18350 patch, it seems important since it fixes 4 issues. This Jira is an attempt to backport it to branch-1. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20941) Create and implement HbckService in master
[ https://issues.apache.org/jira/browse/HBASE-20941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592324#comment-16592324 ] Hadoop QA commented on HBASE-20941: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 31s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 13s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 7m 51s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 25s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 59s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}190m 37s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}257m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.util.TestHBaseFsckReplication | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-20941 | | JIRA Patch URL |
[jira] [Commented] (HBASE-20940) HStore.cansplit should not allow split to happen if it has references
[ https://issues.apache.org/jira/browse/HBASE-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592317#comment-16592317 ] Andrew Purtell commented on HBASE-20940: Issue with known test failure blocking 1.4.7 release, let's fix this or I will revert it on Monday and we can try for 1.4.8 > HStore.cansplit should not allow split to happen if it has references > - > > Key: HBASE-20940 > URL: https://issues.apache.org/jira/browse/HBASE-20940 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.2 >Reporter: Vishal Khandelwal >Assignee: Vishal Khandelwal >Priority: Major > Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 2.1.1, 2.0.2, 1.4.7 > > Attachments: HBASE-20940.branch-1.3.v1.patch, > HBASE-20940.branch-1.3.v2.patch, HBASE-20940.branch-1.v1.patch, > HBASE-20940.branch-1.v2.patch, HBASE-20940.branch-1.v3.patch, > HBASE-20940.v1.patch, HBASE-20940.v2.patch, HBASE-20940.v3.patch, > HBASE-20940.v4.patch, result_HBASE-20940.branch-1.v2.log > > > When split happens and immediately another split happens, it may result into > a split of a region who still has references to its parent. More details > about scenario can be found here HBASE-20933 > HStore.hasReferences should check from fs.storefile rather than in memory > objects. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-20993: --- Fix Version/s: (was: 1.4.7) 1.4.8 1.5.0 > [Auth] IPC client fallback to simple auth allowed doesn't work > -- > > Key: HBASE-20993 > URL: https://issues.apache.org/jira/browse/HBASE-20993 > Project: HBase > Issue Type: Bug > Components: Client, security >Affects Versions: 1.2.6 >Reporter: Reid Chan >Assignee: Jack Bearden >Priority: Critical > Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.8 > > Attachments: HBASE-20993.001.patch, > HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, > HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, > HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, > HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.2.001.patch, > HBASE-20993.branch-1.wip.002.patch, HBASE-20993.branch-1.wip.patch > > > It is easily reproducible. > client's hbase-site.xml: hadoop.security.authentication:kerberos, > hbase.security.authentication:kerberos, > hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal > are right set > A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception: > {code} > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592) > at > 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738) > at > org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607) > at > org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55) > Caused by: GSSException: No valid credentials provided (Mechanism level: > Failed to find any Kerberos
[jira] [Commented] (HBASE-21072) Block out HBCK1 in hbase2
[ https://issues.apache.org/jira/browse/HBASE-21072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592306#comment-16592306 ] Hudson commented on HBASE-21072: Results for branch master [build #454 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/454/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/454//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/454//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/454//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Block out HBCK1 in hbase2 > - > > Key: HBASE-21072 > URL: https://issues.apache.org/jira/browse/HBASE-21072 > Project: HBase > Issue Type: Sub-task > Components: hbck >Affects Versions: 2.0.1 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.1.1, 2.0.2 > > Attachments: HBASE-21072.branch-2.0.001.patch, > HBASE-21072.branch-2.0.002.patch, HBASE-21072.branch-2.0.003.patch, > HBASE-21072.branch-2.0.003.patch > > > [~busbey] left a note in the parent issue that I only just read which has a > prescription for how we might block hbck1 from running against an hbase-2.x > (hbck1 could damage an hbase-2 cluster. It is disabled in hbase-2, but an errant hbck1 > from an hbase-1.x install might run). > Here is a quote from the parent issue: > {code} > I was idly thinking about how to stop HBase v1 HBCK. 
Thanks to HBASE-11405, > we know that all HBase 1.y.z hbck instances should refuse to run if there's a > lock file at '/hbase/hbase-hbck.lock' (given defaults). How about HBase v2 > places that file permanently in place and replaces the contents (usually just > an IP address) with a note about how you must not run HBase v1 HBCK against > the cluster? > {code} > There is also the below: > {code} > We could pick another location for locking on HBase version 2 and start > building in a version check of some kind? > {code} > ... to which I'd answer, let's see. hbck2 is a different beast. It asks the > master to do stuff. It doesn't do it itself, as hbck1 did. So there is no need of a > lock/version. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
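The quoted sentinel idea can be sketched against the local filesystem. This is a hedged, self-contained illustration; a real implementation would use the Hadoop FileSystem API against '/hbase/hbase-hbck.lock' on the cluster filesystem, and the class and message are hypothetical.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: permanently park a sentinel at the hbck1 lock
// location so any HBase 1.y.z hbck (which, per HBASE-11405, refuses to run
// while the lock file exists) backs off, and replace the usual contents
// (an IP address) with a warning note.
public class HbckSentinel {
    static final String WARNING =
        "Do not run HBase v1 hbck against this HBase v2 cluster.";

    public static void placeSentinel(Path lockFile) {
        try {
            Files.writeString(lockFile, WARNING);
        } catch (IOException e) {
            throw new UncheckedIOException("could not write sentinel", e);
        }
    }

    // Round-trip demo against a temp file, standing in for the HDFS path.
    public static String demoRoundTrip() {
        try {
            Path p = Files.createTempFile("hbase-hbck", ".lock");
            placeSentinel(p);
            return Files.readString(p);
        } catch (IOException e) {
            throw new UncheckedIOException("demo failed", e);
        }
    }
}
```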
[jira] [Commented] (HBASE-21072) Block out HBCK1 in hbase2
[ https://issues.apache.org/jira/browse/HBASE-21072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592290#comment-16592290 ] Hudson commented on HBASE-21072: Results for branch branch-2.0 [build #725 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/725/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/725//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/725//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/725//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Block out HBCK1 in hbase2 > - > > Key: HBASE-21072 > URL: https://issues.apache.org/jira/browse/HBASE-21072 > Project: HBase > Issue Type: Sub-task > Components: hbck >Affects Versions: 2.0.1 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.1.1, 2.0.2 > > Attachments: HBASE-21072.branch-2.0.001.patch, > HBASE-21072.branch-2.0.002.patch, HBASE-21072.branch-2.0.003.patch, > HBASE-21072.branch-2.0.003.patch > > > [~busbey] left a note in the parent issue that I only just read which has a > prescription for how we might block hbck1 from running against an hbase-2.x > (hbck1 could damage a hbase-2Its disabled in hbase-2 but an errant hbck1 > from an hbase-1.x install might run). > Here is quote from parent issue: > {code} > I was idly thinking about how to stop HBase v1 HBCK. 
Thanks to HBASE-11405, > we know that all HBase 1.y.z hbck instances should refuse to run if there's a > lock file at '/hbase/hbase-hbck.lock' (given defaults). How about HBase v2 > places that file permanently in place and replaces the contents (usually just > an IP address) with a note about how you must not run HBase v1 HBCK against > the cluster? > {code} > There is also the below: > {code} > We could pick another location for locking on HBase version 2 and start > building in a version check of some kind? > {code} > ... to which I'd answer, let's see. hbck2 is a different beast. It asks the > master to do stuff. It doesn't do it itself, as hbck1 did. So no need for a > lock/version. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
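The lock-file scheme quoted above can be sketched concretely. This is a minimal Python illustration of the idea (HBase's real implementation is Java and the lock lives on HDFS; local files and all function names here are illustrative stand-ins): hbase-2 permanently plants the hbck1 lock file with a warning note as its contents, and an HBASE-11405-style hbck1 refuses to run when it finds the file.

```python
import os
import tempfile

LOCK_NAME = "hbase-hbck.lock"  # default lock location under the HBase root dir

def block_out_hbck1(hbase_root):
    """hbase-2 side (sketch): permanently place the hbck1 lock file,
    replacing its usual contents (an IP address) with a warning note."""
    lock_path = os.path.join(hbase_root, LOCK_NAME)
    with open(lock_path, "w") as f:
        f.write("Do NOT run HBase v1 HBCK against this hbase-2 cluster.\n")
    return lock_path

def hbck1_can_run(hbase_root):
    """hbck1 side (per HBASE-11405, sketch): refuse to run if the
    lock file already exists."""
    return not os.path.exists(os.path.join(hbase_root, LOCK_NAME))

root = tempfile.mkdtemp()        # stand-in for '/hbase' on HDFS
assert hbck1_can_run(root)       # nothing blocks hbck1 yet
block_out_hbck1(root)
assert not hbck1_can_run(root)   # hbck1 now refuses to run
```

Because hbck1 already checks this path, the v2 side needs no new client logic; planting the file is enough to fence out old tools.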
[jira] [Commented] (HBASE-21072) Block out HBCK1 in hbase2
[ https://issues.apache.org/jira/browse/HBASE-21072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592264#comment-16592264 ] Hudson commented on HBASE-21072: Results for branch branch-2.1 [build #236 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/236/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/236//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/236//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/236//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Block out HBCK1 in hbase2 > - > > Key: HBASE-21072 > URL: https://issues.apache.org/jira/browse/HBASE-21072 > Project: HBase > Issue Type: Sub-task > Components: hbck >Affects Versions: 2.0.1 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.1.1, 2.0.2 > > Attachments: HBASE-21072.branch-2.0.001.patch, > HBASE-21072.branch-2.0.002.patch, HBASE-21072.branch-2.0.003.patch, > HBASE-21072.branch-2.0.003.patch > > > [~busbey] left a note in the parent issue that I only just read which has a > prescription for how we might block hbck1 from running against an hbase-2.x > (hbck1 could damage an hbase-2 cluster. It's disabled in hbase-2, but an errant hbck1 > from an hbase-1.x install might run). > Here is the quote from the parent issue: > {code} > I was idly thinking about how to stop HBase v1 HBCK. 
Thanks to HBASE-11405, > we know that all HBase 1.y.z hbck instances should refuse to run if there's a > lock file at '/hbase/hbase-hbck.lock' (given defaults). How about HBase v2 > places that file permanently in place and replaces the contents (usually just > an IP address) with a note about how you must not run HBase v1 HBCK against > the cluster? > {code} > There is also the below: > {code} > We could pick another location for locking on HBase version 2 and start > building in a version check of some kind? > {code} > ... to which I'd answer, let's see. hbck2 is a different beast. It asks the > master to do stuff. It doesn't do it itself, as hbck1 did. So no need for a > lock/version. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592201#comment-16592201 ] Reid Chan commented on HBASE-20993: --- Please address checkstyle warning. > [Auth] IPC client fallback to simple auth allowed doesn't work > -- > > Key: HBASE-20993 > URL: https://issues.apache.org/jira/browse/HBASE-20993 > Project: HBase > Issue Type: Bug > Components: Client, security >Affects Versions: 1.2.6 >Reporter: Reid Chan >Assignee: Jack Bearden >Priority: Critical > Fix For: 3.0.0, 2.2.0, 1.4.7 > > Attachments: HBASE-20993.001.patch, > HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, > HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, > HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, > HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.2.001.patch, > HBASE-20993.branch-1.wip.002.patch, HBASE-20993.branch-1.wip.patch > > > It is easily reproducible. > client's hbase-site.xml: hadoop.security.authentication:kerberos, > hbase.security.authentication:kerberos, > hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal > are right set > A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception: > {code} > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592) > at > 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738) > at > org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607) > at > org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55) > Caused by: GSSException: No valid credentials provided (Mechanism level: > Failed to find any Kerberos tgt) > at >
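The bug report above boils down to one decision point in the IPC client: when a kerberized client talks to a simple-auth cluster and `hbase.ipc.client.fallback-to-simple-auth-allowed` is `true`, the client should downgrade to simple auth instead of attempting the doomed GSSAPI handshake that produces the "GSS initiate failed" trace. A minimal Python sketch of that intended decision (the real logic lives in the Java `RpcClientImpl`; the function and parameter names here are illustrative, not HBase API):

```python
def choose_auth_method(client_method, server_supports_sasl, fallback_allowed):
    """Decide which auth method the IPC client should actually use.

    client_method: "KERBEROS" or "SIMPLE" (hbase.security.authentication)
    server_supports_sasl: whether the server negotiated SASL
    fallback_allowed: hbase.ipc.client.fallback-to-simple-auth-allowed
    """
    if client_method == "KERBEROS" and not server_supports_sasl:
        if fallback_allowed:
            return "SIMPLE"  # downgrade instead of attempting GSSAPI
        raise RuntimeError(
            "Server asks for SIMPLE auth but fallback is not allowed")
    return client_method

# A kerberized client against a simple-auth cluster, with fallback enabled,
# should silently downgrade rather than fail with "GSS initiate failed":
assert choose_auth_method("KERBEROS", False, True) == "SIMPLE"
assert choose_auth_method("KERBEROS", True, True) == "KERBEROS"
```

The reported bug is that branch-1's client never takes the downgrade path, so the SASL connect is attempted anyway and fails as shown in the stack trace.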
[jira] [Updated] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-20993: -- Fix Version/s: 1.4.7 2.2.0 3.0.0 > [Auth] IPC client fallback to simple auth allowed doesn't work > -- > > Key: HBASE-20993 > URL: https://issues.apache.org/jira/browse/HBASE-20993 > Project: HBase > Issue Type: Bug > Components: Client, security >Affects Versions: 1.2.6 >Reporter: Reid Chan >Assignee: Jack Bearden >Priority: Critical > Fix For: 3.0.0, 2.2.0, 1.4.7 > > Attachments: HBASE-20993.001.patch, > HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, > HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, > HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, > HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.2.001.patch, > HBASE-20993.branch-1.wip.002.patch, HBASE-20993.branch-1.wip.patch > > > It is easily reproducible. > client's hbase-site.xml: hadoop.security.authentication:kerberos, > hbase.security.authentication:kerberos, > hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal > are right set > A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception: > {code} > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592) > at > 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738) > at > org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607) > at > org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55) > Caused by: GSSException: No valid credentials provided (Mechanism level: > Failed to find any Kerberos tgt) > at >
[jira] [Resolved] (HBASE-21111) [Auth] IPC client fallback to simple auth (forward-port to branch-2)
[ https://issues.apache.org/jira/browse/HBASE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan resolved HBASE-21111. --- Resolution: Duplicate Separate jira is unnecessary. > [Auth] IPC client fallback to simple auth (forward-port to branch-2) > > > Key: HBASE-21111 > URL: https://issues.apache.org/jira/browse/HBASE-21111 > Project: HBase > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Critical > Labels: branch-2 > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-21112) [Auth] IPC client fallback to simple auth (forward-port to master)
[ https://issues.apache.org/jira/browse/HBASE-21112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan resolved HBASE-21112. --- Resolution: Duplicate Separate jira is unnecessary. > [Auth] IPC client fallback to simple auth (forward-port to master) > -- > > Key: HBASE-21112 > URL: https://issues.apache.org/jira/browse/HBASE-21112 > Project: HBase > Issue Type: Bug >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Critical > Labels: master > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21116) nightly job should do API compatibility report
[ https://issues.apache.org/jira/browse/HBASE-21116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-21116: Description: we should add running the API compatibility check ({{dev-support/checkcompatibility.py}}) to our nightly tests. * in master check against head of prior major release * in major release branch (e.g. branch-2) check against head of prior minor release * in minor release branch (e.g. branch-2.0) check against prior maintenance release * update release docs to suggest RM update above after making a release. was: we should add running the API compatibility check ({{dev-support/checkcompatibility.py}}) to our nightly tests. * in master check against head of prior major release * in major release branch (e.g. branch-2) check against prior minor release * in minor release branch (e.g. branch-2.0) check against prior maintenance release * update release docs to suggest RM update above after making a release. > nightly job should do API compatibility report > -- > > Key: HBASE-21116 > URL: https://issues.apache.org/jira/browse/HBASE-21116 > Project: HBase > Issue Type: Improvement > Components: API, community >Reporter: Sean Busbey >Priority: Major > > we should add running the API compatibility check > ({{dev-support/checkcompatibility.py}}) to our nightly tests. > * in master check against head of prior major release > * in major release branch (e.g. branch-2) check against head of prior minor > release > * in minor release branch (e.g. branch-2.0) check against prior maintenance > release > * update release docs to suggest RM update above after making a release. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
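The per-branch rules above amount to a small mapping from branch name to the baseline a nightly job should feed to `dev-support/checkcompatibility.py`. A hypothetical sketch of that selection logic in Python (the branch-naming patterns come from the issue; the function name and return strings are illustrative, and a real job would resolve these descriptions to concrete git tags):

```python
import re

def compat_baseline(branch):
    """Pick which release line to diff against, per the rules in this issue."""
    if branch == "master":
        return "head of prior major release"
    if re.fullmatch(r"branch-\d+", branch):        # e.g. branch-2
        return "head of prior minor release"
    if re.fullmatch(r"branch-\d+\.\d+", branch):   # e.g. branch-2.0
        return "prior maintenance release"
    raise ValueError("no compatibility baseline defined for " + branch)

assert compat_baseline("master") == "head of prior major release"
assert compat_baseline("branch-2") == "head of prior minor release"
assert compat_baseline("branch-2.0") == "prior maintenance release"
```

The last bullet in the issue matters for this sketch: after each release the RM would bump whichever concrete tag these descriptions resolve to, so the nightly diff always tracks the latest shipped release.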
[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592187#comment-16592187 ] Reid Chan commented on HBASE-20993: --- We can just handle it here; separate jiras should be a follow-up, for when someone needs the fix in other branches it has not yet been ported to. Since this issue is unresolved, we are still here. > [Auth] IPC client fallback to simple auth allowed doesn't work > -- > > Key: HBASE-20993 > URL: https://issues.apache.org/jira/browse/HBASE-20993 > Project: HBase > Issue Type: Bug > Components: Client, security >Affects Versions: 1.2.6 >Reporter: Reid Chan >Assignee: Jack Bearden >Priority: Critical > Attachments: HBASE-20993.001.patch, > HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, > HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, > HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, > HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.2.001.patch, > HBASE-20993.branch-1.wip.002.patch, HBASE-20993.branch-1.wip.patch > > > It is easily reproducible. > client's hbase-site.xml: hadoop.security.authentication:kerberos, > hbase.security.authentication:kerberos, > hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal > are right set > A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception: > {code} > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592) > at > 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738) > at > org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607) > at > org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55) > Caused by:
[jira] [Created] (HBASE-21116) nightly job should do API compatibility report
Sean Busbey created HBASE-21116: --- Summary: nightly job should do API compatibility report Key: HBASE-21116 URL: https://issues.apache.org/jira/browse/HBASE-21116 Project: HBase Issue Type: Improvement Components: API, community Reporter: Sean Busbey we should add running the API compatibility check ({{dev-support/checkcompatibility.py}}) to our nightly tests. * in master check against head of prior major release * in major release branch (e.g. branch-2) check against prior minor release * in minor release branch (e.g. branch-2.0) check against prior maintenance release * update release docs to suggest RM update above after making a release. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21115) website should have rendered copy of release notes / changelog
Sean Busbey created HBASE-21115: --- Summary: website should have rendered copy of release notes / changelog Key: HBASE-21115 URL: https://issues.apache.org/jira/browse/HBASE-21115 Project: HBase Issue Type: Improvement Components: community, website Reporter: Sean Busbey right now our downloads page links to the raw markdown for releases that are present. we should render them into html and host them on the website. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21114) website should have a copy of 2.1 release docs
Sean Busbey created HBASE-21114: --- Summary: website should have a copy of 2.1 release docs Key: HBASE-21114 URL: https://issues.apache.org/jira/browse/HBASE-21114 Project: HBase Issue Type: Task Components: community, documentation Affects Versions: 2.1.0 Reporter: Sean Busbey Fix For: 2.1.1 in the "Documentation and API" menu we have entries for 2.0 and 1.2. should also add in 2.1. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException
[ https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592168#comment-16592168 ] Mingliang Liu commented on HBASE-20642: --- Does it make sense to port to branch-1 as well? Thanks. [~elserj] [~an...@apache.org] > IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException > - > > Key: HBASE-20642 > URL: https://issues.apache.org/jira/browse/HBASE-20642 > Project: HBase > Issue Type: Bug >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Major > Fix For: 3.0.0, 2.1.0, 2.0.2 > > Attachments: HBASE-20642.001.patch, HBASE-20642.002.patch, > HBASE-20642.patch > > > [~romil.choksi] reported that IntegrationTestDDLMasterFailover is failing > while adding a column family while the master is restarting. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3
[ https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592166#comment-16592166 ] Tyler Mi commented on HBASE-21098: -- Thank you for pointing these issues out, I've addressed them now > Improve Snapshot Performance with Temporary Snapshot Directory when rootDir > on S3 > - > > Key: HBASE-21098 > URL: https://issues.apache.org/jira/browse/HBASE-21098 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0, 2.1.1 >Reporter: Tyler Mi >Priority: Major > Attachments: HBASE-21098.master.001.patch, > HBASE-21098.master.002.patch, HBASE-21098.master.003.patch, > HBASE-21098.master.004.patch > > > When using Apache HBase, the snapshot feature can be used to make a point in > time recovery. To do this, HBase creates a manifest of all the files in all > of the Regions so that those files can be referenced again when a user > restores a snapshot. With HBase's S3 storage mode, developers can store their > data off-cluster on Amazon S3. However, utilizing S3 as a file system is > inefficient in some operations, namely renames. Most Hadoop ecosystem > applications use an atomic rename as a method of committing data. However, > with S3, a rename is a separate copy and then a delete of every file which is > no longer atomic and, in fact, quite costly. In addition, puts and deletes on > S3 have latency issues that traditional filesystems do not encounter when > manipulating the region snapshots to consolidate into a single manifest. When > HBase on S3 users have a significant amount of regions, puts, deletes, and > renames (the final commit stage of the snapshot) become the bottleneck > causing snapshots to take many minutes or even hours to complete. > The purpose of this patch is to increase the overall performance of snapshots > while utilizing HBase on S3 through the use of a temporary directory for the > snapshots that exists on a traditional filesystem like HDFS to circumvent the > bottlenecks. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21072) Block out HBCK1 in hbase2
[ https://issues.apache.org/jira/browse/HBASE-21072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592165#comment-16592165 ] Hudson commented on HBASE-21072: Results for branch branch-2 [build #1158 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1158/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1158//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1158//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1158//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Block out HBCK1 in hbase2 > - > > Key: HBASE-21072 > URL: https://issues.apache.org/jira/browse/HBASE-21072 > Project: HBase > Issue Type: Sub-task > Components: hbck >Affects Versions: 2.0.1 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.1.1, 2.0.2 > > Attachments: HBASE-21072.branch-2.0.001.patch, > HBASE-21072.branch-2.0.002.patch, HBASE-21072.branch-2.0.003.patch, > HBASE-21072.branch-2.0.003.patch > > > [~busbey] left a note in the parent issue that I only just read which has a > prescription for how we might block hbck1 from running against an hbase-2.x > (hbck1 could damage an hbase-2 cluster. It's disabled in hbase-2, but an errant hbck1 > from an hbase-1.x install might run). > Here is the quote from the parent issue: > {code} > I was idly thinking about how to stop HBase v1 HBCK. 
Thanks to HBASE-11405, > we know that all HBase 1.y.z hbck instances should refuse to run if there's a > lock file at '/hbase/hbase-hbck.lock' (given defaults). How about HBase v2 > places that file permanently in place and replaces the contents (usually just > an IP address) with a note about how you must not run HBase v1 HBCK against > the cluster? > {code} > There is also the below: > {code} > We could pick another location for locking on HBase version 2 and start > building in a version check of some kind? > {code} > ... to which I'd answer, let's see. hbck2 is a different beast. It asks the > master to do stuff. It doesn't do it itself, as hbck1 did. So no need for a > lock/version. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3
[ https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Mi updated HBASE-21098: - Release Note: I recommend storing the working directory on-cluster on HDFS as doing so has shown a strong performance increase due to data locality. It is important to note that the working directory should not overlap with any existing directories as the working directory will be cleaned out during the snapshot process. Beyond that, any well-named directory on HDFS should be sufficient. > Improve Snapshot Performance with Temporary Snapshot Directory when rootDir > on S3 > - > > Key: HBASE-21098 > URL: https://issues.apache.org/jira/browse/HBASE-21098 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0, 2.1.1 >Reporter: Tyler Mi >Priority: Major > Attachments: HBASE-21098.master.001.patch, > HBASE-21098.master.002.patch, HBASE-21098.master.003.patch, > HBASE-21098.master.004.patch > > > When using Apache HBase, the snapshot feature can be used to make a point in > time recovery. To do this, HBase creates a manifest of all the files in all > of the Regions so that those files can be referenced again when a user > restores a snapshot. With HBase's S3 storage mode, developers can store their > data off-cluster on Amazon S3. However, utilizing S3 as a file system is > inefficient in some operations, namely renames. Most Hadoop ecosystem > applications use an atomic rename as a method of committing data. However, > with S3, a rename is a separate copy and then a delete of every file which is > no longer atomic and, in fact, quite costly. In addition, puts and deletes on > S3 have latency issues that traditional filesystems do not encounter when > manipulating the region snapshots to consolidate into a single manifest. 
When > HBase on S3 users have a significant amount of regions, puts, deletes, and > renames (the final commit stage of the snapshot) become the bottleneck > causing snapshots to take many minutes or even hours to complete. > The purpose of this patch is to increase the overall performance of snapshots > while utilizing HBase on S3 through the use of a temporary directory for the > snapshots that exists on a traditional filesystem like HDFS to circumvent the > bottlenecks. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
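The optimization described above can be sketched in miniature: assemble the many per-region manifest files in a working directory on a rename-friendly filesystem such as HDFS, then commit the finished snapshot to the S3-backed root in a single final move, so no per-file S3 renames happen during assembly. This Python sketch uses local directories as stand-ins for HDFS and S3, and every path and function name is illustrative rather than HBase's actual layout:

```python
import os
import shutil
import tempfile

def take_snapshot(snapshot_name, region_manifests, working_fs_dir, root_dir):
    """Assemble region manifests in a temp working dir (cheap renames),
    then commit the completed snapshot to the root dir in one move."""
    work = os.path.join(working_fs_dir, ".tmp", snapshot_name)
    os.makedirs(work)
    for region, manifest in region_manifests.items():
        # Many small writes happen here -- cheap on HDFS, costly on S3.
        with open(os.path.join(work, "manifest." + region), "w") as f:
            f.write(manifest)
    final = os.path.join(root_dir, ".hbase-snapshot", snapshot_name)
    os.makedirs(os.path.dirname(final), exist_ok=True)
    shutil.move(work, final)  # single commit of the finished snapshot
    return final

hdfs = tempfile.mkdtemp()   # stand-in for the on-cluster working filesystem
s3 = tempfile.mkdtemp()     # stand-in for the S3-backed root dir
out = take_snapshot("snap1", {"r1": "files...", "r2": "files..."}, hdfs, s3)
assert sorted(os.listdir(out)) == ["manifest.r1", "manifest.r2"]
```

On a real object store the final step is still a copy-plus-delete rather than an atomic rename, but it happens once per snapshot instead of once per region file, which is the performance win the patch is after.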
[jira] [Updated] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3
[ https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Mi updated HBASE-21098: - Attachment: HBASE-21098.master.004.patch > Improve Snapshot Performance with Temporary Snapshot Directory when rootDir > on S3 > - > > Key: HBASE-21098 > URL: https://issues.apache.org/jira/browse/HBASE-21098 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0, 2.1.1 >Reporter: Tyler Mi >Priority: Major > Attachments: HBASE-21098.master.001.patch, > HBASE-21098.master.002.patch, HBASE-21098.master.003.patch, > HBASE-21098.master.004.patch > > > When using Apache HBase, the snapshot feature can be used to make a point in > time recovery. To do this, HBase creates a manifest of all the files in all > of the Regions so that those files can be referenced again when a user > restores a snapshot. With HBase's S3 storage mode, developers can store their > data off-cluster on Amazon S3. However, utilizing S3 as a file system is > inefficient in some operations, namely renames. Most Hadoop ecosystem > applications use an atomic rename as a method of committing data. However, > with S3, a rename is a separate copy and then a delete of every file which is > no longer atomic and, in fact, quite costly. In addition, puts and deletes on > S3 have latency issues that traditional filesystems do not encounter when > manipulating the region snapshots to consolidate into a single manifest. When > HBase on S3 users have a significant amount of regions, puts, deletes, and > renames (the final commit stage of the snapshot) become the bottleneck > causing snapshots to take many minutes or even hours to complete. > The purpose of this patch is to increase the overall performance of snapshots > while utilizing HBase on S3 through the use of a temporary directory for the > snapshots that exists on a traditional filesystem like HDFS to circumvent the > bottlenecks. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21100) HBTU.waitUntilAllRegionsAssigned will count split/merged region and cause wait timeout
[ https://issues.apache.org/jira/browse/HBASE-21100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592134#comment-16592134 ] Mingliang Liu commented on HBASE-21100: --- Is a simple solution sufficient? > HBTU.waitUntilAllRegionsAssigned will count split/merged region and cause > wait timeout > -- > > Key: HBASE-21100 > URL: https://issues.apache.org/jira/browse/HBASE-21100 > Project: HBase > Issue Type: Bug >Reporter: Duo Zhang >Priority: Major > Attachments: HBASE-21100.wip.patch > > > In TestTruncateTableProcedure, we will call split and then wait until all > regions are assigned. The code itself is a bit strange and should be > reimplemented another way, but it exposes a problem in our > HBTU.waitUntilAllRegionsAssigned method: we will also count the split > region, find that it is not OPEN, and return false, which causes a wait > timeout. > This is the log: > https://builds.apache.org/job/HBase-Flaky-Tests/job/master/143/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.master.procedure.TestTruncateTableProcedure-output.txt > Will open another issue to rewrite the test first, but I still think we need > to open an issue to record this problem. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21100) HBTU.waitUntilAllRegionsAssigned will count split/merged region and cause wait timeout
[ https://issues.apache.org/jira/browse/HBASE-21100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HBASE-21100: -- Attachment: HBASE-21100.wip.patch > HBTU.waitUntilAllRegionsAssigned will count split/merged region and cause > wait timeout > -- > > Key: HBASE-21100 > URL: https://issues.apache.org/jira/browse/HBASE-21100 > Project: HBase > Issue Type: Bug >Reporter: Duo Zhang >Priority: Major > Attachments: HBASE-21100.wip.patch > > > In TestTruncateTableProcedure, we will call split and then wait until all > regions are assigned. The code itself is a bit strange and should be > reimplemented another way, but it exposes a problem in our > HBTU.waitUntilAllRegionsAssigned method: we will also count the split > region, find that it is not OPEN, and return false, which causes a wait > timeout. > This is the log: > https://builds.apache.org/job/HBase-Flaky-Tests/job/master/143/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.master.procedure.TestTruncateTableProcedure-output.txt > Will open another issue to rewrite the test first, but I still think we need > to open an issue to record this problem. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
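One plausible shape for the fix, sketched here with made-up names rather than the actual HBaseTestingUtility/RegionStates API, is to exclude regions in SPLIT or MERGED state before requiring that everything else be OPEN:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the wait condition; the enum values and class
// names are illustrative, not the real HBTU API.
public class RegionWaitSketch {
    enum State { OPEN, SPLIT, MERGED, CLOSED }

    static boolean allLiveRegionsOpen(List<State> regionStates) {
        return regionStates.stream()
                // Parents left behind by a split or merge never reopen,
                // so waiting for them to become OPEN can only time out.
                .filter(s -> s != State.SPLIT && s != State.MERGED)
                .allMatch(s -> s == State.OPEN);
    }

    public static void main(String[] args) {
        // A split parent plus its two open daughters should count as done.
        System.out.println(allLiveRegionsOpen(
                Arrays.asList(State.SPLIT, State.OPEN, State.OPEN)));
    }
}
```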
[jira] [Updated] (HBASE-21078) [amv2] CODE-BUG NPE in RTP doing Unassign
[ https://issues.apache.org/jira/browse/HBASE-21078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21078: -- Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to branch-2.0, branch-2.1, and branch-2. Did not push to master. Master has different means of dealing with this issue type. See HBASE-20881 > [amv2] CODE-BUG NPE in RTP doing Unassign > - > > Key: HBASE-21078 > URL: https://issues.apache.org/jira/browse/HBASE-21078 > Project: HBase > Issue Type: Bug > Components: amv2 >Affects Versions: 2.0.1 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.1.1, 2.0.2 > > Attachments: HBASE-21078.branch-2.0.001.patch, > HBASE-21078.branch-2.0.002.patch, HBASE-21078.branch-2.0.003.patch, > HBASE-21078.branch-2.0.004.patch, HBASE-21078.branch-2.0.004.patch, > HBASE-21078.branch-2.0.004.patch > > > Saw this in a run against the tip of branch-2.0. The region had just finished > being split when the move goes to run. > {code} > 2018-08-18 16:55:14,908 INFO [PEWorker-2] procedure2.ProcedureExecutor: > Finished pid=2028, state=SUCCESS, hasLock=false; SplitTableRegionProcedure > table=IntegrationTestBigLinkedList, parent=c3f199b5af62ae2ff8f8b6426b21d95d, > daughterA=31ccbf098ae615ce30f28ec84c956b8f, > daughterB=1890b4c96736f223f31efef11c817c90 in 9.0090sec > 2018-08-18 16:55:14,908 INFO [PEWorker-16] > procedure.MasterProcedureScheduler: pid=2038, ppid=2030, > state=RUNNABLE:MOVE_REGION_UNASSIGN, hasLock=false; MoveRegionProcedure > hri=c3f199b5af62ae2ff8f8b6426b21d95d, > source=ve0540.halxg.cloudera.com,16020,1534632630737, > destination=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:14,958 INFO [PEWorker-16] procedure2.ProcedureExecutor: > Initialized subprocedures=[{pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > 
server=ve0540.halxg.cloudera.com,16020,1534632630737}] > 2018-08-18 16:55:15,008 INFO [PEWorker-3] > procedure.MasterProcedureScheduler: pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:15,085 ERROR [PEWorker-3] procedure2.ProcedureExecutor: > CODE-BUG: Uncaught runtime exception: pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=true; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 > java.lang.NullPointerException > at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936) > at > org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:1097) > at > org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:1125) > at > org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1477) > at > org.apache.hadoop.hbase.master.assignment.UnassignProcedure.updateTransition(UnassignProcedure.java:204) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:345) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:97) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:873) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1556) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1344) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76) > at > 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1854) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
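The immediate trigger in the trace, ConcurrentHashMap.get at the top frame of the NPE, is reproducible in isolation: ConcurrentHashMap rejects null keys, so if getOrCreateServer is handed a null ServerName (plausibly because the just-split parent no longer has a location), the map lookup itself throws before any region-state logic can run. A minimal standalone demonstration:

```java
import java.util.concurrent.ConcurrentHashMap;

// Minimal reproduction of the top frame of the stack trace above:
// unlike HashMap, ConcurrentHashMap throws NPE on a null key.
public class NullKeyDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Object> serverMap = new ConcurrentHashMap<>();
        try {
            serverMap.get(null); // same call shape as the failing lookup
            System.out.println("no exception");
        } catch (NullPointerException e) {
            System.out.println("NullPointerException, as in the CODE-BUG log");
        }
    }
}
```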
[jira] [Updated] (HBASE-21078) [amv2] CODE-BUG NPE in RTP doing Unassign
[ https://issues.apache.org/jira/browse/HBASE-21078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21078: -- Fix Version/s: 2.1.1 > [amv2] CODE-BUG NPE in RTP doing Unassign > - > > Key: HBASE-21078 > URL: https://issues.apache.org/jira/browse/HBASE-21078 > Project: HBase > Issue Type: Bug > Components: amv2 >Affects Versions: 2.0.1 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.1.1, 2.0.2 > > Attachments: HBASE-21078.branch-2.0.001.patch, > HBASE-21078.branch-2.0.002.patch, HBASE-21078.branch-2.0.003.patch, > HBASE-21078.branch-2.0.004.patch, HBASE-21078.branch-2.0.004.patch, > HBASE-21078.branch-2.0.004.patch > > > Saw this in a run against the tip of branch-2.0. The region had just finished > being split when the move goes to run. > {code} > 2018-08-18 16:55:14,908 INFO [PEWorker-2] procedure2.ProcedureExecutor: > Finished pid=2028, state=SUCCESS, hasLock=false; SplitTableRegionProcedure > table=IntegrationTestBigLinkedList, parent=c3f199b5af62ae2ff8f8b6426b21d95d, > daughterA=31ccbf098ae615ce30f28ec84c956b8f, > daughterB=1890b4c96736f223f31efef11c817c90 in 9.0090sec > 2018-08-18 16:55:14,908 INFO [PEWorker-16] > procedure.MasterProcedureScheduler: pid=2038, ppid=2030, > state=RUNNABLE:MOVE_REGION_UNASSIGN, hasLock=false; MoveRegionProcedure > hri=c3f199b5af62ae2ff8f8b6426b21d95d, > source=ve0540.halxg.cloudera.com,16020,1534632630737, > destination=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:14,958 INFO [PEWorker-16] procedure2.ProcedureExecutor: > Initialized subprocedures=[{pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737}] > 2018-08-18 16:55:15,008 INFO [PEWorker-3] > procedure.MasterProcedureScheduler: pid=2095, ppid=2038, > 
state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:15,085 ERROR [PEWorker-3] procedure2.ProcedureExecutor: > CODE-BUG: Uncaught runtime exception: pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=true; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 > java.lang.NullPointerException > at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936) > at > org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:1097) > at > org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:1125) > at > org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1477) > at > org.apache.hadoop.hbase.master.assignment.UnassignProcedure.updateTransition(UnassignProcedure.java:204) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:345) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:97) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:873) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1556) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1344) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1854) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21078) [amv2] CODE-BUG NPE in RTP doing Unassign
[ https://issues.apache.org/jira/browse/HBASE-21078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592129#comment-16592129 ] stack commented on HBASE-21078: --- Ok. Ran large ITBLL serverKill and an overnight of slowDeterministic against .004. Let me commit it (I removed .005 to avoid confusion as to what actually went in). > [amv2] CODE-BUG NPE in RTP doing Unassign > - > > Key: HBASE-21078 > URL: https://issues.apache.org/jira/browse/HBASE-21078 > Project: HBase > Issue Type: Bug > Components: amv2 >Affects Versions: 2.0.1 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.2 > > Attachments: HBASE-21078.branch-2.0.001.patch, > HBASE-21078.branch-2.0.002.patch, HBASE-21078.branch-2.0.003.patch, > HBASE-21078.branch-2.0.004.patch, HBASE-21078.branch-2.0.004.patch, > HBASE-21078.branch-2.0.004.patch > > > Saw this in a run against the tip of branch-2.0. The region had just finished > being split when the move goes to run. > {code} > 2018-08-18 16:55:14,908 INFO [PEWorker-2] procedure2.ProcedureExecutor: > Finished pid=2028, state=SUCCESS, hasLock=false; SplitTableRegionProcedure > table=IntegrationTestBigLinkedList, parent=c3f199b5af62ae2ff8f8b6426b21d95d, > daughterA=31ccbf098ae615ce30f28ec84c956b8f, > daughterB=1890b4c96736f223f31efef11c817c90 in 9.0090sec > 2018-08-18 16:55:14,908 INFO [PEWorker-16] > procedure.MasterProcedureScheduler: pid=2038, ppid=2030, > state=RUNNABLE:MOVE_REGION_UNASSIGN, hasLock=false; MoveRegionProcedure > hri=c3f199b5af62ae2ff8f8b6426b21d95d, > source=ve0540.halxg.cloudera.com,16020,1534632630737, > destination=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:14,958 INFO [PEWorker-16] procedure2.ProcedureExecutor: > Initialized subprocedures=[{pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > 
server=ve0540.halxg.cloudera.com,16020,1534632630737}] > 2018-08-18 16:55:15,008 INFO [PEWorker-3] > procedure.MasterProcedureScheduler: pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:15,085 ERROR [PEWorker-3] procedure2.ProcedureExecutor: > CODE-BUG: Uncaught runtime exception: pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=true; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 > java.lang.NullPointerException > at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936) > at > org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:1097) > at > org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:1125) > at > org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1477) > at > org.apache.hadoop.hbase.master.assignment.UnassignProcedure.updateTransition(UnassignProcedure.java:204) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:345) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:97) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:873) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1556) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1344) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76) > at > 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1854) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21078) [amv2] CODE-BUG NPE in RTP doing Unassign
[ https://issues.apache.org/jira/browse/HBASE-21078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21078: -- Attachment: (was: HBASE-21078.branch-2.0.005.patch) > [amv2] CODE-BUG NPE in RTP doing Unassign > - > > Key: HBASE-21078 > URL: https://issues.apache.org/jira/browse/HBASE-21078 > Project: HBase > Issue Type: Bug > Components: amv2 >Affects Versions: 2.0.1 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.2 > > Attachments: HBASE-21078.branch-2.0.001.patch, > HBASE-21078.branch-2.0.002.patch, HBASE-21078.branch-2.0.003.patch, > HBASE-21078.branch-2.0.004.patch, HBASE-21078.branch-2.0.004.patch, > HBASE-21078.branch-2.0.004.patch > > > Saw this in a run against the tip of branch-2.0. The region had just finished > being split when the move goes to run. > {code} > 2018-08-18 16:55:14,908 INFO [PEWorker-2] procedure2.ProcedureExecutor: > Finished pid=2028, state=SUCCESS, hasLock=false; SplitTableRegionProcedure > table=IntegrationTestBigLinkedList, parent=c3f199b5af62ae2ff8f8b6426b21d95d, > daughterA=31ccbf098ae615ce30f28ec84c956b8f, > daughterB=1890b4c96736f223f31efef11c817c90 in 9.0090sec > 2018-08-18 16:55:14,908 INFO [PEWorker-16] > procedure.MasterProcedureScheduler: pid=2038, ppid=2030, > state=RUNNABLE:MOVE_REGION_UNASSIGN, hasLock=false; MoveRegionProcedure > hri=c3f199b5af62ae2ff8f8b6426b21d95d, > source=ve0540.halxg.cloudera.com,16020,1534632630737, > destination=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:14,958 INFO [PEWorker-16] procedure2.ProcedureExecutor: > Initialized subprocedures=[{pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737}] > 2018-08-18 16:55:15,008 INFO [PEWorker-3] > procedure.MasterProcedureScheduler: pid=2095, ppid=2038, > 
state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=false; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 checking lock on > c3f199b5af62ae2ff8f8b6426b21d95d > 2018-08-18 16:55:15,085 ERROR [PEWorker-3] procedure2.ProcedureExecutor: > CODE-BUG: Uncaught runtime exception: pid=2095, ppid=2038, > state=RUNNABLE:REGION_TRANSITION_DISPATCH, hasLock=true; UnassignProcedure > table=IntegrationTestBigLinkedList, region=c3f199b5af62ae2ff8f8b6426b21d95d, > server=ve0540.halxg.cloudera.com,16020,1534632630737 > java.lang.NullPointerException > at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936) > at > org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:1097) > at > org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:1125) > at > org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1477) > at > org.apache.hadoop.hbase.master.assignment.UnassignProcedure.updateTransition(UnassignProcedure.java:204) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:345) > at > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:97) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:873) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1556) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1344) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1854) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592118#comment-16592118 ] Jack Bearden commented on HBASE-20993: -- [~reidchan] I made new tickets for the forward-ports. I was unable to make you the reporter for those, assuming you are still interested. Not sure of the HBase policy on this; if it isn't part of procedure then we can just handle it here. * HBase 2 * HBase 21112 > [Auth] IPC client fallback to simple auth allowed doesn't work > -- > > Key: HBASE-20993 > URL: https://issues.apache.org/jira/browse/HBASE-20993 > Project: HBase > Issue Type: Bug > Components: Client, security >Affects Versions: 1.2.6 >Reporter: Reid Chan >Assignee: Jack Bearden >Priority: Critical > Attachments: HBASE-20993.001.patch, > HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, > HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, > HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, > HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.2.001.patch, > HBASE-20993.branch-1.wip.002.patch, HBASE-20993.branch-1.wip.patch > > > It is easily reproducible. > client's hbase-site.xml: hadoop.security.authentication:kerberos, > hbase.security.authentication:kerberos, > hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal > are correctly set > A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception: > {code} > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592) > at > 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738) > at > org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607) > at >
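The configuration in the description implies a simple client-side decision table for `hbase.ipc.client.fallback-to-simple-auth-allowed`, sketched below with a hypothetical helper. This is not the actual RpcClientImpl negotiation code; the bug being reported is precisely that the real client failed to honor the flag.

```java
// Hypothetical decision table for the fallback flag. Names here are
// illustrative only; they model the behavior the flag promises, not
// how RpcClientImpl implements it.
public class FallbackSketch {
    enum Outcome { KERBEROS, SIMPLE, FAIL }

    static Outcome negotiate(boolean clientKerberos, boolean serverSimple,
                             boolean fallbackAllowed) {
        if (clientKerberos && serverSimple) {
            // A secure client may downgrade only when explicitly allowed.
            return fallbackAllowed ? Outcome.SIMPLE : Outcome.FAIL;
        }
        return clientKerberos ? Outcome.KERBEROS : Outcome.SIMPLE;
    }

    public static void main(String[] args) {
        // The reporter's setup: kerberized client, simple-auth cluster,
        // fallback allowed. Expected to succeed over SIMPLE rather than
        // fail with the GSSException shown above.
        System.out.println(negotiate(true, true, true));
    }
}
```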
[jira] [Updated] (HBASE-21095) The timeout retry logic for several procedures are broken after master restarts
[ https://issues.apache.org/jira/browse/HBASE-21095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21095: -- Fix Version/s: (was: 2.1.1) (was: 2.0.2) > The timeout retry logic for several procedures are broken after master > restarts > --- > > Key: HBASE-21095 > URL: https://issues.apache.org/jira/browse/HBASE-21095 > Project: HBase > Issue Type: Sub-task > Components: amv2, proc-v2 >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Critical > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21095-branch-2.0.patch, HBASE-21095-v1.patch, > HBASE-21095-v2.patch, HBASE-21095.branch-2.0.001.patch, HBASE-21095.patch > > > For TRSP, and also RTP in branch-2.0 and branch-2.1, if we fail to assign or > unassign a region, we will set the procedure to WAITING_TIMEOUT state, and > rely on the ProcedureEvent in RegionStateNode to wake us up later. But after > restarting, we do not suspend the ProcedureEvent in RSN, and also do not add > the procedure to the ProcedureEvent's suspending queue, so we will hang there > forever as no one will wake us up. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
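The hang described above can be modeled with a toy event class. This is an illustrative model, not the real ProcedureEvent API: a procedure parked in WAITING_TIMEOUT must be sitting in the event's suspend queue for a wake-up to reach it, and if the master restarts and reloads the procedure without re-adding it to that queue, a later wake-up delivers nothing.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model (not the real ProcedureEvent) of the suspend/wake contract.
public class SuspendQueueModel {
    static class Event {
        final Queue<String> suspended = new ArrayDeque<>();

        void suspend(String pid) { suspended.add(pid); }

        // Wakes everything in the queue; returns how many were resumed.
        int wake() { int n = suspended.size(); suspended.clear(); return n; }
    }

    public static void main(String[] args) {
        Event beforeRestart = new Event();
        beforeRestart.suspend("pid=2095");
        System.out.println(beforeRestart.wake()); // the procedure resumes

        // After a restart the queue is rebuilt empty, so nothing resumes
        // and the reloaded procedure waits forever.
        Event afterRestart = new Event();
        System.out.println(afterRestart.wake());
    }
}
```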
[jira] [Commented] (HBASE-21095) The timeout retry logic for several procedures are broken after master restarts
[ https://issues.apache.org/jira/browse/HBASE-21095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592108#comment-16592108 ] stack commented on HBASE-21095: --- On your patch [~Apache9], I see how it integrates the [~allan163] patch. Looks reasonable. +1 to commit. For branch-2, it'll fail after HBASE-21113... but maybe you can massage it in. > The timeout retry logic for several procedures are broken after master > restarts > --- > > Key: HBASE-21095 > URL: https://issues.apache.org/jira/browse/HBASE-21095 > Project: HBase > Issue Type: Sub-task > Components: amv2, proc-v2 >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Critical > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21095-branch-2.0.patch, HBASE-21095-v1.patch, > HBASE-21095-v2.patch, HBASE-21095.branch-2.0.001.patch, HBASE-21095.patch > > > For TRSP, and also RTP in branch-2.0 and branch-2.1, if we fail to assign or > unassign a region, we will set the procedure to WAITING_TIMEOUT state, and > rely on the ProcedureEvent in RegionStateNode to wake us up later. But after > restarting, we do not suspend the ProcedureEvent in RSN, and also do not add > the procedure to the ProcedureEvent's suspending queue, so we will hang there > forever as no one will wake us up. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-21113) Apply the branch-2 version of HBASE-21095, The timeout retry logic for several procedures are broken after master restarts
[ https://issues.apache.org/jira/browse/HBASE-21113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-21113. --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.0.2 2.1.1 Pushed to branch-2.0, branch-2.1, and branch-2 but NOT to master branch (Should it be on master-branch [~Apache9]/[~allan163] or will HBASE-21095 be enough? It looks to me like it should be on master branch -- thanks). > Apply the branch-2 version of HBASE-21095, The timeout retry logic for > several procedures are broken after master restarts > -- > > Key: HBASE-21113 > URL: https://issues.apache.org/jira/browse/HBASE-21113 > Project: HBase > Issue Type: Bug > Components: amv2 >Reporter: stack >Assignee: Allan Yang >Priority: Major > Fix For: 2.1.1, 2.0.2 > > > This issue is for applying branch-2 version of the HBASE-21095 patch. The > patch applied here is the HBASE-21095.branch-2.0.001.patch patch from > HBASE-21095 written by [~allan163]. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21095) The timeout retry logic for several procedures are broken after master restarts
[ https://issues.apache.org/jira/browse/HBASE-21095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592084#comment-16592084 ] stack commented on HBASE-21095: --- Ok. Took me a while to understand what this issue is about. The unit test helped explain. Thanks. I just pushed Alan's patch to branch-2.0=>branch-2 but then reverted it. I'll put it in under a different JIRA. Otherwise it will be hard to track what went in under this issue. bq. Let me commit. stack Let's also commit HBASE-20881 to branch-2? So that the fix here could also go into branch-2. On the above, ok. We have an outline on how to do rolling upgrade to branch-2.2 so go ahead. The rolling upgrade issue should be blocker on branch-2.2 if not already. I am not clear on how far back the master branch that is attached here should go? And should the [~allan163] patch go on master branch? (I only put it on branch-2.0=>branch-2 under HBASE-21113). > The timeout retry logic for several procedures are broken after master > restarts > --- > > Key: HBASE-21095 > URL: https://issues.apache.org/jira/browse/HBASE-21095 > Project: HBase > Issue Type: Sub-task > Components: amv2, proc-v2 >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Critical > Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.2 > > Attachments: HBASE-21095-branch-2.0.patch, HBASE-21095-v1.patch, > HBASE-21095-v2.patch, HBASE-21095.branch-2.0.001.patch, HBASE-21095.patch > > > For TRSP, and also RTP in branch-2.0 and branch-2.1, if we fail to assign or > unassign a region, we will set the procedure to WAITING_TIMEOUT state, and > rely on the ProcedureEvent in RegionStateNode to wake us up later. But after > restarting, we do not suspend the ProcedureEvent in RSN, and also do not add > the procedure to the ProcedureEvent's suspending queue, so we will hang there > forever as no one will wake us up. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21113) Apply the branch-2 version of HBASE-21095, The timeout retry logic for several procedures are broken after master restarts
stack created HBASE-21113: - Summary: Apply the branch-2 version of HBASE-21095, The timeout retry logic for several procedures are broken after master restarts Key: HBASE-21113 URL: https://issues.apache.org/jira/browse/HBASE-21113 Project: HBase Issue Type: Bug Components: amv2 Reporter: stack Assignee: Allan Yang This issue is for applying branch-2 version of the HBASE-21095 patch. The patch applied here is the HBASE-21095.branch-2.0.001.patch patch from HBASE-21095 written by [~allan163]. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-18477) Umbrella JIRA for HBase Read Replica clusters
[ https://issues.apache.org/jira/browse/HBASE-18477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592079#comment-16592079 ] Hudson commented on HBASE-18477: Results for branch HBASE-18477 [build #305 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/305/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/305//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/305//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/305//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (x) {color:red}-1 client integration test{color} --Failed when running client tests on top of Hadoop 2. [see log for details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/305//artifact/output-integration/hadoop-2.log]. (note that this means we didn't run on Hadoop 3) > Umbrella JIRA for HBase Read Replica clusters > - > > Key: HBASE-18477 > URL: https://issues.apache.org/jira/browse/HBASE-18477 > Project: HBase > Issue Type: New Feature >Reporter: Zach York >Assignee: Zach York >Priority: Major > Attachments: HBase Read-Replica Clusters Scope doc.docx, HBase > Read-Replica Clusters Scope doc.pdf, HBase Read-Replica Clusters Scope > doc_v2.docx, HBase Read-Replica Clusters Scope doc_v2.pdf > > > Recently, changes (such as HBASE-17437) have unblocked HBase to run with a > root directory external to the cluster (such as in Amazon S3). 
This means > that the data is stored outside of the cluster and can be accessible after > the cluster has been terminated. One use case that is often asked about is > pointing multiple clusters to one root directory (sharing the data) to have > read resiliency in the case of a cluster failure. > > This JIRA is an umbrella JIRA to contain all the tasks necessary to create a > read-replica HBase cluster that is pointed at the same root directory. > > This requires making the Read-Replica cluster Read-Only (no metadata > operation or data operations). > Separating the hbase:meta table for each cluster (Otherwise HBase gets > confused with multiple clusters trying to update the meta table with their ip > addresses) > Adding refresh functionality for the meta table to ensure new metadata is > picked up on the read replica cluster. > Adding refresh functionality for HFiles for a given table to ensure new data > is picked up on the read replica cluster. > > This can be used with any existing cluster that is backed by an external > filesystem. > > Please note that this feature is still quite manual (with the potential for > automation later). > > More information on this particular feature can be found here: > https://aws.amazon.com/blogs/big-data/setting-up-read-replica-clusters-with-hbase-on-amazon-s3/ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
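[For context on the setup the umbrella above describes: running HBase with a root directory on external storage is a single hbase-site.xml setting. A minimal sketch follows; the bucket name is a placeholder, and the read-replica behavior itself still needs the changes tracked in this JIRA.]

```xml
<!-- hbase-site.xml: root directory on external storage (bucket/path are
     placeholders). Both the primary and a read-replica cluster would point
     at the same location; the replica additionally needs the read-only and
     meta-separation work described in this umbrella. -->
<property>
  <name>hbase.rootdir</name>
  <value>s3a://example-hbase-bucket/hbase</value>
</property>
```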
[jira] [Commented] (HBASE-20941) Create and implement HbckService in master
[ https://issues.apache.org/jira/browse/HBASE-20941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592075#comment-16592075 ] Umesh Agashe commented on HBASE-20941: -- retry > Create and implement HbckService in master > -- > > Key: HBASE-20941 > URL: https://issues.apache.org/jira/browse/HBASE-20941 > Project: HBase > Issue Type: Sub-task >Reporter: Umesh Agashe >Assignee: Umesh Agashe >Priority: Major > Attachments: hbase-20941.master.001.patch, > hbase-20941.master.002.patch, hbase-20941.master.003.patch, > hbase-20941.master.004.patch, hbase-20941.master.004.patch > > > Create HbckService in master and implement the following methods: > # setTableState(): If table states are inconsistent with actions/procedures > working on them, sometimes manipulating their states in meta fixes things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20941) Create and implement HbckService in master
[ https://issues.apache.org/jira/browse/HBASE-20941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Umesh Agashe updated HBASE-20941: - Attachment: hbase-20941.master.004.patch > Create and implement HbckService in master > -- > > Key: HBASE-20941 > URL: https://issues.apache.org/jira/browse/HBASE-20941 > Project: HBase > Issue Type: Sub-task >Reporter: Umesh Agashe >Assignee: Umesh Agashe >Priority: Major > Attachments: hbase-20941.master.001.patch, > hbase-20941.master.002.patch, hbase-20941.master.003.patch, > hbase-20941.master.004.patch, hbase-20941.master.004.patch > > > Create HbckService in master and implement the following methods: > # setTableState(): If table states are inconsistent with actions/procedures > working on them, sometimes manipulating their states in meta fixes things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir
[ https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592067#comment-16592067 ] Andrew Purtell commented on HBASE-20734: Modulo the compatibility question the changes lgtm, for what it's worth. > Colocate recovered edits directory with hbase.wal.dir > - > > Key: HBASE-20734 > URL: https://issues.apache.org/jira/browse/HBASE-20734 > Project: HBase > Issue Type: Improvement > Components: MTTR, Recovery, wal >Reporter: Ted Yu >Assignee: Zach York >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-20734.branch-1.001.patch, > HBASE-20734.master.001.patch, HBASE-20734.master.002.patch, > HBASE-20734.master.003.patch, HBASE-20734.master.004.patch > > > During investigation of HBASE-20723, I realized that we wouldn't get the best > performance when hbase.wal.dir is configured to be on different (fast) media > than hbase rootdir w.r.t. recovered edits since recovered edits directory is > currently under rootdir. > Such setup may not result in fast recovery when there is region server > failover. > This issue is to find proper (hopefully backward compatible) way in > colocating recovered edits directory with hbase.wal.dir . -- This message was sent by Atlassian JIRA (v7.6.3#76005)
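[The setup the description refers to, hbase.wal.dir on fast media separate from the rootdir, looks like the following sketch (both paths are placeholders); the proposal is to have the recovered.edits directory follow the first setting rather than live under the second.]

```xml
<!-- hbase-site.xml: WALs on fast media, rootdir on slower/bulk storage
     (paths are illustrative placeholders, not recommendations). -->
<property>
  <name>hbase.wal.dir</name>
  <value>hdfs://fast-cluster/hbase-wal</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://bulk-cluster/hbase</value>
</property>
```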
[jira] [Updated] (HBASE-21111) [Auth] IPC client fallback to simple auth (forward-port to branch-2)
[ https://issues.apache.org/jira/browse/HBASE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jack Bearden updated HBASE-21111: - Labels: branch-2 (was: ) > [Auth] IPC client fallback to simple auth (forward-port to branch-2) > > > Key: HBASE-21111 > URL: https://issues.apache.org/jira/browse/HBASE-21111 > Project: HBase > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Critical > Labels: branch-2 > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-11653) RegionObserver coprocessor cannot override KeyValue values in prePut()
[ https://issues.apache.org/jira/browse/HBASE-11653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Helmling updated HBASE-11653: -- Resolution: Won't Fix Status: Resolved (was: Patch Available) Resolving since this issue is only present in 0.94 versions, which are no longer released. > RegionObserver coprocessor cannot override KeyValue values in prePut() > -- > > Key: HBASE-11653 > URL: https://issues.apache.org/jira/browse/HBASE-11653 > Project: HBase > Issue Type: Bug > Components: Coprocessors >Affects Versions: 0.94.21 >Reporter: Gary Helmling >Assignee: Gary Helmling >Priority: Minor > Attachments: HBASE-11653_0.94.patch > > > Due to a bug in {{HRegion.internalPut()}}, any modifications that a > {{RegionObserver}} makes to a Put's family map in the {{prePut()}} hook are > lost. > This prevents coprocessors from modifying the values written by a {{Put}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21112) [Auth] IPC client fallback to simple auth (forward-port to master)
Jack Bearden created HBASE-21112: Summary: [Auth] IPC client fallback to simple auth (forward-port to master) Key: HBASE-21112 URL: https://issues.apache.org/jira/browse/HBASE-21112 Project: HBase Issue Type: Bug Reporter: Jack Bearden Assignee: Jack Bearden -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21111) [Auth] IPC client fallback to simple auth (forward-port to branch-2)
Jack Bearden created HBASE-21111: Summary: [Auth] IPC client fallback to simple auth (forward-port to branch-2) Key: HBASE-21111 URL: https://issues.apache.org/jira/browse/HBASE-21111 Project: HBase Issue Type: Bug Affects Versions: 2.1.0 Reporter: Jack Bearden Assignee: Jack Bearden -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir
[ https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592056#comment-16592056 ] Zach York commented on HBASE-20734: --- I haven't updated the branch-1 patch for that yet, I was waiting on some feedback on the master approach to avoid having to maintain two patches for all updates. I'll update the branch-1 patch when the master patch is agreed upon. If you are coming from a world where HBASE-20723 isn't applied and using a custom wal.dir, then yeah, it isn't really necessary, but since we have applied HBASE-20723 to a couple releases, we need to do the same thing in branch-1. For your testing though, the current branch-1 patch might be sufficient (if you have a custom wal.dir with no data that needs to be recovered yet) > Colocate recovered edits directory with hbase.wal.dir > - > > Key: HBASE-20734 > URL: https://issues.apache.org/jira/browse/HBASE-20734 > Project: HBase > Issue Type: Improvement > Components: MTTR, Recovery, wal >Reporter: Ted Yu >Assignee: Zach York >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-20734.branch-1.001.patch, > HBASE-20734.master.001.patch, HBASE-20734.master.002.patch, > HBASE-20734.master.003.patch, HBASE-20734.master.004.patch > > > During investigation of HBASE-20723, I realized that we wouldn't get the best > performance when hbase.wal.dir is configured to be on different (fast) media > than hbase rootdir w.r.t. recovered edits since recovered edits directory is > currently under rootdir. > Such setup may not result in fast recovery when there is region server > failover. > This issue is to find proper (hopefully backward compatible) way in > colocating recovered edits directory with hbase.wal.dir . -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21097) Flush pressure assertion may fail in testFlushThroughputTuning
[ https://issues.apache.org/jira/browse/HBASE-21097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592047#comment-16592047 ] Hadoop QA commented on HBASE-21097: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s{color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for instructions. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 6s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 58s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 7m 20s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}223m 33s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}257m 59s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21097 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937008/21097.v2.txt | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 8769615bc7cd 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / a452487a9b | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC3 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/14191/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results |
[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592045#comment-16592045 ] Jack Bearden commented on HBASE-20993: -- The test failure for hadoop.hbase.util.TestHBaseFsck may be unrelated? It is passing for me locally > [Auth] IPC client fallback to simple auth allowed doesn't work > -- > > Key: HBASE-20993 > URL: https://issues.apache.org/jira/browse/HBASE-20993 > Project: HBase > Issue Type: Bug > Components: Client, security >Affects Versions: 1.2.6 >Reporter: Reid Chan >Assignee: Jack Bearden >Priority: Critical > Attachments: HBASE-20993.001.patch, > HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, > HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, > HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, > HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.2.001.patch, > HBASE-20993.branch-1.wip.002.patch, HBASE-20993.branch-1.wip.patch > > > It is easily reproducible. > client's hbase-site.xml: hadoop.security.authentication:kerberos, > hbase.security.authentication:kerberos, > hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal > are right set > A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception: > {code} > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592) > at > 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738) > at > org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607) > at > org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55) > Caused by: GSSException: No valid credentials provided (Mechanism level: > Failed to find any Kerberos
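[For reference, the client-side configuration named in the reproduction steps above, expressed as an hbase-site.xml fragment (the three keys and values are quoted from the issue description; keytab/principal settings are omitted here):]

```xml
<!-- Client hbase-site.xml for the reproduction: kerberized client against a
     simple-auth cluster, with fallback to simple auth allowed. -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.ipc.client.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>
```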
[jira] [Commented] (HBASE-18840) Add functionality to refresh meta table at master startup
[ https://issues.apache.org/jira/browse/HBASE-18840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592041#comment-16592041 ] Zach York commented on HBASE-18840: --- I think either of these could potentially work. But yes, the main goal is to get the state of meta without RS ip addresses. Let me think on this for a little bit, especially with regard to moving this past a manually triggered command. Thanks for the input! > Add functionality to refresh meta table at master startup > - > > Key: HBASE-18840 > URL: https://issues.apache.org/jira/browse/HBASE-18840 > Project: HBase > Issue Type: Sub-task >Affects Versions: HBASE-18477 >Reporter: Zach York >Assignee: Zach York >Priority: Major > Attachments: HBASE-18840.HBASE-18477.001.patch, > HBASE-18840.HBASE-18477.002.patch, HBASE-18840.HBASE-18477.003 (2) (1).patch, > HBASE-18840.HBASE-18477.003 (2).patch, HBASE-18840.HBASE-18477.003.patch, > HBASE-18840.HBASE-18477.004.patch, HBASE-18840.HBASE-18477.005.patch, > HBASE-18840.HBASE-18477.006.patch, HBASE-18840.HBASE-18477.007.patch > > > If an HBase cluster’s hbase:meta table is deleted or a cluster is started with > a new meta table, HBase needs the functionality to synchronize its metadata > from storage. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir
[ https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592029#comment-16592029 ] Andrew Purtell commented on HBASE-20734: I looked at the branch-1 patch. I don't see where we also check the old location for a recovered edits file. Per above review discussion, this isn't necessary because it would have failed anyway? > Colocate recovered edits directory with hbase.wal.dir > - > > Key: HBASE-20734 > URL: https://issues.apache.org/jira/browse/HBASE-20734 > Project: HBase > Issue Type: Improvement > Components: MTTR, Recovery, wal >Reporter: Ted Yu >Assignee: Zach York >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-20734.branch-1.001.patch, > HBASE-20734.master.001.patch, HBASE-20734.master.002.patch, > HBASE-20734.master.003.patch, HBASE-20734.master.004.patch > > > During investigation of HBASE-20723, I realized that we wouldn't get the best > performance when hbase.wal.dir is configured to be on different (fast) media > than hbase rootdir w.r.t. recovered edits since recovered edits directory is > currently under rootdir. > Such setup may not result in fast recovery when there is region server > failover. > This issue is to find proper (hopefully backward compatible) way in > colocating recovered edits directory with hbase.wal.dir . -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-18840) Add functionality to refresh meta table at master startup
[ https://issues.apache.org/jira/browse/HBASE-18840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592028#comment-16592028 ] Sean Busbey commented on HBASE-18840: - The issue as I understand it is that meta has a superset of the information the RR cluster wants. The RR cluster wants to know "which of the files in the storage layer correspond to the table regions that are currently valid for the table?" and the primary cluster's meta includes, e.g., mapping those regions to active RS instances. What about relying on a consistent snapshot of the table -> region -> hfiles mapping that gets written out periodically to storage? > Add functionality to refresh meta table at master startup > - > > Key: HBASE-18840 > URL: https://issues.apache.org/jira/browse/HBASE-18840 > Project: HBase > Issue Type: Sub-task >Affects Versions: HBASE-18477 >Reporter: Zach York >Assignee: Zach York >Priority: Major > Attachments: HBASE-18840.HBASE-18477.001.patch, > HBASE-18840.HBASE-18477.002.patch, HBASE-18840.HBASE-18477.003 (2) (1).patch, > HBASE-18840.HBASE-18477.003 (2).patch, HBASE-18840.HBASE-18477.003.patch, > HBASE-18840.HBASE-18477.004.patch, HBASE-18840.HBASE-18477.005.patch, > HBASE-18840.HBASE-18477.006.patch, HBASE-18840.HBASE-18477.007.patch > > > If an HBase cluster’s hbase:meta table is deleted or a cluster is started with > a new meta table, HBase needs the functionality to synchronize its metadata > from storage. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
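[A toy sketch of the snapshot-manifest idea suggested above: periodically persist a consistent table -> region -> hfiles mapping to shared storage so a read-replica cluster can discover valid store files without consuming the primary's hbase:meta (which also carries RS addresses the replica must ignore). All names and the JSON format here are hypothetical, purely for illustration; this is not HBase API.]

```python
import json

def build_manifest(table_to_regions):
    """Build a manifest from an atomically captured mapping.

    table_to_regions: {table: {region: [hfile paths]}}.
    Sorting hfile lists keeps the serialized form deterministic.
    """
    return {
        "version": 1,
        "tables": {
            table: {region: sorted(hfiles) for region, hfiles in regions.items()}
            for table, regions in table_to_regions.items()
        },
    }

def write_manifest(manifest, fs_write):
    """Persist the manifest; fs_write is any callable that stores bytes
    in shared storage (e.g. an S3 put)."""
    fs_write(json.dumps(manifest, sort_keys=True).encode("utf-8"))

# Example: snapshot one table with two regions, "persisted" to a list.
snapshot = build_manifest({
    "t1": {"region-a": ["hfile2", "hfile1"], "region-b": ["hfile3"]},
})
blob = []
write_manifest(snapshot, blob.append)
```

The replica side would do the inverse: read the latest manifest, diff it against its cached view, and refresh only the tables whose hfile lists changed.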
[jira] [Commented] (HBASE-21067) Backport HBASE-17519 (Rollback the removed cells) to branch-1.3
[ https://issues.apache.org/jira/browse/HBASE-21067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592017#comment-16592017 ] Hadoop QA commented on HBASE-21067: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} branch-1.3 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 21s{color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_191 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 24s{color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 30s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_191 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 22s{color} | {color:red} hbase-server: The patch generated 4 new + 460 unchanged - 3 fixed = 464 total (was 463) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 25s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 22s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.5 2.7.4. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 40s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}116m 2s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.util.TestHBaseFsck | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:53dba69 | | JIRA Issue | HBASE-21067 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12937028/HBASE-21067.branch-1.3.001.patch | | Optional Tests | asflicense javac
[jira] [Commented] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems
[ https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592016#comment-16592016 ] Andrew Purtell commented on HBASE-20429: [~zyork] S3A, Hadoop 2.9. Bottom line I need to retest. > Support for mixed or write-heavy workloads on non-HDFS filesystems > -- > > Key: HBASE-20429 > URL: https://issues.apache.org/jira/browse/HBASE-20429 > Project: HBase > Issue Type: Umbrella >Reporter: Andrew Purtell >Priority: Major > > We can support reasonably well use cases on non-HDFS filesystems, like S3, > where an external writer has loaded (and continues to load) HFiles via the > bulk load mechanism, and then we serve out a read only workload at the HBase > API. > Mixed workloads or write-heavy workloads won't fare as well. In fact, data > loss seems certain. It will depend in the specific filesystem, but all of the > S3 backed Hadoop filesystems suffer from a couple of obvious problems, > notably a lack of atomic rename. > This umbrella will serve to collect some related ideas for consideration. -- This message was sent by Atlassian JIRA (v7.6.3#76005)