[jira] [Commented] (HBASE-21444) Recover meta in case of long ago dead region server appear in meta znode
[ https://issues.apache.org/jira/browse/HBASE-21444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679219#comment-16679219 ] Allan Yang commented on HBASE-21444: Honestly, I like the idea of scheduling an SCP ourselves if there is no SCP for the meta znode's server, rather than just waiting there for an operator like a sitting duck. > Recover meta in case of long ago dead region server appear in meta znode > > > Key: HBASE-21444 > URL: https://issues.apache.org/jira/browse/HBASE-21444 > Project: HBase > Issue Type: Bug > Affects Versions: 2.0.2 > Reporter: Ankit Singhal > Assignee: Ankit Singhal > Priority: Major > Attachments: HBASE-21444.branch-2.0.001.patch, HBASE-21444.branch-2.0.002.patch > > > Ambari Metrics Server uses HBase as storage and currently has different znodes (/hbase-unsecure and /hbase-secure) to differentiate secure and unsecure deployments of HBase. > It also supports rolling the cluster back from kerberised to non-kerberised (which includes changing the znode from /hbase-secure to /hbase-unsecure), but with HBase 2.0 the meta-region-server znode from the old zookeeper znodes will point at a regionserver that is long gone, and there will be no procedure to transition it, so it gets stuck indefinitely. > One option is to clear the znodes before rolling back, but since this used to work with prior releases thanks to RecoverMetaProcedure, the ask is whether we can fix meta assignment when a wrong state is present in the znode. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21451) The way we maintain the lastestPaths in ReplicationSourceManager is broken when sync replication is used
[ https://issues.apache.org/jira/browse/HBASE-21451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-21451: -- Attachment: HBASE-21451-v1.patch > The way we maintain the lastestPaths in ReplicationSourceManager is broken > when sync replication is used > > > Key: HBASE-21451 > URL: https://issues.apache.org/jira/browse/HBASE-21451 > Project: HBase > Issue Type: Sub-task > Components: Replication >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-21451-v1.patch, HBASE-21451.patch > > > Here is the problematic code > {code} > // Add to latestPaths > Iterator iterator = latestPaths.iterator(); > while (iterator.hasNext()) { > Path path = iterator.next(); > if (path.getName().contains(logPrefix)) { > iterator.remove(); > break; > } > } > this.latestPaths.add(newLog); > {code} > Here we just use contains, but for sync replication wal group, it just adds > something after the default prefix for regionserver, so the code will be > broken... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
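The failure mode in the quoted snippet can be reproduced in isolation. Below is a minimal, hypothetical sketch (the `walPrefix`/`findMatch` helpers and the wal name layout are assumptions for illustration, not the actual ReplicationSourceManager code) showing why `contains(logPrefix)` can evict the wrong entry once a sync replication wal group appends a suffix to the default regionserver prefix, and how an exact prefix comparison avoids it:

```java
import java.util.ArrayList;
import java.util.List;

public class LatestPathsDemo {
    // Extract the wal group prefix: everything before the trailing ".<timestamp>".
    static String walPrefix(String walName) {
        return walName.substring(0, walName.lastIndexOf('.'));
    }

    // Mimics the latestPaths lookup: buggy contains() vs. exact prefix match.
    static String findMatch(List<String> latestNames, String logPrefix, boolean exact) {
        for (String name : latestNames) {
            boolean hit = exact ? walPrefix(name).equals(logPrefix) : name.contains(logPrefix);
            if (hit) {
                return name;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String defaultPrefix = "host%2C16020%2C1";
        List<String> latest = new ArrayList<>();
        // A sync replication wal group appends a suffix to the default prefix,
        // so its wal name still *contains* the default prefix.
        latest.add(defaultPrefix + ".syncrep.1541638000001");
        latest.add(defaultPrefix + ".1541638000002");

        // Buggy contains() check matches the sync-replication group's wal first.
        System.out.println(findMatch(latest, defaultPrefix, false));
        // Exact prefix comparison finds the default group's wal.
        System.out.println(findMatch(latest, defaultPrefix, true));
    }
}
```

The fix direction this suggests is comparing the full wal group prefix for equality rather than substring containment.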
[jira] [Commented] (HBASE-20952) Re-visit the WAL API
[ https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679256#comment-16679256 ] Hudson commented on HBASE-20952: Results for branch HBASE-20952 [build #43 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20952/43/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20952/43//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20952/43//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20952/43//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Re-visit the WAL API > > > Key: HBASE-20952 > URL: https://issues.apache.org/jira/browse/HBASE-20952 > Project: HBase > Issue Type: Improvement > Components: wal >Reporter: Josh Elser >Priority: Major > Attachments: 20952.v1.txt > > > Take a step back from the current WAL implementations and think about what an > HBase WAL API should look like. What are the primitive calls that we require > to guarantee durability of writes with a high degree of performance? > The API needs to take the current implementations into consideration. We > should also have a mind for what is happening in the Ratis LogService (but > the LogService should not dictate what HBase's WAL API looks like RATIS-272). > Other "systems" inside of HBase that use WALs are replication and > backup&restore. Replication has the use-case for "tail"'ing the WAL which we > should provide via our new API. 
B&R doesn't do anything fancy (IIRC). We should make sure all consumers are generally going to be OK with the API we create. > The API may be "OK" (or OK in part). We also need to consider other methods which were "bolted" on, such as {{AbstractFSWAL}} and {{WALFileLengthProvider}}. Other corners of "WAL use" (like {{WALSplitter}}) should also be looked at to use WAL APIs only. > We also need to make sure that adequate interface audience and stability annotations are chosen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
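The primitives the discussion asks about (durable appends, a sync barrier, and a tail view for replication) can be modeled with a toy in-memory class. This is purely an illustrative sketch of the shape such an API might take; none of these names are the actual HBase WAL interfaces:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical, minimal in-memory model of WAL primitives: append returns a
// sequence id, sync is a durability barrier, tail serves replication readers.
public class InMemoryWal {
    private final List<byte[]> entries = new ArrayList<>();
    private long synced = -1;  // highest sequence id known durable

    public synchronized long append(byte[] entry) {
        entries.add(entry);
        return entries.size() - 1;  // sequence id of this entry
    }

    public synchronized void sync(long sequenceId) {
        // Everything up to and including sequenceId is now durable.
        synced = Math.max(synced, sequenceId);
    }

    // Replication-style tail: all durable entries after fromSeqId (exclusive).
    public synchronized Iterator<byte[]> tail(long fromSeqId) {
        List<byte[]> out = new ArrayList<>();
        for (long i = fromSeqId + 1; i <= synced; i++) {
            out.add(entries.get((int) i));
        }
        return out.iterator();
    }

    public static void main(String[] args) {
        InMemoryWal wal = new InMemoryWal();
        wal.append("put-1".getBytes());
        long seq = wal.append("put-2".getBytes());
        wal.sync(seq);
        Iterator<byte[]> it = wal.tail(-1);
        while (it.hasNext()) {
            System.out.println(new String(it.next()));
        }
    }
}
```

A tail primitive like this is what would let replication consume the WAL through the API instead of reading files directly.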
[jira] [Updated] (HBASE-21419) Show sync replication related field for replication peer on master web UI
[ https://issues.apache.org/jira/browse/HBASE-21419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-21419: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Pushed to master. Thanks [~tianjingyun] for contributing. > Show sync replication related field for replication peer on master web UI > - > > Key: HBASE-21419 > URL: https://issues.apache.org/jira/browse/HBASE-21419 > Project: HBase > Issue Type: Sub-task > Components: Replication >Reporter: Duo Zhang >Assignee: Jingyun Tian >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-21419.master.001.patch, > HBASE-21419.master.002.patch, HBASE-21419.master.003.patch, Screenshot from > 2018-11-05 16-02-11.png > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21451) The way we maintain the lastestPaths in ReplicationSourceManager is broken when sync replication is used
[ https://issues.apache.org/jira/browse/HBASE-21451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679353#comment-16679353 ] Duo Zhang commented on HBASE-21451: --- [~zghaobac] FYI. > The way we maintain the lastestPaths in ReplicationSourceManager is broken > when sync replication is used > > > Key: HBASE-21451 > URL: https://issues.apache.org/jira/browse/HBASE-21451 > Project: HBase > Issue Type: Sub-task > Components: Replication >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-21451.patch > > > Here is the problematic code > {code} > // Add to latestPaths > Iterator iterator = latestPaths.iterator(); > while (iterator.hasNext()) { > Path path = iterator.next(); > if (path.getName().contains(logPrefix)) { > iterator.remove(); > break; > } > } > this.latestPaths.add(newLog); > {code} > Here we just use contains, but for sync replication wal group, it just adds > something after the default prefix for regionserver, so the code will be > broken... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21246) Introduce WALIdentity interface
[ https://issues.apache.org/jira/browse/HBASE-21246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679257#comment-16679257 ] Reid Chan commented on HBASE-21246: --- Can we file a sub-task for refactoring WALFactory? Let's start from the fundamental classes; it's not a big one compared to this, IMO. Back to WALIdentity: the interface looks good to me, no further comments on the introduction. > Introduce WALIdentity interface > --- > > Key: HBASE-21246 > URL: https://issues.apache.org/jira/browse/HBASE-21246 > Project: HBase > Issue Type: Sub-task > Reporter: Ted Yu > Assignee: Ted Yu > Priority: Major > Fix For: HBASE-20952 > > Attachments: 21246.003.patch, 21246.20.txt, 21246.21.txt, 21246.23.txt, 21246.24.txt, 21246.25.txt, 21246.HBASE-20952.001.patch, 21246.HBASE-20952.002.patch, 21246.HBASE-20952.004.patch, 21246.HBASE-20952.005.patch, 21246.HBASE-20952.007.patch, 21246.HBASE-20952.008.patch, replication-src-creates-wal-reader.jpg, wal-factory-providers.png, wal-providers.png, wal-splitter-reader.jpg, wal-splitter-writer.jpg > > > We are introducing the WALIdentity interface so that the WAL representation can be decoupled from the distributed filesystem. > The interface provides a getName method whose return value can represent a filename in a distributed filesystem environment or the name of the stream when the WAL is backed by a log stream. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
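The decoupling described above (one identity abstraction, one implementation per backing store) can be sketched as follows. Only `getName()` comes from the issue text; the two implementations and their names are illustrative assumptions, not the patch's actual classes:

```java
public class WalIdentityDemo {
    // Sketch of the WALIdentity idea: an identity not tied to a filesystem path.
    interface WALIdentity {
        String getName();
    }

    // Filesystem-backed WAL: the name is the last component of the path.
    static class FsWalIdentity implements WALIdentity {
        final String path;
        FsWalIdentity(String path) { this.path = path; }
        public String getName() { return path.substring(path.lastIndexOf('/') + 1); }
    }

    // Stream-backed WAL (e.g. a log service): the name is the stream name itself.
    static class StreamWalIdentity implements WALIdentity {
        final String streamName;
        StreamWalIdentity(String streamName) { this.streamName = streamName; }
        public String getName() { return streamName; }
    }

    public static void main(String[] args) {
        WALIdentity fs = new FsWalIdentity("/hbase/WALs/rs1/rs1.1541638000000");
        WALIdentity stream = new StreamWalIdentity("wal-log-stream-42");
        System.out.println(fs.getName());      // rs1.1541638000000
        System.out.println(stream.getName());  // wal-log-stream-42
    }
}
```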
[jira] [Commented] (HBASE-21451) The way we maintain the lastestPaths in ReplicationSourceManager is broken when sync replication is used
[ https://issues.apache.org/jira/browse/HBASE-21451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679363#comment-16679363 ] Duo Zhang commented on HBASE-21451: --- Bad import. > The way we maintain the lastestPaths in ReplicationSourceManager is broken > when sync replication is used > > > Key: HBASE-21451 > URL: https://issues.apache.org/jira/browse/HBASE-21451 > Project: HBase > Issue Type: Sub-task > Components: Replication >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-21451-v1.patch, HBASE-21451.patch > > > Here is the problematic code > {code} > // Add to latestPaths > Iterator iterator = latestPaths.iterator(); > while (iterator.hasNext()) { > Path path = iterator.next(); > if (path.getName().contains(logPrefix)) { > iterator.remove(); > break; > } > } > this.latestPaths.add(newLog); > {code} > Here we just use contains, but for sync replication wal group, it just adds > something after the default prefix for regionserver, so the code will be > broken... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21454) Kill zk spew
[ https://issues.apache.org/jira/browse/HBASE-21454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21454: -- Release Note: Set all zookeeper logging to WARN instead of INFO Status: Patch Available (was: Open) > Kill zk spew > > > Key: HBASE-21454 > URL: https://issues.apache.org/jira/browse/HBASE-21454 > Project: HBase > Issue Type: Bug > Components: logging, Zookeeper >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21454.master.001.patch > > > Kill the zk spew. This is radical dropping startup listing of CLASSPATH and > all properties. Can dial back-in what we need after this patch goes in. > I get spew each time I run a little command in spark-shell. Annoying. Always > been annoying in all logs. > More might be needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
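The release note's "set all zookeeper logging to WARN" would look roughly like the following in a log4j.properties file. This is an illustrative config fragment, not the contents of the attached patch, which may set the levels elsewhere:

```properties
# Quiet ZooKeeper client/server chatter down to WARN (illustrative snippet;
# the actual HBASE-21454 patch may configure this in different files).
log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.apache.hadoop.hbase.zookeeper=WARN
```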
[jira] [Commented] (HBASE-20604) ProtobufLogReader#readNext can incorrectly loop to the same position in the stream until the WAL is rolled
[ https://issues.apache.org/jira/browse/HBASE-20604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679131#comment-16679131 ] Andrew Purtell commented on HBASE-20604: Back from vacation, sorry for the delay. Sean +1ed this pending a good precommit result, which we have, and I don't have any additional comments, so I'll commit this now. > ProtobufLogReader#readNext can incorrectly loop to the same position in the stream until the WAL is rolled > -- > > Key: HBASE-20604 > URL: https://issues.apache.org/jira/browse/HBASE-20604 > Project: HBase > Issue Type: Bug > Components: Replication, wal > Affects Versions: 3.0.0 > Reporter: Esteban Gutierrez > Assignee: Esteban Gutierrez > Priority: Critical > Attachments: HBASE-20604.002.patch, HBASE-20604.003.patch, HBASE-20604.004.patch, HBASE-20604.005.patch, HBASE-20604.patch > > > Every time we call {{ProtobufLogReader#readNext}} we consume the input stream associated with the {{FSDataInputStream}} from the WAL that we are reading. > Under certain conditions, e.g. when using encryption at rest ({{CryptoInputStream}}), the stream can return partial data, which can cause a premature EOF that makes {{inputStream.getPos()}} return to the same original position, causing {{ProtobufLogReader#readNext}} to retry the reads until the WAL is rolled. > The side effect of this issue is that {{ReplicationSource}} can get stuck until the WAL is rolled, causing replication delays of up to an hour in some cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
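The partial-read behavior described for {{CryptoInputStream}} is the classic case where a single short {{read()}} must not be treated as end-of-stream. A defensive read loop illustrates the distinction; this is a sketch of the general technique, not the actual HBASE-20604 patch:

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class ReadFullyDemo {
    // Keep reading until len bytes arrive or the stream truly ends; a single
    // short read (as a decrypting stream may produce) is not treated as EOF.
    static void readFully(InputStream in, byte[] buf, int off, int len) throws IOException {
        int done = 0;
        while (done < len) {
            int n = in.read(buf, off + done, len - done);
            if (n < 0) {
                throw new EOFException("stream ended after " + done + " of " + len + " bytes");
            }
            done += n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "wal-entry".getBytes();
        // Stream that returns at most 2 bytes per read(), mimicking partial reads.
        InputStream trickle = new ByteArrayInputStream(data) {
            @Override
            public synchronized int read(byte[] b, int off, int len) {
                return super.read(b, off, Math.min(len, 2));
            }
        };
        byte[] out = new byte[data.length];
        readFully(trickle, out, 0, out.length);
        System.out.println(new String(out));  // wal-entry
    }
}
```

With a naive single `read()` in place of the loop, the demo would see only the first 2 bytes and a reader that keys its position off the partial result could loop exactly as the issue describes.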
[jira] [Commented] (HBASE-21411) Need to document the snapshot metric data that is shown in HBase Master Web UI
[ https://issues.apache.org/jira/browse/HBASE-21411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678936#comment-16678936 ] Hadoop QA commented on HBASE-21411: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 17m 3s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 5m 15s{color} | {color:blue} branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 5m 5s{color} | {color:blue} patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. 
{color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 9s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 51m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21411 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12947290/0001-Patch-for-HBASE-21411.patch | | Optional Tests | dupname asflicense refguide mvnsite | | uname | Linux 09342b158a8d 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 468c1e77bf | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/14986/artifact/patchprocess/branch-site/book.html | | refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/14986/artifact/patchprocess/patch-site/book.html | | Max. process+thread count | 87 (vs. ulimit of 1) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/14986/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. 
> Need to document the snapshot metric data that is shown in HBase Master Web UI > -- > > Key: HBASE-21411 > URL: https://issues.apache.org/jira/browse/HBASE-21411 > Project: HBase > Issue Type: Improvement > Components: documentation >Affects Versions: 1.3.0, 2.0.0 >Reporter: Roland Teague >Assignee: Roland Teague >Priority: Major > Attachments: 0001-Patch-for-HBASE-21411.patch, > HBASE-21411.master.001.patch > > > We need to add documentation into the Reference Guide for the work that was > done in HBASE-15415. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21450) [documentation] Point spark doc at hbase-connectors spark
[ https://issues.apache.org/jira/browse/HBASE-21450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679067#comment-16679067 ] Hadoop QA commented on HBASE-21450: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 41s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 6m 2s{color} | {color:blue} branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 5m 52s{color} | {color:blue} patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 21m 59s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21450 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12947295/HBASE-21450.master.001.patch | | Optional Tests | dupname asflicense refguide | | uname | Linux bf1a90ebe32e 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 17 11:07:07 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 468c1e77bf | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/14987/artifact/patchprocess/branch-site/book.html | | refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/14987/artifact/patchprocess/patch-site/book.html | | Max. process+thread count | 87 (vs. ulimit of 1) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/14987/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > [documentation] Point spark doc at hbase-connectors spark > - > > Key: HBASE-21450 > URL: https://issues.apache.org/jira/browse/HBASE-21450 > Project: HBase > Issue Type: Bug > Components: documentation, spark >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21450.master.001.patch > > > Review the spark chapter in refguide. Have it point at the hbase-connectors > project rather than to local spark modules. Revisit the examples to make sure > they still work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21450) [documentation] Point spark doc at hbase-connectors spark
stack created HBASE-21450: - Summary: [documentation] Point spark doc at hbase-connectors spark Key: HBASE-21450 URL: https://issues.apache.org/jira/browse/HBASE-21450 Project: HBase Issue Type: Bug Components: documentation, spark Reporter: stack Assignee: stack Fix For: 3.0.0, 2.2.0 Review the spark chapter in refguide. Have it point at the hbase-connectors project rather than to local spark modules. Revisit the examples to make sure they still work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21450) [documentation] Point spark doc at hbase-connectors spark
[ https://issues.apache.org/jira/browse/HBASE-21450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21450: -- Status: Patch Available (was: Open) First cut. Points at hbase-connectors. TODO, run through examples. > [documentation] Point spark doc at hbase-connectors spark > - > > Key: HBASE-21450 > URL: https://issues.apache.org/jira/browse/HBASE-21450 > Project: HBase > Issue Type: Bug > Components: documentation, spark >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21450.master.001.patch > > > Review the spark chapter in refguide. Have it point at the hbase-connectors > project rather than to local spark modules. Revisit the examples to make sure > they still work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21450) [documentation] Point spark doc at hbase-connectors spark
[ https://issues.apache.org/jira/browse/HBASE-21450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21450: -- Attachment: HBASE-21450.master.001.patch > [documentation] Point spark doc at hbase-connectors spark > - > > Key: HBASE-21450 > URL: https://issues.apache.org/jira/browse/HBASE-21450 > Project: HBase > Issue Type: Bug > Components: documentation, spark >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21450.master.001.patch > > > Review the spark chapter in refguide. Have it point at the hbase-connectors > project rather than to local spark modules. Revisit the examples to make sure > they still work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21411) Need to document the snapshot metric data that is shown in HBase Master Web UI
[ https://issues.apache.org/jira/browse/HBASE-21411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roland Teague updated HBASE-21411: -- Attachment: 0001-Patch-for-HBASE-21411.patch > Need to document the snapshot metric data that is shown in HBase Master Web UI > -- > > Key: HBASE-21411 > URL: https://issues.apache.org/jira/browse/HBASE-21411 > Project: HBase > Issue Type: Improvement > Components: documentation >Affects Versions: 1.3.0, 2.0.0 >Reporter: Roland Teague >Assignee: Roland Teague >Priority: Major > Attachments: 0001-Patch-for-HBASE-21411.patch, > HBASE-21411.master.001.patch > > > We need to add documentation into the Reference Guide for the work that was > done in HBASE-15415. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21411) Need to document the snapshot metric data that is shown in HBase Master Web UI
[ https://issues.apache.org/jira/browse/HBASE-21411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678878#comment-16678878 ] Roland Teague commented on HBASE-21411: --- [~busbey] I've recreated the patch using git format-patch and have resubmitted the patch. > Need to document the snapshot metric data that is shown in HBase Master Web UI > -- > > Key: HBASE-21411 > URL: https://issues.apache.org/jira/browse/HBASE-21411 > Project: HBase > Issue Type: Improvement > Components: documentation >Affects Versions: 1.3.0, 2.0.0 >Reporter: Roland Teague >Assignee: Roland Teague >Priority: Major > Attachments: 0001-Patch-for-HBASE-21411.patch, > HBASE-21411.master.001.patch > > > We need to add documentation into the Reference Guide for the work that was > done in HBASE-15415. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21440) Assign procedure on the crashed server is not properly interrupted
[ https://issues.apache.org/jira/browse/HBASE-21440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser updated HBASE-21440: --- Status: Patch Available (was: Open) > Assign procedure on the crashed server is not properly interrupted > -- > > Key: HBASE-21440 > URL: https://issues.apache.org/jira/browse/HBASE-21440 > Project: HBase > Issue Type: Bug > Affects Versions: 2.0.2 > Reporter: Ankit Singhal > Assignee: Ankit Singhal > Priority: Major > Attachments: HBASE-21440.branch-2.0.001.patch, HBASE-21440.branch-2.0.002.patch > > > When a server crashes, its SCP checks whether there is already a procedure assigning a region on the crashed server. If it finds one, the SCP just interrupts the already running AssignProcedure by calling remoteCallFailed, which internally just changes the region node state to OFFLINE and sends the procedure back into the transition queue for assignment with a new plan. > But, due to a race between the call to remoteCallFailed and the current state of the already running assign procedure (REGION_TRANSITION_FINISH: where the region is already opened), it is possible that the assign procedure goes ahead and updates the regionStateNode to OPEN on a crashed server. > As the SCP had already skipped this region for assignment, relying on the existing assign procedure to do the right thing, this confusion leaves the region in an inaccessible state. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
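One way to picture the guard the race calls for: before the assign procedure records OPEN, re-check that the hosting server has not crashed in the meantime. The sketch below is a heavily simplified illustration of that idea; the names and the single-threaded model are assumptions, not the actual AssignmentManager code:

```java
import java.util.HashSet;
import java.util.Set;

public class OpenGuardDemo {
    enum State { OFFLINE, OPEN }

    // Before finishing REGION_TRANSITION_FINISH, re-check the dead-server list
    // so a late-arriving "opened" result on a crashed server is not recorded.
    static State finishTransition(String server, Set<String> deadServers) {
        if (deadServers.contains(server)) {
            // An SCP interrupted us; leave the region OFFLINE for reassignment.
            return State.OFFLINE;
        }
        return State.OPEN;
    }

    public static void main(String[] args) {
        Set<String> dead = new HashSet<>();
        dead.add("rs1");
        System.out.println(finishTransition("rs1", dead)); // OFFLINE
        System.out.println(finishTransition("rs2", dead)); // OPEN
    }
}
```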
[jira] [Commented] (HBASE-21439) StochasticLoadBalancer RegionLoads aren’t being used in RegionLoad cost functions
[ https://issues.apache.org/jira/browse/HBASE-21439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678835#comment-16678835 ] Hadoop QA commented on HBASE-21439: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 21s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 51s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 48s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} hbase-client generated 0 new + 100 unchanged - 3 fixed = 100 total (was 103) {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 45s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 54s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 42s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 59s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}134m 38s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 56s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}180m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21439 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12947266/HBASE-21439-master.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 314579c0967c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/compo
[jira] [Commented] (HBASE-21444) Recover meta in case of long ago dead region server appear in meta znode
[ https://issues.apache.org/jira/browse/HBASE-21444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678808#comment-16678808 ] Josh Elser commented on HBASE-21444: bq. Could you describe more about the case? Then we can decide if it is a normal case HBase should cover or it is abnormal, should be fixed externally like delete the meta node or by HBCK2. I've talked to Ankit about this one once or twice now. The Ambari Metrics case is definitely "odd-ball". Getting a super-old ZK root znode that doesn't jive with meta or HDFS contents isn't something we'd want to "plan for" in HBase. However, we have been noticing a trend of issues that cause meta to be "orphaned" in an unassigned state. I think we can (greatly) improve the user-experience by accepting that we will have more bugs like this ("for some reason, meta is offline and we don't have an SCP which will get it assigned"), and do some extra work to try to get it online ourselves. That's my take, anyways :) I need to read up on HBASE-21035 too. Thanks for the pointer! > Recover meta in case of long ago dead region server appear in meta znode > > > Key: HBASE-21444 > URL: https://issues.apache.org/jira/browse/HBASE-21444 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.2 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Major > Attachments: HBASE-21444.branch-2.0.001.patch, > HBASE-21444.branch-2.0.002.patch > > > Ambari metric server uses HBase as storage and currently have different > znodes (/hbase-unsecure and /hbase-secure) to differentiate secure/unsecure > deployment of HBase. > As it also supports the rollback of the cluster from kerberised to > non-kerberised (includes step of changing znode from /hbase-secure to > /hbase-unsecure) , but with HBase 2.0 , meta-region-server znode from old > zookeeper znodes will have regionserver which was long ago gone and there > will be no procedure to transition it, resulting it to get stuck for lifetime. 
> One option is to clear the znodes before rolling back, but since this used to work with prior releases thanks to RecoverMetaProcedure, the ask is whether we can fix meta assignment when a wrong state is present in the znode. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21440) Assign procedure on the crashed server is not properly interrupted
[ https://issues.apache.org/jira/browse/HBASE-21440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678797#comment-16678797 ] Josh Elser commented on HBASE-21440: Clicked the "Patch Available" button. Great synopsis Ankit, and thanks for the great review, Allan. Love to see nice, clear explanations about a gross/rare bug :) > Assign procedure on the crashed server is not properly interrupted > -- > > Key: HBASE-21440 > URL: https://issues.apache.org/jira/browse/HBASE-21440 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.2 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Major > Attachments: HBASE-21440.branch-2.0.001.patch, > HBASE-21440.branch-2.0.002.patch > > > When a server crashes, its SCP checks whether there is already a procedure assigning a region on the crashed server. If it finds one, the SCP simply interrupts the already-running AssignProcedure by calling remoteCallFailed, which internally changes the region node state to OFFLINE and sends the procedure back through the transition queue for assignment with a new plan. > But due to a race between the remoteCallFailed call and the current state of the already-running assign procedure (REGION_TRANSITION_FINISH, where the region is already opened), the assign procedure can go ahead and update the regionStateNode to OPEN on a crashed server. > Since the SCP had already skipped this region for assignment, relying on the existing assign procedure to do the right thing, this confusion leaves the region in an inaccessible state. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21449) [hbase-connectors] Test can do secure hbase-spark
stack created HBASE-21449: - Summary: [hbase-connectors] Test can do secure hbase-spark Key: HBASE-21449 URL: https://issues.apache.org/jira/browse/HBASE-21449 Project: HBase Issue Type: Bug Components: hbase-connectors, security, spark Reporter: stack In the scope document attached to HBASE-18405, an exit criterion is that a user can run the spark-hbase integration in secure mode. It suggests that a manual test is good enough for the first phase. This issue is for running a manual test of a secure setup. [~busbey] thought he might be able to help out. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21439) StochasticLoadBalancer RegionLoads aren’t being used in RegionLoad cost functions
[ https://issues.apache.org/jira/browse/HBASE-21439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678766#comment-16678766 ] Ted Yu commented on HBASE-21439: StochasticLoadBalancer tests passed. lgtm, pending QA. > StochasticLoadBalancer RegionLoads aren’t being used in RegionLoad cost > functions > - > > Key: HBASE-21439 > URL: https://issues.apache.org/jira/browse/HBASE-21439 > Project: HBase > Issue Type: Bug > Components: Balancer >Affects Versions: 1.3.2.1, 2.0.2 >Reporter: Ben Lau >Assignee: Ben Lau >Priority: Major > Attachments: HBASE-21439-master.patch > > > In StochasticLoadBalancer.updateRegionLoad() the region loads are being put > into the map with Bytes.toString(regionName). > First, this is a problem because Bytes.toString() assumes that the byte array > is a UTF8 encoded String but there is no guarantee that regionName bytes are > legal UTF8. > Secondly, in BaseLoadBalancer.registerRegion, we are reading the region loads > out of the load map not using Bytes.toString() but using > region.getRegionNameAsString() and region.getEncodedName(). So the load > balancer will not see or use any of the cluster's RegionLoad history. > There are 2 primary ways to solve this issue, assuming we want to stay with > String keys for the load map (seems reasonable to aid debugging). We can > either fix updateRegionLoad to store the regionName as a string properly or > we can update both the reader & writer to use a new common valid String > representation. > Will post a patch assuming we want to pursue the original intention, i.e. > store regionNameAsAString for the loadmap key, but I'm open to fixing this a > different way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
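The UTF-8 hazard called out above is easy to demonstrate outside HBase. Below is a minimal, hypothetical sketch (plain Java, not HBase code; the class and method names are invented for illustration) of why keying a map with a Bytes.toString()-style UTF-8 decode of arbitrary region-name bytes is unsafe: invalid sequences decode lossily to U+FFFD, so distinct byte arrays can collide under the same key and lookups by a differently derived string can miss.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class RegionLoadKeyDemo {
    // Mimics Bytes.toString(byte[]): assumes the bytes are valid UTF-8.
    static String utf8Key(byte[] regionName) {
        return new String(regionName, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Region names may contain arbitrary bytes; 0xFF is never valid UTF-8.
        byte[] regionName = {'t', ',', (byte) 0xFF, (byte) 0x81, ','};

        String key = utf8Key(regionName);
        byte[] roundTrip = key.getBytes(StandardCharsets.UTF_8);

        // Each invalid byte decodes to U+FFFD, so the round trip is lossy:
        System.out.println(Arrays.equals(regionName, roundTrip)); // false

        // Two distinct region names can therefore collide under this keying.
        byte[] other = {'t', ',', (byte) 0xFE, (byte) 0x80, ','};
        Map<String, String> loads = new HashMap<>();
        loads.put(utf8Key(regionName), "load-A");
        loads.put(utf8Key(other), "load-B"); // silently overwrites "load-A"
        System.out.println(loads.size()); // 1
    }
}
```

This is why writing the map with one string derivation and reading it with region.getRegionNameAsString()/getEncodedName(), as the description notes, means the balancer never sees the stored RegionLoad history.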
[jira] [Commented] (HBASE-21355) HStore's storeSize is calculated repeatedly which causing the confusing region split
[ https://issues.apache.org/jira/browse/HBASE-21355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678681#comment-16678681 ] Sean Busbey commented on HBASE-21355: - did this not impact branch-1.2 or was it just overlooked? > HStore's storeSize is calculated repeatedly which causing the confusing > region split > - > > Key: HBASE-21355 > URL: https://issues.apache.org/jira/browse/HBASE-21355 > Project: HBase > Issue Type: Bug > Components: regionserver >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Blocker > Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 2.1.1, 2.0.3, 1.4.9 > > Attachments: HBASE-21355.addendum.patch, HBASE-21355.addendum.patch, > HBASE-21355.branch-1.patch, HBASE-21355.v1.patch > > > When testing branch-2's write performance in our internal cluster, we found that regions were being inexplicably split. > We use the default ConstantSizeRegionSplitPolicy with hbase.hregion.max.filesize=40G, but a region would be split even though its byte size was less than 40G (only ~6G). > Checking the code, I found that the following path accumulates the store's storeSize to a very big value, because nothing along the path resets it: > {code} > RsRpcServices#getRegionInfo > -> HRegion#isMergeable > -> HRegion#hasReferences > -> HStore#hasReferences > -> HStore#openStoreFiles > {code} > BTW, we also seem to have forgotten to maintain the read replica's storeSize when refreshing the store files. > One comment here: I moved the storeSize calculation out of the loadStoreFiles() method, because the secondary read replica's refreshStoreFiles() also uses loadStoreFiles() to refresh its store files, and it updates the storeSize in completeCompaction(..) at the end (just like a compaction), so there is no need to calculate the storeSize twice. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
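The effect of the call path above can be reproduced in miniature. The following hypothetical sketch (field and method names are invented for illustration, not the real HStore internals) contrasts an accumulate-only refresh, which inflates the tracked size on every hasReferences()-style call, with a recompute-from-scratch refresh:

```java
import java.util.List;

public class StoreSizeDemo {
    long storeSize; // analogous to the tracked store size

    // Buggy pattern: every refresh adds the store file sizes again.
    void refreshBuggy(List<Long> storeFileSizes) {
        for (long size : storeFileSizes) {
            storeSize += size;
        }
    }

    // Fixed pattern: recompute the total from scratch on each refresh.
    void refreshFixed(List<Long> storeFileSizes) {
        long newSize = 0;
        for (long size : storeFileSizes) {
            newSize += size;
        }
        storeSize = newSize;
    }

    public static void main(String[] args) {
        List<Long> files = List.of(3L << 30, 3L << 30); // two ~3G files, ~6G total

        StoreSizeDemo buggy = new StoreSizeDemo();
        // Each getRegionInfo -> hasReferences call re-opens the store files...
        for (int call = 0; call < 7; call++) {
            buggy.refreshBuggy(files);
        }
        // ...so a ~6G store soon appears to exceed a 40G split threshold.
        System.out.println(buggy.storeSize > (40L << 30)); // true

        StoreSizeDemo fixed = new StoreSizeDemo();
        for (int call = 0; call < 7; call++) {
            fixed.refreshFixed(files);
        }
        System.out.println(fixed.storeSize == (6L << 30)); // true
    }
}
```

The actual patch, per the comment above, moves the size calculation out of loadStoreFiles(); the point here is only that a refresh path which accumulates without resetting grows the tracked size on every call.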
[jira] [Commented] (HBASE-21411) Need to document the snapshot metric data that is shown in HBase Master Web UI
[ https://issues.apache.org/jira/browse/HBASE-21411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678655#comment-16678655 ] Hadoop QA commented on HBASE-21411: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 59s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 5m 0s{color} | {color:blue} branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 5m 7s{color} | {color:blue} patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. 
{color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 9s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 48m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21411 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12946306/HBASE-21411.master.001.patch | | Optional Tests | dupname asflicense refguide mvnsite | | uname | Linux 9602af996a51 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 468c1e77bf | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/14983/artifact/patchprocess/branch-site/book.html | | refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/14983/artifact/patchprocess/patch-site/book.html | | Max. process+thread count | 87 (vs. ulimit of 1) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/14983/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. 
> Need to document the snapshot metric data that is shown in HBase Master Web UI > -- > > Key: HBASE-21411 > URL: https://issues.apache.org/jira/browse/HBASE-21411 > Project: HBase > Issue Type: Improvement > Components: documentation >Affects Versions: 1.3.0, 2.0.0 >Reporter: Roland Teague >Assignee: Roland Teague >Priority: Major > Attachments: HBASE-21411.master.001.patch > > > We need to add documentation into the Reference Guide for the work that was > done in HBASE-15415. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21439) StochasticLoadBalancer RegionLoads aren’t being used in RegionLoad cost functions
[ https://issues.apache.org/jira/browse/HBASE-21439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ben Lau updated HBASE-21439: Attachment: HBASE-21439-master.patch Status: Patch Available (was: Open) New patch that fixes the style issues. > StochasticLoadBalancer RegionLoads aren’t being used in RegionLoad cost > functions > - > > Key: HBASE-21439 > URL: https://issues.apache.org/jira/browse/HBASE-21439 > Project: HBase > Issue Type: Bug > Components: Balancer >Affects Versions: 2.0.2, 1.3.2.1 >Reporter: Ben Lau >Assignee: Ben Lau >Priority: Major > Attachments: HBASE-21439-master.patch > > > In StochasticLoadBalancer.updateRegionLoad() the region loads are being put > into the map with Bytes.toString(regionName). > First, this is a problem because Bytes.toString() assumes that the byte array > is a UTF8 encoded String but there is no guarantee that regionName bytes are > legal UTF8. > Secondly, in BaseLoadBalancer.registerRegion, we are reading the region loads > out of the load map not using Bytes.toString() but using > region.getRegionNameAsString() and region.getEncodedName(). So the load > balancer will not see or use any of the cluster's RegionLoad history. > There are 2 primary ways to solve this issue, assuming we want to stay with > String keys for the load map (seems reasonable to aid debugging). We can > either fix updateRegionLoad to store the regionName as a string properly or > we can update both the reader & writer to use a new common valid String > representation. > Will post a patch assuming we want to pursue the original intention, i.e. > store regionNameAsAString for the loadmap key, but I'm open to fixing this a > different way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-21448) [hbase-connectors] Make compile/tests pass on scala 2.10 AND 2.11
[ https://issues.apache.org/jira/browse/HBASE-21448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-21448. --- Resolution: Fixed Merged. Resolving. > [hbase-connectors] Make compile/tests pass on scala 2.10 AND 2.11 > - > > Key: HBASE-21448 > URL: https://issues.apache.org/jira/browse/HBASE-21448 > Project: HBase > Issue Type: Bug > Components: hbase-connectors, spark >Reporter: stack >Assignee: stack >Priority: Major > Fix For: connector-1.0.0 > > > Our spark connector over in hbase-connectors compiles with scala 2.11 only at > the mo. Make it work for 2.10 scala too. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21448) [hbase-connectors] Make compile/tests pass on scala 2.10 AND 2.11
[ https://issues.apache.org/jira/browse/HBASE-21448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678630#comment-16678630 ] ASF GitHub Bot commented on HBASE-21448: saintstack closed pull request #10: HBASE-21448 [hbase-connectors] Make compile/tests pass on scala 2.10 … URL: https://github.com/apache/hbase-connectors/pull/10 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance. [The spark/hbase-spark/pom.xml hunk (index 4554ceb..705da4b) was mangled by the mail archiver, which stripped the XML element tags; what remains shows the hard-coded scala-reflect version 2.11.2 replaced with ${scala.version}, the -target:jvm-1.8 compiler flag parameterized as ${target.jvm}, a gmaven-plugin execution added at the validate phase, and the scala-maven-plugin and scalatest-maven-plugin configuration reworked.] The test change survived intact:

diff --git a/spark/hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/HBaseConnectionCacheSuite.scala b/spark/hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/HBaseConnectionCacheSuite.scala
index 5b42bd9..1b71eb4 100644
--- a/spark/hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/HBaseConnectionCacheSuite.scala
+++ b/spark/hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/HBaseConnectionCacheSuite.scala
@@ -44,8 +44,8 @@ class ConnectionMocker extends Connection {
   def getRegionLocator (tableName: TableName): RegionLocator = null
   def getConfiguration: Configuration = null
-  def getTable (tableName: TableName): Table = null
-  def getTable(tableName: TableName, pool: ExecutorService): Table = null
+  override def getTable (tableName: TableName): Table = null
+  override def getTable(tableName: TableName, pool: ExecutorService): Table = null
   def getBufferedMutator (params: BufferedMutatorParams): BufferedMutator = null
   def getBufferedMutator (tableName: TableName): BufferedMutator = null
   def getAdmin: Admin = null

This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-21448) [hbase-connectors] Make compile/tests pass on scala 2.10 AND 2.11
[ https://issues.apache.org/jira/browse/HBASE-21448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678629#comment-16678629 ] ASF GitHub Bot commented on HBASE-21448: saintstack opened a new pull request #10: HBASE-21448 [hbase-connectors] Make compile/tests pass on scala 2.10 … URL: https://github.com/apache/hbase-connectors/pull/10 …AND 2.11 This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > [hbase-connectors] Make compile/tests pass on scala 2.10 AND 2.11 > - > > Key: HBASE-21448 > URL: https://issues.apache.org/jira/browse/HBASE-21448 > Project: HBase > Issue Type: Bug > Components: hbase-connectors, spark >Reporter: stack >Assignee: stack >Priority: Major > Fix For: connector-1.0.0 > > > Our spark connector over in hbase-connectors compiles with scala 2.11 only at > the mo. Make it work for 2.10 scala too. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21448) [hbase-connectors] Make compile/tests pass on scala 2.10 AND 2.11
stack created HBASE-21448: - Summary: [hbase-connectors] Make compile/tests pass on scala 2.10 AND 2.11 Key: HBASE-21448 URL: https://issues.apache.org/jira/browse/HBASE-21448 Project: HBase Issue Type: Bug Components: hbase-connectors, spark Reporter: stack Assignee: stack Fix For: connector-1.0.0 Our spark connector over in hbase-connectors compiles with scala 2.11 only at the mo. Make it work for 2.10 scala too. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21439) StochasticLoadBalancer RegionLoads aren’t being used in RegionLoad cost functions
[ https://issues.apache.org/jira/browse/HBASE-21439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ben Lau updated HBASE-21439: Status: Open (was: Patch Available) > StochasticLoadBalancer RegionLoads aren’t being used in RegionLoad cost > functions > - > > Key: HBASE-21439 > URL: https://issues.apache.org/jira/browse/HBASE-21439 > Project: HBase > Issue Type: Bug > Components: Balancer >Affects Versions: 2.0.2, 1.3.2.1 >Reporter: Ben Lau >Assignee: Ben Lau >Priority: Major > > In StochasticLoadBalancer.updateRegionLoad() the region loads are being put > into the map with Bytes.toString(regionName). > First, this is a problem because Bytes.toString() assumes that the byte array > is a UTF8 encoded String but there is no guarantee that regionName bytes are > legal UTF8. > Secondly, in BaseLoadBalancer.registerRegion, we are reading the region loads > out of the load map not using Bytes.toString() but using > region.getRegionNameAsString() and region.getEncodedName(). So the load > balancer will not see or use any of the cluster's RegionLoad history. > There are 2 primary ways to solve this issue, assuming we want to stay with > String keys for the load map (seems reasonable to aid debugging). We can > either fix updateRegionLoad to store the regionName as a string properly or > we can update both the reader & writer to use a new common valid String > representation. > Will post a patch assuming we want to pursue the original intention, i.e. > store regionNameAsAString for the loadmap key, but I'm open to fixing this a > different way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21439) StochasticLoadBalancer RegionLoads aren’t being used in RegionLoad cost functions
[ https://issues.apache.org/jira/browse/HBASE-21439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ben Lau updated HBASE-21439: Attachment: (was: HBASE-21439-master.patch) > StochasticLoadBalancer RegionLoads aren’t being used in RegionLoad cost > functions > - > > Key: HBASE-21439 > URL: https://issues.apache.org/jira/browse/HBASE-21439 > Project: HBase > Issue Type: Bug > Components: Balancer >Affects Versions: 1.3.2.1, 2.0.2 >Reporter: Ben Lau >Assignee: Ben Lau >Priority: Major > > In StochasticLoadBalancer.updateRegionLoad() the region loads are being put > into the map with Bytes.toString(regionName). > First, this is a problem because Bytes.toString() assumes that the byte array > is a UTF8 encoded String but there is no guarantee that regionName bytes are > legal UTF8. > Secondly, in BaseLoadBalancer.registerRegion, we are reading the region loads > out of the load map not using Bytes.toString() but using > region.getRegionNameAsString() and region.getEncodedName(). So the load > balancer will not see or use any of the cluster's RegionLoad history. > There are 2 primary ways to solve this issue, assuming we want to stay with > String keys for the load map (seems reasonable to aid debugging). We can > either fix updateRegionLoad to store the regionName as a string properly or > we can update both the reader & writer to use a new common valid String > representation. > Will post a patch assuming we want to pursue the original intention, i.e. > store regionNameAsAString for the loadmap key, but I'm open to fixing this a > different way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21439) StochasticLoadBalancer RegionLoads aren’t being used in RegionLoad cost functions
[ https://issues.apache.org/jira/browse/HBASE-21439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678619#comment-16678619 ] Ben Lau commented on HBASE-21439: - The test that failed looks like it's irrelevant (an XML error in some other part of the code base). I will submit a new patch to fix the checkstyle issues. > StochasticLoadBalancer RegionLoads aren’t being used in RegionLoad cost > functions > - > > Key: HBASE-21439 > URL: https://issues.apache.org/jira/browse/HBASE-21439 > Project: HBase > Issue Type: Bug > Components: Balancer >Affects Versions: 1.3.2.1, 2.0.2 >Reporter: Ben Lau >Assignee: Ben Lau >Priority: Major > Attachments: HBASE-21439-master.patch > > > In StochasticLoadBalancer.updateRegionLoad() the region loads are being put > into the map with Bytes.toString(regionName). > First, this is a problem because Bytes.toString() assumes that the byte array > is a UTF8 encoded String but there is no guarantee that regionName bytes are > legal UTF8. > Secondly, in BaseLoadBalancer.registerRegion, we are reading the region loads > out of the load map not using Bytes.toString() but using > region.getRegionNameAsString() and region.getEncodedName(). So the load > balancer will not see or use any of the cluster's RegionLoad history. > There are 2 primary ways to solve this issue, assuming we want to stay with > String keys for the load map (seems reasonable to aid debugging). We can > either fix updateRegionLoad to store the regionName as a string properly or > we can update both the reader & writer to use a new common valid String > representation. > Will post a patch assuming we want to pursue the original intention, i.e. > store regionNameAsAString for the loadmap key, but I'm open to fixing this a > different way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20604) ProtobufLogReader#readNext can incorrectly loop to the same position in the stream until the the WAL is rolled
[ https://issues.apache.org/jira/browse/HBASE-20604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678610#comment-16678610 ] Hadoop QA commented on HBASE-20604: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange} 0m 0s{color} | {color:orange} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 6s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 4s{color} | {color:green} hbase-server: The patch generated 0 new + 20 unchanged - 2 fixed = 20 total (was 22) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 6s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 46s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}130m 42s{color} | {color:green} hbase-server in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}170m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-20604 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12947239/HBASE-20604.005.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux cbbe83d45a73 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 7997c5187f | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC3 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/14982/testReport/ | | Max. process+thread count | 5030 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | C
[jira] [Updated] (HBASE-21411) Need to document the snapshot metric data that is shown in HBase Master Web UI
[ https://issues.apache.org/jira/browse/HBASE-21411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-21411: Status: Patch Available (was: Open) > Need to document the snapshot metric data that is shown in HBase Master Web UI > -- > > Key: HBASE-21411 > URL: https://issues.apache.org/jira/browse/HBASE-21411 > Project: HBase > Issue Type: Improvement > Components: documentation >Affects Versions: 2.0.0, 1.3.0 >Reporter: Roland Teague >Assignee: Roland Teague >Priority: Major > Attachments: HBASE-21411.master.001.patch > > > We need to add documentation into the Reference Guide for the work that was > done in HBASE-15415. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21411) Need to document the snapshot metric data that is shown in HBase Master Web UI
[ https://issues.apache.org/jira/browse/HBASE-21411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678596#comment-16678596 ] Sean Busbey commented on HBASE-21411: - please use {{git format-patch}} to create your patch so that it will include authorship information as you'd like to have it appear. > Need to document the snapshot metric data that is shown in HBase Master Web UI > -- > > Key: HBASE-21411 > URL: https://issues.apache.org/jira/browse/HBASE-21411 > Project: HBase > Issue Type: Improvement > Components: documentation >Affects Versions: 1.3.0, 2.0.0 >Reporter: Roland Teague >Assignee: Roland Teague >Priority: Major > Attachments: HBASE-21411.master.001.patch > > > We need to add documentation into the Reference Guide for the work that was > done in HBASE-15415. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HBASE-21411) Need to document the snapshot metric data that is shown in HBase Master Web UI
[ https://issues.apache.org/jira/browse/HBASE-21411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey reassigned HBASE-21411: --- Assignee: Roland Teague Thanks for the patch Roland! I've added you to the contributor role in JIRA so you ought to be able to assign issues to yourself now (as well as mark them "patch available" for qabot checking and review) > Need to document the snapshot metric data that is shown in HBase Master Web UI > -- > > Key: HBASE-21411 > URL: https://issues.apache.org/jira/browse/HBASE-21411 > Project: HBase > Issue Type: Improvement > Components: documentation >Affects Versions: 1.3.0, 2.0.0 >Reporter: Roland Teague >Assignee: Roland Teague >Priority: Major > Attachments: HBASE-21411.master.001.patch > > > We need to add documentation into the Reference Guide for the work that was > done in HBASE-15415. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21387) Race condition surrounding in progress snapshot handling in snapshot cache leads to loss of snapshot files
[ https://issues.apache.org/jira/browse/HBASE-21387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678531#comment-16678531 ] Ted Yu commented on HBASE-21387: Here is a brief summary of the approaches I tried, most recent first (the most recent is the one I expect to be reviewed):
21387.v9.txt : At the beginning of getUnreferencedFiles, snapshots are temporarily disabled. We check whether there is an in-flight snapshot. If there is, don't list any file as unreferenced; otherwise, fill out the unreferenced files. Any snapshot attempt during this time would be declined. At the end of getUnreferencedFiles, snapshots are re-enabled.
two-pass-cleaner.v9.txt : The cleaner chore stores the candidates from its previous invocation and calculates the intersection of the previous and current candidates. The downside of this approach is that the extra candidates from the previous iteration consume (potentially large) memory.
21387.v8.txt : SnapshotFileCache would try to obtain the in-progress snapshots under the lock. However, since the timing of when an in-progress snapshot completes is not under SnapshotFileCache's control, it is hard to avoid the race condition. 
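The two-pass idea can be sketched in a few lines (class and method names below are hypothetical, not taken from the attached patches): a file is deletable only if it was already an unreferenced candidate in the previous chore run, so a file referenced by a snapshot that completed between runs gets a second chance before deletion.

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Rough sketch of the two-pass cleaner idea from two-pass-cleaner.v9.txt:
 * only report a file as deletable if it was also an unreferenced candidate
 * in the previous chore run. Names are illustrative, not the patch's API.
 */
class TwoPassCleanerSketch {
  // Candidates remembered from the previous invocation (the noted memory cost).
  private Set<String> previousCandidates = new HashSet<>();

  /** Returns files safe to delete and remembers current candidates for the next run. */
  Set<String> filesToDelete(Set<String> currentCandidates) {
    Set<String> deletable = new HashSet<>(currentCandidates);
    deletable.retainAll(previousCandidates); // intersection of the two passes
    previousCandidates = new HashSet<>(currentCandidates);
    return deletable;
  }
}
```

A file that shows up as a candidate in only one pass (e.g. because an in-progress snapshot finished in between) is never deleted in that cycle.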
> Race condition surrounding in progress snapshot handling in snapshot cache > leads to loss of snapshot files > -- > > Key: HBASE-21387 > URL: https://issues.apache.org/jira/browse/HBASE-21387 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Labels: snapshot > Attachments: 21387.dbg.txt, 21387.v2.txt, 21387.v3.txt, 21387.v6.txt, > 21387.v7.txt, 21387.v8.txt, 21387.v9.txt, two-pass-cleaner.v4.txt, > two-pass-cleaner.v6.txt, two-pass-cleaner.v9.txt > > > During recent report from customer where ExportSnapshot failed: > {code} > 2018-10-09 18:54:32,559 ERROR [VerifySnapshot-pool1-t2] > snapshot.SnapshotReferenceUtil: Can't find hfile: > 44f6c3c646e84de6a63fe30da4fcb3aa in the real > (hdfs://in.com:8020/apps/hbase/data/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa) > or archive > (hdfs://in.com:8020/apps/hbase/data/archive/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa) > directory for the primary table. > {code} > We found the following in log: > {code} > 2018-10-09 18:54:23,675 DEBUG > [00:16000.activeMasterManager-HFileCleaner.large-1539035367427] > cleaner.HFileCleaner: Removing: > hdfs:///apps/hbase/data/archive/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa > from archive > {code} > The root cause is race condition surrounding in progress snapshot(s) handling > between refreshCache() and getUnreferencedFiles(). > There are two callers of refreshCache: one from RefreshCacheTask#run and the > other from SnapshotHFileCleaner. > Let's look at the code of refreshCache: > {code} > if (!name.equals(SnapshotDescriptionUtils.SNAPSHOT_TMP_DIR_NAME)) { > {code} > whose intention is to exclude in progress snapshot(s). > Suppose when the RefreshCacheTask runs refreshCache, there is some in > progress snapshot (about to finish). > When SnapshotHFileCleaner calls getUnreferencedFiles(), it sees that > lastModifiedTime is up to date. So cleaner proceeds to check in progress > snapshot(s). 
However, the snapshot has completed by that time, resulting in > some file(s) deemed unreferenced. > Here is timeline given by Josh illustrating the scenario: > At time T0, we are checking if F1 is referenced. At time T1, there is a > snapshot S1 in progress that is referencing a file F1. refreshCache() is > called, but no completed snapshot references F1. At T2, the snapshot S1, > which references F1, completes. At T3, we check in-progress snapshots and S1 > is not included. Thus, F1 is marked as unreferenced even though S1 references > it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21387) Race condition surrounding in progress snapshot handling in snapshot cache leads to loss of snapshot files
[ https://issues.apache.org/jira/browse/HBASE-21387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678516#comment-16678516 ] Josh Elser commented on HBASE-21387: bq. Please take a look at 21387.v9.txt which solves the race condition between in-progress snapshot and hfile cleaner chore. Ted, you've provided multiple solutions already in the form of patches. Please briefly summarize the different approaches you see so that others can give their input without having to read every patch, intimately. > Race condition surrounding in progress snapshot handling in snapshot cache > leads to loss of snapshot files > -- > > Key: HBASE-21387 > URL: https://issues.apache.org/jira/browse/HBASE-21387 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Labels: snapshot > Attachments: 21387.dbg.txt, 21387.v2.txt, 21387.v3.txt, 21387.v6.txt, > 21387.v7.txt, 21387.v8.txt, 21387.v9.txt, two-pass-cleaner.v4.txt, > two-pass-cleaner.v6.txt, two-pass-cleaner.v9.txt > > > During recent report from customer where ExportSnapshot failed: > {code} > 2018-10-09 18:54:32,559 ERROR [VerifySnapshot-pool1-t2] > snapshot.SnapshotReferenceUtil: Can't find hfile: > 44f6c3c646e84de6a63fe30da4fcb3aa in the real > (hdfs://in.com:8020/apps/hbase/data/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa) > or archive > (hdfs://in.com:8020/apps/hbase/data/archive/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa) > directory for the primary table. > {code} > We found the following in log: > {code} > 2018-10-09 18:54:23,675 DEBUG > [00:16000.activeMasterManager-HFileCleaner.large-1539035367427] > cleaner.HFileCleaner: Removing: > hdfs:///apps/hbase/data/archive/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa > from archive > {code} > The root cause is race condition surrounding in progress snapshot(s) handling > between refreshCache() and getUnreferencedFiles(). 
> There are two callers of refreshCache: one from RefreshCacheTask#run and the > other from SnapshotHFileCleaner. > Let's look at the code of refreshCache: > {code} > if (!name.equals(SnapshotDescriptionUtils.SNAPSHOT_TMP_DIR_NAME)) { > {code} > whose intention is to exclude in progress snapshot(s). > Suppose when the RefreshCacheTask runs refreshCache, there is some in > progress snapshot (about to finish). > When SnapshotHFileCleaner calls getUnreferencedFiles(), it sees that > lastModifiedTime is up to date. So cleaner proceeds to check in progress > snapshot(s). However, the snapshot has completed by that time, resulting in > some file(s) deemed unreferenced. > Here is timeline given by Josh illustrating the scenario: > At time T0, we are checking if F1 is referenced. At time T1, there is a > snapshot S1 in progress that is referencing a file F1. refreshCache() is > called, but no completed snapshot references F1. At T2, the snapshot S1, > which references F1, completes. At T3, we check in-progress snapshots and S1 > is not included. Thus, F1 is marked as unreferenced even though S1 references > it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21387) Race condition surrounding in progress snapshot handling in snapshot cache leads to loss of snapshot files
[ https://issues.apache.org/jira/browse/HBASE-21387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-21387: --- Description: During recent report from customer where ExportSnapshot failed: {code} 2018-10-09 18:54:32,559 ERROR [VerifySnapshot-pool1-t2] snapshot.SnapshotReferenceUtil: Can't find hfile: 44f6c3c646e84de6a63fe30da4fcb3aa in the real (hdfs://in.com:8020/apps/hbase/data/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa) or archive (hdfs://in.com:8020/apps/hbase/data/archive/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa) directory for the primary table. {code} We found the following in log: {code} 2018-10-09 18:54:23,675 DEBUG [00:16000.activeMasterManager-HFileCleaner.large-1539035367427] cleaner.HFileCleaner: Removing: hdfs:///apps/hbase/data/archive/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa from archive {code} The root cause is race condition surrounding in progress snapshot(s) handling between refreshCache() and getUnreferencedFiles(). There are two callers of refreshCache: one from RefreshCacheTask#run and the other from SnapshotHFileCleaner. Let's look at the code of refreshCache: {code} if (!name.equals(SnapshotDescriptionUtils.SNAPSHOT_TMP_DIR_NAME)) { {code} whose intention is to exclude in progress snapshot(s). Suppose when the RefreshCacheTask runs refreshCache, there is some in progress snapshot (about to finish). When SnapshotHFileCleaner calls getUnreferencedFiles(), it sees that lastModifiedTime is up to date. So cleaner proceeds to check in progress snapshot(s). However, the snapshot has completed by that time, resulting in some file(s) deemed unreferenced. Here is timeline given by Josh illustrating the scenario: At time T0, we are checking if F1 is referenced. At time T1, there is a snapshot S1 in progress that is referencing a file F1. refreshCache() is called, but no completed snapshot references F1. At T2, the snapshot S1, which references F1, completes. 
At T3, we check in-progress snapshots and S1 is not included. Thus, F1 is marked as unreferenced even though S1 references it. was: During recent report from customer where ExportSnapshot failed: {code} 2018-10-09 18:54:32,559 ERROR [VerifySnapshot-pool1-t2] snapshot.SnapshotReferenceUtil: Can't find hfile: 44f6c3c646e84de6a63fe30da4fcb3aa in the real (hdfs://in.com:8020/apps/hbase/data/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa) or archive (hdfs://in.com:8020/apps/hbase/data/archive/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa) directory for the primary table. {code} We found the following in log: {code} 2018-10-09 18:54:23,675 DEBUG [00:16000.activeMasterManager-HFileCleaner.large-1539035367427] cleaner.HFileCleaner: Removing: hdfs:///apps/hbase/data/archive/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa from archive {code} The root cause is race condition surrounding in progress snapshot(s) handling between refreshCache() and getUnreferencedFiles(). There are two callers of refreshCache: one from RefreshCacheTask#run and the other from SnapshotHFileCleaner. Let's look at the code of refreshCache: {code} if (!name.equals(SnapshotDescriptionUtils.SNAPSHOT_TMP_DIR_NAME)) { {code} whose intention is to exclude in progress snapshot(s). Suppose when the RefreshCacheTask runs refreshCache, there is some in progress snapshot (about to finish). When SnapshotHFileCleaner calls getUnreferencedFiles(), it sees that lastModifiedTime is up to date. So cleaner proceeds to check in progress snapshot(s). However, the snapshot has completed by that time, resulting in some file(s) deemed unreferenced. 
> Race condition surrounding in progress snapshot handling in snapshot cache > leads to loss of snapshot files > -- > > Key: HBASE-21387 > URL: https://issues.apache.org/jira/browse/HBASE-21387 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Labels: snapshot > Attachments: 21387.dbg.txt, 21387.v2.txt, 21387.v3.txt, 21387.v6.txt, > 21387.v7.txt, 21387.v8.txt, 21387.v9.txt, two-pass-cleaner.v4.txt, > two-pass-cleaner.v6.txt, two-pass-cleaner.v9.txt > > > During recent report from customer where ExportSnapshot failed: > {code} > 2018-10-09 18:54:32,559 ERROR [VerifySnapshot-pool1-t2] > snapshot.SnapshotReferenceUtil: Can't find hfile: > 44f6c3c646e84de6a63fe30da4fcb3aa in the real > (hdfs://in.com:8020/apps/hbase/data/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa) > or archive > (hdfs://in.com:8020/apps/hbase/data/archive/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa) > directory for the primary table. > {code} > We found the following in log: > {code} > 2018-10-09 18:54:23,675 DEBUG > [00:16000.activeMasterManager-HFileCleaner.large-1539035367427] > cleaner.H
[jira] [Commented] (HBASE-20952) Re-visit the WAL API
[ https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678495#comment-16678495 ] Sean Busbey commented on HBASE-20952: - Can we wait to make the branch until there are commits for it? Or wait to run the tests until then? > Re-visit the WAL API > > > Key: HBASE-20952 > URL: https://issues.apache.org/jira/browse/HBASE-20952 > Project: HBase > Issue Type: Improvement > Components: wal >Reporter: Josh Elser >Priority: Major > Attachments: 20952.v1.txt > > > Take a step back from the current WAL implementations and think about what an > HBase WAL API should look like. What are the primitive calls that we require > to guarantee durability of writes with a high degree of performance? > The API needs to take the current implementations into consideration. We > should also have a mind for what is happening in the Ratis LogService (but > the LogService should not dictate what HBase's WAL API looks like RATIS-272). > Other "systems" inside of HBase that use WALs are replication and > backup&restore. Replication has the use-case for "tail"'ing the WAL which we > should provide via our new API. B&R doesn't do anything fancy (IIRC). We > should make sure all consumers are generally going to be OK with the API we > create. > The API may be "OK" (or OK in a part). We need to also consider other methods > which were "bolted" on such as {{AbstractFSWAL}} and > {{WALFileLengthProvider}}. Other corners of "WAL use" (like the > {{WALSplitter}} should also be looked at to use WAL-APIs only). > We also need to make sure that adequate interface audience and stability > annotations are chosen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-18405) Track scope for HBase-Spark module
[ https://issues.apache.org/jira/browse/HBASE-18405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678479#comment-16678479 ] stack commented on HBASE-18405: --- Fixed. Sorry about that. > Track scope for HBase-Spark module > -- > > Key: HBASE-18405 > URL: https://issues.apache.org/jira/browse/HBASE-18405 > Project: HBase > Issue Type: Task > Components: spark >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > Fix For: 3.0.0, 1.5.0, 2.2.0 > > Attachments: Apache HBase - Apache Spark Integration Scope - update > 1.pdf, Apache HBase - Apache Spark Integration Scope.pdf > > > Start with [\[DISCUSS\] status of and plans for our hbase-spark integration > |https://lists.apache.org/thread.html/fd74ef9b9da77abf794664f06ea19c839fb3d543647fb29115081683@%3Cdev.hbase.apache.org%3E] > and formalize into a scope document for bringing this feature into a release. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21387) Race condition surrounding in progress snapshot handling in snapshot cache leads to loss of snapshot files
[ https://issues.apache.org/jira/browse/HBASE-21387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678473#comment-16678473 ] Ted Yu commented on HBASE-21387: [~openinx][~Apache9][~elserj] : Please take a look at 21387.v9.txt which solves the race condition between in-progress snapshot and hfile cleaner chore. Your feedback is welcome. > Race condition surrounding in progress snapshot handling in snapshot cache > leads to loss of snapshot files > -- > > Key: HBASE-21387 > URL: https://issues.apache.org/jira/browse/HBASE-21387 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Labels: snapshot > Attachments: 21387.dbg.txt, 21387.v2.txt, 21387.v3.txt, 21387.v6.txt, > 21387.v7.txt, 21387.v8.txt, 21387.v9.txt, two-pass-cleaner.v4.txt, > two-pass-cleaner.v6.txt, two-pass-cleaner.v9.txt > > > During recent report from customer where ExportSnapshot failed: > {code} > 2018-10-09 18:54:32,559 ERROR [VerifySnapshot-pool1-t2] > snapshot.SnapshotReferenceUtil: Can't find hfile: > 44f6c3c646e84de6a63fe30da4fcb3aa in the real > (hdfs://in.com:8020/apps/hbase/data/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa) > or archive > (hdfs://in.com:8020/apps/hbase/data/archive/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa) > directory for the primary table. > {code} > We found the following in log: > {code} > 2018-10-09 18:54:23,675 DEBUG > [00:16000.activeMasterManager-HFileCleaner.large-1539035367427] > cleaner.HFileCleaner: Removing: > hdfs:///apps/hbase/data/archive/data/.../a/44f6c3c646e84de6a63fe30da4fcb3aa > from archive > {code} > The root cause is race condition surrounding in progress snapshot(s) handling > between refreshCache() and getUnreferencedFiles(). > There are two callers of refreshCache: one from RefreshCacheTask#run and the > other from SnapshotHFileCleaner. 
> Let's look at the code of refreshCache: > {code} > if (!name.equals(SnapshotDescriptionUtils.SNAPSHOT_TMP_DIR_NAME)) { > {code} > whose intention is to exclude in progress snapshot(s). > Suppose when the RefreshCacheTask runs refreshCache, there is some in > progress snapshot (about to finish). > When SnapshotHFileCleaner calls getUnreferencedFiles(), it sees that > lastModifiedTime is up to date. So cleaner proceeds to check in progress > snapshot(s). However, the snapshot has completed by that time, resulting in > some file(s) deemed unreferenced. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-18405) Track scope for HBase-Spark module
[ https://issues.apache.org/jira/browse/HBASE-18405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678472#comment-16678472 ] Josh Elser commented on HBASE-18405: Permissions not open on the doc? Requested access :) > Track scope for HBase-Spark module > -- > > Key: HBASE-18405 > URL: https://issues.apache.org/jira/browse/HBASE-18405 > Project: HBase > Issue Type: Task > Components: spark >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > Fix For: 3.0.0, 1.5.0, 2.2.0 > > Attachments: Apache HBase - Apache Spark Integration Scope - update > 1.pdf, Apache HBase - Apache Spark Integration Scope.pdf > > > Start with [\[DISCUSS\] status of and plans for our hbase-spark integration > |https://lists.apache.org/thread.html/fd74ef9b9da77abf794664f06ea19c839fb3d543647fb29115081683@%3Cdev.hbase.apache.org%3E] > and formalize into a scope document for bringing this feature into a release. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21443) [hbase-connectors] Purge hbase-* modules from core now they've been moved to hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21443: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Pushed to branch-2 and master. Thanks for the help [~psomogyi] and [~busbey] > [hbase-connectors] Purge hbase-* modules from core now they've been moved to > hbase-connectors > - > > Key: HBASE-21443 > URL: https://issues.apache.org/jira/browse/HBASE-21443 > Project: HBase > Issue Type: Sub-task > Components: hbase-connectors, spark >Affects Versions: 3.0.0, 2.2.0 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21443.master.001.patch, > HBASE-21443.master.002.patch, HBASE-21443.master.002.patch > > > The parent copied the spark modules over to hbase-connectors. Here we purge > them from hbase core repo. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21443) [hbase-connectors] Purge hbase-* modules from core now they've been moved to hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678435#comment-16678435 ] stack commented on HBASE-21443: --- Thanks boys. Let me then push the first patch (will remove the findbugs thingy on commit). > [hbase-connectors] Purge hbase-* modules from core now they've been moved to > hbase-connectors > - > > Key: HBASE-21443 > URL: https://issues.apache.org/jira/browse/HBASE-21443 > Project: HBase > Issue Type: Sub-task > Components: hbase-connectors, spark >Affects Versions: 3.0.0, 2.2.0 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21443.master.001.patch, > HBASE-21443.master.002.patch, HBASE-21443.master.002.patch > > > The parent copied the spark modules over to hbase-connectors. Here we purge > them from hbase core repo. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21447) HBCK2 tool has questions on holes when HBCK2 checks region chain
[ https://issues.apache.org/jira/browse/HBASE-21447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678425#comment-16678425 ] Jingyun Tian commented on HBASE-21447: -- [~nicholasjiang] General option --skip(-s) could help skip the version check. But I'm not sure if this could bring any harm. > HBCK2 tool have questions on holes when HBCK2 checks region chain > --- > > Key: HBASE-21447 > URL: https://issues.apache.org/jira/browse/HBASE-21447 > Project: HBase > Issue Type: Improvement > Components: hbck2 >Affects Versions: 2.0.2 >Reporter: Nicholas Jiang >Priority: Major > Attachments: Hole.png > > > [hbck2]https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2 > This HBCK2 tool have some questions on holes when HBCK2 checks region chain > as follows. > {code:java} > ERROR: There is a hole in the region chain between \x01F\x00\x00 and > \x02\x8C\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x05\x18\x00\x00 and > \x06^\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x07\x01\x00\x00 and > \x07\xA4\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x08G\x00\x00 and > \x09\x8D\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0A0\x00\x00 and > \x0Bv\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x0C\x19\x00\x00 and > \x0C\xBC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0D_\x00\x00 and > \x0E\xA5\x00\x00. 
You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0F\xEB\x00\x00 and > \x111\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x16I\x00\x00 and > \x16\xEC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between (\xC0\x00\x00 and > *\x06\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > {code} > !Hole.png! > This hole problem can't be solved by HBCK2 tool. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21443) [hbase-connectors] Purge hbase-* modules from core now they've been moved to hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678403#comment-16678403 ] Hadoop QA commented on HBASE-21443: ---
| (/) *{color:green}+1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 16 new or modified test files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 19s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hbase-spark-it hbase-assembly . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 2m 4s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 13s{color} | {color:green} root generated 0 new + 1157 unchanged - 133 fixed = 1157 total (was 1290) {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green} 5m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 56s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 47s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 4m 35s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hbase-assembly . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} hbase-assembly in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 25s{color} | {color:green} root generated 0 new + 4 unchanged - 23 fixed = 4 total (was 27) {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {co
[jira] [Commented] (HBASE-20604) ProtobufLogReader#readNext can incorrectly loop to the same position in the stream until the WAL is rolled
[ https://issues.apache.org/jira/browse/HBASE-20604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678398#comment-16678398 ] Sean Busbey commented on HBASE-20604: - +1 on v5 pending qabot > ProtobufLogReader#readNext can incorrectly loop to the same position in the > stream until the WAL is rolled > -- > > Key: HBASE-20604 > URL: https://issues.apache.org/jira/browse/HBASE-20604 > Project: HBase > Issue Type: Bug > Components: Replication, wal >Affects Versions: 3.0.0 >Reporter: Esteban Gutierrez >Assignee: Esteban Gutierrez >Priority: Critical > Attachments: HBASE-20604.002.patch, HBASE-20604.003.patch, > HBASE-20604.004.patch, HBASE-20604.005.patch, HBASE-20604.patch > > > Every time we call {{ProtobufLogReader#readNext}} we consume the input stream > associated with the {{FSDataInputStream}} from the WAL that we are reading. > Under certain conditions, e.g. when using encryption at rest > ({{CryptoInputStream}}), the stream can return partial data, which can cause a > premature EOF that causes {{inputStream.getPos()}} to return to the same > original position, causing {{ProtobufLogReader#readNext}} to retry the > reads until the WAL is rolled. > The side effect of this issue is that {{ReplicationSource}} can get stuck > until the WAL is rolled, causing replication delays of up to an hour in some > cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
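The failure mode described in the issue — a stream such as {{CryptoInputStream}} returning fewer bytes than requested without actually being at EOF — is classically avoided by looping until the buffer is filled and treating only a -1 return as end of stream. A minimal, hypothetical sketch of that pattern (not the actual HBASE-20604 patch; class names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Sketch of the defensive pattern: never interpret a short read as EOF.
 * Only read() returning -1 means end of stream; a short positive count
 * means "keep reading". Illustrative only, not the HBASE-20604 fix.
 */
class ReadFully {
  static void readFully(InputStream in, byte[] buf, int off, int len) throws IOException {
    int done = 0;
    while (done < len) {
      int n = in.read(buf, off + done, len - done);
      if (n < 0) {
        throw new IOException("Premature EOF: wanted " + len + " bytes, got " + done);
      }
      done += n; // short read: continue reading instead of rewinding and retrying
    }
  }
}

/** Simulates a stream (like CryptoInputStream can behave) that returns at most one byte per read. */
class OneByteStream extends InputStream {
  private final ByteArrayInputStream delegate;
  OneByteStream(byte[] data) { this.delegate = new ByteArrayInputStream(data); }
  @Override public int read() { return delegate.read(); }
  @Override public int read(byte[] b, int off, int len) {
    return delegate.read(b, off, Math.min(1, len)); // deliberately short reads
  }
}
```

A reader built on this loop makes progress even when every underlying read is short, instead of falling back to the same stream position until the WAL rolls.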
[jira] [Commented] (HBASE-21246) Introduce WALIdentity interface
[ https://issues.apache.org/jira/browse/HBASE-21246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678393#comment-16678393 ] Josh Elser commented on HBASE-21246: bq. WALFactory is only responsible for WALProvider's creation and destruction. And remove those getWAL, createReader, createRecoveredEditsWriter methods which should be covered and already done in WALProvider. Totally agree, [~reidchan]. This was something that I had discussed with Ted already (on a couple occasions, IIRC). These were omitted from the first patch to avoid even more changes (in an already weighty patch). > Introduce WALIdentity interface > --- > > Key: HBASE-21246 > URL: https://issues.apache.org/jira/browse/HBASE-21246 > Project: HBase > Issue Type: Sub-task >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Fix For: HBASE-20952 > > Attachments: 21246.003.patch, 21246.20.txt, 21246.21.txt, > 21246.23.txt, 21246.24.txt, 21246.25.txt, 21246.HBASE-20952.001.patch, > 21246.HBASE-20952.002.patch, 21246.HBASE-20952.004.patch, > 21246.HBASE-20952.005.patch, 21246.HBASE-20952.007.patch, > 21246.HBASE-20952.008.patch, replication-src-creates-wal-reader.jpg, > wal-factory-providers.png, wal-providers.png, wal-splitter-reader.jpg, > wal-splitter-writer.jpg > > > We are introducing the WALIdentity interface so that the WAL representation can > be decoupled from the distributed filesystem. > The interface provides a getName method whose return value can represent a > filename in a distributed filesystem environment or the name of the stream > when the WAL is backed by a log stream. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
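As the description says, the core of the proposed interface is a name that may be a filesystem filename or a log-stream name. A minimal hypothetical sketch of that idea (illustrative only; the names and factory methods below are not from the attached patches):

```java
// Hypothetical sketch of the WALIdentity idea from the description:
// decouple a WAL's identity from a filesystem path so that the name can be
// either an HDFS filename or the name of a backing log stream.
public class WalIdentitySketch {
    interface WALIdentity {
        /** Filename on a distributed filesystem, or a stream name. */
        String getName();
    }

    /** Filesystem-backed identity: the name is the file name. */
    static WALIdentity forFile(String fileName) {
        return () -> fileName;
    }

    /** Stream-backed identity: the name is the stream name. */
    static WALIdentity forStream(String streamName) {
        return () -> streamName;
    }

    public static void main(String[] args) {
        System.out.println(forFile("wal.1541600000000").getName());
        System.out.println(forStream("ratis-log-42").getName());
    }
}
```

Consumers such as replication or the WAL splitter would then work against `WALIdentity` rather than `org.apache.hadoop.fs.Path`, which is what decoupling from the filesystem buys.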
[jira] [Commented] (HBASE-20952) Re-visit the WAL API
[ https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678392#comment-16678392 ] Josh Elser commented on HBASE-20952: bq. Please do what it takes to get passing builds. If there is a known cause of failure, can a JUnit Assume be used to disable the test on known-bad versions? does the branch require a particular version of Hadoop? If so, why doesn't it have that expressed in the pom? [~busbey], in case it isn't clear, there's nothing committed to this branch. It's a copy of master presently... > Re-visit the WAL API > > > Key: HBASE-20952 > URL: https://issues.apache.org/jira/browse/HBASE-20952 > Project: HBase > Issue Type: Improvement > Components: wal >Reporter: Josh Elser >Priority: Major > Attachments: 20952.v1.txt > > > Take a step back from the current WAL implementations and think about what an > HBase WAL API should look like. What are the primitive calls that we require > to guarantee durability of writes with a high degree of performance? > The API needs to take the current implementations into consideration. We > should also have a mind for what is happening in the Ratis LogService (but > the LogService should not dictate what HBase's WAL API looks like RATIS-272). > Other "systems" inside of HBase that use WALs are replication and > backup&restore. Replication has the use-case for "tail"'ing the WAL which we > should provide via our new API. B&R doesn't do anything fancy (IIRC). We > should make sure all consumers are generally going to be OK with the API we > create. > The API may be "OK" (or OK in a part). We need to also consider other methods > which were "bolted" on such as {{AbstractFSWAL}} and > {{WALFileLengthProvider}}. Other corners of "WAL use" (like the > {{WALSplitter}} should also be looked at to use WAL-APIs only). > We also need to make sure that adequate interface audience and stability > annotations are chosen. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20604) ProtobufLogReader#readNext can incorrectly loop to the same position in the stream until the WAL is rolled
[ https://issues.apache.org/jira/browse/HBASE-20604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Esteban Gutierrez updated HBASE-20604: -- Attachment: HBASE-20604.005.patch > ProtobufLogReader#readNext can incorrectly loop to the same position in the > stream until the WAL is rolled > -- > > Key: HBASE-20604 > URL: https://issues.apache.org/jira/browse/HBASE-20604 > Project: HBase > Issue Type: Bug > Components: Replication, wal >Affects Versions: 3.0.0 >Reporter: Esteban Gutierrez >Assignee: Esteban Gutierrez >Priority: Critical > Attachments: HBASE-20604.002.patch, HBASE-20604.003.patch, > HBASE-20604.004.patch, HBASE-20604.005.patch, HBASE-20604.patch > > > Every time we call {{ProtobufLogReader#readNext}} we consume the input stream > associated to the {{FSDataInputStream}} from the WAL that we are reading. > Under certain conditions, e.g. when using encryption at rest > ({{CryptoInputStream}}), the stream can return partial data, which can cause a > premature EOF that causes {{inputStream.getPos()}} to return to the same > original position, causing {{ProtobufLogReader#readNext}} to retry the > reads until the WAL is rolled. > The side effect of this issue is that {{ReplicationSource}} can get stuck > until the WAL is rolled, causing replication delays of up to an hour in some > cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21443) [hbase-connectors] Purge hbase-* modules from core now they've been moved to hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678310#comment-16678310 ] Sean Busbey commented on HBASE-21443: - so I'm +1 on either the patch w/o the scalatools plugin or the one that includes it (though in the case of the latter I'll probably file a jira to remove it afterwards) > [hbase-connectors] Purge hbase-* modules from core now they've been moved to > hbase-connectors > - > > Key: HBASE-21443 > URL: https://issues.apache.org/jira/browse/HBASE-21443 > Project: HBase > Issue Type: Sub-task > Components: hbase-connectors, spark >Affects Versions: 3.0.0, 2.2.0 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21443.master.001.patch, > HBASE-21443.master.002.patch, HBASE-21443.master.002.patch > > > The parent copied the spark modules over to hbase-connectors. Here we purge > them from hbase core repo. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21443) [hbase-connectors] Purge hbase-* modules from core now they've been moved to hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678322#comment-16678322 ] Peter Somogyi commented on HBASE-21443: --- In dev-support/findbugs-exclude.xml there is a filter for scala files. Since all of them are removed the filter could be dropped. {code:java} {code} > [hbase-connectors] Purge hbase-* modules from core now they've been moved to > hbase-connectors > - > > Key: HBASE-21443 > URL: https://issues.apache.org/jira/browse/HBASE-21443 > Project: HBase > Issue Type: Sub-task > Components: hbase-connectors, spark >Affects Versions: 3.0.0, 2.2.0 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21443.master.001.patch, > HBASE-21443.master.002.patch, HBASE-21443.master.002.patch > > > The parent copied the spark modules over to hbase-connectors. Here we purge > them from hbase core repo. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-15557) Add guidance on HashTable/SyncTable to the RefGuide
[ https://issues.apache.org/jira/browse/HBASE-15557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-15557: Summary: Add guidance on HashTable/SyncTable to the RefGuide (was: document SyncTable in ref guide) > Add guidance on HashTable/SyncTable to the RefGuide > --- > > Key: HBASE-15557 > URL: https://issues.apache.org/jira/browse/HBASE-15557 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Sean Busbey >Assignee: Wellington Chevreuil >Priority: Critical > Attachments: HBASE-15557.master.001.patch, > HBASE-15557.master.002.patch > > > The docs for SyncTable are insufficient. Brief description from [~davelatham] > HBASE-13639 comment: > {quote} > Sorry for the lack of better documentation, Abhishek Soni. Thanks for > bringing it up. I'll try to provide a better explanation. You may have > already seen it, but if not, the design doc linked in the description above > may also give you some better clues as to how it should be used. > Briefly, the feature is intended to start with a pair of tables in remote > clusters that are already substantially similar and make them identical by > comparing hashes of the data and copying only the diffs instead of having to > copy the entire table. So it is targeted at a very specific use case (with > some work it could generalize to cover things like CopyTable and > VerifyReplication but it's not there yet). To use it, you choose one table to > be the "source", and the other table is the "target". After the process is > complete the target table should end up being identical to the source table. > In the source table's cluster, run > org.apache.hadoop.hbase.mapreduce.HashTable and pass it the name of the > source table and an output directory in HDFS. HashTable will scan the source > table, break the data up into row key ranges (default of 8kB per range) and > produce a hash of the data for each range. 
> Make the hashes available to the target cluster - I'd recommend using DistCp > to copy it across. > In the target table's cluster, run > org.apache.hadoop.hbase.mapreduce.SyncTable and pass it the directory where > you put the hashes, and the names of the source and destination tables. You > will likely also need to specify the source table's ZK quorum via the > --sourcezkcluster option. SyncTable will then read the hash information, and > compute the hashes of the same row ranges for the target table. For any row > range where the hash fails to match, it will open a remote scanner to the > source table, read the data for that range, and do Puts and Deletes to the > target table to update it to match the source. > I hope that clarifies it a bit. Let me know if you need a hand. If anyone > wants to work on getting some documentation into the book, I can try to write > some more but would love a hand on turning it into an actual book patch. > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
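The three-step workflow quoted above (hash on the source cluster, copy the hashes across, sync on the target cluster) can be sketched as shell commands. The table name, HDFS paths, NameNode addresses, and ZooKeeper quorum below are placeholders, not values from this issue:

```shell
# 1) On the source cluster: hash the source table into an HDFS directory
#    (row key ranges of ~8kB each by default).
hbase org.apache.hadoop.hbase.mapreduce.HashTable \
    my-table /hashes/my-table

# 2) Make the hashes visible to the target cluster, e.g. via DistCp.
hadoop distcp hdfs://source-nn:8020/hashes/my-table \
    hdfs://target-nn:8020/hashes/my-table

# 3) On the target cluster: compare hashes and copy only the diffs.
#    --sourcezkcluster points SyncTable at the source cluster's ZK quorum.
hbase org.apache.hadoop.hbase.mapreduce.SyncTable \
    --sourcezkcluster=source-zk1,source-zk2,source-zk3:2181:/hbase \
    /hashes/my-table my-table my-table
```

Only the row ranges whose hashes differ trigger remote scans and Puts/Deletes on the target, which is what makes this cheaper than a full CopyTable for mostly-identical tables.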
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678295#comment-16678295 ] Sean Busbey commented on HBASE-20586: - I agree that we don't have the needed infra to have a test for this right now. I would like whoever commits it to try running the change as well, especially given that it's been ~6 months since it was submitted. I'll try to make time next week. > SyncTable tool: Add support for cross-realm remote clusters > --- > > Key: HBASE-20586 > URL: https://issues.apache.org/jira/browse/HBASE-20586 > Project: HBase > Issue Type: Improvement > Components: mapreduce, Operability, Replication >Affects Versions: 1.2.0, 2.0.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Fix For: 1.5.0, 2.2.0 > > Attachments: HBASE-20586.master.001.patch > > > One possible scenario for HashTable/SyncTable is synchronizing different > clusters, for instance, when replication has been enabled but data existed > already, or due to replication issues that may have caused long lags in > replication. > For secured clusters under different kerberos realms (with cross-realm > properly set), though, the current SyncTable version would fail to authenticate > with the remote cluster when trying to read HashTable outputs (when > *sourcehashdir* is remote) and also when trying to read table data on the > remote cluster (when *sourcezkcluster* is remote). 
> The hdfs error would look like this: > {noformat} > INFO mapreduce.Job: Task Id : attempt_1524358175778_105392_m_00_0, Status > : FAILED > Error: java.io.IOException: Failed on local exception: java.io.IOException: > org.apache.hadoop.security.AccessControlException: Client cannot authenticate > via:[TOKEN, KERBEROS]; Host Details : local host is: "local-host/1.1.1.1"; > destination host is: "remote-nn":8020; > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) > at org.apache.hadoop.ipc.Client.call(Client.java:1506) > at org.apache.hadoop.ipc.Client.call(Client.java:1439) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) > at com.sun.proxy.$Proxy13.getBlockLocations(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:256) > ... > at > org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.readPropertiesFile(HashTable.java:144) > at > org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.read(HashTable.java:105) > at > org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.setup(SyncTable.java:188) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142) > ... > Caused by: java.io.IOException: > org.apache.hadoop.security.AccessControlException: Client cannot authenticate > via:[TOKEN, KERBEROS]{noformat} > The above can be sorted if the SyncTable job acquires a DT for the remote NN. > Once hdfs related authentication is done, it's also necessary to authenticate > against remote HBase, as the below error would arise: > {noformat} > INFO mapreduce.Job: Task Id : attempt_1524358175778_172414_m_00_0, Status > : FAILED > Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get > the location > at > org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326) > ... 
> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:867) > at > org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.syncRange(SyncTable.java:331) > ... > Caused by: java.io.IOException: Could not set up IO Streams to > remote-rs-host/1.1.1.2:60020 > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:786) > ... > Caused by: java.lang.RuntimeException: SASL authentication failed. The most > likely cause is missing or invalid credentials. Consider 'kinit'. > ... > Caused by: GSSException: No valid credentials provided (Mechanism level: > Failed to find any Kerberos tgt) > ...{noformat} > The above would need additional authentication logic against the remote hbase > cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-15557) Add guidance on HashTable/SyncTable to the RefGuide
[ https://issues.apache.org/jira/browse/HBASE-15557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-15557: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) merged. Thanks again [~wchevreuil] this is a great doc addition! maybe for follow-on, this bit sounds like an error condition we should detect? {code} +.Set sourcezkcluster to the actual source cluster ZK quorum +[NOTE] + +Although not required, if sourcezkcluster is not set, SyncTable will connect to local HBase cluster for both source and target, +which does not give any meaningful result. {code} > Add guidance on HashTable/SyncTable to the RefGuide > --- > > Key: HBASE-15557 > URL: https://issues.apache.org/jira/browse/HBASE-15557 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Sean Busbey >Assignee: Wellington Chevreuil >Priority: Critical > Fix For: 3.0.0 > > Attachments: HBASE-15557.master.001.patch, > HBASE-15557.master.002.patch > > > The docs for SyncTable are insufficient. Brief description from [~davelatham] > HBASE-13639 comment: > {quote} > Sorry for the lack of better documentation, Abhishek Soni. Thanks for > bringing it up. I'll try to provide a better explanation. You may have > already seen it, but if not, the design doc linked in the description above > may also give you some better clues as to how it should be used. > Briefly, the feature is intended to start with a pair of tables in remote > clusters that are already substantially similar and make them identical by > comparing hashes of the data and copying only the diffs instead of having to > copy the entire table. So it is targeted at a very specific use case (with > some work it could generalize to cover things like CopyTable and > VerifyReplication but it's not there yet). To use it, you choose one table to > be the "source", and the other table is the "target". 
After the process is > complete the target table should end up being identical to the source table. > In the source table's cluster, run > org.apache.hadoop.hbase.mapreduce.HashTable and pass it the name of the > source table and an output directory in HDFS. HashTable will scan the source > table, break the data up into row key ranges (default of 8kB per range) and > produce a hash of the data for each range. > Make the hashes available to the target cluster - I'd recommend using DistCp > to copy it across. > In the target table's cluster, run > org.apache.hadoop.hbase.mapreduce.SyncTable and pass it the directory where > you put the hashes, and the names of the source and destination tables. You > will likely also need to specify the source table's ZK quorum via the > --sourcezkcluster option. SyncTable will then read the hash information, and > compute the hashes of the same row ranges for the target table. For any row > range where the hash fails to match, it will open a remote scanner to the > source table, read the data for that range, and do Puts and Deletes to the > target table to update it to match the source. > I hope that clarifies it a bit. Let me know if you need a hand. If anyone > wants to work on getting some documentation into the book, I can try to write > some more but would love a hand on turning it into an actual book patch. > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21419) Show sync replication related field for replication peer on master web UI
[ https://issues.apache.org/jira/browse/HBASE-21419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678280#comment-16678280 ] Duo Zhang commented on HBASE-21419: --- Change 'Remote Root' to 'Remote WAL'? Otherwise +1. > Show sync replication related field for replication peer on master web UI > - > > Key: HBASE-21419 > URL: https://issues.apache.org/jira/browse/HBASE-21419 > Project: HBase > Issue Type: Sub-task > Components: Replication >Reporter: Duo Zhang >Assignee: Jingyun Tian >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-21419.master.001.patch, > HBASE-21419.master.002.patch, Screenshot from 2018-11-05 16-02-11.png > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21443) [hbase-connectors] Purge hbase-* modules from core now they've been moved to hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678269#comment-16678269 ] Sean Busbey commented on HBASE-21443: - Peter has it exactly correct. It's just that Yetus sees scala files have changed and the scaladoc test is around (because we tell it to use "all" tests basically) so it tries to run the test on the presumption that a maven project with scala files will have configured the scala plugins. Yetus would be more robust here if it referred to the fully qualified maven plugin name. we should file a bug for that. In general I think the workaround is to treat this as a false error and ignore it. > [hbase-connectors] Purge hbase-* modules from core now they've been moved to > hbase-connectors > - > > Key: HBASE-21443 > URL: https://issues.apache.org/jira/browse/HBASE-21443 > Project: HBase > Issue Type: Sub-task > Components: hbase-connectors, spark >Affects Versions: 3.0.0, 2.2.0 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21443.master.001.patch, > HBASE-21443.master.002.patch, HBASE-21443.master.002.patch > > > The parent copied the spark modules over to hbase-connectors. Here we purge > them from hbase core repo. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20952) Re-visit the WAL API
[ https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678265#comment-16678265 ] Sean Busbey commented on HBASE-20952: - {quote} From https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20952/41// , we can see that TestIncrementalBackupWithBulkLoad failed for the hadoop3 build. This is a known issue - see HADOOP-15850. Other than that, the build in the HBASE-20952 branch is quite normal. {quote} Please do what it takes to get passing builds. If there is a known cause of failure, can a JUnit Assume be used to disable the test on known-bad versions? Does the branch require a particular version of Hadoop? If so, why doesn't it have that expressed in the pom? > Re-visit the WAL API > > > Key: HBASE-20952 > URL: https://issues.apache.org/jira/browse/HBASE-20952 > Project: HBase > Issue Type: Improvement > Components: wal >Reporter: Josh Elser >Priority: Major > Attachments: 20952.v1.txt > > > Take a step back from the current WAL implementations and think about what an > HBase WAL API should look like. What are the primitive calls that we require > to guarantee durability of writes with a high degree of performance? > The API needs to take the current implementations into consideration. We > should also have a mind for what is happening in the Ratis LogService (but > the LogService should not dictate what HBase's WAL API looks like RATIS-272). > Other "systems" inside of HBase that use WALs are replication and > backup&restore. Replication has the use-case for "tail"'ing the WAL which we > should provide via our new API. B&R doesn't do anything fancy (IIRC). We > should make sure all consumers are generally going to be OK with the API we > create. > The API may be "OK" (or OK in a part). We need to also consider other methods > which were "bolted" on such as {{AbstractFSWAL}} and > {{WALFileLengthProvider}}. 
Other corners of "WAL use" (like the > {{WALSplitter}} should also be looked at to use WAL-APIs only). > We also need to make sure that adequate interface audience and stability > annotations are chosen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
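On the question above of using a JUnit Assume to skip a test on known-bad Hadoop versions: the mechanism can be sketched as follows. JUnit itself is stubbed out with a tiny assumption helper so the sketch is self-contained, and the version strings are placeholders, not a claim about which Hadoop versions are actually affected.

```java
// Sketch of the "JUnit Assume" idea: a violated assumption aborts a test
// as skipped rather than failed. org.junit.Assume works this way; here a
// minimal stand-in is used so the example runs on its own.
import java.util.Set;

public class AssumeSketch {
    static class AssumptionViolated extends RuntimeException {
        AssumptionViolated(String msg) { super(msg); }
    }

    /** Mirrors org.junit.Assume.assumeTrue: abort (not fail) when false. */
    static void assumeTrue(String message, boolean condition) {
        if (!condition) {
            throw new AssumptionViolated(message);
        }
    }

    /** Runs a "test", reporting PASSED or SKIPPED as a runner would. */
    static String runBackupTest(String hadoopVersion) {
        Set<String> knownBad = Set.of("3.1.1"); // placeholder version list
        try {
            assumeTrue("known-bad Hadoop " + hadoopVersion,
                !knownBad.contains(hadoopVersion));
            return "PASSED"; // the real test body would run here
        } catch (AssumptionViolated e) {
            return "SKIPPED: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(runBackupTest("2.7.4"));
        System.out.println(runBackupTest("3.1.1"));
    }
}
```

The advantage over simply deleting or ignoring the test is that the skip is conditional and self-documenting: the build stays green on the known-bad version while the test still runs everywhere else.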
[jira] [Commented] (HBASE-21447) HBCK2 tool have questions on holes when HBCK2 checks region chain
[ https://issues.apache.org/jira/browse/HBASE-21447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678247#comment-16678247 ] Nicholas Jiang commented on HBASE-21447: [~tianjingyun] HBCK2 doesn't support version 2.0.2, so I can't assign regions through this tool. The problem is that the holes issue can't be solved on 2.0.1 or 2.0.2, because those versions can't assign regions via HBCK2. To use the HBCK2 tool I would have to upgrade, which I'd rather not do.
{code:java}
static void checkVersion(final String versionStr) {
  if (versionStr.startsWith(TWO_POINT_ONE)) {
    throw new UnsupportedOperationException(TWO_POINT_ONE + " has no support for hbck2");
  }
  if (VersionInfo.compareVersion(MININUM_VERSION, versionStr) > 0) {
    throw new UnsupportedOperationException("Requires " + MININUM_VERSION + " at least.");
  }
}
{code}
> HBCK2 tool have questions on holes when HBCK2 checks region chain > --- > > Key: HBASE-21447 > URL: https://issues.apache.org/jira/browse/HBASE-21447 > Project: HBase > Issue Type: Improvement > Components: hbck2 >Affects Versions: 2.0.2 >Reporter: Nicholas Jiang >Priority: Major > Attachments: Hole.png > > > [hbck2]https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2 > This HBCK2 tool have some questions on holes when HBCK2 checks region chain > as follows. > {code:java} > ERROR: There is a hole in the region chain between \x01F\x00\x00 and > \x02\x8C\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x05\x18\x00\x00 and > \x06^\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x07\x01\x00\x00 and > \x07\xA4\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x08G\x00\x00 and > \x09\x8D\x00\x00. 
You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0A0\x00\x00 and > \x0Bv\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x0C\x19\x00\x00 and > \x0C\xBC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0D_\x00\x00 and > \x0E\xA5\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0F\xEB\x00\x00 and > \x111\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x16I\x00\x00 and > \x16\xEC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between (\xC0\x00\x00 and > *\x06\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > {code} > !Hole.png! > This hole problem can't be solved by HBCK2 tool. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
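For context on the version gate quoted in the comment above: a simplified dotted-version comparator (a stand-in for HBase's actual VersionInfo.compareVersion, so details may differ) shows why "2.0.2" fails a "2.0.3" minimum.

```java
// Simplified stand-in for VersionInfo.compareVersion: compares dotted
// numeric versions component by component. Illustrative only.
public class VersionCheckSketch {
    static int compareVersion(String a, String b) {
        String[] as = a.split("\\.");
        String[] bs = b.split("\\.");
        int n = Math.max(as.length, bs.length);
        for (int i = 0; i < n; i++) {
            int ai = i < as.length ? Integer.parseInt(as[i]) : 0;
            int bi = i < bs.length ? Integer.parseInt(bs[i]) : 0;
            if (ai != bi) {
                return Integer.compare(ai, bi);
            }
        }
        return 0;
    }

    /** Mirrors the quoted guard: reject anything below the minimum. */
    static boolean supported(String minimum, String actual) {
        return compareVersion(minimum, actual) <= 0;
    }

    public static void main(String[] args) {
        System.out.println(supported("2.0.3", "2.0.2")); // false: too old
        System.out.println(supported("2.0.3", "2.0.3")); // true
    }
}
```

Since the third component of "2.0.2" is below that of the "2.0.3" minimum, the guard throws UnsupportedOperationException, which is exactly the situation the commenter is describing.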
[jira] [Comment Edited] (HBASE-21447) HBCK2 tool have questions on holes when HBCK2 checks region chain
[ https://issues.apache.org/jira/browse/HBASE-21447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678247#comment-16678247 ] Nicholas Jiang edited comment on HBASE-21447 at 11/7/18 2:00 PM: - [~tianjingyun] HBCK2 doesn't support version 2.0.2, so I can't assign regions through this tool. The problem is that the holes issue can't be solved on 2.0.1 or 2.0.2, because those versions can't assign regions via HBCK2. To use the HBCK2 tool I would have to upgrade, which I'd rather not do.
{code:java}
private static final String MININUM_VERSION = "2.0.3";

static void checkVersion(final String versionStr) {
  if (versionStr.startsWith(TWO_POINT_ONE)) {
    throw new UnsupportedOperationException(TWO_POINT_ONE + " has no support for hbck2");
  }
  if (VersionInfo.compareVersion(MININUM_VERSION, versionStr) > 0) {
    throw new UnsupportedOperationException("Requires " + MININUM_VERSION + " at least.");
  }
}
{code}
was (Author: nicholasjiang): [~tianjingyun] HBCK2 doesn't support 2.0.2 version, so I can't assign regions again through this tool.And problem is that the holes question can't be solved based on 2.0.1 or 2.0.2 version because these versions can't assign regions by HBCK2 tool.If I use HBCK2 tool, I must conside version upgrade unwilling. 
{code:java} static void checkVersion(final String versionStr) { if (versionStr.startsWith(TWO_POINT_ONE)) { throw new UnsupportedOperationException(TWO_POINT_ONE + " has no support for hbck2"); } if (VersionInfo.compareVersion(MININUM_VERSION, versionStr) > 0) { throw new UnsupportedOperationException("Requires " + MININUM_VERSION + " at least."); } }{code} > HBCK2 tool have questions on holes when HBCK2 checks region chain > --- > > Key: HBASE-21447 > URL: https://issues.apache.org/jira/browse/HBASE-21447 > Project: HBase > Issue Type: Improvement > Components: hbck2 >Affects Versions: 2.0.2 >Reporter: Nicholas Jiang >Priority: Major > Attachments: Hole.png > > > [hbck2]https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2 > This HBCK2 tool have some questions on holes when HBCK2 checks region chain > as follows. > {code:java} > ERROR: There is a hole in the region chain between \x01F\x00\x00 and > \x02\x8C\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x05\x18\x00\x00 and > \x06^\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x07\x01\x00\x00 and > \x07\xA4\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x08G\x00\x00 and > \x09\x8D\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0A0\x00\x00 and > \x0Bv\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x0C\x19\x00\x00 and > \x0C\xBC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0D_\x00\x00 and > \x0E\xA5\x00\x00. 
You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0F\xEB\x00\x00 and > \x111\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x16I\x00\x00 and > \x16\xEC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between (\xC0\x00\x00 and > *\x06\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > {code} > !Hole.png! > This hole problem can't be solved by HBCK2 tool. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Issue Comment Deleted] (HBASE-21447) HBCK2 tool have questions on holes when HBCK2 checks region chain
[ https://issues.apache.org/jira/browse/HBASE-21447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicholas Jiang updated HBASE-21447: --- Comment: was deleted (was: OK, I will try this way. But I think it's better to solve the holes problem in the HBCK2 tool itself. Do you agree? [~tianjingyun]) > HBCK2 tool have questions on holes when HBCK2 checks region chain > --- > > Key: HBASE-21447 > URL: https://issues.apache.org/jira/browse/HBASE-21447 > Project: HBase > Issue Type: Improvement > Components: hbck2 >Affects Versions: 2.0.2 >Reporter: Nicholas Jiang >Priority: Major > Attachments: Hole.png > > > [hbck2]https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2 > The HBCK2 tool reports the following problems with holes when it checks the region > chain. > {code:java} > ERROR: There is a hole in the region chain between \x01F\x00\x00 and > \x02\x8C\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x05\x18\x00\x00 and > \x06^\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x07\x01\x00\x00 and > \x07\xA4\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x08G\x00\x00 and > \x09\x8D\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0A0\x00\x00 and > \x0Bv\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x0C\x19\x00\x00 and > \x0C\xBC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0D_\x00\x00 and > \x0E\xA5\x00\x00. 
You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0F\xEB\x00\x00 and > \x111\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x16I\x00\x00 and > \x16\xEC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between (\xC0\x00\x00 and > *\x06\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > {code} > !Hole.png! > This hole problem can't be solved by the HBCK2 tool. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20604) ProtobufLogReader#readNext can incorrectly loop to the same position in the stream until the WAL is rolled
[ https://issues.apache.org/jira/browse/HBASE-20604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678237#comment-16678237 ] Sean Busbey commented on HBASE-20604: - The log message on line 422 still needs to include a mention of a malformed edit. Checkstyle is close but still off by a bit. > ProtobufLogReader#readNext can incorrectly loop to the same position in the > stream until the WAL is rolled > -- > > Key: HBASE-20604 > URL: https://issues.apache.org/jira/browse/HBASE-20604 > Project: HBase > Issue Type: Bug > Components: Replication, wal >Affects Versions: 3.0.0 >Reporter: Esteban Gutierrez >Assignee: Esteban Gutierrez >Priority: Critical > Attachments: HBASE-20604.002.patch, HBASE-20604.003.patch, > HBASE-20604.004.patch, HBASE-20604.patch > > > Every time we call {{ProtobufLogReader#readNext}} we consume the input stream > associated with the {{FSDataInputStream}} from the WAL that we are reading. > Under certain conditions, e.g. when using encryption at rest > ({{CryptoInputStream}}), the stream can return partial data, which can cause a > premature EOF that causes {{inputStream.getPos()}} to return to the same > original position, causing {{ProtobufLogReader#readNext}} to retry the read > until the WAL is rolled. > The side effect of this issue is that {{ReplicationSource}} can get stuck > until the WAL is rolled, causing replication delays of up to an hour in some > cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
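The failure mode in the description can be modeled in a few lines of standalone Java. This is an illustrative toy, not the HBase patch: the record format (a 4-byte length prefix plus payload) and the names `ProgressGuardedReader`/`readRecords` are invented. The point is only that a reader which retries after a premature EOF must bound its retries while the position does not advance; otherwise it spins on the same offset until the WAL is rolled.

```java
import java.nio.ByteBuffer;

// Toy model of the readNext retry hazard: length-prefixed records, where the
// last record's payload may be missing because the stream returned partial data.
public class ProgressGuardedReader {

    /** Returns the number of complete records parsed before a truncated tail. */
    public static int readRecords(byte[] data, int maxRetries) {
        int pos = 0;
        int records = 0;
        int stalledRetries = 0;
        while (pos + 4 <= data.length) {
            int len = ByteBuffer.wrap(data, pos, 4).getInt();
            if (pos + 4 + len > data.length) {
                // Premature EOF: the payload is not fully available. A naive
                // reader seeks back to 'pos' and tries again; since the
                // position never advances, it would spin here forever.
                // Bounding the stalled retries breaks that loop.
                if (++stalledRetries >= maxRetries) {
                    break;
                }
                continue;
            }
            pos += 4 + len;
            records++;
            stalledRetries = 0;
        }
        return records;
    }

    public static void main(String[] args) {
        // Two whole records (lengths 2 and 3), then a prefix claiming 10 bytes
        // with only 2 present -- a truncated tail.
        byte[] data = {0, 0, 0, 2, 1, 1, 0, 0, 0, 3, 2, 2, 2, 0, 0, 0, 10, 9, 9};
        System.out.println(readRecords(data, 3)); // prints 2
    }
}
```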
[jira] [Commented] (HBASE-21441) NPE if RS restarts between REFRESH_PEER_SYNC_REPLICATION_STATE_ON_RS_BEGIN and TRANSIT_PEER_NEW_SYNC_REPLICATION_STATE
[ https://issues.apache.org/jira/browse/HBASE-21441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678213#comment-16678213 ] Hudson commented on HBASE-21441: Results for branch master [build #591 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/591/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/591//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/591//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/591//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > NPE if RS restarts between REFRESH_PEER_SYNC_REPLICATION_STATE_ON_RS_BEGIN > and TRANSIT_PEER_NEW_SYNC_REPLICATION_STATE > -- > > Key: HBASE-21441 > URL: https://issues.apache.org/jira/browse/HBASE-21441 > Project: HBase > Issue Type: Sub-task > Components: Replication >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-21441.patch > > > {noformat} > 2018-11-06,12:55:25,980 WARN > [RpcServer.default.FPBQ.Fifo.handler=251,queue=11,port=17100] > org.apache.hadoop.hbase.master.replication.RefreshPeerProcedure: Refresh peer > TestPeer for TRANSIT_SYNC_REPLICATION_STATE on > c4-hadoop-tst-st54.bj,17200,1541479922465 failed > java.lang.NullPointerException via > c4-hadoop-tst-st54.bj,17200,1541479922465:java.lang.NullPointerException: > at > org.apache.hadoop.hbase.procedure2.RemoteProcedureException.fromProto(RemoteProcedureException.java:124) > at > org.apache.hadoop.hbase.master.MasterRpcServices.lambda$reportProcedureDone$4(MasterRpcServices.java:2303) > at java.util.ArrayList.forEach(ArrayList.java:1249) > at > java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080) > at > org.apache.hadoop.hbase.master.MasterRpcServices.reportProcedureDone(MasterRpcServices.java:2298) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:13149) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) > Caused by: java.lang.NullPointerException: > at > org.apache.hadoop.hbase.wal.SyncReplicationWALProvider.peerSyncReplicationStateChange(SyncReplicationWALProvider.java:303) > at > 
org.apache.hadoop.hbase.replication.regionserver.PeerProcedureHandlerImpl.transitSyncReplicationPeerState(PeerProcedureHandlerImpl.java:216) > at > org.apache.hadoop.hbase.replication.regionserver.RefreshPeerCallable.call(RefreshPeerCallable.java:74) > at > org.apache.hadoop.hbase.replication.regionserver.RefreshPeerCallable.call(RefreshPeerCallable.java:34) > at > org.apache.hadoop.hbase.regionserver.handler.RSProcedureHandler.process(RSProcedureHandler.java:47) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20952) Re-visit the WAL API
[ https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678144#comment-16678144 ] Hudson commented on HBASE-20952: Results for branch HBASE-20952 [build #42 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20952/42/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20952/42//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20952/42//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20952/42//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Re-visit the WAL API > > > Key: HBASE-20952 > URL: https://issues.apache.org/jira/browse/HBASE-20952 > Project: HBase > Issue Type: Improvement > Components: wal >Reporter: Josh Elser >Priority: Major > Attachments: 20952.v1.txt > > > Take a step back from the current WAL implementations and think about what an > HBase WAL API should look like. What are the primitive calls that we require > to guarantee durability of writes with a high degree of performance? > The API needs to take the current implementations into consideration. We > should also have a mind for what is happening in the Ratis LogService (but > the LogService should not dictate what HBase's WAL API looks like RATIS-272). > Other "systems" inside of HBase that use WALs are replication and > backup&restore. Replication has the use-case for "tail"'ing the WAL which we > should provide via our new API. 
B&R doesn't do anything fancy (IIRC). We > should make sure all consumers are generally going to be OK with the API we > create. > The API may be "OK" (or OK in a part). We need to also consider other methods > which were "bolted" on such as {{AbstractFSWAL}} and > {{WALFileLengthProvider}}. Other corners of "WAL use" (like the > {{WALSplitter}} should also be looked at to use WAL-APIs only). > We also need to make sure that adequate interface audience and stability > annotations are chosen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
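To make the API discussion concrete, here is a deliberately naive in-memory sketch of the primitive calls the thread circles around: a durable append that returns a sequence id, a sync up to a sequence id, and a tail for consumers like replication. Everything here (`ToyWal` and its method signatures) is invented for illustration; it is not a proposed or actual HBase interface.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy in-memory "WAL" exposing append / sync / tail primitives.
public class ToyWal {
    private final List<byte[]> edits = new ArrayList<>();
    private long synced = -1; // highest sequence id considered durable

    /** Queue an edit; returns its sequence id for ordering. */
    public synchronized long append(byte[] edit) {
        edits.add(edit);
        return edits.size() - 1;
    }

    /** Mark all edits up to seqId durable (a real WAL would flush here). */
    public synchronized void sync(long seqId) {
        synced = Math.max(synced, seqId);
    }

    /** Tail durable edits from a sequence id onward (replication's use-case). */
    public synchronized Iterator<byte[]> tail(long fromSeqId) {
        return new ArrayList<>(edits.subList((int) fromSeqId, (int) synced + 1))
            .iterator();
    }
}
```

A durability guarantee in a real implementation would live behind `sync`; the tailing iterator only ever exposes edits at or below the synced sequence id, which is the property replication relies on.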
[jira] [Commented] (HBASE-21443) [hbase-connectors] Purge hbase-* modules from core now they've been moved to hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678094#comment-16678094 ] Peter Somogyi commented on HBASE-21443: --- [~stack], my guess as to why Yetus ran scaladoc for the first patch is that it noticed .scala files changed in the patch. It ran the scaladoc target for the branch (original version) and executed the same verification on the patched version. [https://github.com/apache/yetus/blob/master/precommit/test-patch.d/maven.sh#L482-L485] [Tue Nov 6 21:42:47 UTC 2018 INFO]: Personality: branch scaladoc ... [Tue Nov 6 22:16:33 UTC 2018 INFO]: Personality: patch scaladoc If my assumption is correct, we have to allow this patch in despite the Yetus -1 for scaladoc. [~busbey]: Are there any workarounds for this? > [hbase-connectors] Purge hbase-* modules from core now they've been moved to > hbase-connectors > - > > Key: HBASE-21443 > URL: https://issues.apache.org/jira/browse/HBASE-21443 > Project: HBase > Issue Type: Sub-task > Components: hbase-connectors, spark >Affects Versions: 3.0.0, 2.2.0 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21443.master.001.patch, > HBASE-21443.master.002.patch, HBASE-21443.master.002.patch > > > The parent copied the spark modules over to hbase-connectors. Here we purge > them from the hbase core repo. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21401) Sanity check in BaseDecoder#parseCell
[ https://issues.apache.org/jira/browse/HBASE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678091#comment-16678091 ] Duo Zhang commented on HBASE-21401: --- Please try different key & value sizes, and also with and without tags. And I think we can apply the fix for the bug first, maybe in another issue, and use this issue for more testing. > Sanity check in BaseDecoder#parseCell > - > > Key: HBASE-21401 > URL: https://issues.apache.org/jira/browse/HBASE-21401 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Critical > Fix For: 3.0.0, 2.2.0, 2.0.3, 2.1.2 > > Attachments: HBASE-21401.v1.patch, HBASE-21401.v2.patch, > HBASE-21401.v3.patch, HBASE-21401.v4.patch, HBASE-21401.v4.patch, > HBASE-21401.v5.patch > > > In KeyValueDecoder & ByteBuffKeyValueDecoder, we pass a byte buffer to > initialize the Cell without a sanity check (checking whether each field's > offset & length exceed the byte buffer), so an ArrayIndexOutOfBoundsException > may happen when reading the cell's fields, as in HBASE-21379; this kind of bug > is hard to debug. > An earlier check will help to find such bugs. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
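The kind of early check being asked for can be sketched generically: validate every field's offset/length against the backing buffer before constructing the cell, so corrupt input fails fast with a descriptive error instead of an ArrayIndexOutOfBoundsException deep in a read path. The class and method names below are illustrative; this is not the actual patch and does not model the real KeyValue wire format.

```java
// Generic bounds check applied before a field is handed to a Cell/KeyValue.
public class CellSanityCheck {

    /**
     * Rejects a field whose [offset, offset + length) range falls outside a
     * buffer of the given capacity. The (long) cast guards against int
     * overflow when offset + length exceeds Integer.MAX_VALUE.
     */
    public static void checkRange(int offset, int length, int capacity) {
        if (offset < 0 || length < 0 || (long) offset + length > capacity) {
            throw new IllegalArgumentException("field exceeds buffer: offset="
                + offset + ", length=" + length + ", capacity=" + capacity);
        }
    }

    public static void main(String[] args) {
        checkRange(0, 64, 64); // in bounds: passes silently
        try {
            checkRange(32, 64, 64); // claims 32 bytes past the buffer end
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```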
[jira] [Commented] (HBASE-21328) add HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP switch to hbase-env.sh
[ https://issues.apache.org/jira/browse/HBASE-21328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678067#comment-16678067 ] Hudson commented on HBASE-21328: Results for branch master [build #590 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/590/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/590//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/590//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/590//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > add HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP switch to hbase-env.sh > > > Key: HBASE-21328 > URL: https://issues.apache.org/jira/browse/HBASE-21328 > Project: HBase > Issue Type: Improvement > Components: documentation, Operability >Reporter: Nick.han >Assignee: Nick.han >Priority: Minor > Fix For: 3.0.0, 1.5.0, 2.2.0 > > Attachments: HBASE-21328.master.001.patch, > HBASE-21328.master.002.patch > > > hi, all > I ran into a problem while using hbase3.0.0-snapshot and hadoop 2.7.5 to > build an HBase cluster: HBase uses javax.servlet-api-3.1.0.jar, which > conflicts with the servlet-api-2.5.jar in the Hadoop lib path. I looked into > the hbase script and found the config HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP > set to false by default; this config decides whether or not to include the > Hadoop lib on the HBase classpath. So the question is: why do we set this > config to false? Can we set it to true and exclude the Hadoop lib by default? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
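For operators hitting the same jar conflict, the switch this issue documents can be flipped in hbase-env.sh. A minimal sketch; setting the value to "true" here reflects the reporter's request, not a project default:

```shell
# hbase-env.sh: skip adding Hadoop's own lib directory to the HBase classpath,
# avoiding clashes such as servlet-api-2.5.jar vs. javax.servlet-api-3.1.0.jar.
export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true"
```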
[jira] [Commented] (HBASE-21255) [acl] Refactor TablePermission into three classes (Global, Namespace, Table)
[ https://issues.apache.org/jira/browse/HBASE-21255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678066#comment-16678066 ] Reid Chan commented on HBASE-21255: --- Hoping for more comments. > [acl] Refactor TablePermission into three classes (Global, Namespace, Table) > > > Key: HBASE-21255 > URL: https://issues.apache.org/jira/browse/HBASE-21255 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21225.master.001.patch, > HBASE-21225.master.002.patch, HBASE-21225.master.007.patch, > HBASE-21255.master.003.patch, HBASE-21255.master.004.patch, > HBASE-21255.master.005.patch, HBASE-21255.master.006.patch > > > A TODO in {{TablePermission.java}} > {code:java} > //TODO refactor this class > //we need to refacting this into three classes (Global, Table, Namespace) > {code} > Change Notes: > * Divide the original TablePermission into three classes: GlobalPermission, > NamespacePermission, TablePermission. > * The new UserPermission consists of a user name and a permission that is one > of [Global, Namespace, Table]Permission. > * Rename TableAuthManager to AuthManager (it is IA.P), and rename some > methods for readability. > * Make PermissionCache thread-safe, and change the ListMultimap to a Set. > * The user cache and group cache in AuthManager are combined. > * The wire proto is kept, so BC should be guaranteed. > * Fix HBASE-21390. > * Resolve a small {{TODO}}: the global entry should be handled differently in > AccessControlLists. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
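The shape of the split described in the change notes can be illustrated with a self-contained sketch. The class, field, and method names below are heavily simplified stand-ins, not the committed HBase classes:

```java
import java.util.EnumSet;

// Illustrative shape of the Global/Namespace/Table split: a common base holds
// the granted actions, and each subclass adds only its own scope.
public class Permissions {
    public enum Action { READ, WRITE, ADMIN }

    public static abstract class Permission {
        final EnumSet<Action> actions;

        Permission(EnumSet<Action> actions) {
            this.actions = actions;
        }

        public boolean implies(Action action) {
            return actions.contains(action);
        }
    }

    /** Cluster-wide grant: no extra scope. */
    public static class GlobalPermission extends Permission {
        public GlobalPermission(EnumSet<Action> actions) {
            super(actions);
        }
    }

    /** Grant scoped to a namespace. */
    public static class NamespacePermission extends Permission {
        public final String namespace;

        public NamespacePermission(String namespace, EnumSet<Action> actions) {
            super(actions);
            this.namespace = namespace;
        }
    }

    /** Grant scoped to a table. */
    public static class TablePermission extends Permission {
        public final String table;

        public TablePermission(String table, EnumSet<Action> actions) {
            super(actions);
            this.table = table;
        }
    }

    public static void main(String[] args) {
        Permission p = new NamespacePermission("ns1", EnumSet.of(Action.READ));
        System.out.println(p.implies(Action.READ));  // true
        System.out.println(p.implies(Action.WRITE)); // false
    }
}
```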
[jira] [Commented] (HBASE-21247) Custom Meta WAL Provider doesn't default to custom WAL Provider whose configuration value is outside the enums in Providers
[ https://issues.apache.org/jira/browse/HBASE-21247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678063#comment-16678063 ] Hudson commented on HBASE-21247: Results for branch branch-2 [build #1488 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1488/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1488//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1488//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1488//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Custom Meta WAL Provider doesn't default to custom WAL Provider whose > configuration value is outside the enums in Providers > --- > > Key: HBASE-21247 > URL: https://issues.apache.org/jira/browse/HBASE-21247 > Project: HBase > Issue Type: Bug > Components: wal >Affects Versions: 3.0.0, 2.2.0, 2.1.1, 2.0.2 >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.1.2 > > Attachments: 21247.branch-2.patch, 21247.v1.txt, 21247.v10.txt, > 21247.v11.txt, 21247.v2.txt, 21247.v3.txt, 21247.v4.tst, 21247.v4.txt, > 21247.v5.txt, 21247.v6.txt, 21247.v7.txt, 21247.v8.txt, 21247.v9.txt, > HBASE-21247.branch-2.001.patch > > > Currently all the WAL Providers acceptable to hbase are specified in > Providers enum of WALFactory. > This restricts the ability for custom Meta WAL Provider to default to the > custom WAL Provider which is supplied by class name. 
> This issue fixes the bug by allowing the specification of a new WAL Provider > class name using the config "hbase.wal.provider". -- This message was sent by Atlassian JIRA (v7.6.3#76005)
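With the fix, pointing "hbase.wal.provider" at a class outside the Providers enum is enough. A hypothetical hbase-site.xml fragment; the class name org.example.CustomWALProvider is a placeholder, not a real provider:

```xml
<!-- Name a custom WAL provider by class. Per this issue, when
     hbase.wal.meta_provider is unset, the meta WAL now defaults to the
     same custom class instead of failing the enum lookup. -->
<property>
  <name>hbase.wal.provider</name>
  <value>org.example.CustomWALProvider</value>
</property>
```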
[jira] [Comment Edited] (HBASE-21447) HBCK2 tool have questions on holes when HBCK2 checks region chain
[ https://issues.apache.org/jira/browse/HBASE-21447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678059#comment-16678059 ] Nicholas Jiang edited comment on HBASE-21447 at 11/7/18 11:17 AM: -- OK, I will try this way. But I think it's better to solve the holes problem in the HBCK2 tool itself. Do you agree? [~tianjingyun] was (Author: nicholasjiang): OK, I will try this way. But I think it's better to solve the holes problem in the HBCK2 tool itself. Do you agree? > HBCK2 tool have questions on holes when HBCK2 checks region chain > --- > > Key: HBASE-21447 > URL: https://issues.apache.org/jira/browse/HBASE-21447 > Project: HBase > Issue Type: Improvement > Components: hbck2 >Affects Versions: 2.0.2 >Reporter: Nicholas Jiang >Priority: Major > Attachments: Hole.png > > > [hbck2]https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2 > The HBCK2 tool reports the following problems with holes when it checks the region > chain. > {code:java} > ERROR: There is a hole in the region chain between \x01F\x00\x00 and > \x02\x8C\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x05\x18\x00\x00 and > \x06^\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x07\x01\x00\x00 and > \x07\xA4\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x08G\x00\x00 and > \x09\x8D\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0A0\x00\x00 and > \x0Bv\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x0C\x19\x00\x00 and > \x0C\xBC\x00\x00. 
You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0D_\x00\x00 and > \x0E\xA5\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0F\xEB\x00\x00 and > \x111\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x16I\x00\x00 and > \x16\xEC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between (\xC0\x00\x00 and > *\x06\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > {code} > !Hole.png! > This hole problem can't be solved by the HBCK2 tool. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21447) HBCK2 tool have questions on holes when HBCK2 checks region chain
[ https://issues.apache.org/jira/browse/HBASE-21447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678059#comment-16678059 ] Nicholas Jiang commented on HBASE-21447: OK, I will try this way. But I think it's better to solve the holes problem in the HBCK2 tool itself. Do you agree? > HBCK2 tool have questions on holes when HBCK2 checks region chain > --- > > Key: HBASE-21447 > URL: https://issues.apache.org/jira/browse/HBASE-21447 > Project: HBase > Issue Type: Improvement > Components: hbck2 >Affects Versions: 2.0.2 >Reporter: Nicholas Jiang >Priority: Major > Attachments: Hole.png > > > [hbck2]https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2 > The HBCK2 tool reports the following problems with holes when it checks the region > chain. > {code:java} > ERROR: There is a hole in the region chain between \x01F\x00\x00 and > \x02\x8C\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x05\x18\x00\x00 and > \x06^\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x07\x01\x00\x00 and > \x07\xA4\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x08G\x00\x00 and > \x09\x8D\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0A0\x00\x00 and > \x0Bv\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x0C\x19\x00\x00 and > \x0C\xBC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0D_\x00\x00 and > \x0E\xA5\x00\x00. 
You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0F\xEB\x00\x00 and > \x111\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x16I\x00\x00 and > \x16\xEC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between (\xC0\x00\x00 and > *\x06\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > {code} > !Hole.png! > This hole problem can't be solved by the HBCK2 tool. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21443) [hbase-connectors] Purge hbase-* modules from core now they've been moved to hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678058#comment-16678058 ] Peter Somogyi commented on HBASE-21443: --- Retrigger v002. > [hbase-connectors] Purge hbase-* modules from core now they've been moved to > hbase-connectors > - > > Key: HBASE-21443 > URL: https://issues.apache.org/jira/browse/HBASE-21443 > Project: HBase > Issue Type: Sub-task > Components: hbase-connectors, spark >Affects Versions: 3.0.0, 2.2.0 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21443.master.001.patch, > HBASE-21443.master.002.patch, HBASE-21443.master.002.patch > > > The parent copied the spark modules over to hbase-connectors. Here we purge > them from hbase core repo. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21443) [hbase-connectors] Purge hbase-* modules from core now they've been moved to hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-21443: -- Attachment: HBASE-21443.master.002.patch > [hbase-connectors] Purge hbase-* modules from core now they've been moved to > hbase-connectors > - > > Key: HBASE-21443 > URL: https://issues.apache.org/jira/browse/HBASE-21443 > Project: HBase > Issue Type: Sub-task > Components: hbase-connectors, spark >Affects Versions: 3.0.0, 2.2.0 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-21443.master.001.patch, > HBASE-21443.master.002.patch, HBASE-21443.master.002.patch > > > The parent copied the spark modules over to hbase-connectors. Here we purge > them from hbase core repo. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21443) [hbase-connectors] Purge hbase-* modules from core now they've been moved to hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678050#comment-16678050 ] Hadoop QA commented on HBASE-21443: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 16 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 4s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hbase-spark-it hbase-assembly . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 2m 40s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 47s{color} | {color:green} root generated 0 new + 1157 unchanged - 133 fixed = 1157 total (was 1290) {color} | | {color:green}+1{color} | {color:green} scalac {color} | {color:green} 6m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 46s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 10m 33s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 5m 19s{color} | {color:green} the patch passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hbase-assembly . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} hbase-assembly in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 9s{color} | {color:green} root generated 0 new + 4 unchanged - 23 fixed = 4 total (was 27) {color} | | {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {colo
[jira] [Commented] (HBASE-21401) Sanity check in BaseDecoder#parseCell
[ https://issues.apache.org/jira/browse/HBASE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678046#comment-16678046 ] Hadoop QA commented on HBASE-21401: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 19s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 30s{color} | {color:red} hbase-common: The patch generated 21 new + 148 unchanged - 1 fixed = 169 total (was 149) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 2s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 12m 35s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 53s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}150m 38s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}205m 6s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21401 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12947196/HBASE-21401.v5.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux c890f62d772b 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / 86cbbdea9e | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb
[jira] [Commented] (HBASE-21347) Backport HBASE-21200 "Memstore flush doesn't finish because of seekToPreviousRow() in memstore scanner." to branch-1
[ https://issues.apache.org/jira/browse/HBASE-21347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678006#comment-16678006 ] Hudson commented on HBASE-21347: Results for branch branch-1 [build #541 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/541/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/541//General_Nightly_Build_Report/] (x) {color:red}-1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/541//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/541//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 source release artifact{color} -- See build output for details. > Backport HBASE-21200 "Memstore flush doesn't finish because of > seekToPreviousRow() in memstore scanner." to branch-1 > > > Key: HBASE-21347 > URL: https://issues.apache.org/jira/browse/HBASE-21347 > Project: HBase > Issue Type: Sub-task > Components: backport, Scanners >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Critical > Fix For: 1.5.0, 1.3.3, 1.4.9, 1.2.9 > > Attachments: HBASE-21347.branch-1.001.patch > > > Backport parent issue to branch-1. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21200) Memstore flush doesn't finish because of seekToPreviousRow() in memstore scanner.
[ https://issues.apache.org/jira/browse/HBASE-21200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678007#comment-16678007 ] Hudson commented on HBASE-21200: Results for branch branch-1 [build #541 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/541/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/541//General_Nightly_Build_Report/] (x) {color:red}-1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/541//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/541//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 source release artifact{color} -- See build output for details. > Memstore flush doesn't finish because of seekToPreviousRow() in memstore > scanner. > - > > Key: HBASE-21200 > URL: https://issues.apache.org/jira/browse/HBASE-21200 > Project: HBase > Issue Type: Bug > Components: Scanners >Reporter: dongjin2193.jeon >Assignee: Toshihiro Suzuki >Priority: Critical > Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3 > > Attachments: HBASE-21200-UT.patch, HBASE-21200.master.001.patch, > HBASE-21200.master.002.patch, RegionServerJstack.log > > > The issue of delaying memstore flush still occurs after backport hbase-15871. > Reverse scan takes a long time to seek previous row in the memstore full of > deleted cells. 
> > jstack : > "MemStoreFlusher.0" #114 prio=5 os_prio=0 tid=0x7fa3d0729000 nid=0x486a > waiting on condition [0x7fa3b9b6b000] > java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0xa465fe60> (a > java.util.concurrent.locks.ReentrantLock$NonfairSync) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199) > at > java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209) > at > java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285) > at > org.apache.hadoop.hbase.regionserver.*StoreScanner.updateReaders(StoreScanner.java:695)* > at > org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1127) > at > org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1106) > at > org.apache.hadoop.hbase.regionserver.HStore.access$600(HStore.java:130) > at > org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2455) > at > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2519) > at > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2256) > at > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2218) > at > org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2110) > at > org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2036) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:501) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471) > at > 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:75) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259) > at java.lang.Thread.run(Thread.java:748) > > "RpcServer.FifoWFPBQ.default.handler=27,queue=0,port=16020" #65 daemon prio=5 > os_prio=0 tid=0x7fa3e628 nid=0x4801 runnable [0x7fa3bd29a000] > java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.regionserver.DefaultMemStore$MemStoreScanner.getNext(DefaultMemStore.java:780) > at > org.apache.hadoop.hbase.regionserver.DefaultMemStore$MemStoreScanner.seekInSubLists(DefaultMemStore.java:826) > - locked <0xb45aa5b8> (a > org.apache.hadoop.hbase.regionserver.DefaultMemStore$MemStoreScanner) > at > org.apache.hadoop.hbase.regionserver.DefaultMemStore$MemS
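To make the stack traces above concrete: the flusher thread is parked on the StoreScanner lock while the RPC handler is deep in seekToPreviousRow(), which in this buggy code path effectively restarts a forward seek for each row it steps back over. A hypothetical Python model (not HBase code) of why that turns a reverse scan over a memstore full of deleted cells into quadratic work:

```python
# Hypothetical model of a reverse scan over a sorted memstore.
# In the naive scheme, every "seek to previous row" restarts a forward
# scan from the beginning -- the shape of the seekToPreviousRow() cost
# that keeps the scanner (and the lock it holds) busy for so long.

def naive_reverse_scan(cells):
    """Visit rows in reverse by re-scanning forward for each step."""
    comparisons = 0
    visited = []
    current = None  # None means "past the end of the key range"
    while True:
        prev = None
        for cell in cells:                # forward seek from the start
            comparisons += 1
            if current is not None and cell >= current:
                break
            prev = cell
        if prev is None:
            return visited, comparisons   # nothing before `current`
        visited.append(prev)
        current = prev

def single_pass_reverse_scan(cells):
    """Visit rows in reverse with one backward pass."""
    return list(reversed(cells)), len(cells)

cells = list(range(1000))
naive_rows, naive_cost = naive_reverse_scan(cells)
fast_rows, fast_cost = single_pass_reverse_scan(cells)
assert naive_rows == fast_rows
print(naive_cost, fast_cost)  # naive cost grows quadratically with memstore size
```

Both strategies visit the same rows; only the comparison count differs, which is why the flush appears "stuck" rather than wrong.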
[jira] [Commented] (HBASE-21328) add HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP switch to hbase-env.sh
[ https://issues.apache.org/jira/browse/HBASE-21328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678005#comment-16678005 ] Hudson commented on HBASE-21328: Results for branch branch-1 [build #541 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/541/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/541//General_Nightly_Build_Report/] (x) {color:red}-1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/541//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/541//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 source release artifact{color} -- See build output for details. > add HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP switch to hbase-env.sh > > > Key: HBASE-21328 > URL: https://issues.apache.org/jira/browse/HBASE-21328 > Project: HBase > Issue Type: Improvement > Components: documentation, Operability >Reporter: Nick.han >Assignee: Nick.han >Priority: Minor > Fix For: 3.0.0, 1.5.0, 2.2.0 > > Attachments: HBASE-21328.master.001.patch, > HBASE-21328.master.002.patch > > > hi all, > I ran into a problem while using hbase-3.0.0-SNAPSHOT and Hadoop 2.7.5 to > build an HBase cluster: HBase uses javax.servlet-api-3.1.0.jar, which > conflicts with the servlet-api-2.5.jar in the Hadoop lib path. Looking into > the hbase script, I found the config > HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP set to false by default; this config > decides whether the Hadoop lib is included on the HBase classpath. So the > question is: why do we set this config to false? Can we set it to true and > exclude the Hadoop lib by default? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
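For operators hitting the same servlet-api clash before any default changes, the switch already exists as an hbase-env.sh setting. A minimal sketch (the variable name comes from the issue above; the comment wording shipped in conf/hbase-env.sh may differ):

```shell
# In conf/hbase-env.sh: when set to "true", the bin/hbase script skips
# appending the output of `hadoop classpath` to HBase's own classpath,
# so conflicting jars such as servlet-api-2.5.jar from the Hadoop lib
# directory never reach the HBase JVM. HDFS client jars must then come
# from the hadoop jars bundled with (or explicitly added to) HBase.
export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true"
```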
[jira] [Comment Edited] (HBASE-21447) HBCK2 tool have questions on holes when HBCK2 checks region chain
[ https://issues.apache.org/jira/browse/HBASE-21447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678000#comment-16678000 ] Jingyun Tian edited comment on HBASE-21447 at 11/7/18 10:32 AM: [~nicholasjiang] try to assign these regions again and check if this works? Check out the doc here if you have any problem. https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2 was (Author: tianjingyun): [~nicholasjiang] try to assign these region again and check if this works? > HBCK2 tool have questions on holes when HBCK2 checks region chain > --- > > Key: HBASE-21447 > URL: https://issues.apache.org/jira/browse/HBASE-21447 > Project: HBase > Issue Type: Improvement > Components: hbck2 >Affects Versions: 2.0.2 >Reporter: Nicholas Jiang >Priority: Major > Attachments: Hole.png > > > [hbck2]https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2 > This HBCK2 tool have some questions on holes when HBCK2 checks region chain > as follows. > {code:java} > ERROR: There is a hole in the region chain between \x01F\x00\x00 and > \x02\x8C\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x05\x18\x00\x00 and > \x06^\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x07\x01\x00\x00 and > \x07\xA4\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x08G\x00\x00 and > \x09\x8D\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0A0\x00\x00 and > \x0Bv\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x0C\x19\x00\x00 and > \x0C\xBC\x00\x00. 
You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0D_\x00\x00 and > \x0E\xA5\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0F\xEB\x00\x00 and > \x111\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x16I\x00\x00 and > \x16\xEC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between (\xC0\x00\x00 and > *\x06\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > {code} > !Hole.png! > This hole problem can't be solved by HBCK2 tool. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
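Since HBCK2 has no one-shot fix for holes, any repair scripting starts from the reported boundaries. A hypothetical sketch of pulling the start/end keys out of error text like the above (the message format is taken verbatim from the report; nothing here is an hbck2 API):

```python
import re

# Matches (unwrapped) report lines such as:
# ERROR: There is a hole in the region chain between \x01F\x00\x00 and \x02\x8C\x00\x00.
HOLE_RE = re.compile(
    r"ERROR: There is a hole in the region chain between (\S+) and (\S+)\."
)

def parse_holes(report):
    """Return (start_key, end_key) pairs for every hole in an hbck report."""
    return [m.groups() for m in HOLE_RE.finditer(report)]

# Sample input: two unwrapped lines in the report's own format.
report = (
    "ERROR: There is a hole in the region chain between \\x01F\\x00\\x00 "
    "and \\x02\\x8C\\x00\\x00. You need to create a new .regioninfo and "
    "region dir in hdfs to plug the hole.\n"
    "ERROR: There is a hole in the region chain between \\x05\\x18\\x00\\x00 "
    "and \\x06^\\x00\\x00. You need to create a new .regioninfo and "
    "region dir in hdfs to plug the hole.\n"
)
holes = parse_holes(report)
print(holes)
```

From these pairs an operator could cross-check hbase:meta and decide which regions to recreate or reassign, as suggested in the comments above.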
[jira] [Commented] (HBASE-21200) Memstore flush doesn't finish because of seekToPreviousRow() in memstore scanner.
[ https://issues.apache.org/jira/browse/HBASE-21200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678004#comment-16678004 ] Hudson commented on HBASE-21200: Results for branch branch-1.4 [build #538 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/538/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/538//General_Nightly_Build_Report/] (x) {color:red}-1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/538//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/538//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Memstore flush doesn't finish because of seekToPreviousRow() in memstore > scanner. > - > > Key: HBASE-21200 > URL: https://issues.apache.org/jira/browse/HBASE-21200 > Project: HBase > Issue Type: Bug > Components: Scanners >Reporter: dongjin2193.jeon >Assignee: Toshihiro Suzuki >Priority: Critical > Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3 > > Attachments: HBASE-21200-UT.patch, HBASE-21200.master.001.patch, > HBASE-21200.master.002.patch, RegionServerJstack.log > > > The issue of delaying memstore flush still occurs after backport hbase-15871. > Reverse scan takes a long time to seek previous row in the memstore full of > deleted cells. 
> > jstack : > "MemStoreFlusher.0" #114 prio=5 os_prio=0 tid=0x7fa3d0729000 nid=0x486a > waiting on condition [0x7fa3b9b6b000] > java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0xa465fe60> (a > java.util.concurrent.locks.ReentrantLock$NonfairSync) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199) > at > java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209) > at > java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285) > at > org.apache.hadoop.hbase.regionserver.*StoreScanner.updateReaders(StoreScanner.java:695)* > at > org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1127) > at > org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1106) > at > org.apache.hadoop.hbase.regionserver.HStore.access$600(HStore.java:130) > at > org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2455) > at > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2519) > at > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2256) > at > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2218) > at > org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2110) > at > org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2036) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:501) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471) > at > 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:75) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259) > at java.lang.Thread.run(Thread.java:748) > > "RpcServer.FifoWFPBQ.default.handler=27,queue=0,port=16020" #65 daemon prio=5 > os_prio=0 tid=0x7fa3e628 nid=0x4801 runnable [0x7fa3bd29a000] > java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.regionserver.DefaultMemStore$MemStoreScanner.getNext(DefaultMemStore.java:780) > at > org.apache.hadoop.hbase.regionserver.DefaultMemStore$MemStoreScanner.seekInSubLists(DefaultMemStore.java:826) > - locked <0xb45aa5b8> (a > org.apache.hadoop.hbase.regionserver.DefaultMemStore$MemStoreScanner) > at > org.apache.hadoop.hbase.regionserver.DefaultM
[jira] [Commented] (HBASE-21347) Backport HBASE-21200 "Memstore flush doesn't finish because of seekToPreviousRow() in memstore scanner." to branch-1
[ https://issues.apache.org/jira/browse/HBASE-21347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678003#comment-16678003 ] Hudson commented on HBASE-21347: Results for branch branch-1.4 [build #538 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/538/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/538//General_Nightly_Build_Report/] (x) {color:red}-1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/538//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/538//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Backport HBASE-21200 "Memstore flush doesn't finish because of > seekToPreviousRow() in memstore scanner." to branch-1 > > > Key: HBASE-21347 > URL: https://issues.apache.org/jira/browse/HBASE-21347 > Project: HBase > Issue Type: Sub-task > Components: backport, Scanners >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Critical > Fix For: 1.5.0, 1.3.3, 1.4.9, 1.2.9 > > Attachments: HBASE-21347.branch-1.001.patch > > > Backport parent issue to branch-1. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21447) HBCK2 tool have questions on holes when HBCK2 checks region chain
[ https://issues.apache.org/jira/browse/HBASE-21447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678000#comment-16678000 ] Jingyun Tian commented on HBASE-21447: -- [~nicholasjiang] try to assign these regions again and check if that works? > HBCK2 tool have questions on holes when HBCK2 checks region chain > --- > > Key: HBASE-21447 > URL: https://issues.apache.org/jira/browse/HBASE-21447 > Project: HBase > Issue Type: Improvement > Components: hbck2 >Affects Versions: 2.0.2 >Reporter: Nicholas Jiang >Priority: Major > Attachments: Hole.png > > > [hbck2]https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2 > This HBCK2 tool have some questions on holes when HBCK2 checks region chain > as follows. > {code:java} > ERROR: There is a hole in the region chain between \x01F\x00\x00 and > \x02\x8C\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x05\x18\x00\x00 and > \x06^\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x07\x01\x00\x00 and > \x07\xA4\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x08G\x00\x00 and > \x09\x8D\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0A0\x00\x00 and > \x0Bv\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x0C\x19\x00\x00 and > \x0C\xBC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0D_\x00\x00 and > \x0E\xA5\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. 
> ERROR: There is a hole in the region chain between \x0F\xEB\x00\x00 and > \x111\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x16I\x00\x00 and > \x16\xEC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between (\xC0\x00\x00 and > *\x06\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > {code} > !Hole.png! > This hole problem can't be solved by HBCK2 tool. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21447) HBCK2 tool have questions on holes when HBCK2 checks region chain
[ https://issues.apache.org/jira/browse/HBASE-21447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677982#comment-16677982 ] Jingyun Tian commented on HBASE-21447: -- [~nicholasjiang] No. HBCK2 doesn't solve this now. > HBCK2 tool have questions on holes when HBCK2 checks region chain > --- > > Key: HBASE-21447 > URL: https://issues.apache.org/jira/browse/HBASE-21447 > Project: HBase > Issue Type: Improvement > Components: hbck2 >Affects Versions: 2.0.2 >Reporter: Nicholas Jiang >Priority: Major > Attachments: Hole.png > > > [hbck2]https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2 > This HBCK2 tool have some questions on holes when HBCK2 checks region chain > as follows. > {code:java} > ERROR: There is a hole in the region chain between \x01F\x00\x00 and > \x02\x8C\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x05\x18\x00\x00 and > \x06^\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x07\x01\x00\x00 and > \x07\xA4\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x08G\x00\x00 and > \x09\x8D\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0A0\x00\x00 and > \x0Bv\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x0C\x19\x00\x00 and > \x0C\xBC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0D_\x00\x00 and > \x0E\xA5\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. 
> ERROR: There is a hole in the region chain between \x0F\xEB\x00\x00 and > \x111\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x16I\x00\x00 and > \x16\xEC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between (\xC0\x00\x00 and > *\x06\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > {code} > !Hole.png! > This hole problem can't be solved by HBCK2 tool. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HBASE-21447) HBCK2 tool have questions on holes when HBCK2 checks region chain
[ https://issues.apache.org/jira/browse/HBASE-21447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677964#comment-16677964 ] Nicholas Jiang edited comment on HBASE-21447 at 11/7/18 10:12 AM: -- [~tianjingyun] I encountered this situation. Does the HBCK2 tool solve the holes problem? http://hbase.group/question/226 was (Author: nicholasjiang): [~tianjingyun] I encounter this situation.Does HBCK2 tool solve holes problem? > HBCK2 tool have questions on holes when HBCK2 checks region chain > --- > > Key: HBASE-21447 > URL: https://issues.apache.org/jira/browse/HBASE-21447 > Project: HBase > Issue Type: Improvement > Components: hbck2 >Affects Versions: 2.0.2 >Reporter: Nicholas Jiang >Priority: Major > Attachments: Hole.png > > > [hbck2]https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2 > This HBCK2 tool have some questions on holes when HBCK2 checks region chain > as follows. > {code:java} > ERROR: There is a hole in the region chain between \x01F\x00\x00 and > \x02\x8C\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x05\x18\x00\x00 and > \x06^\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x07\x01\x00\x00 and > \x07\xA4\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x08G\x00\x00 and > \x09\x8D\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0A0\x00\x00 and > \x0Bv\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x0C\x19\x00\x00 and > \x0C\xBC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. 
> ERROR: There is a hole in the region chain between \x0D_\x00\x00 and > \x0E\xA5\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0F\xEB\x00\x00 and > \x111\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x16I\x00\x00 and > \x16\xEC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between (\xC0\x00\x00 and > *\x06\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > {code} > !Hole.png! > This hole problem can't be solved by HBCK2 tool. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21419) Show sync replication related field for replication peer on master web UI
[ https://issues.apache.org/jira/browse/HBASE-21419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677968#comment-16677968 ] Hadoop QA commented on HBASE-21419: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange} 0m 0s{color} | {color:orange} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}149m 49s{color} | {color:green} hbase-server in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}162m 5s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21419 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12947193/HBASE-21419.master.002.patch | | Optional Tests | dupname asflicense javac javadoc unit | | uname | Linux ed252778d00a 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 86cbbdea9e | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/14977/testReport/ | | Max. process+thread count | 4687 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/14977/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Show sync replication related field for replication peer on master web UI > - > > Key: HBASE-21419 > URL: https://issues.apache.org/jira/browse/HBASE-21419 > Project: HBase > Issue Type: Sub-task > Components: Replication >Reporter: Duo Zhang >Assignee: Jingyun Tian >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-21419.master.001.patch, > HBASE-21419.master.002.patch, Screenshot from 2018-11-05 16-02-11.png > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21447) HBCK2 tool reports holes when checking the region chain
[ https://issues.apache.org/jira/browse/HBASE-21447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677964#comment-16677964 ] Nicholas Jiang commented on HBASE-21447: [~tianjingyun] I encountered this situation. Does the HBCK2 tool solve the holes problem? > HBCK2 tool reports holes when checking the region chain > --- > > Key: HBASE-21447 > URL: https://issues.apache.org/jira/browse/HBASE-21447 > Project: HBase > Issue Type: Improvement > Components: hbck2 >Affects Versions: 2.0.2 >Reporter: Nicholas Jiang >Priority: Major > Attachments: Hole.png > > > [hbck2]https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2 > The HBCK2 tool reports holes when it checks the region chain, > as follows. > {code:java} > ERROR: There is a hole in the region chain between \x01F\x00\x00 and > \x02\x8C\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x05\x18\x00\x00 and > \x06^\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x07\x01\x00\x00 and > \x07\xA4\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x08G\x00\x00 and > \x09\x8D\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0A0\x00\x00 and > \x0Bv\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x0C\x19\x00\x00 and > \x0C\xBC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0D_\x00\x00 and > \x0E\xA5\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. 
> ERROR: There is a hole in the region chain between \x0F\xEB\x00\x00 and > \x111\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x16I\x00\x00 and > \x16\xEC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between (\xC0\x00\x00 and > *\x06\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > {code} > !Hole.png! > This hole problem can't be solved by the HBCK2 tool. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
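Conceptually, the check that produces the errors above walks the regions in start-key order and flags every gap where one region's end key does not meet the next region's start key. The following is a minimal illustrative sketch with made-up keys — not the hbck/HBCK2 implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of a region-chain hole check, NOT the actual
// hbck/HBCK2 code: regions are ordered by start key, and a hole is any
// gap where one region's end key does not equal the next region's start key.
public class RegionChainHoles {

  /** Each region is a {startKey, endKey} pair; input must be sorted by start key. */
  public static List<String[]> findHoles(List<String[]> sortedRegions) {
    List<String[]> holes = new ArrayList<>();
    for (int i = 0; i + 1 < sortedRegions.size(); i++) {
      String prevEnd = sortedRegions.get(i)[1];
      String nextStart = sortedRegions.get(i + 1)[0];
      if (!prevEnd.equals(nextStart)) {
        holes.add(new String[] { prevEnd, nextStart }); // gap in the chain
      }
    }
    return holes;
  }

  public static void main(String[] args) {
    // Made-up key ranges: the chain is missing a region covering [b, e).
    List<String[]> regions = Arrays.asList(
        new String[] { "", "b" },
        new String[] { "e", "g" },
        new String[] { "g", "" });
    for (String[] hole : findHoles(regions)) {
      System.out.println("There is a hole between " + hole[0] + " and " + hole[1]);
    }
  }
}
```

Plugging such a hole means creating a region (with its .regioninfo and directory in HDFS) covering exactly the missing range, which is what the error messages ask the operator to do.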
[jira] [Commented] (HBASE-21200) Memstore flush doesn't finish because of seekToPreviousRow() in memstore scanner.
[ https://issues.apache.org/jira/browse/HBASE-21200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677957#comment-16677957 ] Hudson commented on HBASE-21200: Results for branch branch-1.3 [build #531 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/531/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/531//General_Nightly_Build_Report/] (/) {color:green}+1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/531//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/531//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Memstore flush doesn't finish because of seekToPreviousRow() in memstore > scanner. > - > > Key: HBASE-21200 > URL: https://issues.apache.org/jira/browse/HBASE-21200 > Project: HBase > Issue Type: Bug > Components: Scanners >Reporter: dongjin2193.jeon >Assignee: Toshihiro Suzuki >Priority: Critical > Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3 > > Attachments: HBASE-21200-UT.patch, HBASE-21200.master.001.patch, > HBASE-21200.master.002.patch, RegionServerJstack.log > > > The memstore flush delay issue still occurs after backporting HBASE-15871. > A reverse scan takes a long time seeking to the previous row in a memstore full of > deleted cells. 
> > jstack : > "MemStoreFlusher.0" #114 prio=5 os_prio=0 tid=0x7fa3d0729000 nid=0x486a > waiting on condition [0x7fa3b9b6b000] > java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0xa465fe60> (a > java.util.concurrent.locks.ReentrantLock$NonfairSync) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199) > at > java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209) > at > java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285) > at > org.apache.hadoop.hbase.regionserver.*StoreScanner.updateReaders(StoreScanner.java:695)* > at > org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1127) > at > org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1106) > at > org.apache.hadoop.hbase.regionserver.HStore.access$600(HStore.java:130) > at > org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2455) > at > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2519) > at > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2256) > at > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2218) > at > org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2110) > at > org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2036) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:501) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471) > at > 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:75) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259) > at java.lang.Thread.run(Thread.java:748) > > "RpcServer.FifoWFPBQ.default.handler=27,queue=0,port=16020" #65 daemon prio=5 > os_prio=0 tid=0x7fa3e628 nid=0x4801 runnable [0x7fa3bd29a000] > java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.regionserver.DefaultMemStore$MemStoreScanner.getNext(DefaultMemStore.java:780) > at > org.apache.hadoop.hbase.regionserver.DefaultMemStore$MemStoreScanner.seekInSubLists(DefaultMemStore.java:826) > - locked <0xb45aa5b8> (a > org.apache.hadoop.hbase.regionserver.DefaultMemStore$MemStoreScanner) > at > org.apache.hadoop.hbase.regionserver.Defa
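The two stack traces above show the flush handler parked on the scanner lock while an RPC handler runs inside the memstore scanner. A toy model (an assumed simplification, not HBase code) of why the backward seek holds the lock so long when the memstore is full of delete markers:

```java
// Toy model (an assumed simplification, NOT HBase code) of the cost of a
// backward seek in a memstore full of delete markers: every deleted cell
// between the current position and the previous live row must be stepped
// over, so each seekToPreviousRow() does work proportional to the number
// of deleted cells -- all while the flush path waits on the scanner lock.
public class ReverseSeekCost {

  /** deleted[i] is true when the cell at index i is a delete marker. */
  public static int cellsSkipped(boolean[] deleted, int pos) {
    int skipped = 0;
    for (int i = pos - 1; i >= 0 && deleted[i]; i--) {
      skipped++; // one step per delete marker, repeated on every seek
    }
    return skipped;
  }

  public static void main(String[] args) {
    // Five delete markers sit between two live rows: the seek steps over all five.
    boolean[] cells = { false, true, true, true, true, true, false };
    System.out.println(cellsSkipped(cells, 6)); // prints 5
  }
}
```

With millions of deleted cells between live rows, each seek repeats this linear walk, which is why the flush can appear to never finish.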
[jira] [Commented] (HBASE-21347) Backport HBASE-21200 "Memstore flush doesn't finish because of seekToPreviousRow() in memstore scanner." to branch-1
[ https://issues.apache.org/jira/browse/HBASE-21347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677956#comment-16677956 ] Hudson commented on HBASE-21347: Results for branch branch-1.3 [build #531 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/531/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/531//General_Nightly_Build_Report/] (/) {color:green}+1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/531//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/531//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Backport HBASE-21200 "Memstore flush doesn't finish because of > seekToPreviousRow() in memstore scanner." to branch-1 > > > Key: HBASE-21347 > URL: https://issues.apache.org/jira/browse/HBASE-21347 > Project: HBase > Issue Type: Sub-task > Components: backport, Scanners >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Critical > Fix For: 1.5.0, 1.3.3, 1.4.9, 1.2.9 > > Attachments: HBASE-21347.branch-1.001.patch > > > Backport parent issue to branch-1. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-15557) document SyncTable in ref guide
[ https://issues.apache.org/jira/browse/HBASE-15557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677952#comment-16677952 ] Hadoop QA commented on HBASE-15557: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 3m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 16s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 5m 11s{color} | {color:blue} branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 4m 47s{color} | {color:blue} patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-15557 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12947209/HBASE-15557.master.002.patch | | Optional Tests | dupname asflicense refguide | | uname | Linux 3ce3fcc68a65 4.4.0-137-generic #163-Ubuntu SMP Mon Sep 24 13:14:43 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 6d46b8d256 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/14980/artifact/patchprocess/branch-site/book.html | | refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/14980/artifact/patchprocess/patch-site/book.html | | Max. process+thread count | 93 (vs. ulimit of 1) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/14980/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > document SyncTable in ref guide > --- > > Key: HBASE-15557 > URL: https://issues.apache.org/jira/browse/HBASE-15557 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Sean Busbey >Assignee: Wellington Chevreuil >Priority: Critical > Attachments: HBASE-15557.master.001.patch, > HBASE-15557.master.002.patch > > > The docs for SyncTable are insufficient. Brief description from [~davelatham] > HBASE-13639 comment: > {quote} > Sorry for the lack of better documentation, Abhishek Soni. Thanks for > bringing it up. I'll try to provide a better explanation. 
You may have > already seen it, but if not, the design doc linked in the description above > may also give you some better clues as to how it should be used. > Briefly, the feature is intended to start with a pair of tables in remote > clusters that are already substantially similar and make them identical by > comparing hashes of the data and copying only the diffs instead of having to > copy the entire table. So it is targeted at a very specific use case (with > some work it could generalize to cover things like CopyTable and > VerifyReplication but it's not there yet). To use it, you choose one table to > be the "source", and the other table is the "target". After the process is > complete the target table should end up being identical to the source table. > In the source table's cluster, run > org.apache.hadoop.hbase.mapreduce.HashTable and pass it the name of the > source table and an output directory in HDFS. HashTable will scan the source > table, break the data up into row key ranges (d
[jira] [Commented] (HBASE-21447) HBCK2 tool reports holes when checking the region chain
[ https://issues.apache.org/jira/browse/HBASE-21447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677949#comment-16677949 ] Jingyun Tian commented on HBASE-21447: -- Did you reproduce this problem, or did you just encounter this situation? > HBCK2 tool reports holes when checking the region chain > --- > > Key: HBASE-21447 > URL: https://issues.apache.org/jira/browse/HBASE-21447 > Project: HBase > Issue Type: Improvement > Components: hbck2 >Affects Versions: 2.0.2 >Reporter: Nicholas Jiang >Priority: Major > Attachments: Hole.png > > > [hbck2]https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2 > The HBCK2 tool reports holes when it checks the region chain, > as follows. > {code:java} > ERROR: There is a hole in the region chain between \x01F\x00\x00 and > \x02\x8C\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x05\x18\x00\x00 and > \x06^\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x07\x01\x00\x00 and > \x07\xA4\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x08G\x00\x00 and > \x09\x8D\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0A0\x00\x00 and > \x0Bv\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x0C\x19\x00\x00 and > \x0C\xBC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between \x0D_\x00\x00 and > \x0E\xA5\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. 
> ERROR: There is a hole in the region chain between \x0F\xEB\x00\x00 and > \x111\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > ERROR: There is a hole in the region chain between \x16I\x00\x00 and > \x16\xEC\x00\x00. You need to create a new .regioninfo and region dir in hdfs > to plug the hole. > ERROR: There is a hole in the region chain between (\xC0\x00\x00 and > *\x06\x00\x00. You need to create a new .regioninfo and region dir in hdfs to > plug the hole. > {code} > !Hole.png! > This hole problem can't be solved by the HBCK2 tool. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21347) Backport HBASE-21200 "Memstore flush doesn't finish because of seekToPreviousRow() in memstore scanner." to branch-1
[ https://issues.apache.org/jira/browse/HBASE-21347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677950#comment-16677950 ] Hudson commented on HBASE-21347: Results for branch branch-1.2 [build #540 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/540/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/540//General_Nightly_Build_Report/] (/) {color:green}+1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/540//JDK7_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/540//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Backport HBASE-21200 "Memstore flush doesn't finish because of > seekToPreviousRow() in memstore scanner." to branch-1 > > > Key: HBASE-21347 > URL: https://issues.apache.org/jira/browse/HBASE-21347 > Project: HBase > Issue Type: Sub-task > Components: backport, Scanners >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Critical > Fix For: 1.5.0, 1.3.3, 1.4.9, 1.2.9 > > Attachments: HBASE-21347.branch-1.001.patch > > > Backport parent issue to branch-1. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21200) Memstore flush doesn't finish because of seekToPreviousRow() in memstore scanner.
[ https://issues.apache.org/jira/browse/HBASE-21200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677951#comment-16677951 ] Hudson commented on HBASE-21200: Results for branch branch-1.2 [build #540 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/540/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/540//General_Nightly_Build_Report/] (/) {color:green}+1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/540//JDK7_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/540//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Memstore flush doesn't finish because of seekToPreviousRow() in memstore > scanner. > - > > Key: HBASE-21200 > URL: https://issues.apache.org/jira/browse/HBASE-21200 > Project: HBase > Issue Type: Bug > Components: Scanners >Reporter: dongjin2193.jeon >Assignee: Toshihiro Suzuki >Priority: Critical > Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3 > > Attachments: HBASE-21200-UT.patch, HBASE-21200.master.001.patch, > HBASE-21200.master.002.patch, RegionServerJstack.log > > > The memstore flush delay issue still occurs after backporting HBASE-15871. > A reverse scan takes a long time seeking to the previous row in a memstore full of > deleted cells. 
> > jstack : > "MemStoreFlusher.0" #114 prio=5 os_prio=0 tid=0x7fa3d0729000 nid=0x486a > waiting on condition [0x7fa3b9b6b000] > java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0xa465fe60> (a > java.util.concurrent.locks.ReentrantLock$NonfairSync) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199) > at > java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209) > at > java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285) > at > org.apache.hadoop.hbase.regionserver.*StoreScanner.updateReaders(StoreScanner.java:695)* > at > org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1127) > at > org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1106) > at > org.apache.hadoop.hbase.regionserver.HStore.access$600(HStore.java:130) > at > org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2455) > at > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2519) > at > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2256) > at > org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2218) > at > org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2110) > at > org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2036) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:501) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471) > at > 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:75) > at > org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259) > at java.lang.Thread.run(Thread.java:748) > > "RpcServer.FifoWFPBQ.default.handler=27,queue=0,port=16020" #65 daemon prio=5 > os_prio=0 tid=0x7fa3e628 nid=0x4801 runnable [0x7fa3bd29a000] > java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.regionserver.DefaultMemStore$MemStoreScanner.getNext(DefaultMemStore.java:780) > at > org.apache.hadoop.hbase.regionserver.DefaultMemStore$MemStoreScanner.seekInSubLists(DefaultMemStore.java:826) > - locked <0xb45aa5b8> (a > org.apache.hadoop.hbase.regionserver.DefaultMemStore$MemStoreScanner) > at > org.apache.hadoop.hbase.regionserver.
[jira] [Commented] (HBASE-21410) A helper page that helps find all problematic regions and procedures
[ https://issues.apache.org/jira/browse/HBASE-21410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677923#comment-16677923 ] Hadoop QA commented on HBASE-21410: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange} 0m 0s{color} | {color:orange} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}131m 20s{color} | {color:green} hbase-server in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}142m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21410 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12947192/HBASE-21410.master.003.patch | | Optional Tests | dupname asflicense javac javadoc unit | | uname | Linux f2b48f40907b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 86cbbdea9e | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/14976/testReport/ | | Max. process+thread count | 5104 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/14976/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. 
> A helper page that helps find all problematic regions and procedures > --- > > Key: HBASE-21410 > URL: https://issues.apache.org/jira/browse/HBASE-21410 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0, 2.1.0, 2.2.0 >Reporter: Jingyun Tian >Assignee: Jingyun Tian >Priority: Major > Attachments: HBASE-21410.master.001.patch, > HBASE-21410.master.002.patch, HBASE-21410.master.003.patch, Screenshot from > 2018-10-30 19-06-21.png, Screenshot from 2018-10-30 19-06-42.png, Screenshot > from 2018-10-31 10-11-38.png, Screenshot from 2018-10-31 10-11-56.png, > Screenshot from 2018-11-01 17-56-02.png, Screenshot from 2018-11-01 > 17-56-15.png > > > *This page mainly focuses on finding regions stuck in a state from which they > cannot be assigned. My proposal for the page is as follows:* > !Screenshot from 2018-10-30 19-06-21.png! > *From this page we can see all regions in the RIT queue and their related > procedures. If we can determine that these regions' states are abnormal, we > can click the link 'Procedures as TXT' to get a full list of procedure IDs to > bypass them, then click 'Regions as TXT' to get a full list of encoded region > names to assign.* > !Screenshot from 2018-10-30 19-06-42.png! > *Some region names are covered by the navigation bar; I'll fix that later.* -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677922#comment-16677922 ] Wellington Chevreuil commented on HBASE-20586: -- Thanks [~busbey]. I had tried to play a bit with integration tests using minikdc, minicluster, etc. I managed to have two of each running in the test, but whenever I changed the credentials for the user running the clusters, the one in the other realm crashed. I guess the problem here is that the two "fake" clusters in this test run in the same JVM, so I got stuck on that point while trying to implement automated tests. I have not worked on this further lately, and I'm not sure whether this is testable at all. End-to-end-test-wise, we did test this and even deployed it in a production environment, where it worked well as an alternative to CopyTable. Maybe we could relax our automated-test policy to push this? > SyncTable tool: Add support for cross-realm remote clusters > --- > > Key: HBASE-20586 > URL: https://issues.apache.org/jira/browse/HBASE-20586 > Project: HBase > Issue Type: Improvement > Components: mapreduce, Operability, Replication >Affects Versions: 1.2.0, 2.0.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Fix For: 1.5.0, 2.2.0 > > Attachments: HBASE-20586.master.001.patch > > > One possible scenario for HashTable/SyncTable is synchronizing different > clusters, for instance, when replication has been enabled but data already > existed, or when replication issues have caused long lags in the > replication. > For secured clusters under different Kerberos realms (with cross-realm trust > properly set up), though, the current SyncTable version fails to authenticate > with the remote cluster when trying to read HashTable outputs (when > *sourcehashdir* is remote) and also when trying to read table data on the > remote cluster (when *sourcezkcluster* is remote). 
> The hdfs error would look like this: > {noformat} > INFO mapreduce.Job: Task Id : attempt_1524358175778_105392_m_00_0, Status > : FAILED > Error: java.io.IOException: Failed on local exception: java.io.IOException: > org.apache.hadoop.security.AccessControlException: Client cannot authenticate > via:[TOKEN, KERBEROS]; Host Details : local host is: "local-host/1.1.1.1"; > destination host is: "remote-nn":8020; > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) > at org.apache.hadoop.ipc.Client.call(Client.java:1506) > at org.apache.hadoop.ipc.Client.call(Client.java:1439) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) > at com.sun.proxy.$Proxy13.getBlockLocations(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:256) > ... > at > org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.readPropertiesFile(HashTable.java:144) > at > org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.read(HashTable.java:105) > at > org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.setup(SyncTable.java:188) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142) > ... > Caused by: java.io.IOException: > org.apache.hadoop.security.AccessControlException: Client cannot authenticate > via:[TOKEN, KERBEROS]{noformat} > The above can be sorted if the SyncTable job acquires a DT for the remote NN. > Once hdfs related authentication is done, it's also necessary to authenticate > against remote HBase, as the below error would arise: > {noformat} > INFO mapreduce.Job: Task Id : attempt_1524358175778_172414_m_00_0, Status > : FAILED > Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get > the location > at > org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326) > ... 
> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:867) > at > org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.syncRange(SyncTable.java:331) > ... > Caused by: java.io.IOException: Could not set up IO Streams to > remote-rs-host/1.1.1.2:60020 > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:786) > ... > Caused by: java.lang.RuntimeException: SASL authentication failed. The most > likely cause is missing or invalid credentials. Consider 'kinit'. > ... > Caused by: GSSException: No valid credentials provided (Mechanism level: > Failed to find any Kerberos tgt) > ...{noformat} > The above would need additional authentication logic against the remote hbase > cluster. -- This
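The two fixes described above — acquiring an HDFS delegation token for the remote NameNode and adding authentication against the remote HBase cluster — could be sketched roughly as below at job-setup time. This is an assumed sketch, not the attached patch: `remoteClusterConf` and the path are hypothetical names, and the calls shown should be checked against the Hadoop/HBase versions in use.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.security.TokenCache;

// Hedged sketch: obtain credentials for the remote realm before submitting
// the SyncTable job. "remoteClusterConf" and "sourceHashDir" are hypothetical.
Configuration remoteClusterConf = HBaseConfiguration.create();
// ... point remoteClusterConf at the remote ZK quorum / security settings ...

Path sourceHashDir = new Path("hdfs://remote-nn:8020/hashes/my-table");
// Delegation token for the remote NameNode, so mappers can read the hashes.
TokenCache.obtainTokensForNamenodes(
    job.getCredentials(), new Path[] { sourceHashDir }, job.getConfiguration());
// HBase authentication token for the remote cluster, for the remote scans.
TableMapReduceUtil.initCredentialsForCluster(job, remoteClusterConf);
```

This mirrors what mapreduce jobs normally do for their own cluster; the point of the improvement is doing it for a second, cross-realm cluster as well.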
[jira] [Commented] (HBASE-15557) document SyncTable in ref guide
[ https://issues.apache.org/jira/browse/HBASE-15557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677913#comment-16677913 ] Wellington Chevreuil commented on HBASE-15557: -- Thanks for noticing that, [~busbey]. I had actually submitted a new patch file with the email info corrected. > document SyncTable in ref guide > --- > > Key: HBASE-15557 > URL: https://issues.apache.org/jira/browse/HBASE-15557 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Sean Busbey >Assignee: Wellington Chevreuil >Priority: Critical > Attachments: HBASE-15557.master.001.patch, > HBASE-15557.master.002.patch > > > The docs for SyncTable are insufficient. Brief description from [~davelatham] > HBASE-13639 comment: > {quote} > Sorry for the lack of better documentation, Abhishek Soni. Thanks for > bringing it up. I'll try to provide a better explanation. You may have > already seen it, but if not, the design doc linked in the description above > may also give you some better clues as to how it should be used. > Briefly, the feature is intended to start with a pair of tables in remote > clusters that are already substantially similar and make them identical by > comparing hashes of the data and copying only the diffs instead of having to > copy the entire table. So it is targeted at a very specific use case (with > some work it could generalize to cover things like CopyTable and > VerifyReplication but it's not there yet). To use it, you choose one table to > be the "source", and the other table is the "target". After the process is > complete the target table should end up being identical to the source table. > In the source table's cluster, run > org.apache.hadoop.hbase.mapreduce.HashTable and pass it the name of the > source table and an output directory in HDFS. 
HashTable will scan the source > table, break the data up into row key ranges (default of 8kB per range) and > produce a hash of the data for each range. > Make the hashes available to the target cluster - I'd recommend using DistCp > to copy it across. > In the target table's cluster, run > org.apache.hadoop.hbase.mapreduce.SyncTable and pass it the directory where > you put the hashes, and the names of the source and destination tables. You > will likely also need to specify the source table's ZK quorum via the > --sourcezkcluster option. SyncTable will then read the hash information, and > compute the hashes of the same row ranges for the target table. For any row > range where the hash fails to match, it will open a remote scanner to the > source table, read the data for that range, and do Puts and Deletes to the > target table to update it to match the source. > I hope that clarifies it a bit. Let me know if you need a hand. If anyone > wants to work on getting some documentation into the book, I can try to write > some more but would love a hand on turning it into an actual book patch. > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
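The HashTable/SyncTable walkthrough quoted above can be sketched programmatically, since both classes implement Hadoop's Tool interface. This is an illustration only: the table name, HDFS path, batch size and ZK cluster key are placeholders, and in practice each step would run on its own cluster (with a DistCp of the hash directory in between).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.HashTable;
import org.apache.hadoop.hbase.mapreduce.SyncTable;
import org.apache.hadoop.util.ToolRunner;

public class SyncTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    // Step 1, on the source cluster: scan the source table, split it into
    // row-key ranges (~8kB of data each by default) and write a hash per
    // range to the given HDFS output directory.
    ToolRunner.run(conf, new HashTable(conf),
        new String[] { "--batchsize=8000", "TestTable", "/hashes/TestTable" });

    // (Make /hashes/TestTable available on the target cluster, e.g. DistCp.)

    // Step 2, on the target cluster: recompute hashes for the same ranges on
    // the target table and sync only the ranges whose hashes differ.
    // --dryrun reports the diffs without writing; drop it to apply
    // the Puts/Deletes.
    ToolRunner.run(conf, new SyncTable(conf),
        new String[] { "--dryrun=true",
            "--sourcezkcluster=zk1.source.example.com:2181:/hbase",
            "/hashes/TestTable", "TestTable", "TestTable" });
  }
}
```

The same two steps are more commonly launched from the shell via `hbase org.apache.hadoop.hbase.mapreduce.HashTable ...` and `hbase org.apache.hadoop.hbase.mapreduce.SyncTable ...` with the same arguments.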
[jira] [Updated] (HBASE-15557) document SyncTable in ref guide
[ https://issues.apache.org/jira/browse/HBASE-15557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-15557: - Attachment: HBASE-15557.master.002.patch > document SyncTable in ref guide > --- > > Key: HBASE-15557 > URL: https://issues.apache.org/jira/browse/HBASE-15557 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 1.2.0 >Reporter: Sean Busbey >Assignee: Wellington Chevreuil >Priority: Critical > Attachments: HBASE-15557.master.001.patch, > HBASE-15557.master.002.patch > > > The docs for SyncTable are insufficient. Brief description from [~davelatham] > HBASE-13639 comment: > {quote} > Sorry for the lack of better documentation, Abhishek Soni. Thanks for > bringing it up. I'll try to provide a better explanation. You may have > already seen it, but if not, the design doc linked in the description above > may also give you some better clues as to how it should be used. > Briefly, the feature is intended to start with a pair of tables in remote > clusters that are already substantially similar and make them identical by > comparing hashes of the data and copying only the diffs instead of having to > copy the entire table. So it is targeted at a very specific use case (with > some work it could generalize to cover things like CopyTable and > VerifyReplication but it's not there yet). To use it, you choose one table to > be the "source", and the other table is the "target". After the process is > complete the target table should end up being identical to the source table. > In the source table's cluster, run > org.apache.hadoop.hbase.mapreduce.HashTable and pass it the name of the > source table and an output directory in HDFS. HashTable will scan the source > table, break the data up into row key ranges (default of 8kB per range) and > produce a hash of the data for each range. > Make the hashes available to the target cluster - I'd recommend using DistCp > to copy it across. 
> In the target table's cluster, run > org.apache.hadoop.hbase.mapreduce.SyncTable and pass it the directory where > you put the hashes, and the names of the source and destination tables. You > will likely also need to specify the source table's ZK quorum via the > --sourcezkcluster option. SyncTable will then read the hash information, and > compute the hashes of the same row ranges for the target table. For any row > range where the hash fails to match, it will open a remote scanner to the > source table, read the data for that range, and do Puts and Deletes to the > target table to update it to match the source. > I hope that clarifies it a bit. Let me know if you need a hand. If anyone > wants to work on getting some documentation into the book, I can try to write > some more but would love a hand on turning it into an actual book patch. > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19121) HBCK for AMv2 (A.K.A HBCK2)
[ https://issues.apache.org/jira/browse/HBASE-19121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677906#comment-16677906 ] Nicholas Jiang commented on HBASE-19121: [~stack] https://issues.apache.org/jira/browse/HBASE-21447 I have some questions about holes, based on the HBCK2 tool [https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2] > HBCK for AMv2 (A.K.A HBCK2) > --- > > Key: HBASE-19121 > URL: https://issues.apache.org/jira/browse/HBASE-19121 > Project: HBase > Issue Type: Umbrella > Components: hbck, hbck2 >Reporter: stack >Assignee: Umesh Agashe >Priority: Major > Fix For: hbck2-1.0.0 > > Attachments: hbase-19121.master.001.patch > > > We don't have an hbck for the new AM. Old hbck may actually do damage going > against AMv2. > Fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005)