[jira] [Updated] (HBASE-21957) Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one
[ https://issues.apache.org/jira/browse/HBASE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-21957: - Description: After HBASE-12295, we have blocks with MemoryType.SHARED or MemoryType.EXCLUSIVE; a block in the off-heap BucketCache is shared and has a reference count to track its life cycle. Once no RPC references the shared block, the block can be evicted. After HBASE-21916 we introduced a refCount for ByteBuff as well, so I think we can unify the two into one. I tried to fix this while preparing the patch for HBASE-21879, but it can be a separate sub-task and it won't affect the main logic of HBASE-21879, so I created a separate issue. !HBASE-21957-design.png! Attached a picture here. In general, HFileBlocks that map to the same BucketEntry should share the same refCnt. That is to say, if a BucketEntry has a refCnt-a, then all HFileBlocks related to this bucket entry should also use this refCnt-a to track their memory. was: After HBASE-12295, we have blocks with MemoryType.SHARED or MemoryType.EXCLUSIVE; a block in the off-heap BucketCache is shared and has a reference count to track its life cycle. Once no RPC references the shared block, the block can be evicted. After HBASE-21916 we introduced a refCount for ByteBuff as well, so I think we can unify the two into one. I tried to fix this while preparing the patch for HBASE-21879, but it can be a separate sub-task and it won't affect the main logic of HBASE-21879, so I created a separate issue.
> Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one > - > > Key: HBASE-21957 > URL: https://issues.apache.org/jira/browse/HBASE-21957 > Project: HBase > Issue Type: Sub-task >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Attachments: HBASE-21957-design.png -- This message was sent by Atlassian JIRA (v7.6.3#76005)
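The design discussed in this issue — one refCnt shared by a BucketEntry and every HFileBlock backed by it — can be sketched roughly as follows. This is an illustrative sketch, not HBase's actual classes: `SharedRefCnt`, `retain`, and `release` are hypothetical names standing in for the unified counter.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of a unified reference count: the bucket entry and
// every HFileBlock that maps to it hold the SAME counter, so the backing
// off-heap memory is freed exactly once, when the last holder releases it.
class SharedRefCnt {
    // starts at 1: the reference held by the BucketCache itself
    private final AtomicInteger cnt = new AtomicInteger(1);
    private volatile boolean freed = false;

    /** Called when an RPC handler starts reading a block backed by this entry. */
    void retain() {
        cnt.incrementAndGet();
    }

    /** Returns true when this was the last reference and the memory was freed. */
    boolean release() {
        if (cnt.decrementAndGet() == 0) {
            freed = true; // here the real cache would free the ByteBuff's memory
            return true;
        }
        return false;
    }

    boolean isFreed() {
        return freed;
    }
}
```

Under this sketch, eviction becomes just another release(): whichever path drops the count to zero — the RPC path or the eviction path — performs the actual free, which is what sharing one counter between the BucketEntry and the ByteBuff buys.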
[jira] [Commented] (HBASE-21957) Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one
[ https://issues.apache.org/jira/browse/HBASE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815144#comment-16815144 ] Zheng Hu commented on HBASE-21957: -- Attached a simple design document here, because of its complexity. It will also be part of the doc for the parent issue; I will write a doc for the whole off-heap block reading in HBASE-21879. > Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one > - > > Key: HBASE-21957 > URL: https://issues.apache.org/jira/browse/HBASE-21957 > Project: HBase > Issue Type: Sub-task >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Attachments: HBASE-21957-design.png -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21957) Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one
[ https://issues.apache.org/jira/browse/HBASE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-21957: - Attachment: (was: HBASE-21957-design.png) > Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one > - > > Key: HBASE-21957 > URL: https://issues.apache.org/jira/browse/HBASE-21957 > Project: HBase > Issue Type: Sub-task >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22208) Create auth manager and expose it in RS
[ https://issues.apache.org/jira/browse/HBASE-22208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815147#comment-16815147 ] HBase QA commented on HBASE-22208: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 34s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 57s{color} | {color:blue} hbase-server in master has 11 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 31s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 52s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}136m 51s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 38s{color} | {color:green} hbase-rsgroup in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 44s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}182m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/PreCommit-HBASE-Build/63/artifact/patchprocess/Dockerfile | | JIRA Issue | HBASE-22208 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12965543/HBASE-22208.master.001.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux ed132d94faca 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 13 15:00:41 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / a74e1ecad9 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe;
[GitHub] [hbase] Apache9 opened a new pull request #138: HBASE-22207 Fix flakey TestAssignmentManager.testAssignSocketTimeout
Apache9 opened a new pull request #138: HBASE-22207 Fix flakey TestAssignmentManager.testAssignSocketTimeout URL: https://github.com/apache/hbase/pull/138 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-22057) Impose upper-bound on size of ZK ops sent in a single multi()
[ https://issues.apache.org/jira/browse/HBASE-22057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815292#comment-16815292 ] Hudson commented on HBASE-22057: Results for branch branch-1 [build #766 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/766/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/766//General_Nightly_Build_Report/] (x) {color:red}-1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/766//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/766//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Impose upper-bound on size of ZK ops sent in a single multi() > - > > Key: HBASE-22057 > URL: https://issues.apache.org/jira/browse/HBASE-22057 > Project: HBase > Issue Type: Bug >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > Fix For: 3.0.0, 1.6.0, 2.2.0 > > Attachments: HBASE-22057-branch-1.patch, HBASE-22057.001.patch, > HBASE-22057.002.patch, HBASE-22057.003.patch, HBASE-22057.004.patch > > > In {{ZKUtil#multiOrSequential}}, we accept a list of {{ZKUtilOp}}'s to pass > down to the {{ZooKeeper#multi(Iterable)}} method. > One problem with this approach is that we may generate a large list of ZNodes > to mutate in one batch which exceeds the allowable client package length, > specified by {{jute.maxbuffer}}. > This problem can manifest when we have a large number of WALs to replicate, > queued in ZooKeeper, from a disabled peer. When that peer is dropped, the RS > would submit deletes of those queued WALs. 
The RS will see ConnectionLoss for > the resulting {{multi()}} calls it tries to make, because we are sending too > large of a client message (because we're trying to delete too many WALs at > once). The result (at least in branch-1 ish versions) is that the RS aborts > after exceeding the ZK retries (as this operation will never succeed). > A simple fix would be to impose a maximum number of Ops to run in a single > batch inside ZKUtil, and split apart the caller-submitted batch into smaller > chunks. Before we make such a change, I do need to make sure that we don't > have any expectations on atomicity of the operations. I'm not sure what ZK > provides here -- for the above example, splitting up batches of deletes is > not an issue, but there could be issues with batches of creates where we only > apply some. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
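The fix proposed above — imposing a maximum number of ops per multi() — amounts to partitioning the caller-submitted batch. A minimal generic sketch follows; the class and method names and the batch size are illustrative, not HBase's actual code, and a real limit would be derived from jute.maxbuffer:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: split a caller-submitted batch of ops into bounded
// chunks so that no single ZooKeeper multi() request exceeds the client
// packet limit (jute.maxbuffer).
final class BatchSplitter {
    static <T> List<List<T>> partition(List<T> ops, int maxPerBatch) {
        if (maxPerBatch <= 0) {
            throw new IllegalArgumentException("maxPerBatch must be positive");
        }
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < ops.size(); i += maxPerBatch) {
            // copy each sublist so the batches are independent of the source list
            batches.add(new ArrayList<>(ops.subList(i, Math.min(i + maxPerBatch, ops.size()))));
        }
        return batches;
    }
}
```

Each chunk would then be sent to ZooKeeper in its own multi() call. As the description notes, this trades away atomicity across the whole batch: safe for deleting queued WALs, but create-heavy batches would need the atomicity review the issue calls for.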
[jira] [Commented] (HBASE-21965) Fix failed split and merge transactions that have failed to roll back
[ https://issues.apache.org/jira/browse/HBASE-21965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815294#comment-16815294 ] Duo Zhang commented on HBASE-21965: --- https://builds.apache.org/job/HBase-Flaky-Tests/job/master/2969/testReport/junit/org.apache.hadoop.hbase.client/TestHbck/testRecoverSplitAfterMetaUpdated_0__async_false_/ Please take a look at this failure, [~tianjingyun]; I think it is related. > Fix failed split and merge transactions that have failed to roll back > - > > Key: HBASE-21965 > URL: https://issues.apache.org/jira/browse/HBASE-21965 > Project: HBase > Issue Type: Sub-task > Components: hbck2 >Reporter: Jingyun Tian >Assignee: Jingyun Tian >Priority: Major > Fix For: 3.0.0, 2.3.0 > > Attachments: HBASE-21965.master.001.patch, > HBASE-21965.master.002.patch, HBASE-21965.master.003.patch, > HBASE-21965.master.004.patch, HBASE-21965.master.005.patch, > HBASE-21965.master.006.patch, HBASE-21965.master.007.patch, > HBASE-21965.master.007.patch, HBASE-21965.master.008.patch, > HBASE-21965.master.009.patch, HBASE-21965.master.010.patch, > HBASE-21965.master.011.patch, HBASE-21965.master.012.patch, > HBASE-21965.master.013.patch, HBASE-21965.master.014.patch, > HBASE-21965.master.014.patch > > > Make HBCK2 be able to fix failed split and merge transactions that have > failed to roll back. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21965) Fix failed split and merge transactions that have failed to roll back
[ https://issues.apache.org/jira/browse/HBASE-21965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815295#comment-16815295 ] Duo Zhang commented on HBASE-21965: --- And I think this could go into branch-2.2 directly, but for branch-2.1 we use different procedures to assign/unassign regions, so it may not be as straightforward. > Fix failed split and merge transactions that have failed to roll back > - > > Key: HBASE-21965 > URL: https://issues.apache.org/jira/browse/HBASE-21965 > Project: HBase > Issue Type: Sub-task > Components: hbck2 >Reporter: Jingyun Tian >Assignee: Jingyun Tian >Priority: Major > Fix For: 3.0.0, 2.3.0 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HBASE-17564) Fix remaining calls to deprecated methods of Admin and HBaseAdmin
[ https://issues.apache.org/jira/browse/HBASE-17564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-17564 started by Jan Hentschel. - > Fix remaining calls to deprecated methods of Admin and HBaseAdmin > - > > Key: HBASE-17564 > URL: https://issues.apache.org/jira/browse/HBASE-17564 > Project: HBase > Issue Type: Improvement >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Trivial > Fix For: 3.0.0 > > Attachments: HBASE-17564.master.001.patch > > > Fix the remaining calls to deprecated methods of the *Admin* interface and > the *HBaseAdmin* class. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22084) Rename AccessControlLists to PermissionStorage
[ https://issues.apache.org/jira/browse/HBASE-22084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815168#comment-16815168 ] HBase QA commented on HBASE-22084: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 46s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 23 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 11s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 16m 13s{color} | {color:blue} branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 15s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 21s{color} | {color:blue} hbase-server in master has 11 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 44s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 10s{color} | {color:green} root: The patch generated 0 new + 82 unchanged - 16 fixed = 82 total (was 98) {color} | | {color:red}-1{color} | {color:red} rubocop {color} | {color:red} 0m 6s{color} | {color:red} The patch generated 1 new + 59 unchanged - 1 fixed = 60 total (was 60) {color} | | {color:green}+1{color} | {color:green} ruby-lint {color} | {color:green} 0m 1s{color} | {color:green} There were no new ruby-lint issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 5m 6s{color} | {color:blue} patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. 
{color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 31s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 16s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}301m 25s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green}
[jira] [Resolved] (HBASE-22196) Split TestRestartCluster
[ https://issues.apache.org/jira/browse/HBASE-22196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-22196. --- Resolution: Fixed Hadoop Flags: Reviewed Pushed to branch-2.2+. > Split TestRestartCluster > > > Key: HBASE-22196 > URL: https://issues.apache.org/jira/browse/HBASE-22196 > Project: HBase > Issue Type: Sub-task >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0 > > > The logs for later tests are messed up with error messages, like > {noformat} > 2019-04-09 09:41:11,717 WARN [LeaseRenewer:jenkins.hfs.12@localhost:41108] > hdfs.LeaseRenewer(468): Failed to renew lease for > [DFSClient_NONMAPREDUCE_400481390_21] for 55 seconds. Will retry shortly ... > java.net.ConnectException: Call From asf918.gq1.ygridcore.net/67.195.81.138 > to localhost:41108 failed on connection exception: java.net.ConnectException: > Connection refused; For more details see: > http://wiki.apache.org/hadoop/ConnectionRefused > at sun.reflect.GeneratedConstructorAccessor79.newInstance(Unknown > Source) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) > at org.apache.hadoop.ipc.Client.call(Client.java:1480) > at org.apache.hadoop.ipc.Client.call(Client.java:1413) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) > at com.sun.proxy.$Proxy30.renewLease(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:595) > at sun.reflect.GeneratedMethodAccessor154.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at 
java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > at com.sun.proxy.$Proxy33.renewLease(Unknown Source) > at sun.reflect.GeneratedMethodAccessor154.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372) > at com.sun.proxy.$Proxy34.renewLease(Unknown Source) > at sun.reflect.GeneratedMethodAccessor154.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372) > at com.sun.proxy.$Proxy34.renewLease(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:901) > at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:423) > at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:448) > at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71) > at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.net.ConnectException: Connection refused > at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) > at > org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) > at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) > at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) > at > org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615) > at > org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713) > at 
org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376) > at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529) > at org.apache.hadoop.ipc.Client.call(Client.java:1452) > ... 26 more > 2019-04-09 09:41:11,949 WARN [RS_OPEN_REGION-regionserver/asf918:33671-1] > regionserver.HStore(1062): Failed flushing store file, retrying num=8 > java.io.IOException: Filesystem closed > at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:817) > at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2114) > at >
[jira] [Updated] (HBASE-22196) Split TestRestartCluster
[ https://issues.apache.org/jira/browse/HBASE-22196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-22196: -- Component/s: test > Split TestRestartCluster > > > Key: HBASE-22196 > URL: https://issues.apache.org/jira/browse/HBASE-22196 > Project: HBase > Issue Type: Sub-task > Components: test >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0
[GitHub] [hbase] Apache-HBase commented on issue #131: HBASE-19762 Fixed Checkstyle errors in hbase-http
Apache-HBase commented on issue #131: HBASE-19762 Fixed Checkstyle errors in hbase-http URL: https://github.com/apache/hbase/pull/131#issuecomment-482067507 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 0 | Docker mode activated. | | -1 | patch | 7 | https://github.com/apache/hbase/pull/131 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/in-progress/precommit-patchnames for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hbase/pull/131 | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-131/3/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (HBASE-19762) Fix Checkstyle errors in hbase-http
[ https://issues.apache.org/jira/browse/HBASE-19762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Hentschel updated HBASE-19762: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) > Fix Checkstyle errors in hbase-http > --- > > Key: HBASE-19762 > URL: https://issues.apache.org/jira/browse/HBASE-19762 > Project: HBase > Issue Type: Sub-task >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-19762.master.001.patch, > HBASE-19762.master.002.patch, HBASE-19762.master.003.patch, > HBASE-19762.master.004.patch > > > Fix the remaining Checkstyle errors in the *hbase-http* module and enable > Checkstyle to fail on violations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-22208) Create auth manager and expose it in RS
[ https://issues.apache.org/jira/browse/HBASE-22208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-22208: --- Attachment: HBASE-22208.master.002.patch > Create auth manager and expose it in RS > --- > > Key: HBASE-22208 > URL: https://issues.apache.org/jira/browse/HBASE-22208 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: HBASE-22208.master.001.patch, > HBASE-22208.master.002.patch > > > In HBase access control service, auth manager cache all global, namespace and > table permissions, and performs authorization checks for a given user's > assigned permissions. > The auth manager instance is created when master, RS and region load > AccessController. Its cache is refreshed when acl znode changed. > We can create auth manager when master and RS start and expose it in order to > use procedure to refresh its cache rather than watch ZK. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-22084) Rename AccessControlLists to PermissionStorage
[ https://issues.apache.org/jira/browse/HBASE-22084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-22084: --- Attachment: (was: HBASE-22084.master.002.patch) > Rename AccessControlLists to PermissionStorage > -- > > Key: HBASE-22084 > URL: https://issues.apache.org/jira/browse/HBASE-22084 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: HBASE-22084.branch-2.2.001.patch, > HBASE-22084.master.001.patch, HBASE-22084.master.002.patch > > > AccessControlLists is a utility class which deal with get/put/delete > operations with hbase acl table. The name of the class is confusing, so shall > we rename it to PermissionStorage? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-22084) Rename AccessControlLists to PermissionStorage
[ https://issues.apache.org/jira/browse/HBASE-22084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-22084: --- Attachment: HBASE-22084.master.002.patch > Rename AccessControlLists to PermissionStorage > -- > > Key: HBASE-22084 > URL: https://issues.apache.org/jira/browse/HBASE-22084 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: HBASE-22084.branch-2.2.001.patch, > HBASE-22084.master.001.patch, HBASE-22084.master.002.patch > > > AccessControlLists is a utility class which deal with get/put/delete > operations with hbase acl table. The name of the class is confusing, so shall > we rename it to PermissionStorage? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] [hbase] HorizonNet merged pull request #129: HBASE-22189 Removed remaining usage of StoreFile.getModificationTimeStamp
HorizonNet merged pull request #129: HBASE-22189 Removed remaining usage of StoreFile.getModificationTimeStamp URL: https://github.com/apache/hbase/pull/129
[jira] [Commented] (HBASE-22072) High read/write intensive regions may cause long crash recovery
[ https://issues.apache.org/jira/browse/HBASE-22072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815297#comment-16815297 ] ramkrishna.s.vasudevan commented on HBASE-22072: Let's see how to take this forward. Will see how we can write a UT for this. > High read/write intensive regions may cause long crash recovery > --- > > Key: HBASE-22072 > URL: https://issues.apache.org/jira/browse/HBASE-22072 > Project: HBase > Issue Type: Bug > Components: Performance, Recovery >Affects Versions: 2.1.2 >Reporter: Pavel >Priority: Major > > Compaction of a region under high read load may leave compacted files undeleted because of existing scan references: > INFO org.apache.hadoop.hbase.regionserver.HStore - Can't archive compacted file hdfs://hdfs-ha/hbase... because of either isCompactedAway=true or file has reference, isReferencedInReads=true, refCount=1, skipping for now > If the region is also under high write load, this happens quite often, and the region may have few storefiles but tons of undeleted compacted hdfs files. > The region keeps all those files (in my case thousands) until the graceful region-closing procedure, which ignores existing references and drops the obsolete files. > This works fine, aside from consuming some extra hdfs space, but only in the case of a normal region close. If the region server crashes, the new region server responsible for that overfilled region reads the hdfs folder and tries to deal with all the undeleted files, producing tons of storefiles and compaction tasks and consuming an abnormal amount of memory, which may lead to an OutOfMemory exception and further region server crashes. This stops writes to the region because the number of storefiles reaches the *hbase.hstore.blockingStoreFiles* limit, forces high GC duty, and may take hours to compact all the files down to a working set. > A workaround is to periodically check the file counts of the hdfs folders and force a region assign for the ones with too many files.
> It would be nice if the regionserver had a setting similar to hbase.hstore.blockingStoreFiles that invokes an attempt to drop undeleted compacted files when the number of files reaches this setting. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
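The threshold the reporter suggests could be sketched roughly as below. This is a hypothetical illustration, not HBase's actual HStore code; the class name, setting analogue, and methods are all invented here:

```java
import java.util.HashSet;
import java.util.Set;

public class CompactedFilesMonitor {
    // Hypothetical analogue of hbase.hstore.blockingStoreFiles, applied
    // to compacted-away files that are still waiting for archival.
    private final int blockingCompactedFiles;
    private final Set<String> compactedAwayFiles = new HashSet<>();

    public CompactedFilesMonitor(int blockingCompactedFiles) {
        this.blockingCompactedFiles = blockingCompactedFiles;
    }

    // Called when a compaction finishes and its input files become obsolete.
    public void onCompactedAway(String path) {
        compactedAwayFiles.add(path);
    }

    // Called when the archiver finally manages to drop a file.
    public void onArchived(String path) {
        compactedAwayFiles.remove(path);
    }

    // When true, the region should retry the archive attempt now instead
    // of waiting for a graceful close (or a crash) to clean up.
    public boolean shouldForceCleanup() {
        return compactedAwayFiles.size() >= blockingCompactedFiles;
    }
}
```

The point of the sketch is only the trigger condition: once the count of unarchived compacted files crosses the configured limit, the regionserver would proactively retry dropping them rather than letting thousands accumulate until region close.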
[GitHub] [hbase] HorizonNet commented on issue #131: HBASE-19762 Fixed Checkstyle errors in hbase-http
HorizonNet commented on issue #131: HBASE-19762 Fixed Checkstyle errors in hbase-http URL: https://github.com/apache/hbase/pull/131#issuecomment-482065358 @Apache9 Currently trying to figure out GitHub's merge capabilities. Normally we use `Rebase and merge`, but this time we have two commits. Should I use `Squash and merge` instead, or do we have another approach for handling such cases?
[GitHub] [hbase] HorizonNet commented on issue #125: HBASE-22187 Removed remaining usages of ClusterConnection.clearRegionCache
HorizonNet commented on issue #125: HBASE-22187 Removed remaining usages of ClusterConnection.clearRegionCache URL: https://github.com/apache/hbase/pull/125#issuecomment-482074758 Test failures seem to be unrelated to the actual change, because the JVM seems to have crashed.
[jira] [Commented] (HBASE-22104) Remove Hadoop 2.7 from next minor releases
[ https://issues.apache.org/jira/browse/HBASE-22104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815365#comment-16815365 ] Peter Somogyi commented on HBASE-22104: --- There are some classes that check which Hadoop version is used with reflection. One example: [https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Threads.java#L281-L330] > Remove Hadoop 2.7 from next minor releases > -- > > Key: HBASE-22104 > URL: https://issues.apache.org/jira/browse/HBASE-22104 > Project: HBase > Issue Type: Task > Components: hadoop2 >Affects Versions: 3.0.0, 1.6.0, 2.3.0 >Reporter: Sean Busbey >Assignee: Josh Elser >Priority: Critical > > Hadoop 2.7 is now EOM ([common-dev@hadoop "\[DISCUSS\] branch 2.7 > EoL"|https://lists.apache.org/thread.html/d1f98c2c386f2f4b980489b543db3d0bb7bdb94ea12f8fc5a90f527b@%3Ccommon-dev.hadoop.apache.org%3E]) > and has an active licensing issue (HADOOP-13794) > Let's go ahead and axe it from the next minor releases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
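The Threads.java link above shows the pattern being discussed: HBase probing at runtime, via reflection, whether the Hadoop on the classpath provides a given API. A minimal, self-contained sketch of that general pattern (the class and method names here are hypothetical, not the actual Threads.java code):

```java
public class HadoopCapabilityProbe {
    // Returns true if clazz exposes a public method with the given name
    // and parameter types. Version-dependent Hadoop APIs can be detected
    // this way without a compile-time dependency on a specific release.
    public static boolean hasMethod(Class<?> clazz, String name, Class<?>... paramTypes) {
        try {
            clazz.getMethod(name, paramTypes);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }
}
```

Dropping Hadoop 2.7 from the next minor releases would let reflective branches like this be simplified or removed where the probed API is guaranteed to exist.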
[GitHub] [hbase] HorizonNet commented on issue #137: HBASE-22203 Reformatted DemoClient.java
HorizonNet commented on issue #137: HBASE-22203 Reformatted DemoClient.java URL: https://github.com/apache/hbase/pull/137#issuecomment-482040445 @Apache9 Will do it later today. Didn't do it yet, because I was also the author of the PRs.
[GitHub] [hbase] Apache9 commented on issue #131: HBASE-19762 Fixed Checkstyle errors in hbase-http
Apache9 commented on issue #131: HBASE-19762 Fixed Checkstyle errors in hbase-http URL: https://github.com/apache/hbase/pull/131#issuecomment-482066010 Please do a force push to your own branch to merge the two commits into one first, and then use 'Rebase and merge'. There is a discussion thread on the dev list about disabling squash and merge, so we'd better not use it for now.
[jira] [Commented] (HBASE-22084) Rename AccessControlLists to PermissionStorage
[ https://issues.apache.org/jira/browse/HBASE-22084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815230#comment-16815230 ] Guanghao Zhang commented on HBASE-22084: +1 > Rename AccessControlLists to PermissionStorage > -- > > Key: HBASE-22084 > URL: https://issues.apache.org/jira/browse/HBASE-22084 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: HBASE-22084.branch-2.2.001.patch, > HBASE-22084.master.001.patch, HBASE-22084.master.002.patch > > > AccessControlLists is a utility class which deal with get/put/delete > operations with hbase acl table. The name of the class is confusing, so shall > we rename it to PermissionStorage? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-22189) Remove usage of StoreFile.getModificationTimeStamp
[ https://issues.apache.org/jira/browse/HBASE-22189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Hentschel resolved HBASE-22189. --- Resolution: Fixed Fix Version/s: 2.2.1 2.1.5 2.0.6 2.3.0 3.0.0 > Remove usage of StoreFile.getModificationTimeStamp > -- > > Key: HBASE-22189 > URL: https://issues.apache.org/jira/browse/HBASE-22189 > Project: HBase > Issue Type: Task >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Trivial > Fix For: 3.0.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1 > > > The method StoreFile.getModificationTimeStamp() was deprecated, but is still > used. The remaining usages should be moved to > StoreFile.getModificationTimestamp(). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22200) WALSplitter.hasRecoveredEdits should use same FS instance from WAL region dir
[ https://issues.apache.org/jira/browse/HBASE-22200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815191#comment-16815191 ] Wellington Chevreuil commented on HBASE-22200: -- Attached third version for branch-2.1, addressing checkstyle issue from previous one. > WALSplitter.hasRecoveredEdits should use same FS instance from WAL region dir > - > > Key: HBASE-22200 > URL: https://issues.apache.org/jira/browse/HBASE-22200 > Project: HBase > Issue Type: Bug >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Labels: S3, WAL > Attachments: HBASE-22200-branch-2.1-001.patch, > HBASE-22200-branch-2.1-002.patch, HBASE-22200-branch-2.1-003.patch, > HBASE-22200-master-001.patch, HBASE-22200-master-002.patch > > > *WALSplitter.hasRecoveredEdits* should use same FS instance from WAL region > dir when checking for recovered.edits files, instead of taking FS instance as > additional method parameter. When specifying different file systems for *wal > dir* and *root dir*, *WALSplitter.hasRecoveredEdits* current implementation > will crash or give wrong results. As of now, it's being used indirectly by > *SplitTableRegionProcedure*. 
When running tests with *WAL dir* on HDFS and > *root dir* on S3, for example, noticed region split failing with below error: > {noformat} > 2019-04-08 13:53:58,064 ERROR > org.apache.hadoop.hbase.procedure2.ProcedureExecutor: CODE-BUG: Uncaught > runtime exception: pid=98, > state=RUNNABLE:SPLIT_TABLE_REGIONS_CHECK_CLOSED_REGIONS, locked=true; > SplitTableRegionProcedure table=test-tbl, > parent=4c5db01611e97e3abbe02e781e867212, > daughterA=28a0a5e4ef7618899f6bd6dfb5335fe7, > daughterB=05fa26feaf03ebf9e87e099cbd1eabac > java.lang.IllegalArgumentException: Path > hdfs://host-1.example.com:8020/wal_dir/default/test-tbl/4c5db01611e97e3abbe02e781e867212/recovered.edits > scheme must be s3a > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:115) > at > org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.checkPath(DynamoDBMetadataStore.java:1127) > at > org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.get(DynamoDBMetadataStore.java:437) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2110) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2088) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442) > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1668) > at > org.apache.hadoop.hbase.wal.WALSplitter.getSplitEditFilesSorted(WALSplitter.java:576) > at > org.apache.hadoop.hbase.wal.WALSplitter.hasRecoveredEdits(WALSplitter.java:558) > at > org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.hasRecoveredEdits(SplitTableRegionProcedure.java:148) > at > org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.executeFromState(SplitTableRegionProcedure.java:255) > {noformat} > Since *WALSplitter.hasRecoveredEdits* already resolves the proper WAL dir for > the region, we can simply re-use FS instance from the path instance for the > WAL dir region, when searching for recovered.edits. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
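The fix described above amounts to resolving the filesystem from the WAL path itself (in Hadoop terms, `path.getFileSystem(conf)`) rather than trusting a caller-supplied instance whose scheme may differ. A toy, Hadoop-free sketch of why that matters; the map below merely stands in for Hadoop's per-scheme filesystem lookup, and all names are illustrative:

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class FsResolver {
    // Stand-in for FileSystem instances keyed by URI scheme, mimicking
    // how Hadoop picks the implementation for a given path.
    private final Map<String, String> fsByScheme = new HashMap<>();

    public void register(String scheme, String fsName) {
        fsByScheme.put(scheme, fsName);
    }

    // Resolve the filesystem from the path's own scheme. Handing an
    // unrelated FS instance to the check (e.g. the s3a root-dir FS for
    // an hdfs:// WAL path) is exactly the mismatch that produced the
    // "scheme must be s3a" failure in the stack trace above.
    public String forPath(String path) {
        return fsByScheme.get(URI.create(path).getScheme());
    }
}
```

With separate *wal dir* and *root dir* filesystems registered, resolving by the path's scheme always yields the WAL-dir filesystem for recovered.edits lookups.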
[jira] [Updated] (HBASE-22200) WALSplitter.hasRecoveredEdits should use same FS instance from WAL region dir
[ https://issues.apache.org/jira/browse/HBASE-22200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-22200: - Attachment: HBASE-22200-branch-2.1-003.patch > WALSplitter.hasRecoveredEdits should use same FS instance from WAL region dir > - > > Key: HBASE-22200 > URL: https://issues.apache.org/jira/browse/HBASE-22200 > Project: HBase > Issue Type: Bug >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Labels: S3, WAL > Attachments: HBASE-22200-branch-2.1-001.patch, > HBASE-22200-branch-2.1-002.patch, HBASE-22200-branch-2.1-003.patch, > HBASE-22200-master-001.patch, HBASE-22200-master-002.patch > > > *WALSplitter.hasRecoveredEdits* should use same FS instance from WAL region > dir when checking for recovered.edits files, instead of taking FS instance as > additional method parameter. When specifying different file systems for *wal > dir* and *root dir*, *WALSplitter.hasRecoveredEdits* current implementation > will crash or give wrong results. As of now, it's being used indirectly by > *SplitTableRegionProcedure*. 
When running tests with *WAL dir* on HDFS and > *root dir* on S3, for example, noticed region split failing with below error: > {noformat} > 2019-04-08 13:53:58,064 ERROR > org.apache.hadoop.hbase.procedure2.ProcedureExecutor: CODE-BUG: Uncaught > runtime exception: pid=98, > state=RUNNABLE:SPLIT_TABLE_REGIONS_CHECK_CLOSED_REGIONS, locked=true; > SplitTableRegionProcedure table=test-tbl, > parent=4c5db01611e97e3abbe02e781e867212, > daughterA=28a0a5e4ef7618899f6bd6dfb5335fe7, > daughterB=05fa26feaf03ebf9e87e099cbd1eabac > java.lang.IllegalArgumentException: Path > hdfs://host-1.example.com:8020/wal_dir/default/test-tbl/4c5db01611e97e3abbe02e781e867212/recovered.edits > scheme must be s3a > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:115) > at > org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.checkPath(DynamoDBMetadataStore.java:1127) > at > org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.get(DynamoDBMetadataStore.java:437) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2110) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2088) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442) > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1668) > at > org.apache.hadoop.hbase.wal.WALSplitter.getSplitEditFilesSorted(WALSplitter.java:576) > at > org.apache.hadoop.hbase.wal.WALSplitter.hasRecoveredEdits(WALSplitter.java:558) > at > org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.hasRecoveredEdits(SplitTableRegionProcedure.java:148) > at > org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.executeFromState(SplitTableRegionProcedure.java:255) > {noformat} > Since *WALSplitter.hasRecoveredEdits* already resolves the proper WAL dir for > the region, we can simply re-use FS instance from the path instance for the > WAL dir region, when searching for recovered.edits. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] [hbase] Apache9 commented on issue #126: HBASE-21718 Implement Admin based on AsyncAdmin
Apache9 commented on issue #126: HBASE-21718 Implement Admin based on AsyncAdmin URL: https://github.com/apache/hbase/pull/126#issuecomment-482024794 Any other concerns? @openinx
[jira] [Resolved] (HBASE-22198) Fix flakey TestAsyncTableGetMultiThreaded
[ https://issues.apache.org/jira/browse/HBASE-22198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-22198. --- Resolution: Fixed Hadoop Flags: Reviewed Pushed to branch-2.1+. > Fix flakey TestAsyncTableGetMultiThreaded > - > > Key: HBASE-22198 > URL: https://issues.apache.org/jira/browse/HBASE-22198 > Project: HBase > Issue Type: Bug >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.5 > > > https://builds.apache.org/job/HBase-Flaky-Tests/job/master/2959/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncTableGetMultiThreaded/test/ > The error is thrown from an admin method, where we do not have any retries if > the region is not online yet. Should be a test issue, let me fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-22198) Fix flakey TestAsyncTableGetMultiThreaded
[ https://issues.apache.org/jira/browse/HBASE-22198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-22198: -- Component/s: test > Fix flakey TestAsyncTableGetMultiThreaded > - > > Key: HBASE-22198 > URL: https://issues.apache.org/jira/browse/HBASE-22198 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.5 > > > https://builds.apache.org/job/HBase-Flaky-Tests/job/master/2959/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncTableGetMultiThreaded/test/ > The error is thrown from an admin method, where we do not have any retries if > the region is not online yet. Should be a test issue, let me fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] [hbase] HorizonNet commented on issue #137: HBASE-22203 Reformatted DemoClient.java
HorizonNet commented on issue #137: HBASE-22203 Reformatted DemoClient.java URL: https://github.com/apache/hbase/pull/137#issuecomment-482041449 Ok, will do that. It's been a while since I did that, and I didn't know if something had changed about the process with GitHub (haven't seen it on the mailing list).
[jira] [Commented] (HBASE-21957) Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one
[ https://issues.apache.org/jira/browse/HBASE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815309#comment-16815309 ] Zheng Hu commented on HBASE-21957: -- Attached an initial patch (v1) to show the basic design. NOT the final patch; it still needs more UTs and also needs HBASE-22159 to get merged first. [~anoop.hbase], [~ram_krish], please take a look at HBASE-22159; we need to merge that one first, then we can put this one on RB and trigger the Hadoop QA. Thanks. > Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one > - > > Key: HBASE-21957 > URL: https://issues.apache.org/jira/browse/HBASE-21957 > Project: HBase > Issue Type: Sub-task >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Attachments: HBASE-21957.HBASE-21879.v1.patch > > > After HBASE-12295, we have block with MemoryType.SHARED or > MemoryType.EXCLUSIVE, the block in offheap BucketCache will be shared, and > have an reference count to track its life cycle. If no rpc reference to the > shared block, then the block can be evicted. > while after the HBASE-21916, we introduced an refcount for ByteBuff, then I > think we can unify the two into one. tried to fix this when preparing patch > for HBASE-21879, but seems can be different sub-task, and it won't affect the > main logic of HBASE-21879, so create a seperate one. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
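The core of the proposed design, per the attached picture, is that every HFileBlock mapped onto the same BucketEntry shares one reference counter with the entry itself. That idea can be illustrated with a single atomic counter; this is only a minimal sketch under that assumption, not the attached patch, and the class name is hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedRefCnt {
    // Starts at 1, held by the bucket entry itself; every HFileBlock
    // handed out (e.g. to an RPC reader) retains the same counter.
    private final AtomicInteger cnt = new AtomicInteger(1);

    public void retain() {
        cnt.incrementAndGet();
    }

    // Returns true when the count hits zero, i.e. no reader still holds
    // the block and the bucket entry's memory may be evicted/reclaimed.
    public boolean release() {
        return cnt.decrementAndGet() == 0;
    }

    public int refCnt() {
        return cnt.get();
    }
}
```

Unifying the BucketEntry refCount and the ByteBuff refCount into one such counter means there is a single source of truth for "can this offheap memory be evicted", instead of two counters that must be kept in sync.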
[GitHub] [hbase] HorizonNet merged pull request #131: HBASE-19762 Fixed Checkstyle errors in hbase-http
HorizonNet merged pull request #131: HBASE-19762 Fixed Checkstyle errors in hbase-http URL: https://github.com/apache/hbase/pull/131
[GitHub] [hbase] HorizonNet commented on issue #120: HBASE-20494 Updated the version of metrics-core to 3.2.6
HorizonNet commented on issue #120: HBASE-20494 Updated the version of metrics-core to 3.2.6 URL: https://github.com/apache/hbase/pull/120#issuecomment-482073006 Test failures (timeouts) seem to be unrelated to the actual changes.
[GitHub] [hbase] Apache9 commented on issue #137: HBASE-22203 Reformatted DemoClient.java
Apache9 commented on issue #137: HBASE-22203 Reformatted DemoClient.java URL: https://github.com/apache/hbase/pull/137#issuecomment-482041009 You just need another committer to approve your changes, but you'd better merge it by yourself :)
[jira] [Created] (HBASE-22209) sdf
leonjoe created HBASE-22209: --- Summary: sdf Key: HBASE-22209 URL: https://issues.apache.org/jira/browse/HBASE-22209 Project: HBase Issue Type: Bug Components: Admin Affects Versions: 2.1.4 Reporter: leonjoe Fix For: hbase-6055 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22084) Rename AccessControlLists to PermissionStorage
[ https://issues.apache.org/jira/browse/HBASE-22084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815179#comment-16815179 ] HBase QA commented on HBASE-22084: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 48s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 23 new or modified test files. {color} | || || || || {color:brown} branch-2.2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 58s{color} | {color:green} branch-2.2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s{color} | {color:green} branch-2.2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 51s{color} | {color:green} branch-2.2 passed {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 15m 48s{color} | {color:blue} branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 11s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 6s{color} | {color:blue} hbase-server in branch-2.2 has 11 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 59s{color} | {color:green} branch-2.2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s{color} | {color:green} hbase-server: The patch generated 0 new + 83 unchanged - 15 fixed = 83 total (was 98) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} hbase-mapreduce: The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} The patch passed checkstyle in hbase-rsgroup {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} The patch passed checkstyle in hbase-shell {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} The patch passed checkstyle in hbase-endpoint {color} | | {color:green}+1{color} | {color:green} checkstyle {color} 
| {color:green} 1m 57s{color} | {color:green} root: The patch generated 0 new + 83 unchanged - 16 fixed = 83 total (was 99) {color} | | {color:red}-1{color} | {color:red} rubocop {color} | {color:red} 0m 5s{color} | {color:red} The patch generated 1 new + 57 unchanged - 1 fixed = 58 total (was 58) {color} | | {color:green}+1{color} | {color:green} ruby-lint {color} | {color:green} 0m 2s{color} | {color:green} There were no new ruby-lint issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 5m 37s{color} | {color:blue} patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m
[jira] [Updated] (HBASE-21957) Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one
[ https://issues.apache.org/jira/browse/HBASE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-21957: - Attachment: (was: HBASE-22159.HBASE-21879.v6.patch) > Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one > - > > Key: HBASE-21957 > URL: https://issues.apache.org/jira/browse/HBASE-21957 > Project: HBase > Issue Type: Sub-task >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > > After HBASE-12295, we have block with MemoryType.SHARED or > MemoryType.EXCLUSIVE, the block in offheap BucketCache will be shared, and > have an reference count to track its life cycle. If no rpc reference to the > shared block, then the block can be evicted. > while after the HBASE-21916, we introduced an refcount for ByteBuff, then I > think we can unify the two into one. tried to fix this when preparing patch > for HBASE-21879, but seems can be different sub-task, and it won't affect the > main logic of HBASE-21879, so create a seperate one. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21957) Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one
[ https://issues.apache.org/jira/browse/HBASE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-21957: - Attachment: HBASE-21957.HBASE-21879.v1.patch > Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one > - > > Key: HBASE-21957 > URL: https://issues.apache.org/jira/browse/HBASE-21957 > Project: HBase > Issue Type: Sub-task >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Attachments: HBASE-21957.HBASE-21879.v1.patch > > > After HBASE-12295, we have block with MemoryType.SHARED or > MemoryType.EXCLUSIVE, the block in offheap BucketCache will be shared, and > have an reference count to track its life cycle. If no rpc reference to the > shared block, then the block can be evicted. > while after the HBASE-21916, we introduced an refcount for ByteBuff, then I > think we can unify the two into one. tried to fix this when preparing patch > for HBASE-21879, but seems can be different sub-task, and it won't affect the > main logic of HBASE-21879, so create a seperate one. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21957) Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one
[ https://issues.apache.org/jira/browse/HBASE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-21957: - Attachment: HBASE-22159.HBASE-21879.v6.patch > Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one > - > > Key: HBASE-21957 > URL: https://issues.apache.org/jira/browse/HBASE-21957 > Project: HBase > Issue Type: Sub-task >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > > After HBASE-12295, blocks can be MemoryType.SHARED or > MemoryType.EXCLUSIVE; a block in the offheap BucketCache is shared and > has a reference count to track its life cycle. If no RPC references the > shared block, the block can be evicted. > After HBASE-21916, we introduced a refcount for ByteBuff as well, so I > think we can unify the two into one. I tried to fix this while preparing the > patch for HBASE-21879, but it can be a separate sub-task that won't affect the > main logic of HBASE-21879, so I created a separate issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
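The unification proposed in HBASE-21957 above can be sketched with a minimal reference counter. This is a hypothetical illustration only, not the actual HBase API: the class name `RefCnt` and the `Runnable` deallocator are assumptions made for the example. The idea is that the BucketEntry and every HFileBlock mapped to it would share one such counter, so the backing offheap memory is freed exactly once, after both the cache (on eviction) and all RPC readers have released it.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a single shared reference counter. The cache holds
// the initial reference; each RPC reader retains before reading and releases
// after; eviction releases the cache's own reference. Memory is deallocated
// only when the count drops to zero.
class RefCnt {
    private final AtomicInteger count = new AtomicInteger(1); // cache's reference
    private final Runnable deallocator;

    RefCnt(Runnable deallocator) {
        this.deallocator = deallocator;
    }

    void retain() {
        count.incrementAndGet();
    }

    /** Returns true when this call released the last reference and freed memory. */
    boolean release() {
        if (count.decrementAndGet() == 0) {
            deallocator.run();
            return true;
        }
        return false;
    }

    int refCnt() {
        return count.get();
    }
}
```

Under this sketch, an eviction racing with an in-flight RPC is safe: whichever side releases last triggers the deallocator, which matches the "all HFileBlocks related to a bucket entry share the same refCnt" design described in the issue.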
[jira] [Commented] (HBASE-22200) WALSplitter.hasRecoveredEdits should use same FS instance from WAL region dir
[ https://issues.apache.org/jira/browse/HBASE-22200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815328#comment-16815328 ] HBase QA commented on HBASE-22200: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.1 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 8s{color} | {color:green} branch-2.1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} branch-2.1 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 9s{color} | {color:green} branch-2.1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 56s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s{color} | {color:green} branch-2.1 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} branch-2.1 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 58s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 2s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}126m 53s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}161m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/PreCommit-HBASE-Build/64/artifact/patchprocess/Dockerfile | | JIRA Issue | HBASE-22200 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12965561/HBASE-22200-branch-2.1-003.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux b8c6f1125e31 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.1 / 4ceffc83fe | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_181 | | findbugs | v3.1.11 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/64/testReport/ | | Max. process+thread count | 5002 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/64/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. >
[jira] [Resolved] (HBASE-22209) sdf
[ https://issues.apache.org/jira/browse/HBASE-22209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari resolved HBASE-22209. - Resolution: Invalid > sdf > --- > > Key: HBASE-22209 > URL: https://issues.apache.org/jira/browse/HBASE-22209 > Project: HBase > Issue Type: Bug > Components: Admin >Affects Versions: 2.1.4 >Reporter: leonjoe >Priority: Major > Fix For: hbase-6055 > > Original Estimate: 504h > Remaining Estimate: 504h > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-17564) Fix remaining calls to deprecated methods of Admin and HBaseAdmin
[ https://issues.apache.org/jira/browse/HBASE-17564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Hentschel resolved HBASE-17564. --- Resolution: Duplicate Fix Version/s: (was: 3.0.0) This was already solved as part of HBASE-18428. > Fix remaining calls to deprecated methods of Admin and HBaseAdmin > - > > Key: HBASE-17564 > URL: https://issues.apache.org/jira/browse/HBASE-17564 > Project: HBase > Issue Type: Improvement >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Trivial > Attachments: HBASE-17564.master.001.patch > > > Fix the remaining calls to deprecated methods of the *Admin* interface and > the *HBaseAdmin* class. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22144) MultiRowRangeFilter does not work with reversed scans
[ https://issues.apache.org/jira/browse/HBASE-22144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815518#comment-16815518 ] Josh Elser commented on HBASE-22144: Ugh, this doesn't come back to branch-2.0 or branch-2.1 due to HBASE-19008 and HBASE-21129 never being backported to these branches. I'm not sure what the lesser headache would be: making sure those changes don't break public API guarantees or re-implementing this fix. > MultiRowRangeFilter does not work with reversed scans > - > > Key: HBASE-22144 > URL: https://issues.apache.org/jira/browse/HBASE-22144 > Project: HBase > Issue Type: Bug >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > Attachments: HBASE-22144.001.patch, HBASE-22144.002.patch, > HBASE-22144.002.patch > > > It appears that MultiRowRangeFilter was never written to function with > reverse scans. There is too much logic that operates with the assumption that > we are always moving "forward" through increasing ranges. It needs to be > rewritten to "traverse" forward or backward, depending on the direction of > the scan being used. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
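The forward-only assumption described in HBASE-22144 above can be illustrated with a small sketch. This is not the actual MultiRowRangeFilter code; the class `RangeCursor` and its `String[]{start, stop}` range representation are hypothetical. A cursor over sorted, non-overlapping [start, stop) ranges must advance in the same direction the scan delivers rows: forward scans see increasing row keys, reversed scans see decreasing ones.

```java
import java.util.List;

// Hypothetical illustration of direction-aware range traversal. Ranges are
// sorted {startInclusive, stopExclusive} pairs. A forward scan walks the
// cursor left to right; a reversed scan starts at the last range and walks
// right to left, which is the case the original forward-only logic missed.
class RangeCursor {
    private final List<String[]> ranges;
    private final boolean reversed;
    private int index;

    RangeCursor(List<String[]> ranges, boolean reversed) {
        this.ranges = ranges;
        this.reversed = reversed;
        this.index = reversed ? ranges.size() - 1 : 0;
    }

    /** Moves the cursor toward the range that could contain row, honoring scan direction. */
    boolean contains(String row) {
        while (index >= 0 && index < ranges.size()) {
            String[] r = ranges.get(index);
            if (row.compareTo(r[0]) < 0) {
                // Row sorts before this range's start.
                if (!reversed) return false; // forward: row fell in a gap
                index--;                      // reversed: try an earlier range
            } else if (row.compareTo(r[1]) >= 0) {
                // Row sorts at or past this range's stop.
                if (reversed) return false;   // reversed: row fell in a gap
                index++;                      // forward: try a later range
            } else {
                return true;                  // start <= row < stop
            }
        }
        return false;
    }
}
```

Feeding decreasing rows into a cursor that only ever increments `index` would skip every earlier range, which is the failure mode the issue describes.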
[jira] [Commented] (HBASE-22189) Remove usage of StoreFile.getModificationTimeStamp
[ https://issues.apache.org/jira/browse/HBASE-22189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815519#comment-16815519 ] Hudson commented on HBASE-22189: Results for branch master [build #925 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/925/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/925//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/925//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/925//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Remove usage of StoreFile.getModificationTimeStamp > -- > > Key: HBASE-22189 > URL: https://issues.apache.org/jira/browse/HBASE-22189 > Project: HBase > Issue Type: Task >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Trivial > Fix For: 3.0.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1 > > > The method StoreFile.getModificationTimeStamp() was deprecated, but is still > used. The remaining usages should be moved to > StoreFile.getModificationTimestamp(). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22084) Rename AccessControlLists to PermissionStorage
[ https://issues.apache.org/jira/browse/HBASE-22084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815419#comment-16815419 ] HBase QA commented on HBASE-22084: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 23 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 48s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 30s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 5m 56s{color} | {color:blue} branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 43s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 59s{color} | {color:blue} hbase-server in master has 11 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 59s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 22s{color} | {color:green} root: The patch generated 0 new + 82 unchanged - 16 fixed = 82 total (was 98) {color} | | {color:red}-1{color} | {color:red} rubocop {color} | {color:red} 0m 5s{color} | {color:red} The patch generated 1 new + 59 unchanged - 1 fixed = 60 total (was 60) {color} | | {color:green}+1{color} | {color:green} ruby-lint {color} | {color:green} 0m 2s{color} | {color:green} There were no new ruby-lint issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 5m 52s{color} | {color:blue} patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. 
{color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 36s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 57s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}221m 7s{color} | {color:green} root in the patch passed. {color} | | {color:green}+1{color} |
[jira] [Commented] (HBASE-22189) Remove usage of StoreFile.getModificationTimeStamp
[ https://issues.apache.org/jira/browse/HBASE-22189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815460#comment-16815460 ] Hudson commented on HBASE-22189: Results for branch branch-2 [build #1813 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1813/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1813//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1813//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1813//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Remove usage of StoreFile.getModificationTimeStamp > -- > > Key: HBASE-22189 > URL: https://issues.apache.org/jira/browse/HBASE-22189 > Project: HBase > Issue Type: Task >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Trivial > Fix For: 3.0.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1 > > > The method StoreFile.getModificationTimeStamp() was deprecated, but is still > used. The remaining usages should be moved to > StoreFile.getModificationTimestamp(). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22208) Create auth manager and expose it in RS
[ https://issues.apache.org/jira/browse/HBASE-22208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815456#comment-16815456 ] HBase QA commented on HBASE-22208: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 26s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 47s{color} | {color:blue} hbase-server in master has 11 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 30s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 19s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}140m 15s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 48s{color} | {color:green} hbase-rsgroup in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}186m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/PreCommit-HBASE-Build/66/artifact/patchprocess/Dockerfile | | JIRA Issue | HBASE-22208 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12965584/HBASE-22208.master.002.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 49344e9830fb 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / fc6e3fc9d7 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
[jira] [Commented] (HBASE-22196) Split TestRestartCluster
[ https://issues.apache.org/jira/browse/HBASE-22196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815458#comment-16815458 ] Hudson commented on HBASE-22196: Results for branch branch-2 [build #1813 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1813/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1813//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1813//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1813//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Split TestRestartCluster > > > Key: HBASE-22196 > URL: https://issues.apache.org/jira/browse/HBASE-22196 > Project: HBase > Issue Type: Sub-task > Components: test >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0 > > > The logs for later tests are messed up with error messages, like > {noformat} > 2019-04-09 09:41:11,717 WARN [LeaseRenewer:jenkins.hfs.12@localhost:41108] > hdfs.LeaseRenewer(468): Failed to renew lease for > [DFSClient_NONMAPREDUCE_400481390_21] for 55 seconds. Will retry shortly ... 
> java.net.ConnectException: Call From asf918.gq1.ygridcore.net/67.195.81.138 > to localhost:41108 failed on connection exception: java.net.ConnectException: > Connection refused; For more details see: > http://wiki.apache.org/hadoop/ConnectionRefused > at sun.reflect.GeneratedConstructorAccessor79.newInstance(Unknown > Source) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) > at org.apache.hadoop.ipc.Client.call(Client.java:1480) > at org.apache.hadoop.ipc.Client.call(Client.java:1413) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) > at com.sun.proxy.$Proxy30.renewLease(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:595) > at sun.reflect.GeneratedMethodAccessor154.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > at com.sun.proxy.$Proxy33.renewLease(Unknown Source) > at sun.reflect.GeneratedMethodAccessor154.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372) > at com.sun.proxy.$Proxy34.renewLease(Unknown Source) > at sun.reflect.GeneratedMethodAccessor154.invoke(Unknown Source) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372) > at com.sun.proxy.$Proxy34.renewLease(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:901) > at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:423) > at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:448) > at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71) > at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.net.ConnectException: Connection refused > at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) > at > org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) > at
[jira] [Commented] (HBASE-22198) Fix flakey TestAsyncTableGetMultiThreaded
[ https://issues.apache.org/jira/browse/HBASE-22198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815459#comment-16815459 ] Hudson commented on HBASE-22198: Results for branch branch-2 [build #1813 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1813/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1813//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1813//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1813//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Fix flakey TestAsyncTableGetMultiThreaded > - > > Key: HBASE-22198 > URL: https://issues.apache.org/jira/browse/HBASE-22198 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.5 > > > https://builds.apache.org/job/HBase-Flaky-Tests/job/master/2959/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncTableGetMultiThreaded/test/ > The error is thrown from an admin method, where we do not have any retries if > the region is not online yet. Should be a test issue, let me fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22189) Remove usage of StoreFile.getModificationTimeStamp
[ https://issues.apache.org/jira/browse/HBASE-22189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815494#comment-16815494 ] Hudson commented on HBASE-22189: Results for branch branch-2.1 [build #1044 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1044/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1044//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1044//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1044//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Remove usage of StoreFile.getModificationTimeStamp > -- > > Key: HBASE-22189 > URL: https://issues.apache.org/jira/browse/HBASE-22189 > Project: HBase > Issue Type: Task >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Trivial > Fix For: 3.0.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1 > > > The method StoreFile.getModificationTimeStamp() was deprecated, but is still > used. The remaining usages should be moved to > StoreFile.getModificationTimestamp(). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22198) Fix flakey TestAsyncTableGetMultiThreaded
[ https://issues.apache.org/jira/browse/HBASE-22198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815493#comment-16815493 ] Hudson commented on HBASE-22198: Results for branch branch-2.1 [build #1044 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1044/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1044//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1044//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1044//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Fix flakey TestAsyncTableGetMultiThreaded > - > > Key: HBASE-22198 > URL: https://issues.apache.org/jira/browse/HBASE-22198 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.5 > > > https://builds.apache.org/job/HBase-Flaky-Tests/job/master/2959/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncTableGetMultiThreaded/test/ > The error is thrown from an admin method, where we do not have any retries if > the region is not online yet. Should be a test issue, let me fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22196) Split TestRestartCluster
[ https://issues.apache.org/jira/browse/HBASE-22196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815410#comment-16815410 ] Hudson commented on HBASE-22196: Results for branch master [build #924 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/924/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/924//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/924//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/924//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Split TestRestartCluster > > > Key: HBASE-22196 > URL: https://issues.apache.org/jira/browse/HBASE-22196 > Project: HBase > Issue Type: Sub-task > Components: test >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0 > > > The logs for later tests are messed up with error messages, like > {noformat} > 2019-04-09 09:41:11,717 WARN [LeaseRenewer:jenkins.hfs.12@localhost:41108] > hdfs.LeaseRenewer(468): Failed to renew lease for > [DFSClient_NONMAPREDUCE_400481390_21] for 55 seconds. Will retry shortly ... 
> java.net.ConnectException: Call From asf918.gq1.ygridcore.net/67.195.81.138 > to localhost:41108 failed on connection exception: java.net.ConnectException: > Connection refused; For more details see: > http://wiki.apache.org/hadoop/ConnectionRefused > at sun.reflect.GeneratedConstructorAccessor79.newInstance(Unknown > Source) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) > at org.apache.hadoop.ipc.Client.call(Client.java:1480) > at org.apache.hadoop.ipc.Client.call(Client.java:1413) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) > at com.sun.proxy.$Proxy30.renewLease(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:595) > at sun.reflect.GeneratedMethodAccessor154.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > at com.sun.proxy.$Proxy33.renewLease(Unknown Source) > at sun.reflect.GeneratedMethodAccessor154.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372) > at com.sun.proxy.$Proxy34.renewLease(Unknown Source) > at sun.reflect.GeneratedMethodAccessor154.invoke(Unknown Source) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372) > at com.sun.proxy.$Proxy34.renewLease(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:901) > at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:423) > at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:448) > at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71) > at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.net.ConnectException: Connection refused > at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) > at > org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) > at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) >
[jira] [Updated] (HBASE-22211) Remove the returnBlock method in CachingBlockReader because we can just call HFileBlock#release directly
[ https://issues.apache.org/jira/browse/HBASE-22211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-22211: - Description: Once HBASE-21957 is resolved, we can remove the returnBlock method in this issue. > Remove the returnBlock method in CachingBlockReader because we can just call > HFileBlock#release directly > - > > Key: HBASE-22211 > URL: https://issues.apache.org/jira/browse/HBASE-22211 > Project: HBase > Issue Type: Sub-task >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > > Once HBASE-21957 is resolved, we can remove the returnBlock method in this issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics
[ https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815461#comment-16815461 ] Sean Mackrory commented on HBASE-22149: --- {quote}fs.qualify(path) {quote} Yeah I probably do need that, although it hasn't come up yet in tests. {quote}What about multi-bucket support?{quote} Added that yesterday, actually - in my next patch the ZK client will now be 'jailed' inside a z-node named after the hostname in the URI. That's not quite right for WASB and ABFS but they don't need this anyway. It's right for S3 and GCS. Others may come along that require it to be rethought but that's good enough for now and I'd like to avoid putting any FS-specific logic inside this as long as I can. {quote}S3Mock sounds interesting{quote} Yes, I wondered if it was a faithful enough recreation for the full battery of s3a tests. One side note: even though I got S3Mock working, I did have to rely on APIs designated as Private (specifically the S3ClientFactory stuff). So we need to have a discussion about whether we think those APIs might be stable enough to promote to LimitedPrivate({"HBase"}), or perhaps another API wherein I simply hand the FS a ready-to-go S3 client, instead of pointing it at a "Factory" class that will return the client I already made (which is what I have to do now). {quote}For lockListing(), why is a shared lock on the path being listed not sufficient?{quote} Because you want it to have exclusive access to all the children (and in some cases all children recursively) when there may be renames going on inside that path. Other than this particular case, write locks don't have to block when there are read locks above them in the path. For a non-recursive listing, a read-lock on all children of the path you're referencing would be sufficient, but how do you correctly enumerate the children without first having the lock? You end up back where you started. 
Exclusive lock on the parent for listing is a little more aggressive than needed, but it's simple and safe. I've tried to err on that side of things since we can't seem to enumerate all the FS assumptions of HBase. If integration / performance testing finds that there is a particular point of contention, that's a targeted area we can investigate to determine whether relaxing the constraints is safe. {quote}Deadlock detection and debuggability{quote} {quote}I don't have much experience with Curator but have heard of it.{quote} Yeah, this will definitely warrant some work on that, and on operational concerns when problems arise. I've been finding Curator is not as fool-proof as I had hoped, and my next patch actually eliminates the use of curator-framework (in favor of the lower-level curator-client) for everything but the actual locking / unlocking. The APIs for creating and deleting znodes have actually been very hard to debug. > HBOSS: A FileSystem implementation to provide HBase's required semantics > > > Key: HBASE-22149 > URL: https://issues.apache.org/jira/browse/HBASE-22149 > Project: HBase > Issue Type: New Feature > Components: Filesystem Integration >Reporter: Sean Mackrory >Assignee: Sean Mackrory >Priority: Critical > Attachments: HBASE-22149-hadoop.patch, HBASE-22149-hbase-2.patch, > HBASE-22149-hbase-3.patch, HBASE-22149-hbase.patch > > > (Have been using the name HBOSS for HBase / Object Store Semantics) > I've had some thoughts about how to solve the problem of running HBase on > object stores. There has been some thought in the past about adding the > required semantics to S3Guard, but I have some concerns about that. First, > it's mixing complicated solutions to different problems (bridging the gap > between a flat namespace and a hierarchical namespace vs. solving > inconsistency). Second, it's S3-specific, whereas other object stores could > use virtually identical solutions. And third, we can't do things like atomic > renames in a true sense. 
There would have to be some trade-offs specific to > HBase's needs and it's better if we can solve that in an HBase-specific > module without mixing all that logic in with the rest of S3A. > Ideas to solve this above the FileSystem layer have been proposed and > considered (HBASE-20431, for one), and maybe that's the right way forward > long-term, but it certainly seems to be a hard problem and hasn't been done > yet. But I don't know enough of all the internal considerations to make much > of a judgment on that myself. > I propose a FileSystem implementation that wraps another FileSystem instance > and provides locking of FileSystem operations to ensure correct semantics. > Locking could quite possibly be done on the same ZooKeeper ensemble as an > HBase cluster already uses (I'm
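The parent-lock reasoning in the HBASE-22149 discussion above can be sketched in plain Java. This is a hedged illustration only: HBOSS takes its locks via ZooKeeper, while this in-JVM stand-in uses a per-path ReentrantReadWriteLock. The class and method names (`ListingLockSketch`, `listLocked`) are hypothetical, not HBOSS APIs; the point is why the listing takes the exclusive lock on the directory itself rather than read locks on children it cannot yet enumerate.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch (hypothetical names): take an exclusive lock on the
// directory being listed so no rename can run under it while we enumerate
// children. The real HBOSS prototype locks on ZooKeeper; this in-JVM version
// only shows the lock ordering.
public class ListingLockSketch {
  private static final Map<String, ReentrantReadWriteLock> LOCKS = new ConcurrentHashMap<>();

  private static ReentrantReadWriteLock lockFor(String path) {
    return LOCKS.computeIfAbsent(path, p -> new ReentrantReadWriteLock());
  }

  // Exclusive lock on the parent: simpler than read-locking every child,
  // because we cannot correctly enumerate the children before holding a lock.
  public static String[] listLocked(String dir, String[] children) {
    ReentrantReadWriteLock lock = lockFor(dir);
    lock.writeLock().lock();
    try {
      return children.clone(); // stand-in for the wrapped FileSystem's listStatus
    } finally {
      lock.writeLock().unlock();
    }
  }

  public static void main(String[] args) {
    String[] out = listLocked("/hbase/data", new String[] {"t1", "t2"});
    System.out.println(out.length);
  }
}
```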
[jira] [Commented] (HBASE-22104) Remove Hadoop 2.7 from next minor releases
[ https://issues.apache.org/jira/browse/HBASE-22104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815476#comment-16815476 ] Josh Elser commented on HBASE-22104: Thanks Peter. That's helpful. > Remove Hadoop 2.7 from next minor releases > -- > > Key: HBASE-22104 > URL: https://issues.apache.org/jira/browse/HBASE-22104 > Project: HBase > Issue Type: Task > Components: hadoop2 >Affects Versions: 3.0.0, 1.6.0, 2.3.0 >Reporter: Sean Busbey >Assignee: Josh Elser >Priority: Critical > > Hadoop 2.7 is now EOM ([common-dev@hadoop "\[DISCUSS\] branch 2.7 > EoL"|https://lists.apache.org/thread.html/d1f98c2c386f2f4b980489b543db3d0bb7bdb94ea12f8fc5a90f527b@%3Ccommon-dev.hadoop.apache.org%3E]) > and has an active licensing issue (HADOOP-13794) > Let's go ahead and axe it from the next minor releases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22210) Fix hbase-connectors-assembly to include every jar
Balazs Meszaros created HBASE-22210: --- Summary: Fix hbase-connectors-assembly to include every jar Key: HBASE-22210 URL: https://issues.apache.org/jira/browse/HBASE-22210 Project: HBase Issue Type: Task Components: hbase-connectors Affects Versions: connector-1.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros Fix For: connector-1.0.0 After compiling hbase-connectors, {{bin/hbase-connectors kafkaproxy}} throws the following exception: {noformat} Error: Could not find or load main class org.apache.hadoop.hbase.kafka.KafkaProxy {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21957) Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one
[ https://issues.apache.org/jira/browse/HBASE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-21957: - Attachment: HBASE-21957.HBASE-21879.v2.patch > Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one > - > > Key: HBASE-21957 > URL: https://issues.apache.org/jira/browse/HBASE-21957 > Project: HBase > Issue Type: Sub-task >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Attachments: HBASE-21957.HBASE-21879.v1.patch, > HBASE-21957.HBASE-21879.v2.patch > > > After HBASE-12295, we have blocks with MemoryType.SHARED or > MemoryType.EXCLUSIVE; a block in the offheap BucketCache is shared and has a > reference count to track its life cycle. If no RPC references the shared > block, the block can be evicted. > After HBASE-21916, we introduced a refcount for ByteBuff, so I think we can > unify the two into one. I tried to fix this while preparing the patch for > HBASE-21879, but it can be a separate sub-task and won't affect the main > logic of HBASE-21879, so I created a separate one. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
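The design described in HBASE-21957 above can be reduced to a small sketch: every HFileBlock backed by the same BucketEntry holds the *same* refCnt object, so there is one counter to retain and release. All names here (`SharedRefCntSketch`, `RefCnt`, and the toy `BucketEntry`/`HFileBlock`) are illustrative assumptions, not the actual HBase classes.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hedged sketch of the unification: one shared refCnt owned by the bucket
// entry, handed to every block backed by that entry, so the eviction path and
// the RPC release path decrement the same counter. Names are illustrative.
public class SharedRefCntSketch {
  static final class RefCnt {
    private final AtomicInteger count = new AtomicInteger(1); // cache's own reference

    int retain() { return count.incrementAndGet(); }

    // Returns true when the last reference is gone and memory can be freed.
    boolean release() { return count.decrementAndGet() == 0; }
  }

  static final class BucketEntry {
    final RefCnt refCnt = new RefCnt();
  }

  static final class HFileBlock {
    final RefCnt refCnt;

    HFileBlock(BucketEntry entry) {
      this.refCnt = entry.refCnt; // share, do not copy
      this.refCnt.retain();       // one reference per block handed out
    }

    boolean release() { return refCnt.release(); }
  }

  public static void main(String[] args) {
    BucketEntry entry = new BucketEntry();
    HFileBlock b1 = new HFileBlock(entry); // two RPC references plus the
    HFileBlock b2 = new HFileBlock(entry); // cache's own reference
    boolean freedEarly = b1.release() || b2.release(); // still referenced
    boolean freedLast = entry.refCnt.release(); // eviction drops the last ref
    System.out.println(freedEarly + " " + freedLast);
  }
}
```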
[jira] [Commented] (HBASE-22199) Replace "UTF-8" with StandardCharsets.UTF_8 where possible
[ https://issues.apache.org/jira/browse/HBASE-22199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815481#comment-16815481 ] Peter Somogyi commented on HBASE-22199: --- On a previous charset cleanup we used Bytes.toBytes("string") instead of getBytes(StandardCharsets.UTF_8). Wouldn't it be better to use that? See HBASE-19545. > Replace "UTF-8" with StandardCharsets.UTF_8 where possible > -- > > Key: HBASE-22199 > URL: https://issues.apache.org/jira/browse/HBASE-22199 > Project: HBase > Issue Type: Task >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Trivial > > Currently the String "UTF-8" is used in some places where > StandardCharsets.UTF_8 could be used. To make it easier to maintain, the > current usages of "UTF-8" as a String should be replaced with > StandardCharsets.UTF_8. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
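For reference, a minimal example of the substitution being discussed in HBASE-22199: the two calls produce identical bytes, but the constant avoids the charset lookup and the checked UnsupportedEncodingException (the class name here is made up for the example).

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Small illustration of the cleanup: the string constant "UTF-8" forces a
// checked exception and a runtime charset lookup, while
// StandardCharsets.UTF_8 (or HBase's Bytes.toBytes, as suggested in the
// comment) avoids both and yields the same bytes.
public class CharsetCleanup {
  public static void main(String[] args) throws Exception {
    byte[] viaName = "row-1".getBytes("UTF-8");                 // old style
    byte[] viaConst = "row-1".getBytes(StandardCharsets.UTF_8); // preferred
    System.out.println(Arrays.equals(viaName, viaConst));
  }
}
```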
[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose
[ https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815488#comment-16815488 ] Hudson commented on HBASE-21879: Results for branch HBASE-21879 [build #59 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/59/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/59//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/59//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/59//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Read HFile's block to ByteBuffer directly instead of to byte for reducing > young gc purpose > -- > > Key: HBASE-21879 > URL: https://issues.apache.org/jira/browse/HBASE-21879 > Project: HBase > Issue Type: Improvement >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Fix For: 3.0.0, 2.3.0 > > Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, > QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png > > > In HFileBlock#readBlockDataInternal, we have the following: > {code} > @VisibleForTesting > protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset, > long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, > boolean updateMetrics) > throws IOException { > // . > // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with > BBPool (offheap). 
> byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize]; > int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize, > onDiskSizeWithHeader - preReadHeaderSize, true, offset + > preReadHeaderSize, pread); > if (headerBuf != null) { > // ... > } > // ... > } > {code} > In the read path, we still read the block from the HFile into an on-heap > byte[], then copy the on-heap byte[] to the offheap bucket cache > asynchronously, and in my 100% get performance test I also observed some > frequent young GC. The largest memory footprint in the young gen should be > the on-heap block byte[]. > In fact, we can read the HFile's block into a ByteBuffer directly instead of > into a byte[] to reduce young GC. We did not implement this before because > the older HDFS client had no ByteBuffer reading interface, but 2.7+ supports > this now, so I think we can fix it. > I will provide a patch and some performance comparison for this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
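The proposal in the HBASE-21879 description above — fill a ByteBuffer directly rather than staging through an on-heap byte[] — can be sketched with a local FileChannel. This is an assumption-laden stand-in: the actual patch targets HDFS's FSDataInputStream (which gained ByteBuffer reads by Hadoop 2.7), and `DirectBlockRead`/`readBlock` are hypothetical names used only to show the shape of a positional read into a direct buffer.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hedged sketch: read a block straight into a direct ByteBuffer so no
// young-gen byte[] is allocated. A local FileChannel stands in for the HDFS
// input stream to keep the example self-contained.
public class DirectBlockRead {
  static ByteBuffer readBlock(Path file, long offset, int size) throws IOException {
    ByteBuffer block = ByteBuffer.allocateDirect(size); // off-heap, no byte[]
    try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
      while (block.hasRemaining()) {
        // positional read: fills the buffer at its current position
        if (ch.read(block, offset + block.position()) < 0) {
          break; // hit EOF before the block was full
        }
      }
    }
    block.flip(); // make the filled region readable
    return block;
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempFile("block", ".bin");
    Files.write(tmp, new byte[] {1, 2, 3, 4, 5, 6, 7, 8});
    ByteBuffer b = readBlock(tmp, 2, 4); // bytes 3..6 of the file
    System.out.println(b.remaining() + " " + b.get(0));
    Files.delete(tmp);
  }
}
```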
[jira] [Commented] (HBASE-22189) Remove usage of StoreFile.getModificationTimeStamp
[ https://issues.apache.org/jira/browse/HBASE-22189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815538#comment-16815538 ] Hudson commented on HBASE-22189: Results for branch branch-2.0 [build #1510 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1510/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1510//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1510//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1510//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Remove usage of StoreFile.getModificationTimeStamp > -- > > Key: HBASE-22189 > URL: https://issues.apache.org/jira/browse/HBASE-22189 > Project: HBase > Issue Type: Task >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Trivial > Fix For: 3.0.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1 > > > The method StoreFile.getModificationTimeStamp() was deprecated, but is still > used. The remaining usages should be moved to > StoreFile.getModificationTimestamp(). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22144) MultiRowRangeFilter does not work with reversed scans
[ https://issues.apache.org/jira/browse/HBASE-22144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815464#comment-16815464 ] Josh Elser commented on HBASE-22144: Thanks, Toshi. I think that's a good idea. We can have a discussion about it. I could see us interpreting "forward ranges" for a reverse scan being considered a "bug", too. But, we can see what others think. > MultiRowRangeFilter does not work with reversed scans > - > > Key: HBASE-22144 > URL: https://issues.apache.org/jira/browse/HBASE-22144 > Project: HBase > Issue Type: Bug >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > Attachments: HBASE-22144.001.patch, HBASE-22144.002.patch, > HBASE-22144.002.patch > > > It appears that MultiRowRangeFilter was never written to function with > reverse scans. There is too much logic that operates with the assumption that > we are always moving "forward" through increasing ranges. It needs to be > rewritten to "traverse" forward or backward, given the context of the scan > being used. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22212) Backport missing filter improvements
Josh Elser created HBASE-22212: -- Summary: Backport missing filter improvements Key: HBASE-22212 URL: https://issues.apache.org/jira/browse/HBASE-22212 Project: HBase Issue Type: Bug Components: Filters Reporter: Josh Elser Assignee: Josh Elser HBASE-19008 and HBASE-21129 were never backported beyond branch-2. I can't find any reason that this was not done. Despite these being public-tagged classes, no incompatible changes were added. The lack of these changes prevents HBASE-22144 from being backported cleanly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-22144) MultiRowRangeFilter does not work with reversed scans
[ https://issues.apache.org/jira/browse/HBASE-22144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser updated HBASE-22144: --- Component/s: Filters > MultiRowRangeFilter does not work with reversed scans > - > > Key: HBASE-22144 > URL: https://issues.apache.org/jira/browse/HBASE-22144 > Project: HBase > Issue Type: Bug > Components: Filters >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > Attachments: HBASE-22144.001.patch, HBASE-22144.002.patch, > HBASE-22144.002.patch > > > It appears that MultiRowRangeFilter was never written to function with > reverse scans. There is too much logic that operates with the assumption that > we are always moving "forward" through increasing ranges. It needs to be > rewritten to "traverse" forward or backward, given the context of the scan > being used. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20151) Bug with SingleColumnValueFilter and FamilyFilter
[ https://issues.apache.org/jira/browse/HBASE-20151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815908#comment-16815908 ] Zheng Hu commented on HBASE-20151: -- Assigned it to me first. I'll think about this issue; if there is no good solution, I'll close it. Thanks. > Bug with SingleColumnValueFilter and FamilyFilter > - > > Key: HBASE-20151 > URL: https://issues.apache.org/jira/browse/HBASE-20151 > Project: HBase > Issue Type: Bug >Affects Versions: 2.1.0, 2.0.1, 1.4.5 > Environment: MacOS 10.13.3 > HBase 1.3.1 >Reporter: Steven Sadowski >Assignee: Reid Chan >Priority: Major > Fix For: 3.0.0, 1.5.0, 2.3.0 > > Attachments: HBASE-20151.master.001.patch, > HBASE-20151.master.002.patch, HBASE-20151.master.003.patch, > HBASE-20151.master.004.patch, HBASE-20151.master.004.patch, > HBASE-20151.master.005.patch, HBASE-20151.master.006.patch, > filter-list-type.v1.txt > > > When running the following queries, the result is sometimes returned > correctly and other times incorrectly, depending on the qualifier queried. > Setup: > {code:java} > create 'test', 'a', 'b' > test = get_table 'test' > test.put '1', 'a:1', nil > test.put '1', 'a:10', nil > test.put '1', 'b:2', nil > {code} > > This query works fine when the SCVF's qualifier has length 1 (i.e. '1') : > {code:java} > test.scan({ FILTER => "( > SingleColumnValueFilter('a','1',=,'binary:',true,true) AND > FamilyFilter(=,'binary:b') )"}) > ROW COLUMN+CELL > 1column=b:2, > timestamp=1520455888059, value= > 1 row(s) in 0.0060 seconds > {code} > > The query should return the same result when passed a qualifier of length 2 > (i.e. '10') : > {code:java} > test.scan({ FILTER => "( > SingleColumnValueFilter('a','10',=,'binary:',true,true) AND > FamilyFilter(=,'binary:b') )"}) > ROW COLUMN+CELL > 0 row(s) in 0.0110 seconds > {code} > However, in this case, it does not return any row (the expected result would > be the same result as the first query). 
> > Removing the family filter while the qualifier is '10' yields expected > results: > {code:java} > test.scan({ FILTER => "( > SingleColumnValueFilter('a','10',=,'binary:',true,true) )"}) > ROW COLUMN+CELL > 1column=a:1, > timestamp=1520455887954, value= > 1column=a:10, > timestamp=1520455888024, value= > 1column=b:2, > timestamp=1520455888059, value= > 1 row(s) in 0.0140 seconds > {code} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815909#comment-16815909 ] Reid Chan commented on HBASE-20993: --- Got stuck in async(netty) ipc, but i'm still working. > [Auth] IPC client fallback to simple auth allowed doesn't work > -- > > Key: HBASE-20993 > URL: https://issues.apache.org/jira/browse/HBASE-20993 > Project: HBase > Issue Type: Bug > Components: Client, IPC/RPC, security >Affects Versions: 1.2.6, 1.3.2, 1.2.7, 1.4.7 >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Critical > Fix For: 1.5.0, 1.4.10, 1.3.4 > > Attachments: HBASE-20993.001.patch, > HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, > HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, > HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, > HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.008.patch, > HBASE-20993.branch-1.009.patch, HBASE-20993.branch-1.009.patch, > HBASE-20993.branch-1.010.patch, HBASE-20993.branch-1.011.patch, > HBASE-20993.branch-1.012.patch, HBASE-20993.branch-1.2.001.patch, > HBASE-20993.branch-1.wip.002.patch, HBASE-20993.branch-1.wip.patch, > yetus-local-testpatch-output-009.txt > > > It is easily reproducible. > client's hbase-site.xml: hadoop.security.authentication:kerberos, > hbase.security.authentication:kerberos, > hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal > are right set > A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception: > {code} > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592) > at > 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738) > at > org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674) > at >
[jira] [Assigned] (HBASE-20151) Bug with SingleColumnValueFilter and FamilyFilter
[ https://issues.apache.org/jira/browse/HBASE-20151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu reassigned HBASE-20151: Assignee: Zheng Hu (was: Reid Chan) > Bug with SingleColumnValueFilter and FamilyFilter > - > > Key: HBASE-20151 > URL: https://issues.apache.org/jira/browse/HBASE-20151 > Project: HBase > Issue Type: Bug >Affects Versions: 2.1.0, 2.0.1, 1.4.5 > Environment: MacOS 10.13.3 > HBase 1.3.1 >Reporter: Steven Sadowski >Assignee: Zheng Hu >Priority: Major > Fix For: 3.0.0, 1.5.0, 2.3.0 > > Attachments: HBASE-20151.master.001.patch, > HBASE-20151.master.002.patch, HBASE-20151.master.003.patch, > HBASE-20151.master.004.patch, HBASE-20151.master.004.patch, > HBASE-20151.master.005.patch, HBASE-20151.master.006.patch, > filter-list-type.v1.txt > > > When running the following queries, the result is sometimes returned > correctly and other times incorrectly, depending on the qualifier queried. > Setup: > {code:java} > create 'test', 'a', 'b' > test = get_table 'test' > test.put '1', 'a:1', nil > test.put '1', 'a:10', nil > test.put '1', 'b:2', nil > {code} > > This query works fine when the SCVF's qualifier has length 1 (i.e. '1') : > {code:java} > test.scan({ FILTER => "( > SingleColumnValueFilter('a','1',=,'binary:',true,true) AND > FamilyFilter(=,'binary:b') )"}) > ROW COLUMN+CELL > 1column=b:2, > timestamp=1520455888059, value= > 1 row(s) in 0.0060 seconds > {code} > > The query should return the same result when passed a qualifier of length 2 > (i.e. '10') : > {code:java} > test.scan({ FILTER => "( > SingleColumnValueFilter('a','10',=,'binary:',true,true) AND > FamilyFilter(=,'binary:b') )"}) > ROW COLUMN+CELL > 0 row(s) in 0.0110 seconds > {code} > However, in this case, it does not return any row (the expected result would > be the same result as the first query). 
> > Removing the family filter while the qualifier is '10' yields expected > results: > {code:java} > test.scan({ FILTER => "( > SingleColumnValueFilter('a','10',=,'binary:',true,true) )"}) > ROW COLUMN+CELL > 1column=a:1, > timestamp=1520455887954, value= > 1column=a:10, > timestamp=1520455888024, value= > 1column=b:2, > timestamp=1520455888059, value= > 1 row(s) in 0.0140 seconds > {code} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-8443) Queue a balancer run when regionservers report in for the first time
[ https://issues.apache.org/jira/browse/HBASE-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815907#comment-16815907 ] Biju Nair commented on HBASE-8443: -- Similar to HBASE-3268. Can we close this? [~apurtell] . > Queue a balancer run when regionservers report in for the first time > > > Key: HBASE-8443 > URL: https://issues.apache.org/jira/browse/HBASE-8443 > Project: HBase > Issue Type: Improvement > Components: Balancer >Reporter: Elliott Clark >Priority: Major > > When running integration tests it's apparent that lots of region servers sit > for quite a while in between balancer runs. > I propose > * Queuing one balancer run that will run 30 seconds after a new region server > checks in. > * Reset the balancer period if we queue a balancer run. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-10075) add a locality-aware balancer
[ https://issues.apache.org/jira/browse/HBASE-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815910#comment-16815910 ] Biju Nair commented on HBASE-10075: --- With SLB taking data locality into account to calculate the cost and come up with a target balanced cluster, it looks like the idea behind this request is taken care of, no? Can this be closed? > add a locality-aware balancer > - > > Key: HBASE-10075 > URL: https://issues.apache.org/jira/browse/HBASE-10075 > Project: HBase > Issue Type: New Feature > Components: Balancer >Affects Versions: 0.94.12 >Reporter: Chengxiang Li >Priority: Major > > basic idea: > during rebalance: for each region server, iterate over its regions, give each > region a balance score, and remove the lowest one until the server's region > count reaches the average floor. > during assignment: match to-be-assigned regions with each active region > server as pairs, give each pair a balance score; the highest wins the region. > here is the mark formula: > (1 – tableRegionNumberOnServer/allTableRegionNumber) * tableBalancerWeight > + (1 – regionNumberOnServer/allRegionNumber) * serverBalancerWeight + > regionBlockSizeOnServer/regionBlockSize * localityWeight > + (previousServer?1:0) * stickinessWeight > there are 4 factors which influence the final balance score: > 1. region balance > 2. table region balance > 3. region locality > 4. region stickiness > by adjusting the weight of these 4 factors, we can balance the cluster with > different strategies. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
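The mark formula quoted in the HBASE-10075 proposal above, written out as code so the four weighted factors are explicit; the class, method, and all sample weights/counts below are invented purely for illustration and are not part of any HBase balancer.

```java
// Hypothetical sketch of the proposed balance score: four weighted terms for
// table balance, server balance, locality, and stickiness, exactly as in the
// formula quoted in the comment.
public class LocalityScoreSketch {
  static double score(int tableRegionsOnServer, int allTableRegions,
                      int regionsOnServer, int allRegions,
                      long localBlockSize, long regionBlockSize,
                      boolean previousServer,
                      double tableW, double serverW, double localityW, double stickyW) {
    return (1.0 - (double) tableRegionsOnServer / allTableRegions) * tableW
        + (1.0 - (double) regionsOnServer / allRegions) * serverW
        + ((double) localBlockSize / regionBlockSize) * localityW
        + (previousServer ? 1 : 0) * stickyW;
  }

  public static void main(String[] args) {
    // A server holding few of the table's regions, with full locality, that
    // previously hosted the region, scores highest for that region.
    double s = score(1, 10, 5, 100, 128, 128, true, 1.0, 1.0, 2.0, 0.5);
    System.out.println(s);
  }
}
```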
[jira] [Commented] (HBASE-22020) upgrade to yetus 0.9.0
[ https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815913#comment-16815913 ] Hudson commented on HBASE-22020: Results for branch HBASE-22020 [build #9 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22020/9/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22020/9//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22020/9//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22020/9//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > upgrade to yetus 0.9.0 > -- > > Key: HBASE-22020 > URL: https://issues.apache.org/jira/browse/HBASE-22020 > Project: HBase > Issue Type: Task > Components: build, community >Reporter: stack >Assignee: Sean Busbey >Priority: Major > Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, > HBASE-22020.1.patch > > > branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language > js can not be found" > See parent for some context. Checkstyle references dtds that were hosted on > puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing > for among other reasons, complaint that there is bad xml in the build... > notably, the unresolvable DTDs. > I'd just update the DTDs but there is a need for a js engine some where and > openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and > then we can backport the parent issue... 
> See > https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt > ... which, in case it's rolled away, is filled with this message: > "script engine for language js can not be found" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-8549) Integrate Favored Nodes into StochasticLoadBalancer
[ https://issues.apache.org/jira/browse/HBASE-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-8549. -- Resolution: Implemented I'll take your word for it [~gsbiju]. Implemented by HBASE-16942 > Integrate Favored Nodes into StochasticLoadBalancer > --- > > Key: HBASE-8549 > URL: https://issues.apache.org/jira/browse/HBASE-8549 > Project: HBase > Issue Type: Bug > Components: Balancer >Reporter: Elliott Clark >Priority: Major > Attachments: HBASE-8549-0.patch > > > Right now we have a FavoredNodeLoadBalancer. It would be pretty easy to > integrate the favored node list into the stochastic balancer. Then we would > have the best of both worlds. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-16942) Add FavoredStochasticLoadBalancer and FN Candidate generators
[ https://issues.apache.org/jira/browse/HBASE-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815916#comment-16815916 ] stack commented on HBASE-16942: --- Do I have to do anything to turn this on [~thiruvel]? If so, maybe release note it please sir? Otherwise, folks are unlikely to find this nice addition. Thanks sir. > Add FavoredStochasticLoadBalancer and FN Candidate generators > - > > Key: HBASE-16942 > URL: https://issues.apache.org/jira/browse/HBASE-16942 > Project: HBase > Issue Type: Sub-task > Components: FavoredNodes >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan >Priority: Major > Fix For: 2.0.0 > > Attachments: HBASE-16942.master.001.patch, > HBASE-16942.master.002.patch, HBASE-16942.master.003.patch, > HBASE-16942.master.004.patch, HBASE-16942.master.005.patch, > HBASE-16942.master.006.patch, HBASE-16942.master.007.patch, > HBASE-16942.master.008.patch, HBASE-16942.master.009.patch, > HBASE-16942.master.010.patch, HBASE-16942.master.011.patch, > HBASE-16942.master.012.patch, HBASE_16942_rough_draft.patch > > > This deals with the balancer based enhancements to favored nodes patch as > discussed in HBASE-15532. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-9741) Remove hbase.regions.slop from hbase-default.xml
[ https://issues.apache.org/jira/browse/HBASE-9741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815920#comment-16815920 ] Biju Nair commented on HBASE-9741: -- Current [hbase-default.xml|https://github.com/apache/hbase/blob/f22b8ade631367d584b7065c7c76b3e0b6eca97b/hbase-common/src/main/resources/hbase-default.xml#L624-L630] has the smaller {{slop}} for SLB, which is enabled by default, and the HBase book documents the higher {{slop}} value used by SimpleLoadBalancer. Can we close this ticket? > Remove hbase.regions.slop from hbase-default.xml > > > Key: HBASE-9741 > URL: https://issues.apache.org/jira/browse/HBASE-9741 > Project: HBase > Issue Type: Bug > Components: Balancer >Reporter: Elliott Clark >Priority: Major > Labels: beginner > Attachments: HBASE-9741-v0.patch, HBASE-9741-v1.patch, > HBASE-9741-v3.patch, HBASE-9741-v4.patch > > > Different balancers have different slop default values. We should remove > hbase.regions.slop from hbase-default.xml -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21152) Add parameter validations for commands in HBase shell
[ https://issues.apache.org/jira/browse/HBASE-21152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815921#comment-16815921 ] Biju Nair commented on HBASE-21152: --- If {{status}} is used as a parameter for a shell command for e.g. {{describe status}} then {{status}} gets executed as a command followed by the user intended command i.e. {{describe}}. Looks like a broader issues and not confined to {{balance_switch}} command alone. > Add parameter validations for commands in HBase shell > - > > Key: HBASE-21152 > URL: https://issues.apache.org/jira/browse/HBASE-21152 > Project: HBase > Issue Type: Improvement > Components: Balancer, shell >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > > One of our customers got confused with "balance_switch" command in HBase > shell. > They mistakenly ran "balance_swich status" command instead of > "balancer_enabled" command to see if the balancer is enabled. However, like > the following, it didn't cause any errors and it looks like the command was > successful. > {code} > hbase> balance_switch status > 1 active master, 0 backup masters, 1 servers, 0 dead, 2. average load > Took 0.0055 seconds > Previous balancer state : true > Took 0.0041 seconds > => "true" > {code} > To make matters worse, the "balance_swich status" command will make the > balancer disabled. > Of course, the command was wrong but I think that's a little bit confusing. > So I think we need to add parameter validations for commands in HBase shell. > I'll check if we need to add validations for any other commands in this Jira. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HBASE-21152) Add parameter validations for commands in HBase shell
[ https://issues.apache.org/jira/browse/HBASE-21152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815921#comment-16815921 ] Biju Nair edited comment on HBASE-21152 at 4/12/19 3:10 AM: If {{status}} is used as a parameter for a shell command for e.g. {{describe status}} then {{status}} gets executed as a command followed by the user intended command i.e. {{describe}}. Looks like a broader issue and not confined to {{balance_switch}} command alone. was (Author: gsbiju): If {{status}} is used as a parameter for a shell command for e.g. {{describe status}} then {{status}} gets executed as a command followed by the user intended command i.e. {{describe}}. Looks like a broader issues and not confined to {{balance_switch}} command alone. > Add parameter validations for commands in HBase shell > - > > Key: HBASE-21152 > URL: https://issues.apache.org/jira/browse/HBASE-21152 > Project: HBase > Issue Type: Improvement > Components: Balancer, shell >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > > One of our customers got confused with "balance_switch" command in HBase > shell. > They mistakenly ran "balance_swich status" command instead of > "balancer_enabled" command to see if the balancer is enabled. However, like > the following, it didn't cause any errors and it looks like the command was > successful. > {code} > hbase> balance_switch status > 1 active master, 0 backup masters, 1 servers, 0 dead, 2. average load > Took 0.0055 seconds > Previous balancer state : true > Took 0.0041 seconds > => "true" > {code} > To make matters worse, the "balance_swich status" command will make the > balancer disabled. > Of course, the command was wrong but I think that's a little bit confusing. > So I think we need to add parameter validations for commands in HBase shell. > I'll check if we need to add validations for any other commands in this Jira. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22144) MultiRowRangeFilter does not work with reversed scans
[ https://issues.apache.org/jira/browse/HBASE-22144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815926#comment-16815926 ] Hudson commented on HBASE-22144: Results for branch master [build #926 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/926/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/926//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/926//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/926//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > MultiRowRangeFilter does not work with reversed scans > - > > Key: HBASE-22144 > URL: https://issues.apache.org/jira/browse/HBASE-22144 > Project: HBase > Issue Type: Bug > Components: Filters, scan >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Critical > Fix For: 3.0.0, 2.2.0, 2.3.0, 2.0.6, 2.1.5 > > Attachments: HBASE-22144.001.patch, HBASE-22144.002.patch, > HBASE-22144.002.patch > > > It appears that MultiRowRangeFilter was never written to function with > reverse scans. There is too much logic that operates with the assumption that > we are always moving "forward" through increasing ranges. It needs to be > rewritten to "traverse" forward or backward, given how the context of the > scan being used. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
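[Editorial note] To make the "traverse forward or backward" point in the HBASE-22144 description concrete, here is a simplified, illustrative sketch of the symmetric lookup a reversed scan needs. It is emphatically not the actual HBASE-22144 patch: it uses integer keys in place of byte[] row keys and plain arrays in place of the filter's RowRange objects.

```java
// Over sorted, non-overlapping [start, stop) row ranges, a forward scan
// needs "the range containing this key, else the next higher range",
// while a reverse scan needs the mirror image. Hypothetical demo code.
public class RangeTraversalDemo {
    // starts[i]..stops[i] is the i-th range; both arrays ascending.
    static final int[] STARTS = {10, 30, 50};
    static final int[] STOPS  = {20, 40, 60};

    // Forward: index of the range containing key, else the next range
    // ahead of it; STARTS.length means the scan is done.
    static int nextRangeForward(int key) {
        for (int i = 0; i < STARTS.length; i++) {
            if (key < STOPS[i]) return i;
        }
        return STARTS.length;
    }

    // Reverse: index of the range containing key, else the nearest range
    // behind it; -1 means the scan is done.
    static int nextRangeBackward(int key) {
        for (int i = STARTS.length - 1; i >= 0; i--) {
            if (key >= STARTS[i]) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(nextRangeForward(35));  // 1: inside [30, 40)
        System.out.println(nextRangeForward(45));  // 2: skip ahead to [50, 60)
        System.out.println(nextRangeBackward(45)); // 1: fall back to [30, 40)
        System.out.println(nextRangeBackward(5));  // -1: nothing earlier
    }
}
```

The forward-only assumption the description complains about is visible here: a filter that only ever implements `nextRangeForward` will hand a reversed scanner hints that move the wrong way through the range list.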
[jira] [Commented] (HBASE-22194) Snapshot unittests fail on Windows due to invalid file path uri
[ https://issues.apache.org/jira/browse/HBASE-22194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815925#comment-16815925 ] Hudson commented on HBASE-22194: Results for branch master [build #926 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/926/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/926//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/926//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/926//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Snapshot unittests fail on Windows due to invalid file path uri > --- > > Key: HBASE-22194 > URL: https://issues.apache.org/jira/browse/HBASE-22194 > Project: HBase > Issue Type: Bug > Components: regionserver, test >Affects Versions: 3.0.0, 2.2.0 >Reporter: Bahram Chehrazy >Assignee: Bahram Chehrazy >Priority: Major > Fix For: 3.0.0, 1.5.0, 2.3.0 > > Attachments: unittest-fix-for-windows.patch > > > These unittests are failing on Windows because the temporary snapshot file > path is not valid. > hadoop.hbase.client.TestSnapshotTemporaryDirectory > hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory > hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplica > > The error is: > > 2019-04-08 23:42:02,080 ERROR [master/MININT-2D9TFVB:0:becomeActiveMaster] > helpers.MarkerIgnoringBase(159): * ABORTING master > minint-2d9tfvb.northamerica.corp.microsoft.com,57169,1554792118500: Unhandled > exception. 
Starting shutdown. * > java.lang.IllegalArgumentException: *Wrong FS: > file://C:\src\hbase\hbase-server/2f6562be-fe12-49a4-b370-2b6928e5aa72, > expected: file:///* > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:647) > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:82) > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601) > at > org.apache.hadoop.fs.ChecksumFileSystem.delete(ChecksumFileSystem.java:638) > at > org.apache.hadoop.hbase.master.snapshot.SnapshotManager.resetTempDir(SnapshotManager.java:298) > at > org.apache.hadoop.hbase.master.snapshot.SnapshotManager.initialize(SnapshotManager.java:1186) > at > org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost.initialize(MasterProcedureManagerHost.java:50) > at > org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:828) > at > org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1004) > at > org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2373) > at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:603) > at java.lang.Thread.run(Thread.java:748) > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19762) Fix Checkstyle errors in hbase-http
[ https://issues.apache.org/jira/browse/HBASE-19762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815924#comment-16815924 ] Hudson commented on HBASE-19762: Results for branch master [build #926 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/926/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/926//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/926//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/926//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Fix Checkstyle errors in hbase-http > --- > > Key: HBASE-19762 > URL: https://issues.apache.org/jira/browse/HBASE-19762 > Project: HBase > Issue Type: Sub-task >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-19762.master.001.patch, > HBASE-19762.master.002.patch, HBASE-19762.master.003.patch, > HBASE-19762.master.004.patch > > > Fix the remaining Checkstyle errors in the *hbase-http* module and enable > Checkstyle to fail on violations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-10761) StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic
[ https://issues.apache.org/jira/browse/HBASE-10761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815927#comment-16815927 ] Biju Nair commented on HBASE-10761: --- Currently [SLB|https://github.com/apache/hbase/blob/baf3ae80f5588ee848176adefc9f56818458a387/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java#L275] has logic to check {{needBalance}} which seems to satisfy this requirement. Can this be closed? > StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic > > > Key: HBASE-10761 > URL: https://issues.apache.org/jira/browse/HBASE-10761 > Project: HBase > Issue Type: Bug > Components: Balancer >Affects Versions: 0.98.0 >Reporter: Victor Xu >Priority: Major > Attachments: HBASE_10761.patch, HBASE_10761_v2.patch > > > StochasticLoadBalancer has become the default balancer since 0.98.0. But its > balanceCluster method still uses the BaseLoadBalancer.needBalance() which is > originally designed for SimpleLoadBalancer. It's all based on the number of > regions on the regionservers. > This can cause such a problem: when the cluster has some Hot Spot Region, the > balance process may not be triggered because the numbers of regions on the > RegionServers are averaged. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-10761) StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic
[ https://issues.apache.org/jira/browse/HBASE-10761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815930#comment-16815930 ] HBase QA commented on HBASE-10761: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HBASE-10761 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/in-progress/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HBASE-10761 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12634893/HBASE_10761_v2.patch | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/70/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. > StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic > > > Key: HBASE-10761 > URL: https://issues.apache.org/jira/browse/HBASE-10761 > Project: HBase > Issue Type: Bug > Components: Balancer >Affects Versions: 0.98.0 >Reporter: Victor Xu >Priority: Major > Attachments: HBASE_10761.patch, HBASE_10761_v2.patch > > > StochasticLoadBalancer has become the default balancer since 0.98.0. But its > balanceCluster method still uses the BaseLoadBalancer.needBalance() which is > originally designed for SimpleLoadBalancer. It's all based on the number of > regions on the regionservers. > This can cause such a problem: when the cluster has some Hot Spot Region, the > balance process may not be triggered because the numbers of regions on the > RegionServers are averaged. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HBASE-10761) StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic
[ https://issues.apache.org/jira/browse/HBASE-10761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815927#comment-16815927 ] Biju Nair edited comment on HBASE-10761 at 4/12/19 3:21 AM: Currently [SLB|https://github.com/apache/hbase/blob/baf3ae80f5588ee848176adefc9f56818458a387/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java#L275] has logic to check {{needBalance}} and not use the one from SimpleLoadBalancer which seems to satisfy this requirement. Can this be closed? was (Author: gsbiju): Currently [SLB|https://github.com/apache/hbase/blob/baf3ae80f5588ee848176adefc9f56818458a387/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java#L275] has logic to check {{needBalance}} which seems to satisfy this requirement. Can this be closed? > StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic > > > Key: HBASE-10761 > URL: https://issues.apache.org/jira/browse/HBASE-10761 > Project: HBase > Issue Type: Bug > Components: Balancer >Affects Versions: 0.98.0 >Reporter: Victor Xu >Priority: Major > Attachments: HBASE_10761.patch, HBASE_10761_v2.patch > > > StochasticLoadBalancer has become the default balancer since 0.98.0. But its > balanceCluster method still uses the BaseLoadBalancer.needBalance() which is > originally designed for SimpleLoadBalancer. It's all based on the number of > regions on the regionservers. > This can cause such a problem: when the cluster has some Hot Spot Region, the > balance process may not be triggered because the numbers of regions on the > RegionServers are averaged. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-10761) StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic
[ https://issues.apache.org/jira/browse/HBASE-10761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-10761: -- Resolution: Implemented Status: Resolved (was: Patch Available) Makes sense [~gsbiju] Thanks. Resolving as implemented. > StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic > > > Key: HBASE-10761 > URL: https://issues.apache.org/jira/browse/HBASE-10761 > Project: HBase > Issue Type: Bug > Components: Balancer >Affects Versions: 0.98.0 >Reporter: Victor Xu >Priority: Major > Attachments: HBASE_10761.patch, HBASE_10761_v2.patch > > > StochasticLoadBalancer has become the default balancer since 0.98.0. But its > balanceCluster method still uses the BaseLoadBalancer.needBalance() which is > originally designed for SimpleLoadBalancer. It's all based on the number of > regions on the regionservers. > This can cause such a problem: when the cluster has some Hot Spot Region, the > balance process may not be triggered because the numbers of regions on the > RegionServers are averaged. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HBASE-10761) StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic
[ https://issues.apache.org/jira/browse/HBASE-10761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815932#comment-16815932 ] stack edited comment on HBASE-10761 at 4/12/19 3:33 AM: Makes sense [~gsbiju] Thanks. Resolving as implemented. Can open new issue if need more (this one is way old anyways). was (Author: stack): Makes sense [~gsbiju] Thanks. Resolving as implemented. > StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic > > > Key: HBASE-10761 > URL: https://issues.apache.org/jira/browse/HBASE-10761 > Project: HBase > Issue Type: Bug > Components: Balancer >Affects Versions: 0.98.0 >Reporter: Victor Xu >Priority: Major > Attachments: HBASE_10761.patch, HBASE_10761_v2.patch > > > StochasticLoadBalancer has become the default balancer since 0.98.0. But its > balanceCluster method still uses the BaseLoadBalancer.needBalance() which is > originally designed for SimpleLoadBalancer. It's all based on the number of > regions on the regionservers. > This can cause such a problem: when the cluster has some Hot Spot Region, the > balance process may not be triggered because the numbers of regions on the > RegionServers are averaged. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HBASE-12829) Request count in RegionLoad may not accurate to compute the load cost for region
[ https://issues.apache.org/jira/browse/HBASE-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815935#comment-16815935 ] Biju Nair edited comment on HBASE-12829 at 4/12/19 3:39 AM: In the current version of SLB, [Read-writeRequestCostFunction|https://github.com/apache/hbase/blob/baf3ae80f5588ee848176adefc9f56818458a387/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java#L1465] extends [CostFromRegionLoadAsRateFunction|https://github.com/apache/hbase/blob/baf3ae80f5588ee848176adefc9f56818458a387/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java#L1436] which in turn uses the [average of the region requests stored for a period|https://github.com/apache/hbase/blob/baf3ae80f5588ee848176adefc9f56818458a387/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java#L1443] to calculate cost which seems to address this issue. Can this be closed? was (Author: gsbiju): In the current version of SLB, [Read-writeRequestCostFunction|https://github.com/apache/hbase/blob/baf3ae80f5588ee848176adefc9f56818458a387/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java#L1465] extends [CostFromRegionLoadAsRateFunction|https://github.com/apache/hbase/blob/baf3ae80f5588ee848176adefc9f56818458a387/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java#L1436] which in turn uses the [average of the region requests stored for a period|https://github.com/apache/hbase/blob/baf3ae80f5588ee848176adefc9f56818458a387/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java#L1443] which seems to address this issue. Can this be closed? 
> Request count in RegionLoad may not accurate to compute the load cost for > region > > > Key: HBASE-12829 > URL: https://issues.apache.org/jira/browse/HBASE-12829 > Project: HBase > Issue Type: Improvement > Components: Balancer >Affects Versions: 0.99.2 >Reporter: Jianwei Cui >Priority: Minor > > StochasticLoadBalancer#RequestCostFunction(ReadRequestCostFunction and > WriteRequestCostFunction) will compute load cost for a region based on a > number of remembered region loads. Each region load records the total count > for read/write request at reported time since it opened. However, the request > count will be reset if region moved, making the new reported count could not > represent the total request. For example, if a region has high write > throughput, the WrtieRequest in region load will be very big after onlined > for a long time, then if the region moved, the new WriteRequest will be much > smaller, making the region contributes much smaller to the cost of its > belonging rs. We may need to consider the region open time to get more > accurate region load. > As another way, how about using read/write request count at each time slots > instead of total request count? 
The total count will make older read/write > request throughput contribute more to the cost by > CostFromRegionLoadFunction#getRegionLoadCost: > {code} > protected double getRegionLoadCost(Collection<RegionLoad> regionLoadList) > { > double cost = 0; > for (RegionLoad rl : regionLoadList) { > double toAdd = getCostFromRl(rl); > if (cost == 0) { > cost = toAdd; > } else { > cost = (.5 * cost) + (.5 * toAdd); > } > } > return cost; > } > {code} > For example, assume the balancer now remembers three loads for a region at > time t1, t2, t3 (t1 < t2 < t3), the write request is w1, w2, w3 respectively > for time slots [0, t1), [t1, t2), [t2, t3), so the WriteRequest in the region > load at t1, t2, t3 will be w1, w1 + w2, w1 + w2 + w3 and the WriteRequest > cost will be: > {code} > 0.5 * (w1 + w2 + w3) + 0.25 * (w1 + w2) + 0.25 * w1 = w1 + 0.75 * w2 + > 0.5 * w3 > {code} > The w1 contributes more to the cost than w2 and w3. However, intuitively, I > think the recent read/write throughput should represent the current load of > the region better than the older ones. Therefore, how about using w1, w2 and > w3 directly when computing? Then, the cost will become: > {code} > 0.25 * w1 + 0.25 * w2 + 0.5 * w3 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-12829) Request count in RegionLoad may not accurate to compute the load cost for region
[ https://issues.apache.org/jira/browse/HBASE-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815935#comment-16815935 ] Biju Nair commented on HBASE-12829: --- In the current version of SLB, [Read-writeRequestCostFunction|https://github.com/apache/hbase/blob/baf3ae80f5588ee848176adefc9f56818458a387/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java#L1465] extends [CostFromRegionLoadAsRateFunction|https://github.com/apache/hbase/blob/baf3ae80f5588ee848176adefc9f56818458a387/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java#L1436] which in turn uses the [average of the region requests stored for a period|https://github.com/apache/hbase/blob/baf3ae80f5588ee848176adefc9f56818458a387/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java#L1443] which seems to address this issue. Can this be closed? > Request count in RegionLoad may not accurate to compute the load cost for > region > > > Key: HBASE-12829 > URL: https://issues.apache.org/jira/browse/HBASE-12829 > Project: HBase > Issue Type: Improvement > Components: Balancer >Affects Versions: 0.99.2 >Reporter: Jianwei Cui >Priority: Minor > > StochasticLoadBalancer#RequestCostFunction(ReadRequestCostFunction and > WriteRequestCostFunction) will compute load cost for a region based on a > number of remembered region loads. Each region load records the total count > for read/write request at reported time since it opened. However, the request > count will be reset if region moved, making the new reported count could not > represent the total request. For example, if a region has high write > throughput, the WrtieRequest in region load will be very big after onlined > for a long time, then if the region moved, the new WriteRequest will be much > smaller, making the region contributes much smaller to the cost of its > belonging rs. 
We may need to consider the region open time to get more > accurate region load. > As another way, how about using read/write request counts in each time slot > instead of the total request count? The total count will make older read/write > request throughput contribute more to the cost by > CostFromRegionLoadFunction#getRegionLoadCost: > {code} > protected double getRegionLoadCost(Collection<RegionLoad> regionLoadList) > { > double cost = 0; > for (RegionLoad rl : regionLoadList) { > double toAdd = getCostFromRl(rl); > if (cost == 0) { > cost = toAdd; > } else { > cost = (.5 * cost) + (.5 * toAdd); > } > } > return cost; > } > {code} > For example, assume the balancer now remembers three loads for a region at > time t1, t2, t3 (t1 < t2 < t3), the write request is w1, w2, w3 respectively > for time slots [0, t1), [t1, t2), [t2, t3), so the WriteRequest in the region > load at t1, t2, t3 will be w1, w1 + w2, w1 + w2 + w3 and the WriteRequest > cost will be: > {code} > 0.5 * (w1 + w2 + w3) + 0.25 * (w1 + w2) + 0.25 * w1 = w1 + 0.75 * w2 + > 0.5 * w3 > {code} > The w1 contributes more to the cost than w2 and w3. However, intuitively, I > think the recent read/write throughput should represent the current load of > the region better than the older ones. Therefore, how about using w1, w2 and > w3 directly when computing? Then, the cost will become: > {code} > 0.25 * w1 + 0.25 * w2 + 0.5 * w3 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
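[Editorial note] The weight expansion claimed in the HBASE-12829 description can be checked with a small, self-contained reproduction of the quoted {{getRegionLoadCost}} recursion. This is illustrative only: plain doubles stand in for the RegionLoad objects and {{getCostFromRl}} of the real balancer.

```java
import java.util.Arrays;
import java.util.List;

// Mimics the quoted CostFromRegionLoadFunction#getRegionLoadCost:
// cost = 0.5 * previousCost + 0.5 * latestLoad, iterating oldest first.
public class RegionLoadCostDemo {
    static double getRegionLoadCost(List<Double> loads) {
        double cost = 0;
        for (double toAdd : loads) {
            if (cost == 0) {
                cost = toAdd;
            } else {
                cost = (.5 * cost) + (.5 * toAdd);
            }
        }
        return cost;
    }

    public static void main(String[] args) {
        double w1 = 10, w2 = 20, w3 = 40;
        // Cumulative WriteRequest counts reported at t1, t2, t3, as the
        // description assumes: w1, w1 + w2, w1 + w2 + w3.
        double cumulative = getRegionLoadCost(Arrays.asList(w1, w1 + w2, w1 + w2 + w3));
        // Expands to w1 + 0.75 * w2 + 0.5 * w3: the oldest slot w1 has
        // the largest coefficient, exactly the complaint in the ticket.
        System.out.println(cumulative == w1 + 0.75 * w2 + 0.5 * w3); // true

        // Per-slot counts instead, as the description proposes:
        double perSlot = getRegionLoadCost(Arrays.asList(w1, w2, w3));
        // Expands to 0.25 * w1 + 0.25 * w2 + 0.5 * w3, weighting the most
        // recent throughput the heaviest.
        System.out.println(perSlot == .25 * w1 + .25 * w2 + .5 * w3); // true
    }
}
```

Running it confirms both closed forms in the description: with cumulative counts the cost is w1 + 0.75·w2 + 0.5·w3 (45 for the sample values), while per-slot counts give 0.25·w1 + 0.25·w2 + 0.5·w3 (27.5).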
[jira] [Created] (HBASE-22215) Backport MultiRowRangeFilter does not work with reverse scans
Josh Elser created HBASE-22215: -- Summary: Backport MultiRowRangeFilter does not work with reverse scans Key: HBASE-22215 URL: https://issues.apache.org/jira/browse/HBASE-22215 Project: HBase Issue Type: Sub-task Components: Filters Reporter: Josh Elser Assignee: Josh Elser Fix For: 1.5.0, 1.4.10 See parent. Modify and apply to 1.x lines. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-22144) MultiRowRangeFilter does not work with reversed scans
[ https://issues.apache.org/jira/browse/HBASE-22144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser updated HBASE-22144: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) > MultiRowRangeFilter does not work with reversed scans > - > > Key: HBASE-22144 > URL: https://issues.apache.org/jira/browse/HBASE-22144 > Project: HBase > Issue Type: Bug > Components: Filters >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0, 2.0.6, 2.1.5 > > Attachments: HBASE-22144.001.patch, HBASE-22144.002.patch, > HBASE-22144.002.patch > > > It appears that MultiRowRangeFilter was never written to function with > reverse scans. There is too much logic that operates with the assumption that > we are always moving "forward" through increasing ranges. It needs to be > rewritten to "traverse" forward or backward, depending on the direction of the > scan being used. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
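The core idea behind the rewrite can be sketched as a sorted range list walked in either direction (a minimal illustration, not the actual HBASE-22144 patch; rows and range bounds are ints rather than byte[] row keys):

```java
// Illustrative sketch, not the HBASE-22144 patch: a sorted list of
// non-overlapping [start, stop) ranges that can be traversed forward
// (ascending rows) or backward (descending rows).
public class DirectionalRanges {

    static final int[][] RANGES = { {10, 20}, {40, 50}, {70, 80} };

    // Index of the next candidate range for the given row, honoring the
    // scan direction; -1 means no range remains and the scan is exhausted.
    static int nextRange(int row, boolean reversed) {
        if (!reversed) {
            for (int i = 0; i < RANGES.length; i++) {
                if (row < RANGES[i][1]) return i; // first range not yet passed
            }
        } else {
            for (int i = RANGES.length - 1; i >= 0; i--) {
                if (row >= RANGES[i][0]) return i; // last range not yet passed
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(nextRange(25, false)); // forward scan at row 25 -> range 1
        System.out.println(nextRange(25, true));  // reverse scan at row 25 -> range 0
    }
}
```

The point of the sketch is that "the next range" depends on the direction: forward-only logic (like the original filter) picks range 1 for row 25, while a reverse scan must instead fall back to range 0.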
[jira] [Resolved] (HBASE-22214) [2.x] Backport missing filter improvements
[ https://issues.apache.org/jira/browse/HBASE-22214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser resolved HBASE-22214. Resolution: Fixed > [2.x] Backport missing filter improvements > -- > > Key: HBASE-22214 > URL: https://issues.apache.org/jira/browse/HBASE-22214 > Project: HBase > Issue Type: Bug > Components: Filters >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > Fix For: 2.0.6, 2.1.5 > > > HBASE-19008 and HBASE-21129 were never backported beyond branch-2. I can't > find any reason that this was not done. Despite these being public-tagged > classes, no incompatible changes were added. > The lack of these changes prevents HBASE-22144 from being backported cleanly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-22212) [1.x] Backport missing filter improvements
[ https://issues.apache.org/jira/browse/HBASE-22212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser updated HBASE-22212: --- Attachment: HBASE-22212.001.branch-1.patch > [1.x] Backport missing filter improvements > -- > > Key: HBASE-22212 > URL: https://issues.apache.org/jira/browse/HBASE-22212 > Project: HBase > Issue Type: Bug > Components: Filters >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > Fix For: 1.5.0, 1.4.10 > > Attachments: HBASE-22212.001.branch-1.patch > > > HBASE-19008 and HBASE-21129 were never backported beyond branch-2. I can't > find any reason that this was not done. Despite these being public-tagged > classes, no incompatible changes were added. > The lack of these changes prevents HBASE-22144 from being backported cleanly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-22212) [1.x] Backport missing filter improvements
[ https://issues.apache.org/jira/browse/HBASE-22212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser updated HBASE-22212: --- Status: Patch Available (was: Open) > [1.x] Backport missing filter improvements > -- > > Key: HBASE-22212 > URL: https://issues.apache.org/jira/browse/HBASE-22212 > Project: HBase > Issue Type: Bug > Components: Filters >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > Fix For: 1.5.0, 1.4.10 > > Attachments: HBASE-22212.001.branch-1.patch > > > HBASE-19008 and HBASE-21129 were never backported beyond branch-2. I can't > find any reason that this was not done. Despite these being public-tagged > classes, no incompatible changes were added. > The lack of these changes prevents HBASE-22144 from being backported cleanly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22212) [1.x] Backport missing filter improvements
[ https://issues.apache.org/jira/browse/HBASE-22212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815842#comment-16815842 ] Josh Elser commented on HBASE-22212: .001 is a squash of 19008 and 21129 for QA to chug on. > [1.x] Backport missing filter improvements > -- > > Key: HBASE-22212 > URL: https://issues.apache.org/jira/browse/HBASE-22212 > Project: HBase > Issue Type: Bug > Components: Filters >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > Fix For: 1.5.0, 1.4.10 > > Attachments: HBASE-22212.001.branch-1.patch > > > HBASE-19008 and HBASE-21129 were never backported beyond branch-2. I can't > find any reason that this was not done. Despite these being public-tagged > classes, no incompatible changes were added. > The lack of these changes prevents HBASE-22144 from being backported cleanly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22194) Snapshot unittests fail on Windows due to invalid file path uri
[ https://issues.apache.org/jira/browse/HBASE-22194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815851#comment-16815851 ] Hudson commented on HBASE-22194: Results for branch branch-2 [build #1814 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1814/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1814//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1814//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1814//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Snapshot unittests fail on Windows due to invalid file path uri > --- > > Key: HBASE-22194 > URL: https://issues.apache.org/jira/browse/HBASE-22194 > Project: HBase > Issue Type: Bug > Components: regionserver, test >Affects Versions: 3.0.0, 2.2.0 >Reporter: Bahram Chehrazy >Assignee: Bahram Chehrazy >Priority: Major > Fix For: 3.0.0, 1.5.0, 2.3.0 > > Attachments: unittest-fix-for-windows.patch > > > These unittests are failing on Windows because the temporary snapshot file > path is not valid. 
> hadoop.hbase.client.TestSnapshotTemporaryDirectory > hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory > hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplica > > The error is: > > 2019-04-08 23:42:02,080 ERROR [master/MININT-2D9TFVB:0:becomeActiveMaster] > helpers.MarkerIgnoringBase(159): * ABORTING master > minint-2d9tfvb.northamerica.corp.microsoft.com,57169,1554792118500: Unhandled > exception. Starting shutdown. * > java.lang.IllegalArgumentException: *Wrong FS: > file://C:\src\hbase\hbase-server/2f6562be-fe12-49a4-b370-2b6928e5aa72, > expected: file:///* > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:647) > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:82) > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601) > at > org.apache.hadoop.fs.ChecksumFileSystem.delete(ChecksumFileSystem.java:638) > at > org.apache.hadoop.hbase.master.snapshot.SnapshotManager.resetTempDir(SnapshotManager.java:298) > at > org.apache.hadoop.hbase.master.snapshot.SnapshotManager.initialize(SnapshotManager.java:1186) > at > org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost.initialize(MasterProcedureManagerHost.java:50) > at > org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:828) > at > org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1004) > at > org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2373) > at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:603) > at java.lang.Thread.run(Thread.java:748) > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
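The "Wrong FS" error above comes from gluing a Windows path onto a "file://" prefix: "C:" lands in the URI authority slot and the backslashes are illegal URI characters. A small sketch of the failure mode (illustrative only, not the HBase fix):

```java
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

// Illustrative sketch, not HBase code: why "file://C:\src\..." is rejected.
public class WrongFsSketch {

    // True if the string is a syntactically valid URI.
    static boolean parses(String s) {
        try {
            new URI(s);
            return true;
        } catch (URISyntaxException e) {
            return false; // e.g. backslash is an illegal URI character
        }
    }

    public static void main(String[] args) {
        // Naive concatenation, as in the failing snapshot temp-dir path:
        System.out.println(parses("file://C:\\src\\hbase-server/tmp")); // false
        // Going through File/Path APIs yields a valid, normalized file: URI:
        System.out.println(parses(new File("tmp").toURI().toString())); // true
    }
}
```

The usual remedy is to build such paths through File/Path APIs (e.g. new File(dir).toURI()) rather than string concatenation, so separators are normalized and the resulting scheme matches the expected file:/// filesystem.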
[jira] [Created] (HBASE-22216) "Waiting on master failover to complete" shows 30 to 40 times per millisecond
Xu Cang created HBASE-22216: --- Summary: "Waiting on master failover to complete" shows 30 to 40 times per millisecond Key: HBASE-22216 URL: https://issues.apache.org/jira/browse/HBASE-22216 Project: HBase Issue Type: Bug Components: proc-v2 Affects Versions: 1.3.0 Reporter: Xu Cang "Waiting on master failover to complete" is logged 30 to 40 times per millisecond from one host while the master is initializing. This message is too noisy. Need to fix this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22217) HBase shell command proposal : "rit assign all"
Xu Cang created HBASE-22217: --- Summary: HBase shell command proposal : "rit assign all" Key: HBASE-22217 URL: https://issues.apache.org/jira/browse/HBASE-22217 Project: HBase Issue Type: New Feature Reporter: Xu Cang HBase shell command proposal : "rit assign all" Currently we have the shell command "rit" to list all RITs. It would be handy to have a command "rit assign all" to assign all RITs. This is equivalent to getting the list of RITs from the 'rit' command and running "assign " on each one. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-22217) HBase shell command proposal : "rit assign all"
[ https://issues.apache.org/jira/browse/HBASE-22217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-22217: Component/s: shell Region Assignment Operability > HBase shell command proposal : "rit assign all" > > > Key: HBASE-22217 > URL: https://issues.apache.org/jira/browse/HBASE-22217 > Project: HBase > Issue Type: New Feature > Components: Operability, Region Assignment, shell >Reporter: Xu Cang >Priority: Minor > > HBase shell command proposal : "rit assign all" > > Currently we have the shell command "rit" to list all RITs. > It would be handy to have a command "rit assign all" to assign all RITs. > This is equivalent to getting the list of RITs from the 'rit' command and running > "assign " on each one. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22217) HBase shell command proposal : "rit assign all"
[ https://issues.apache.org/jira/browse/HBASE-22217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815858#comment-16815858 ] Sean Busbey commented on HBASE-22217: - we'll need to be clear, in the docs here and on hbck2, about the difference between the two cases of using the "assign" command on a RIT. > HBase shell command proposal : "rit assign all" > > > Key: HBASE-22217 > URL: https://issues.apache.org/jira/browse/HBASE-22217 > Project: HBase > Issue Type: New Feature >Reporter: Xu Cang >Priority: Minor > > HBase shell command proposal : "rit assign all" > > Currently we have the shell command "rit" to list all RITs. > It would be handy to have a command "rit assign all" to assign all RITs. > This is equivalent to getting the list of RITs from the 'rit' command and running > "assign " on each one. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-16513) Documentation for new RowIndex DataBlockEncoding
[ https://issues.apache.org/jira/browse/HBASE-16513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815893#comment-16815893 ] binlijin commented on HBASE-16513: -- This was resolved in the parent Jira HBASE-16213, so closing it. > Documentation for new RowIndex DataBlockEncoding > > > Key: HBASE-16513 > URL: https://issues.apache.org/jira/browse/HBASE-16513 > Project: HBase > Issue Type: Improvement > Components: documentation, Performance >Reporter: binlijin >Priority: Major > Fix For: 3.0.0, 1.5.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-16513) Documentation for new RowIndex DataBlockEncoding
[ https://issues.apache.org/jira/browse/HBASE-16513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] binlijin resolved HBASE-16513. -- Resolution: Resolved Fix Version/s: (was: 1.5.0) (was: 3.0.0) > Documentation for new RowIndex DataBlockEncoding > > > Key: HBASE-16513 > URL: https://issues.apache.org/jira/browse/HBASE-16513 > Project: HBase > Issue Type: Improvement > Components: documentation, Performance >Reporter: binlijin >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-22212) [1.x] Backport missing filter improvements
[ https://issues.apache.org/jira/browse/HBASE-22212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser updated HBASE-22212: --- Attachment: HBASE-22212.002.branch-1.patch > [1.x] Backport missing filter improvements > -- > > Key: HBASE-22212 > URL: https://issues.apache.org/jira/browse/HBASE-22212 > Project: HBase > Issue Type: Bug > Components: Filters >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > Fix For: 1.5.0, 1.4.10 > > Attachments: HBASE-22212.001.branch-1.patch, > HBASE-22212.002.branch-1.patch > > > HBASE-19008 and HBASE-21129 were never backported beyond branch-2. I can't > find any reason that this was not done. Despite these being public-tagged > classes, no incompatible changes were added. > The lack of these changes prevents HBASE-22144 from being backported cleanly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)