[jira] [Commented] (HBASE-21008) HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker
[ https://issues.apache.org/jira/browse/HBASE-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569078#comment-16569078 ] Jerry He commented on HBASE-21008: -- This is good with me! > HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker > > > Key: HBASE-21008 > URL: https://issues.apache.org/jira/browse/HBASE-21008 > Project: HBase > Issue Type: Bug >Affects Versions: 2.1.0, 1.4.6 >Reporter: Jerry He >Priority: Major > > It looks like HBase 1.x still can not open hfiles written by HBase2. > I tested the latest HBase 1.4.6 and 2.1.0. 1.4.6 tried to read and open > regions written by 2.1.0. > {code} > 2018-07-30 16:01:31,274 ERROR [StoreFileOpenerThread-info-1] > regionserver.StoreFile: Error reading timestamp range data from meta -- > proceeding without > java.lang.IllegalArgumentException: Timestamp cannot be negative. > minStamp:5783278630776778969, maxStamp:-4698050386518222402 > at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:112) > at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:100) > at > org.apache.hadoop.hbase.regionserver.TimeRangeTracker.toTimeRange(TimeRangeTracker.java:214) > at > org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:198) > at > org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521) > at > org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679) > at > org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:122) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:538) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:535) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > {code} > Or: > {code} > 
2018-07-30 16:01:31,305 ERROR [RS_OPEN_REGION-throb1:34004-0] > handler.OpenRegionHandler: Failed open of > region=janusgraph,,1532630557542.b0fa15cb0bf1b0bf740997b7056c., starting > to roll back the global memstore size. > java.io.IOException: java.io.IOException: java.io.EOFException > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1033) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:908) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:876) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6995) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6956) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6927) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6883) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6834) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:364) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:131) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.io.IOException: java.io.EOFException > at > org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:564) > at > org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:518) > at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:281) > at > org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5378) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1007) > at > 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1004) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ... 3 more > Caused by: java.io.EOFException > at java.io.DataInputStream.readFully(DataInputStream.java:197) > at java.io.DataInputStream.readLong(DataInputStream.java:416) > at >
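The two traces above share a root cause: HBase 2.x serializes the TimeRangeTracker metadata as a protobuf message, while HBase 1.x reads it back as two raw big-endian longs. A minimal, self-contained sketch of that mismatch (plain java.io only; the byte values below are made up to stand in for a protobuf payload, not taken from a real hfile):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class TimeRangeMisread {
    // Read the legacy 16-byte layout HBase 1.x expects:
    // minStamp then maxStamp, each a big-endian long.
    static long[] readLegacy(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        return new long[] { in.readLong(), in.readLong() };
    }

    public static void main(String[] args) throws IOException {
        // 16 bytes that were never meant to be two fixed-width longs
        // (imagine protobuf tags and varints). The second "long" happens
        // to start with its high bit set, so it decodes as negative --
        // the "Timestamp cannot be negative" IllegalArgumentException.
        byte[] pbLike = new byte[16];
        pbLike[0] = 0x08;
        pbLike[8] = (byte) 0xBE;
        long[] range = readLegacy(pbLike);
        System.out.println("minStamp=" + range[0] + " maxStamp=" + range[1]);

        // A payload shorter than 16 bytes fails differently: readLong()
        // hits end-of-stream, giving the EOFException of the second trace.
        try {
            readLegacy(new byte[] { 0x08, 0x01 });
        } catch (EOFException expected) {
            System.out.println("EOFException, as in the second stack trace");
        }
    }
}
```

Either way the 1.x reader cannot interpret 2.x metadata, which is why the thread discusses backporting only the "read" side to the 1.x branches.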
[jira] [Commented] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569076#comment-16569076 ] Pankaj Kumar commented on HBASE-20997: -- lgtm. > rebuildUserRegions() does not build ReplicaMapping during master switchover > --- > > Key: HBASE-20997 > URL: https://issues.apache.org/jira/browse/HBASE-20997 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.2.6, 1.3.2, 1.5.0, 1.4.6 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Major > Attachments: HBASE-20997-branch-1-v1.patch, > HBASE-20997-branch-1-v2.patch, HBASE-20997-branch-1-v4.patch, > HBASE-20997-branch-1-v5.patch, HBASE-20997-branch-1-v6.patch > > > During master switchover, rebuildUserRegions() does not rebuild the master's > in-memory defaultReplicaToOtherReplicas map. This puts the cluster in an > inconsistent state. In the read-replica case, it causes the replica parent region > to stay online without being unassigned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
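The fix has to repopulate the mapping from each default replica to its secondary replicas while rebuilding region state on switchover. A rough sketch of that bookkeeping with plain collections (`RegionStub` and `rebuild` are illustrative stand-ins, not HBase's actual RegionInfo or RegionStates API):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ReplicaMappingRebuild {
    // Stand-in for RegionInfo: the encoded name of the default replica,
    // this region's replica id (0 = default), and its own encoded name.
    static class RegionStub {
        final String defaultEncoded;
        final int replicaId;
        final String encoded;
        RegionStub(String defaultEncoded, int replicaId, String encoded) {
            this.defaultEncoded = defaultEncoded;
            this.replicaId = replicaId;
            this.encoded = encoded;
        }
    }

    // Rebuild defaultReplicaToOtherReplicas from the full region list --
    // the step the report says rebuildUserRegions() skips on switchover.
    static Map<String, Set<String>> rebuild(List<RegionStub> regions) {
        Map<String, Set<String>> map = new HashMap<>();
        for (RegionStub r : regions) {
            if (r.replicaId == 0) continue; // default replicas are keys, not values
            map.computeIfAbsent(r.defaultEncoded, k -> new HashSet<>()).add(r.encoded);
        }
        return map;
    }
}
```

Without this map the new master cannot locate the secondaries of a split parent region, so they are never unassigned, which is the inconsistency described above.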
[jira] [Updated] (HBASE-21011) Provide CLI option to run oldwals and hfiles cleaner separately
[ https://issues.apache.org/jira/browse/HBASE-21011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tak Lon (Stephen) Wu updated HBASE-21011: - Fix Version/s: 3.0.0 Status: Patch Available (was: Open) > Provide CLI option to run oldwals and hfiles cleaner separately > --- > > Key: HBASE-21011 > URL: https://issues.apache.org/jira/browse/HBASE-21011 > Project: HBase > Issue Type: Improvement > Components: Admin, Client >Affects Versions: 1.4.6, 3.0.0, 2.1.1 >Reporter: Tak Lon (Stephen) Wu >Assignee: Tak Lon (Stephen) Wu >Priority: Minor > Fix For: 3.0.0 > > > The existing cleaner chore logic first executes the HFiles cleaner and then the > oldwals cleaner in a single request, and returns success only if both complete. > There is a use case for running only the oldwals cleaner, because oldwals are using all > the disk space, while running the HFiles cleaner is too slow because there are too > many old HFiles or directories. So, this change provides the > flexibility for users who have a cleaner disabled by default, or who would like to execute an > admin command to run the oldwals and HFiles cleaning procedures individually. > NOTE that we keep the default as running both of them for backward > compatibility; the proposed admin CLI options are > {noformat} > hbase> cleaner_chore_run > hbase> cleaner_chore_run 'hfiles' > hbase> cleaner_chore_run 'oldwals' > {noformat}
[jira] [Created] (HBASE-21011) Provide CLI option to run oldwals and hfiles cleaner separately
Tak Lon (Stephen) Wu created HBASE-21011: Summary: Provide CLI option to run oldwals and hfiles cleaner separately Key: HBASE-21011 URL: https://issues.apache.org/jira/browse/HBASE-21011 Project: HBase Issue Type: Improvement Components: Admin, Client Affects Versions: 1.4.6, 3.0.0, 2.1.1 Reporter: Tak Lon (Stephen) Wu Assignee: Tak Lon (Stephen) Wu The existing cleaner chore logic first executes the HFiles cleaner and then the oldwals cleaner in a single request, and returns success only if both complete. There is a use case for running only the oldwals cleaner, because oldwals are using all the disk space, while running the HFiles cleaner is too slow because there are too many old HFiles or directories. So, this change provides the flexibility for users who have a cleaner disabled by default, or who would like to execute an admin command to run the oldwals and HFiles cleaning procedures individually. NOTE that we keep the default as running both of them for backward compatibility; the proposed admin CLI options are {{ hbase> cleaner_chore_run}} {{ hbase> cleaner_chore_run 'hfiles'}} {{ hbase> cleaner_chore_run 'oldwals'}}
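The backward-compatible dispatch this proposal implies can be sketched as follows. All names here are hypothetical, not the patch's actual code; the suppliers stand in for the real HFile and oldwals cleaner chores wired up inside the HMaster:

```java
import java.util.Locale;
import java.util.function.BooleanSupplier;

public class CleanerChoreDispatch {
    enum Target { BOTH, HFILES, OLDWALS }

    // Map the optional shell argument to a target. No argument keeps the
    // backward-compatible default of running both cleaners.
    static Target parse(String arg) {
        if (arg == null || arg.isEmpty()) return Target.BOTH;
        switch (arg.toLowerCase(Locale.ROOT)) {
            case "hfiles":  return Target.HFILES;
            case "oldwals": return Target.OLDWALS;
            default: throw new IllegalArgumentException("unknown cleaner: " + arg);
        }
    }

    // Run only the selected cleaner(s); succeed only if every one completes.
    static boolean run(Target t, BooleanSupplier hfiles, BooleanSupplier oldwals) {
        boolean ok = true;
        if (t == Target.BOTH || t == Target.HFILES) ok &= hfiles.getAsBoolean();
        if (t == Target.BOTH || t == Target.OLDWALS) ok &= oldwals.getAsBoolean();
        return ok;
    }
}
```

With this shape, `cleaner_chore_run 'oldwals'` never touches the HFiles cleaner at all, which is the point of the request when old WALs are filling the disk.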
[jira] [Commented] (HBASE-21008) HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker
[ https://issues.apache.org/jira/browse/HBASE-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569064#comment-16569064 ] Chia-Ping Tsai commented on HBASE-21008: {quote}What is preferred here? {quote} Here are my two cents. # Since we have released 2.0 and 2.1, backporting only the "read" part to 1.x is necessary. # If all 1.x deployments should work with hfiles generated by 2.x, we can file another Jira to change the serialization from protobuf back to the previous format. (Of course, the hfiles impacted by HBASE-18754 still can't work with 1.x... :() > HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker > > > Key: HBASE-21008 > URL: https://issues.apache.org/jira/browse/HBASE-21008 > Project: HBase > Issue Type: Bug >Affects Versions: 2.1.0, 1.4.6 >Reporter: Jerry He >Priority: Major
[jira] [Commented] (HBASE-21004) Backport to branch-2.0 HBASE-20708 "Remove the usage of RecoverMetaProcedure"
[ https://issues.apache.org/jira/browse/HBASE-21004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569058#comment-16569058 ] Hudson commented on HBASE-21004: Results for branch branch-2.0 [build #629 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/629/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/629//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/629//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/629//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Backport to branch-2.0 HBASE-20708 "Remove the usage of RecoverMetaProcedure" > - > > Key: HBASE-21004 > URL: https://issues.apache.org/jira/browse/HBASE-21004 > Project: HBase > Issue Type: Sub-task > Components: amv2 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.2 > > Attachments: HBASE-21004.branch-2.0.001.patch, > HBASE-21004.branch-2.0.002.patch > > > Backport parent issue to branch-2.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20708) Remove the usage of RecoverMetaProcedure in master startup
[ https://issues.apache.org/jira/browse/HBASE-20708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569059#comment-16569059 ] Hudson commented on HBASE-20708: Results for branch branch-2.0 [build #629 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/629/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/629//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/629//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/629//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Remove the usage of RecoverMetaProcedure in master startup > -- > > Key: HBASE-20708 > URL: https://issues.apache.org/jira/browse/HBASE-20708 > Project: HBase > Issue Type: Bug > Components: proc-v2, Region Assignment >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 3.0.0, 2.1.0 > > Attachments: HBASE-20708-v1.patch, HBASE-20708-v2.patch, > HBASE-20708-v3.patch, HBASE-20708-v4.patch, HBASE-20708-v5.patch, > HBASE-20708-v6.patch, HBASE-20708-v7.patch, HBASE-20708-v8.patch, > HBASE-20708-v9.patch, HBASE-20708-v9.patch, HBASE-20708.patch > > > In HBASE-20700, we make RecoverMetaProcedure use a special lock which is only > used by RMP to avoid dead lock with MoveRegionProcedure. But we will always > schedule a RMP when master starting up, so we still need to make sure that > there is no race between this RMP and other RMPs and SCPs scheduled before > the master restarts. 
> Please see the [accompanying design document|https://docs.google.com/document/d/1_872oHzrhJq4ck7f6zmp1J--zMhsIFvXSZyX1Mxg5MA/edit#heading=h.xy1z4alsq7uy] > where we call out the problem being addressed by this issue in more detail > and in which we describe our new approach to Master startup.
[jira] [Commented] (HBASE-20871) Backport HBASE-20847 to branch-2.0: "The parent procedure of RegionTransitionProcedure may not have the table lock"
[ https://issues.apache.org/jira/browse/HBASE-20871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569056#comment-16569056 ] Hadoop QA commented on HBASE-20871: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} branch-2.0 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 59s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 32s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 33s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 8s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} branch-2.0 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} hbase-procedure: The patch generated 0 new + 1 unchanged - 5 fixed = 1 total (was 6) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 19s{color} | {color:red} hbase-server: The patch generated 1 new + 25 unchanged - 1 fixed = 26 total (was 26) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 32s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 12m 24s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 49s{color} | {color:green} hbase-procedure in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 46s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 69m 45s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.master.procedure.TestMasterProcedureScheduler | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 | | JIRA Issue | HBASE-20871 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12934376/0001-HBASE-20847-The-parent-procedure-of-RegionTransition.branch-2.0.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
[jira] [Commented] (HBASE-20965) Separate region server report requests to new handlers
[ https://issues.apache.org/jira/browse/HBASE-20965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569054#comment-16569054 ] Yu Li commented on HBASE-20965: --- bq. have you encountered similar issue, as you have a thousands size cluster We haven't experienced the same issue in our cluster, but I believe it's a real-world issue observed in Xiaomi's production environment. "The jstack shows that most of handlers response for RSReport" is the evidence :-) bq. Does it has negative effects on dealing with other rpcs requests? I think this is a good question and I'd suggest using {{StealJobQueue}} to better utilize handlers. [~Yi Mei] [~zghaobac] > Separate region server report requests to new handlers > -- > > Key: HBASE-20965 > URL: https://issues.apache.org/jira/browse/HBASE-20965 > Project: HBase > Issue Type: Improvement > Components: Performance >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-20965.master.001.patch, > HBASE-20965.master.002.patch, HBASE-20965.master.003.patch, > HBASE-20965.master.004.patch > > > In the master rpc scheduler, all rpc requests are executed in a single thread pool. This > task separates rs report requests into new handlers.
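The {{StealJobQueue}} idea above — reserve handlers for region server reports but let them pick up general work when their own queue is empty — can be sketched with two plain queues. This is illustrative only; HBase's actual StealJobQueue class has different mechanics, and the method names here are made up:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class StealingHandlerSketch {
    // A report handler drains its dedicated queue first; only when that is
    // empty does it briefly poll the general queue, so reserved handlers
    // help with normal rpcs instead of sitting idle.
    static Runnable nextTask(BlockingQueue<Runnable> reports,
                             BlockingQueue<Runnable> general) throws InterruptedException {
        Runnable task = reports.poll();
        if (task != null) return task;
        return general.poll(10, TimeUnit.MILLISECONDS); // steal when idle
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> reports = new LinkedBlockingQueue<>();
        BlockingQueue<Runnable> general = new LinkedBlockingQueue<>();
        Runnable report = () -> {};
        Runnable scan = () -> {};
        reports.add(report);
        general.add(scan);
        System.out.println(nextTask(reports, general) == report); // report served first
        System.out.println(nextTask(reports, general) == scan);   // then work is stolen
    }
}
```

The design choice addresses the question quoted above: dedicating handlers costs nothing when reports are sparse, because those handlers fall back to the shared queue.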
[jira] [Commented] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569051#comment-16569051 ] Hadoop QA commented on HBASE-20997: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-1 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 38s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 21s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 52s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 23s{color} | {color:red} hbase-server: The patch generated 1 new + 24 unchanged - 1 fixed = 25 total (was 25) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 50s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 1m 46s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 26s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}118m 18s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:61288f8 | | JIRA Issue | HBASE-20997 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12934366/HBASE-20997-branch-1-v5.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux f304ab3a45b8 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
[jira] [Commented] (HBASE-21008) HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker
[ https://issues.apache.org/jira/browse/HBASE-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569045#comment-16569045 ] Jerry He commented on HBASE-21008: -- {quote}Perhaps we can backport a part of HBASE-18754 to all active 1.x branch in order to make them "can" read the hfiles generated by 2.x {quote} Yeah, only the 'read' part needs to be put in 1.x. A similar approach was used in HBASE-16189 and HBASE-19052. However, in HBASE-19116, [~stack] made changes in 2.x so that 1.x deployments no longer need to upgrade to the latest 1.x to work. What is preferred here? > HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker > > > Key: HBASE-21008 > URL: https://issues.apache.org/jira/browse/HBASE-21008 > Project: HBase > Issue Type: Bug >Affects Versions: 2.1.0, 1.4.6 >Reporter: Jerry He >Priority: Major > > It looks like HBase 1.x still cannot open hfiles written by HBase2. > I tested the latest HBase 1.4.6 and 2.1.0. 1.4.6 tried to read and open > regions written by 2.1.0. > {code} > 2018-07-30 16:01:31,274 ERROR [StoreFileOpenerThread-info-1] > regionserver.StoreFile: Error reading timestamp range data from meta -- > proceeding without > java.lang.IllegalArgumentException: Timestamp cannot be negative.
> minStamp:5783278630776778969, maxStamp:-4698050386518222402 > at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:112) > at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:100) > at > org.apache.hadoop.hbase.regionserver.TimeRangeTracker.toTimeRange(TimeRangeTracker.java:214) > at > org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:198) > at > org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521) > at > org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679) > at > org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:122) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:538) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:535) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > {code} > Or: > {code} > 2018-07-30 16:01:31,305 ERROR [RS_OPEN_REGION-throb1:34004-0] > handler.OpenRegionHandler: Failed open of > region=janusgraph,,1532630557542.b0fa15cb0bf1b0bf740997b7056c., starting > to roll back the global memstore size.
> java.io.IOException: java.io.IOException: java.io.EOFException > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1033) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:908) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:876) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6995) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6956) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6927) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6883) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6834) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:364) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:131) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.io.IOException: java.io.EOFException > at > org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:564) > at > org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:518) > at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:281) > at > org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5378) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1007) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1004) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at >
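The failures above are consistent with a serialization mismatch: HBASE-18754 (mentioned in the comments) changed how HBase 2 writes the TimeRangeTracker into the hfile metadata, while a 1.x reader still deserializes the old fixed layout of two raw 8-byte longs. A minimal sketch of that failure mode, in plain Java (this is not HBase code, and the tag-byte layout below is a simplified stand-in for the real protobuf encoding), shows how reading framed bytes as raw longs yields garbage that trips the min/max sanity check:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class TimeRangeMismatchDemo {

    // Old-style writer: two raw big-endian longs, min then max.
    static byte[] writeRaw(long min, long max) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeLong(min);
            out.writeLong(max);
            return bos.toByteArray();
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }

    // New-style writer (simplified stand-in for a framed/protobuf message):
    // tag bytes precede each field, so every byte offset after them shifts.
    static byte[] writeTagged(long min, long max) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeByte(0x08);  // hypothetical field tag for min
            out.writeLong(min);
            out.writeByte(0x10);  // hypothetical field tag for max
            out.writeLong(max);
            return bos.toByteArray();
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }

    // Old-style reader: blindly pulls two longs off the front of the bytes.
    static long[] readRaw(byte[] bytes) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            return new long[] { in.readLong(), in.readLong() };
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }

    // The sanity check that throws IllegalArgumentException in the trace above.
    static boolean looksValid(long min, long max) {
        return min >= 0 && max >= 0 && min <= max;
    }

    public static void main(String[] args) {
        long[] ok = readRaw(writeRaw(255L, 512L));
        System.out.println(looksValid(ok[0], ok[1]));   // true: matching formats parse fine

        long[] bad = readRaw(writeTagged(255L, 512L));
        System.out.println(looksValid(bad[0], bad[1])); // false: shifted bytes look corrupt
    }
}
```

The second read lands a stray tag byte in the high byte of the parsed `max`, producing exactly the kind of huge or negative timestamp seen in the log line above.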
[jira] [Updated] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huaxiang sun updated HBASE-20997: - Attachment: HBASE-20997-branch-1-v6.patch > rebuildUserRegions() does not build ReplicaMapping during master switchover > --- > > Key: HBASE-20997 > URL: https://issues.apache.org/jira/browse/HBASE-20997 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.2.6, 1.3.2, 1.5.0, 1.4.6 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Major > Attachments: HBASE-20997-branch-1-v1.patch, > HBASE-20997-branch-1-v2.patch, HBASE-20997-branch-1-v4.patch, > HBASE-20997-branch-1-v5.patch, HBASE-20997-branch-1-v6.patch > > > During master switchover, rebuildUserRegions() does not rebuild master > in-memory defaultReplicaToOtherReplicas map. This puts the cluster in an > inconsistent state. In read replica case, it causes replica parent region > stay online without being unassigned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
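For context on what rebuildUserRegions() is missing: the master keeps a map from each default replica to its secondary replicas, so that unassigning a parent region can also find its replicas. A toy sketch of rebuilding such a map from a flat region list (all names here are hypothetical illustrations, not the actual AssignmentManager code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReplicaMappingSketch {
    /** Stripped-down stand-in for HRegionInfo: a region name plus a replica id. */
    static final class Region {
        final String name;    // replicas of the same region share this name here
        final int replicaId;  // 0 is the default (primary) replica
        Region(String name, int replicaId) { this.name = name; this.replicaId = replicaId; }
    }

    /** Rebuild the defaultReplica -> otherReplicas mapping, as a master switchover must. */
    static Map<String, List<Region>> buildReplicaMapping(List<Region> regions) {
        Map<String, List<Region>> mapping = new HashMap<>();
        for (Region r : regions) {
            if (r.replicaId != 0) {
                // group each secondary under its default replica's key
                mapping.computeIfAbsent(r.name, k -> new ArrayList<>()).add(r);
            }
        }
        return mapping;
    }

    public static void main(String[] args) {
        List<Region> regions = List.of(
            new Region("t1,,123", 0),
            new Region("t1,,123", 1),
            new Region("t1,,123", 2),
            new Region("t2,,456", 0));
        Map<String, List<Region>> m = buildReplicaMapping(regions);
        System.out.println(m.get("t1,,123").size());  // 2 secondary replicas
        System.out.println(m.containsKey("t2,,456")); // false: no secondaries
    }
}
```

If the switchover path skips this rebuild, the map stays empty, and, as the description says, a replica parent region can stay online because nothing knows which replicas to unassign with it.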
[jira] [Commented] (HBASE-20952) Re-visit the WAL API
[ https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569037#comment-16569037 ] stack commented on HBASE-20952: --- bq. In general, I would like to start from what we truly need, not what we currently have Agree. Would suggest starting with a review of existing implementations. Just reading https://bookkeeper.apache.org/docs/latest/api/distributedlog-api/ is interesting ("Single writer!") or namespaces -- is a namespace a "region" in our work -- in here https://bookkeeper.apache.org/distributedlog/docs/latest/user_guide/api/core It has stuff like:
add
add // bulk add
flush
flushAndSync
... and then async versions. All sounds good... > Re-visit the WAL API > > > Key: HBASE-20952 > URL: https://issues.apache.org/jira/browse/HBASE-20952 > Project: HBase > Issue Type: Sub-task > Components: wal >Reporter: Josh Elser >Priority: Major > > Take a step back from the current WAL implementations and think about what an > HBase WAL API should look like. What are the primitive calls that we require > to guarantee durability of writes with a high degree of performance? > The API needs to take the current implementations into consideration. We > should also have a mind for what is happening in the Ratis LogService (but > the LogService should not dictate what HBase's WAL API looks like RATIS-272). > Other "systems" inside of HBase that use WALs are replication and > backup. Replication has the use-case for "tail"'ing the WAL which we > should provide via our new API. B doesn't do anything fancy (IIRC). We > should make sure all consumers are generally going to be OK with the API we > create. > The API may be "OK" (or OK in a part). We need to also consider other methods > which were "bolted" on such as {{AbstractFSWAL}} and > {{WALFileLengthProvider}}. Other corners of "WAL use" (like the > {{WALSplitter}} should also be looked at to use WAL-APIs only).
> We also need to make sure that adequate interface audience and stability > annotations are chosen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
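To make the discussion of primitives concrete, here is one shape such an API could take: a single-writer log handle with sync and async appends that return sequence ids, a durability barrier, and a tailing read for replication. All names here are hypothetical sketches of the design space, not a proposal from the issue, and the in-memory implementation exists only so the sketch runs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class WalSketch {
    /** Minimal single-writer WAL surface: appends return a sequence id. */
    interface WriteAheadLog {
        long append(byte[] entry);                        // buffered append
        CompletableFuture<Long> appendAsync(byte[] entry); // async variant
        void sync();                                      // durability barrier
        List<byte[]> tail(long fromSeqId);                // for replication tailing
    }

    /** Toy in-memory implementation, just to exercise the interface. */
    static class InMemoryWal implements WriteAheadLog {
        private final List<byte[]> entries = new ArrayList<>();

        public synchronized long append(byte[] e) {
            entries.add(e);
            return entries.size() - 1;  // sequence ids are monotonically increasing
        }
        public CompletableFuture<Long> appendAsync(byte[] e) {
            return CompletableFuture.completedFuture(append(e));
        }
        public void sync() { /* no-op: nothing is buffered in memory */ }
        public synchronized List<byte[]> tail(long from) {
            return new ArrayList<>(entries.subList((int) from, entries.size()));
        }
    }

    public static void main(String[] args) {
        WriteAheadLog wal = new InMemoryWal();
        long s0 = wal.append("edit-0".getBytes());
        long s1 = wal.append("edit-1".getBytes());
        wal.sync();
        System.out.println(s1 - s0);            // ids increase by one per append
        System.out.println(wal.tail(1).size()); // replication resumes from a seq id
    }
}
```

A tail-by-sequence-id read is one way to serve the replication use-case named in the description without exposing file-level details like {{WALFileLengthProvider}} to consumers.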
[jira] [Updated] (HBASE-20871) Backport HBASE-20847 to branch-2.0: "The parent procedure of RegionTransitionProcedure may not have the table lock"
[ https://issues.apache.org/jira/browse/HBASE-20871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20871: -- Status: Patch Available (was: Open) > Backport HBASE-20847 to branch-2.0: "The parent procedure of > RegionTransitionProcedure may not have the table lock" > --- > > Key: HBASE-20871 > URL: https://issues.apache.org/jira/browse/HBASE-20871 > Project: HBase > Issue Type: Bug > Components: amv2 >Reporter: stack >Assignee: stack >Priority: Critical > Fix For: 2.0.2 > > Attachments: > 0001-HBASE-20847-The-parent-procedure-of-RegionTransition.branch-2.0.patch > > > Evaluate HBASE-20847 for backport before we cut 2.0.2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20829) Remove the addFront assertion in MasterProcedureScheduler.doAdd
[ https://issues.apache.org/jira/browse/HBASE-20829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20829: -- Status: Patch Available (was: Reopened) > Remove the addFront assertion in MasterProcedureScheduler.doAdd > --- > > Key: HBASE-20829 > URL: https://issues.apache.org/jira/browse/HBASE-20829 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.0 > > Attachments: > 0001-HBASE-20829-Remove-the-addFront-assertion-in-MasterP.branch-2.0.patch, > HBASE-20829-debug.patch, HBASE-20829-v1.patch, HBASE-20829.patch, > org.apache.hadoop.hbase.replication.TestSyncReplicationStandbyKillRS-output.txt > > > Timed out. > {noformat} > 2018-06-30 01:32:33,823 ERROR [Time-limited test] > replication.TestSyncReplicationStandbyKillRS(93): Failed to transit standby > cluster to DOWNGRADE_ACTIVE > {noformat} > We failed to transit the state to DA and then wait for it to become DA so > hang there. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20871) Backport HBASE-20847 to branch-2.0: "The parent procedure of RegionTransitionProcedure may not have the table lock"
[ https://issues.apache.org/jira/browse/HBASE-20871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20871: -- Attachment: 0001-HBASE-20847-The-parent-procedure-of-RegionTransition.branch-2.0.patch > Backport HBASE-20847 to branch-2.0: "The parent procedure of > RegionTransitionProcedure may not have the table lock" > --- > > Key: HBASE-20871 > URL: https://issues.apache.org/jira/browse/HBASE-20871 > Project: HBase > Issue Type: Bug > Components: amv2 >Reporter: stack >Assignee: stack >Priority: Critical > Fix For: 2.0.2 > > Attachments: > 0001-HBASE-20847-The-parent-procedure-of-RegionTransition.branch-2.0.patch > > > Evaluate HBASE-20847 for backport before we cut 2.0.2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Reopened] (HBASE-20829) Remove the addFront assertion in MasterProcedureScheduler.doAdd
[ https://issues.apache.org/jira/browse/HBASE-20829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack reopened HBASE-20829: --- Reopen to backport to branch-2.0.2. > Remove the addFront assertion in MasterProcedureScheduler.doAdd > --- > > Key: HBASE-20829 > URL: https://issues.apache.org/jira/browse/HBASE-20829 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0 > > Attachments: > 0001-HBASE-20829-Remove-the-addFront-assertion-in-MasterP.branch-2.0.patch, > HBASE-20829-debug.patch, HBASE-20829-v1.patch, HBASE-20829.patch, > org.apache.hadoop.hbase.replication.TestSyncReplicationStandbyKillRS-output.txt > > > Timed out. > {noformat} > 2018-06-30 01:32:33,823 ERROR [Time-limited test] > replication.TestSyncReplicationStandbyKillRS(93): Failed to transit standby > cluster to DOWNGRADE_ACTIVE > {noformat} > We failed to transit the state to DA and then wait for it to become DA so > hang there. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569028#comment-16569028 ] Hadoop QA commented on HBASE-20997: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-1 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 8s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 27s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 58s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 29s{color} | {color:red} hbase-server: The patch generated 1 new + 24 unchanged - 1 fixed = 25 total (was 25) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 54s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 2m 9s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 28s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}120m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:61288f8 | | JIRA Issue | HBASE-20997 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12934358/HBASE-20997-branch-1-v4.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux f381d606da61 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
[jira] [Updated] (HBASE-20829) Remove the addFront assertion in MasterProcedureScheduler.doAdd
[ https://issues.apache.org/jira/browse/HBASE-20829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20829: -- Fix Version/s: 2.0.2 > Remove the addFront assertion in MasterProcedureScheduler.doAdd > --- > > Key: HBASE-20829 > URL: https://issues.apache.org/jira/browse/HBASE-20829 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0 > > Attachments: > 0001-HBASE-20829-Remove-the-addFront-assertion-in-MasterP.branch-2.0.patch, > HBASE-20829-debug.patch, HBASE-20829-v1.patch, HBASE-20829.patch, > org.apache.hadoop.hbase.replication.TestSyncReplicationStandbyKillRS-output.txt > > > Timed out. > {noformat} > 2018-06-30 01:32:33,823 ERROR [Time-limited test] > replication.TestSyncReplicationStandbyKillRS(93): Failed to transit standby > cluster to DOWNGRADE_ACTIVE > {noformat} > We failed to transit the state to DA and then wait for it to become DA so > hang there. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20829) Remove the addFront assertion in MasterProcedureScheduler.doAdd
[ https://issues.apache.org/jira/browse/HBASE-20829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569027#comment-16569027 ] stack commented on HBASE-20829: --- Attached what I pushed to branch-2.0. > Remove the addFront assertion in MasterProcedureScheduler.doAdd > --- > > Key: HBASE-20829 > URL: https://issues.apache.org/jira/browse/HBASE-20829 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0 > > Attachments: > 0001-HBASE-20829-Remove-the-addFront-assertion-in-MasterP.branch-2.0.patch, > HBASE-20829-debug.patch, HBASE-20829-v1.patch, HBASE-20829.patch, > org.apache.hadoop.hbase.replication.TestSyncReplicationStandbyKillRS-output.txt > > > Timed out. > {noformat} > 2018-06-30 01:32:33,823 ERROR [Time-limited test] > replication.TestSyncReplicationStandbyKillRS(93): Failed to transit standby > cluster to DOWNGRADE_ACTIVE > {noformat} > We failed to transit the state to DA and then wait for it to become DA so > hang there. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20829) Remove the addFront assertion in MasterProcedureScheduler.doAdd
[ https://issues.apache.org/jira/browse/HBASE-20829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20829: -- Attachment: 0001-HBASE-20829-Remove-the-addFront-assertion-in-MasterP.branch-2.0.patch > Remove the addFront assertion in MasterProcedureScheduler.doAdd > --- > > Key: HBASE-20829 > URL: https://issues.apache.org/jira/browse/HBASE-20829 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.1.0, 2.2.0 > > Attachments: > 0001-HBASE-20829-Remove-the-addFront-assertion-in-MasterP.branch-2.0.patch, > HBASE-20829-debug.patch, HBASE-20829-v1.patch, HBASE-20829.patch, > org.apache.hadoop.hbase.replication.TestSyncReplicationStandbyKillRS-output.txt > > > Timed out. > {noformat} > 2018-06-30 01:32:33,823 ERROR [Time-limited test] > replication.TestSyncReplicationStandbyKillRS(93): Failed to transit standby > cluster to DOWNGRADE_ACTIVE > {noformat} > We failed to transit the state to DA and then wait for it to become DA so > hang there. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21009) Backport to branch-2.0 HBASE-20739 "Add priority for SCP"
[ https://issues.apache.org/jira/browse/HBASE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21009: -- Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to branch-2.0. > Backport to branch-2.0 HBASE-20739 "Add priority for SCP" > - > > Key: HBASE-21009 > URL: https://issues.apache.org/jira/browse/HBASE-21009 > Project: HBase > Issue Type: Sub-task > Components: amv2 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.2 > > Attachments: HBASE-21009.branch-2.0.001.patch, > HBASE-21009.branch-2.0.002.patch, HBASE-21009.branch-2.0.003.patch > > > Backport parent issue to branch-2.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work
[ https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-20993 started by Jack Bearden. > [Auth] IPC client fallback to simple auth allowed doesn't work > -- > > Key: HBASE-20993 > URL: https://issues.apache.org/jira/browse/HBASE-20993 > Project: HBase > Issue Type: Bug > Components: Client, security >Affects Versions: 1.2.6 >Reporter: Reid Chan >Assignee: Jack Bearden >Priority: Critical > > It is easily reproducible. > client's hbase-site.xml: hadoop.security.authentication:kerberos, > hbase.security.authentication:kerberos, > hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal > are correctly set > A simple-auth HBase cluster with a kerberized HBase client application: > any application trying to r/w/c/d a table will get the following exception: > {code} > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
> at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738) > at > org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297) > at > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674) > at > org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607) > at > org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55) > Caused by: GSSException: No valid credentials 
provided (Mechanism level: > Failed to find any Kerberos tgt) > at > sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147) > at > sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122) > at > sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187) > at > sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224) > at > sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212) > at >
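The flag in question is client-side: when the server only speaks SIMPLE auth, a kerberized client should downgrade instead of attempting the SASL/GSS handshake, but only if hbase.ipc.client.fallback-to-simple-auth-allowed is true. The report is that this code path never downgrades. A toy model of the intended decision (hypothetical names; this is not the actual RpcClientImpl logic):

```java
public class AuthFallbackSketch {
    enum AuthMethod { SIMPLE, KERBEROS }

    /**
     * Client-side negotiation: choose the method to use given what the server
     * offers and whether fallback to simple auth is allowed.
     * Returns null to mean "refuse the connection".
     */
    static AuthMethod negotiate(AuthMethod clientConfigured,
                                AuthMethod serverOffered,
                                boolean fallbackToSimpleAllowed) {
        if (clientConfigured == serverOffered) {
            return clientConfigured;  // no mismatch, proceed as configured
        }
        if (clientConfigured == AuthMethod.KERBEROS
                && serverOffered == AuthMethod.SIMPLE) {
            // The scenario in this issue: a kerberized client against a
            // simple-auth cluster should downgrade here rather than start a
            // SASL handshake that can only fail with a GSSException.
            return fallbackToSimpleAllowed ? AuthMethod.SIMPLE : null;
        }
        return null;  // e.g. a simple-auth client against a kerberized server
    }

    public static void main(String[] args) {
        System.out.println(negotiate(AuthMethod.KERBEROS, AuthMethod.SIMPLE, true));  // SIMPLE
        System.out.println(negotiate(AuthMethod.KERBEROS, AuthMethod.SIMPLE, false)); // null
    }
}
```

In the reported bug, the observed behavior matches the first branch never being reached: the client goes straight to the SASL connect and fails with "No valid credentials provided" even though the fallback flag is set.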
[jira] [Commented] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16569001#comment-16569001 ] Ted Yu commented on HBASE-20997: +1 > rebuildUserRegions() does not build ReplicaMapping during master switchover > --- > > Key: HBASE-20997 > URL: https://issues.apache.org/jira/browse/HBASE-20997 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.2.6, 1.3.2, 1.5.0, 1.4.6 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Major > Attachments: HBASE-20997-branch-1-v1.patch, > HBASE-20997-branch-1-v2.patch, HBASE-20997-branch-1-v4.patch, > HBASE-20997-branch-1-v5.patch > > > During master switchover, rebuildUserRegions() does not rebuild master > in-memory defaultReplicaToOtherReplicas map. This puts the cluster in an > inconsistent state. In read replica case, it causes replica parent region > stay online without being unassigned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568998#comment-16568998 ] huaxiang sun commented on HBASE-20997: -- v5 uploaded. > rebuildUserRegions() does not build ReplicaMapping during master switchover > --- > > Key: HBASE-20997 > URL: https://issues.apache.org/jira/browse/HBASE-20997 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.2.6, 1.3.2, 1.5.0, 1.4.6 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Major > Attachments: HBASE-20997-branch-1-v1.patch, > HBASE-20997-branch-1-v2.patch, HBASE-20997-branch-1-v4.patch, > HBASE-20997-branch-1-v5.patch > > > During master switchover, rebuildUserRegions() does not rebuild master > in-memory defaultReplicaToOtherReplicas map. This puts the cluster in an > inconsistent state. In read replica case, it causes replica parent region > stay online without being unassigned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huaxiang sun updated HBASE-20997: - Attachment: HBASE-20997-branch-1-v5.patch > rebuildUserRegions() does not build ReplicaMapping during master switchover > --- > > Key: HBASE-20997 > URL: https://issues.apache.org/jira/browse/HBASE-20997 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.2.6, 1.3.2, 1.5.0, 1.4.6 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Major > Attachments: HBASE-20997-branch-1-v1.patch, > HBASE-20997-branch-1-v2.patch, HBASE-20997-branch-1-v4.patch, > HBASE-20997-branch-1-v5.patch > > > During master switchover, rebuildUserRegions() does not rebuild master > in-memory defaultReplicaToOtherReplicas map. This puts the cluster in an > inconsistent state. In read replica case, it causes replica parent region > stay online without being unassigned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20896) Port HBASE-20866 to branch-1 and branch-1.4
[ https://issues.apache.org/jira/browse/HBASE-20896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-20896: -- Resolution: Resolved Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) > Port HBASE-20866 to branch-1 and branch-1.4 > > > Key: HBASE-20896 > URL: https://issues.apache.org/jira/browse/HBASE-20896 > Project: HBase > Issue Type: Sub-task > Components: Client, scan >Reporter: Andrew Purtell >Assignee: Vikas Vishwakarma >Priority: Major > Labels: perfomance > Fix For: 1.5.0, 1.4.7 > > Attachments: HBASE-20896.branch-1.4.001.patch, > HBASE-20896.branch-1.4.002.patch, HBASE-20896.branch-1.4.003.patch, > HBASE-20896.branch-1.4.004.patch, HBASE-20896.branch-1.4.005.patch, > HBASE-20896.branch-1.4.006.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20896) Port HBASE-20866 to branch-1 and branch-1.4
[ https://issues.apache.org/jira/browse/HBASE-20896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568996#comment-16568996 ] Reid Chan commented on HBASE-20896: --- Pushed to branch-1 and branch-1.4. Thanks for the vote, sir [~apurtell]. Thanks [~vik.karma], it is a nice patch. > Port HBASE-20866 to branch-1 and branch-1.4 > > > Key: HBASE-20896 > URL: https://issues.apache.org/jira/browse/HBASE-20896 > Project: HBase > Issue Type: Sub-task > Components: Client, scan >Reporter: Andrew Purtell >Assignee: Vikas Vishwakarma >Priority: Major > Labels: perfomance > Fix For: 1.5.0, 1.4.7 > > Attachments: HBASE-20896.branch-1.4.001.patch, > HBASE-20896.branch-1.4.002.patch, HBASE-20896.branch-1.4.003.patch, > HBASE-20896.branch-1.4.004.patch, HBASE-20896.branch-1.4.005.patch, > HBASE-20896.branch-1.4.006.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21009) Backport to branch-2.0 HBASE-20739 "Add priority for SCP"
[ https://issues.apache.org/jira/browse/HBASE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568993#comment-16568993 ] Hadoop QA commented on HBASE-21009: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange} 0m 0s{color} | {color:orange} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-2.0 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 41s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 43s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 11s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 17s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} branch-2.0 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 11s{color} | {color:green} hbase-server: The patch generated 0 new + 8 unchanged - 1 fixed = 8 total (was 9) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 14s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 11m 31s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}107m 47s{color} | {color:green} hbase-server in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}147m 47s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 | | JIRA Issue | HBASE-21009 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12934350/HBASE-21009.branch-2.0.003.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 096b65ba4715 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | branch-2.0 / 013ea3e3d2 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC3 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/13922/testReport/ | | Max. process+thread count | 4157 (vs. ulimit of 1) | | modules | C: hbase-server U:
[jira] [Commented] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568991#comment-16568991 ] Ted Yu commented on HBASE-20997: Looks good overall. Please drop the table at the end of the test. > rebuildUserRegions() does not build ReplicaMapping during master switchover > --- > > Key: HBASE-20997 > URL: https://issues.apache.org/jira/browse/HBASE-20997 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.2.6, 1.3.2, 1.5.0, 1.4.6 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Major > Attachments: HBASE-20997-branch-1-v1.patch, > HBASE-20997-branch-1-v2.patch, HBASE-20997-branch-1-v4.patch > > > During master switchover, rebuildUserRegions() does not rebuild master > in-memory defaultReplicaToOtherReplicas map. This puts the cluster in an > inconsistent state. In read replica case, it causes replica parent region > stay online without being unassigned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20965) Separate region server report requests to new handlers
[ https://issues.apache.org/jira/browse/HBASE-20965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568988#comment-16568988 ] Reid Chan commented on HBASE-20965: --- {quote}Does it has negative effects on dealing with other rpcs requests?{quote} Numbers are more persuasive, since this is a performance-related issue. All I can tell is that it may or may not be useful; a confirmation backed by numbers would be better. Ping [~carp84] (sorry for the sudden ping), have you encountered a similar issue, since you run a cluster of thousands of nodes? > Separate region server report requests to new handlers > -- > > Key: HBASE-20965 > URL: https://issues.apache.org/jira/browse/HBASE-20965 > Project: HBase > Issue Type: Improvement > Components: Performance >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-20965.master.001.patch, > HBASE-20965.master.002.patch, HBASE-20965.master.003.patch, > HBASE-20965.master.004.patch > > > In master rpc scheduler, all rpc requests are executed in a thread pool. This > task separates rs report requests to new handlers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21003) Fix the flaky TestSplitOrMergeStatus
[ https://issues.apache.org/jira/browse/HBASE-21003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568987#comment-16568987 ] Allan Yang commented on HBASE-21003: Thanks, [~yuzhih...@gmail.com]! Sorry for the typo. > Fix the flaky TestSplitOrMergeStatus > > > Key: HBASE-21003 > URL: https://issues.apache.org/jira/browse/HBASE-21003 > Project: HBase > Issue Type: Bug >Affects Versions: 2.1.0, 2.0.1 >Reporter: Allan Yang >Assignee: Allan Yang >Priority: Major > > TestSplitOrMergeStatus.testSplitSwitch() is flaky because : > 1. Set the split switch to false > 2. Split the region, expect nothing happen > 3. Set the split switch to true, but since the last split request may not > start yet, so setting the switch to true may lead to the last split success > 4. Split the same region again, expect split success. But since the last > split operation may already successful, this one will fail > Maybe we should wait for a while between 2 and 3. > {code} > org.apache.hadoop.hbase.client.DoNotRetryRegionException: > 3f16a57c583e6ecf044c5b7de2e97121 is not OPEN; > regionState={3f16a57c583e6ecf044c5b7de2e97121 state=SPLITTING, > ts=1533239385789, server=asf911.gq1.ygridcore.net,60061,1533239369899} > at > org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.checkOnline(AbstractStateMachineTableProcedure.java:191) > at > org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.(SplitTableRegionProcedure.java:112) > at > org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:756) > at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1722) > at > org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131) > at org.apache.hadoop.hbase.master.HMaster.splitRegion(HMaster.java:1714) > at > org.apache.hadoop.hbase.master.MasterRpcServices.splitRegion(MasterRpcServices.java:797) > at > 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
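The race described in steps 2 and 3 above is the classic fixed-sleep trap: sleeping "for a while" between the rejected split and re-enabling the switch only shrinks the window. A bounded poll is the usual remedy. Below is a minimal, self-contained sketch of such a wait helper in plain Java; it is modeled loosely on test utilities like HBase's Waiter, but `waitFor` and its signature are hypothetical, not the project's actual API:

```java
import java.util.function.BooleanSupplier;

public class WaitForSketch {
    // Poll `condition` every `intervalMs` until it holds or `timeoutMs`
    // elapses. Returns true if the condition became true in time.
    static boolean waitFor(long timeoutMs, long intervalMs, BooleanSupplier condition)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(intervalMs);
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // The condition flips to true after ~50 ms; waitFor observes it well
        // before the 2 s timeout instead of sleeping a fixed, arbitrary amount.
        boolean ok = waitFor(2000, 10, () -> System.currentTimeMillis() - start > 50);
        System.out.println(ok);
    }
}
```

In the test, the condition would be "the first split request has been observed and rejected" (for example, the region is no longer in a transient SPLITTING state), checked before flipping the split switch back to true, so the second split in step 4 cannot collide with a late-starting first one.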
[jira] [Updated] (HBASE-21003) Fix the flaky TestSplitOrMergeStatus
[ https://issues.apache.org/jira/browse/HBASE-21003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allan Yang updated HBASE-21003: --- Description: TestSplitOrMergeStatus.testSplitSwitch() is flaky because : 1. Set the split switch to false 2. Split the region, expect nothing happen 3. Set the split switch to true, but since the last split request may not start yet, so setting the switch to true may lead to the last split success 4. Split the same region again, expect split success. But since the last split operation may already successful, this one will fail Maybe we should wait for a while between 2 and 3. {code} org.apache.hadoop.hbase.client.DoNotRetryRegionException: 3f16a57c583e6ecf044c5b7de2e97121 is not OPEN; regionState={3f16a57c583e6ecf044c5b7de2e97121 state=SPLITTING, ts=1533239385789, server=asf911.gq1.ygridcore.net,60061,1533239369899} at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.checkOnline(AbstractStateMachineTableProcedure.java:191) at org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.(SplitTableRegionProcedure.java:112) at org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:756) at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1722) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131) at org.apache.hadoop.hbase.master.HMaster.splitRegion(HMaster.java:1714) at org.apache.hadoop.hbase.master.MasterRpcServices.splitRegion(MasterRpcServices.java:797) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) {code} was: 
TestSplitOrMergeStatus.testSplitSwitch() is flaky because : 1. Set the split switch to false 2. Split the region, except nothing happen 3. Set the split switch to true, but since the last split request may not start yet, so setting the switch to true may lead to the last split success 4. Split the same region again, except split success. But since the last split operation may already successful, this one will fail Maybe we should wait for a while between 2 and 3. {code} org.apache.hadoop.hbase.client.DoNotRetryRegionException: 3f16a57c583e6ecf044c5b7de2e97121 is not OPEN; regionState={3f16a57c583e6ecf044c5b7de2e97121 state=SPLITTING, ts=1533239385789, server=asf911.gq1.ygridcore.net,60061,1533239369899} at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.checkOnline(AbstractStateMachineTableProcedure.java:191) at org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.(SplitTableRegionProcedure.java:112) at org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:756) at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1722) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131) at org.apache.hadoop.hbase.master.HMaster.splitRegion(HMaster.java:1714) at org.apache.hadoop.hbase.master.MasterRpcServices.splitRegion(MasterRpcServices.java:797) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) {code} > Fix the flaky TestSplitOrMergeStatus > > > Key: HBASE-21003 > URL: https://issues.apache.org/jira/browse/HBASE-21003 > Project: HBase > Issue Type: Bug >Affects Versions: 
2.1.0, 2.0.1 >Reporter: Allan Yang >Assignee: Allan Yang >Priority: Major > > TestSplitOrMergeStatus.testSplitSwitch() is flaky because : > 1. Set the split switch to false > 2. Split the region, expect nothing happen > 3. Set the split switch to true, but since the last split request may not > start yet, so setting the switch to true may lead to the last split success > 4. Split the same region again, expect split success. But since the last > split operation may already successful, this one will fail > Maybe we should wait for a while between 2 and 3. > {code} > org.apache.hadoop.hbase.client.DoNotRetryRegionException: > 3f16a57c583e6ecf044c5b7de2e97121 is not OPEN; > regionState={3f16a57c583e6ecf044c5b7de2e97121 state=SPLITTING, > ts=1533239385789,
[jira] [Commented] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568985#comment-16568985 ] Hadoop QA commented on HBASE-21010: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 3s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 0s{color} | {color:red} The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 2m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21010 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12934361/HBASE-21010.001.patch | | Optional Tests | asflicense shellcheck shelldocs | | uname | Linux 787a6f1a0eef 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / bd30ca62ef | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | shellcheck | v0.4.4 | | shellcheck | https://builds.apache.org/job/PreCommit-HBASE-Build/13924/artifact/patchprocess/diff-patch-shellcheck.txt | | Max. process+thread count | 43 (vs. ulimit of 1) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/13924/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > HBase Docker Development Environment > - > > Key: HBASE-21010 > URL: https://issues.apache.org/jira/browse/HBASE-21010 > Project: HBase > Issue Type: Improvement >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Major > Attachments: HBASE-21010.001.patch > > > Hi all, > I have been using the following environment (see patch) for conveniently > building and testing my HBase patches before they hit precommit. This > improvement is a port from Hadoop trunk that was modified to work in our > codebase instead. This Linux environment should more closely resemble Jenkins. > Usage is simple, just run the script and it will build and run a docker > container with your maven cache and hbase directory already set up. 
From > there, you can execute your maven goals as usual. > As a kicker, this can also be used to run HBase in docker with low resources > to perhaps sniff out and debug flakey tests with maybe less docker overhead. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568984#comment-16568984 ] Hadoop QA commented on HBASE-21010: --- (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HBASE-Build/13924/console in case of problems. > HBase Docker Development Environment > - > > Key: HBASE-21010 > URL: https://issues.apache.org/jira/browse/HBASE-21010 > Project: HBase > Issue Type: Improvement >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Major > Attachments: HBASE-21010.001.patch > > > Hi all, > I have been using the following environment (see patch) for conveniently > building and testing my HBase patches before they hit precommit. This > improvement is a port from Hadoop trunk that was modified to work in our > codebase instead. This Linux environment should more closely resemble Jenkins. > Usage is simple, just run the script and it will build and run a docker > container with your maven cache and hbase directory already set up. From > there, you can execute your maven goals as usual. > As a kicker, this can also be used to run HBase in docker with low resources > to perhaps sniff out and debug flakey tests with maybe less docker overhead. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jack Bearden updated HBASE-21010: - Description: Hi all, I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resemble Jenkins. Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual. As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests with maybe less docker overhead. was: Hi all, I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resemble Jenkins. Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual. As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests. > HBase Docker Development Environment > - > > Key: HBASE-21010 > URL: https://issues.apache.org/jira/browse/HBASE-21010 > Project: HBase > Issue Type: Improvement >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Major > Attachments: HBASE-21010.001.patch > > > Hi all, > I have been using the following environment (see patch) for conveniently > building and testing my HBase patches before they hit precommit. This > improvement is a port from Hadoop trunk that was modified to work in our > codebase instead. 
This Linux environment should more closely resemble Jenkins. > Usage is simple, just run the script and it will build and run a docker > container with your maven cache and hbase directory already set up. From > there, you can execute your maven goals as usual. > As a kicker, this can also be used to run HBase in docker with low resources > to perhaps sniff out and debug flakey tests with maybe less docker overhead. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jack Bearden updated HBASE-21010: - Attachment: (was: HBASE-21010.001.patch) > HBase Docker Development Environment > - > > Key: HBASE-21010 > URL: https://issues.apache.org/jira/browse/HBASE-21010 > Project: HBase > Issue Type: Improvement >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Major > Attachments: HBASE-21010.001.patch > > > Hi all, > I have been using the following environment (see patch) for conveniently > building and testing my HBase patches before they hit precommit. This > improvement is a port from Hadoop trunk that was modified to work in our > codebase instead. This Linux environment should more closely resemble Jenkins. > Usage is simple, just run the script and it will build and run a docker > container with your maven cache and hbase directory already set up. From > there, you can execute your maven goals as usual. > As a kicker, this can also be used to run HBase in docker with low resources > to perhaps sniff out and debug flakey tests. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jack Bearden updated HBASE-21010: - Attachment: HBASE-21010.001.patch Status: Patch Available (was: Open) > HBase Docker Development Environment > - > > Key: HBASE-21010 > URL: https://issues.apache.org/jira/browse/HBASE-21010 > Project: HBase > Issue Type: Improvement >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Major > Attachments: HBASE-21010.001.patch > > > Hi all, > I have been using the following environment (see patch) for conveniently > building and testing my HBase patches before they hit precommit. This > improvement is a port from Hadoop trunk that was modified to work in our > codebase instead. This Linux environment should more closely resemble Jenkins. > Usage is simple, just run the script and it will build and run a docker > container with your maven cache and hbase directory already set up. From > there, you can execute your maven goals as usual. > As a kicker, this can also be used to run HBase in docker with low resources > to perhaps sniff out and debug flakey tests. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jack Bearden updated HBASE-21010: - Attachment: HBASE-21010.001.patch > HBase Docker Development Environment > - > > Key: HBASE-21010 > URL: https://issues.apache.org/jira/browse/HBASE-21010 > Project: HBase > Issue Type: Improvement >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Major > Attachments: HBASE-21010.001.patch > > > Hi all, > I have been using the following environment (see patch) for conveniently > building and testing my HBase patches before they hit precommit. This > improvement is a port from Hadoop trunk that was modified to work in our > codebase instead. This Linux environment should more closely resemble Jenkins. > Usage is simple, just run the script and it will build and run a docker > container with your maven cache and hbase directory already set up. From > there, you can execute your maven goals as usual. > As a kicker, this can also be used to run HBase in docker with low resources > to perhaps sniff out and debug flakey tests. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jack Bearden updated HBASE-21010: - Description: Hi all, I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resemble Jenkins. Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual. As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests. was: Hi all, I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resemble Jenkins. Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual. As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests. Again, not my work, but I was surprised it wasn't in master. > HBase Docker Development Environment > - > > Key: HBASE-21010 > URL: https://issues.apache.org/jira/browse/HBASE-21010 > Project: HBase > Issue Type: Improvement >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Major > > Hi all, > I have been using the following environment (see patch) for conveniently > building and testing my HBase patches before they hit precommit. This > improvement is a port from Hadoop trunk that was modified to work in our > codebase instead. 
This Linux environment should more closely resemble Jenkins. > Usage is simple, just run the script and it will build and run a docker > container with your maven cache and hbase directory already set up. From > there, you can execute your maven goals as usual. > As a kicker, this can also be used to run HBase in docker with low resources > to perhaps sniff out and debug flakey tests. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21010) HBase Docker Development Environment
[ https://issues.apache.org/jira/browse/HBASE-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jack Bearden updated HBASE-21010: - Description: Hi all, I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resemble Jenkins. Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual. As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests. Again, not my work, but I was surprised it wasn't in master. was: Hi all, I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resembles Jenkins. Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual. As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests. Again, not my work, but I was surprised it wasn't in master. > HBase Docker Development Environment > - > > Key: HBASE-21010 > URL: https://issues.apache.org/jira/browse/HBASE-21010 > Project: HBase > Issue Type: Improvement >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Major > > Hi all, > I have been using the following environment (see patch) for conveniently > building and testing my HBase patches before they hit precommit. 
This > improvement is a port from Hadoop trunk that was modified to work in our > codebase instead. This Linux environment should more closely resemble Jenkins. > Usage is simple, just run the script and it will build and run a docker > container with your maven cache and hbase directory already set up. From > there, you can execute your maven goals as usual. > As a kicker, this can also be used to run HBase in docker with low resources > to perhaps sniff out and debug flakey tests. Again, not my work, but I was > surprised it wasn't in master. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21010) HBase Docker Development Environment
Jack Bearden created HBASE-21010: Summary: HBase Docker Development Environment Key: HBASE-21010 URL: https://issues.apache.org/jira/browse/HBASE-21010 Project: HBase Issue Type: Improvement Reporter: Jack Bearden Assignee: Jack Bearden Hi all, I have been using the following environment (see patch) for conveniently building and testing my HBase patches before they hit precommit. This improvement is a port from Hadoop trunk that was modified to work in our codebase instead. This Linux environment should more closely resembles Jenkins. Usage is simple, just run the script and it will build and run a docker container with your maven cache and hbase directory already set up. From there, you can execute your maven goals as usual. As a kicker, this can also be used to run HBase in docker with low resources to perhaps sniff out and debug flakey tests. Again, not my work, but I was surprised it wasn't in master. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568978#comment-16568978 ] huaxiang sun commented on HBASE-20997: -- [~yuzhih...@gmail.com], a new patch is uploaded to address your comments. For the master branch, I will come back and review whether the same issue exists, and upload a patch accordingly. Thanks. > rebuildUserRegions() does not build ReplicaMapping during master switchover > --- > > Key: HBASE-20997 > URL: https://issues.apache.org/jira/browse/HBASE-20997 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.2.6, 1.3.2, 1.5.0, 1.4.6 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Major > Attachments: HBASE-20997-branch-1-v1.patch, > HBASE-20997-branch-1-v2.patch, HBASE-20997-branch-1-v4.patch > > > During master switchover, rebuildUserRegions() does not rebuild the master's > in-memory defaultReplicaToOtherReplicas map. This puts the cluster in an > inconsistent state. In the read replica case, it causes the replica parent region > to stay online without being unassigned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huaxiang sun updated HBASE-20997: - Attachment: HBASE-20997-branch-1-v4.patch > rebuildUserRegions() does not build ReplicaMapping during master switchover > --- > > Key: HBASE-20997 > URL: https://issues.apache.org/jira/browse/HBASE-20997 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.2.6, 1.3.2, 1.5.0, 1.4.6 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Major > Attachments: HBASE-20997-branch-1-v1.patch, > HBASE-20997-branch-1-v2.patch, HBASE-20997-branch-1-v4.patch > > > During master switchover, rebuildUserRegions() does not rebuild the master's > in-memory defaultReplicaToOtherReplicas map. This puts the cluster in an > inconsistent state. In the read replica case, it causes the replica parent region > to stay online without being unassigned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
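The missing step is easy to picture: on switchover, the master repopulates region state but never regroups the non-default replicas under their default replica. Below is a minimal, hypothetical sketch of rebuilding such a default-replica-to-other-replicas map from a flat region list; the `Region` class is a plain-Java stand-in for HBase's region info, not any actual HBase type.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReplicaMappingSketch {
    // Hypothetical stand-in for a region descriptor: a region is identified
    // by its table, start key, and replica id (replica 0 is the "default").
    static class Region {
        final String table;
        final String startKey;
        final int replicaId;
        Region(String table, String startKey, int replicaId) {
            this.table = table;
            this.startKey = startKey;
            this.replicaId = replicaId;
        }
        String defaultReplicaKey() {
            return table + "," + startKey;  // identity of the replica-0 region
        }
    }

    /** Rebuild the default-replica -> other-replicas map from scratch —
     *  the step the bug report says is skipped during master switchover. */
    static Map<String, List<Region>> buildReplicaMapping(List<Region> regions) {
        Map<String, List<Region>> mapping = new HashMap<>();
        for (Region r : regions) {
            if (r.replicaId == 0) continue;  // only non-default replicas are tracked
            mapping.computeIfAbsent(r.defaultReplicaKey(), k -> new ArrayList<>())
                   .add(r);
        }
        return mapping;
    }
}
```

Without this regrouping, the master has no record that replica regions belong to a parent, which is consistent with the reported symptom of replica regions staying online unassigned.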
[jira] [Commented] (HBASE-21008) HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker
[ https://issues.apache.org/jira/browse/HBASE-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568947#comment-16568947 ] Chia-Ping Tsai commented on HBASE-21008: Thanks for the catch! The hfiles generated by 2.x should be backward compatible. Perhaps we can backport part of HBASE-18754 to all active 1.x branches so that they can read the hfiles generated by 2.x. [~jinghe], any suggestions? > HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker > > > Key: HBASE-21008 > URL: https://issues.apache.org/jira/browse/HBASE-21008 > Project: HBase > Issue Type: Bug >Affects Versions: 2.1.0, 1.4.6 >Reporter: Jerry He >Priority: Major > > It looks like HBase 1.x still cannot open hfiles written by HBase2. > I tested the latest HBase 1.4.6 and 2.1.0. 1.4.6 tried to read and open > regions written by 2.1.0. > {code} > 2018-07-30 16:01:31,274 ERROR [StoreFileOpenerThread-info-1] > regionserver.StoreFile: Error reading timestamp range data from meta -- > proceeding without > java.lang.IllegalArgumentException: Timestamp cannot be negative.
> minStamp:5783278630776778969, maxStamp:-4698050386518222402 > at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:112) > at org.apache.hadoop.hbase.io.TimeRange.(TimeRange.java:100) > at > org.apache.hadoop.hbase.regionserver.TimeRangeTracker.toTimeRange(TimeRangeTracker.java:214) > at > org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:198) > at > org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521) > at > org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679) > at > org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:122) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:538) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:535) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > {code} > Or: > {code} > 2018-07-30 16:01:31,305 ERROR [RS_OPEN_REGION-throb1:34004-0] > handler.OpenRegionHandler: Failed open of > region=janusgraph,,1532630557542.b0fa15cb0bf1b0bf740997b7056c., starting > to roll back the global memstore size. 
> java.io.IOException: java.io.IOException: java.io.EOFException > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1033) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:908) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:876) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6995) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6956) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6927) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6883) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6834) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:364) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:131) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.io.IOException: java.io.EOFException > at > org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:564) > at > org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:518) > at org.apache.hadoop.hbase.regionserver.HStore.(HStore.java:281) > at > org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5378) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1007) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1004) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ... 
3 more > Caused by:
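Both failure modes are consistent with the serialization change referenced above (HBASE-18754 moved TimeRangeTracker away from the old Writable form of two raw longs). The sketch below is a hypothetical stand-in, not HBase's actual code: a hand-rolled varint writer plays the role of the 2.x protobuf encoding, and a `DataInputStream` reading two big-endian longs plays the role of the 1.x reader. Because the protobuf-style bytes are laid out differently and are usually shorter than 16 bytes, the legacy read yields nonsense (possibly negative) timestamps or runs off the end with an `EOFException` — the two errors in the stack traces.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class TimeRangeSerializationSketch {

    // Protobuf-style varint: 7 bits per byte, high bit set on continuation bytes.
    static void writeVarint(ByteArrayOutputStream out, long v) {
        while ((v & ~0x7FL) != 0) {
            out.write((int) ((v & 0x7FL) | 0x80L));
            v >>>= 7;
        }
        out.write((int) v);
    }

    /** Rough stand-in for the 2.x on-disk form: tagged varints, not fixed longs. */
    static byte[] writeProtobufStyle(long minStamp, long maxStamp) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(0x08);                 // field 1 (min), varint wire type
        writeVarint(out, minStamp);
        out.write(0x10);                 // field 2 (max), varint wire type
        writeVarint(out, maxStamp);
        return out.toByteArray();
    }

    /** 1.x-style reader: expects exactly two raw big-endian longs (16 bytes). */
    static long[] readLegacyStyle(byte[] meta) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(meta));
        return new long[] { in.readLong(), in.readLong() };
    }

    public static void main(String[] args) {
        byte[] meta = writeProtobufStyle(1532650000000L, 1532660000000L);
        try {
            long[] range = readLegacyStyle(meta);
            // If enough bytes happen to be present, the decoded "timestamps"
            // are garbage, as in the IllegalArgumentException trace.
            System.out.println("minStamp=" + range[0] + " maxStamp=" + range[1]);
        } catch (EOFException e) {
            // The two varints here total 14 bytes, fewer than the 16 the
            // legacy reader wants, so it runs off the end - the second trace.
            System.out.println("EOFException, as in the second trace");
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```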
[jira] [Commented] (HBASE-20885) Remove entry for RPC quota from hbase:quota when RPC quota is removed.
[ https://issues.apache.org/jira/browse/HBASE-20885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568941#comment-16568941 ] Hudson commented on HBASE-20885: Results for branch branch-2.1 [build #140 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/140/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/140//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/140//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/140//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Remove entry for RPC quota from hbase:quota when RPC quota is removed. > -- > > Key: HBASE-20885 > URL: https://issues.apache.org/jira/browse/HBASE-20885 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1 > > Attachments: hbase-20885.master.001.patch, > hbase-20885.master.002.patch, hbase-20885.master.003.patch, > hbase-20885.master.003.patch, hbase-20885.master.004.patch, > hbase-20885.master.005.patch > > > When a RPC quota is removed (using LIMIT => 'NONE'), the entry from > hbase:quota table is not completely removed. For e.g. 
see below: > {noformat} > hbase(main):005:0> create 't2','cf1' > Created table t2 > Took 0.8000 seconds > => Hbase::Table - t2 > hbase(main):006:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => > '10M/sec' > Took 0.1024 seconds > hbase(main):007:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => > REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0622 seconds > hbase(main):008:0> scan 'hbase:quota' > ROWCOLUMN+CELL > t.t2 column=q:s, timestamp=1531513014463, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80 >\x05 \x02 > 1 row(s) > Took 0.0453 seconds > hbase(main):009:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => 'NONE' > Took 0.0097 seconds > hbase(main):010:0> list_quotas > OWNER QUOTAS > 0 row(s) > Took 0.0338 seconds > hbase(main):011:0> scan 'hbase:quota' > ROWCOLUMN+CELL > t.t2 column=q:s, timestamp=1531513039505, > value=PBUF\x12\x00 > 1 row(s) > Took 0.0066 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
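The leftover cell's value, PBUF\x12\x00, is HBase's four-byte PB_MAGIC prefix followed by a Quotas message whose throttle field (field 2) is an empty, length-zero submessage — a quota record carrying no settings at all. A hedged sketch of spotting such an effectively-empty serialized quota follows; the byte-pattern check is purely illustrative and is not the logic of the attached patches.

```java
public class EmptyQuotaSketch {
    // HBase prefixes protobuf-serialized cell values with the magic "PBUF".
    static final byte[] PB_MAGIC = { 'P', 'B', 'U', 'F' };

    /** True when the value holds no quota settings: nothing after the magic,
     *  or only an empty throttle submessage (the \x12\x00 seen in the scan). */
    static boolean isEffectivelyEmpty(byte[] value) {
        if (value == null || value.length <= PB_MAGIC.length) {
            return true;
        }
        return value.length == PB_MAGIC.length + 2
            && value[4] == 0x12     // field 2 (throttle), length-delimited
            && value[5] == 0x00;    // zero-length payload
    }
}
```

A row whose only column is such an empty value is a candidate for outright deletion, which is the behavior the issue asks for.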
[jira] [Commented] (HBASE-20952) Re-visit the WAL API
[ https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568938#comment-16568938 ] Mike Drob commented on HBASE-20952: --- What do we need in terms of durability guarantees? Is sync-after-write necessary? > Re-visit the WAL API > > > Key: HBASE-20952 > URL: https://issues.apache.org/jira/browse/HBASE-20952 > Project: HBase > Issue Type: Sub-task > Components: wal >Reporter: Josh Elser >Priority: Major > > Take a step back from the current WAL implementations and think about what an > HBase WAL API should look like. What are the primitive calls that we require > to guarantee durability of writes with a high degree of performance? > The API needs to take the current implementations into consideration. We > should also have a mind for what is happening in the Ratis LogService (but > the LogService should not dictate what HBase's WAL API looks like RATIS-272). > Other "systems" inside of HBase that use WALs are replication and > backup. Replication has the use-case for "tail"ing the WAL, which we > should provide via our new API. B doesn't do anything fancy (IIRC). We > should make sure all consumers are generally going to be OK with the API we > create. > The API may be "OK" (or OK in a part). We need to also consider other methods > which were "bolted" on such as {{AbstractFSWAL}} and > {{WALFileLengthProvider}}. Other corners of "WAL use" (like the > {{WALSplitter}}) should also be looked at to use WAL-APIs only. > We also need to make sure that adequate interface audience and stability > annotations are chosen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20885) Remove entry for RPC quota from hbase:quota when RPC quota is removed.
[ https://issues.apache.org/jira/browse/HBASE-20885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568931#comment-16568931 ] Hudson commented on HBASE-20885: Results for branch branch-2 [build #1062 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1062/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1062//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1062//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1062//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Remove entry for RPC quota from hbase:quota when RPC quota is removed. > -- > > Key: HBASE-20885 > URL: https://issues.apache.org/jira/browse/HBASE-20885 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1 > > Attachments: hbase-20885.master.001.patch, > hbase-20885.master.002.patch, hbase-20885.master.003.patch, > hbase-20885.master.003.patch, hbase-20885.master.004.patch, > hbase-20885.master.005.patch > > > When a RPC quota is removed (using LIMIT => 'NONE'), the entry from > hbase:quota table is not completely removed. For e.g. 
see below: > {noformat} > hbase(main):005:0> create 't2','cf1' > Created table t2 > Took 0.8000 seconds > => Hbase::Table - t2 > hbase(main):006:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => > '10M/sec' > Took 0.1024 seconds > hbase(main):007:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => > REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0622 seconds > hbase(main):008:0> scan 'hbase:quota' > ROWCOLUMN+CELL > t.t2 column=q:s, timestamp=1531513014463, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80 >\x05 \x02 > 1 row(s) > Took 0.0453 seconds > hbase(main):009:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => 'NONE' > Took 0.0097 seconds > hbase(main):010:0> list_quotas > OWNER QUOTAS > 0 row(s) > Took 0.0338 seconds > hbase(main):011:0> scan 'hbase:quota' > ROWCOLUMN+CELL > t.t2 column=q:s, timestamp=1531513039505, > value=PBUF\x12\x00 > 1 row(s) > Took 0.0066 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21009) Backport to branch-2.0 HBASE-20739 "Add priority for SCP"
[ https://issues.apache.org/jira/browse/HBASE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21009: -- Status: Patch Available (was: Open) > Backport to branch-2.0 HBASE-20739 "Add priority for SCP" > - > > Key: HBASE-21009 > URL: https://issues.apache.org/jira/browse/HBASE-21009 > Project: HBase > Issue Type: Sub-task > Components: amv2 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.2 > > Attachments: HBASE-21009.branch-2.0.001.patch, > HBASE-21009.branch-2.0.002.patch, HBASE-21009.branch-2.0.003.patch > > > Backport parent issue to branch-2.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21009) Backport to branch-2.0 HBASE-20739 "Add priority for SCP"
[ https://issues.apache.org/jira/browse/HBASE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21009: -- Attachment: HBASE-21009.branch-2.0.002.patch > Backport to branch-2.0 HBASE-20739 "Add priority for SCP" > - > > Key: HBASE-21009 > URL: https://issues.apache.org/jira/browse/HBASE-21009 > Project: HBase > Issue Type: Sub-task > Components: amv2 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.2 > > Attachments: HBASE-21009.branch-2.0.001.patch, > HBASE-21009.branch-2.0.002.patch, HBASE-21009.branch-2.0.003.patch > > > Backport parent issue to branch-2.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21009) Backport to branch-2.0 HBASE-20739 "Add priority for SCP"
[ https://issues.apache.org/jira/browse/HBASE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21009: -- Attachment: HBASE-21009.branch-2.0.001.patch > Backport to branch-2.0 HBASE-20739 "Add priority for SCP" > - > > Key: HBASE-21009 > URL: https://issues.apache.org/jira/browse/HBASE-21009 > Project: HBase > Issue Type: Sub-task > Components: amv2 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.2 > > Attachments: HBASE-21009.branch-2.0.001.patch, > HBASE-21009.branch-2.0.002.patch, HBASE-21009.branch-2.0.003.patch > > > Backport parent issue to branch-2.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21009) Backport to branch-2.0 HBASE-20739 "Add priority for SCP"
[ https://issues.apache.org/jira/browse/HBASE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21009: -- Attachment: HBASE-21009.branch-2.0.003.patch > Backport to branch-2.0 HBASE-20739 "Add priority for SCP" > - > > Key: HBASE-21009 > URL: https://issues.apache.org/jira/browse/HBASE-21009 > Project: HBase > Issue Type: Sub-task > Components: amv2 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.2 > > Attachments: HBASE-21009.branch-2.0.001.patch, > HBASE-21009.branch-2.0.002.patch, HBASE-21009.branch-2.0.003.patch > > > Backport parent issue to branch-2.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21004) Backport to branch-2.0 HBASE-20708 "Remove the usage of RecoverMetaProcedure"
[ https://issues.apache.org/jira/browse/HBASE-21004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21004: -- Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to branch-2.0. > Backport to branch-2.0 HBASE-20708 "Remove the usage of RecoverMetaProcedure" > - > > Key: HBASE-21004 > URL: https://issues.apache.org/jira/browse/HBASE-21004 > Project: HBase > Issue Type: Sub-task > Components: amv2 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.2 > > Attachments: HBASE-21004.branch-2.0.001.patch, > HBASE-21004.branch-2.0.002.patch > > > Backport parent issue to branch-2.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21004) Backport to branch-2.0 HBASE-20708 "Remove the usage of RecoverMetaProcedure"
[ https://issues.apache.org/jira/browse/HBASE-21004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568909#comment-16568909 ] Hadoop QA commented on HBASE-21004: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 21 new or modified test files. {color} | || || || || {color:brown} branch-2.0 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 35s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 54s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 11s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 8s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 27s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} branch-2.0 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} The patch hbase-protocol-shaded passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} The patch hbase-client passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} hbase-procedure: The patch generated 0 new + 44 unchanged - 1 fixed = 44 total (was 45) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 16s{color} | {color:green} hbase-server: The patch generated 0 new + 334 unchanged - 14 fixed = 334 total (was 348) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 17s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 11m 24s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 1s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 39s{color} | {color:green} hbase-procedure in the patch passed. {color} | | {color:green}+1{color} |
[jira] [Commented] (HBASE-20952) Re-visit the WAL API
[ https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568906#comment-16568906 ] Zach York commented on HBASE-20952: --- So taking a very rough stab at this based on a few thoughts, but at the most basic level, what functionality do we need from the WAL? Absolutely necessary: append/put/write/etc; getEditsForRegion(Region) - this lets the implementation handle the mess (WALSplitter, multi-wal, etc); probably some sort of delete of edits before a threshold. Nice to have: onRegionFlush() - this would be kinda like a coprocessor hook which could add implementation-specific logic (rolling file logs, adding some sort of lastFlushed functionality, or any of the things that [~stack] was talking about in his comment). In general, I would like to start from what we truly need, not what we currently have :). I don't want the interface polluted by implementation-specific methods. I'm sure I'm oversimplifying things, but I wanted to get the conversation started. > Re-visit the WAL API > > > Key: HBASE-20952 > URL: https://issues.apache.org/jira/browse/HBASE-20952 > Project: HBase > Issue Type: Sub-task > Components: wal >Reporter: Josh Elser >Priority: Major > > Take a step back from the current WAL implementations and think about what an > HBase WAL API should look like. What are the primitive calls that we require > to guarantee durability of writes with a high degree of performance? > The API needs to take the current implementations into consideration. We > should also have a mind for what is happening in the Ratis LogService (but > the LogService should not dictate what HBase's WAL API looks like RATIS-272). > Other "systems" inside of HBase that use WALs are replication and > backup. Replication has the use-case for "tail"ing the WAL, which we > should provide via our new API. B doesn't do anything fancy (IIRC). We > should make sure all consumers are generally going to be OK with the API we > create.
> The API may be "OK" (or OK in part). We need to also consider other methods > which were "bolted" on such as {{AbstractFSWAL}} and > {{WALFileLengthProvider}}. Other corners of "WAL use" (like the > {{WALSplitter}}) should also be looked at to use WAL-APIs only. > We also need to make sure that adequate interface audience and stability > annotations are chosen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
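The primitives listed in the comment above can be sketched as a small interface. Every name below is hypothetical — it mirrors only the list in the comment (durable append, per-region edit retrieval that hides splitting and multi-wal details, deletion below a threshold, and a flush hook), not any actual HBase branch; a toy in-memory implementation is included only to show the shape is usable.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical minimal WAL API mirroring the primitives listed above. */
interface WriteAheadLog {
    /** Durably append an edit; whether that requires a sync-after-write
     *  is a per-implementation durability decision. Returns a sequence id. */
    long append(String regionName, byte[] edit) throws IOException;

    /** Hand back a region's edits; the implementation hides the mess of
     *  splitting, multi-wal layout, and so on. */
    List<byte[]> getEditsForRegion(String regionName) throws IOException;

    /** Discard edits whose sequence id is below the threshold. */
    void deleteEditsBefore(long sequenceId) throws IOException;

    /** Nice-to-have hook: roll files, track lastFlushed, etc. */
    default void onRegionFlush(String regionName, long flushedSeqId) {}
}

/** Toy in-memory implementation, only to show the interface in use. */
class InMemoryWal implements WriteAheadLog {
    private static class Entry {
        final long seqId;
        final byte[] edit;
        Entry(long seqId, byte[] edit) { this.seqId = seqId; this.edit = edit; }
    }
    private final Map<String, List<Entry>> edits = new HashMap<>();
    private long nextSeqId = 1;

    @Override public synchronized long append(String region, byte[] edit) {
        long id = nextSeqId++;
        edits.computeIfAbsent(region, k -> new ArrayList<>()).add(new Entry(id, edit));
        return id;
    }

    @Override public synchronized List<byte[]> getEditsForRegion(String region) {
        List<byte[]> out = new ArrayList<>();
        for (Entry e : edits.getOrDefault(region, List.of())) out.add(e.edit);
        return out;
    }

    @Override public synchronized void deleteEditsBefore(long sequenceId) {
        for (List<Entry> l : edits.values()) l.removeIf(e -> e.seqId < sequenceId);
    }
}
```

Keeping the surface this small is exactly the point raised in the thread: replication's tailing use-case and splitting both become implementation details behind `getEditsForRegion`, instead of bolted-on methods.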
[jira] [Commented] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568904#comment-16568904 ] Hadoop QA commented on HBASE-20997: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-1 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 50s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 25s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 37s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 20s{color} | {color:red} hbase-server: The patch generated 2 new + 24 unchanged - 1 fixed = 26 total (was 25) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 39s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 1m 33s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 99m 7s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}123m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:61288f8 | | JIRA Issue | HBASE-20997 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12934331/HBASE-20997-branch-1-v2.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux a093a8ebfaf8 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64
[jira] [Commented] (HBASE-20856) PITA having to set WAL provider in two places
[ https://issues.apache.org/jira/browse/HBASE-20856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568871#comment-16568871 ] Tak Lon (Stephen) Wu commented on HBASE-20856: -- Closed those PRs; will follow [~zyork] and [~busbey] and try to attach only the PR link next time (if that does not work, will follow what [~reidchan] suggested). Thanks a lot. > PITA having to set WAL provider in two places > - > > Key: HBASE-20856 > URL: https://issues.apache.org/jira/browse/HBASE-20856 > Project: HBase > Issue Type: Improvement > Components: Operability, wal >Affects Versions: 3.0.0 >Reporter: stack >Assignee: Tak Lon (Stephen) Wu >Priority: Minor > Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1 > > Attachments: HBASE-20856.branch-2.001.patch, > HBASE-20856.branch-2.002.patch, HBASE-20856.master.001.patch, > HBASE-20856.master.002.patch, HBASE-20856.master.003.patch > > > Courtesy of [~elserj], I learn that changing WAL we need to set two places... > both hbase.wal.meta_provider and hbase.wal.provider. Operator should only > have to set it in one place; hbase.wal.meta_provider should pick up general > setting unless hbase.wal.meta_provider is explicitly set. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
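The operator-facing behavior the issue asks for is a fallback read: hbase.wal.meta_provider inherits hbase.wal.provider unless explicitly set. A minimal sketch follows, with a plain map standing in for HBase's Configuration object; the default value is a placeholder for illustration, not necessarily HBase's actual default provider.

```java
import java.util.Map;

public class WalProviderFallbackSketch {
    static final String WAL_PROVIDER = "hbase.wal.provider";
    static final String META_WAL_PROVIDER = "hbase.wal.meta_provider";
    static final String PLACEHOLDER_DEFAULT = "filesystem";  // illustrative only

    /** Resolve the meta WAL provider: an explicit setting wins, otherwise
     *  fall back to the general provider, otherwise a default. */
    static String resolveMetaProvider(Map<String, String> conf) {
        String explicit = conf.get(META_WAL_PROVIDER);
        if (explicit != null) {
            return explicit;
        }
        return conf.getOrDefault(WAL_PROVIDER, PLACEHOLDER_DEFAULT);
    }
}
```

With this shape, an operator who sets only hbase.wal.provider gets the same provider for the meta WAL, while an explicit hbase.wal.meta_provider still overrides it.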
[jira] [Commented] (HBASE-20257) hbase-spark should not depend on com.google.code.findbugs.jsr305
[ https://issues.apache.org/jira/browse/HBASE-20257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568865#comment-16568865 ] Ted Yu commented on HBASE-20257: Ping [~busbey] > hbase-spark should not depend on com.google.code.findbugs.jsr305 > > > Key: HBASE-20257 > URL: https://issues.apache.org/jira/browse/HBASE-20257 > Project: HBase > Issue Type: Task > Components: build, spark >Affects Versions: 3.0.0 >Reporter: Ted Yu >Assignee: Artem Ervits >Priority: Minor > Labels: beginner > Attachments: HBASE-20257.v01.patch, HBASE-20257.v02.patch, > HBASE-20257.v03.patch, HBASE-20257.v04.patch, HBASE-20257.v05.patch > > > The following can be observed in the build output of master branch: > {code} > [WARNING] Rule 0: org.apache.maven.plugins.enforcer.BannedDependencies failed > with message: > We don't allow the JSR305 jar from the Findbugs project, see HBASE-16321. > Found Banned Dependency: com.google.code.findbugs:jsr305:jar:1.3.9 > Use 'mvn dependency:tree' to locate the source of the banned dependencies. > {code} > Here is related snippet from hbase-spark/pom.xml: > {code} > > com.google.code.findbugs > jsr305 > {code} > Dependency on jsr305 should be dropped. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
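The pom snippet quoted in the issue shows a direct jsr305 dependency, so the straightforward fix is deleting that declaration. If the jar instead arrived transitively, a Maven exclusion would be the usual alternative; a hedged sketch (the spark-core coordinates here are only an illustrative carrier, not necessarily the module that pulls it in):

```xml
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <exclusions>
    <!-- Keep the banned Findbugs jsr305 jar out of the hbase-spark classpath -->
    <exclusion>
      <groupId>com.google.code.findbugs</groupId>
      <artifactId>jsr305</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```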
[jira] [Commented] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568844#comment-16568844 ] Ted Yu commented on HBASE-20997: I ran the test which took ~30 seconds. {code} + @Test(timeout = 24) {code} The timeout is very long, right ? You can shorten it. {code} +TEST_UTIL.startMiniCluster(NUM_MASTERS, NUM_RS); {code} The existing test already does the above. Can you refactor the test so that mini cluster is started only once ? The test portion doesn't apply to master branch. It would be good to know whether master branch needs to be fixed as well. > rebuildUserRegions() does not build ReplicaMapping during master switchover > --- > > Key: HBASE-20997 > URL: https://issues.apache.org/jira/browse/HBASE-20997 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.2.6, 1.3.2, 1.5.0, 1.4.6 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Major > Attachments: HBASE-20997-branch-1-v1.patch, > HBASE-20997-branch-1-v2.patch > > > During master switchover, rebuildUserRegions() does not rebuild master > in-memory defaultReplicaToOtherReplicas map. This puts the cluster in an > inconsistent state. In read replica case, it causes replica parent region > stay online without being unassigned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
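The refactor suggested above, starting the mini cluster only once and sharing it across test methods, can be sketched as a lazily-initialized static holder; in a real JUnit suite an @BeforeClass method would drive this, and `MiniCluster` below is a self-contained stand-in for `HBaseTestingUtility`'s cluster, not the real class:

```java
public class SharedClusterTestBase {

  // Stand-in for an expensive resource such as a mini cluster.
  public static final class MiniCluster {
    public static int startCount = 0;
    MiniCluster() {
      startCount++; // count real startups to demonstrate reuse
    }
  }

  private static MiniCluster cluster;

  // In a real suite this would live in an @BeforeClass method so every
  // @Test shares one running cluster instead of starting its own.
  public static synchronized MiniCluster getCluster() {
    if (cluster == null) {
      cluster = new MiniCluster();
    }
    return cluster;
  }

  public static void main(String[] args) {
    getCluster(); // first "test" starts the cluster
    getCluster(); // second "test" reuses the running instance
  }
}
```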
[jira] [Commented] (HBASE-20722) Make RegionServerTracker only depend on children changed event
[ https://issues.apache.org/jira/browse/HBASE-20722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568837#comment-16568837 ] Hudson commented on HBASE-20722: Results for branch branch-2.0 [build #628 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/628/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/628//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/628//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/628//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Make RegionServerTracker only depend on children changed event > -- > > Key: HBASE-20722 > URL: https://issues.apache.org/jira/browse/HBASE-20722 > Project: HBase > Issue Type: Sub-task >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0, 2.1.0 > > Attachments: HBASE-20722-v1.patch, HBASE-20722.patch > > > For now we will use children changed event for adding RS, and node deleted > event for removing RS. Actually, children changed can also be used for > deleting RS, this will make it easier to control as we do not need to deal > with the concurrency issue between the children changed and node deleted > event. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
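The "children changed only" design described in this issue reduces to diffing listings: a removed RegionServer is simply one present in the previous child list but absent from the current one, so no separate node-deleted watcher (and none of the ordering races between the two event types) is needed. A minimal, HBase-free sketch of that diff (`ChildrenDiff` is illustrative):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ChildrenDiff {

  // Servers in the previous listing but missing from the current one are the
  // ones that went away; adds fall out of the reverse diff the same way.
  public static Set<String> removed(Set<String> previous, Set<String> current) {
    Set<String> gone = new HashSet<>(previous);
    gone.removeAll(current);
    return gone;
  }

  public static void main(String[] args) {
    Set<String> previous = new HashSet<>(Arrays.asList("rs1", "rs2"));
    Set<String> current = new HashSet<>(Arrays.asList("rs1", "rs3"));
    System.out.println(removed(previous, current)); // [rs2]
  }
}
```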
[jira] [Commented] (HBASE-20885) Remove entry for RPC quota from hbase:quota when RPC quota is removed.
[ https://issues.apache.org/jira/browse/HBASE-20885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568838#comment-16568838 ] Hudson commented on HBASE-20885: Results for branch branch-2.0 [build #628 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/628/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/628//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/628//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/628//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Remove entry for RPC quota from hbase:quota when RPC quota is removed. > -- > > Key: HBASE-20885 > URL: https://issues.apache.org/jira/browse/HBASE-20885 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1 > > Attachments: hbase-20885.master.001.patch, > hbase-20885.master.002.patch, hbase-20885.master.003.patch, > hbase-20885.master.003.patch, hbase-20885.master.004.patch, > hbase-20885.master.005.patch > > > When a RPC quota is removed (using LIMIT => 'NONE'), the entry from > hbase:quota table is not completely removed. For e.g. 
see below: > {noformat} > hbase(main):005:0> create 't2','cf1' > Created table t2 > Took 0.8000 seconds > => Hbase::Table - t2 > hbase(main):006:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => > '10M/sec' > Took 0.1024 seconds > hbase(main):007:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => > REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0622 seconds > hbase(main):008:0> scan 'hbase:quota' > ROWCOLUMN+CELL > t.t2 column=q:s, timestamp=1531513014463, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80 >\x05 \x02 > 1 row(s) > Took 0.0453 seconds > hbase(main):009:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => 'NONE' > Took 0.0097 seconds > hbase(main):010:0> list_quotas > OWNER QUOTAS > 0 row(s) > Took 0.0338 seconds > hbase(main):011:0> scan 'hbase:quota' > ROWCOLUMN+CELL > t.t2 column=q:s, timestamp=1531513039505, > value=PBUF\x12\x00 > 1 row(s) > Took 0.0066 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
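Until a fix along the lines of the attached patches lands, the stale row can be cleared by hand from the shell. A possible workaround, assuming the row key `t.t2` shown in the transcript above (deleting rows from a system table like hbase:quota should be done with care):

```
hbase(main):012:0> deleteall 'hbase:quota', 't.t2'
```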
[jira] [Commented] (HBASE-20996) Backport to branch-2.0 HBASE-20722 "Make RegionServerTracker only depend on children changed event"
[ https://issues.apache.org/jira/browse/HBASE-20996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568836#comment-16568836 ] Hudson commented on HBASE-20996: Results for branch branch-2.0 [build #628 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/628/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/628//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/628//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/628//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Backport to branch-2.0 HBASE-20722 "Make RegionServerTracker only depend on > children changed event" > --- > > Key: HBASE-20996 > URL: https://issues.apache.org/jira/browse/HBASE-20996 > Project: HBase > Issue Type: Bug >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.2 > > Attachments: HBASE-20996.branch-2.0.001.patch, > HBASE-20996.branch-2.0.001.patch > > > The patch in 2.1 looks like its working nicely. The patch does nice cleanup. > Not having this patch in branch-2.0 is messing up backport of other, bigger > patches. Let me include it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-20969) Create an hbase-operator-tools repo to host hbck2 and later, other toolings
[ https://issues.apache.org/jira/browse/HBASE-20969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-20969. --- Resolution: Fixed Assignee: stack Release Note: Added new hbase-operator-tools. See links below for repository location. > Create an hbase-operator-tools repo to host hbck2 and later, other toolings > --- > > Key: HBASE-20969 > URL: https://issues.apache.org/jira/browse/HBASE-20969 > Project: HBase > Issue Type: Sub-task > Components: hbase-operator-tools, hbck2 >Affects Versions: 2.0.1 >Reporter: stack >Assignee: stack >Priority: Major > > Let me make a new repo to host hbck2 and any other operator tools that make > sense to break off from core. > See the discusion thread on dev [1] that blesses this project. > 1. > http://apache-hbase.679495.n3.nabble.com/DISCUSS-Separate-Git-Repository-for-HBCK2-td4096319.html -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Issue Comment Deleted] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-20997: --- Comment: was deleted (was: [~huaxiang]Have you started writing the tests?) > rebuildUserRegions() does not build ReplicaMapping during master switchover > --- > > Key: HBASE-20997 > URL: https://issues.apache.org/jira/browse/HBASE-20997 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.2.6, 1.3.2, 1.5.0, 1.4.6 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Major > Attachments: HBASE-20997-branch-1-v1.patch, > HBASE-20997-branch-1-v2.patch > > > During master switchover, rebuildUserRegions() does not rebuild master > in-memory defaultReplicaToOtherReplicas map. This puts the cluster in an > inconsistent state. In read replica case, it causes replica parent region > stay online without being unassigned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568777#comment-16568777 ] John Leach commented on HBASE-20997: [~huaxiang]Have you started writing the tests? > rebuildUserRegions() does not build ReplicaMapping during master switchover > --- > > Key: HBASE-20997 > URL: https://issues.apache.org/jira/browse/HBASE-20997 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.2.6, 1.3.2, 1.5.0, 1.4.6 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Major > Attachments: HBASE-20997-branch-1-v1.patch, > HBASE-20997-branch-1-v2.patch > > > During master switchover, rebuildUserRegions() does not rebuild master > in-memory defaultReplicaToOtherReplicas map. This puts the cluster in an > inconsistent state. In read replica case, it causes replica parent region > stay online without being unassigned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20813) Remove RPC quotas when the associated table/Namespace is dropped off
[ https://issues.apache.org/jira/browse/HBASE-20813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568774#comment-16568774 ] Sakthi commented on HBASE-20813: [~elserj], I agree that the file could be removed. But: {quote}I think it would be better to keep the old class present (for backwards-compat) {quote} I thought this suggestion meant, to keep MasterSpaceQuotaObserver for backwards-compat issues? Hence I create the new MasterQuotasObserver which does both RPC and space auto-deletion. And, didn't remove MasterSpaceQuotaObserver. Please correct me if I misunderstood. > Remove RPC quotas when the associated table/Namespace is dropped off > > > Key: HBASE-20813 > URL: https://issues.apache.org/jira/browse/HBASE-20813 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Attachments: hbase-20813.master.001.patch > > > In short, the below scenario shouldn't be the case. > {noformat} > hbase(main):023:0> create 't2','cf1' > Created table t2 > Took 0.7405 seconds > => Hbase::Table - t2 > hbase(main):024:0> > hbase(main):025:0* > hbase(main):026:0* set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => > '10M/sec' > Took 0.0082 seconds > hbase(main):027:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => > 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0291 seconds > hbase(main):028:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1530165010888, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80\x05 \x02 > 1 row(s) > Took 0.0037 seconds > hbase(main):029:0> disable 't2' > Took 0.4328 seconds > hbase(main):030:0> drop 't2' > Took 0.2285 seconds > hbase(main):031:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => > 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0230 seconds > hbase(main):032:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1530165010888, > 
value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80\x05 \x02 > 1 row(s) > Took 0.0038 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568765#comment-16568765 ] huaxiang sun commented on HBASE-20997: -- Attach v2 with an unittest case. [~yuzhih...@gmail.com], can you help to take a look? Thanks. > rebuildUserRegions() does not build ReplicaMapping during master switchover > --- > > Key: HBASE-20997 > URL: https://issues.apache.org/jira/browse/HBASE-20997 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.2.6, 1.3.2, 1.5.0, 1.4.6 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Major > Attachments: HBASE-20997-branch-1-v1.patch, > HBASE-20997-branch-1-v2.patch > > > During master switchover, rebuildUserRegions() does not rebuild master > in-memory defaultReplicaToOtherReplicas map. This puts the cluster in an > inconsistent state. In read replica case, it causes replica parent region > stay online without being unassigned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20997) rebuildUserRegions() does not build ReplicaMapping during master switchover
[ https://issues.apache.org/jira/browse/HBASE-20997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huaxiang sun updated HBASE-20997: - Attachment: HBASE-20997-branch-1-v2.patch > rebuildUserRegions() does not build ReplicaMapping during master switchover > --- > > Key: HBASE-20997 > URL: https://issues.apache.org/jira/browse/HBASE-20997 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.2.6, 1.3.2, 1.5.0, 1.4.6 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Major > Attachments: HBASE-20997-branch-1-v1.patch, > HBASE-20997-branch-1-v2.patch > > > During master switchover, rebuildUserRegions() does not rebuild master > in-memory defaultReplicaToOtherReplicas map. This puts the cluster in an > inconsistent state. In read replica case, it causes replica parent region > stay online without being unassigned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21005) Maven site configuration causes downstream projects to get a directory named ${project.basedir}
[ https://issues.apache.org/jira/browse/HBASE-21005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568758#comment-16568758 ] stack commented on HBASE-21005: --- I've seen this happen from time to time but never figured it > Maven site configuration causes downstream projects to get a directory named > ${project.basedir} > --- > > Key: HBASE-21005 > URL: https://issues.apache.org/jira/browse/HBASE-21005 > Project: HBase > Issue Type: Bug > Components: build >Affects Versions: 2.0.0 >Reporter: Matt Burgess >Assignee: Josh Elser >Priority: Minor > > Matt told me about this interesting issue they see down in Apache Nifi's build > NiFi depends on HBase for some code that they provide to their users. As a > part of the build process of NiFi, they are seeing a directory named > {{$\{project.basedir}}} get created the first time they build with an empty > Maven repo. Matt reports that after a javax.el artifact is cached, Maven will > stop creating the directory; however, if you wipe that artifact from the > Maven repo, the next build will end up re-creating it. > I believe I've seen this with Phoenix, too, but never investigated why it was > actually happening. > My hunch is that it's related to the local maven repo that we create to > "patch" in our custom maven-fluido-skin jar (HBASE-14785). I'm not sure if we > can "work" around this by pushing the custom local repo into a profile and > only activating that for the mvn-site. Another solution would be to publish > the maven-fluido-jar to central with custom coordinates. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-18477) Umbrella JIRA for HBase Read Replica clusters
[ https://issues.apache.org/jira/browse/HBASE-18477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568740#comment-16568740 ] Zach York commented on HBASE-18477: --- [~busbey] Any chance you can review again when you get the chance? > Umbrella JIRA for HBase Read Replica clusters > - > > Key: HBASE-18477 > URL: https://issues.apache.org/jira/browse/HBASE-18477 > Project: HBase > Issue Type: New Feature >Reporter: Zach York >Assignee: Zach York >Priority: Major > Attachments: HBase Read-Replica Clusters Scope doc.docx, HBase > Read-Replica Clusters Scope doc.pdf, HBase Read-Replica Clusters Scope > doc_v2.docx, HBase Read-Replica Clusters Scope doc_v2.pdf > > > Recently, changes (such as HBASE-17437) have unblocked HBase to run with a > root directory external to the cluster (such as in Amazon S3). This means > that the data is stored outside of the cluster and can be accessible after > the cluster has been terminated. One use case that is often asked about is > pointing multiple clusters to one root directory (sharing the data) to have > read resiliency in the case of a cluster failure. > > This JIRA is an umbrella JIRA to contain all the tasks necessary to create a > read-replica HBase cluster that is pointed at the same root directory. > > This requires making the Read-Replica cluster Read-Only (no metadata > operation or data operations). > Separating the hbase:meta table for each cluster (Otherwise HBase gets > confused with multiple clusters trying to update the meta table with their ip > addresses) > Adding refresh functionality for the meta table to ensure new metadata is > picked up on the read replica cluster. > Adding refresh functionality for HFiles for a given table to ensure new data > is picked up on the read replica cluster. > > This can be used with any existing cluster that is backed by an external > filesystem. > > Please note that this feature is still quite manual (with the potential for > automation later). 
> > More information on this particular feature can be found here: > https://aws.amazon.com/blogs/big-data/setting-up-read-replica-clusters-with-hbase-on-amazon-s3/ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-18477) Umbrella JIRA for HBase Read Replica clusters
[ https://issues.apache.org/jira/browse/HBASE-18477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568738#comment-16568738 ] Hudson commented on HBASE-18477: Results for branch HBASE-18477 [build #284 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/284/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/284//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/284//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/284//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (x) {color:red}-1 client integration test{color} --Failed when running client tests on top of Hadoop 2. [see log for details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/284//artifact/output-integration/hadoop-2.log]. (note that this means we didn't run on Hadoop 3) > Umbrella JIRA for HBase Read Replica clusters > - > > Key: HBASE-18477 > URL: https://issues.apache.org/jira/browse/HBASE-18477 > Project: HBase > Issue Type: New Feature >Reporter: Zach York >Assignee: Zach York >Priority: Major > Attachments: HBase Read-Replica Clusters Scope doc.docx, HBase > Read-Replica Clusters Scope doc.pdf, HBase Read-Replica Clusters Scope > doc_v2.docx, HBase Read-Replica Clusters Scope doc_v2.pdf > > > Recently, changes (such as HBASE-17437) have unblocked HBase to run with a > root directory external to the cluster (such as in Amazon S3). 
This means > that the data is stored outside of the cluster and can be accessible after > the cluster has been terminated. One use case that is often asked about is > pointing multiple clusters to one root directory (sharing the data) to have > read resiliency in the case of a cluster failure. > > This JIRA is an umbrella JIRA to contain all the tasks necessary to create a > read-replica HBase cluster that is pointed at the same root directory. > > This requires making the Read-Replica cluster Read-Only (no metadata > operation or data operations). > Separating the hbase:meta table for each cluster (Otherwise HBase gets > confused with multiple clusters trying to update the meta table with their ip > addresses) > Adding refresh functionality for the meta table to ensure new metadata is > picked up on the read replica cluster. > Adding refresh functionality for HFiles for a given table to ensure new data > is picked up on the read replica cluster. > > This can be used with any existing cluster that is backed by an external > filesystem. > > Please note that this feature is still quite manual (with the potential for > automation later). > > More information on this particular feature can be found here: > https://aws.amazon.com/blogs/big-data/setting-up-read-replica-clusters-with-hbase-on-amazon-s3/ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21006) Balancer - data locality drops 30-40% across all nodes after every cluster-wide rolling restart, not migrating regions back to original RegionServers?
[ https://issues.apache.org/jira/browse/HBASE-21006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568720#comment-16568720 ] Hari Sekhon commented on HBASE-21006: - I will do follow up analysis of this on Monday as it's late here in London. I saw HBASE-18164 but I didn't think it quite fitted, and I didn't see HBASE-18036. If it's covered by that one then I'll close this as a duplicate after we review it next week. > Balancer - data locality drops 30-40% across all nodes after every > cluster-wide rolling restart, not migrating regions back to original > RegionServers? > -- > > Key: HBASE-21006 > URL: https://issues.apache.org/jira/browse/HBASE-21006 > Project: HBase > Issue Type: Improvement > Components: Balancer >Affects Versions: 1.1.2 > Environment: HDP 2.6 >Reporter: Hari Sekhon >Priority: Major > > After doing rolling restarts of my HBase cluster the data locality drops by > 30-40% every time which implies the stochastic balancer is not optimizing for > data locality enough, at least not under the circumstance of rolling > restarts, and that it must not be balancing the regions back to their > original RegionServers. > The stochastic balancer is supposed to take data locality in to account but > if this is the case, surely it should move regions back to their original > RegionServers and data locality should return back to around where it was, > not drop by 30-40% percent every time I need to do some tuning and a rolling > restart. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
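If locality after restarts is the concern, one tuning lever is the stochastic balancer's locality weight, which makes locality-restoring moves score better relative to other cost factors. A hedged hbase-site.xml sketch; the property name comes from StochasticLoadBalancer and the default shown is from memory, so verify both against the version in use:

```xml
<property>
  <!-- Assumed default is 25; a larger multiplier makes the balancer favor
       moves that restore HDFS block locality after a rolling restart. -->
  <name>hbase.master.balancer.stochastic.localityCost</name>
  <value>100</value>
</property>
```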
[jira] [Commented] (HBASE-20813) Remove RPC quotas when the associated table/Namespace is dropped off
[ https://issues.apache.org/jira/browse/HBASE-20813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568713#comment-16568713 ] Josh Elser commented on HBASE-20813: Looks like you forgot to remove the old MasterSpaceQuotaObserver when you renamed it to MasterQuotaObserver. Otherwise, looks good to me. > Remove RPC quotas when the associated table/Namespace is dropped off > > > Key: HBASE-20813 > URL: https://issues.apache.org/jira/browse/HBASE-20813 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Attachments: hbase-20813.master.001.patch > > > In short, the below scenario shouldn't be the case. > {noformat} > hbase(main):023:0> create 't2','cf1' > Created table t2 > Took 0.7405 seconds > => Hbase::Table - t2 > hbase(main):024:0> > hbase(main):025:0* > hbase(main):026:0* set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => > '10M/sec' > Took 0.0082 seconds > hbase(main):027:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => > 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0291 seconds > hbase(main):028:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1530165010888, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80\x05 \x02 > 1 row(s) > Took 0.0037 seconds > hbase(main):029:0> disable 't2' > Took 0.4328 seconds > hbase(main):030:0> drop 't2' > Took 0.2285 seconds > hbase(main):031:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => > 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0230 seconds > hbase(main):032:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1530165010888, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80\x05 \x02 > 1 row(s) > Took 0.0038 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21006) Balancer - data locality drops 30-40% across all nodes after every cluster-wide rolling restart, not migrating regions back to original RegionServers?
[ https://issues.apache.org/jira/browse/HBASE-21006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568706#comment-16568706 ] Josh Elser commented on HBASE-21006: [~harisekhon], are you planning to do analysis of your issue? It seems like you're just dumping a report here which helps no one. HBASE-18036 is pretty much what you're already describing, so I'm apt to close this as invalid unless you have more information to share about your specific situation. > Balancer - data locality drops 30-40% across all nodes after every > cluster-wide rolling restart, not migrating regions back to original > RegionServers? > -- > > Key: HBASE-21006 > URL: https://issues.apache.org/jira/browse/HBASE-21006 > Project: HBase > Issue Type: Improvement > Components: Balancer >Affects Versions: 1.1.2 > Environment: HDP 2.6 >Reporter: Hari Sekhon >Priority: Major > > After doing rolling restarts of my HBase cluster the data locality drops by > 30-40% every time which implies the stochastic balancer is not optimizing for > data locality enough, at least not under the circumstance of rolling > restarts, and that it must not be balancing the regions back to their > original RegionServers. > The stochastic balancer is supposed to take data locality in to account but > if this is the case, surely it should move regions back to their original > RegionServers and data locality should return back to around where it was, > not drop by 30-40% percent every time I need to do some tuning and a rolling > restart. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21004) Backport to branch-2.0 HBASE-20708 "Remove the usage of RecoverMetaProcedure"
[ https://issues.apache.org/jira/browse/HBASE-21004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568701#comment-16568701 ] stack commented on HBASE-21004: --- .002 TestZooKeeper seems flakey. It passes locally. Stole the below from HBASE-20159 to fix the TestMasterShutdown. Fixed checkstyle.
{code}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index 9bb12c1fe1..c157e37eac 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -546,18 +552,20 @@ public class HMaster extends HRegionServer implements MasterServices {
   public void run() {
     try {
       if (!conf.getBoolean("hbase.testing.nocluster", false)) {
-        try {
-          int infoPort = putUpJettyServer();
-          startActiveMasterManager(infoPort);
-        } catch (Throwable t) {
-          // Make sure we log the exception.
-          String error = "Failed to become Active Master";
-          LOG.error(error, t);
-          // Abort should have been called already.
-          if (!isAborted()) {
-            abort(error, t);
+        Threads.setDaemonThreadRunning(new Thread(() -> {
+          try {
+            int infoPort = putUpJettyServer();
+            startActiveMasterManager(infoPort);
+          } catch (Throwable t) {
+            // Make sure we log the exception.
+            String error = "Failed to become Active Master";
+            LOG.error(error, t);
+            // Abort should have been called already.
+            if (!isAborted()) {
+              abort(error, t);
+            }
           }
-        }
+        }));
       }
       // Fall in here even if we have been aborted. Need to run the shutdown services and
       // the super run call will do this for us.
{code} > Backport to branch-2.0 HBASE-20708 "Remove the usage of RecoverMetaProcedure" > - > > Key: HBASE-21004 > URL: https://issues.apache.org/jira/browse/HBASE-21004 > Project: HBase > Issue Type: Sub-task > Components: amv2 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.2 > > Attachments: HBASE-21004.branch-2.0.001.patch, > HBASE-21004.branch-2.0.002.patch > > > Backport parent issue to branch-2.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-21004) Backport to branch-2.0 HBASE-20708 "Remove the usage of RecoverMetaProcedure"
[ https://issues.apache.org/jira/browse/HBASE-21004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-21004: -- Attachment: HBASE-21004.branch-2.0.002.patch > Backport to branch-2.0 HBASE-20708 "Remove the usage of RecoverMetaProcedure" > - > > Key: HBASE-21004 > URL: https://issues.apache.org/jira/browse/HBASE-21004 > Project: HBase > Issue Type: Sub-task > Components: amv2 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.2 > > Attachments: HBASE-21004.branch-2.0.001.patch, > HBASE-21004.branch-2.0.002.patch > > > Backport parent issue to branch-2.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20813) Remove RPC quotas when the associated table/Namespace is dropped off
[ https://issues.apache.org/jira/browse/HBASE-20813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568698#comment-16568698 ] Hadoop QA commented on HBASE-20813: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 19s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 23s{color} | {color:red} hbase-server: The patch generated 3 new + 155 unchanged - 1 fixed = 158 total (was 156) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 10s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 12m 24s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}118m 29s{color} | {color:green} hbase-server in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}166m 36s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-20813 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12934292/hbase-20813.master.001.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 0d0f7d27c536 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / bd30ca62ef | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC3 | | checkstyle | https://builds.apache.org/job/PreCommit-HBASE-Build/13918/artifact/patchprocess/diff-checkstyle-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/13918/testReport/ | | Max. process+thread count | 4538 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output |
[jira] [Commented] (HBASE-21006) Balancer - data locality drops 30-40% across all nodes after every cluster-wide rolling restart, not migrating regions back to original RegionServers?
[ https://issues.apache.org/jira/browse/HBASE-21006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568697#comment-16568697 ] Hari Sekhon commented on HBASE-21006: - I have, and pointed them to this Jira - that way Google can do its job in case anyone else is wondering why their data locality is destroyed after every rolling restart. > Balancer - data locality drops 30-40% across all nodes after every > cluster-wide rolling restart, not migrating regions back to original > RegionServers? > -- > > Key: HBASE-21006 > URL: https://issues.apache.org/jira/browse/HBASE-21006 > Project: HBase > Issue Type: Improvement > Components: Balancer >Affects Versions: 1.1.2 > Environment: HDP 2.6 >Reporter: Hari Sekhon >Priority: Major > > After doing rolling restarts of my HBase cluster the data locality drops by > 30-40% every time, which implies the stochastic balancer is not optimizing for > data locality enough, at least not under the circumstance of rolling > restarts, and that it must not be balancing the regions back to their > original RegionServers. > The stochastic balancer is supposed to take data locality into account, but > if this is the case, surely it should move regions back to their original > RegionServers and data locality should return to around where it was, > not drop by 30-40% every time I need to do some tuning and a rolling > restart.
[jira] [Commented] (HBASE-15320) HBase connector for Kafka Connect
[ https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568678#comment-16568678 ] stack commented on HBASE-15320: --- Oh. Ok. I missed that you were shutting down the core services and that the RS just a skeleton that does replication only. Good. Thanks. All good. > HBase connector for Kafka Connect > - > > Key: HBASE-15320 > URL: https://issues.apache.org/jira/browse/HBASE-15320 > Project: HBase > Issue Type: New Feature > Components: Replication >Reporter: Andrew Purtell >Assignee: Mike Wingert >Priority: Major > Labels: beginner > Fix For: 3.0.0 > > Attachments: 15320.master.16.patch, 15320.master.16.patch, > HBASE-15320.master.1.patch, HBASE-15320.master.10.patch, > HBASE-15320.master.11.patch, HBASE-15320.master.12.patch, > HBASE-15320.master.14.patch, HBASE-15320.master.15.patch, > HBASE-15320.master.2.patch, HBASE-15320.master.3.patch, > HBASE-15320.master.4.patch, HBASE-15320.master.5.patch, > HBASE-15320.master.6.patch, HBASE-15320.master.7.patch, > HBASE-15320.master.8.patch, HBASE-15320.master.8.patch, > HBASE-15320.master.9.patch, HBASE-15320.pdf, HBASE-15320.pdf > > > Implement an HBase connector with source and sink tasks for the Connect > framework (http://docs.confluent.io/2.0.0/connect/index.html) available in > Kafka 0.9 and later. > See also: > http://www.confluent.io/blog/announcing-kafka-connect-building-large-scale-low-latency-data-pipelines > An HBase source > (http://docs.confluent.io/2.0.0/connect/devguide.html#task-example-source-task) > could be implemented as a replication endpoint or WALObserver, publishing > cluster wide change streams from the WAL to one or more topics, with > configurable mapping and partitioning of table changes to topics. > An HBase sink task > (http://docs.confluent.io/2.0.0/connect/devguide.html#sink-tasks) would > persist, with optional transformation (JSON? Avro?, map fields to native > schema?), Kafka SinkRecords into HBase tables. 
[jira] [Commented] (HBASE-15320) HBase connector for Kafka Connect
[ https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568672#comment-16568672 ] Mike Wingert commented on HBASE-15320: -- [~stack], thanks for the feedback. * When it starts the region server, I'm passing in these parameters:
hbase.cluster.distributed=true
zookeeper.znode.parent=/kafkaproxy
hbase.regionserver.port=17020
hbase.regionserver.info.port=17010
hbase.client.connection.impl=org.apache.hadoop.hbase.kafka.KafkaBridgeConnection
hbase.regionserver.admin.service=false
hbase.regionserver.client.service=false
hbase.wal.provider=org.apache.hadoop.hbase.wal.DisabledWALProvider
hbase.regionserver.workers=false
hfile.block.cache.size=0.0001
hbase.mob.file.cache.size=0
hbase.masterless=true
hbase.regionserver.metahandler.count=1
hbase.regionserver.replication.handler.count=1
hbase.regionserver.handler.count=1
hbase.ipc.server.read.threadpool.size=3
Are there other values I need to set? * it allows you to specify various rules to route the replication messages or drop them Yes, it's read from the file in the conf dir (or specified via command line). In my patch for hbase-21002 I'll add a readme to describe how to use the proxy.
[jira] [Updated] (HBASE-20965) Separate region server report requests to new handlers
[ https://issues.apache.org/jira/browse/HBASE-20965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20965: -- Component/s: Performance > Separate region server report requests to new handlers > -- > > Key: HBASE-20965 > URL: https://issues.apache.org/jira/browse/HBASE-20965 > Project: HBase > Issue Type: Improvement > Components: Performance >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-20965.master.001.patch, > HBASE-20965.master.002.patch, HBASE-20965.master.003.patch, > HBASE-20965.master.004.patch > > > In master rpc scheduler, all rpc requests are executed in a thread pool. This > task separates rs report requests to new handlers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
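For context, the change this issue tracks stops funneling every master RPC through a single handler pool and gives regionserver reports their own handlers, so heavy client traffic cannot starve heartbeat/report calls. A minimal sketch of that dispatch-by-request-type idea in plain Java; the pool sizes and request-type strings are made up, and this is not HBase's actual RpcScheduler:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ReportScheduler {
    // Dedicated pool for regionserver reports, separate from the general pool,
    // so client calls queued on the default pool cannot delay reports.
    static int dispatchAll(int reports, int clientCalls) {
        ExecutorService reportHandlers = Executors.newFixedThreadPool(2);
        ExecutorService defaultHandlers = Executors.newFixedThreadPool(4);
        AtomicInteger reportsHandled = new AtomicInteger();
        for (int i = 0; i < reports; i++) {
            reportHandlers.execute(reportsHandled::incrementAndGet); // report path
        }
        for (int i = 0; i < clientCalls; i++) {
            defaultHandlers.execute(() -> { });                      // everything else
        }
        reportHandlers.shutdown();
        defaultHandlers.shutdown();
        try {
            reportHandlers.awaitTermination(5, TimeUnit.SECONDS);
            defaultHandlers.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return reportsHandled.get();
    }

    public static void main(String[] args) {
        System.out.println(dispatchAll(5, 20)); // prints 5
    }
}
```

The design point is isolation: even if the default pool's queue is deep, the report pool drains independently.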
[jira] [Commented] (HBASE-15320) HBase connector for Kafka Connect
[ https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568608#comment-16568608 ] stack commented on HBASE-15320: --- Took another look at the patch as it looks after landing in hbase-connectors. Some things you might consider going forward Mike: * it starts up a barebones region server that just receives replication events On above, it seems like we do not shutdown core regionserver services... the RS we start up is a full-featured instance. There are switches we could set to make it so it does not start Admin and Client services... just the sink for replication. A bit of a description on how the thing works, what landed eventually, would be sweet as a release note and as addition to README over in hbase-connectors... so folks can easily figure how to get this nice new functionality going. The RS we start catches the replication stream and then forwards to Kafka topics? * it allows you to specify various rules to route the replication messages or drop them The sink RS reads these out of conf dir, right? Thanks. 
[jira] [Commented] (HBASE-21004) Backport to branch-2.0 HBASE-20708 "Remove the usage of RecoverMetaProcedure"
[ https://issues.apache.org/jira/browse/HBASE-21004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568595#comment-16568595 ] Hadoop QA commented on HBASE-21004: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 21 new or modified test files. {color} | || || || || {color:brown} branch-2.0 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 38s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 8s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 14s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 20s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 32s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} branch-2.0 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} The patch hbase-protocol-shaded passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} The patch hbase-client passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} hbase-procedure: The patch generated 0 new + 44 unchanged - 1 fixed = 44 total (was 45) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 18s{color} | {color:red} hbase-server: The patch generated 2 new + 334 unchanged - 14 fixed = 336 total (was 348) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 13s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 11m 28s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 1s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 38s{color} | {color:green} hbase-procedure in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit
[jira] [Commented] (HBASE-21007) Memory leak in HBase rest server
[ https://issues.apache.org/jira/browse/HBASE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568555#comment-16568555 ] Hadoop QA commented on HBASE-21007: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange} 0m 0s{color} | {color:orange} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 33s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s{color} | {color:red} hbase-rest: The patch generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 32s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 10m 13s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 28s{color} | {color:green} hbase-rest in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 37m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-21007 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12934293/HBASE-21007.001.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 84cb48d242bb 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / bd30ca62ef | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC3 | | checkstyle | https://builds.apache.org/job/PreCommit-HBASE-Build/13919/artifact/patchprocess/diff-checkstyle-hbase-rest.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/13919/testReport/ | | Max.
[jira] [Created] (HBASE-21009) Backport to branch-2.0 HBASE-20739 "Add priority for SCP"
stack created HBASE-21009: - Summary: Backport to branch-2.0 HBASE-20739 "Add priority for SCP" Key: HBASE-21009 URL: https://issues.apache.org/jira/browse/HBASE-21009 Project: HBase Issue Type: Sub-task Components: amv2 Reporter: stack Assignee: stack Fix For: 2.0.2 Backport parent issue to branch-2.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21008) HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker
[ https://issues.apache.org/jira/browse/HBASE-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568532#comment-16568532 ] Jerry He commented on HBASE-21008: -- FYI [~chia7712]. > HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker > > > Key: HBASE-21008 > URL: https://issues.apache.org/jira/browse/HBASE-21008 > Project: HBase > Issue Type: Bug >Affects Versions: 2.1.0, 1.4.6 >Reporter: Jerry He >Priority: Major > > It looks like HBase 1.x can not open hfiiles written by HBase2 still. > I tested the latest HBase 1.4.6 and 2.1.0. 1.4.6 tried to read and open > regions written by 2.1.0. > {code} > 2018-07-30 16:01:31,274 ERROR [StoreFileOpenerThread-info-1] > regionserver.StoreFile: Error reading timestamp range data from meta -- > proceeding without > java.lang.IllegalArgumentException: Timestamp cannot be negative. > minStamp:5783278630776778969, maxStamp:-4698050386518222402 > at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:112) > at org.apache.hadoop.hbase.io.TimeRange.(TimeRange.java:100) > at > org.apache.hadoop.hbase.regionserver.TimeRangeTracker.toTimeRange(TimeRangeTracker.java:214) > at > org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:198) > at > org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521) > at > org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679) > at > org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:122) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:538) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:535) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > {code} > Or: > {code} > 
2018-07-30 16:01:31,305 ERROR [RS_OPEN_REGION-throb1:34004-0] > handler.OpenRegionHandler: Failed open of > region=janusgraph,,1532630557542.b0fa15cb0bf1b0bf740997b7056c., starting > to roll back the global memstore size. > java.io.IOException: java.io.IOException: java.io.EOFException > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1033) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:908) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:876) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6995) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6956) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6927) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6883) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6834) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:364) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:131) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.io.IOException: java.io.EOFException > at > org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:564) > at > org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:518) > at org.apache.hadoop.hbase.regionserver.HStore.(HStore.java:281) > at > org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5378) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1007) > at > 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1004) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ... 3 more > Caused by: java.io.EOFException > at java.io.DataInputStream.readFully(DataInputStream.java:197) > at java.io.DataInputStream.readLong(DataInputStream.java:416) > at >
[jira] [Assigned] (HBASE-21007) Memory leak in HBase rest server
[ https://issues.apache.org/jira/browse/HBASE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob reassigned HBASE-21007: - Assignee: Bosko Devetak Thanks for the patch, [~bdevetak]. I have added you to our "contributors" group, so you should be able to self-assign issues in the future. I have already assigned this one to you. The change makes sense to me. Do you think it is possible to add a unit test to verify this and prevent regressions in the future? When submitting patches in the future, we encourage folks to use {{git format-patch}} so that they retain authorship information and we can properly attribute credit. Alternatively, there is a helper script at {{dev-support/submit-patch.py}}. > Memory leak in HBase rest server > > > Key: HBASE-21007 > URL: https://issues.apache.org/jira/browse/HBASE-21007 > Project: HBase > Issue Type: Bug > Components: REST >Affects Versions: 1.4.0, 1.4.6 >Reporter: Bosko Devetak >Assignee: Bosko Devetak >Priority: Critical > Attachments: HBASE-21007.001.patch > > > When using the URIs like this: > > /sometable/*?limit=$limit&startrow=$startrow&endrow=$endrow > > where *$limit* is smaller than the range between *$startrow* and *$endrow*, > the rest server will start leaking memory. > > > The bug is in the *TableScanResource.java* class. Basically, the > ResultScanner is not being closed in next() method when the limit has been > reached.
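The bug pattern is an iterator over a scarce resource: when the limit fires before the scanner is exhausted, iteration simply stops and the ResultScanner's resources are never released. The sketch below uses a stand-in scanner class rather than the real HBase REST types; it only illustrates that the limit path must also close the scanner:

```java
public class ScanWithLimit {
    // Stand-in for an HBase ResultScanner: counts itself open until close().
    static class FakeScanner implements AutoCloseable {
        static int openScanners = 0;
        private int rowsLeft;
        FakeScanner(int rows) { rowsLeft = rows; openScanners++; }
        String next() { return rowsLeft-- > 0 ? "row" : null; }
        @Override public void close() { openScanners--; }
    }

    // Return at most `limit` rows; the fix is that the scanner is closed even
    // when the limit fires before the scanner runs out of rows.
    static int scan(int available, int limit) {
        int returned = 0;
        try (FakeScanner scanner = new FakeScanner(available)) {
            while (returned < limit && scanner.next() != null) {
                returned++;
            }
        } // try-with-resources closes the scanner on every exit path
        return returned;
    }

    public static void main(String[] args) {
        System.out.println(scan(100, 10));            // 10 rows returned
        System.out.println(FakeScanner.openScanners); // 0: no leaked scanner
    }
}
```

With the leaky version (no close on the limit branch), openScanners would keep growing by one per request, which is exactly the memory growth the reporter observed.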
[jira] [Commented] (HBASE-21008) HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker
[ https://issues.apache.org/jira/browse/HBASE-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568513#comment-16568513 ] Jerry He commented on HBASE-21008: -- The problem seems to come from HBASE-18754, which removed the TimeRangeTracker Writable, but added a protobuf HBaseProtos.TimeRangeTracker. HBase 1.x will not be able to read the protobuf serialized TimeRangeTracker in hfiles. > HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker > > > Key: HBASE-21008 > URL: https://issues.apache.org/jira/browse/HBASE-21008 > Project: HBase > Issue Type: Bug >Affects Versions: 2.1.0, 1.4.6 >Reporter: Jerry He >Priority: Major > > It looks like HBase 1.x can not open hfiiles written by HBase2 still. > I tested the latest HBase 1.4.6 and 2.1.0. 1.4.6 tried to read and open > regions written by 2.1.0. > {code} > 2018-07-30 16:01:31,274 ERROR [StoreFileOpenerThread-info-1] > regionserver.StoreFile: Error reading timestamp range data from meta -- > proceeding without > java.lang.IllegalArgumentException: Timestamp cannot be negative. 
> minStamp:5783278630776778969, maxStamp:-4698050386518222402 > at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:112) > at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:100) > at > org.apache.hadoop.hbase.regionserver.TimeRangeTracker.toTimeRange(TimeRangeTracker.java:214) > at > org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:198) > at > org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531) > at > org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521) > at > org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679) > at > org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:122) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:538) > at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:535) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > {code} > Or: > {code} > 2018-07-30 16:01:31,305 ERROR [RS_OPEN_REGION-throb1:34004-0] > handler.OpenRegionHandler: Failed open of > region=janusgraph,,1532630557542.b0fa15cb0bf1b0bf740997b7056c., starting > to roll back the global memstore size. 
> java.io.IOException: java.io.IOException: java.io.EOFException > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1033) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:908) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:876) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6995) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6956) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6927) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6883) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6834) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:364) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:131) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.io.IOException: java.io.EOFException > at > org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:564) > at > org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:518) > at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:281) > at > org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5378) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1007) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1004) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ... 
3 more > Caused by: java.io.EOFException >
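[Editor's note] Jerry's diagnosis can be illustrated without HBase itself: the 1.x reader expects the serialized TimeRangeTracker to be exactly two raw big-endian longs (minStamp, maxStamp) written via Writable, so feeding it protobuf bytes either yields nonsense timestamps (the negative maxStamp above) or the EOFException in this trace. A minimal sketch of the old wire format, with illustrative class and method names:

```java
// Illustrative sketch, not the real TimeRangeTracker code: models the 1.x
// Writable serialization as two raw longs. Reading a 2.x protobuf payload
// through this path misinterprets the bytes (garbage/negative stamps) or,
// if fewer than 16 bytes remain, fails with java.io.EOFException exactly
// as in the stack trace above.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class TimeRangeWireFormat {
  // Old (1.x) Writable-style serialization: two big-endian longs.
  static byte[] writeWritable(long minStamp, long maxStamp) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bos);
    out.writeLong(minStamp);
    out.writeLong(maxStamp);
    return bos.toByteArray();
  }

  // Old (1.x) deserialization: blindly reads two longs from the buffer.
  static long[] readWritable(byte[] bytes) throws IOException {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
    return new long[] { in.readLong(), in.readLong() }; // EOFException if short
  }
}
```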
[jira] [Created] (HBASE-21008) HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker
Jerry He created HBASE-21008: Summary: HBase 1.x can not read HBase2 hfiles due to TimeRangeTracker Key: HBASE-21008 URL: https://issues.apache.org/jira/browse/HBASE-21008 Project: HBase Issue Type: Bug Affects Versions: 1.4.6, 2.1.0 Reporter: Jerry He It looks like HBase 1.x can not open hfiles written by HBase2 still. I tested the latest HBase 1.4.6 and 2.1.0. 1.4.6 tried to read and open regions written by 2.1.0. {code} 2018-07-30 16:01:31,274 ERROR [StoreFileOpenerThread-info-1] regionserver.StoreFile: Error reading timestamp range data from meta -- proceeding without java.lang.IllegalArgumentException: Timestamp cannot be negative. minStamp:5783278630776778969, maxStamp:-4698050386518222402 at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:112) at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:100) at org.apache.hadoop.hbase.regionserver.TimeRangeTracker.toTimeRange(TimeRangeTracker.java:214) at org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:198) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:531) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:521) at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:679) at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:122) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:538) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:535) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) {code} Or: {code} 2018-07-30 16:01:31,305 ERROR [RS_OPEN_REGION-throb1:34004-0] handler.OpenRegionHandler: Failed open of region=janusgraph,,1532630557542.b0fa15cb0bf1b0bf740997b7056c., starting to roll back the global memstore size. 
java.io.IOException: java.io.IOException: java.io.EOFException at org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1033) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:908) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:876) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6995) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6956) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6927) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6883) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6834) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:364) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:131) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.io.IOException: java.io.EOFException at org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:564) at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:518) at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:281) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5378) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1007) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1004) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) ... 
3 more Caused by: java.io.EOFException at java.io.DataInputStream.readFully(DataInputStream.java:197) at java.io.DataInputStream.readLong(DataInputStream.java:416) at org.apache.hadoop.hbase.regionserver.TimeRangeTracker.readFields(TimeRangeTracker.java:170) at org.apache.hadoop.hbase.util.Writables.copyWritable(Writables.java:161) at org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRangeTracker(TimeRangeTracker.java:187) at org.apache.hadoop.hbase.regionserver.TimeRangeTracker.getTimeRange(TimeRangeTracker.java:197) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:507) at
[jira] [Commented] (HBASE-20813) Remove RPC quotas when the associated table/Namespace is dropped off
[ https://issues.apache.org/jira/browse/HBASE-20813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568504#comment-16568504 ] Sakthi commented on HBASE-20813: [~elserj] or [~mdrob] I have uploaded the first patch. Please review. > Remove RPC quotas when the associated table/Namespace is dropped off > > > Key: HBASE-20813 > URL: https://issues.apache.org/jira/browse/HBASE-20813 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Attachments: hbase-20813.master.001.patch > > > In short, the below scenario shouldn't be the case. > {noformat} > hbase(main):023:0> create 't2','cf1' > Created table t2 > Took 0.7405 seconds > => Hbase::Table - t2 > hbase(main):024:0> > hbase(main):025:0* > hbase(main):026:0* set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => > '10M/sec' > Took 0.0082 seconds > hbase(main):027:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => > 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0291 seconds > hbase(main):028:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1530165010888, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80\x05 \x02 > 1 row(s) > Took 0.0037 seconds > hbase(main):029:0> disable 't2' > Took 0.4328 seconds > hbase(main):030:0> drop 't2' > Took 0.2285 seconds > hbase(main):031:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => > 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0230 seconds > hbase(main):032:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1530165010888, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80\x05 \x02 > 1 row(s) > Took 0.0038 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
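[Editor's note] For context, the {{scan 'hbase:quota'}} output above shows table quotas stored under rowkeys of the form {{t.<table>}} (namespace quotas use {{n.<namespace>}}). A hypothetical helper -- not the real QuotaTableUtil API -- sketching the rowkey a table-drop cleanup would need to delete so the throttle does not outlive the table:

```java
// Hypothetical sketch (illustrative names): derives the hbase:quota rowkey
// for a table, and decides whether a given quota row belongs to a dropped
// table and should be deleted along with it. The "t."/"n." prefixes match
// the shell output above ("t.t2").
public class QuotaRowKeys {
  static final String TABLE_PREFIX = "t.";
  static final String NAMESPACE_PREFIX = "n.";

  static String tableQuotaRow(String tableName) {
    return TABLE_PREFIX + tableName;
  }

  // True if this hbase:quota rowkey is the quota entry of the dropped table.
  static boolean belongsToTable(String rowKey, String tableName) {
    return rowKey.equals(TABLE_PREFIX + tableName);
  }
}
```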
[jira] [Updated] (HBASE-21007) Memory leak in HBase rest server
[ https://issues.apache.org/jira/browse/HBASE-21007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bosko Devetak updated HBASE-21007: -- Attachment: HBASE-21007.001.patch Status: Patch Available (was: Open) > Memory leak in HBase rest server > > > Key: HBASE-21007 > URL: https://issues.apache.org/jira/browse/HBASE-21007 > Project: HBase > Issue Type: Bug > Components: REST >Affects Versions: 1.4.6, 1.4.0 >Reporter: Bosko Devetak >Priority: Critical > Attachments: HBASE-21007.001.patch > > > When using the URIs like this: > > /sometable/*?limit=$limit&startrow=$startrow&endrow=$endrow > > where *$limit* is smaller than the range between *$startrow* and *$endrow*, > the rest server will start leaking memory. > > > The bug is in the *TableScanResource.java* class. Basically, the > ResultScanner is not being closed in next() method when the limit has been > reached. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21007) Memory leak in HBase rest server
Bosko Devetak created HBASE-21007: - Summary: Memory leak in HBase rest server Key: HBASE-21007 URL: https://issues.apache.org/jira/browse/HBASE-21007 Project: HBase Issue Type: Bug Components: REST Affects Versions: 1.4.6, 1.4.0 Reporter: Bosko Devetak When using the URIs like this: /sometable/*?limit=$limit&startrow=$startrow&endrow=$endrow where *$limit* is smaller than the range between *$startrow* and *$endrow*, the rest server will start leaking memory. The bug is in the *TableScanResource.java* class. Basically, the ResultScanner is not being closed in next() method when the limit has been reached. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-21006) Balancer - data locality drops 30-40% across all nodes after every cluster-wide rolling restart, not migrating regions back to original RegionServers?
[ https://issues.apache.org/jira/browse/HBASE-21006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568492#comment-16568492 ] Josh Elser commented on HBASE-21006: [~harisekhon], have you raised this internally with Hortonworks? > Balancer - data locality drops 30-40% across all nodes after every > cluster-wide rolling restart, not migrating regions back to original > RegionServers? > -- > > Key: HBASE-21006 > URL: https://issues.apache.org/jira/browse/HBASE-21006 > Project: HBase > Issue Type: Improvement > Components: Balancer >Affects Versions: 1.1.2 > Environment: HDP 2.6 >Reporter: Hari Sekhon >Priority: Major > > After doing rolling restarts of my HBase cluster the data locality drops by > 30-40% every time which implies the stochastic balancer is not optimizing for > data locality enough, at least not under the circumstance of rolling > restarts, and that it must not be balancing the regions back to their > original RegionServers. > The stochastic balancer is supposed to take data locality into account but > if this is the case, surely it should move regions back to their original > RegionServers and data locality should return back to around where it was, > not drop by 30-40% every time I need to do some tuning and a rolling > restart. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
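[Editor's note] If post-restart locality matters more than other balancing costs, one tuning knob worth checking is the stochastic balancer's locality cost multiplier. The key below exists in the 1.x StochasticLoadBalancer, but its default (around 25) and exact behavior should be verified against the specific 1.1.2/HDP build in use:

```xml
<!-- hbase-site.xml: raise the weight the stochastic balancer gives to
     data locality, relative to the other cost functions. -->
<property>
  <name>hbase.master.balancer.stochastic.localityCost</name>
  <value>500</value>
</property>
```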
[jira] [Updated] (HBASE-20813) Remove RPC quotas when the associated table/Namespace is dropped off
[ https://issues.apache.org/jira/browse/HBASE-20813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sakthi updated HBASE-20813: --- Status: Patch Available (was: In Progress) > Remove RPC quotas when the associated table/Namespace is dropped off > > > Key: HBASE-20813 > URL: https://issues.apache.org/jira/browse/HBASE-20813 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Attachments: hbase-20813.master.001.patch > > > In short, the below scenario shouldn't be the case. > {noformat} > hbase(main):023:0> create 't2','cf1' > Created table t2 > Took 0.7405 seconds > => Hbase::Table - t2 > hbase(main):024:0> > hbase(main):025:0* > hbase(main):026:0* set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => > '10M/sec' > Took 0.0082 seconds > hbase(main):027:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => > 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0291 seconds > hbase(main):028:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1530165010888, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80\x05 \x02 > 1 row(s) > Took 0.0037 seconds > hbase(main):029:0> disable 't2' > Took 0.4328 seconds > hbase(main):030:0> drop 't2' > Took 0.2285 seconds > hbase(main):031:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => > 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0230 seconds > hbase(main):032:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1530165010888, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80\x05 \x02 > 1 row(s) > Took 0.0038 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20813) Remove RPC quotas when the associated table/Namespace is dropped off
[ https://issues.apache.org/jira/browse/HBASE-20813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sakthi updated HBASE-20813: --- Attachment: hbase-20813.master.001.patch > Remove RPC quotas when the associated table/Namespace is dropped off > > > Key: HBASE-20813 > URL: https://issues.apache.org/jira/browse/HBASE-20813 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Attachments: hbase-20813.master.001.patch > > > In short, the below scenario shouldn't be the case. > {noformat} > hbase(main):023:0> create 't2','cf1' > Created table t2 > Took 0.7405 seconds > => Hbase::Table - t2 > hbase(main):024:0> > hbase(main):025:0* > hbase(main):026:0* set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => > '10M/sec' > Took 0.0082 seconds > hbase(main):027:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => > 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0291 seconds > hbase(main):028:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1530165010888, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80\x05 \x02 > 1 row(s) > Took 0.0037 seconds > hbase(main):029:0> disable 't2' > Took 0.4328 seconds > hbase(main):030:0> drop 't2' > Took 0.2285 seconds > hbase(main):031:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => > 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0230 seconds > hbase(main):032:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1530165010888, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80\x05 \x02 > 1 row(s) > Took 0.0038 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20885) Remove entry for RPC quota from hbase:quota when RPC quota is removed.
[ https://issues.apache.org/jira/browse/HBASE-20885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568488#comment-16568488 ] Sakthi commented on HBASE-20885: Thanks for your help [~elserj], [~md...@cloudera.com], [~stack], [~Apache9] ! > Remove entry for RPC quota from hbase:quota when RPC quota is removed. > -- > > Key: HBASE-20885 > URL: https://issues.apache.org/jira/browse/HBASE-20885 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1 > > Attachments: hbase-20885.master.001.patch, > hbase-20885.master.002.patch, hbase-20885.master.003.patch, > hbase-20885.master.003.patch, hbase-20885.master.004.patch, > hbase-20885.master.005.patch > > > When a RPC quota is removed (using LIMIT => 'NONE'), the entry from > hbase:quota table is not completely removed. For e.g. see below: > {noformat} > hbase(main):005:0> create 't2','cf1' > Created table t2 > Took 0.8000 seconds > => Hbase::Table - t2 > hbase(main):006:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => > '10M/sec' > Took 0.1024 seconds > hbase(main):007:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => > REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0622 seconds > hbase(main):008:0> scan 'hbase:quota' > ROWCOLUMN+CELL > t.t2 column=q:s, timestamp=1531513014463, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80 >\x05 \x02 > 1 row(s) > Took 0.0453 seconds > hbase(main):009:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => 'NONE' > Took 0.0097 seconds > hbase(main):010:0> list_quotas > OWNER QUOTAS > 0 row(s) > Took 0.0338 seconds > hbase(main):011:0> scan 'hbase:quota' > ROWCOLUMN+CELL > t.t2 column=q:s, timestamp=1531513039505, > value=PBUF\x12\x00 > 1 row(s) > Took 0.0066 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
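[Editor's note] The leftover cell in the transcript above ({{value=PBUF\x12\x00}}) is a quota whose throttle was removed but whose row was never deleted: "PBUF" is the 4-byte protobuf magic HBase prepends, and \x12\x00 is an empty nested message. A hypothetical sketch -- not the actual QuotaUtil code -- of the cleanup rule the fix implies: when the remaining quota payload is empty, delete the row instead of writing the empty value back.

```java
// Hypothetical sketch with an illustrative emptiness heuristic: a payload of
// at most the 4-byte "PBUF" magic plus an empty nested message (\x12\x00)
// carries no quota settings, so the hbase:quota row should be deleted rather
// than rewritten.
public class QuotaCleanup {
  static boolean isEmptyQuota(byte[] cellValue) {
    // 4-byte magic + tag byte + zero length = nothing meaningful left
    return cellValue == null || cellValue.length <= 6;
  }

  enum Action { PUT_VALUE, DELETE_ROW }

  static Action actionFor(byte[] newCellValue) {
    return isEmptyQuota(newCellValue) ? Action.DELETE_ROW : Action.PUT_VALUE;
  }
}
```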
[jira] [Updated] (HBASE-21006) Balancer - data locality drops 30-40% across all nodes after every cluster-wide rolling restart, not migrating regions back to original RegionServers?
[ https://issues.apache.org/jira/browse/HBASE-21006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated HBASE-21006: Summary: Balancer - data locality drops 30-40% across all nodes after every cluster-wide rolling restart, not migrating regions back to original RegionServers? (was: Balancer - data locality drops 30-40% after each cluster-wide rolling restart, not migrating regions back to original RegionServers?) > Balancer - data locality drops 30-40% across all nodes after every > cluster-wide rolling restart, not migrating regions back to original > RegionServers? > -- > > Key: HBASE-21006 > URL: https://issues.apache.org/jira/browse/HBASE-21006 > Project: HBase > Issue Type: Improvement > Components: Balancer >Affects Versions: 1.1.2 > Environment: HDP 2.6 >Reporter: Hari Sekhon >Priority: Major > > After doing rolling restarts of my HBase cluster the data locality drops by > 30-40% every time which implies the stochastic balancer is not optimizing for > data locality enough, at least not under the circumstance of rolling > restarts, and that it must not be balancing the regions back to their > original RegionServers. > The stochastic balancer is supposed to take data locality into account but > if this is the case, surely it should move regions back to their original > RegionServers and data locality should return back to around where it was, > not drop by 30-40% every time I need to do some tuning and a rolling > restart. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20896) Port HBASE-20866 to branch-1 and branch-1.4
[ https://issues.apache.org/jira/browse/HBASE-20896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568486#comment-16568486 ] Andrew Purtell commented on HBASE-20896: +1 for branch-1 and branch-1.4, looking good > Port HBASE-20866 to branch-1 and branch-1.4 > > > Key: HBASE-20896 > URL: https://issues.apache.org/jira/browse/HBASE-20896 > Project: HBase > Issue Type: Sub-task > Components: Client, scan >Reporter: Andrew Purtell >Assignee: Vikas Vishwakarma >Priority: Major > Labels: perfomance > Fix For: 1.5.0, 1.4.7 > > Attachments: HBASE-20896.branch-1.4.001.patch, > HBASE-20896.branch-1.4.002.patch, HBASE-20896.branch-1.4.003.patch, > HBASE-20896.branch-1.4.004.patch, HBASE-20896.branch-1.4.005.patch, > HBASE-20896.branch-1.4.006.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)