[jira] [Commented] (HBASE-26398) CellCounter fails for large tables filling up local disk

2021-10-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435746#comment-17435746
 ] 

Hudson commented on HBASE-26398:


Results for branch branch-2.4
[build #226 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/226/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/226/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/226/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/226/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/226/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> CellCounter fails for large tables filling up local disk
> 
>
> Key: HBASE-26398
> URL: https://issues.apache.org/jira/browse/HBASE-26398
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 2.2.7, 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.8
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 2.5.0, 2.2.8, 3.0.0-alpha-2, 2.3.8, 2.4.9
>
>
> CellCounter dumps all cell coordinates into its output, which can become huge.
> The spill can fill the local disk on the reducer. 
> CellCounter hardcodes *mapreduce.job.reduces* to *1*, so it is not possible 
> to use multiple reducers to get around this.
> Fixing this is easy: by not hardcoding *mapreduce.job.reduces*, it still 
> defaults to 1 but can be overridden by the user. 
> CellCounter also generates two extra records with constant keys for each 
> cell, which have to be processed by the reducer.
> Even with multiple reducers, these (1/3 of the total records) will go to the 
> same reducer, which can also fill up the disk.
> This can be fixed by adding a Combiner to the Mapper, which sums the counter 
> records, thereby reducing the Mapper output records to 1/3 of their previous 
> amount, which can be evenly distributed between the reducers.
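
A minimal sketch of the combiner idea described above, assuming the mapper emits
(Text, LongWritable) counter records; the class name and types are illustrative,
not the actual patch:

{code:java}
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical combiner: sums counter records per key on the map side, so each
// key reaches the reducers as one partial sum instead of one record per cell.
public class CellCounterCombinerSketch
    extends Reducer<Text, LongWritable, Text, LongWritable> {

  private final LongWritable result = new LongWritable();

  @Override
  protected void reduce(Text key, Iterable<LongWritable> values, Context context)
      throws IOException, InterruptedException {
    long sum = 0;
    for (LongWritable value : values) {
      sum += value.get();
    }
    result.set(sum);
    context.write(key, result);
  }
}
{code}

Summing is associative and commutative, so attaching this with job.setCombinerClass(...)
does not change the final counts.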



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #3803: HBASE-26304: Reflect out of band locality improvements in metrics and balancer

2021-10-28 Thread GitBox


Apache-HBase commented on pull request #3803:
URL: https://github.com/apache/hbase/pull/3803#issuecomment-954266385


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   1m  2s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 15s |  master passed  |
   | +1 :green_heart: |  compile  |   2m  9s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m 15s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m  5s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  6s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m  6s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m  8s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 14s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |  11m  3s |  hbase-balancer in the patch 
passed.  |
   | -1 :x: |  unit  | 206m  6s |  hbase-server in the patch failed.  |
   |  |   | 259m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |----------------:|:---------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3803 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux b3e7f70ab740 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 60254bc184 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/testReport/
 |
   | Max. process+thread count | 2521 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-balancer hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] bbeaudreault commented on pull request #3803: HBASE-26304: Reflect out of band locality improvements in metrics and balancer

2021-10-28 Thread GitBox


bbeaudreault commented on pull request #3803:
URL: https://github.com/apache/hbase/pull/3803#issuecomment-954260825


   Not sure what's up with the HBase bot, but the spotbugs failure is for 
`master` branch, not my branch. The unit test failures are unrelated; they look 
like timeouts.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3803: HBASE-26304: Reflect out of band locality improvements in metrics and balancer

2021-10-28 Thread GitBox


Apache-HBase commented on pull request #3803:
URL: https://github.com/apache/hbase/pull/3803#issuecomment-954218474


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   0m 25s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 47s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 47s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 10s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 46s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 46s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 19s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 47s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   6m 55s |  hbase-balancer in the patch 
passed.  |
   | -1 :x: |  unit  | 148m  1s |  hbase-server in the patch failed.  |
   |  |   | 190m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |----------------:|:---------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3803 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux f35a7ae4a15f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 60254bc184 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/testReport/
 |
   | Max. process+thread count | 3540 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-balancer hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-25983) javadoc generation fails on openjdk-11.0.11+9

2021-10-28 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435654#comment-17435654
 ] 

Mike Drob commented on HBASE-25983:
---

We're likely looking at YETUS-557 for this.

> javadoc generation fails on openjdk-11.0.11+9
> -
>
> Key: HBASE-25983
> URL: https://issues.apache.org/jira/browse/HBASE-25983
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.4.3
> Environment: maven - 3.5.4 and 3.6.2
> java - openjdk 11.0.11+9
> centos6
> hbase - 2.4.3
>Reporter: Bryan Beaudreault
>Priority: Major
>
> I'm trying to build javadoc for HBase 2.4.3 on jdk11. The command I'm running 
> is as follows:
> {code:java}
> JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.11.0.9-2.el8_4.x86_64/  mvn 
> -Phadoop-3.0 -Phadoop.profile=3.0 -Dhadoop-three.version=3.2.2 
> -Dhadoop.guava.version=27.0-jre -Dslf4j.version=1.7.25 
> -Djetty.version=9.3.29.v20201019 -Dzookeeper.version=3.5.7 -DskipTests 
> -Dcheckstyle.skip=true site{code}
> I've tried this with maven 3.5.4 and 3.6.2. Based on JAVA_HOME above, the JDK is 
> 11.0.11+9.
> The error is as follows:
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.7.1:site (default-site) on 
> project hbase: Error generating maven-javadoc-plugin:3.2.0:aggregate-no-fork 
> report:
>  [ERROR] Exit code: 1 - javadoc: warning - The old Doclet and Taglet APIs in 
> the packages
>  [ERROR] com.sun.javadoc, com.sun.tools.doclets and their implementations
>  [ERROR] are planned to be removed in a future JDK release. These
>  [ERROR] components have been superseded by the new APIs in 
> jdk.javadoc.doclet.
>  [ERROR] Users are strongly recommended to migrate to the new APIs.
>  [ERROR] javadoc: error - invalid flag: -author
>  [ERROR]
>  [ERROR] Command line was: 
> /usr/lib/jvm/java-11-openjdk-11.0.11.0.9-2.el8_4.x86_64/bin/javadoc -J-Xmx2G 
> @options @packages
>  [ERROR]
>  [ERROR] Refer to the generated Javadoc files in 
> '/hbase/rpm/build/BUILD/hbase-2.4.3/target/site/apidocs' dir.
>  [ERROR] -> [Help 1]
>  [ERROR]
>  [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
>  [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>  [ERROR]
>  [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
>  [ERROR] [Help 1]{code}
> I believe this is due to the yetus doclet 
> org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet. 
> Commenting this doclet out from the userapi and testuserapi reportSets in 
> pom.xml fixes the build.
>  
>  I noticed hbase 2.4.3 depends on audience-annotations 0.5.0, which is very 
> old. I tried updating to 0.13.0, but that did not help. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #3786: HBASE-26271: Cleanup the broken store files under data directory

2021-10-28 Thread GitBox


Apache-HBase commented on pull request #3786:
URL: https://github.com/apache/hbase/pull/3786#issuecomment-954164774


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-26067 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 54s |  HBASE-26067 passed  |
   | +1 :green_heart: |  compile  |   1m 33s |  HBASE-26067 passed  |
   | +1 :green_heart: |  shadedjars  |  10m  1s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  HBASE-26067 passed  |
   | -0 :warning: |  patch  |  11m  5s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 55s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 239m 57s |  hbase-server in the patch failed.  |
   |  |   | 279m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |----------------:|:---------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3786/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3786 |
   | JIRA Issue | HBASE-26271 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 0cb6e6a95b9b 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 
01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-26067 / 3240b4b39c |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3786/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3786/2/testReport/
 |
   | Max. process+thread count | 3362 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3786/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3786: HBASE-26271: Cleanup the broken store files under data directory

2021-10-28 Thread GitBox


Apache-HBase commented on pull request #3786:
URL: https://github.com/apache/hbase/pull/3786#issuecomment-954145933


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   1m  2s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-26067 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 36s |  HBASE-26067 passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  HBASE-26067 passed  |
   | +1 :green_heart: |  shadedjars  |   9m  9s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  HBASE-26067 passed  |
   | -0 :warning: |  patch  |  10m  0s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 13s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 218m 21s |  hbase-server in the patch passed.  
|
   |  |   | 251m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |----------------:|:---------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3786/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3786 |
   | JIRA Issue | HBASE-26271 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 2d61df0298f6 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-26067 / 3240b4b39c |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3786/2/testReport/
 |
   | Max. process+thread count | 3889 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3786/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3803: HBASE-26304: Reflect out of band locality improvements in metrics and balancer

2021-10-28 Thread GitBox


Apache-HBase commented on pull request #3803:
URL: https://github.com/apache/hbase/pull/3803#issuecomment-954118087


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  4s |  master passed  |
   | +1 :green_heart: |  compile  |   4m 34s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  master passed  |
   | +0 :ok: |  refguide  |   3m 25s |  branch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   | -1 :x: |  spotbugs  |   0m 44s |  hbase-common in master has 1 extant 
spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 31s |  the patch passed  |
   | -0 :warning: |  javac  |   0m 32s |  hbase-balancer generated 1 new + 18 
unchanged - 0 fixed = 19 total (was 18)  |
   | -0 :warning: |  checkstyle  |   0m 15s |  hbase-balancer: The patch 
generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0)  |
   | -0 :warning: |  checkstyle  |   1m  4s |  hbase-server: The patch 
generated 8 new + 62 unchanged - 0 fixed = 70 total (was 62)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +0 :ok: |  refguide  |   3m 23s |  patch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   | +1 :green_heart: |  hadoopcheck  |  19m 14s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   3m 57s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 40s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  64m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |----------------:|:---------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3803 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile refguide xml |
   | uname | Linux 3e4a3cefe089 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 60254bc184 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | refguide | 
https://nightlies.apache.org/hbase/HBase/HBase-PreCommit-GitHub-PR/PR-3803/2/yetus-general-check/output/branch-site/book.html
 |
   | spotbugs | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/artifact/yetus-general-check/output/branch-spotbugs-hbase-common-warnings.html
 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/artifact/yetus-general-check/output/diff-compile-javac-hbase-balancer.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-balancer.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | refguide | 
https://nightlies.apache.org/hbase/HBase/HBase-PreCommit-GitHub-PR/PR-3803/2/yetus-general-check/output/patch-site/book.html
 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-balancer hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3803/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:

[jira] [Commented] (HBASE-26398) CellCounter fails for large tables filling up local disk

2021-10-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435615#comment-17435615
 ] 

Hudson commented on HBASE-26398:


Results for branch branch-2
[build #379 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> CellCounter fails for large tables filling up local disk
> 
>
> Key: HBASE-26398
> URL: https://issues.apache.org/jira/browse/HBASE-26398
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 2.2.7, 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.8
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 2.5.0, 2.2.8, 3.0.0-alpha-2, 2.3.8, 2.4.9
>
>
> CellCounter dumps all cell coordinates into its output, which can become huge.
> The spill can fill the local disk on the reducer. 
> CellCounter hardcodes *mapreduce.job.reduces* to *1*, so it is not possible 
> to use multiple reducers to get around this.
> Fixing this is easy: by not hardcoding *mapreduce.job.reduces*, it still 
> defaults to 1 but can be overridden by the user. 
> CellCounter also generates two extra records with constant keys for each 
> cell, which have to be processed by the reducer.
> Even with multiple reducers, these (1/3 of the total records) will go to the 
> same reducer, which can also fill up the disk.
> This can be fixed by adding a Combiner to the Mapper, which sums the counter 
> records, thereby reducing the Mapper output records to 1/3 of their previous 
> amount, which can be evenly distributed between the reducers.
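
For the reducer-count part of the fix, a hedged sketch (not the actual patch) of a
job setup that keeps *mapreduce.job.reduces* user-controllable while still defaulting
to 1, and attaches a summing combiner:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer;

// Sketch only: class and job names are illustrative. The key point is setIfUnset
// instead of set, so a user-supplied -Dmapreduce.job.reduces=N is respected.
public class CellCounterJobSetupSketch {
  public static Job createSubmittableJob(Configuration conf, String tableName)
      throws Exception {
    conf.setIfUnset("mapreduce.job.reduces", "1"); // default stays 1, but overridable
    Job job = Job.getInstance(conf, "CellCounter_" + tableName);
    job.setCombinerClass(LongSumReducer.class);    // pre-sums counter records per key
    return job;
  }
}
{code}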



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26392) Update ClassSize.BYTE_BUFFER for JDK17

2021-10-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435613#comment-17435613
 ] 

Hudson commented on HBASE-26392:


Results for branch branch-2
[build #379 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Update ClassSize.BYTE_BUFFER for JDK17
> --
>
> Key: HBASE-26392
> URL: https://issues.apache.org/jira/browse/HBASE-26392
> Project: HBase
>  Issue Type: Bug
>  Components: java, util
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.8
>
> Attachments: Java17 Buffer.png, Java8 Buffer.png
>
>
> In JDK17, the implementation of Buffer.java has changed slightly, which makes the 
> heap size of BYTE_BUFFER different from earlier JDKs. This makes related UTs fail.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26353) Support loadable dictionaries in hbase-compression-zstd

2021-10-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435614#comment-17435614
 ] 

Hudson commented on HBASE-26353:


Results for branch branch-2
[build #379 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/379/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Support loadable dictionaries in hbase-compression-zstd
> ---
>
> Key: HBASE-26353
> URL: https://issues.apache.org/jira/browse/HBASE-26353
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2
>
>
> ZStandard supports initialization of compressors and decompressors with a 
> precomputed dictionary, which can dramatically improve and speed up 
> compression of tables with small values. For more details, please see [The 
> Case For Small Data 
> Compression|https://github.com/facebook/zstd#the-case-for-small-data-compression].
>  
> If a table is going to have a lot of small values and the user can put 
> together a representative set of files that can be used to train a dictionary 
> for compressing those values, a dictionary can be trained with the {{zstd}} 
> command line utility, available in any zstandard package for your favorite OS:
> Training:
> {noformat}
> $ zstd --maxdict=1126400 --train-fastcover=shrink \
> -o mytable.dict training_files/*
> Trying 82 different sets of parameters
> ...
> k=674  
> d=8
> f=20
> steps=40
> split=75
> accel=1
> Save dictionary of size 1126400 into file mytable.dict
> {noformat}
> Deploy the dictionary file to HDFS or S3, etc.
> Create the table:
> {noformat}
> hbase> create "mytable", 
>   ... ,
>   CONFIGURATION => {
> 'hbase.io.compress.zstd.level' => '6',
> 'hbase.io.compress.zstd.dictionary' => 'hdfs://nn/zdicts/mytable.dict'
>   }
> {noformat}
> Now start storing data. Compression results even for small values will be 
> excellent.
> Note: Beware, if the dictionary is lost, the data will not be decompressible.
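
The shell example above can also be expressed through the Java Admin API. A hedged
sketch, with the properties set as column-family configuration rather than the
table-level CONFIGURATION shown above (table name, family name and dictionary path
are placeholders):

{code:java}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: create a ZSTD-compressed table whose column family carries the same
// compression properties used in the shell example.
public class CreateZstdDictTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("mytable"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
          .setCompressionType(Compression.Algorithm.ZSTD)
          .setConfiguration("hbase.io.compress.zstd.level", "6")
          .setConfiguration("hbase.io.compress.zstd.dictionary", "hdfs://nn/zdicts/mytable.dict")
          .build())
        .build());
    }
  }
}
{code}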



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] bbeaudreault commented on a change in pull request #3803: HBASE-26304: Reflect out of band locality improvements in metrics and balancer

2021-10-28 Thread GitBox


bbeaudreault commented on a change in pull request #3803:
URL: https://github.com/apache/hbase/pull/3803#discussion_r738630776



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/InputStreamBlockDistribution.java
##
@@ -0,0 +1,91 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HDFSBlocksDistribution;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hdfs.client.HdfsDataInputStream;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Computes the HDFSBlockDistribution for a file based on the underlying 
located blocks
+ * for an HdfsDataInputStream reading that file. This computation may involve 
a call to
+ * the namenode, so the value is cached based on
+ * {@link #HBASE_LOCALITY_INPUTSTREAM_DERIVE_CACHE_PERIOD}.
+ */
+@InterfaceAudience.Private
+public class InputStreamBlockDistribution {

Review comment:
   I did not add a test for this class, because it's so simple in nature. I 
did add a test for the FSUtils change, which this class mostly delegates to.  
If someone would prefer I add a test here as well please let me know.
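
For readers following the thread, a rough sketch of deriving an HDFSBlocksDistribution
from an open HdfsDataInputStream, mirroring the class javadoc quoted above; this is an
illustration, not the code under review:

```java
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.HDFSBlocksDistribution;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

// Sketch: build an HDFSBlocksDistribution from the located blocks of an open stream,
// weighting each host by the block size.
public final class StreamBlockDistributionSketch {
  static HDFSBlocksDistribution compute(HdfsDataInputStream in) throws IOException {
    HDFSBlocksDistribution distribution = new HDFSBlocksDistribution();
    // getAllBlocks() may involve the namenode, hence the caching period mentioned above
    List<LocatedBlock> blocks = in.getAllBlocks();
    for (LocatedBlock block : blocks) {
      DatanodeInfo[] locations = block.getLocations();
      String[] hosts = new String[locations.length];
      for (int i = 0; i < locations.length; i++) {
        hosts[i] = locations[i].getHostName();
      }
      distribution.addHostsAndBlockWeight(hosts, block.getBlockSize());
    }
    return distribution;
  }
}
```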




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26304) Reflect out-of-band locality improvements in served requests

2021-10-28 Thread Bryan Beaudreault (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435597#comment-17435597
 ] 

Bryan Beaudreault commented on HBASE-26304:
---

I was able to get the 3rd (arguably ideal) option to work. I've been running it 
in one of our internal test clusters and it's been working great.

I updated the original description and pushed an updated PR based on the final 
approach taken, so that people don't need to read the above wall of text :)

> Reflect out-of-band locality improvements in served requests
> 
>
> Key: HBASE-26304
> URL: https://issues.apache.org/jira/browse/HBASE-26304
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Major
>
> Once the LocalityHealer has improved locality of a StoreFile (by moving 
> blocks onto the correct host), the Reader's DFSInputStream and Region's 
> localityIndex metric must be refreshed. Without refreshing the 
> DFSInputStream, the improved locality will not improve latencies. In fact, 
> the DFSInputStream may try to fetch blocks that have moved, resulting in a 
> ReplicaNotFoundException. This is automatically retried, but the retry will 
> temporarily increase long tail latencies relative to configured backoff 
> strategy.
> In the original LocalityHealer design, I created a new 
> RefreshHDFSBlockDistribution RPC on the RegionServer. This RPC accepts a list 
> of region names and, for each region store, re-opens the underlying StoreFile 
> if the locality has changed. This implementation was complicated both in 
> integrating callbacks into the HDFS Dispatcher and in terms of safely 
> re-opening StoreFiles without impacting reads or caches. 
> In working to port the LocalityHealer to the Apache projects, I'm taking a 
> different approach:
>  * The part of the LocalityHealer that moves blocks will be an HDFS project 
> contribution
>  * As such, the DFSClient should be able to more gracefully recover from 
> block moves.
>  * Additionally, HBase has some caches of block locations for locality 
> reporting and the balancer. Those need to be kept up-to-date.
> The DFSClient improvements are covered in 
> https://issues.apache.org/jira/browse/HDFS-16261. As such, this issue becomes 
> about updating HBase's block location caches.
> I considered a few different approaches, but the most elegant one I could 
> come up with was to tie the HDFSBlockDistribution metrics directly to the 
> underlying DFSInputStream of each StoreFile's initialReader. That way, our 
> locality metrics are identically representing the block allocations that our 
> reads are going through. This also means that our locality metrics will 
> naturally adjust as the DFSInputStream adjusts to block moves.
> Once we have accurate locality metrics on the regionserver, the Balancer's 
> cache can easily be invalidated via our usual heartbeat methods. 
> RegionServers report to the HMaster periodically, which keeps a 
> ClusterMetrics method up to date. Right before each balancer invocation, the 
> balancer is updated with the latest ClusterMetrics. At this time, we compare 
> the old ClusterMetrics to the new, and invalidate the caches for any regions 
> whose locality has changed.
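
As a reading aid for the last paragraph of the description, a hedged sketch of the
"compare old ClusterMetrics to new" step using the public metrics API; the invalidation
hook is a placeholder, not the balancer's actual method:

{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.RegionMetrics;
import org.apache.hadoop.hbase.ServerMetrics;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: collect per-region data locality from two ClusterMetrics snapshots and flag
// regions whose locality changed, so their cached block distributions can be refreshed
// before the next balancer run.
public final class LocalityDiffSketch {

  static Map<String, Float> localityByRegion(ClusterMetrics metrics) {
    Map<String, Float> locality = new HashMap<>();
    for (ServerMetrics server : metrics.getLiveServerMetrics().values()) {
      for (RegionMetrics region : server.getRegionMetrics().values()) {
        locality.put(Bytes.toStringBinary(region.getRegionName()), region.getDataLocality());
      }
    }
    return locality;
  }

  static void invalidateChanged(ClusterMetrics oldMetrics, ClusterMetrics newMetrics) {
    Map<String, Float> before = localityByRegion(oldMetrics);
    localityByRegion(newMetrics).forEach((region, locality) -> {
      Float previous = before.get(region);
      if (previous == null || Float.compare(previous, locality) != 0) {
        // placeholder: this is where the balancer's cached distribution would be invalidated
        System.out.println("Locality changed for " + region + ", refreshing cache");
      }
    });
  }
}
{code}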



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26304) Reflect out-of-band locality improvements in served requests

2021-10-28 Thread Bryan Beaudreault (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Beaudreault updated HBASE-26304:
--
Description: 
Once the LocalityHealer has improved locality of a StoreFile (by moving blocks 
onto the correct host), the Reader's DFSInputStream and Region's localityIndex 
metric must be refreshed. Without refreshing the DFSInputStream, the improved 
locality will not improve latencies. In fact, the DFSInputStream may try to 
fetch blocks that have moved, resulting in a ReplicaNotFoundException. This is 
automatically retried, but the retry will temporarily increase long tail 
latencies relative to configured backoff strategy.

In the original LocalityHealer design, I created a new 
RefreshHDFSBlockDistribution RPC on the RegionServer. This RPC accepts a list 
of region names and, for each region store, re-opens the underlying StoreFile 
if the locality has changed. This implementation was complicated both in 
integrating callbacks into the HDFS Dispatcher and in terms of safely 
re-opening StoreFiles without impacting reads or caches. 

In working to port the LocalityHealer to the Apache projects, I'm taking a 
different approach:
 * The part of the LocalityHealer that moves blocks will be an HDFS project 
contribution
 * As such, the DFSClient should be able to more gracefully recover from block 
moves.
 * Additionally, HBase has some caches of block locations for locality 
reporting and the balancer. Those need to be kept up-to-date.

The DFSClient improvements are covered in 
https://issues.apache.org/jira/browse/HDFS-16261. As such, this issue becomes 
about updating HBase's block location caches.

I considered a few different approaches, but the most elegant one I could come 
up with was to tie the HDFSBlockDistribution metrics directly to the underlying 
DFSInputStream of each StoreFile's initialReader. That way, our locality 
metrics are identically representing the block allocations that our reads are 
going through. This also means that our locality metrics will naturally adjust 
as the DFSInputStream adjusts to block moves.

Once we have accurate locality metrics on the regionserver, the Balancer's 
cache can easily be invalidated via our usual heartbeat methods. RegionServers 
report to the HMaster periodically, which keeps a ClusterMetrics method up to 
date. Right before each balancer invocation, the balancer is updated with the 
latest ClusterMetrics. At this time, we compare the old ClusterMetrics to the 
new, and invalidate the caches for any regions whose locality has changed.

  was:
Once the LocalityHealer has improved locality of a StoreFile (by moving blocks 
onto the correct host), the Reader's DFSInputStream and Region's localityIndex 
metric must be refreshed. Without refreshing the DFSInputStream, the improved 
locality will not improve latencies. In fact, the DFSInputStream may try to 
fetch blocks that have moved, resulting in a ReplicaNotFoundException. This is 
automatically retried, but the retry will temporarily increase long tail 
latencies relative to configured backoff strategy.

 

In the original LocalityHealer design, I created a new 
RefreshHDFSBlockDistribution RPC on the RegionServer. This RPC accepts a list 
of region names and, for each region store, re-opens the underlying StoreFile 
if the locality has changed. This implementation was complicated both in 
integrating callbacks into the HDFS Dispatcher and in terms of safely 
re-opening StoreFiles without impacting reads or caches. 

In working to port the LocalityHealer I'm taking a different approach:
 * The part of the LocalityHealer that moves blocks will be an HDFS project 
contribution
 * As such, the DFSClient should be able to more gracefully recover from block 
moves.
 * Additionally, HBase has some caches of block locations for locality 
reporting and the balancer. Those need to be kept up-to-date.

I will submit a PR with that implementation, but I am also investigating other 
avenues. For example, I noticed 
https://issues.apache.org/jira/browse/HDFS-15119 which doesn't seem ideal but 
maybe can be improved as an automatic lower-level handling of block moves.


> Reflect out-of-band locality improvements in served requests
> 
>
> Key: HBASE-26304
> URL: https://issues.apache.org/jira/browse/HBASE-26304
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Major
>
> Once the LocalityHealer has improved locality of a StoreFile (by moving 
> blocks onto the correct host), the Reader's DFSInputStream and Region's 
> localityIndex metric must be refreshed. Without refreshing the 
> DFSInputStream, the improved locality will not improve latencies. In fact, 
> the DFSInputStream may try to fetch 

[GitHub] [hbase] clarax commented on pull request #3805: HBASE-26311 Balancer gets stuck in cohosted replica distribution

2021-10-28 Thread GitBox


clarax commented on pull request #3805:
URL: https://github.com/apache/hbase/pull/3805#issuecomment-954036650


   All precommit checks passed. Unrelated pipeline failure.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] clarax commented on pull request #3804: HBASE-26311 Balancer gets stuck in cohosted replica distribution

2021-10-28 Thread GitBox


clarax commented on pull request #3804:
URL: https://github.com/apache/hbase/pull/3804#issuecomment-954036165


   All precommit checks passed. Unrelated pipeline failure.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3786: HBASE-26271: Cleanup the broken store files under data directory

2021-10-28 Thread GitBox


Apache-HBase commented on pull request #3786:
URL: https://github.com/apache/hbase/pull/3786#issuecomment-954004625


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-26067 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 15s |  HBASE-26067 passed  |
   | +1 :green_heart: |  compile  |   3m 17s |  HBASE-26067 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  HBASE-26067 passed  |
   | +1 :green_heart: |  spotbugs  |   2m 22s |  HBASE-26067 passed  |
   | -0 :warning: |  patch  |   2m 31s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 13s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 32s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 32s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m 20s |  hbase-server: The patch 
generated 21 new + 62 unchanged - 0 fixed = 83 total (was 62)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  22m 15s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 20s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  54m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |----------------:|:---------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3786/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3786 |
   | JIRA Issue | HBASE-26271 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 5ec5a3338182 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-26067 / 3240b4b39c |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3786/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3786/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] wchevreuil commented on a change in pull request #3786: HBASE-26271: Cleanup the broken store files under data directory

2021-10-28 Thread GitBox


wchevreuil commented on a change in pull request #3786:
URL: https://github.com/apache/hbase/pull/3786#discussion_r738496846



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
##
@@ -1895,6 +1900,22 @@ private void initializeThreads() {
   this.storefileRefresher = new 
StorefileRefresherChore(storefileRefreshPeriod,
   onlyMetaRefresh, this, this);
 }
+
+int fileBasedStoreFileCleanerPeriod  = conf.getInt(
+  FileBasedStoreFileCleaner.FILEBASED_STOREFILE_CLEANER_PERIOD,
+  FileBasedStoreFileCleaner.DEFAULT_FILEBASED_STOREFILE_CLEANER_PERIOD);
+int fileBasedStoreFileCleanerDelay  = conf.getInt(
+  FileBasedStoreFileCleaner.FILEBASED_STOREFILE_CLEANER_DELAY,
+  FileBasedStoreFileCleaner.DEFAULT_FILEBASED_STOREFILE_CLEANER_DELAY);
+double fileBasedStoreFileCleanerDelayJitter = conf.getDouble(
+  FileBasedStoreFileCleaner.FILEBASED_STOREFILE_CLEANER_DELAY_JITTER,
+  
FileBasedStoreFileCleaner.DEFAULT_FILEBASED_STOREFILE_CLEANER_DELAY_JITTER);
+double jitterRate = (RandomUtils.nextDouble() - 0.5D) * 
fileBasedStoreFileCleanerDelayJitter;
+long jitterValue = Math.round(fileBasedStoreFileCleanerDelay * jitterRate);

Review comment:
   If you feel it's cleaner this way, I'm ok with that.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3800: HBASE-26347 Support detect and exclude slow DNs in fan-out of WAL

2021-10-28 Thread GitBox


Apache-HBase commented on pull request #3800:
URL: https://github.com/apache/hbase/pull/3800#issuecomment-953933699


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 25s |  master passed  |
   | +1 :green_heart: |  compile  |   4m 16s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 45s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 16s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 10s |  the patch passed  |
   | -0 :warning: |  javac  |   0m 31s |  hbase-asyncfs generated 8 new + 22 
unchanged - 7 fixed = 30 total (was 29)  |
   | -0 :warning: |  checkstyle  |   0m 13s |  hbase-asyncfs: The patch 
generated 1 new + 1 unchanged - 1 fixed = 2 total (was 2)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  22m  0s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | -1 :x: |  spotbugs  |   0m 51s |  hbase-asyncfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 26s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  59m  0s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hbase-asyncfs |
   |  |  Nullcheck of excludeDatanodeManager at line 468 of value previously 
dereferenced in 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createOutput(DistributedFileSystem,
 String, boolean, boolean, short, long, EventLoopGroup, Class, 
StreamSlowMonitor)  At FanOutOneBlockAsyncDFSOutputHelper.java:468 of value 
previously dereferenced in 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createOutput(DistributedFileSystem,
 String, boolean, boolean, short, long, EventLoopGroup, Class, 
StreamSlowMonitor)  At FanOutOneBlockAsyncDFSOutputHelper.java:[line 468] |
   |  |  Should 
org.apache.hadoop.hbase.io.asyncfs.monitor.StreamSlowMonitor$PacketAckData be a 
_static_ inner class?  At StreamSlowMonitor.java:inner class?  At 
StreamSlowMonitor.java:[lines 129-144] |
   
   
   | Subsystem | Report/Notes |
   |----------------:|:---------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3800 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 12ff38fd70c8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 60254bc184 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/3/artifact/yetus-general-check/output/diff-compile-javac-hbase-asyncfs.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-asyncfs.txt
 |
   | spotbugs | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/3/artifact/yetus-general-check/output/new-spotbugs-hbase-asyncfs.html
 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-asyncfs hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/3/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3800: HBASE-26347 Support detect and exclude slow DNs in fan-out of WAL

2021-10-28 Thread GitBox


Apache-HBase commented on pull request #3800:
URL: https://github.com/apache/hbase/pull/3800#issuecomment-953926661


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  1s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 16s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 42s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m 11s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m  3s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 40s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 40s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 14s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 41s |  hbase-asyncfs in the patch passed. 
 |
   | -1 :x: |  unit  |  12m 33s |  hbase-server in the patch failed.  |
   |  |   |  51m 26s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3800 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 49d579f84f93 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 60254bc184 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/3/testReport/
 |
   | Max. process+thread count | 751 (vs. ulimit of 3) |
   | modules | C: hbase-asyncfs hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/3/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3800: HBASE-26347 Support detect and exclude slow DNs in fan-out of WAL

2021-10-28 Thread GitBox


Apache-HBase commented on pull request #3800:
URL: https://github.com/apache/hbase/pull/3800#issuecomment-953917979


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 49s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 13s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 53s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 22s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 32s |  hbase-asyncfs in the patch passed. 
 |
   | -1 :x: |  unit  |   8m 19s |  hbase-server in the patch failed.  |
   |  |   |  41m 41s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3800 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 1647c465f356 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 60254bc184 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/3/testReport/
 |
   | Max. process+thread count | 984 (vs. ulimit of 3) |
   | modules | C: hbase-asyncfs hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/3/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] wchevreuil commented on a change in pull request #3786: HBASE-26271: Cleanup the broken store files under data directory

2021-10-28 Thread GitBox


wchevreuil commented on a change in pull request #3786:
URL: https://github.com/apache/hbase/pull/3786#discussion_r738440217



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##
@@ -537,4 +547,17 @@ protected InternalScanner createScanner(HStore store, 
ScanInfo scanInfo,
 return new StoreScanner(store, scanInfo, scanners, smallestReadPoint, 
earliestPutTs,
 dropDeletesFromRow, dropDeletesToRow);
   }
+
+  public List getCompactionTargets(){
+if (writer == null){

Review comment:
   I'm ok with that.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] sunhelly commented on a change in pull request #3800: HBASE-26347 Support detect and exclude slow DNs in fan-out of WAL

2021-10-28 Thread GitBox


sunhelly commented on a change in pull request #3800:
URL: https://github.com/apache/hbase/pull/3800#discussion_r738426157



##
File path: 
hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutput.java
##
@@ -136,17 +138,22 @@
 
 // should be backed by a thread safe collection
 private final Set unfinishedReplicas;
+private final long dataLength;
+private final long flushTimestamp;
+private long lastAckTimestamp = -1;
 
 public Callback(CompletableFuture future, long ackedLength,
-Collection replicas) {
+final Map replicas, long dataLength) {

Review comment:
   Thanks, I changed the type of replicas back, and used packetDataLen 
instead.

##
File path: 
hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java
##
@@ -451,15 +454,29 @@ public NameNodeException(Throwable cause) {
 
   private static FanOutOneBlockAsyncDFSOutput 
createOutput(DistributedFileSystem dfs, String src,
   boolean overwrite, boolean createParent, short replication, long 
blockSize,
-  EventLoopGroup eventLoopGroup, Class channelClass) 
throws IOException {
+  EventLoopGroup eventLoopGroup, Class channelClass,
+  StreamSlowMonitor monitor) throws IOException {
 Configuration conf = dfs.getConf();
 DFSClient client = dfs.getClient();
 String clientName = client.getClientName();
 ClientProtocol namenode = client.getNamenode();
 int createMaxRetries = conf.getInt(ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES,
   DEFAULT_ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES);
-DatanodeInfo[] excludesNodes = EMPTY_DN_ARRAY;
+ExcludeDatanodeManager excludeDatanodeManager = monitor == null ? null :
+  monitor.getExcludeDatanodeManager();
+Set toExcludeNodes = new HashSet<>();
 for (int retry = 0;; retry++) {
+  if (excludeDatanodeManager != null) {
+toExcludeNodes.addAll(excludeDatanodeManager.getExcludeDNs().keySet());
+  }
+  if (excludeDatanodeManager != null && retry > 1 && retry >= 
createMaxRetries - 1) {
+// invalid the exclude cache, to avoid not enough replicas

Review comment:
   Yes. The purpose of this design is to try adding the block without the 
excluded datanodes one last time, to avoid the situation where there are 
not enough datanodes left to choose as block targets... But the logic here 
is admittedly a bit awkward: the RS will abort in the scenario mentioned 
above, so after the RS restarts, if choosing targets had failed because of 
the exclusions, it will recover.
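
   A minimal sketch of that retry idea (illustrative only, not the patch; the allocator callback is a stand-in for the real block allocation call):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Function;

class ExcludeOnRetrySketch {
  /** allocator stands in for the real addBlock/createOutput call (assumption). */
  static <T> T createWithRetries(int maxRetries, Set<String> slowDatanodes,
      Function<Set<String>, T> allocator) {
    Set<String> toExclude = new HashSet<>(slowDatanodes);
    RuntimeException last = null;
    for (int retry = 0; retry < maxRetries; retry++) {
      // On the final attempt, drop the exclusions so a small cluster can
      // still find enough datanodes for the block, as discussed above.
      if (retry == maxRetries - 1) {
        toExclude.clear();
      }
      try {
        return allocator.apply(toExclude);
      } catch (RuntimeException e) {
        last = e;
      }
    }
    throw last != null ? last : new IllegalStateException("maxRetries must be > 0");
  }
}
```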

##
File path: 
hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/monitor/ExcludeDatanodeManager.java
##
@@ -0,0 +1,132 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.asyncfs.monitor;
+
+import java.util.Collections;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.conf.ConfigurationObserver;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hbase.thirdparty.com.google.common.cache.Cache;
+import org.apache.hbase.thirdparty.com.google.common.cache.CacheBuilder;
+
+@InterfaceAudience.Private
+public class ExcludeDatanodeManager implements ConfigurationObserver {
+  private static final Logger LOG = 
LoggerFactory.getLogger(ExcludeDatanodeManager.class);
+
+  private static final String WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT_KEY =
+"hbase.regionserver.async.wal.max.exclude.datanode.count";
+  private static final int DEFAULT_WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT = 3;
+
+  private static final String WAL_EXCLUDE_DATANODE_TTL_KEY =
+"hbase.regionserver.async.wal.exclude.datanode.info.ttl.hour";
+  private static final int DEFAULT_WAL_EXCLUDE_DATANODE_TTL = 6; // 6 hours
+
+  private Cache excludeDNsCache;
+  private final int maxExcludeDNCount;
+  private final Configuration conf;
+  private final Map 

[GitHub] [hbase] wchevreuil commented on a change in pull request #3786: HBASE-26271: Cleanup the broken store files under data directory

2021-10-28 Thread GitBox


wchevreuil commented on a change in pull request #3786:
URL: https://github.com/apache/hbase/pull/3786#discussion_r738419886



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FileBasedStoreFileCleaner.java
##
@@ -0,0 +1,191 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.io.HFileLink;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
+
+/**
+ * This Chore, every time it runs, will clear the unused HFiles in the data
+ * folder.
+ */
+@InterfaceAudience.Private public class FileBasedStoreFileCleaner extends 
ScheduledChore {
+  private static final Logger LOG = 
LoggerFactory.getLogger(FileBasedStoreFileCleaner.class);
+  public static final String FILEBASED_STOREFILE_CLEANER_ENABLED =
+  "hbase.region.filebased.storefilecleaner.enabled";
+  public static final boolean DEFAULT_FILEBASED_STOREFILE_CLEANER_ENABLED = 
false;
+  public static final String FILEBASED_STOREFILE_CLEANER_TTL =
+  "hbase.region.filebased.storefilecleaner.ttl";
+  public static final long DEFAULT_FILEBASED_STOREFILE_CLEANER_TTL = 1000 * 60 
* 60 * 12; //12h
+  public static final String FILEBASED_STOREFILE_CLEANER_DELAY =
+  "hbase.region.filebased.storefilecleaner.delay";
+  public static final int DEFAULT_FILEBASED_STOREFILE_CLEANER_DELAY = 1000 * 
60 * 60 * 2; //2h
+  public static final String FILEBASED_STOREFILE_CLEANER_DELAY_JITTER =
+  "hbase.region.filebased.storefilecleaner.delay.jitter";
+  public static final double DEFAULT_FILEBASED_STOREFILE_CLEANER_DELAY_JITTER 
= 0.25D;
+  public static final String FILEBASED_STOREFILE_CLEANER_PERIOD =
+  "hbase.region.filebased.storefilecleaner.period";
+  public static final int DEFAULT_FILEBASED_STOREFILE_CLEANER_PERIOD = 1000 * 
60 * 60 * 6; //6h
+
+  private HRegionServer regionServer;
+  private final AtomicBoolean enabled = new AtomicBoolean(true);
+  private long ttl;
+
+  public FileBasedStoreFileCleaner(final int delay, final int period, final 
Stoppable stopper, Configuration conf,
+  HRegionServer regionServer) {
+super("FileBasedStoreFileCleaner", stopper, period, delay);
+this.regionServer = regionServer;
+setEnabled(conf.getBoolean(FILEBASED_STOREFILE_CLEANER_ENABLED, 
DEFAULT_FILEBASED_STOREFILE_CLEANER_ENABLED));
+ttl = conf.getLong(FILEBASED_STOREFILE_CLEANER_TTL, 
DEFAULT_FILEBASED_STOREFILE_CLEANER_TTL);
+  }
+
+  public boolean setEnabled(final boolean enabled) {
+return this.enabled.getAndSet(enabled);
+  }
+
+  public boolean getEnabled() {
+return this.enabled.get();
+  }
+
+  @InterfaceAudience.Private
+  @Override public void chore() {
+if (getEnabled()) {
+  long start = EnvironmentEdgeManager.currentTime();
+  AtomicLong deletedFiles = new AtomicLong(0);
+  AtomicLong failedDeletes = new AtomicLong(0);
+  for (HRegion region : regionServer.getRegions()) {
+for (HStore store : region.getStores()) {
+  //only do cleanup in stores using file based storefile tracking
+  if (store.getStoreEngine().requireWritingToTmpDirFirst()) {
+continue;
+  }
+  Path storePath =
+  new Path(region.getRegionFileSystem().getRegionDir(), 
store.getColumnFamilyName());
+
+  try {
+List fsStoreFiles = 
Arrays.asList(region.getRegionFileSystem().fs.listStatus(storePath));
+fsStoreFiles.forEach(file -> cleanFileIfNeeded(file, store, 
deletedFiles, failedDeletes));
+  } catch (IOException e) {
+LOG.warn("Failed to list files in 

[jira] [Commented] (HBASE-26392) Update ClassSize.BYTE_BUFFER for JDK17

2021-10-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435415#comment-17435415
 ] 

Hudson commented on HBASE-26392:


Results for branch master
[build #426 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/General_20Nightly_20Build_20Report/]






(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Update ClassSize.BYTE_BUFFER for JDK17
> --
>
> Key: HBASE-26392
> URL: https://issues.apache.org/jira/browse/HBASE-26392
> Project: HBase
>  Issue Type: Bug
>  Components: java, util
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.8
>
> Attachments: Java17 Buffer.png, Java8 Buffer.png
>
>
> In JDK17, the implementation of Buffer.java has changed a little, which makes the 
> heap size of BYTE_BUFFER different from that in earlier JDKs. This makes the related UTs fail.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26353) Support loadable dictionaries in hbase-compression-zstd

2021-10-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435416#comment-17435416
 ] 

Hudson commented on HBASE-26353:


Results for branch master
[build #426 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/General_20Nightly_20Build_20Report/]






(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Support loadable dictionaries in hbase-compression-zstd
> ---
>
> Key: HBASE-26353
> URL: https://issues.apache.org/jira/browse/HBASE-26353
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2
>
>
> ZStandard supports initialization of compressors and decompressors with a 
> precomputed dictionary, which can dramatically improve and speed up 
> compression of tables with small values. For more details, please see [The 
> Case For Small Data 
> Compression|https://github.com/facebook/zstd#the-case-for-small-data-compression].
>  
> If a table is going to have a lot of small values and the user can put 
> together a representative set of files that can be used to train a dictionary 
> for compressing those values, a dictionary can be trained with the {{zstd}} 
> command line utility, available in any zstandard package for your favorite OS:
> Training:
> {noformat}
> $ zstd --maxdict=1126400 --train-fastcover=shrink \
> -o mytable.dict training_files/*
> Trying 82 different sets of parameters
> ...
> k=674  
> d=8
> f=20
> steps=40
> split=75
> accel=1
> Save dictionary of size 1126400 into file mytable.dict
> {noformat}
> Deploy the dictionary file to HDFS or S3, etc.
> Create the table:
> {noformat}
> hbase> create "mytable", 
>   ... ,
>   CONFIGURATION => {
> 'hbase.io.compress.zstd.level' => '6',
> 'hbase.io.compress.zstd.dictionary' => 'hdfs://nn/zdicts/mytable.dict'
>   }
> {noformat}
> Now start storing data. Compression results even for small values will be 
> excellent.
> Note: Beware, if the dictionary is lost, the data cannot be decompressed.
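
A rough Java-client equivalent of the shell example above (a sketch only, not from the issue; it assumes the standard ColumnFamilyDescriptorBuilder/TableDescriptorBuilder APIs and that per-column-family CONFIGURATION is honoured for the codec settings, whereas the shell example sets CONFIGURATION at table scope):

{noformat}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateZstdDictTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Column family "cf" and the dictionary path are placeholders from the example.
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("mytable"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
              .setCompressionType(Compression.Algorithm.ZSTD)
              .setConfiguration("hbase.io.compress.zstd.level", "6")
              .setConfiguration("hbase.io.compress.zstd.dictionary",
                  "hdfs://nn/zdicts/mytable.dict")
              .build())
          .build());
    }
  }
}
{noformat}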



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26396) Remove duplicate thread creation during migrating rsgroup

2021-10-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435414#comment-17435414
 ] 

Hudson commented on HBASE-26396:


Results for branch master
[build #426 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/General_20Nightly_20Build_20Report/]






(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove duplicate thread creation during migrating rsgroup
> -
>
> Key: HBASE-26396
> URL: https://issues.apache.org/jira/browse/HBASE-26396
> Project: HBase
>  Issue Type: Bug
>  Components: master, rsgroup
>Affects Versions: 3.0.0-alpha-2
>Reporter: Zhuoyue Huang
>Assignee: Zhuoyue Huang
>Priority: Minor
> Fix For: 3.0.0-alpha-2
>
>
> There is a thread that migrates the table rs group info from RSGroupInfo into 
> the table descriptor.
> The thread is created when RSGroupManager is initialized and is created again 
> when RSGroupStartupWorker is started.
> I think this is a bug: since this thread will not exit until all table 
> rsgroups are migrated, there is no reason for two of these threads to run together.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26304) Reflect out-of-band locality improvements in served requests

2021-10-28 Thread Bryan Beaudreault (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Beaudreault updated HBASE-26304:
--
Description: 
Once the LocalityHealer has improved locality of a StoreFile (by moving blocks 
onto the correct host), the Reader's DFSInputStream and Region's localityIndex 
metric must be refreshed. Without refreshing the DFSInputStream, the improved 
locality will not improve latencies. In fact, the DFSInputStream may try to 
fetch blocks that have moved, resulting in a ReplicaNotFoundException. This is 
automatically retried, but the retry will temporarily increase long tail 
latencies relative to configured backoff strategy.

 

In the original LocalityHealer design, I created a new 
RefreshHDFSBlockDistribution RPC on the RegionServer. This RPC accepts a list 
of region names and, for each region store, re-opens the underlying StoreFile 
if the locality has changed. This implementation was complicated both in 
integrating callbacks into the HDFS Dispatcher and in terms of safely 
re-opening StoreFiles without impacting reads or caches. 

In working to port the LocalityHealer I'm taking a different approach:
 * The part of the LocalityHealer that moves blocks will be an HDFS project 
contribution
 * As such, the DFSClient should be able to more gracefully recover from block 
moves.
 * Additionally, HBase has some caches of block locations for locality 
reporting and the balancer. Those need to be kept up-to-date.

I will submit a PR with that implementation, but I am also investigating other 
avenues. For example, I noticed 
https://issues.apache.org/jira/browse/HDFS-15119 which doesn't seem ideal but 
maybe can be improved as an automatic lower-level handling of block moves.

  was:
Once the LocalityHealer has improved locality of a StoreFile (by moving blocks 
onto the correct host), the Reader's DFSInputStream and Region's localityIndex 
metric must be refreshed. Without refreshing the DFSInputStream, the improved 
locality will not improve latencies. In fact, the DFSInputStream may try to 
fetch blocks that have moved, resulting in a ReplicaNotFoundException. This is 
automatically retried, but the retry will increase long tail latencies relative 
to configured backoff strategy.

See https://issues.apache.org/jira/browse/HDFS-16155 for an improvement in 
backoff strategy which can greatly mitigate latency impact of the missing block 
retry.

Even with that mitigation, a StoreFile is often made up of many blocks. Without 
some sort of intervention, we will continue to hit ReplicaNotFoundException 
over time as clients naturally request data from moved blocks.

In the original LocalityHealer design, I created a new 
RefreshHDFSBlockDistribution RPC on the RegionServer. This RPC accepts a list 
of region names and, for each region store, re-opens the underlying StoreFile 
if the locality has changed.

I will submit a PR with that implementation, but I am also investigating other 
avenues. For example, I noticed 
https://issues.apache.org/jira/browse/HDFS-15119 which doesn't seem ideal but 
maybe can be improved as an automatic lower-level handling of block moves.
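
For clarity, the "localityIndex" referred to above is simply the fraction of a store file's bytes whose block replicas sit on the serving host. A toy sketch of that computation (not HBase's internal API):

{noformat}
import java.util.Map;

class LocalityIndexSketch {
  /** blockBytesByHost: for each host, total bytes of block replicas stored there. */
  static float localityIndex(Map<String, Long> blockBytesByHost, String localHost,
      long totalFileBytes) {
    if (totalFileBytes == 0) {
      return 0f;
    }
    long localBytes = blockBytesByHost.getOrDefault(localHost, 0L);
    return Math.min(1f, (float) localBytes / totalFileBytes);
  }
}
{noformat}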


> Reflect out-of-band locality improvements in served requests
> 
>
> Key: HBASE-26304
> URL: https://issues.apache.org/jira/browse/HBASE-26304
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Major
>
> Once the LocalityHealer has improved locality of a StoreFile (by moving 
> blocks onto the correct host), the Reader's DFSInputStream and Region's 
> localityIndex metric must be refreshed. Without refreshing the 
> DFSInputStream, the improved locality will not improve latencies. In fact, 
> the DFSInputStream may try to fetch blocks that have moved, resulting in a 
> ReplicaNotFoundException. This is automatically retried, but the retry will 
> temporarily increase long tail latencies relative to configured backoff 
> strategy.
>  
> In the original LocalityHealer design, I created a new 
> RefreshHDFSBlockDistribution RPC on the RegionServer. This RPC accepts a list 
> of region names and, for each region store, re-opens the underlying StoreFile 
> if the locality has changed. This implementation was complicated both in 
> integrating callbacks into the HDFS Dispatcher and in terms of safely 
> re-opening StoreFiles without impacting reads or caches. 
> In working to port the LocalityHealer I'm taking a different approach:
>  * The part of the LocalityHealer that moves blocks will be an HDFS project 
> contribution
>  * As such, the DFSClient should be able to more gracefully recover from 
> block moves.
>  * Additionally, HBase has some caches of block locations for locality 
> reporting and the balancer. Those need to be 

[GitHub] [hbase] Apache-HBase commented on pull request #3804: HBASE-26311 Balancer gets stuck in cohosted replica distribution

2021-10-28 Thread GitBox


Apache-HBase commented on pull request #3804:
URL: https://github.com/apache/hbase/pull/3804#issuecomment-953664302


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 51s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 17s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   2m  0s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |  10m 29s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 48s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 48s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |  11m 13s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 250m 16s |  hbase-server in the patch passed.  
|
   |  |   | 296m 35s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3804/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3804 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 2be00ae6ac6d 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / f2c58fcf68 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3804/2/testReport/
 |
   | Max. process+thread count | 2391 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3804/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26398) CellCounter fails for large tables filling up local disk

2021-10-28 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435259#comment-17435259
 ] 

Duo Zhang commented on HBASE-26398:
---

IIRC, we have decided to EOL branch-2.2 long ago and recently we have also 
decided to EOL branch-2.3, so we do not need to consider these two branches...

> CellCounter fails for large tables filling up local disk
> 
>
> Key: HBASE-26398
> URL: https://issues.apache.org/jira/browse/HBASE-26398
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 2.2.7, 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.8
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 2.5.0, 2.2.8, 3.0.0-alpha-2, 2.3.8, 2.4.9
>
>
> CellCounter dumps all cell coordinates into its output, which can become huge.
> The spill can fill the local disk on the reducer. 
> CellCounter hardcodes *mapreduce.job.reduces* to *1*, so it is not possible 
> to use multiple reducers to get around this.
> Fixing this is easy: by not hardcoding *mapreduce.job.reduces*, it still 
> defaults to 1, but it can be overridden by the user. 
> CellCounter also generates two extra records with constant keys for each 
> cell, which have to be processed by the reducer.
> Even with multiple reducers, these (1/3 of the total records) will go to the 
> same reducer, which can also fill up the disk.
> This can be fixed by adding a Combiner to the Mapper, which sums the counter 
> records, thereby reducing the Mapper output records to 1/3 of their previous 
> amount, which can be evenly distributed between the reducers.
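
For reference, a minimal sketch (not the actual HBASE-26398 patch; the class and key/value types are illustrative) of the two ideas in the description, i.e. honouring a user-supplied mapreduce.job.reduces instead of hardcoding it, and pre-aggregating counter records with a summing Combiner:

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer;

public class CellCounterJobSketch {
  public static Job createJob(Configuration conf) throws Exception {
    Job job = Job.getInstance(conf, "cell-counter-sketch");
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(LongWritable.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    // The Combiner sums counter records per key before they leave the mapper,
    // so the constant-key records shrink to one record per mapper and key.
    job.setCombinerClass(LongSumReducer.class);
    job.setReducerClass(LongSumReducer.class);
    // Do not hardcode the reducer count; honour whatever the user configured,
    // falling back to a single reducer as before.
    job.setNumReduceTasks(conf.getInt("mapreduce.job.reduces", 1));
    return job;
  }
}
{noformat}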



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-26398) CellCounter fails for large tables filling up local disk

2021-10-28 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil resolved HBASE-26398.
--
Resolution: Fixed

Thanks for the contribution [~stoty]. Merged into master, branch-2, 
branch-2.4, branch-2.3 and branch-2.2.

> CellCounter fails for large tables filling up local disk
> 
>
> Key: HBASE-26398
> URL: https://issues.apache.org/jira/browse/HBASE-26398
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 2.2.7, 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.8
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 2.5.0, 2.2.8, 3.0.0-alpha-2, 2.3.8, 2.4.9
>
>
> CellCounter dumps all cell coordinates into its output, which can become huge.
> The spill can fill the local disk on the reducer. 
> CellCounter hardcodes *mapreduce.job.reduces* to *1*, so it is not possible 
> to use multiple reducers to get around this.
> Fixing this is easy: by not hardcoding *mapreduce.job.reduces*, it still 
> defaults to 1, but it can be overridden by the user. 
> CellCounter also generates two extra records with constant keys for each 
> cell, which have to be processed by the reducer.
> Even with multiple reducers, these (1/3 of the total records) will go to the 
> same reducer, which can also fill up the disk.
> This can be fixed by adding a Combiner to the Mapper, which sums the counter 
> records, thereby reducing the Mapper output records to 1/3 of their previous 
> amount, which can be evenly distributed between the reducers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] sunhelly commented on pull request #3800: HBASE-26347 Support detect and exclude slow DNs in fan-out of WAL

2021-10-28 Thread GitBox


sunhelly commented on pull request #3800:
URL: https://github.com/apache/hbase/pull/3800#issuecomment-953644964


   @Apache9 Thanks for the review, I'll change the code according to the comments.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-26398) CellCounter fails for large tables filling up local disk

2021-10-28 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-26398:
-
Fix Version/s: 2.2.8

> CellCounter fails for large tables filling up local disk
> 
>
> Key: HBASE-26398
> URL: https://issues.apache.org/jira/browse/HBASE-26398
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 2.2.7, 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.8
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 2.5.0, 2.2.8, 3.0.0-alpha-2, 2.3.8, 2.4.9
>
>
> CellCounter dumps all cell coordinates into its output, which can become huge.
> The spill can fill the local disk on the reducer. 
> CellCounter hardcodes *mapreduce.job.reduces* to *1*, so it is not possible 
> to use multiple reducers to get around this.
> Fixing this is easy: by not hardcoding *mapreduce.job.reduces*, it still 
> defaults to 1, but it can be overridden by the user. 
> CellCounter also generates two extra records with constant keys for each 
> cell, which have to be processed by the reducer.
> Even with multiple reducers, these (1/3 of the total records) will go to the 
> same reducer, which can also fill up the disk.
> This can be fixed by adding a Combiner to the Mapper, which sums the counter 
> records, thereby reducing the Mapper output records to 1/3 of their previous 
> amount, which can be evenly distributed between the reducers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26398) CellCounter fails for large tables filling up local disk

2021-10-28 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-26398:
-
Fix Version/s: 2.3.8

> CellCounter fails for large tables filling up local disk
> 
>
> Key: HBASE-26398
> URL: https://issues.apache.org/jira/browse/HBASE-26398
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 2.2.7, 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.8
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.3.8, 2.4.9
>
>
> CellCounter dumps all cell coordinates into its output, which can become huge.
> The spill can fill the local disk on the reducer. 
> CellCounter hardcodes *mapreduce.job.reduces* to *1*, so it is not possible 
> to use multiple reducers to get around this.
> Fixing this is easy: by not hardcoding *mapreduce.job.reduces*, it still 
> defaults to 1, but it can be overridden by the user. 
> CellCounter also generates two extra records with constant keys for each 
> cell, which have to be processed by the reducer.
> Even with multiple reducers, these (1/3 of the total records) will go to the 
> same reducer, which can also fill up the disk.
> This can be fixed by adding a Combiner to the Mapper, which sums the counter 
> records, thereby reducing the Mapper output records to 1/3 of their previous 
> amount, which can be evenly distributed between the reducers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26398) CellCounter fails for large tables filling up local disk

2021-10-28 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-26398:
-
Fix Version/s: 2.4.9

> CellCounter fails for large tables filling up local disk
> 
>
> Key: HBASE-26398
> URL: https://issues.apache.org/jira/browse/HBASE-26398
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 2.2.7, 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.8
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.9
>
>
> CellCounter dumps all cell coordinates into its output, which can become huge.
> The spill can fill the local disk on the reducer. 
> CellCounter hardcodes *mapreduce.job.reduces* to *1*, so it is not possible 
> to use multiple reducers to get around this.
> Fixing this is easy: by not hardcoding *mapreduce.job.reduces*, it still 
> defaults to 1, but it can be overridden by the user. 
> CellCounter also generates two extra records with constant keys for each 
> cell, which have to be processed by the reducer.
> Even with multiple reducers, these (1/3 of the total records) will go to the 
> same reducer, which can also fill up the disk.
> This can be fixed by adding a Combiner to the Mapper, which sums the counter 
> records, thereby reducing the Mapper output records to 1/3 of their previous 
> amount, which can be evenly distributed between the reducers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26398) CellCounter fails for large tables filling up local disk

2021-10-28 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-26398:
-
Fix Version/s: 2.5.0

> CellCounter fails for large tables filling up local disk
> 
>
> Key: HBASE-26398
> URL: https://issues.apache.org/jira/browse/HBASE-26398
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 2.2.7, 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.8
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2
>
>
> CellCounter dumps all cell coordinates into its output, which can become huge.
> The spill can fill the local disk on the reducer. 
> CellCounter hardcodes *mapreduce.job.reduces* to *1*, so it is not possible 
> to use multiple reducers to get around this.
> Fixing this is easy: by not hardcoding *mapreduce.job.reduces*, it still 
> defaults to 1, but it can be overridden by the user. 
> CellCounter also generates two extra records with constant keys for each 
> cell, which have to be processed by the reducer.
> Even with multiple reducers, these (1/3 of the total records) will go to the 
> same reducer, which can also fill up the disk.
> This can be fixed by adding a Combiner to the Mapper, which sums the counter 
> records, thereby reducing the Mapper output records to 1/3 of their previous 
> amount, which can be evenly distributed between the reducers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26398) CellCounter fails for large tables filling up local disk

2021-10-28 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-26398:
-
Fix Version/s: 3.0.0-alpha-2

> CellCounter fails for large tables filling up local disk
> 
>
> Key: HBASE-26398
> URL: https://issues.apache.org/jira/browse/HBASE-26398
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 2.2.7, 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.8
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 3.0.0-alpha-2
>
>
> CellCounter dumps all cell coordinates into its output, which can become huge.
> The spill can fill the local disk on the reducer. 
> CellCounter hardcodes *mapreduce.job.reduces* to *1*, so it is not possible 
> to use multiple reducers to get around this.
> Fixing this is easy: by not hardcoding *mapreduce.job.reduces*, it still 
> defaults to 1, but it can be overridden by the user. 
> CellCounter also generates two extra records with constant keys for each 
> cell, which have to be processed by the reducer.
> Even with multiple reducers, these (1/3 of the total records) will go to the 
> same reducer, which can also fill up the disk.
> This can be fixed by adding a Combiner to the Mapper, which sums the counter 
> records, thereby reducing the Mapper output records to 1/3 of their previous 
> amount, which can be evenly distributed between the reducers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] wchevreuil merged pull request #3798: HBASE-26398 CellCounter fails for large tables filling up local disk

2021-10-28 Thread GitBox


wchevreuil merged pull request #3798:
URL: https://github.com/apache/hbase/pull/3798


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3805: HBASE-26311 Balancer gets stuck in cohosted replica distribution

2021-10-28 Thread GitBox


Apache-HBase commented on pull request #3805:
URL: https://github.com/apache/hbase/pull/3805#issuecomment-953577413


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  8s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 33s |  branch-2.4 passed  |
   | +1 :green_heart: |  compile  |   1m 42s |  branch-2.4 passed  |
   | +1 :green_heart: |  shadedjars  |  10m 24s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  branch-2.4 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 50s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 50s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 51s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 263m 47s |  hbase-server in the patch passed.  
|
   |  |   | 308m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3805/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3805 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux fc9899a677ca 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.4 / 377c0586ab |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3805/2/testReport/
 |
   | Max. process+thread count | 2586 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3805/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3804: HBASE-26311 Balancer gets stuck in cohosted replica distribution

2021-10-28 Thread GitBox


Apache-HBase commented on pull request #3804:
URL: https://github.com/apache/hbase/pull/3804#issuecomment-953564555


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 54s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 28s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 33s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 145m 41s |  hbase-server in the patch passed.  
|
   |  |   | 172m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3804/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3804 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux fd4e43e07451 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / f2c58fcf68 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3804/2/testReport/
 |
   | Max. process+thread count | 4168 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3804/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26047) [JDK17] Track JDK17 unit test failures

2021-10-28 Thread Yutong Xiao (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435191#comment-17435191
 ] 

Yutong Xiao commented on HBASE-26047:
-

Thank you for the help [~huaxiangsun]

> [JDK17] Track JDK17 unit test failures
> --
>
> Key: HBASE-26047
> URL: https://issues.apache.org/jira/browse/HBASE-26047
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> As of now, there are still two failed unit tests after exporting the JDK internal 
> modules and applying the modifier access hack.
> {noformat}
> [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.217 
> s <<< FAILURE! - in org.apache.hadoop.hbase.io.TestHeapSize
> [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testSizes  Time elapsed: 
> 0.041 s  <<< FAILURE!
> java.lang.AssertionError: expected:<160> but was:<152>
> at 
> org.apache.hadoop.hbase.io.TestHeapSize.testSizes(TestHeapSize.java:335)
> [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes  Time 
> elapsed: 0.01 s  <<< FAILURE!
> java.lang.AssertionError: expected:<72> but was:<64>
> at 
> org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes(TestHeapSize.java:134)
> [INFO] Running org.apache.hadoop.hbase.io.Tes
> [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.697 
> s <<< FAILURE! - in org.apache.hadoop.hbase.ipc.TestBufferChain
> [ERROR] org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy  Time 
> elapsed: 0.537 s  <<< ERROR!
> java.lang.NullPointerException: Cannot enter synchronized block because 
> "this.closeLock" is null
> at 
> org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy(TestBufferChain.java:119)
> {noformat}
> It appears that JDK17 makes the heap size estimate different than before. Not 
> sure why.
> TestBufferChain.testWithSpy  failure might be because of yet another 
> unexported module.
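
A back-of-the-envelope sketch of why such estimates can shift by 8 bytes between JDKs (illustrative numbers, not the actual Buffer layout): shallow-size estimates are "header + fields" rounded up to an 8-byte boundary, so one field more or less in a JDK class can move the aligned total by 8.

{noformat}
public class AlignedSizeSketch {
  // Round up to the next multiple of 8, the way shallow-size estimates are aligned.
  static long align(long size) {
    return (size + 7) & ~7L;
  }

  public static void main(String[] args) {
    long header = 16;        // typical object header with compressed oops (assumption)
    long primitives = 3 * 8; // three long fields (illustrative)
    System.out.println(align(header + 5 * 4 + primitives)); // five 4-byte refs -> 64
    System.out.println(align(header + 4 * 4 + primitives)); // one field fewer  -> 56
  }
}
{noformat}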



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #3805: HBASE-26311 Balancer gets stuck in cohosted replica distribution

2021-10-28 Thread GitBox


Apache-HBase commented on pull request #3805:
URL: https://github.com/apache/hbase/pull/3805#issuecomment-953537856


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  6s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  8s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  6s |  branch-2.4 passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  branch-2.4 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 33s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  branch-2.4 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 42s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 38s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 211m 37s |  hbase-server in the patch passed.  
|
   |  |   | 239m  4s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3805/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3805 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 1e61ca877eab 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.4 / 377c0586ab |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3805/2/testReport/
 |
   | Max. process+thread count | 2923 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3805/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26396) Remove duplicate thread creation during migrating rsgroup

2021-10-28 Thread Zhuoyue Huang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435176#comment-17435176
 ] 

Zhuoyue Huang commented on HBASE-26396:
---

Thanks for reviewing! [~zhangduo]

> Remove duplicate thread creation during migrating rsgroup
> -
>
> Key: HBASE-26396
> URL: https://issues.apache.org/jira/browse/HBASE-26396
> Project: HBase
>  Issue Type: Bug
>  Components: master, rsgroup
>Affects Versions: 3.0.0-alpha-2
>Reporter: Zhuoyue Huang
>Assignee: Zhuoyue Huang
>Priority: Minor
> Fix For: 3.0.0-alpha-2
>
>
> There is a thread that migrates the table rs group info from RSGroupInfo into 
> the table descriptor.
> The thread is created when RSGroupManager is initialized and is created again 
> when RSGroupStartupWorker is started.
> I think this is a bug: since this thread will not exit until all table 
> rsgroups are migrated, there is no reason for two of these threads to run together.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26392) Update ClassSize.BYTE_BUFFER for JDK17

2021-10-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435171#comment-17435171
 ] 

Hudson commented on HBASE-26392:


Results for branch branch-2.4
[build #225 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/225/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/225/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/225/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/225/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/225/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Update ClassSize.BYTE_BUFFER for JDK17
> --
>
> Key: HBASE-26392
> URL: https://issues.apache.org/jira/browse/HBASE-26392
> Project: HBase
>  Issue Type: Bug
>  Components: java, util
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.8
>
> Attachments: Java17 Buffer.png, Java8 Buffer.png
>
>
> In JDK17, the implementation of Buffer.java has changed a little, which makes the 
> heap size of BYTE_BUFFER different from that in earlier JDKs. This makes the related UTs fail.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)