[jira] [Resolved] (HBASE-28517) Make properties dynamically configured
[ https://issues.apache.org/jira/browse/HBASE-28517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi resolved HBASE-28517.
-----------------------------------
    Fix Version/s: 2.6.0, 2.4.18, 4.0.0-alpha-1, 3.0.0-beta-2, 2.5.9
     Release Note: Make the following properties dynamically configurable:
                   * hbase.rs.evictblocksonclose
                   * hbase.rs.cacheblocksonwrite
                   * hbase.block.data.cacheonread
       Resolution: Fixed

Merged to all active branches. Thanks, [~kabhishek4], for the contribution!

> Make properties dynamically configured
> --------------------------------------
>         Key: HBASE-28517
>         URL: https://issues.apache.org/jira/browse/HBASE-28517
>     Project: HBase
>  Issue Type: Improvement
>    Reporter: Abhishek Kothalikar
>    Assignee: Abhishek Kothalikar
>    Priority: Major
>      Labels: pull-request-available
>     Fix For: 2.6.0, 2.4.18, 4.0.0-alpha-1, 3.0.0-beta-2, 2.5.9
>
> Make the following properties dynamically configurable:
> hbase.rs.evictblocksonclose
> hbase.rs.cacheblocksonwrite
> hbase.block.data.cacheonread
> for use-case scenarios where configuring them dynamically helps achieve
> better throughput.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
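For context, the three properties above would normally be set in hbase-site.xml; a minimal sketch (the values shown are illustrative, not taken from the issue):

```xml
<!-- hbase-site.xml: the block-cache knobs made dynamically reloadable by
     HBASE-28517. Values below are illustrative defaults, not prescriptive. -->
<property>
  <name>hbase.rs.cacheblocksonwrite</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rs.evictblocksonclose</name>
  <value>false</value>
</property>
<property>
  <name>hbase.block.data.cacheonread</name>
  <value>true</value>
</property>
```

With the change in this issue, an operator can edit these values and apply them with the HBase shell's `update_config`/`update_all_config` commands instead of restarting region servers, assuming the running version includes HBASE-28517.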
[jira] [Updated] (HBASE-28247) Add java.base/sun.net.dns and java.base/sun.net.util export to jdk11 JVM test flags
[ https://issues.apache.org/jira/browse/HBASE-28247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi updated HBASE-28247:
----------------------------------
    Fix Version/s: 4.0.0-alpha-1

> Add java.base/sun.net.dns and java.base/sun.net.util export to jdk11 JVM
> test flags
> ------------------------------------------------------------------------
>              Key: HBASE-28247
>              URL: https://issues.apache.org/jira/browse/HBASE-28247
>          Project: HBase
>       Issue Type: Bug
>       Components: java
> Affects Versions: 2.6.0, 3.0.0-alpha-4, 2.4.17, 2.5.6
>         Reporter: Istvan Toth
>         Assignee: Istvan Toth
>         Priority: Minor
>          Fix For: 2.6.0, 2.4.18, 4.0.0-alpha-1, 2.5.7, 3.0.0-beta-2
>
> While testing with JDK17 we have found that we need to add
> {noformat}
> --add-exports java.base/sun.net.dns=ALL-UNNAMED
> --add-exports java.base/sun.net.util=ALL-UNNAMED
> {noformat}
> on top of what is already defined in _hbase-surefire.jdk11.flags_, otherwise
> RS and Master startup fail in the Hadoop security code.
> While this does not affect the test suite (at least not the commonly run
> tests), I consider hbase-surefire.jdk11.flags to be an unofficial resource
> for getting HBase to run on newer JDK versions.
[jira] [Updated] (HBASE-28252) Add sun.net.dns and sun.net.util to the JDK11+ module exports in the hbase script
[ https://issues.apache.org/jira/browse/HBASE-28252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi updated HBASE-28252:
----------------------------------
    Fix Version/s: 4.0.0-alpha-1

> Add sun.net.dns and sun.net.util to the JDK11+ module exports in the hbase
> script
> --------------------------------------------------------------------------
>         Key: HBASE-28252
>         URL: https://issues.apache.org/jira/browse/HBASE-28252
>     Project: HBase
>  Issue Type: Bug
>  Components: scripts
>    Reporter: Istvan Toth
>    Assignee: Istvan Toth
>    Priority: Major
>     Fix For: 2.6.0, 2.4.18, 4.0.0-alpha-1, 2.5.7, 3.0.0-beta-2
>
> As noted in HBASE-28247, HBase can run into module permission issues that
> are not handled by the current JDK11 options in the hbase startup script.
> The surefire test config also includes some JDK17-specific options; we
> should add those as needed.
> We are not yet aware of any additional JVM options required by Java 21.
[jira] [Updated] (HBASE-28252) Add sun.net.dns and sun.net.util to the JDK11+ module exports in the hbase script
[ https://issues.apache.org/jira/browse/HBASE-28252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi updated HBASE-28252:
----------------------------------
    Fix Version/s: 3.0.0-beta-2
                   (was: 3.0.0-beta-1)

> Add sun.net.dns and sun.net.util to the JDK11+ module exports in the hbase
> script
> --------------------------------------------------------------------------
>         Key: HBASE-28252
>         URL: https://issues.apache.org/jira/browse/HBASE-28252
>     Project: HBase
>  Issue Type: Bug
>  Components: scripts
>    Reporter: Istvan Toth
>    Assignee: Istvan Toth
>    Priority: Major
>     Fix For: 2.6.0, 2.4.18, 2.5.7, 3.0.0-beta-2
>
> As noted in HBASE-28247, HBase can run into module permission issues that
> are not handled by the current JDK11 options in the hbase startup script.
> The surefire test config also includes some JDK17-specific options; we
> should add those as needed.
> We are not yet aware of any additional JVM options required by Java 21.
[jira] [Updated] (HBASE-28247) Add java.base/sun.net.dns and java.base/sun.net.util export to jdk11 JVM test flags
[ https://issues.apache.org/jira/browse/HBASE-28247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi updated HBASE-28247:
----------------------------------
    Fix Version/s: 3.0.0-beta-2
                   (was: 3.0.0-beta-1)

> Add java.base/sun.net.dns and java.base/sun.net.util export to jdk11 JVM
> test flags
> ------------------------------------------------------------------------
>              Key: HBASE-28247
>              URL: https://issues.apache.org/jira/browse/HBASE-28247
>          Project: HBase
>       Issue Type: Bug
>       Components: java
> Affects Versions: 2.6.0, 3.0.0-alpha-4, 2.4.17, 2.5.6
>         Reporter: Istvan Toth
>         Assignee: Istvan Toth
>         Priority: Minor
>          Fix For: 2.6.0, 2.4.18, 2.5.7, 3.0.0-beta-2
>
> While testing with JDK17 we have found that we need to add
> {noformat}
> --add-exports java.base/sun.net.dns=ALL-UNNAMED
> --add-exports java.base/sun.net.util=ALL-UNNAMED
> {noformat}
> on top of what is already defined in _hbase-surefire.jdk11.flags_, otherwise
> RS and Master startup fail in the Hadoop security code.
> While this does not affect the test suite (at least not the commonly run
> tests), I consider hbase-surefire.jdk11.flags to be an unofficial resource
> for getting HBase to run on newer JDK versions.
[jira] [Updated] (HBASE-28261) Sync jvm11 module flags from hbase-surefire.jdk11.flags to bin/hbase
[ https://issues.apache.org/jira/browse/HBASE-28261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi updated HBASE-28261:
----------------------------------
    Fix Version/s: 2.6.0, 2.4.18, 4.0.0-alpha-1, 2.5.8, 3.0.0-beta-2
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

Merged to all active branches.

> Sync jvm11 module flags from hbase-surefire.jdk11.flags to bin/hbase
> --------------------------------------------------------------------
>              Key: HBASE-28261
>              URL: https://issues.apache.org/jira/browse/HBASE-28261
>          Project: HBase
>       Issue Type: Bug
> Affects Versions: 2.6.0, 2.4.17, 3.0.0, 2.5.7
>         Reporter: Istvan Toth
>         Assignee: Istvan Toth
>         Priority: Trivial
>          Fix For: 2.6.0, 2.4.18, 4.0.0-alpha-1, 2.5.8, 3.0.0-beta-2
>
> All the JDK11 flags seem to be coming from the runtime code, and we have
> found that the missing options are required by the runtime when testing
> JDK 17.
> Only the JDK17 "--add-opens java.base/jdk.internal.util.random=ALL-UNNAMED"
> is known to be used by the test code, and probably not by the runtime.
> Specifically, these ones are missing:
> java.base/java.io=ALL-UNNAMED (pending in HBASE-28259; if that one doesn't
> add it to surefire, we need to add it here)
> java.base/java.util=ALL-UNNAMED
> java.base/java.util.concurrent=ALL-UNNAMED
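Taken together, the flags discussed across these issues might be consolidated in hbase-env.sh along these lines. This is a sketch for JDK11+ only; the exact flag set a given HBase version needs may differ:

```shell
# hbase-env.sh (sketch): module flags from HBASE-28247/28252/28261 for JDK11+.
# The --add-exports pair is needed by the Hadoop security code at RS/Master
# startup; the --add-opens entries are the ones listed as missing above.
export HBASE_OPTS="$HBASE_OPTS \
  --add-exports java.base/sun.net.dns=ALL-UNNAMED \
  --add-exports java.base/sun.net.util=ALL-UNNAMED \
  --add-opens java.base/java.io=ALL-UNNAMED \
  --add-opens java.base/java.util=ALL-UNNAMED \
  --add-opens java.base/java.util.concurrent=ALL-UNNAMED"
echo "$HBASE_OPTS"
```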
[jira] [Reopened] (HBASE-28247) Add java.base/sun.net.dns and java.base/sun.net.util export to jdk11 JVM test flags
[ https://issues.apache.org/jira/browse/HBASE-28247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi reopened HBASE-28247:
-----------------------------------

Reopening because this was accidentally not committed to branch-3.

> Add java.base/sun.net.dns and java.base/sun.net.util export to jdk11 JVM
> test flags
> ------------------------------------------------------------------------
>              Key: HBASE-28247
>              URL: https://issues.apache.org/jira/browse/HBASE-28247
>          Project: HBase
>       Issue Type: Bug
>       Components: java
> Affects Versions: 2.6.0, 3.0.0-alpha-4, 2.4.17, 2.5.6
>         Reporter: Istvan Toth
>         Assignee: Istvan Toth
>         Priority: Minor
>          Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.7
>
> While testing with JDK17 we have found that we need to add
> {noformat}
> --add-exports java.base/sun.net.dns=ALL-UNNAMED
> --add-exports java.base/sun.net.util=ALL-UNNAMED
> {noformat}
> on top of what is already defined in _hbase-surefire.jdk11.flags_, otherwise
> RS and Master startup fail in the Hadoop security code.
> While this does not affect the test suite (at least not the commonly run
> tests), I consider hbase-surefire.jdk11.flags to be an unofficial resource
> for getting HBase to run on newer JDK versions.
[jira] [Resolved] (HBASE-28247) Add java.base/sun.net.dns and java.base/sun.net.util export to jdk11 JVM test flags
[ https://issues.apache.org/jira/browse/HBASE-28247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi resolved HBASE-28247.
-----------------------------------
    Resolution: Fixed

Merged to branch-3.

> Add java.base/sun.net.dns and java.base/sun.net.util export to jdk11 JVM
> test flags
> ------------------------------------------------------------------------
>              Key: HBASE-28247
>              URL: https://issues.apache.org/jira/browse/HBASE-28247
>          Project: HBase
>       Issue Type: Bug
>       Components: java
> Affects Versions: 2.6.0, 3.0.0-alpha-4, 2.4.17, 2.5.6
>         Reporter: Istvan Toth
>         Assignee: Istvan Toth
>         Priority: Minor
>          Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.7
>
> While testing with JDK17 we have found that we need to add
> {noformat}
> --add-exports java.base/sun.net.dns=ALL-UNNAMED
> --add-exports java.base/sun.net.util=ALL-UNNAMED
> {noformat}
> on top of what is already defined in _hbase-surefire.jdk11.flags_, otherwise
> RS and Master startup fail in the Hadoop security code.
> While this does not affect the test suite (at least not the commonly run
> tests), I consider hbase-surefire.jdk11.flags to be an unofficial resource
> for getting HBase to run on newer JDK versions.
[jira] [Resolved] (HBASE-28252) Add sun.net.dns and sun.net.util to the JDK11+ module exports in the hbase script
[ https://issues.apache.org/jira/browse/HBASE-28252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi resolved HBASE-28252.
-----------------------------------
    Resolution: Fixed

Merged to branch-3. Part of this change had already been committed earlier
under HBASE-28259.

> Add sun.net.dns and sun.net.util to the JDK11+ module exports in the hbase
> script
> --------------------------------------------------------------------------
>         Key: HBASE-28252
>         URL: https://issues.apache.org/jira/browse/HBASE-28252
>     Project: HBase
>  Issue Type: Bug
>  Components: scripts
>    Reporter: Istvan Toth
>    Assignee: Istvan Toth
>    Priority: Major
>     Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.7
>
> As noted in HBASE-28247, HBase can run into module permission issues that
> are not handled by the current JDK11 options in the hbase startup script.
> The surefire test config also includes some JDK17-specific options; we
> should add those as needed.
> We are not yet aware of any additional JVM options required by Java 21.
[jira] [Reopened] (HBASE-28252) Add sun.net.dns and sun.net.util to the JDK11+ module exports in the hbase script
[ https://issues.apache.org/jira/browse/HBASE-28252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi reopened HBASE-28252:
-----------------------------------

Reopening because the commit accidentally did not land on branch-3.

> Add sun.net.dns and sun.net.util to the JDK11+ module exports in the hbase
> script
> --------------------------------------------------------------------------
>         Key: HBASE-28252
>         URL: https://issues.apache.org/jira/browse/HBASE-28252
>     Project: HBase
>  Issue Type: Bug
>  Components: scripts
>    Reporter: Istvan Toth
>    Assignee: Istvan Toth
>    Priority: Major
>     Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.7
>
> As noted in HBASE-28247, HBase can run into module permission issues that
> are not handled by the current JDK11 options in the hbase startup script.
> The surefire test config also includes some JDK17-specific options; we
> should add those as needed.
> We are not yet aware of any additional JVM options required by Java 21.
[jira] [Resolved] (HBASE-28135) Specify -Xms for tests
[ https://issues.apache.org/jira/browse/HBASE-28135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi resolved HBASE-28135.
-----------------------------------
    Fix Version/s: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1
       Resolution: Fixed

Pushed to branch-2.4+. Thanks [~stoty]!

> Specify -Xms for tests
> ----------------------
>         Key: HBASE-28135
>         URL: https://issues.apache.org/jira/browse/HBASE-28135
>     Project: HBase
>  Issue Type: Improvement
>  Components: test
>    Reporter: Istvan Toth
>    Assignee: Istvan Toth
>    Priority: Major
>     Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1
>
> The default -Xms value is JVM dependent, but the host memory size is
> usually included in the calculation.
> -Xms in turn is used to calculate some GC parameters, for example NewSize
> and OldSize, which affect the behaviour of tests.
> As the memory consumption of the tests does not depend on the host VM size,
> we could set -Xms for the tests explicitly, and enjoy more consistent test
> results.
[jira] [Resolved] (HBASE-28133) TestSyncTimeRangeTracker fails with OOM with small -Xms values
[ https://issues.apache.org/jira/browse/HBASE-28133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi resolved HBASE-28133.
-----------------------------------
    Fix Version/s: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1
       Resolution: Fixed

Merged to all active branches. Thanks [~stoty]!

> TestSyncTimeRangeTracker fails with OOM with small -Xms values
> --------------------------------------------------------------
>              Key: HBASE-28133
>              URL: https://issues.apache.org/jira/browse/HBASE-28133
>          Project: HBase
>       Issue Type: Bug
> Affects Versions: 2.4.17
>         Reporter: Istvan Toth
>         Assignee: Istvan Toth
>         Priority: Major
>           Labels: Arm64, test
>          Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1
>
> Edit2: It's not the OS, it's the -Xmx value determined from the host memory
> size.
> Edit: It's related to the OS and its default Java 8, not to the processor
> architecture.
> This test seems to be cutting it very close to the heap size.
> On ARM, it consistently fails on my RHEL8.8 Aarch64 VM with Java 8.
> {noformat}
> mvn test -P runDevTests -Dtest.build.data.basedirectory=/ram2G
> -Dhadoop.profile=3.0 -fn -B -Dtest=TestSyncTimeRangeTracker* -pl hbase-server
> ...
> [ERROR] org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker.testConcurrentIncludeTimestampCorrectness
> Time elapsed: 1.969 s <<< ERROR!
> java.lang.OutOfMemoryError: Java heap space
> {noformat}
> It seems that Java on ARM has a somewhat higher memory overhead than on
> x86_64.
> Simply bumping -Xmx from the default 2200m to 2300m allows it to pass.
> {noformat}
> mvn test -P runDevTests -Dtest.build.data.basedirectory=/ram2G
> -Dhadoop.profile=3.0 -fn -B -Dtest=TestSyncTimeRangeTracker* -pl hbase-server
> -Dsurefire.Xmx=2300m
> ...
> [INFO] Running org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker
> [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.395 s
> - in org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker
> {noformat}
> However, the real solution should be reducing the memory usage of this test.
[jira] [Updated] (HBASE-28137) Add scala-parser-combinators dependency to connectors for Spark 3.4
[ https://issues.apache.org/jira/browse/HBASE-28137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi updated HBASE-28137:
----------------------------------
    Fix Version/s: hbase-connectors-1.1.0
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

Merged to master. Thanks [~stoty]!

> Add scala-parser-combinators dependency to connectors for Spark 3.4
> -------------------------------------------------------------------
>              Key: HBASE-28137
>              URL: https://issues.apache.org/jira/browse/HBASE-28137
>          Project: HBase
>       Issue Type: New Feature
>       Components: spark
> Affects Versions: connector-1.0.0
>         Reporter: Istvan Toth
>         Assignee: Istvan Toth
>         Priority: Major
>          Fix For: hbase-connectors-1.1.0
>
> The Spark connector doesn't compile with Spark 3.4 because of a missing
> scala-parser-combinators dependency.
[jira] [Updated] (HBASE-28126) TestSimpleRegionNormalizer fails 100% of times on flaky dashboard
[ https://issues.apache.org/jira/browse/HBASE-28126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi updated HBASE-28126:
----------------------------------
    Fix Version/s: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

Merged to all active branches. Thanks for the review, [~wchevreuil]!

> TestSimpleRegionNormalizer fails 100% of times on flaky dashboard
> -----------------------------------------------------------------
>         Key: HBASE-28126
>         URL: https://issues.apache.org/jira/browse/HBASE-28126
>     Project: HBase
>  Issue Type: Bug
>  Components: Normalizer
>    Reporter: Duo Zhang
>    Assignee: Peter Somogyi
>    Priority: Major
>     Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1
[jira] [Updated] (HBASE-28126) TestSimpleRegionNormalizer fails 100% of times on flaky dashboard
[ https://issues.apache.org/jira/browse/HBASE-28126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi updated HBASE-28126:
----------------------------------
    Status: Patch Available  (was: In Progress)

> TestSimpleRegionNormalizer fails 100% of times on flaky dashboard
> -----------------------------------------------------------------
>         Key: HBASE-28126
>         URL: https://issues.apache.org/jira/browse/HBASE-28126
>     Project: HBase
>  Issue Type: Bug
>  Components: Normalizer
>    Reporter: Duo Zhang
>    Assignee: Peter Somogyi
>    Priority: Major
[jira] [Work started] (HBASE-28126) TestSimpleRegionNormalizer fails 100% of times on flaky dashboard
[ https://issues.apache.org/jira/browse/HBASE-28126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HBASE-28126 started by Peter Somogyi.
---------------------------------------------

> TestSimpleRegionNormalizer fails 100% of times on flaky dashboard
> -----------------------------------------------------------------
>         Key: HBASE-28126
>         URL: https://issues.apache.org/jira/browse/HBASE-28126
>     Project: HBase
>  Issue Type: Bug
>  Components: Normalizer
>    Reporter: Duo Zhang
>    Assignee: Peter Somogyi
>    Priority: Major
[jira] [Assigned] (HBASE-28126) TestSimpleRegionNormalizer fails 100% of times on flaky dashboard
[ https://issues.apache.org/jira/browse/HBASE-28126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi reassigned HBASE-28126:
-------------------------------------
    Assignee: Peter Somogyi

> TestSimpleRegionNormalizer fails 100% of times on flaky dashboard
> -----------------------------------------------------------------
>         Key: HBASE-28126
>         URL: https://issues.apache.org/jira/browse/HBASE-28126
>     Project: HBase
>  Issue Type: Bug
>  Components: Normalizer
>    Reporter: Duo Zhang
>    Assignee: Peter Somogyi
>    Priority: Major
[jira] [Updated] (HBASE-27978) [hbase-operator-tools] Add spotless in hbase-operator-tools pre-commit check
[ https://issues.apache.org/jira/browse/HBASE-27978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi updated HBASE-27978:
----------------------------------
    Fix Version/s: hbase-operator-tools-1.3.0
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

Merged to master. Thanks [~nihaljain.cs]!

> [hbase-operator-tools] Add spotless in hbase-operator-tools pre-commit check
> ----------------------------------------------------------------------------
>         Key: HBASE-27978
>         URL: https://issues.apache.org/jira/browse/HBASE-27978
>     Project: HBase
>  Issue Type: Sub-task
>  Components: build, hbase-operator-tools
>    Reporter: Nihal Jain
>    Assignee: Nihal Jain
>    Priority: Major
>     Fix For: hbase-operator-tools-1.3.0
[jira] [Resolved] (HBASE-28059) Use correct units in RegionLoad#getStoreUncompressedSizeMB()
[ https://issues.apache.org/jira/browse/HBASE-28059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi resolved HBASE-28059.
-----------------------------------
    Fix Version/s: 2.6.0, 2.4.18, 2.5.6
       Resolution: Fixed

Merged to branch-2, branch-2.5, and branch-2.4. Thanks for the contribution,
[~charlesconnell]!

> Use correct units in RegionLoad#getStoreUncompressedSizeMB()
> ------------------------------------------------------------
>              Key: HBASE-28059
>              URL: https://issues.apache.org/jira/browse/HBASE-28059
>          Project: HBase
>       Issue Type: Improvement
>       Components: Admin
> Affects Versions: 2.5.5
>         Reporter: Charles Connell
>         Assignee: Charles Connell
>         Priority: Major
>          Fix For: 2.6.0, 2.4.18, 2.5.6
>
> When I run a snippet of code like this:
> {code:java}
> Map<byte[], RegionLoad> regionLoadMap = admin
>     .getClusterStatus()
>     .getLoad(
>         ServerName.parseServerName(
>             "my-server.my-company.net,60020,1693513660506"
>         )
>     )
>     .getRegionsLoad();
> for (byte[] startKey : regionLoadMap.keySet()) {
>   RegionLoad regionLoad = regionLoadMap.get(startKey);
>   LOG.info("Region {}: {}", Bytes.toStringBinary(startKey), regionLoad);
> } {code}
> I get logs like this:
> {noformat}
> Region , key>,1659484033280.2b89407a1223720344bed425bf3c29b0.:
> numberOfStores=1, numberOfStorefiles=3, storeRefCount=0,
> storefileUncompressedSizeMB=17998848,
> lastMajorCompactionTimestamp=1693211464712, storefileSizeMB=5895,
> compressionRatio=0.0003, memstoreSizeMB=1, readRequestsCount=118899553,
> writeRequestsCount=731192, rootIndexSizeKB=9, totalStaticIndexSizeKB=10413,
> totalStaticBloomSizeKB=6592, totalCompactingKVs=0, currentCompactedKVs=0,
> compactionProgressPct=NaN, completeSequenceId=78093096,
> dataLocality=1.0{noformat}
> The {{storefileUncompressedSizeMB}} is vastly larger than the
> {{storefileSizeMB}}, so much larger that it's not believable. I checked the
> store files in question in this instance. Adding up the uncompressed size
> reported in the HFile trailers sums to 5895 MiB, exactly 1024 times less
> than the reported 17998848.
> The reason for the misreporting is that
> {{RegionLoad#getStoreUncompressedSizeMB()}} converts the value from a
> {{Size}} object incorrectly.
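The factor-of-1024 symptom above can be reproduced with plain arithmetic. This standalone sketch (illustrative numbers, not taken from the actual fix) shows how reporting a KiB count through a field named "MB" inflates it by exactly 1024:

```java
// Sketch of the unit mixup behind HBASE-28059. Numbers are illustrative.
public class SizeUnits {
    // Correct conversion: raw bytes -> mebibytes.
    public static long toMB(long bytes) {
        return bytes / (1024L * 1024L);
    }

    public static void main(String[] args) {
        long uncompressedBytes = 6_181_502_976L; // roughly 5895 MiB of store files

        // Correct: divide by 1024^2 to get MiB.
        System.out.println(toMB(uncompressedBytes));        // 5895

        // Buggy pattern: a KiB count surfaced in an "MB" field, so the
        // reported value is 1024x too large.
        System.out.println(uncompressedBytes / 1024L);      // 6036624
    }
}
```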
[jira] [Updated] (HBASE-28038) Add TLS settings to ZooKeeper client
[ https://issues.apache.org/jira/browse/HBASE-28038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi updated HBASE-28038:
----------------------------------
    Fix Version/s: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

Pushed to all active branches. Thanks [~andor]!

> Add TLS settings to ZooKeeper client
> ------------------------------------
>              Key: HBASE-28038
>              URL: https://issues.apache.org/jira/browse/HBASE-28038
>          Project: HBase
>       Issue Type: Improvement
>       Components: Zookeeper
> Affects Versions: 3.0.0-alpha-4, 2.4.17, 2.5.5
>         Reporter: Andor Molnar
>         Assignee: Andor Molnar
>         Priority: Major
>           Labels: ssl, tls, zookeeper
>          Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1
>
> ZooKeeper supports TLS connections from its clients. Currently, the only
> way to set up HBase for this is to add the following Java properties to the
> HBase process:
> {noformat}
> -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
> -Dzookeeper.client.secure=true
> -Dzookeeper.ssl.keyStore.location=/path/to/keystore.jks
> -Dzookeeper.ssl.keyStore.password=password
> -Dzookeeper.ssl.trustStore.location=/path/to/truststore.jks
> -Dzookeeper.ssl.trustStore.password=password
> {noformat}
> The KeyStore is only needed if the ZooKeeper server wants a client
> certificate to be provided.
> I'd like to add these options to hbase-site.xml in the following way:
> {noformat}
> hbase.zookeeper.property.clientCnxnSocket
> hbase.zookeeper.property.client.secure
> hbase.zookeeper.property.ssl.keyStore.location
> hbase.zookeeper.property.ssl.keyStore.password or
> hbase.zookeeper.property.ssl.keyStore.passwordPath
> ...{noformat}
> It will follow the way we already handle the ZooKeeper clientPort and quorum
> settings ("hbase.zookeeper.property.clientPort", "hbase.zookeeper.quorum").
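As a sketch, the hbase-site.xml form proposed above might look like this. The property names follow the issue's proposal; the paths and passwords are placeholders, and the keyStore entries are only needed when the server requires a client certificate:

```xml
<!-- Sketch of the proposed hbase.zookeeper.property.* passthrough for TLS.
     Paths and passwords below are placeholders, not working values. -->
<property>
  <name>hbase.zookeeper.property.client.secure</name>
  <value>true</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientCnxnSocket</name>
  <value>org.apache.zookeeper.ClientCnxnSocketNetty</value>
</property>
<property>
  <name>hbase.zookeeper.property.ssl.trustStore.location</name>
  <value>/path/to/truststore.jks</value>
</property>
<property>
  <name>hbase.zookeeper.property.ssl.trustStore.password</name>
  <value>password</value>
</property>
```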
[jira] [Updated] (HBASE-28025) Enhance ByteBufferUtils.findCommonPrefix to compare 8 bytes each time
[ https://issues.apache.org/jira/browse/HBASE-28025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi updated HBASE-28025:
----------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

> Enhance ByteBufferUtils.findCommonPrefix to compare 8 bytes each time
> ---------------------------------------------------------------------
>              Key: HBASE-28025
>              URL: https://issues.apache.org/jira/browse/HBASE-28025
>          Project: HBase
>       Issue Type: Improvement
>       Components: Performance
> Affects Versions: 2.5.4
>         Reporter: Becker Ewing
>         Assignee: Becker Ewing
>         Priority: Major
>          Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1
>      Attachments: HBASE-28025_jmh_benchmarks_src.zip, benchmark_results.txt
>
> Currently, the ByteBufferUtils.findCommonPrefix family of methods compares
> two buffers a single byte at a time. In reviewing the patch for HBASE-28012,
> [~zhangduo] suggested that the ByteBufferUtils.findCommonPrefix methods
> could be enhanced to compare 8 bytes at a time, like the
> ByteBufferUtils.compareToUnsafe family of methods already does (which was
> added in HBASE-12345).
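The 8-bytes-at-a-time idea is straightforward to sketch outside HBase. This standalone illustration (not the actual ByteBufferUtils code) packs 8 bytes into a long per step and uses the XOR of mismatching words to locate the first differing byte:

```java
// Standalone sketch of the 8-bytes-per-step common-prefix idea discussed in
// HBASE-28025; method names and structure are illustrative, not HBase's.
public class CommonPrefix {
    public static int findCommonPrefix(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        int i = 0;
        // Fast path: compare 8 bytes per iteration as big-endian longs.
        while (i + 8 <= len) {
            long la = readLongBE(a, i);
            long lb = readLongBE(b, i);
            if (la != lb) {
                // The first differing byte is the highest set bit of the XOR.
                return i + Long.numberOfLeadingZeros(la ^ lb) / 8;
            }
            i += 8;
        }
        // Tail: fewer than 8 bytes left, compare one byte at a time.
        while (i < len && a[i] == b[i]) {
            i++;
        }
        return i;
    }

    // Pack 8 bytes starting at off into a big-endian long.
    private static long readLongBE(byte[] buf, int off) {
        long v = 0;
        for (int k = 0; k < 8; k++) {
            v = (v << 8) | (buf[off + k] & 0xFFL);
        }
        return v;
    }

    public static void main(String[] args) {
        byte[] x = "row-0001/family:qualifier".getBytes();
        byte[] y = "row-0002/family:qualifier".getBytes();
        System.out.println(findCommonPrefix(x, y)); // "row-000" is shared -> 7
    }
}
```

The word-at-a-time loop is the same trick the issue credits to ByteBufferUtils.compareToUnsafe: fewer loop iterations and branch checks per compared byte, which matters on the hot encoding paths where common prefixes are long.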
[jira] [Updated] (HBASE-28025) Enhance ByteBufferUtils.findCommonPrefix to compare 8 bytes each time
[ https://issues.apache.org/jira/browse/HBASE-28025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi updated HBASE-28025:
----------------------------------
    Status: Patch Available  (was: Reopened)

Merged addendum to branch-2.4. Thanks for the review, [~zhangduo]!

> Enhance ByteBufferUtils.findCommonPrefix to compare 8 bytes each time
> ---------------------------------------------------------------------
>              Key: HBASE-28025
>              URL: https://issues.apache.org/jira/browse/HBASE-28025
>          Project: HBase
>       Issue Type: Improvement
>       Components: Performance
> Affects Versions: 2.5.4
>         Reporter: Becker Ewing
>         Assignee: Becker Ewing
>         Priority: Major
>          Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1
>      Attachments: HBASE-28025_jmh_benchmarks_src.zip, benchmark_results.txt
>
> Currently, the ByteBufferUtils.findCommonPrefix family of methods compares
> two buffers a single byte at a time. In reviewing the patch for HBASE-28012,
> [~zhangduo] suggested that the ByteBufferUtils.findCommonPrefix methods
> could be enhanced to compare 8 bytes at a time, like the
> ByteBufferUtils.compareToUnsafe family of methods already does (which was
> added in HBASE-12345).
[jira] [Reopened] (HBASE-28025) Enhance ByteBufferUtils.findCommonPrefix to compare 8 bytes each time
[ https://issues.apache.org/jira/browse/HBASE-28025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi reopened HBASE-28025:
-----------------------------------

Reopening to apply an addendum for branch-2.4.

> Enhance ByteBufferUtils.findCommonPrefix to compare 8 bytes each time
> ---------------------------------------------------------------------
>              Key: HBASE-28025
>              URL: https://issues.apache.org/jira/browse/HBASE-28025
>          Project: HBase
>       Issue Type: Improvement
>       Components: Performance
> Affects Versions: 2.5.4
>         Reporter: Becker Ewing
>         Assignee: Becker Ewing
>         Priority: Major
>          Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1
>      Attachments: HBASE-28025_jmh_benchmarks_src.zip, benchmark_results.txt
>
> Currently, the ByteBufferUtils.findCommonPrefix family of methods compares
> two buffers a single byte at a time. In reviewing the patch for HBASE-28012,
> [~zhangduo] suggested that the ByteBufferUtils.findCommonPrefix methods
> could be enhanced to compare 8 bytes at a time, like the
> ByteBufferUtils.compareToUnsafe family of methods already does (which was
> added in HBASE-12345).
[jira] [Resolved] (HBASE-27883) [hbase-connectors] Use log4j2 instead of log4j for logging
[ https://issues.apache.org/jira/browse/HBASE-27883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi resolved HBASE-27883.
-----------------------------------
    Resolution: Fixed

Merged. Thanks [~subrat.mishra]!

> [hbase-connectors] Use log4j2 instead of log4j for logging
> ----------------------------------------------------------
>         Key: HBASE-27883
>         URL: https://issues.apache.org/jira/browse/HBASE-27883
>     Project: HBase
>  Issue Type: Task
>  Components: hbase-connectors
>    Reporter: Peter Somogyi
>    Assignee: Subrat Mishra
>    Priority: Blocker
>     Fix For: hbase-connectors-1.1.0
>
> Move to log4j2 in hbase-connectors.
[jira] [Resolved] (HBASE-27884) [hbase-filesystem] Use log4j2 instead of log4j for logging
[ https://issues.apache.org/jira/browse/HBASE-27884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi resolved HBASE-27884.
-----------------------------------
    Release Note: Moved logging to log4j2.
      Resolution: Fixed

Merged to master branch. Thanks [~subrat.mishra] for contributing!

> [hbase-filesystem] Use log4j2 instead of log4j for logging
> ----------------------------------------------------------
>         Key: HBASE-27884
>         URL: https://issues.apache.org/jira/browse/HBASE-27884
>     Project: HBase
>  Issue Type: Task
>  Components: Filesystem Integration
>    Reporter: Peter Somogyi
>    Assignee: Subrat Mishra
>    Priority: Major
>     Fix For: hbase-filesystem-1.0.0-alpha2
>
> Move to log4j2 in hbase-filesystem.
[jira] [Resolved] (HBASE-27992) Bump exec-maven-plugin to 3.1.0
[ https://issues.apache.org/jira/browse/HBASE-27992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Somogyi resolved HBASE-27992.
-----------------------------------
    Fix Version/s: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1
       Resolution: Fixed

Merged to branch-2.4+. Thanks for the review [~zhangduo]!

> Bump exec-maven-plugin to 3.1.0
> -------------------------------
>         Key: HBASE-27992
>         URL: https://issues.apache.org/jira/browse/HBASE-27992
>     Project: HBase
>  Issue Type: Task
>  Components: build
>    Reporter: Peter Somogyi
>    Assignee: Peter Somogyi
>    Priority: Trivial
>     Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1, 4.0.0-alpha-1
>
> I frequently see an IOException in the hbase-shaded-with-hadoop-check-invariants
> make-sure-validation-files-are-in-sync phase. I'm not sure what the root
> cause is, but it's worth bumping the plugin version to the latest.
> {noformat}
> [INFO] --- exec-maven-plugin:1.6.0:exec
> (make-sure-validation-files-are-in-sync) @
> hbase-shaded-with-hadoop-check-invariants ---
> [ERROR] Command execution failed.
> java.io.IOException: Stream closed
>     at java.lang.ProcessBuilder$NullOutputStream.write (ProcessBuilder.java:433)
>     at java.io.OutputStream.write (OutputStream.java:116)
>     at java.io.BufferedOutputStream.flushBuffer (BufferedOutputStream.java:82)
>     at java.io.BufferedOutputStream.flush (BufferedOutputStream.java:140)
>     at java.io.FilterOutputStream.close (FilterOutputStream.java:158)
>     at org.apache.commons.exec.DefaultExecutor.closeProcessStreams (DefaultExecutor.java:306)
>     at org.apache.commons.exec.DefaultExecutor.executeInternal (DefaultExecutor.java:387)
>     at org.apache.commons.exec.DefaultExecutor.execute (DefaultExecutor.java:166)
>     at org.codehaus.mojo.exec.ExecMojo.executeCommandLine (ExecMojo.java:804)
>     at org.codehaus.mojo.exec.ExecMojo.executeCommandLine (ExecMojo.java:751)
>     at org.codehaus.mojo.exec.ExecMojo.execute (ExecMojo.java:313)
>     at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
>     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
>     at org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call (MultiThreadedBuilder.java:190)
>     at org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call (MultiThreadedBuilder.java:186)
>     at java.util.concurrent.FutureTask.run (FutureTask.java:266)
>     at java.util.concurrent.Executors$RunnableAdapter.call (Executors.java:511)
>     at java.util.concurrent.FutureTask.run (FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run (Thread.java:748)
> {noformat}
[jira] [Updated] (HBASE-27805) The chunk created by mslab may cause memory fragment and lead to fullgc
[ https://issues.apache.org/jira/browse/HBASE-27805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27805: -- Fix Version/s: 4.0.0-alpha-1 > The chunk created by mslab may cause memory fragment and lead to fullgc > > > Key: HBASE-27805 > URL: https://issues.apache.org/jira/browse/HBASE-27805 > Project: HBase > Issue Type: Improvement > Components: documentation >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > Fix For: 4.0.0-alpha-1 > > Attachments: chunksize-2047k.png, chunksize-2048k-fullgc.png > > > The default chunk size is 2m; when we use G1, if heapRegionSize equals 4m, > these chunks are allocated as humongous objects, exclusively occupying one > region, and the remaining 2m becomes a memory fragment. > Lots of memory fragments may lead to fullgc even if the percentage of used heap > is not high. > I tested reducing the chunk size to 2047k (2m-1k, a bit less than half > of heapRegionSize), and the above did not recur. > BTW, in G1, humongous objects are objects larger than or equal to half the size of a > region, and the heapRegionSize is automatically calculated based on the heap > size parameter if not explicitly specified. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
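The arithmetic behind the report can be checked with a short sketch. The 4m region size, 2m default chunk size, and 2047k reduced size are the values given in the description above; the G1 rule that objects at or above half a region are humongous is as stated in the ticket:

```java
public class ChunkSizing {
    public static void main(String[] args) {
        long regionSize = 4L * 1024 * 1024;        // heapRegionSize from the report: 4 MiB
        long humongousThreshold = regionSize / 2;  // G1 treats objects >= half a region as humongous
        long defaultChunk = 2L * 1024 * 1024;      // default MSLAB chunk: 2 MiB
        long reducedChunk = 2047L * 1024;          // proposed: 2047 KiB, just under the threshold
        // Default chunk hits the threshold, so it occupies a whole region,
        // stranding the other 2 MiB as a fragment.
        System.out.println(defaultChunk >= humongousThreshold);
        // The reduced chunk stays below the threshold and is allocated normally.
        System.out.println(reducedChunk >= humongousThreshold);
    }
}
```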
[jira] [Created] (HBASE-27992) Bump exec-maven-plugin to 3.1.0
Peter Somogyi created HBASE-27992: - Summary: Bump exec-maven-plugin to 3.1.0 Key: HBASE-27992 URL: https://issues.apache.org/jira/browse/HBASE-27992 Project: HBase Issue Type: Task Components: build Reporter: Peter Somogyi Assignee: Peter Somogyi I frequently see IOException in hbase-shaded-with-hadoop-check-invariants make-sure-validation-files-are-in-sync phase. I'm not sure what the root cause is, but it's worth bumping the plugin version to the latest. {noformat} [INFO] --- exec-maven-plugin:1.6.0:exec (make-sure-validation-files-are-in-sync) @ hbase-shaded-with-hadoop-check-invariants --- [ERROR] Command execution failed. java.io.IOException: Stream closed at java.lang.ProcessBuilder$NullOutputStream.write (ProcessBuilder.java:433) at java.io.OutputStream.write (OutputStream.java:116) at java.io.BufferedOutputStream.flushBuffer (BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush (BufferedOutputStream.java:140) at java.io.FilterOutputStream.close (FilterOutputStream.java:158) at org.apache.commons.exec.DefaultExecutor.closeProcessStreams (DefaultExecutor.java:306) at org.apache.commons.exec.DefaultExecutor.executeInternal (DefaultExecutor.java:387) at org.apache.commons.exec.DefaultExecutor.execute (DefaultExecutor.java:166) at org.codehaus.mojo.exec.ExecMojo.executeCommandLine (ExecMojo.java:804) at org.codehaus.mojo.exec.ExecMojo.executeCommandLine (ExecMojo.java:751) at org.codehaus.mojo.exec.ExecMojo.execute (ExecMojo.java:313) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117) at 
org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call (MultiThreadedBuilder.java:190) at org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call (MultiThreadedBuilder.java:186) at java.util.concurrent.FutureTask.run (FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call (Executors.java:511) at java.util.concurrent.FutureTask.run (FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:624) at java.lang.Thread.run (Thread.java:748) {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
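The change itself amounts to a one-line version bump. A hypothetical sketch of the plugin entry after the bump (the exact location of the version, e.g. a property or a pluginManagement block in the HBase parent pom, may differ):

```xml
<!-- hypothetical sketch: pin exec-maven-plugin to 3.1.0 -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>3.1.0</version>
</plugin>
```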
[jira] [Work started] (HBASE-27992) Bump exec-maven-plugin to 3.1.0
[ https://issues.apache.org/jira/browse/HBASE-27992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-27992 started by Peter Somogyi. - > Bump exec-maven-plugin to 3.1.0 > --- > > Key: HBASE-27992 > URL: https://issues.apache.org/jira/browse/HBASE-27992 > Project: HBase > Issue Type: Task > Components: build >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Trivial > > I frequently see IOException in hbase-shaded-with-hadoop-check-invariants > make-sure-validation-files-are-in-sync phase. I'm not sure what the root > cause is, but it's worth bumping the plugin version to the latest. > {noformat} > [INFO] --- exec-maven-plugin:1.6.0:exec > (make-sure-validation-files-are-in-sync) @ > hbase-shaded-with-hadoop-check-invariants --- > [ERROR] Command execution failed. > java.io.IOException: Stream closed > at java.lang.ProcessBuilder$NullOutputStream.write > (ProcessBuilder.java:433) > at java.io.OutputStream.write (OutputStream.java:116) > at java.io.BufferedOutputStream.flushBuffer (BufferedOutputStream.java:82) > at java.io.BufferedOutputStream.flush (BufferedOutputStream.java:140) > at java.io.FilterOutputStream.close (FilterOutputStream.java:158) > at org.apache.commons.exec.DefaultExecutor.closeProcessStreams > (DefaultExecutor.java:306) > at org.apache.commons.exec.DefaultExecutor.executeInternal > (DefaultExecutor.java:387) > at org.apache.commons.exec.DefaultExecutor.execute > (DefaultExecutor.java:166) > at org.codehaus.mojo.exec.ExecMojo.executeCommandLine (ExecMojo.java:804) > at org.codehaus.mojo.exec.ExecMojo.executeCommandLine (ExecMojo.java:751) > at org.codehaus.mojo.exec.ExecMojo.execute (ExecMojo.java:313) > at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo > (DefaultBuildPluginManager.java:137) > at org.apache.maven.lifecycle.internal.MojoExecutor.execute > (MojoExecutor.java:210) > at org.apache.maven.lifecycle.internal.MojoExecutor.execute > (MojoExecutor.java:156) > at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute > (MojoExecutor.java:148) > at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject > (LifecycleModuleBuilder.java:117) > at > org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call > (MultiThreadedBuilder.java:190) > at > org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call > (MultiThreadedBuilder.java:186) > at java.util.concurrent.FutureTask.run (FutureTask.java:266) > at java.util.concurrent.Executors$RunnableAdapter.call > (Executors.java:511) > at java.util.concurrent.FutureTask.run (FutureTask.java:266) > at java.util.concurrent.ThreadPoolExecutor.runWorker > (ThreadPoolExecutor.java:1149) > at java.util.concurrent.ThreadPoolExecutor$Worker.run > (ThreadPoolExecutor.java:624) > at java.lang.Thread.run (Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27991) [hbase-examples] MultiThreadedClientExample throws java.lang.ClassCastException
[ https://issues.apache.org/jira/browse/HBASE-27991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17747491#comment-17747491 ] Peter Somogyi commented on HBASE-27991: --- I've added the needed roles for both of you. > [hbase-examples] MultiThreadedClientExample throws > java.lang.ClassCastException > --- > > Key: HBASE-27991 > URL: https://issues.apache.org/jira/browse/HBASE-27991 > Project: HBase > Issue Type: Bug >Reporter: Nikita Pande >Assignee: Nikita Pande >Priority: Minor > > Tried using the run() method of > https://github.com/apache/hbase/blob/master/hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/MultiThreadedClientExample.java > Following is the stack trace of the error at runtime > {code:java} > Exception in thread "main" java.io.IOException: > java.lang.reflect.UndeclaredThrowableException > at > org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$1(ConnectionFactory.java:235) > at org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:216) > at > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218) > at > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:160) > at > org.apache.hadoop.hbase.client.example.MultiThreadedClientExample.run(MultiThreadedClientExample.java:136) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at .runMultiThreadedRWOps(xx) > at .main(xx) > Caused by: java.lang.reflect.UndeclaredThrowableException > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1780) > at > org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:328) > at > 
org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$1(ConnectionFactory.java:232) > ... 8 more > Caused by: java.lang.reflect.InvocationTargetException > at sun.reflect.GeneratedConstructorAccessor16.newInstance(Unknown Source) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hbase.client.ConnectionFactory.lambda$null$0(ConnectionFactory.java:233) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762) > ... 10 more > Caused by: java.lang.ClassCastException: java.util.concurrent.ForkJoinPool > cannot be cast to java.util.concurrent.ThreadPoolExecutor > at > org.apache.hadoop.hbase.client.ConnectionImplementation.(ConnectionImplementation.java:283) > at > org.apache.hadoop.hbase.client.ConnectionImplementation.(ConnectionImplementation.java:270) > ... 17 more{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
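The root cause named in the last frame of the trace can be demonstrated in isolation: a ForkJoinPool is not a ThreadPoolExecutor, so any code that unconditionally casts a supplied ExecutorService will fail when the caller passes a ForkJoinPool. The sketch below only illustrates the type relationship, not HBase's internal API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ThreadPoolExecutor;

public class PoolTypes {
    public static void main(String[] args) {
        // A fixed thread pool IS a ThreadPoolExecutor, so the cast succeeds.
        ExecutorService fixed = Executors.newFixedThreadPool(2);
        System.out.println(fixed instanceof ThreadPoolExecutor);
        // A ForkJoinPool is NOT a ThreadPoolExecutor (it extends
        // AbstractExecutorService directly), so casting it throws
        // ClassCastException, as seen in the stack trace above.
        ExecutorService fj = ForkJoinPool.commonPool();
        System.out.println(fj instanceof ThreadPoolExecutor);
        fixed.shutdown();
    }
}
```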
[jira] [Assigned] (HBASE-27991) [hbase-examples] MultiThreadedClientExample throws java.lang.ClassCastException
[ https://issues.apache.org/jira/browse/HBASE-27991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi reassigned HBASE-27991: - Assignee: Nikita Pande > [hbase-examples] MultiThreadedClientExample throws > java.lang.ClassCastException > --- > > Key: HBASE-27991 > URL: https://issues.apache.org/jira/browse/HBASE-27991 > Project: HBase > Issue Type: Bug >Reporter: Nikita Pande >Assignee: Nikita Pande >Priority: Minor > > Tried using the run() method of > https://github.com/apache/hbase/blob/master/hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/MultiThreadedClientExample.java > Following is the stack trace of the error at runtime > {code:java} > Exception in thread "main" java.io.IOException: > java.lang.reflect.UndeclaredThrowableException > at > org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$1(ConnectionFactory.java:235) > at org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:216) > at > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218) > at > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:160) > at > org.apache.hadoop.hbase.client.example.MultiThreadedClientExample.run(MultiThreadedClientExample.java:136) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at .runMultiThreadedRWOps(xx) > at .main(xx) > Caused by: java.lang.reflect.UndeclaredThrowableException > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1780) > at > org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:328) > at > org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$1(ConnectionFactory.java:232) > ... 
8 more > Caused by: java.lang.reflect.InvocationTargetException > at sun.reflect.GeneratedConstructorAccessor16.newInstance(Unknown Source) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hbase.client.ConnectionFactory.lambda$null$0(ConnectionFactory.java:233) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762) > ... 10 more > Caused by: java.lang.ClassCastException: java.util.concurrent.ForkJoinPool > cannot be cast to java.util.concurrent.ThreadPoolExecutor > at > org.apache.hadoop.hbase.client.ConnectionImplementation.(ConnectionImplementation.java:283) > at > org.apache.hadoop.hbase.client.ConnectionImplementation.(ConnectionImplementation.java:270) > ... 17 more{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27897) ConnectionImplementation#locateRegionInMeta should pause and retry when taking user region lock failed
[ https://issues.apache.org/jira/browse/HBASE-27897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27897. --- Resolution: Fixed Merged the addendum to branch-2.4. Thanks for the review, [~zhangduo]! > ConnectionImplementation#locateRegionInMeta should pause and retry when > taking user region lock failed > -- > > Key: HBASE-27897 > URL: https://issues.apache.org/jira/browse/HBASE-27897 > Project: HBase > Issue Type: Improvement > Components: Client >Affects Versions: 2.4.17, 2.5.4 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Major > Fix For: 2.6.0, 2.4.18, 2.5.6 > > > It just throws an exception and skips the pause and retry logic when > ConnectionImplementation#takeUserRegionLock fails. In some circumstances, retrying > without a pause will make the next > ConnectionImplementation#takeUserRegionLock attempt fail again, since all the > threads grab the lock simultaneously. -- This message was sent by Atlassian Jira (v8.20.10#820010)
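The pause-and-retry behaviour the ticket asks for can be sketched generically with a ReentrantLock. The retry count, timeout, pause, and method name below are illustrative assumptions, not HBase's actual code or configuration:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockRetry {
    // Try to take the lock a few times, pausing between attempts,
    // instead of failing permanently on the first timeout.
    static boolean takeWithRetry(ReentrantLock lock, int maxRetries,
                                 long timeoutMs, long pauseMs) throws InterruptedException {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            if (lock.tryLock(timeoutMs, TimeUnit.MILLISECONDS)) {
                return true;
            }
            // Linear backoff so contending threads don't all retry at once.
            Thread.sleep(pauseMs * (attempt + 1));
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        ReentrantLock lock = new ReentrantLock();
        // Uncontended lock: the first attempt succeeds.
        System.out.println(takeWithRetry(lock, 3, 10, 5));
        lock.unlock();
    }
}
```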
[jira] [Reopened] (HBASE-27897) ConnectionImplementation#locateRegionInMeta should pause and retry when taking user region lock failed
[ https://issues.apache.org/jira/browse/HBASE-27897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi reopened HBASE-27897: --- An incorrect cherry-pick to branch-2.4 causes test failures. > ConnectionImplementation#locateRegionInMeta should pause and retry when > taking user region lock failed > -- > > Key: HBASE-27897 > URL: https://issues.apache.org/jira/browse/HBASE-27897 > Project: HBase > Issue Type: Improvement > Components: Client >Affects Versions: 2.4.17, 2.5.4 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Major > Fix For: 2.6.0, 2.4.18, 2.5.6 > > > It just throws exception and skips the pause and retry logic when > ConnectionImplementation#takeUserRegionLock fails. In some circumstances, no > pause and retry by outer logic will make next > ConnectionImplementation#takeUserRegionLock still fails, since all the > threads simultaneously grab the lock. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27892) Report memstore on-heap and off-heap size as jmx metrics
[ https://issues.apache.org/jira/browse/HBASE-27892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27892. --- Fix Version/s: (was: 2.4.18) Resolution: Fixed Reverted from branch-2.4. [~jingyu] please reopen this ticket if you want this on branch-2.4. > Report memstore on-heap and off-heap size as jmx metrics > > > Key: HBASE-27892 > URL: https://issues.apache.org/jira/browse/HBASE-27892 > Project: HBase > Issue Type: Improvement > Components: metrics >Reporter: Bryan Beaudreault >Assignee: Jing Yu >Priority: Major > Fix For: 2.6.0, 2.5.6, 3.0.0-beta-1 > > > Currently we only report "memStoreSize" metric in sub=RegionServer bean. I've > noticed a big discrepancy between this metric and the RS UI's "Memstore > On-Heap Size". It seems like "memStoreSize" is the overall data size, while > the on-heap size is coming from our heap estimation which includes POJO heap > overhead, etc. > I have a regionserver with only 750mb of "memStoreSize", but the on-heap size > is over 1gb. This is non-trivial for estimating overall heap size necessary > for a regionserver. Since we have the data, let's report it. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Reopened] (HBASE-27892) Report memstore on-heap and off-heap size as jmx metrics
[ https://issues.apache.org/jira/browse/HBASE-27892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi reopened HBASE-27892: --- This commit causes compilation failure on branch-2.4. Reopening for the revert. > Report memstore on-heap and off-heap size as jmx metrics > > > Key: HBASE-27892 > URL: https://issues.apache.org/jira/browse/HBASE-27892 > Project: HBase > Issue Type: Improvement > Components: metrics >Reporter: Bryan Beaudreault >Assignee: Jing Yu >Priority: Major > Fix For: 2.6.0, 2.4.18, 2.5.6, 3.0.0-beta-1 > > > Currently we only report "memStoreSize" metric in sub=RegionServer bean. I've > noticed a big discrepancy between this metric and the RS UI's "Memstore > On-Heap Size". It seems like "memStoreSize" is the overall data size, while > the on-heap size is coming from our heap estimation which includes POJO heap > overhead, etc. > I have a regionserver with only 750mb of "memStoreSize", but the on-heap size > is over 1gb. This is non-trivial for estimating overall heap size necessary > for a regionserver. Since we have the data, let's report it. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27933) Stable version outdated on https://hbase.apache.org/downloads.html
[ https://issues.apache.org/jira/browse/HBASE-27933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733087#comment-17733087 ] Peter Somogyi commented on HBASE-27933: --- Thanks for correcting the fix version, [~zhangduo]. I have to get used to it. :) > Stable version outdated on https://hbase.apache.org/downloads.html > -- > > Key: HBASE-27933 > URL: https://issues.apache.org/jira/browse/HBASE-27933 > Project: HBase > Issue Type: Task >Affects Versions: 2.5.4 >Reporter: Dieter De Paepe >Assignee: Dieter De Paepe >Priority: Minor > Fix For: 4.0.0-alpha-1 > > > In HBASE-27849, the stable version of HBase was updated to 2.5.x. This > updated the [https://downloads.apache.org/hbase/] page. > However, the download page ([https://hbase.apache.org/downloads.html)] still > refers to 2.4.x as the stable version. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27933) Stable version outdated on https://hbase.apache.org/downloads.html
[ https://issues.apache.org/jira/browse/HBASE-27933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27933. --- Fix Version/s: 3.0.0-beta-1 Resolution: Fixed Merged to master branch. Thanks for the contribution [~dieterdp_ng]! > Stable version outdated on https://hbase.apache.org/downloads.html > -- > > Key: HBASE-27933 > URL: https://issues.apache.org/jira/browse/HBASE-27933 > Project: HBase > Issue Type: Task >Affects Versions: 2.5.4 >Reporter: Dieter De Paepe >Assignee: Dieter De Paepe >Priority: Minor > Fix For: 3.0.0-beta-1 > > > In HBASE-27849, the stable version of HBase was updated to 2.5.x. This > updated the [https://downloads.apache.org/hbase/] page. > However, the download page ([https://hbase.apache.org/downloads.html)] still > refers to 2.4.x as the stable version. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HBASE-27933) Stable version outdated on https://hbase.apache.org/downloads.html
[ https://issues.apache.org/jira/browse/HBASE-27933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi reassigned HBASE-27933: - Assignee: Dieter De Paepe > Stable version outdated on https://hbase.apache.org/downloads.html > -- > > Key: HBASE-27933 > URL: https://issues.apache.org/jira/browse/HBASE-27933 > Project: HBase > Issue Type: Task >Affects Versions: 2.5.4 >Reporter: Dieter De Paepe >Assignee: Dieter De Paepe >Priority: Minor > > In HBASE-27849, the stable version of HBase was updated to 2.5.x. This > updated the [https://downloads.apache.org/hbase/] page. > However, the download page ([https://hbase.apache.org/downloads.html)] still > refers to 2.4.x as the stable version. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27884) [hbase-filesystem] Use log4j2 instead of log4j for logging
[ https://issues.apache.org/jira/browse/HBASE-27884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17727893#comment-17727893 ] Peter Somogyi commented on HBASE-27884: --- PR #37 was reverted because it did not exclude transitive log4j dependencies. > [hbase-filesystem] Use log4j2 instead of log4j for logging > -- > > Key: HBASE-27884 > URL: https://issues.apache.org/jira/browse/HBASE-27884 > Project: HBase > Issue Type: Task > Components: Filesystem Integration >Reporter: Peter Somogyi >Assignee: Subrat Mishra >Priority: Major > Fix For: hbase-filesystem-1.0.0-alpha2 > > > Move to log4j2 in hbase-filesystem. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27884) [hbase-filesystem] Use log4j2 instead of log4j for logging
[ https://issues.apache.org/jira/browse/HBASE-27884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27884: -- Fix Version/s: hbase-filesystem-1.0.0-alpha2 > [hbase-filesystem] Use log4j2 instead of log4j for logging > -- > > Key: HBASE-27884 > URL: https://issues.apache.org/jira/browse/HBASE-27884 > Project: HBase > Issue Type: Task > Components: Filesystem Integration >Reporter: Peter Somogyi >Assignee: Subrat Mishra >Priority: Major > Fix For: hbase-filesystem-1.0.0-alpha2 > > > Move to log4j2 in hbase-filesystem. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27884) [hbase-filesystem] Use log4j2 instead of log4j for logging
Peter Somogyi created HBASE-27884: - Summary: [hbase-filesystem] Use log4j2 instead of log4j for logging Key: HBASE-27884 URL: https://issues.apache.org/jira/browse/HBASE-27884 Project: HBase Issue Type: Task Components: Filesystem Integration Reporter: Peter Somogyi Move to log4j2 in hbase-filesystem. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27883) [hbase-connectors] Use log4j2 instead of log4j for logging
Peter Somogyi created HBASE-27883: - Summary: [hbase-connectors] Use log4j2 instead of log4j for logging Key: HBASE-27883 URL: https://issues.apache.org/jira/browse/HBASE-27883 Project: HBase Issue Type: Task Components: hbase-connectors Reporter: Peter Somogyi Fix For: hbase-connectors-1.1.0 Move to log4j2 in hbase-connectors. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27804) [HBCK2] Correct sample usage of -skip with assigns in HBCK2 docs
[ https://issues.apache.org/jira/browse/HBASE-27804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27804: -- Fix Version/s: hbase-operator-tools-1.3.0 Resolution: Fixed Status: Resolved (was: Patch Available) Merged to master. Thanks [~nihaljain.cs]! > [HBCK2] Correct sample usage of -skip with assigns in HBCK2 docs > > > Key: HBASE-27804 > URL: https://issues.apache.org/jira/browse/HBASE-27804 > Project: HBase > Issue Type: Task > Components: hbase-operator-tools, hbck2 >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Trivial > Fix For: hbase-operator-tools-1.3.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27801) Remove redundant avro.version property from Kafka connector
[ https://issues.apache.org/jira/browse/HBASE-27801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27801. --- Fix Version/s: hbase-connectors-1.1.0 Resolution: Fixed Merged to master. Thanks [~stoty]! > Remove redundant avro.version property from Kafka connector > --- > > Key: HBASE-27801 > URL: https://issues.apache.org/jira/browse/HBASE-27801 > Project: HBase > Issue Type: Bug > Components: hbase-connectors, kafka >Affects Versions: connector-1.0.0 >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Minor > Fix For: hbase-connectors-1.1.0 > > > The avro.version property (1.7.7) > is defined in both the main connectors pom and the kafka module. > This is redundant. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Reopened] (HBASE-27776) Backport HBASE-27731 (Upgrade commons-validator to version 1.7) to branch-2.5
[ https://issues.apache.org/jira/browse/HBASE-27776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi reopened HBASE-27776: --- > Backport HBASE-27731 (Upgrade commons-validator to version 1.7) to branch-2.5 > - > > Key: HBASE-27776 > URL: https://issues.apache.org/jira/browse/HBASE-27776 > Project: HBase > Issue Type: Task >Reporter: Wes Schuitema >Assignee: Wes Schuitema >Priority: Minor > > This is so we also fix these CVEs for 2.5 > - [CVE-2014-0114|https://nvd.nist.gov/vuln/detail/cve-2014-0114] > - [CVE-2019-10086|https://nvd.nist.gov/vuln/detail/cve-2019-10086] > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27776) Backport HBASE-27731 (Upgrade commons-validator to version 1.7) to branch-2.5
[ https://issues.apache.org/jira/browse/HBASE-27776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27776. --- Resolution: Invalid > Backport HBASE-27731 (Upgrade commons-validator to version 1.7) to branch-2.5 > - > > Key: HBASE-27776 > URL: https://issues.apache.org/jira/browse/HBASE-27776 > Project: HBase > Issue Type: Task >Reporter: Wes Schuitema >Assignee: Wes Schuitema >Priority: Minor > > This is so we also fix these CVEs for 2.5 > - [CVE-2014-0114|https://nvd.nist.gov/vuln/detail/cve-2014-0114] > - [CVE-2019-10086|https://nvd.nist.gov/vuln/detail/cve-2019-10086] > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27777) bin/hbase --help does not list "omnibus_tarball" options
[ https://issues.apache.org/jira/browse/HBASE-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-2: -- Summary: bin/hbase --help does not list "omnibus_tarball" options (was: bin/hbase --help does not list "omnibus_tatball" options) > bin/hbase --help does not list "omnibus_tarball" options > > > Key: HBASE-2 > URL: https://issues.apache.org/jira/browse/HBASE-2 > Project: HBase > Issue Type: Bug > Components: scripts >Affects Versions: 2.4.17 >Reporter: Nick Dimiduk >Priority: Major > > Launching {{bin/hbase --help}} from the 2.4.17RC0 full distribution tarball, > I see a limited set of options. It looks like we do not source > hbase-config.sh before printing the help message, which means {{HBASE_HOME}} > is not set, and we don't get the extended output. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27745) Document protoc workarounds with Apple Silicon
[ https://issues.apache.org/jira/browse/HBASE-27745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27745. --- Fix Version/s: 3.0.0-alpha-4 Resolution: Fixed Merged to master. Thanks for the review [~ndimiduk]! > Document protoc workarounds with Apple Silicon > -- > > Key: HBASE-27745 > URL: https://issues.apache.org/jira/browse/HBASE-27745 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > Fix For: 3.0.0-alpha-4 > > > Building hbase 2.x on Apple Silicon is difficult because there is no protoc > library available. > [~ndimiduk] added a solution in HBASE-27741 to use osx-x86_64 but it is also > possible to build protoc locally and use that. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27745) Document protoc workarounds with Apple Silicon
Peter Somogyi created HBASE-27745: - Summary: Document protoc workarounds with Apple Silicon Key: HBASE-27745 URL: https://issues.apache.org/jira/browse/HBASE-27745 Project: HBase Issue Type: Sub-task Components: documentation Reporter: Peter Somogyi Assignee: Peter Somogyi Building hbase 2.x on Apple Silicon is difficult because there is no protoc library available. [~ndimiduk] added a solution in HBASE-27741 to use osx-x86_64 but it is also possible to build protoc locally and use that. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27741) Fall back to protoc osx-x86_64 on Apple Silicon
[ https://issues.apache.org/jira/browse/HBASE-27741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17703670#comment-17703670 ] Peter Somogyi commented on HBASE-27741: --- I'm totally fine with -P'!apple-silicon-workaround' flag. As you mentioned earlier we should add these to the reference guide to let the developers decide which version to use. > Fall back to protoc osx-x86_64 on Apple Silicon > --- > > Key: HBASE-27741 > URL: https://issues.apache.org/jira/browse/HBASE-27741 > Project: HBase > Issue Type: Task > Components: build >Affects Versions: 2.4.0, 2.5.0, 2.6.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 2.6.0 > > > Building non-master branches on an Apple Silicon machine fails because > there's no protoc binary available. Use a profile to fall back to the x86 > version of the binary, as per > https://cwiki.apache.org/confluence/display/HADOOP/Develop+on+Apple+Silicon+%28M1%29+macOS > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27741) Fall back to protoc osx-x86_64 on Apple Silicon
[ https://issues.apache.org/jira/browse/HBASE-27741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17703642#comment-17703642 ] Peter Somogyi commented on HBASE-27741: --- I don't know which is preferable but with your change, it is easier to get newcomers to work with HBase so I'd call that a big advantage. > Fall back to protoc osx-x86_64 on Apple Silicon > --- > > Key: HBASE-27741 > URL: https://issues.apache.org/jira/browse/HBASE-27741 > Project: HBase > Issue Type: Task > Components: build >Affects Versions: 2.4.0, 2.5.0, 2.6.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > > Building non-master branches on an Apple Silicon machine fails because > there's no protoc binary available. Use a profile to fall back to the x86 > version of the binary, as per > https://cwiki.apache.org/confluence/display/HADOOP/Develop+on+Apple+Silicon+%28M1%29+macOS > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27741) Fall back to protoc osx-x86_64 on Apple Silicon
[ https://issues.apache.org/jira/browse/HBASE-27741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27741: -- Affects Version/s: 2.4.0 > Fall back to protoc osx-x86_64 on Apple Silicon > --- > > Key: HBASE-27741 > URL: https://issues.apache.org/jira/browse/HBASE-27741 > Project: HBase > Issue Type: Task > Components: build >Affects Versions: 2.4.0, 2.5.0, 2.6.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > > Building non-master branches on an Apple Silicon machine fails because > there's no protoc binary available. Use a profile to fall back to the x86 > version of the binary, as per > https://cwiki.apache.org/confluence/display/HADOOP/Develop+on+Apple+Silicon+%28M1%29+macOS > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27741) Fall back to protoc osx-x86_64 on Apple Silicon
[ https://issues.apache.org/jira/browse/HBASE-27741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17703579#comment-17703579 ] Peter Somogyi commented on HBASE-27741: --- I'm using a workaround from [~apurtell] to build protoc 2.5.0 locally. Tested this PR and works well. {code:java} curl -sSL https://github.com/protocolbuffers/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz | tar zx - cd protobuf-2.5.0 curl -L -O https://gist.githubusercontent.com/liusheng/64aee1b27de037f8b9ccf1873b82c413/raw/118c2fce733a9a62a03281753572a45b6efb8639/protobuf-2.5.0-arm64.patch patch -p1 < protobuf-2.5.0-arm64.patch ./configure --disable-shared make mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=src/protoc {code} > Fall back to protoc osx-x86_64 on Apple Silicon > --- > > Key: HBASE-27741 > URL: https://issues.apache.org/jira/browse/HBASE-27741 > Project: HBase > Issue Type: Task > Components: build >Affects Versions: 2.5.0, 2.6.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > > Building non-master branches on an Apple Silicon machine fails because > there's no protoc binary available. Use a profile to fall back to the x86 > version of the binary, as per > https://cwiki.apache.org/confluence/display/HADOOP/Develop+on+Apple+Silicon+%28M1%29+macOS > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27701) ZStdCodec codec implementation class documentation typo
[ https://issues.apache.org/jira/browse/HBASE-27701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27701. --- Fix Version/s: 3.0.0-alpha-4 Resolution: Fixed Thanks for the contribution, [~frensjan]! > ZStdCodec codec implementation class documentation typo > --- > > Key: HBASE-27701 > URL: https://issues.apache.org/jira/browse/HBASE-27701 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Frens Jan Rumph >Assignee: Frens Jan Rumph >Priority: Minor > Fix For: 3.0.0-alpha-4 > > > As mentioned in the [u...@hbase.apache.org|mailto:u...@hbase.apache.org] > mailing list I noticed a small typo in the documentation on compression for > Zstd. The codec implementation class in the documentation is listed as > {{org.apache.hadoop.hbase.io.compress.zstd.ZStdCodec}} while the actual class > is written with a lower case s: {{ZstdCodec}}. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HBASE-27701) ZStdCodec codec implementation class documentation typo
[ https://issues.apache.org/jira/browse/HBASE-27701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi reassigned HBASE-27701: - Assignee: Frens Jan Rumph > ZStdCodec codec implementation class documentation typo > --- > > Key: HBASE-27701 > URL: https://issues.apache.org/jira/browse/HBASE-27701 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Frens Jan Rumph >Assignee: Frens Jan Rumph >Priority: Minor > > As mentioned in the [u...@hbase.apache.org|mailto:u...@hbase.apache.org] > mailing list I noticed a small typo in the documentation on compression for > Zstd. The codec implementation class in the documentation is listed as > {{org.apache.hadoop.hbase.io.compress.zstd.ZStdCodec}} while the actual class > is written with a lower case s: {{ZstdCodec}}. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27700) rolling-restart.sh stop all masters at the same time
[ https://issues.apache.org/jira/browse/HBASE-27700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17699534#comment-17699534 ] Peter Somogyi commented on HBASE-27700: --- Hi [~jacklove2run], I've added you to the contributor list so you're able to assign HBASE tickets to yourself in Jira. > rolling-restart.sh stop all masters at the same time > > > Key: HBASE-27700 > URL: https://issues.apache.org/jira/browse/HBASE-27700 > Project: HBase > Issue Type: Improvement >Reporter: Jack Yang >Priority: Minor > > The rolling-restart.sh in $HBASE_HOME/bin would stop all master services > (including the backup ones) at the same time, and then restart them at the > same time: > {code:java} > # The content of rolling-restart.sh > ... > # stop all masters before re-start to avoid races for master znode > "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" stop master > "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \ > --hosts "${HBASE_BACKUP_MASTERS}" stop master-backup > # make sure the master znode has been deleted before continuing > zmaster=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool > zookeeper.znode.master` > ... > # all masters are down, now restart > "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" > ${START_CMD_DIST_MODE} master > "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \ > --hosts "${HBASE_BACKUP_MASTERS}" ${START_CMD_DIST_MODE} master-backup {code} > In this way the HMaster service would be unavailable during this period. We > can restart them in a more graceful way, like this: > * Stop the backup masters, and then restart them one by one > * Stop the active master, then one of the backup masters would become active > * Start the original active master, now it's the backup one > Will upload patch soon. -- This message was sent by Atlassian Jira (v8.20.10#820010)
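The graceful sequence proposed in the issue can be sketched as follows. This is a hypothetical outline, not the actual patch: variable and script names follow the rolling-restart.sh excerpt quoted above, and remote execution details (ssh setup, znode checks) are elided.

```shell
# 1. Restart the backup masters one at a time, so one backup is always up.
for backup in $(cat "${HBASE_BACKUP_MASTERS}"); do
  ssh "$backup" "$bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} stop master-backup"
  ssh "$backup" "$bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} start master-backup"
done

# 2. Stop the active master; one of the freshly restarted backups takes over.
"$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" stop master

# 3. Start the original active master again; it rejoins as a backup.
"$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" start master
```

At every step at least one HMaster process is running, so the master service stays available throughout the rolling restart.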
[jira] [Resolved] (HBASE-27685) Enable code coverage reporting to SonarQube in HBase
[ https://issues.apache.org/jira/browse/HBASE-27685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27685. --- Fix Version/s: 2.6.0 3.0.0-alpha-4 2.4.17 2.5.4 Resolution: Fixed Merged to branch-2.4+. Thanks [~dora.horvath]! > Enable code coverage reporting to SonarQube in HBase > > > Key: HBASE-27685 > URL: https://issues.apache.org/jira/browse/HBASE-27685 > Project: HBase > Issue Type: Task >Reporter: Dóra Horváth >Assignee: Dóra Horváth >Priority: Minor > Fix For: 2.6.0, 3.0.0-alpha-4, 2.4.17, 2.5.4 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27679) Bump junit to 4.13.2 in hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-27679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27679. --- Fix Version/s: hbase-connectors-1.1.0 Resolution: Fixed Merged to master. > Bump junit to 4.13.2 in hbase-connectors > > > Key: HBASE-27679 > URL: https://issues.apache.org/jira/browse/HBASE-27679 > Project: HBase > Issue Type: Task > Components: hbase-connectors >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > Fix For: hbase-connectors-1.1.0 > > > Dependabot reported an issue with junit. Move to the version we have in hbase > main repository. -- This message was sent by Atlassian Jira (v8.20.10#820010)
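The bump amounts to pinning the dependency in the connectors pom. A sketch of the resulting entry, using the standard JUnit 4 Maven coordinates (the exact pom location in hbase-connectors may differ):

```xml
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.13.2</version>
  <scope>test</scope>
</dependency>
```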
[jira] [Commented] (HBASE-27639) Support hbase-connectors compilation with HBase 2.5.3, Hadoop 3.2.4 and Spark 3.2.3
[ https://issues.apache.org/jira/browse/HBASE-27639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17695059#comment-17695059 ] Peter Somogyi commented on HBASE-27639: --- I agree with [~stoty] since 2.4 is the stable release of HBase. > Support hbase-connectors compilation with HBase 2.5.3, Hadoop 3.2.4 and Spark > 3.2.3 > --- > > Key: HBASE-27639 > URL: https://issues.apache.org/jira/browse/HBASE-27639 > Project: HBase > Issue Type: Improvement > Components: hbase-connectors, spark >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Major > Fix For: hbase-connectors-1.1.0 > > > Goal is to allow hbase-connectors to compile with: > * HBase: 2.5.3 > * Hadoop: 3.2.4 and > * Spark: 3.2.3 > We could also discuss if we want to bump the versions of the above mentioned > in the pom itself, > or just want to let spark connector compile with above components as the JIRA > title says. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27639) Support hbase-connectors compilation with HBase 2.5.3, Hadoop 3.2.4 and Spark 3.2.3
[ https://issues.apache.org/jira/browse/HBASE-27639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27639. --- Fix Version/s: hbase-connectors-1.1.0 Resolution: Fixed Merged to master. Thanks for the patch [~nihaljain.cs]. > Support hbase-connectors compilation with HBase 2.5.3, Hadoop 3.2.4 and Spark > 3.2.3 > --- > > Key: HBASE-27639 > URL: https://issues.apache.org/jira/browse/HBASE-27639 > Project: HBase > Issue Type: Improvement > Components: hbase-connectors, spark >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Major > Fix For: hbase-connectors-1.1.0 > > > Goal is to allow hbase-connectors to compile with: > * HBase: 2.5.3 > * Hadoop: 3.2.4 and > * Spark: 3.2.3 > We could also discuss if we want to bump the versions of the above mentioned > in the pom itself, > or just want to let spark connector compile with above components as the JIRA > title says. -- This message was sent by Atlassian Jira (v8.20.10#820010)
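The alternative mentioned in the description, compiling against newer components without changing the pom defaults, can be driven by overriding version properties on the Maven command line. A sketch, assuming the property names (`hbase.version`, `hadoop-three.version`, `spark.version`) are the ones defined in the hbase-connectors pom:

```shell
mvn clean install -DskipTests \
  -Dhbase.version=2.5.3 \
  -Dhadoop-three.version=3.2.4 \
  -Dspark.version=3.2.3
# The Scala version may also need to match the chosen Spark build,
# e.g. via a -Dscala.version override.
```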
[jira] [Resolved] (HBASE-27678) Update checkstyle in hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-27678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27678. --- Fix Version/s: hbase-connectors-1.1.0 Resolution: Fixed Merged to master. > Update checkstyle in hbase-connectors > - > > Key: HBASE-27678 > URL: https://issues.apache.org/jira/browse/HBASE-27678 > Project: HBase > Issue Type: Task > Components: hbase-connectors >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > Fix For: hbase-connectors-1.1.0 > > > There is a known CVE on the used checkstyle in hbase-connectors. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27679) Bump junit to 4.13.2 in hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-27679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27679: -- Summary: Bump junit to 4.13.2 in hbase-connectors (was: Bump junit to 4.13.2) > Bump junit to 4.13.2 in hbase-connectors > > > Key: HBASE-27679 > URL: https://issues.apache.org/jira/browse/HBASE-27679 > Project: HBase > Issue Type: Task > Components: hbase-connectors >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > > Dependabot reported an issue with junit. Move to the version we have in hbase > main repository. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27679) Bump junit to 4.13.2
Peter Somogyi created HBASE-27679: - Summary: Bump junit to 4.13.2 Key: HBASE-27679 URL: https://issues.apache.org/jira/browse/HBASE-27679 Project: HBase Issue Type: Task Components: hbase-connectors Reporter: Peter Somogyi Assignee: Peter Somogyi Dependabot reported an issue with junit. Move to the version we have in hbase main repository. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-24452) Try github ci for hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-24452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-24452. --- Resolution: Won't Do > Try github ci for hbase-connectors > -- > > Key: HBASE-24452 > URL: https://issues.apache.org/jira/browse/HBASE-24452 > Project: HBase > Issue Type: Improvement >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27678) Update checkstyle in hbase-connectors
Peter Somogyi created HBASE-27678: - Summary: Update checkstyle in hbase-connectors Key: HBASE-27678 URL: https://issues.apache.org/jira/browse/HBASE-27678 Project: HBase Issue Type: Task Components: hbase-connectors Reporter: Peter Somogyi Assignee: Peter Somogyi There is a known CVE on the used checkstyle in hbase-connectors. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27665) Update checkstyle in hbase-operator-tools
[ https://issues.apache.org/jira/browse/HBASE-27665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27665. --- Fix Version/s: hbase-operator-tools-1.3.0 Resolution: Fixed Merged to master. Thanks [~zhangduo] for reviewing! > Update checkstyle in hbase-operator-tools > - > > Key: HBASE-27665 > URL: https://issues.apache.org/jira/browse/HBASE-27665 > Project: HBase > Issue Type: Task > Components: hbase-operator-tools >Affects Versions: hbase-operator-tools-1.2.0 >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > Fix For: hbase-operator-tools-1.3.0 > > > The checkstyle dependency is vulnerable in hbase-operator-tools. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27662) Correct the line logged with flag hbase.procedure.upgrade-to-2-2 in docs
[ https://issues.apache.org/jira/browse/HBASE-27662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17693299#comment-17693299 ] Peter Somogyi commented on HBASE-27662: --- [~yashdodeja], I added you to the contributor list and assigned the ticket to you. > Correct the line logged with flag hbase.procedure.upgrade-to-2-2 in docs > > > Key: HBASE-27662 > URL: https://issues.apache.org/jira/browse/HBASE-27662 > Project: HBase > Issue Type: Improvement > Components: documentation >Reporter: Yash Dodeja >Assignee: Yash Dodeja >Priority: Minor > > The https://hbase.apache.org/book.html#upgrade2.2 doc says to search for a > "READY TO ROLLING UPGRADE" log in master after setting the flag whereas no > such log exists. The actual log line indicating that procedure store is empty > is "UPGRADE OK: All existed procedures have been finished, quit..." -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HBASE-27662) Correct the line logged with flag hbase.procedure.upgrade-to-2-2 in docs
[ https://issues.apache.org/jira/browse/HBASE-27662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi reassigned HBASE-27662: - Assignee: Yash Dodeja > Correct the line logged with flag hbase.procedure.upgrade-to-2-2 in docs > > > Key: HBASE-27662 > URL: https://issues.apache.org/jira/browse/HBASE-27662 > Project: HBase > Issue Type: Improvement > Components: documentation >Reporter: Yash Dodeja >Assignee: Yash Dodeja >Priority: Minor > > The https://hbase.apache.org/book.html#upgrade2.2 doc says to search for a > "READY TO ROLLING UPGRADE" log in master after setting the flag whereas no > such log exists. The actual log line indicating that procedure store is empty > is "UPGRADE OK: All existed procedures have been finished, quit..." -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-25610) Support multiple tables as input in generateMissingTableDescriptorFile command in HBCK2
[ https://issues.apache.org/jira/browse/HBASE-25610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-25610: -- Release Note: The generateMissingTableDescriptorFile command accepts multiple table names or generates the missing .tableinfo files for all tables when no table name is specified. > Support multiple tables as input in generateMissingTableDescriptorFile > command in HBCK2 > --- > > Key: HBASE-25610 > URL: https://issues.apache.org/jira/browse/HBASE-25610 > Project: HBase > Issue Type: Improvement > Components: hbase-operator-tools, hbck2 >Reporter: Sanjeet Nishad >Assignee: Sanjeet Nishad >Priority: Minor > Fix For: hbase-operator-tools-1.3.0 > > > Currently 'generateMissingTableDescriptorFile' command in HBCK2 supports only > 1 TableName as input. HBCK's _fixOrphanTables()_ had support for fixing the > missing .tableinfo files for a list of tables and it also supported fixing > the missing descriptor for all tables if no tables were specified. > It looks like a convenient enhancement to have in > generateMissingTableDescriptorFile command of HBCK2. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-25610) Support multiple tables as input in generateMissingTableDescriptorFile command in HBCK2
[ https://issues.apache.org/jira/browse/HBASE-25610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-25610. --- Fix Version/s: hbase-operator-tools-1.3.0 Resolution: Fixed Merged to master. Thanks for the patch [~sanjeetnishad]! > Support multiple tables as input in generateMissingTableDescriptorFile > command in HBCK2 > --- > > Key: HBASE-25610 > URL: https://issues.apache.org/jira/browse/HBASE-25610 > Project: HBase > Issue Type: Improvement > Components: hbase-operator-tools, hbck2 >Reporter: Sanjeet Nishad >Assignee: Sanjeet Nishad >Priority: Minor > Fix For: hbase-operator-tools-1.3.0 > > > Currently 'generateMissingTableDescriptorFile' command in HBCK2 supports only > 1 TableName as input. HBCK's _fixOrphanTables()_ had support for fixing the > missing .tableinfo files for a list of tables and it also supported fixing > the missing descriptor for all tables if no tables were specified. > It looks like a convenient enhancement to have in > generateMissingTableDescriptorFile command of HBCK2. -- This message was sent by Atlassian Jira (v8.20.10#820010)
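With this enhancement the command accepts several table names at once, or none at all to cover every table. A hypothetical invocation, assuming the usual `hbase hbck -j <jar>` entry point for HBCK2 and invented table names:

```shell
# Regenerate the missing .tableinfo files for two specific tables:
hbase hbck -j hbase-hbck2.jar generateMissingTableDescriptorFile ns1:table1 ns2:table2

# With no table name, regenerate the missing .tableinfo for all tables:
hbase hbck -j hbase-hbck2.jar generateMissingTableDescriptorFile
```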
[jira] [Created] (HBASE-27665) Update checkstyle in hbase-operator-tools
Peter Somogyi created HBASE-27665: - Summary: Update checkstyle in hbase-operator-tools Key: HBASE-27665 URL: https://issues.apache.org/jira/browse/HBASE-27665 Project: HBase Issue Type: Task Components: hbase-operator-tools Affects Versions: hbase-operator-tools-1.2.0 Reporter: Peter Somogyi Assignee: Peter Somogyi The checkstyle dependency is vulnerable in hbase-operator-tools. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27630) hbase-spark bulkload stage directory limited to hdfs only
[ https://issues.apache.org/jira/browse/HBASE-27630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27630. --- Resolution: Fixed Merged to master branch in the hbase-connectors repository. Thanks for the patch [~sergey.soldatov]! > hbase-spark bulkload stage directory limited to hdfs only > - > > Key: HBASE-27630 > URL: https://issues.apache.org/jira/browse/HBASE-27630 > Project: HBase > Issue Type: Bug > Components: spark >Affects Versions: connector-1.0.0 >Reporter: Sergey Soldatov >Assignee: Sergey Soldatov >Priority: Major > Fix For: hbase-connectors-1.1.0 > > > It's impossible to set up the staging directory for the bulkload operation in > the spark-hbase connector to any filesystem other than hdfs. That might > be a problem for deployments where hbase.rootdir points to cloud storage. In > this case, an additional copy task from hdfs to cloud storage would be > required before loading hfiles to hbase. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27630) hbase-spark bulkload stage directory limited to hdfs only
[ https://issues.apache.org/jira/browse/HBASE-27630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27630: -- Fix Version/s: hbase-connectors-1.1.0 > hbase-spark bulkload stage directory limited to hdfs only > - > > Key: HBASE-27630 > URL: https://issues.apache.org/jira/browse/HBASE-27630 > Project: HBase > Issue Type: Bug > Components: spark >Affects Versions: connector-1.0.0 >Reporter: Sergey Soldatov >Assignee: Sergey Soldatov >Priority: Major > Fix For: hbase-connectors-1.1.0 > > > It's impossible to set up the staging directory for the bulkload operation in > the spark-hbase connector to any filesystem other than hdfs. That might > be a problem for deployments where hbase.rootdir points to cloud storage. In > this case, an additional copy task from hdfs to cloud storage would be > required before loading hfiles to hbase. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27630) hbase-spark bulkload stage directory limited to hdfs only
[ https://issues.apache.org/jira/browse/HBASE-27630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27630: -- Affects Version/s: connector-1.0.0 (was: 3.0.0-alpha-3) > hbase-spark bulkload stage directory limited to hdfs only > - > > Key: HBASE-27630 > URL: https://issues.apache.org/jira/browse/HBASE-27630 > Project: HBase > Issue Type: Bug > Components: spark >Affects Versions: connector-1.0.0 >Reporter: Sergey Soldatov >Assignee: Sergey Soldatov >Priority: Major > > It's impossible to set up the staging directory for the bulkload operation in > the spark-hbase connector to any filesystem other than hdfs. That might > be a problem for deployments where hbase.rootdir points to cloud storage. In > this case, an additional copy task from hdfs to cloud storage would be > required before loading hfiles to hbase. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27590) Change Iterable to List in SnapshotFileCache
[ https://issues.apache.org/jira/browse/HBASE-27590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27590. --- Resolution: Fixed Cherry-picked to branch-2.4. > Change Iterable to List in SnapshotFileCache > > > Key: HBASE-27590 > URL: https://issues.apache.org/jira/browse/HBASE-27590 > Project: HBase > Issue Type: Improvement >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Minor > Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.4 > > Attachments: flame-1.html > > > The HFileCleaners can have low performance on large /archive area when used > with slow storage like S3. The snapshot write lock in SnapshotFileCache is > held while the file metadata is fetched from S3. Due to this even with > multiple cleaner threads only a single cleaner can effectively delete files > from the archive. > File metadata collection is performed before SnapshotHFileCleaner just by > changing the passed parameter type in FileCleanerDelegate from Iterable to > List. > Running with the below cleaner configurations I observed that the lock held > in SnapshotFileCache went down from 45000ms to 100ms when it was running for > 1000 files in a directory. The complete evaluation and deletion for this > folder took the same time but since the file metadata fetch from S3 was done > outside of the lock the multiple cleaner threads were able to run > concurrently. > {noformat} > hbase.cleaner.directory.sorting=false > hbase.cleaner.scan.dir.concurrent.size=0.75 > hbase.regionserver.hfilecleaner.small.thread.count=16 > hbase.regionserver.hfilecleaner.large.thread.count=8 > {noformat} > The files to evaluate are already passed in a List to > CleanerChore.checkAndDeleteFiles but it is converted to an Iterable to run > the checks on the configured cleaners. -- This message was sent by Atlassian Jira (v8.20.10#820010)
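The core of the change described above is a pattern rather than an API detail: materialize the slow storage listing into a List before taking the snapshot lock, so the lock guards only the in-memory membership check. A minimal illustrative sketch; the class and method names here are invented, not the real HBase types:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Sketch of the HBASE-27590 pattern: the method accepts a fully materialized
// List (not a lazy Iterable), so no slow S3/metadata calls happen while the
// lock is held and multiple cleaner threads can run concurrently.
class SnapshotFileCacheSketch {
    private final Object lock = new Object();
    private final Set<String> snapshotReferencedFiles;

    SnapshotFileCacheSketch(Set<String> snapshotReferencedFiles) {
        this.snapshotReferencedFiles = snapshotReferencedFiles;
    }

    // Callers fetch file metadata (the slow storage call) *before* invoking
    // this, so the lock is held only for the cheap membership check.
    List<String> getUnreferencedFiles(List<String> candidateFiles) {
        List<String> deletable = new ArrayList<>();
        synchronized (lock) {
            for (String file : candidateFiles) {
                if (!snapshotReferencedFiles.contains(file)) {
                    deletable.add(file);
                }
            }
        }
        return deletable;
    }
}
```

Files referenced by a snapshot are retained; everything else is reported as deletable, and the lock hold time no longer depends on storage latency.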
[jira] [Reopened] (HBASE-27590) Change Iterable to List in SnapshotFileCache
[ https://issues.apache.org/jira/browse/HBASE-27590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi reopened HBASE-27590: --- > Change Iterable to List in SnapshotFileCache > > > Key: HBASE-27590 > URL: https://issues.apache.org/jira/browse/HBASE-27590 > Project: HBase > Issue Type: Improvement >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Minor > Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.4 > > Attachments: flame-1.html > > > The HFileCleaners can have low performance on large /archive area when used > with slow storage like S3. The snapshot write lock in SnapshotFileCache is > held while the file metadata is fetched from S3. Due to this even with > multiple cleaner threads only a single cleaner can effectively delete files > from the archive. > File metadata collection is performed before SnapshotHFileCleaner just by > changing the passed parameter type in FileCleanerDelegate from Iterable to > List. > Running with the below cleaner configurations I observed that the lock held > in SnapshotFileCache went down from 45000ms to 100ms when it was running for > 1000 files in a directory. The complete evaluation and deletion for this > folder took the same time but since the file metadata fetch from S3 was done > outside of the lock the multiple cleaner threads were able to run > concurrently. > {noformat} > hbase.cleaner.directory.sorting=false > hbase.cleaner.scan.dir.concurrent.size=0.75 > hbase.regionserver.hfilecleaner.small.thread.count=16 > hbase.regionserver.hfilecleaner.large.thread.count=8 > {noformat} > The files to evaluate are already passed in a List to > CleanerChore.checkAndDeleteFiles but it is converted to an Iterable to run > the checks on the configured cleaners. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27629) Backport HBASE-27043 to branch-2.4
[ https://issues.apache.org/jira/browse/HBASE-27629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27629. --- Fix Version/s: 2.4.17 Resolution: Fixed Merged to branch-2.4. > Backport HBASE-27043 to branch-2.4 > -- > > Key: HBASE-27629 > URL: https://issues.apache.org/jira/browse/HBASE-27629 > Project: HBase > Issue Type: Sub-task >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > Fix For: 2.4.17 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27627) Backport HBASE-25899 to branch-2.4
[ https://issues.apache.org/jira/browse/HBASE-27627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27627. --- Fix Version/s: 2.4.17 Resolution: Fixed Merged backport to branch-2.4. Thanks for the review [~taklwu] ! > Backport HBASE-25899 to branch-2.4 > -- > > Key: HBASE-27627 > URL: https://issues.apache.org/jira/browse/HBASE-27627 > Project: HBase > Issue Type: Sub-task >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > Fix For: 2.4.17 > > > Backport HBASE-25899 to branch-2.4 as it can increase the speed of archive > cleanup. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27629) Backport HBASE-27043 to branch-2.4
Peter Somogyi created HBASE-27629: - Summary: Backport HBASE-27043 to branch-2.4 Key: HBASE-27629 URL: https://issues.apache.org/jira/browse/HBASE-27629 Project: HBase Issue Type: Sub-task Reporter: Peter Somogyi Assignee: Peter Somogyi -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27628) Spotless fix in RELEASENOTES.md
[ https://issues.apache.org/jira/browse/HBASE-27628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27628. --- Fix Version/s: 2.4.17 2.5.4 Resolution: Fixed Merged trivial fix to branch-2.4 and branch-2.5. > Spotless fix in RELEASENOTES.md > --- > > Key: HBASE-27628 > URL: https://issues.apache.org/jira/browse/HBASE-27628 > Project: HBase > Issue Type: Bug >Affects Versions: 2.4.16, 2.5.3 >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Trivial > Fix For: 2.4.17, 2.5.4 > > > There is a whitespace violation in RELEASENOTES.md on branch-2.4 and > branch-2.5 causing the pre-commit and nightly builds to fail. > {noformat} > [ERROR] Failed to execute goal > com.diffplug.spotless:spotless-maven-plugin:2.27.2:check (default-cli) on > project hbase: The following files had format violations: > [ERROR] RELEASENOTES.md > [ERROR] @@ -85,7 +85,7 @@ > [ERROR] > [ERROR] > *·[HBASE-27529](https://issues.apache.org/jira/browse/HBASE-27529)·|·*Major*·|·**Provide·RS·coproc·ability·to·attach·WAL·extended·attributes·to·mutations·at·replication·sink** > [ERROR] > [ERROR] > -New·regionserver·coproc·endpoints·that·can·be·used·by·coproc·at·the·replication·sink·cluster·if·WAL·has·extended·attributes.· > [ERROR] > +New·regionserver·coproc·endpoints·that·can·be·used·by·coproc·at·the·replication·sink·cluster·if·WAL·has·extended·attributes. > [ERROR] > Using·the·new·endpoints,·WAL·extended·attributes·can·be·transferred·to·Mutation·attributes·at·the·replication·sink·cluster. > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
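Violations like the one quoted can be reproduced and fixed locally with the standard goals of the spotless-maven-plugin shown in the error output:

```shell
# Report formatting violations (what the pre-commit and nightly jobs run):
mvn spotless:check

# Rewrite the offending files in place, then commit the result:
mvn spotless:apply
```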
[jira] [Updated] (HBASE-27628) Spotless fix in RELEASENOTES.md
[ https://issues.apache.org/jira/browse/HBASE-27628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27628: -- Summary: Spotless fix in RELEASENOTES.md (was: Spotbugs fix in RELEASENOTES.md) > Spotless fix in RELEASENOTES.md > --- > > Key: HBASE-27628 > URL: https://issues.apache.org/jira/browse/HBASE-27628 > Project: HBase > Issue Type: Bug >Affects Versions: 2.4.16, 2.5.3 >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Trivial > > There is a whitespace violation in RELEASENOTES.md on branch-2.4 and > branch-2.5 causing the pre-commit and nightly builds to fail. > {noformat} > [ERROR] Failed to execute goal > com.diffplug.spotless:spotless-maven-plugin:2.27.2:check (default-cli) on > project hbase: The following files had format violations: > [ERROR] RELEASENOTES.md > [ERROR] @@ -85,7 +85,7 @@ > [ERROR] > [ERROR] > *·[HBASE-27529](https://issues.apache.org/jira/browse/HBASE-27529)·|·*Major*·|·**Provide·RS·coproc·ability·to·attach·WAL·extended·attributes·to·mutations·at·replication·sink** > [ERROR] > [ERROR] > -New·regionserver·coproc·endpoints·that·can·be·used·by·coproc·at·the·replication·sink·cluster·if·WAL·has·extended·attributes.· > [ERROR] > +New·regionserver·coproc·endpoints·that·can·be·used·by·coproc·at·the·replication·sink·cluster·if·WAL·has·extended·attributes. > [ERROR] > Using·the·new·endpoints,·WAL·extended·attributes·can·be·transferred·to·Mutation·attributes·at·the·replication·sink·cluster. > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27628) Spotbugs fix in RELEASENOTES.md
Peter Somogyi created HBASE-27628: - Summary: Spotbugs fix in RELEASENOTES.md Key: HBASE-27628 URL: https://issues.apache.org/jira/browse/HBASE-27628 Project: HBase Issue Type: Bug Affects Versions: 2.5.3, 2.4.16 Reporter: Peter Somogyi Assignee: Peter Somogyi There is a whitespace violation in RELEASENOTES.md on branch-2.4 and branch-2.5 causing the pre-commit and nightly builds to fail. {noformat} [ERROR] Failed to execute goal com.diffplug.spotless:spotless-maven-plugin:2.27.2:check (default-cli) on project hbase: The following files had format violations: [ERROR] RELEASENOTES.md [ERROR] @@ -85,7 +85,7 @@ [ERROR] [ERROR] *·[HBASE-27529](https://issues.apache.org/jira/browse/HBASE-27529)·|·*Major*·|·**Provide·RS·coproc·ability·to·attach·WAL·extended·attributes·to·mutations·at·replication·sink** [ERROR] [ERROR] -New·regionserver·coproc·endpoints·that·can·be·used·by·coproc·at·the·replication·sink·cluster·if·WAL·has·extended·attributes.· [ERROR] +New·regionserver·coproc·endpoints·that·can·be·used·by·coproc·at·the·replication·sink·cluster·if·WAL·has·extended·attributes. [ERROR] Using·the·new·endpoints,·WAL·extended·attributes·can·be·transferred·to·Mutation·attributes·at·the·replication·sink·cluster. {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27529) Provide RS coproc ability to attach WAL extended attributes to mutations at replication sink
[ https://issues.apache.org/jira/browse/HBASE-27529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27529: -- Release Note: New regionserver coproc endpoints that can be used by coproc at the replication sink cluster if WAL has extended attributes. Using the new endpoints, WAL extended attributes can be transferred to Mutation attributes at the replication sink cluster. was: New regionserver coproc endpoints that can be used by coproc at the replication sink cluster if WAL has extended attributes. Using the new endpoints, WAL extended attributes can be transferred to Mutation attributes at the replication sink cluster. > Provide RS coproc ability to attach WAL extended attributes to mutations at > replication sink > > > Key: HBASE-27529 > URL: https://issues.apache.org/jira/browse/HBASE-27529 > Project: HBase > Issue Type: Improvement > Components: Coprocessors, Replication >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Fix For: 2.6.0, 3.0.0-alpha-4, 2.4.16, 2.5.3 > > > HBase provides coproc ability to enhance WALKey attributes (a.k.a. WAL > annotations) in order for the replication sink cluster to build required > metadata with the mutations. The endpoint is preWALAppend(). This ability was > provided by HBASE-22622. The map of extended attributes is optional and hence > not directly used by hbase internally. > For any hbase downstreamers to build CDC (Change Data Capture) like > functionality, it might required additional metadata in addition to the ones > being used by hbase already (replication scope, list of cluster ids, seq id, > table name, region id etc). For instance, Phoenix uses many additional > attributes like tenant id, schema name, table type etc. > We already have this extended map of attributes available in WAL protobuf, to > provide us the capability to (de)serialize it. 
While creating a new > ReplicateWALEntryRequest from the list of WAL entries, we are able to > serialize the additional attributes. Similarly, at the replication sink side, > the deserialized WALEntry has the extended attributes available. > At the sink cluster, we should be able to attach the deserialized extended > attributes to the newly generated mutations so that the peer cluster can > utilize the mutation attributes to re-build required metadata. -- This message was sent by Atlassian Jira (v8.20.10#820010)
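The attribute hand-off that HBASE-27529 describes can be sketched without any HBase dependencies. In the sketch below, `WalEntry` and `MutationStub` are hypothetical stand-ins for HBase's deserialized `WALEntry` and `Mutation`; the real hook points are the `preWALAppend()` coproc endpoint and the new regionserver coproc endpoints at the replication sink, which this JDK-only model does not attempt to reproduce.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Simplified, HBase-free model of transferring WAL extended attributes to
// mutation attributes at the replication sink. All names are hypothetical.
public class WalAttributeTransfer {

    static class WalEntry {
        // Stands in for the optional extended-attribute map carried in the WAL protobuf.
        final Map<String, byte[]> extendedAttributes = new HashMap<>();
    }

    static class MutationStub {
        final Map<String, byte[]> attributes = new HashMap<>();
        void setAttribute(String name, byte[] value) { attributes.put(name, value); }
        byte[] getAttribute(String name) { return attributes.get(name); }
    }

    // At the sink, copy every deserialized WAL extended attribute onto the
    // mutation rebuilt from that entry, so downstream consumers (e.g. a CDC
    // pipeline) can read them back from the mutation.
    static MutationStub toMutation(WalEntry entry) {
        MutationStub m = new MutationStub();
        entry.extendedAttributes.forEach(m::setAttribute);
        return m;
    }

    public static void main(String[] args) {
        WalEntry e = new WalEntry();
        e.extendedAttributes.put("tenant_id", "t1".getBytes(StandardCharsets.UTF_8));
        MutationStub m = toMutation(e);
        System.out.println(new String(m.getAttribute("tenant_id"), StandardCharsets.UTF_8));
    }
}
```

The Phoenix-style attributes mentioned in the issue (tenant id, schema name, table type) would travel as entries in that map.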
[jira] [Resolved] (HBASE-27590) Change Iterable to List in SnapshotFileCache
[ https://issues.apache.org/jira/browse/HBASE-27590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27590. --- Fix Version/s: 2.6.0 3.0.0-alpha-4 2.5.4 Resolution: Fixed Merged to branch-2.5, branch-2, and master. Will merge to branch-2.4 once HBASE-27627 gets resolved. Thanks for the suggestion and review [~zhangduo]! > Change Iterable to List in SnapshotFileCache > > > Key: HBASE-27590 > URL: https://issues.apache.org/jira/browse/HBASE-27590 > Project: HBase > Issue Type: Improvement >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Minor > Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.4 > > Attachments: flame-1.html > > > The HFileCleaners can have low performance on a large /archive area when used > with slow storage like S3. The snapshot write lock in SnapshotFileCache is > held while the file metadata is fetched from S3. Due to this, even with > multiple cleaner threads, only a single cleaner can effectively delete files > from the archive. > File metadata collection can be performed before SnapshotHFileCleaner runs just by > changing the passed parameter type in FileCleanerDelegate from Iterable to > List. > Running with the below cleaner configurations, I observed that the lock held > in SnapshotFileCache went down from 45000ms to 100ms when it was running for > 1000 files in a directory. The complete evaluation and deletion for this > folder took the same time, but since the file metadata fetch from S3 was done > outside of the lock, the multiple cleaner threads were able to run > concurrently. > {noformat} > hbase.cleaner.directory.sorting=false > hbase.cleaner.scan.dir.concurrent.size=0.75 > hbase.regionserver.hfilecleaner.small.thread.count=16 > hbase.regionserver.hfilecleaner.large.thread.count=8 > {noformat} > The files to evaluate are already passed in a List to > CleanerChore.checkAndDeleteFiles but it is converted to an Iterable to run > the checks on the configured cleaners.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
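The locking change behind HBASE-27590 can be illustrated with a JDK-only sketch. `FileMeta` and `SnapshotCacheModel` below are hypothetical stand-ins for HBase's file-status objects and `SnapshotFileCache`; the point is only the shape of the fix: accepting a `List` (already materialized) instead of a lazy `Iterable` guarantees the caller has paid the slow metadata-fetch cost before the cache lock is taken.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: the lock only guards the cheap membership check, because the
// (possibly slow, e.g. S3) metadata fetch happened before the call.
public class CleanerLockSketch {

    record FileMeta(String path, long size) {}

    static class SnapshotCacheModel {
        private final ReentrantLock lock = new ReentrantLock();

        // Accepting List rather than Iterable means no lazy I/O can
        // happen while the lock is held.
        List<FileMeta> unreferenced(List<FileMeta> files) {
            lock.lock();
            try {
                List<FileMeta> out = new ArrayList<>();
                for (FileMeta f : files) {
                    // Toy rule standing in for the snapshot-reference check.
                    if (!f.path().contains("snapshot")) out.add(f);
                }
                return out;
            } finally {
                lock.unlock();
            }
        }
    }

    public static void main(String[] args) {
        // Metadata is fetched up front (simulating the S3 listing),
        // outside the lock, then passed in as a concrete List.
        List<FileMeta> metas = new ArrayList<>(List.of(
            new FileMeta("archive/a", 1L),
            new FileMeta("snapshot/b", 1L)));
        List<FileMeta> deletable = new SnapshotCacheModel().unreferenced(metas);
        System.out.println(deletable.size());
    }
}
```

With this shape, multiple cleaner threads can each do their own metadata fetch concurrently and only serialize on the short critical section, matching the 45000ms-to-100ms lock-hold reduction reported in the issue.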
[jira] [Updated] (HBASE-27624) Cannot Specify Namespace via the hbase.table Option in Spark Connector
[ https://issues.apache.org/jira/browse/HBASE-27624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27624: -- Affects Version/s: hbase-connectors-1.0.1 (was: 1.0.1) > Cannot Specify Namespace via the hbase.table Option in Spark Connector > -- > > Key: HBASE-27624 > URL: https://issues.apache.org/jira/browse/HBASE-27624 > Project: HBase > Issue Type: Bug > Components: hbase-connectors, spark >Affects Versions: hbase-connectors-1.0.1 >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Fix For: hbase-connectors-1.0.1 > > > When using the old mapping format and specifying the HBase table via the > _hbase.table_ option, the connector passes the namespaced string to HBase, > and we get > {noformat} > Caused by: java.lang.IllegalArgumentException: Illegal character code:58, <:> > at 7. User-space table qualifiers may only contain 'alphanumeric characters' > and digits: staplesHbaseNamespace:staplesHbaseTableName > at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:187) > at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:138) > at org.apache.hadoop.hbase.TableName.(TableName.java:320) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:354) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:484){noformat} > This seems to be related to the changes in HBASE-24276 -- This message was sent by Atlassian Jira (v8.20.10#820010)
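The failure above happens because the whole `namespace:qualifier` string is handed to HBase as a table qualifier, and `:` is illegal in a qualifier. The shape of the fix can be sketched in plain Java; `Parsed` and `parse` below are hypothetical names, not the connector's actual API:

```java
// Sketch: split the configured hbase.table value on ':' so the namespace is
// passed separately instead of leaking into the table qualifier.
public class TableNameSplit {

    record Parsed(String namespace, String qualifier) {}

    static Parsed parse(String configured) {
        int i = configured.indexOf(':');
        // No colon: table lives in the default namespace.
        return i < 0
            ? new Parsed("default", configured)
            : new Parsed(configured.substring(0, i), configured.substring(i + 1));
    }

    public static void main(String[] args) {
        Parsed p = parse("staplesHbaseNamespace:staplesHbaseTableName");
        System.out.println(p.namespace() + " / " + p.qualifier());
    }
}
```

In real code the two parts would then go to something like `TableName.valueOf(namespace, qualifier)`, so the qualifier validation never sees the colon.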
[jira] [Resolved] (HBASE-27624) Cannot Specify Namespace via the hbase.table Option in Spark Connector
[ https://issues.apache.org/jira/browse/HBASE-27624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27624. --- Fix Version/s: hbase-connectors-1.0.1 Resolution: Fixed Pushed to master. Thanks [~stoty]! > Cannot Specify Namespace via the hbase.table Option in Spark Connector > -- > > Key: HBASE-27624 > URL: https://issues.apache.org/jira/browse/HBASE-27624 > Project: HBase > Issue Type: Bug > Components: hbase-connectors, spark >Affects Versions: 1.0.1 >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Fix For: hbase-connectors-1.0.1 > > > When using the old mapping format and specifying the HBase table via the > _hbase.table_ option, the connector passes the namespaced string to HBase, > and we get > {noformat} > Caused by: java.lang.IllegalArgumentException: Illegal character code:58, <:> > at 7. User-space table qualifiers may only contain 'alphanumeric characters' > and digits: staplesHbaseNamespace:staplesHbaseTableName > at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:187) > at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:138) > at org.apache.hadoop.hbase.TableName.(TableName.java:320) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:354) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:484){noformat} > This seems to be related to the changes in HBASE-24276 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27627) Backport HBASE-25899 to branch-2.4
Peter Somogyi created HBASE-27627: - Summary: Backport HBASE-25899 to branch-2.4 Key: HBASE-27627 URL: https://issues.apache.org/jira/browse/HBASE-27627 Project: HBase Issue Type: Sub-task Reporter: Peter Somogyi Assignee: Peter Somogyi Backport HBASE-25899 to branch-2.4 as it can increase the speed of archive cleanup. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27590) Change Iterable to List in SnapshotFileCache
[ https://issues.apache.org/jira/browse/HBASE-27590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27590: -- Summary: Change Iterable to List in SnapshotFileCache (was: Change Iterable to List in CleanerChore) > Change Iterable to List in SnapshotFileCache > > > Key: HBASE-27590 > URL: https://issues.apache.org/jira/browse/HBASE-27590 > Project: HBase > Issue Type: Improvement >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Minor > Attachments: flame-1.html > > > The HFileCleaners can have low performance on large /archive area when used > with slow storage like S3. The snapshot write lock in SnapshotFileCache is > held while the file metadata is fetched from S3. Due to this even with > multiple cleaner threads only a single cleaner can effectively delete files > from the archive. > File metadata collection is performed before SnapshotHFileCleaner just by > changing the passed parameter type in FileCleanerDelegate from Iterable to > List. > Running with the below cleaner configurations I observed that the lock held > in SnapshotFileCache went down from 45000ms to 100ms when it was running for > 1000 files in a directory. The complete evaluation and deletion for this > folder took the same time but since the file metadata fetch from S3 was done > outside of the lock the multiple cleaner threads were able to run > concurrently. > {noformat} > hbase.cleaner.directory.sorting=false > hbase.cleaner.scan.dir.concurrent.size=0.75 > hbase.regionserver.hfilecleaner.small.thread.count=16 > hbase.regionserver.hfilecleaner.large.thread.count=8 > {noformat} > The files to evaluate are already passed in a List to > CleanerChore.checkAndDeleteFiles but it is converted to an Iterable to run > the checks on the configured cleaners. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27590) Change Iterable to List in CleanerChore
[ https://issues.apache.org/jira/browse/HBASE-27590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17681049#comment-17681049 ] Peter Somogyi commented on HBASE-27590: --- DISCUSS thread at dev@: https://lists.apache.org/thread/bc1rkttscncsg75po9v0wdsqyovtz7d5 > Change Iterable to List in CleanerChore > --- > > Key: HBASE-27590 > URL: https://issues.apache.org/jira/browse/HBASE-27590 > Project: HBase > Issue Type: Improvement >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Minor > Attachments: flame-1.html > > > The HFileCleaners can have low performance on large /archive area when used > with slow storage like S3. The snapshot write lock in SnapshotFileCache is > held while the file metadata is fetched from S3. Due to this even with > multiple cleaner threads only a single cleaner can effectively delete files > from the archive. > File metadata collection is performed before SnapshotHFileCleaner just by > changing the passed parameter type in FileCleanerDelegate from Iterable to > List. > Running with the below cleaner configurations I observed that the lock held > in SnapshotFileCache went down from 45000ms to 100ms when it was running for > 1000 files in a directory. The complete evaluation and deletion for this > folder took the same time but since the file metadata fetch from S3 was done > outside of the lock the multiple cleaner threads were able to run > concurrently. > {noformat} > hbase.cleaner.directory.sorting=false > hbase.cleaner.scan.dir.concurrent.size=0.75 > hbase.regionserver.hfilecleaner.small.thread.count=16 > hbase.regionserver.hfilecleaner.large.thread.count=8 > {noformat} > The files to evaluate are already passed in a List to > CleanerChore.checkAndDeleteFiles but it is converted to an Iterable to run > the checks on the configured cleaners. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27590) Change Iterable to List in CleanerChore
[ https://issues.apache.org/jira/browse/HBASE-27590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17680512#comment-17680512 ] Peter Somogyi commented on HBASE-27590: --- The attached flame-1.html shows that inside the SnapshotFileCache.getUnreferencedFiles most of the time is spent in S3 listing. > Change Iterable to List in CleanerChore > --- > > Key: HBASE-27590 > URL: https://issues.apache.org/jira/browse/HBASE-27590 > Project: HBase > Issue Type: Improvement >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Minor > Attachments: flame-1.html > > > The HFileCleaners can have low performance on large /archive area when used > with slow storage like S3. The snapshot write lock in SnapshotFileCache is > held while the file metadata is fetched from S3. Due to this even with > multiple cleaner threads only a single cleaner can effectively delete files > from the archive. > File metadata collection is performed before SnapshotHFileCleaner just by > changing the passed parameter type in FileCleanerDelegate from Iterable to > List. > Running with the below cleaner configurations I observed that the lock held > in SnapshotFileCache went down from 45000ms to 100ms when it was running for > 1000 files in a directory. The complete evaluation and deletion for this > folder took the same time but since the file metadata fetch from S3 was done > outside of the lock the multiple cleaner threads were able to run > concurrently. > {noformat} > hbase.cleaner.directory.sorting=false > hbase.cleaner.scan.dir.concurrent.size=0.75 > hbase.regionserver.hfilecleaner.small.thread.count=16 > hbase.regionserver.hfilecleaner.large.thread.count=8 > {noformat} > The files to evaluate are already passed in a List to > CleanerChore.checkAndDeleteFiles but it is converted to an Iterable to run > the checks on the configured cleaners. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27590) Change Iterable to List in CleanerChore
[ https://issues.apache.org/jira/browse/HBASE-27590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27590: -- Attachment: flame-1.html > Change Iterable to List in CleanerChore > --- > > Key: HBASE-27590 > URL: https://issues.apache.org/jira/browse/HBASE-27590 > Project: HBase > Issue Type: Improvement >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Minor > Attachments: flame-1.html > > > The HFileCleaners can have low performance on large /archive area when used > with slow storage like S3. The snapshot write lock in SnapshotFileCache is > held while the file metadata is fetched from S3. Due to this even with > multiple cleaner threads only a single cleaner can effectively delete files > from the archive. > File metadata collection is performed before SnapshotHFileCleaner just by > changing the passed parameter type in FileCleanerDelegate from Iterable to > List. > Running with the below cleaner configurations I observed that the lock held > in SnapshotFileCache went down from 45000ms to 100ms when it was running for > 1000 files in a directory. The complete evaluation and deletion for this > folder took the same time but since the file metadata fetch from S3 was done > outside of the lock the multiple cleaner threads were able to run > concurrently. > {noformat} > hbase.cleaner.directory.sorting=false > hbase.cleaner.scan.dir.concurrent.size=0.75 > hbase.regionserver.hfilecleaner.small.thread.count=16 > hbase.regionserver.hfilecleaner.large.thread.count=8 > {noformat} > The files to evaluate are already passed in a List to > CleanerChore.checkAndDeleteFiles but it is converted to an Iterable to run > the checks on the configured cleaners. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work started] (HBASE-27590) Change Iterable to List in CleanerChore
[ https://issues.apache.org/jira/browse/HBASE-27590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-27590 started by Peter Somogyi. - > Change Iterable to List in CleanerChore > --- > > Key: HBASE-27590 > URL: https://issues.apache.org/jira/browse/HBASE-27590 > Project: HBase > Issue Type: Improvement >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Minor > > The HFileCleaners can have low performance on large /archive area when used > with slow storage like S3. The snapshot write lock in SnapshotFileCache is > held while the file metadata is fetched from S3. Due to this even with > multiple cleaner threads only a single cleaner can effectively delete files > from the archive. > File metadata collection is performed before SnapshotHFileCleaner just by > changing the passed parameter type in FileCleanerDelegate from Iterable to > List. > Running with the below cleaner configurations I observed that the lock held > in SnapshotFileCache went down from 45000ms to 100ms when it was running for > 1000 files in a directory. The complete evaluation and deletion for this > folder took the same time but since the file metadata fetch from S3 was done > outside of the lock the multiple cleaner threads were able to run > concurrently. > {noformat} > hbase.cleaner.directory.sorting=false > hbase.cleaner.scan.dir.concurrent.size=0.75 > hbase.regionserver.hfilecleaner.small.thread.count=16 > hbase.regionserver.hfilecleaner.large.thread.count=8 > {noformat} > The files to evaluate are already passed in a List to > CleanerChore.checkAndDeleteFiles but it is converted to an Iterable to run > the checks on the configured cleaners. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27590) Change Iterable to List in CleanerChore
Peter Somogyi created HBASE-27590: - Summary: Change Iterable to List in CleanerChore Key: HBASE-27590 URL: https://issues.apache.org/jira/browse/HBASE-27590 Project: HBase Issue Type: Improvement Reporter: Peter Somogyi Assignee: Peter Somogyi The HFileCleaners can have low performance on large /archive area when used with slow storage like S3. The snapshot write lock in SnapshotFileCache is held while the file metadata is fetched from S3. Due to this even with multiple cleaner threads only a single cleaner can effectively delete files from the archive. File metadata collection is performed before SnapshotHFileCleaner just by changing the passed parameter type in FileCleanerDelegate from Iterable to List. Running with the below cleaner configurations I observed that the lock held in SnapshotFileCache went down from 45000ms to 100ms when it was running for 1000 files in a directory. The complete evaluation and deletion for this folder took the same time but since the file metadata fetch from S3 was done outside of the lock the multiple cleaner threads were able to run concurrently. {noformat} hbase.cleaner.directory.sorting=false hbase.cleaner.scan.dir.concurrent.size=0.75 hbase.regionserver.hfilecleaner.small.thread.count=16 hbase.regionserver.hfilecleaner.large.thread.count=8 {noformat} The files to evaluate are already passed in a List to CleanerChore.checkAndDeleteFiles but it is converted to an Iterable to run the checks on the configured cleaners. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27493) Allow namespace admins to clone snapshots created by them
[ https://issues.apache.org/jira/browse/HBASE-27493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27493. --- Fix Version/s: 2.6.0 3.0.0-alpha-4 Resolution: Fixed Merged to branch-2 and master. [~bszabolcs] can you fill the release notes? > Allow namespace admins to clone snapshots created by them > - > > Key: HBASE-27493 > URL: https://issues.apache.org/jira/browse/HBASE-27493 > Project: HBase > Issue Type: Improvement > Components: snapshots >Affects Versions: 3.0.0-alpha-3, 2.5.1 >Reporter: Szabolcs Bukros >Assignee: Szabolcs Bukros >Priority: Major > Fix For: 2.6.0, 3.0.0-alpha-4 > > > Creating a snapshot requires table admin permissions. But cloning it requires > global admin permissions unless the user owns the snapshot and wants to > recreate the original table the snapshot was based on using the same table > name. This puts unnecessary load on the few people having global admin > permissions. I would like to relax this rule a bit and allow the owner of the > snapshot to clone it into any namespace where they have admin permissions > regardless of the table name used. -- This message was sent by Atlassian Jira (v8.20.10#820010)
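The relaxed rule proposed in HBASE-27493 amounts to a small change in the access predicate. The sketch below models it with plain booleans; the names and the exact shape of the real `AccessController` check are not reproduced here, so treat this as an illustration of the rule, not HBase's implementation:

```java
// Sketch of the relaxed clone-snapshot check: a global admin may always
// clone; otherwise the snapshot owner may clone into any namespace where
// they hold admin, regardless of the target table name.
public class ClonePermissionSketch {

    static boolean canClone(String user, String snapshotOwner,
                            boolean isGlobalAdmin, boolean isTargetNamespaceAdmin) {
        if (isGlobalAdmin) return true;
        return user.equals(snapshotOwner) && isTargetNamespaceAdmin;
    }

    public static void main(String[] args) {
        System.out.println(canClone("alice", "alice", false, true));  // prints "true"
        System.out.println(canClone("bob", "alice", false, true));    // prints "false"
    }
}
```

Before the change, the non-owner/non-global path effectively also required the target table name to match the snapshot's original table; the relaxation drops that extra condition for owners with namespace admin.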
[jira] [Updated] (HBASE-27565) Make the initial corePoolSize configurable for ChoreService
[ https://issues.apache.org/jira/browse/HBASE-27565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27565: -- Release Note: Add 'hbase.choreservice.initial.pool.size' configuration property to set the initial number of threads for the ChoreService. > Make the initial corePoolSize configurable for ChoreService > --- > > Key: HBASE-27565 > URL: https://issues.apache.org/jira/browse/HBASE-27565 > Project: HBase > Issue Type: Improvement > Components: conf >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > > The initial corePoolSize for ChoreService is set to 1. The pool size is > increased when a scheduled task misses its start time. > On a cluster where the archive size is large, the HFileCleaner could run for > a very long time and block the rest of the chores from running. > By making the initial pool size configurable we could solve the bottleneck > caused by a long-running HFileCleaner chore. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27565) Make the initial corePoolSize configurable for ChoreService
[ https://issues.apache.org/jira/browse/HBASE-27565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27565: -- Fix Version/s: 2.6.0 3.0.0-alpha-4 2.4.16 2.5.3 Resolution: Fixed Status: Resolved (was: Patch Available) Merged to branch-2.4+. Thanks [~zhangduo] for the review. > Make the initial corePoolSize configurable for ChoreService > --- > > Key: HBASE-27565 > URL: https://issues.apache.org/jira/browse/HBASE-27565 > Project: HBase > Issue Type: Improvement > Components: conf >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > Fix For: 2.6.0, 3.0.0-alpha-4, 2.4.16, 2.5.3 > > > The initial corePoolSize for ChoreService is set to 1. The pool size is > increased when a scheduled task misses its start time. > On a cluster where the archive size is large, the HFileCleaner could run for > a very long time and block the rest of the chores from running. > By making the initial pool size configurable we could solve the bottleneck > caused by a long-running HFileCleaner chore. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27565) Make the initial corePoolSize configurable for ChoreService
[ https://issues.apache.org/jira/browse/HBASE-27565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated HBASE-27565: -- Status: Patch Available (was: Open) > Make the initial corePoolSize configurable for ChoreService > --- > > Key: HBASE-27565 > URL: https://issues.apache.org/jira/browse/HBASE-27565 > Project: HBase > Issue Type: Improvement > Components: conf >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > > The initial corePoolSize for ChoreService is set to 1. The pool size is > increased when a scheduled task misses its start time. > On a cluster where the archive size is large, the HFileCleaner could run for > a very long time and block the rest of the chores from running. > By making the initial pool size configurable we could solve the bottleneck > caused by a long-running HFileCleaner chore. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27565) Make the initial corePoolSize configurable for ChoreService
Peter Somogyi created HBASE-27565: - Summary: Make the initial corePoolSize configurable for ChoreService Key: HBASE-27565 URL: https://issues.apache.org/jira/browse/HBASE-27565 Project: HBase Issue Type: Improvement Components: conf Reporter: Peter Somogyi Assignee: Peter Somogyi The initial corePoolSize for ChoreService is set to 1. The pool size is increased when a scheduled task misses its start time. On a cluster where the archive size is large, the HFileCleaner could run for a very long time and block the rest of the chores from running. By making the initial pool size configurable we could solve the bottleneck caused by a long-running HFileCleaner chore. -- This message was sent by Atlassian Jira (v8.20.10#820010)
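The 'hbase.choreservice.initial.pool.size' property named in the release note above can be sketched with the JDK's own scheduler. `newChorePool` and the use of `Properties` are illustrative; HBase's ChoreService wraps its own executor, which this sketch does not reproduce. The idea is simply that a core pool of 1 serializes all chores behind the longest-running one, while a configurable initial size avoids that:

```java
import java.util.Properties;
import java.util.concurrent.ScheduledThreadPoolExecutor;

// Sketch: size the chore scheduler from configuration instead of a
// hard-coded 1, so one long-running chore (e.g. HFileCleaner on a large
// archive) cannot starve every other scheduled chore.
public class ChorePoolSketch {

    static ScheduledThreadPoolExecutor newChorePool(Properties conf) {
        int core = Integer.parseInt(
            conf.getProperty("hbase.choreservice.initial.pool.size", "1"));
        return new ScheduledThreadPoolExecutor(core);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("hbase.choreservice.initial.pool.size", "4");
        ScheduledThreadPoolExecutor pool = newChorePool(conf);
        System.out.println(pool.getCorePoolSize()); // prints "4"
        pool.shutdown();
    }
}
```

Defaulting to 1 preserves the old behavior; the issue also notes HBase already grows the pool when a chore misses its start time, so the new property only changes the starting point.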
[jira] [Resolved] (HBASE-27554) Test failures on branch-2.4 with corrupted exclude list
[ https://issues.apache.org/jira/browse/HBASE-27554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-27554. --- Resolution: Fixed Recent nightly tests were successful on branch-2.4 after the cleanup. > Test failures on branch-2.4 with corrupted exclude list > --- > > Key: HBASE-27554 > URL: https://issues.apache.org/jira/browse/HBASE-27554 > Project: HBase > Issue Type: Bug > Components: jenkins >Affects Versions: 2.4.16 >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > > Nightly builds and PRs on branch-2.4 are failing with an invalid exclude list. > Executed unit test command: > {code:java} > /opt/maven/bin/mvn --batch-mode > -Dmaven.repo.local=/home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-4944/yetus-m2/hbase-branch-2.4-patch-0 > --threads=4 > -Djava.io.tmpdir=/home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-4944/yetus-jdk8-hadoop2-check/src/target > -DHBasePatchProcess -PrunAllTests > -Dtest.exclude.pattern=**/replication.regionserver.TestMetaRegionReplicaReplicationEndpoint.java,**/client.TestMetaRegionLocationCache.java,**/master.balancer.TestStochasticLoadBalancerRegionReplicaWithRacks.java,**/replication.TestZKReplicationQueueStorageWARNING: > All illegal access operations will be denied in a future > release.java,**/replication.regionserver.TestBasicWALEntryStreamFSHLog.java > -Dsurefire.firstPartForkCount=0.5C -Dsurefire.secondPartForkCount=0.5C clean > test -fae {code} > The latest exclude list contains "WARNING: All illegal access operations will > be denied in a future release" and maven treats this as a new parameter. As a > result unit tests are failing on CI that rely on the exclude list. 
> [https://ci-hbase.apache.org/job/HBase-Find-Flaky-Tests/job/branch-2.4/lastSuccessfulBuild/artifact/output/excludes/*view*/] > {noformat} > **/replication.regionserver.TestMetaRegionReplicaReplicationEndpoint.java,**/client.TestMetaRegionLocationCache.java,**/master.balancer.TestStochasticLoadBalancerRegionReplicaWithRacks.java,**/replication.TestZKReplicationQueueStorageWARNING: > All illegal access operations will be denied in a future > release.java,**/replication.regionserver.TestBasicWALEntryStreamFSHLog.java > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
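A defensive guard against the corruption described in HBASE-27554 can be sketched in plain Java: drop any comma-separated entry that does not look like a surefire exclude pattern (here, `**/...java` with no embedded whitespace) before the list reaches Maven. `sanitize` is a hypothetical helper, not part of the actual CI scripts:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

// Sketch: filter stray JVM warning text (which contains spaces) out of a
// comma-separated exclude list so Maven never sees it as a parameter.
public class ExcludeListSanitizer {

    static String sanitize(String raw) {
        return Arrays.stream(raw.split(","))
            .map(String::trim)
            .filter(s -> !s.contains(" "))          // warning text has spaces
            .filter(s -> s.startsWith("**/") && s.endsWith(".java"))
            .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        String raw = "**/a.TestA.java,"
            + "**/b.TestBWARNING: All illegal access operations "
            + "will be denied in a future release.java,"
            + "**/c.TestC.java";
        System.out.println(sanitize(raw));
    }
}
```

Note the corrupted entry is dropped entirely rather than repaired; for a flaky-test exclude list that is the safe direction, since a missing exclude only re-runs a test while a malformed one breaks the whole build.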
[jira] [Resolved] (HBASE-23340) hmaster /hbase/replication/rs session expired (hbase replication default value is true, we don't use ) causes logcleaner can not clean oldWALs, which results in oldW
[ https://issues.apache.org/jira/browse/HBASE-23340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi resolved HBASE-23340. --- Resolution: Fixed Merged the subtask to branch-2.4. > hmaster /hbase/replication/rs session expired (hbase replication default > value is true, we don't use ) causes logcleaner can not clean oldWALs, which > results in oldWALs too large (more than 2TB) > - > > Key: HBASE-23340 > URL: https://issues.apache.org/jira/browse/HBASE-23340 > Project: HBase > Issue Type: Improvement > Components: master >Affects Versions: 3.0.0-alpha-1, 2.2.3 >Reporter: jackylau >Assignee: Bo Cui >Priority: Major > Fix For: 2.5.0, 3.0.0-alpha-1 > > Attachments: Snipaste_2019-11-21_10-39-25.png, > Snipaste_2019-11-21_14-10-36.png > > > hmaster /hbase/replication/rs session expired (hbase replication default > value is true, we don't use ) causes logcleaner can not clean oldWALs, which > results in oldWALs too large (more than 2TB). > !Snipaste_2019-11-21_10-39-25.png! > > !Snipaste_2019-11-21_14-10-36.png! > > we can solve it in the following ways: > 1) increase the session timeout (but I think it is not a good idea, because we > do not know how long a timeout is suitable) > 2) disable hbase replication. That is not a good idea either, when our users use > this feature > 3) add a retry count: for example, when it has already happened three > times, stop the ReplicationLogCleaner and SnapShotCleaner > Those are all my ideas; I do not know if they are suitable. If they are, could > I commit a PR? > Does anyone have a good idea? -- This message was sent by Atlassian Jira (v8.20.10#820010)
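Idea 3) from the issue above, a retry budget that stops the cleaner after repeated failures, can be sketched in plain Java. `CleanerRetryGuard` and its methods are hypothetical names; the real fix lives in HBase's replication log cleaner, which this JDK-only model does not reproduce:

```java
// Sketch: after N consecutive failures (e.g. ZooKeeper session problems),
// stop the cleaner instead of retrying forever, per idea 3) in the issue.
// A success resets the failure counter.
public class CleanerRetryGuard {

    private final int maxConsecutiveFailures;
    private int consecutiveFailures = 0;
    private boolean enabled = true;

    CleanerRetryGuard(int maxConsecutiveFailures) {
        this.maxConsecutiveFailures = maxConsecutiveFailures;
    }

    // Record the outcome of one cleaner run; disable once the budget is spent.
    void record(boolean succeeded) {
        if (succeeded) {
            consecutiveFailures = 0;
            return;
        }
        if (++consecutiveFailures >= maxConsecutiveFailures) {
            enabled = false;
        }
    }

    boolean isEnabled() { return enabled; }

    public static void main(String[] args) {
        CleanerRetryGuard guard = new CleanerRetryGuard(3);
        guard.record(false);
        guard.record(false);
        System.out.println(guard.isEnabled()); // prints "true" (2 failures < 3)
        guard.record(false);
        System.out.println(guard.isEnabled()); // prints "false" (3rd failure)
    }
}
```

Whether stopping the cleaner is the right trade-off (retained oldWALs versus silently spinning) is exactly the question the issue raises; the sketch only shows the counting mechanics.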