[GitHub] [hbase] Apache9 commented on pull request #2113: HBASE-24286: HMaster won't become healthy after after cloning or crea…
Apache9 commented on pull request #2113:
URL: https://github.com/apache/hbase/pull/2113#issuecomment-662844100

I haven't gotten the point of 'Unknown Servers' here. When setting up a region server, we will create a WAL directory for it. And when restarting the master, we will pull the current set of region servers from zookeeper and compare it with the WAL directories listed from the WAL filesystem; if there are directories on the WAL filesystem but not on zookeeper, we will schedule SCPs for them. So what is the real problem here? Does waiting for the namespace table to come online prevent us from scheduling SCPs for dead servers? Or did we just lose the WAL directories for the region servers? If the latter, will we lose data?

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
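The startup check Apache9 describes can be sketched as a set difference: any server with a WAL directory on the WAL filesystem but no registration in zookeeper is treated as dead and gets a ServerCrashProcedure scheduled. This is an illustrative sketch only; the class and method names (`WalDirCheck`, `findServersToExpire`) are hypothetical, not the real HMaster API.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

public class WalDirCheck {
  // Servers that left a WAL directory behind but are not registered in
  // zookeeper: these are the ones the master should schedule SCPs for.
  static Set<String> findServersToExpire(Set<String> walDirServers,
                                         Set<String> liveServersFromZk) {
    return walDirServers.stream()
        .filter(s -> !liveServersFromZk.contains(s))
        .collect(Collectors.toSet());
  }

  public static void main(String[] args) {
    Set<String> walDirs =
        new HashSet<>(Arrays.asList("rs1,16020,1", "rs2,16020,2"));
    Set<String> live = Collections.singleton("rs1,16020,1");
    // rs2 has a WAL dir but is gone from zookeeper -> needs an SCP
    System.out.println(findServersToExpire(walDirs, live));
  }
}
```

If both sets agree, the result is empty and no SCP is needed; the problem case in this thread is when the WAL directories themselves are missing, so the difference is empty even though servers died.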
[GitHub] [hbase] WenFeiYi commented on a change in pull request #2021: HBASE-24665 all wal of RegionGroupingProvider together roll
WenFeiYi commented on a change in pull request #2021:
URL: https://github.com/apache/hbase/pull/2021#discussion_r459238534

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractWALRoller.java
## @@ -58,31 +58,31 @@
   protected static final String WAL_ROLL_PERIOD_KEY = "hbase.regionserver.logroll.period";
-  protected final ConcurrentMap<WAL, Boolean> walNeedsRoll = new ConcurrentHashMap<>();
+  protected final ConcurrentMap<WAL, RollController> wals = new ConcurrentHashMap<>();
   protected final T abortable;
-  private volatile long lastRollTime = System.currentTimeMillis();
   // Period to roll log.
   private final long rollPeriod;
   private final int threadWakeFrequency;
   // The interval to check low replication on hlog's pipeline
-  private long checkLowReplicationInterval;
+  private final long checkLowReplicationInterval;
   private volatile boolean running = true;

   public void addWAL(WAL wal) {
     // check without lock first
-    if (walNeedsRoll.containsKey(wal)) {
+    if (wals.containsKey(wal)) {
       return;
     }
     // this is to avoid race between addWAL and requestRollAll.
     synchronized (this) {
-      if (walNeedsRoll.putIfAbsent(wal, Boolean.FALSE) == null) {
+      if (wals.putIfAbsent(wal, new RollController(wal)) == null) {
         wal.registerWALActionsListener(new WALActionsListener() {
           @Override
           public void logRollRequested(WALActionsListener.RollRequestReason reason) {
             // TODO logs will contend with each other here, replace with e.g. DelayedQueue

Review comment: @ramkrish86 The purpose of this PR is to make each WAL roll separately when using multiwal. Thanks for the review.
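The change under review replaces a single `Map<WAL, Boolean>` flag, which effectively made every WAL roll when any one requested it, with a per-WAL controller. A minimal sketch of that idea follows; it is simplified for illustration and only loosely follows the real `AbstractWALRoller`/`RollController` code (WALs are keyed by name here, and the roller loop is reduced to one method).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class RollSketch {
  // One controller per WAL: each tracks its own roll request and its own
  // last roll time, so rolling one WAL no longer forces the others to roll.
  static class RollController {
    volatile boolean rollRequested;
    volatile long lastRollTime = System.currentTimeMillis();

    void requestRoll() { rollRequested = true; }

    boolean needsRoll(long now, long rollPeriod) {
      // roll if explicitly requested, or if this WAL's period elapsed
      return rollRequested || now - lastRollTime > rollPeriod;
    }

    void rollDone(long now) { rollRequested = false; lastRollTime = now; }
  }

  final ConcurrentMap<String, RollController> wals = new ConcurrentHashMap<>();

  void addWAL(String walName) {
    wals.putIfAbsent(walName, new RollController());
  }

  // Only WALs whose own controller says so are selected for rolling.
  List<String> walsToRoll(long now, long rollPeriod) {
    List<String> out = new ArrayList<>();
    wals.forEach((name, rc) -> {
      if (rc.needsRoll(now, rollPeriod)) {
        out.add(name);
      }
    });
    return out;
  }
}
```

With the old boolean map, `requestRollAll` was the only granularity available; with a per-WAL controller, a roll request on one WAL of a `RegionGroupingProvider` leaves the others untouched.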
[GitHub] [hbase] Apache-HBase commented on pull request #2125: HBASE-24713 RS startup with FSHLog throws NPE after HBASE-21751
Apache-HBase commented on pull request #2125:
URL: https://github.com/apache/hbase/pull/2125#issuecomment-662840010

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 26s | Docker mode activated. |
| -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 1s | master passed |
| +1 :green_heart: | compile | 0m 55s | master passed |
| +1 :green_heart: | shadedjars | 6m 1s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 37s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 45s | the patch passed |
| +1 :green_heart: | compile | 0m 55s | the patch passed |
| +1 :green_heart: | javac | 0m 55s | the patch passed |
| +1 :green_heart: | shadedjars | 6m 1s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 36s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 43m 10s | hbase-server in the patch failed. |
| | | | 67m 39s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2125/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2125 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux ceef5e5fcdd9 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 8191fbdd7d |
| Default Java | 1.8.0_232 |
| unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2125/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2125/1/testReport/ |
| Max. process+thread count | 441 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2125/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2125: HBASE-24713 RS startup with FSHLog throws NPE after HBASE-21751
Apache-HBase commented on pull request #2125:
URL: https://github.com/apache/hbase/pull/2125#issuecomment-662838726

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 4m 22s | Docker mode activated. |
| -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 34s | master passed |
| +1 :green_heart: | compile | 1m 9s | master passed |
| +1 :green_heart: | shadedjars | 6m 11s | branch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 43s | hbase-server in master failed. |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 4s | the patch passed |
| +1 :green_heart: | compile | 1m 4s | the patch passed |
| +1 :green_heart: | javac | 1m 4s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 45s | patch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 40s | hbase-server in the patch failed. |
||| _ Other Tests _ |
| -1 :x: | unit | 32m 51s | hbase-server in the patch failed. |
| | | | 62m 42s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2125/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2125 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 529b39821bc0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 8191fbdd7d |
| Default Java | 2020-01-14 |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2125/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2125/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
| unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2125/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2125/1/testReport/ |
| Max. process+thread count | 747 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2125/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] WenFeiYi commented on a change in pull request #2021: HBASE-24665 all wal of RegionGroupingProvider together roll
WenFeiYi commented on a change in pull request #2021:
URL: https://github.com/apache/hbase/pull/2021#discussion_r459235284

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractWALRoller.java
## @@ -58,31 +58,31 @@
   protected static final String WAL_ROLL_PERIOD_KEY = "hbase.regionserver.logroll.period";
-  protected final ConcurrentMap<WAL, Boolean> walNeedsRoll = new ConcurrentHashMap<>();
+  protected final ConcurrentMap<WAL, RollController> wals = new ConcurrentHashMap<>();

Review comment: Would rollWals be more suitable?
[GitHub] [hbase] WenFeiYi commented on a change in pull request #2021: HBASE-24665 all wal of RegionGroupingProvider together roll
WenFeiYi commented on a change in pull request #2021:
URL: https://github.com/apache/hbase/pull/2021#discussion_r459233528

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractWALRoller.java
## @@ -58,31 +58,31 @@
   protected static final String WAL_ROLL_PERIOD_KEY = "hbase.regionserver.logroll.period";
-  protected final ConcurrentMap<WAL, Boolean> walNeedsRoll = new ConcurrentHashMap<>();
+  protected final ConcurrentMap<WAL, RollController> wals = new ConcurrentHashMap<>();
   protected final T abortable;
-  private volatile long lastRollTime = System.currentTimeMillis();
   // Period to roll log.
   private final long rollPeriod;
   private final int threadWakeFrequency;
   // The interval to check low replication on hlog's pipeline
-  private long checkLowReplicationInterval;
+  private final long checkLowReplicationInterval;
   private volatile boolean running = true;

   public void addWAL(WAL wal) {
     // check without lock first
-    if (walNeedsRoll.containsKey(wal)) {
+    if (wals.containsKey(wal)) {
       return;
     }
     // this is to avoid race between addWAL and requestRollAll.
     synchronized (this) {
-      if (walNeedsRoll.putIfAbsent(wal, Boolean.FALSE) == null) {
+      if (wals.putIfAbsent(wal, new RollController(wal)) == null) {
        wal.registerWALActionsListener(new WALActionsListener() {
           @Override
           public void logRollRequested(WALActionsListener.RollRequestReason reason) {
             // TODO logs will contend with each other here, replace with e.g. DelayedQueue

Review comment: No, the TODO is pre-existing.
[GitHub] [hbase] Apache-HBase commented on pull request #2125: HBASE-24713 RS startup with FSHLog throws NPE after HBASE-21751
Apache-HBase commented on pull request #2125:
URL: https://github.com/apache/hbase/pull/2125#issuecomment-662833671

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 33s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 47s | master passed |
| +1 :green_heart: | checkstyle | 1m 31s | master passed |
| +1 :green_heart: | spotbugs | 2m 47s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 21s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 18s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 15m 3s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 2m 46s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 13s | The patch does not generate ASF License warnings. |
| | | | 43m 21s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2125/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2125 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
| uname | Linux 9f62c4c562ae 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 8191fbdd7d |
| Max. process+thread count | 84 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2125/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] Apache-HBase commented on pull request #2113: HBASE-24286: HMaster won't become healthy after after cloning or crea…
Apache-HBase commented on pull request #2113:
URL: https://github.com/apache/hbase/pull/2113#issuecomment-662832516

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 12s | Docker mode activated. |
| -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 37s | branch-2 passed |
| +1 :green_heart: | compile | 1m 35s | branch-2 passed |
| +1 :green_heart: | shadedjars | 6m 25s | branch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 18s | hbase-common in branch-2 failed. |
| -0 :warning: | javadoc | 0m 41s | hbase-server in branch-2 failed. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 19s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 25s | the patch passed |
| +1 :green_heart: | compile | 1m 35s | the patch passed |
| +1 :green_heart: | javac | 1m 35s | the patch passed |
| +1 :green_heart: | shadedjars | 6m 23s | patch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 16s | hbase-common in the patch failed. |
| -0 :warning: | javadoc | 0m 41s | hbase-server in the patch failed. |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 57s | hbase-common in the patch passed. |
| +1 :green_heart: | unit | 205m 8s | hbase-server in the patch passed. |
| | | | 237m 54s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2113 |
| JIRA Issue | HBASE-24286 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 72870c2f59a6 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / ce4e692699 |
| Default Java | 2020-01-14 |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/3/testReport/ |
| Max. process+thread count | 2618 (vs. ulimit of 12500) |
| modules | C: hbase-common hbase-server U: . |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/3/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] ramkrish86 commented on a change in pull request #2021: HBASE-24665 all wal of RegionGroupingProvider together roll
ramkrish86 commented on a change in pull request #2021:
URL: https://github.com/apache/hbase/pull/2021#discussion_r459224188

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractWALRoller.java
## @@ -58,31 +58,31 @@
   protected static final String WAL_ROLL_PERIOD_KEY = "hbase.regionserver.logroll.period";
-  protected final ConcurrentMap<WAL, Boolean> walNeedsRoll = new ConcurrentHashMap<>();
+  protected final ConcurrentMap<WAL, RollController> wals = new ConcurrentHashMap<>();
   protected final T abortable;
-  private volatile long lastRollTime = System.currentTimeMillis();
   // Period to roll log.
   private final long rollPeriod;
   private final int threadWakeFrequency;
   // The interval to check low replication on hlog's pipeline
-  private long checkLowReplicationInterval;
+  private final long checkLowReplicationInterval;
   private volatile boolean running = true;

   public void addWAL(WAL wal) {
     // check without lock first
-    if (walNeedsRoll.containsKey(wal)) {
+    if (wals.containsKey(wal)) {
       return;
     }
     // this is to avoid race between addWAL and requestRollAll.
     synchronized (this) {
-      if (walNeedsRoll.putIfAbsent(wal, Boolean.FALSE) == null) {
+      if (wals.putIfAbsent(wal, new RollController(wal)) == null) {
        wal.registerWALActionsListener(new WALActionsListener() {
           @Override
           public void logRollRequested(WALActionsListener.RollRequestReason reason) {
             // TODO logs will contend with each other here, replace with e.g. DelayedQueue

Review comment: Is this talking about what this PR is trying to do?
[GitHub] [hbase] ramkrish86 commented on a change in pull request #2021: HBASE-24665 all wal of RegionGroupingProvider together roll
ramkrish86 commented on a change in pull request #2021:
URL: https://github.com/apache/hbase/pull/2021#discussion_r459223780

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractWALRoller.java
## @@ -58,31 +58,31 @@
   protected static final String WAL_ROLL_PERIOD_KEY = "hbase.regionserver.logroll.period";
-  protected final ConcurrentMap<WAL, Boolean> walNeedsRoll = new ConcurrentHashMap<>();
+  protected final ConcurrentMap<WAL, RollController> wals = new ConcurrentHashMap<>();

Review comment: How about walRolls instead of wals?
[GitHub] [hbase] gkanade opened a new pull request #2125: null check for writer if not initialized yet during syncrunner run
gkanade opened a new pull request #2125:
URL: https://github.com/apache/hbase/pull/2125
[GitHub] [hbase] Apache-HBase commented on pull request #2124: HBASE-24743 Reject to add a peer which replicate to itself earlier
Apache-HBase commented on pull request #2124:
URL: https://github.com/apache/hbase/pull/2124#issuecomment-662818684

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 2m 34s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 50s | branch-2 passed |
| +1 :green_heart: | checkstyle | 1m 27s | branch-2 passed |
| +1 :green_heart: | spotbugs | 2m 8s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 35s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 24s | the patch passed |
| -0 :warning: | rubocop | 0m 6s | The patch generated 60 new + 295 unchanged - 16 fixed = 355 total (was 311) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 12m 36s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 2m 14s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | | 38m 43s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2124/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2124 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle rubocop |
| uname | Linux df794d950834 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / ce4e692699 |
| rubocop | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2124/1/artifact/yetus-general-check/output/diff-patch-rubocop.txt |
| Max. process+thread count | 84 (vs. ulimit of 12500) |
| modules | C: hbase-server hbase-shell U: . |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2124/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 rubocop=0.80.0 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] Apache-HBase commented on pull request #2113: HBASE-24286: HMaster won't become healthy after after cloning or crea…
Apache-HBase commented on pull request #2113:
URL: https://github.com/apache/hbase/pull/2113#issuecomment-662816300

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 36s | Docker mode activated. |
| -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 36s | branch-2 passed |
| +1 :green_heart: | compile | 1m 18s | branch-2 passed |
| +1 :green_heart: | shadedjars | 5m 7s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 0s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 14s | the patch passed |
| +1 :green_heart: | compile | 1m 17s | the patch passed |
| +1 :green_heart: | javac | 1m 17s | the patch passed |
| +1 :green_heart: | shadedjars | 4m 59s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 56s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 23s | hbase-common in the patch passed. |
| +1 :green_heart: | unit | 138m 25s | hbase-server in the patch passed. |
| | | | 164m 54s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/3/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2113 |
| JIRA Issue | HBASE-24286 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux fa3cc9c7ad94 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / ce4e692699 |
| Default Java | 1.8.0_232 |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/3/testReport/ |
| Max. process+thread count | 4227 (vs. ulimit of 12500) |
| modules | C: hbase-common hbase-server U: . |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/3/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[jira] [Commented] (HBASE-24665) all wal of RegionGroupingProvider together roll
[ https://issues.apache.org/jira/browse/HBASE-24665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163212#comment-17163212 ]

wenfeiyi666 commented on HBASE-24665:

[~zhangduo] and [~zghao], please review, thanks.

> all wal of RegionGroupingProvider together roll
> ---
>
> Key: HBASE-24665
> URL: https://issues.apache.org/jira/browse/HBASE-24665
> Project: HBase
> Issue Type: Bug
> Affects Versions: 2.3.0, master, 2.1.10, 1.4.14, 2.2.6
> Reporter: wenfeiyi666
> Assignee: wenfeiyi666
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.1, 2.1.10, 1.4.14, 2.2.7
>
> When using multiwal, if any one WAL requests a roll, all WALs are rolled together.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2077: HBASE-24684 Fetch ReplicationSink servers list from HMaster instead o…
Apache-HBase commented on pull request #2077:
URL: https://github.com/apache/hbase/pull/2077#issuecomment-662813745

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 22s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +0 :ok: | prototool | 0m 1s | prototool was not available. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ HBASE-24666 Compile Tests _ |
| +0 :ok: | mvndep | 0m 37s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 26s | HBASE-24666 passed |
| +1 :green_heart: | checkstyle | 2m 46s | HBASE-24666 passed |
| +1 :green_heart: | spotbugs | 7m 23s | HBASE-24666 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 23s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 11s | The patch passed checkstyle in hbase-protocol-shaded |
| +1 :green_heart: | checkstyle | 0m 27s | The patch passed checkstyle in hbase-client |
| +1 :green_heart: | checkstyle | 1m 8s | hbase-server: The patch generated 0 new + 188 unchanged - 2 fixed = 188 total (was 190) |
| +1 :green_heart: | checkstyle | 0m 56s | The patch passed checkstyle in hbase-thrift |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 11m 58s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | hbaseprotoc | 2m 30s | the patch passed |
| +1 :green_heart: | spotbugs | 8m 6s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 49s | The patch does not generate ASF License warnings. |
| | | | 53m 34s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2077/9/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2077 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle cc hbaseprotoc prototool |
| uname | Linux 5aa09c5cbc02 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | HBASE-24666 / fae9f0cd51 |
| Max. process+thread count | 94 (vs. ulimit of 12500) |
| modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift U: . |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2077/9/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[jira] [Commented] (HBASE-24526) Deadlock executing assign meta procedure
[ https://issues.apache.org/jira/browse/HBASE-24526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163209#comment-17163209 ] Michael Stack commented on HBASE-24526: --- Update: I've put aside trying to find the issue we saw on the RegionServer side which we originally thought a deadlock; I cannot reproduce it after multiple runs. Hopefully next time it happens we have enough debug in place. The notes above [~zhangduo] are great. Hoist them out into new Pv2 Scheduling improvements issue? I'm afraid they'll be lost in the shuffle if left deep down here. Could include working on stuff like "It is not easy to fix since the MasterProcedureScheduler is really really complicated...", quoting you from HBASE-19976. > Deadlock executing assign meta procedure > > > Key: HBASE-24526 > URL: https://issues.apache.org/jira/browse/HBASE-24526 > Project: HBase > Issue Type: Bug > Components: proc-v2, Region Assignment >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Priority: Critical > > I have what appears to be a deadlock while assigning meta. During recovery, > master creates the assign procedure for meta, and immediately marks meta as > assigned in zookeeper. It then creates the subprocedure to open meta on the > target region. However, the PEWorker pool is full of procedures that are > stuck, I think because their calls to update meta are going nowhere. For what > it's worth, the balancer is running concurrently, and has calculated a plan > size of 41. 
> From the master log, > {noformat} > 2020-06-06 00:34:07,314 INFO > org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure: > Starting pid=17802, ppid=17801, > state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; > TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; > state=OPEN, location=null; forceNewPlan=true, retain=false > 2020-06-06 00:34:07,465 INFO > org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta > (replicaId=0) location in ZooKeeper as > hbasedn139.example.com,16020,1591403576247 > 2020-06-06 00:34:07,466 INFO > org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized > subprocedures=[{pid=17803, ppid=17802, state=RUNNABLE; > org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}] > {noformat} > {{pid=17803}} is not mentioned again. hbasedn139 never receives an > {{openRegion}} RPC. > Meanwhile, additional procedures are scheduled and picked up by workers, each > getting "stuck". I see log lines for all 16 PEWorker threads, saying that > they are stuck. > {noformat} > 2020-06-06 00:34:07,961 INFO > org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: Took xlock > for pid=17804, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; > TransitRegionStateProcedure table=IntegrationTestBigLinkedList, > region=54f4f6c0e921e6d25e6043cba79c09aa, REOPEN/MOVE > 2020-06-06 00:34:07,961 INFO > org.apache.hadoop.hbase.master.assignment.RegionStateStore: pid=17804 > updating hbase:meta row=54f4f6c0e921e6d25e6043cba79c09aa, > regionState=CLOSING, regionLocation=hbasedn046.example.com,16020,1591402383956 > ... > 2020-06-06 00:34:22,295 WARN > org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck > PEWorker-16(pid=17804), run time 14.3340 sec > ... > 2020-06-06 00:34:27,295 WARN > org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck > PEWorker-16(pid=17804), run time 19.3340 sec > ... 
> {noformat} > The cluster stays in this state, with PEWorker thread stuck for upwards of 15 > minutes. Eventually master starts logging > {noformat} > 2020-06-06 00:50:18,033 INFO > org.apache.hadoop.hbase.client.RpcRetryingCallerImpl: Call exception, > tries=30, retries=31, started=970072 ms ago, cancelled=false, msg=Call queue > is full on hbasedn139.example.com,16020,1591403576247, too many items queued > ?, details=row > 'IntegrationTestBigLinkedList,,1591398987965.54f4f6c0e921e6d25e6043cba79c09aa.' > on table 'hbase:meta' at region=hbase:meta,,1. > 1588230740, hostname=hbasedn139.example.com,16020,1591403576247, seqNum=-1, > see https://s.apache.org/timeout > {noformat} > The master never recovers on its own. > I'm not sure how common this condition might be. This popped after about 20 > total hours of running ITBLL with ServerKillingMonkey. -- This message was sent by Atlassian Jira (v8.3.4#803005)
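The stuck-worker state described in this report (every PEWorker occupied by a procedure that can only finish after a still-queued procedure runs) is a classic executor starvation pattern. Below is a minimal sketch in plain `java.util.concurrent`; the class and method names are invented for illustration and this is not HBase's ProcedureExecutor code:

```java
import java.util.concurrent.*;

// Invented demo class, not HBase code: models the report above, where every
// PEWorker is held by a procedure that needs meta online, while the one task
// able to bring meta online (the open-meta subprocedure) waits in the queue.
public class WorkerStarvationDemo {

    public static boolean starves() {
        ExecutorService workers = Executors.newFixedThreadPool(2); // the "PEWorker" pool
        CountDownLatch metaOnline = new CountDownLatch(1);
        // Two "stuck" procedures: each blocks until meta is online.
        for (int i = 0; i < 2; i++) {
            workers.submit(() -> {
                try { metaOnline.await(); } catch (InterruptedException e) { /* demo only */ }
            });
        }
        // The task that would bring meta online is queued behind them.
        Future<?> openMeta = workers.submit(metaOnline::countDown);
        boolean stuck = false;
        try {
            openMeta.get(500, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            stuck = true; // never scheduled: the pool has starved itself
        } catch (InterruptedException | ExecutionException e) {
            // ignored in this sketch
        }
        metaOnline.countDown(); // release so the JVM can exit cleanly
        workers.shutdown();
        return stuck;
    }

    public static void main(String[] args) {
        System.out.println(starves()); // true
    }
}
```

The real fix space differs (priority scheduling, reserved workers, or not blocking inside procedures), but the shape of the hang matches the log excerpts above.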
[GitHub] [hbase] Apache-HBase commented on pull request #2122: HBASE-24743 Reject to add a peer which replicate to itself earlier
Apache-HBase commented on pull request #2122: URL: https://github.com/apache/hbase/pull/2122#issuecomment-662811987 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 39s | master passed | | +1 :green_heart: | checkstyle | 1m 22s | master passed | | +1 :green_heart: | spotbugs | 1m 58s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 25s | the patch passed | | +1 :green_heart: | checkstyle | 1m 19s | the patch passed | | -0 :warning: | rubocop | 0m 5s | The patch generated 66 new + 312 unchanged - 19 fixed = 378 total (was 331) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 16s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 2m 7s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 26s | The patch does not generate ASF License warnings. 
| | | | 34m 10s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2122/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2122 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle rubocop | | uname | Linux d2599250b1be 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8191fbdd7d | | rubocop | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2122/1/artifact/yetus-general-check/output/diff-patch-rubocop.txt | | Max. process+thread count | 94 (vs. ulimit of 12500) | | modules | C: hbase-server hbase-shell U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2122/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 rubocop=0.80.0 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2123: HBASE-24738 [Shell] processlist command fails with ERROR: Unexpected end of file from server when SSL enabled
Apache-HBase commented on pull request #2123: URL: https://github.com/apache/hbase/pull/2123#issuecomment-662811329 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 3m 5s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 48s | master passed | | +1 :green_heart: | javadoc | 0m 14s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 29s | the patch passed | | +1 :green_heart: | javadoc | 0m 12s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 6m 53s | hbase-shell in the patch passed. | | | | 20m 44s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2123/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2123 | | Optional Tests | javac javadoc unit | | uname | Linux 7ba487c079cf 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8191fbdd7d | | Default Java | 2020-01-14 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2123/1/testReport/ | | Max. process+thread count | 1546 (vs. ulimit of 12500) | | modules | C: hbase-shell U: hbase-shell | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2123/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
[GitHub] [hbase] Apache-HBase commented on pull request #2123: HBASE-24738 [Shell] processlist command fails with ERROR: Unexpected end of file from server when SSL enabled
Apache-HBase commented on pull request #2123: URL: https://github.com/apache/hbase/pull/2123#issuecomment-662808030 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 26s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | ||| _ Patch Compile Tests _ | | -0 :warning: | rubocop | 0m 4s | The patch generated 8 new + 73 unchanged - 2 fixed = 81 total (was 75) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 13s | The patch does not generate ASF License warnings. | | | | 3m 4s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2123/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2123 | | Optional Tests | dupname asflicense rubocop | | uname | Linux 7462c4b59f2d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8191fbdd7d | | rubocop | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2123/1/artifact/yetus-general-check/output/diff-patch-rubocop.txt | | Max. process+thread count | 51 (vs. ulimit of 12500) | | modules | C: hbase-shell U: hbase-shell | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2123/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) rubocop=0.80.0 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
[GitHub] [hbase] Apache-HBase commented on pull request #2123: HBASE-24738 [Shell] processlist command fails with ERROR: Unexpected end of file from server when SSL enabled
Apache-HBase commented on pull request #2123: URL: https://github.com/apache/hbase/pull/2123#issuecomment-662811010 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 39s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 50s | master passed | | +1 :green_heart: | javadoc | 0m 18s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 33s | the patch passed | | +1 :green_heart: | javadoc | 0m 15s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 7m 5s | hbase-shell in the patch passed. | | | | 18m 50s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2123/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2123 | | Optional Tests | javac javadoc unit | | uname | Linux bb7bf3ad7413 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8191fbdd7d | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2123/1/testReport/ | | Max. process+thread count | 2022 (vs. ulimit of 12500) | | modules | C: hbase-shell U: hbase-shell | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2123/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
[jira] [Comment Edited] (HBASE-24632) Enable procedure-based log splitting as default in hbase3
[ https://issues.apache.org/jira/browse/HBASE-24632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163200#comment-17163200 ] Michael Stack edited comment on HBASE-24632 at 7/23/20, 4:00 AM: - I was going to push this as the default in hbase-2.4 and on hbase3 in the morning unless there are objections. I've been running it a while and it's nice... There is a procedure per WAL split, and ServerCrashProcedure doesn't block and wait till ALL WALs are split before it can move forward; now it schedules all the WAL splits and then suspends itself, freeing up the ProcedureExecutor to do other work. was (Author: stack): I was going to push this as the default in hbase-2.4 and on hbase3 in the morning unless there are objections. I've been running it a while and it's nice... There is a procedure per WAL split, and ServerCrashProcedure doesn't block and wait till ALL WALs are split before it can move forward; now it schedules all the WAL splits and then suspends itself, freeing up the ProcedureExecutor. > Enable procedure-based log splitting as default in hbase3 > - > > Key: HBASE-24632 > URL: https://issues.apache.org/jira/browse/HBASE-24632 > Project: HBase > Issue Type: Sub-task > Components: wal >Reporter: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > Means changing this value in HConstants to false: >public static final boolean DEFAULT_HBASE_SPLIT_COORDINATED_BY_ZK = true; > Should probably also deprecate the current zk distributed split too so we can > clear out those classes too.
[jira] [Commented] (HBASE-24632) Enable procedure-based log splitting as default in hbase3
[ https://issues.apache.org/jira/browse/HBASE-24632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163200#comment-17163200 ] Michael Stack commented on HBASE-24632: --- I was going to push this as the default in hbase-2.4 and on hbase3 in the morning unless there are objections. I've been running it a while and it's nice... There is a procedure per WAL split, and ServerCrashProcedure doesn't block and wait till ALL WALs are split before it can move forward; now it schedules all the WAL splits and then suspends itself, freeing up the ProcedureExecutor. > Enable procedure-based log splitting as default in hbase3 > - > > Key: HBASE-24632 > URL: https://issues.apache.org/jira/browse/HBASE-24632 > Project: HBase > Issue Type: Sub-task > Components: wal >Reporter: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > Means changing this value in HConstants to false: >public static final boolean DEFAULT_HBASE_SPLIT_COORDINATED_BY_ZK = true; > Should probably also deprecate the current zk distributed split too so we can > clear out those classes too.
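The behavior described above (the crash procedure scheduling one WAL-split child per WAL and suspending, instead of holding its worker until every split finishes) can be sketched with a toy single-threaded scheduler. This is a hedged illustration in plain Java; `SplitScheduler` and its `Procedure` interface are invented names, not HBase's actual Pv2 classes:

```java
import java.util.*;

// Invented names, not HBase's Pv2 classes: a toy scheduler where a parent
// procedure returns its children and yields, instead of blocking its worker
// until all children finish (the change described for ServerCrashProcedure).
public class SplitScheduler {

    interface Procedure {
        List<Procedure> execute(); // returns child procedures; never blocks
    }

    static int runAll(Deque<Procedure> queue) {
        int steps = 0;
        while (!queue.isEmpty()) {
            queue.addAll(queue.poll().execute()); // parent yields immediately
            steps++;
        }
        return steps;
    }

    // One child "split WAL" procedure per WAL; the parent does no waiting.
    public static int demo(int walCount) {
        Procedure scp = () -> {
            List<Procedure> splits = new ArrayList<>();
            for (int i = 0; i < walCount; i++) {
                splits.add(Collections::emptyList); // leaf: splits a single WAL
            }
            return splits;
        };
        Deque<Procedure> queue = new ArrayDeque<>();
        queue.add(scp);
        return runAll(queue); // 1 parent step + walCount child steps
    }

    public static void main(String[] args) {
        System.out.println(demo(3)); // 4
    }
}
```

Because `execute()` returns instead of waiting, one worker thread can drain any number of crashed servers' WALs; the blocking version would pin a worker per crashed server for the whole split.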
[GitHub] [hbase] infraio opened a new pull request #2124: HBASE-24743 Reject to add a peer which replicate to itself earlier
infraio opened a new pull request #2124: URL: https://github.com/apache/hbase/pull/2124
[GitHub] [hbase] infraio commented on pull request #2124: HBASE-24743 Reject to add a peer which replicate to itself earlier
infraio commented on pull request #2124: URL: https://github.com/apache/hbase/pull/2124#issuecomment-662809521 PR for branch-2.
[GitHub] [hbase] pankaj72981 commented on pull request #2123: HBASE-24738 [Shell] processlist command fails with ERROR: Unexpected end of file from server when SSL enabled
pankaj72981 commented on pull request #2123: URL: https://github.com/apache/hbase/pull/2123#issuecomment-662807014 @bitoffdev please review.
[GitHub] [hbase] taklwu commented on pull request #2113: HBASE-24286: HMaster won't become healthy after after cloning or crea…
taklwu commented on pull request #2113: URL: https://github.com/apache/hbase/pull/2113#issuecomment-662806848 Thanks @Apache9, we can switch to using the standard `SCP` instead; for this case running `SCP` should be the same. The only difference is that `HBCKSCP` may rescan the meta table (slower) if it cannot find any in-memory region states; IMO that rescan is always skipped because the unknown servers were normally brought in by `loadMeta()`.
[GitHub] [hbase] pankaj72981 opened a new pull request #2123: HBASE-24738 [Shell] processlist command fails with ERROR: Unexpected end of file from server when SSL enabled
pankaj72981 opened a new pull request #2123: URL: https://github.com/apache/hbase/pull/2123 Uncommented the code to handle the SSL scenario.
[GitHub] [hbase] infraio opened a new pull request #2122: HBASE-24743 Reject to add a peer which replicate to itself earlier
infraio opened a new pull request #2122: URL: https://github.com/apache/hbase/pull/2122
[jira] [Updated] (HBASE-23634) Enable "Split WAL to HFile" by default
[ https://issues.apache.org/jira/browse/HBASE-23634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-23634: -- Fix Version/s: (was: 2.4.0) > Enable "Split WAL to HFile" by default > -- > > Key: HBASE-23634 > URL: https://issues.apache.org/jira/browse/HBASE-23634 > Project: HBase > Issue Type: Task >Affects Versions: 3.0.0-alpha-1, 2.3.0 >Reporter: Guanghao Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1 > >
[jira] [Commented] (HBASE-23634) Enable "Split WAL to HFile" by default
[ https://issues.apache.org/jira/browse/HBASE-23634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163177#comment-17163177 ] Michael Stack commented on HBASE-23634: --- {quote}1、before compaction, large number of small hfiles affect read and write performance of region 2、a hfile needs 3 NN RPCs to bulkload during openRegion(validate、rename、createReader) if bulkLoadService ThreadNum is 3, and hfiles is 20(because wal number is 20), and RS is 100, region is 2K*100, and openRegion thread is 75 so hbase needs 3*3*75*100 concurrent NN RPCs and needs 3*20*2K*100 total NN RPCs {quote} >From [~Bo Cui] We can quibble with some of the assessment made above but it does suggest a better accounting is needed before we enable this as the default: * Compare recovered.edits write amplification vs that of writing small hfiles then immediately doing a rewrite via compaction (I like the [~zghao] interpretation of Bo Cui's list as opening the recovered.hfiles as part of the Region w/ the compaction bringing them into Store directory from the .tmp dir) * Replay of recovered.edits inline w/ open as opposed to just opening the file (MTTR benefits). * A compare of NN RPCs as noted above by Bo Cui. * The copy from bulkload of hfile validation is broken – for recovered hfiles and for bulk load – when recovery is for hfiles for meta table (see sub-issue) but the problem is deep-seated needing lots of work to fix. We could remove the validation since the 'system' wrote the files as [~zghao] suggests or move the validation to file open as part of open Region (could end up failing the Region open more often). One question, if we only partially write an HFile and we don't complete (because crash splitting the WAL say), does it get sidelined, cleaned up? Just wondering. Thanks. Unscheduling from 2.4 for now. leaving against hbase3. 
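As a sanity check on Bo Cui's figures quoted above, the multiplications can be redone mechanically. Every constant below is an assumption taken from the comment (thread counts, region counts, HFiles per region), not a measured value; only the arithmetic is being verified:

```java
// Redoes the arithmetic quoted above; every constant is an assumption taken
// from the comment (not a measurement), so only the multiplication is checked.
public class NnRpcEstimate {
    static final long RPCS_PER_HFILE = 3;      // validate, rename, createReader
    static final long BULKLOAD_THREADS = 3;
    static final long OPEN_REGION_THREADS = 75;
    static final long REGION_SERVERS = 100;
    static final long HFILES_PER_REGION = 20;  // one per WAL in the example
    static final long REGIONS_PER_RS = 2000;   // "2K"

    public static long concurrent() { // 3 * 3 * 75 * 100
        return RPCS_PER_HFILE * BULKLOAD_THREADS * OPEN_REGION_THREADS * REGION_SERVERS;
    }

    public static long total() {      // 3 * 20 * 2K * 100
        return RPCS_PER_HFILE * HFILES_PER_REGION * REGIONS_PER_RS * REGION_SERVERS;
    }

    public static void main(String[] args) {
        System.out.println(concurrent()); // 67500
        System.out.println(total());      // 12000000
    }
}
```

So the quoted scenario works out to 67,500 NameNode RPCs in flight and 12 million total, which is the scale motivating the accounting requested above.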
> Enable "Split WAL to HFile" by default > -- > > Key: HBASE-23634 > URL: https://issues.apache.org/jira/browse/HBASE-23634 > Project: HBase > Issue Type: Task >Affects Versions: 3.0.0-alpha-1, 2.3.0 >Reporter: Guanghao Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1, 2.4.0 > >
[GitHub] [hbase] Apache-HBase commented on pull request #2113: HBASE-24286: HMaster won't become healthy after after cloning or crea…
Apache-HBase commented on pull request #2113: URL: https://github.com/apache/hbase/pull/2113#issuecomment-662791330 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 14s | branch-2 passed | | +1 :green_heart: | checkstyle | 1m 30s | branch-2 passed | | +1 :green_heart: | spotbugs | 2m 35s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 7s | the patch passed | | +1 :green_heart: | checkstyle | 1m 29s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 18s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 2m 59s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 25s | The patch does not generate ASF License warnings. 
| | | | 35m 19s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2113 | | JIRA Issue | HBASE-24286 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux acadf2fab521 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / ce4e692699 | | Max. process+thread count | 94 (vs. ulimit of 12500) | | modules | C: hbase-common hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/3/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on pull request #2113: HBASE-24286: HMaster won't become healthy after after cloning or crea…
Apache9 commented on pull request #2113: URL: https://github.com/apache/hbase/pull/2113#issuecomment-662787462 I haven't looked at the patch yet, but what I want to say is that it is not a good idea to use HBCK machinery in automatic failure recovery. The design of HBCK is that the operation is dangerous, so only operators can perform it, manually, and the operators need to know the risk. Will report back after reviewing the patch; I also need to learn what problem we want to solve. Thanks.
[GitHub] [hbase] Apache-HBase commented on pull request #2113: HBASE-24286: HMaster won't become healthy after after cloning or crea…
Apache-HBase commented on pull request #2113: URL: https://github.com/apache/hbase/pull/2113#issuecomment-662784028 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 10m 24s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 39s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 5m 7s | branch-2 passed | | +1 :green_heart: | compile | 1m 46s | branch-2 passed | | +1 :green_heart: | shadedjars | 6m 34s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 15s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 31s | the patch passed | | +1 :green_heart: | compile | 1m 42s | the patch passed | | +1 :green_heart: | javac | 1m 42s | the patch passed | | +1 :green_heart: | shadedjars | 6m 57s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 11s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 53s | hbase-common in the patch passed. | | -1 :x: | unit | 204m 40s | hbase-server in the patch failed. 
| | | | 249m 12s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2113 | | JIRA Issue | HBASE-24286 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 43e4b1a50653 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / ce4e692699 | | Default Java | 1.8.0_232 | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/testReport/ | | Max. process+thread count | 2829 (vs. ulimit of 12500) | | modules | C: hbase-common hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24749) Direct insert HFiles and Persist in-memory HFile tracking
[ https://issues.apache.org/jira/browse/HBASE-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163163#comment-17163163 ] Michael Stack commented on HBASE-24749: --- {quote}and if any HFile is being written successfully without an event marker, we probably need a repair hook (maybe HBCK) to consider including the written storefile back to be tracked. {quote} If an HFile is written successfully but there is no marker in the WAL, then it doesn't exist, right? As part of the WAL replay you will reconstitute it from edits in the WAL? On HBASE-14090, it is old but still cool, virtuous, aiming to hit a bigger target. A question for you that I think might be of general utility is whether you have surveyed the calls to the NN made by HBase on a regular basis? It would be good to get a list of renames – files and dirs – for your project but also, my sense is that we are profligate w/ our NN calls. Given a survey and report, we might be able to improve performance at least around MTTR. Anyways, just a thought. > Direct insert HFiles and Persist in-memory HFile tracking > - > > Key: HBASE-24749 > URL: https://issues.apache.org/jira/browse/HBASE-24749 > Project: HBase > Issue Type: Umbrella > Components: Compaction, HFile >Affects Versions: 3.0.0-alpha-1 >Reporter: Tak-Lon (Stephen) Wu >Assignee: Tak-Lon (Stephen) Wu >Priority: Major > Labels: design, discussion, objectstore, storeFile, storeengine > Attachments: 1B100m-25m25m-performance.pdf, Apache HBase - Direct > insert HFiles and Persist in-memory HFile tracking.pdf > > > We propose a new feature (a new store engine) to remove the {{.tmp}} > directory used in the commit stage for common HFile operations such as flush > and compaction to improve the write throughput and latency on object stores. > Specifically for S3 filesystems, this will also mitigate read-after-write > inconsistencies caused by immediate HFiles validation after moving the > HFile(s) to data directory. 
> Please see attached for this proposal and the initial result captured with > 25m (25m operations) and 1B (100m operations) YCSB workload A LOAD and RUN, > and workload C RUN result. > The goal of this JIRA is to discuss with the community if the proposed > improvement on the object stores use case makes sense and if we missed > anything that should be included. > Improvement Highlights > 1. Lower write latency, especially the p99+ > 2. Higher write throughput on flush and compaction > 3. Lower MTTR on region (re)open or assignment > 4. Remove consistent check dependencies (e.g. DynamoDB) supported by file > system implementation
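One hedged way to start the NameNode-call survey suggested in the comment above is to tally operations from the HDFS NameNode audit log. The sketch below assumes the stock audit-line format with a `cmd=` field; the class name and the sample lines are invented for illustration and nothing here is an existing HBase or HDFS utility:

```java
import java.util.*;
import java.util.regex.*;

// Invented helper for the survey idea above: tallies NameNode operations by
// the cmd= field of HDFS audit-log lines. The line format is assumed from the
// stock NameNode audit logger; the sample lines below are made up.
public class AuditTally {
    private static final Pattern CMD = Pattern.compile("cmd=(\\w+)");

    public static Map<String, Integer> tally(List<String> lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            Matcher m = CMD.matcher(line);
            if (m.find()) {
                counts.merge(m.group(1), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> sample = Arrays.asList(
            "allowed=true ugi=hbase cmd=rename src=/hbase/.tmp/f dst=/hbase/data/f",
            "allowed=true ugi=hbase cmd=open src=/hbase/data/f",
            "allowed=true ugi=hbase cmd=rename src=/hbase/.tmp/g dst=/hbase/data/g");
        System.out.println(tally(sample)); // {open=1, rename=2}
    }
}
```

Run against a day of audit logs, the rename/mkdirs/getfileinfo counts would give the per-operation baseline the comment asks for, and would show directly how much the `.tmp`-commit renames contribute.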
[jira] [Commented] (HBASE-11288) Splittable Meta
[ https://issues.apache.org/jira/browse/HBASE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163154#comment-17163154 ] Duo Zhang commented on HBASE-11288: --- I have to explain a bit here so people will not misunderstand me. The ITBLL stuff was proposed by you [~toffer], so I did not spend much time on it, as I think you will accept it. The cache stuff was considered off topic many times here, so I spent a lot of time convincing others, and finally [~stack] promised to open a separate issue to discuss it, and he did lots of work to open a design doc and collect feedback. And I've already finished implementing HBASE-24459 to show how we can implement a cache server for the master local region. It is a bit of a surprise to me that now you say we do not have a consensus yet so you are just waiting. Maybe it is my fault; I should have asked you about it more, though it was proposed by you. And back to ITBLL: yes, I had a different opinion in the first place, as I did not think it would provide any useful feedback, but what I also said is that you are free to run it as you like, to prove your statement. [~stack] sir, what I mean is, I do not think the RESULT can change my mind, as [~toffer] also said in the latest reply, because master itself is not stable, so the baseline is a problem. It does not mean I will never change my mind, no matter what you do. That is completely different. If you can get a reasonable result from the ITBLL run, I will change my mind. And a technical suggestion on the ITBLL: since it is only a POC, and the assignment part is almost the same on master and branch-2, could you try to rebase your current POC onto branch-2? [~stack] and [~ndimiduk] have done a lot of work to stabilize branch-2, so I think the baseline of branch-2 will be much better. And when a failure happens, you need to find out the root cause: whether it is because of the code you added or not. Of course it will be good if we can pass ITBLL. 
And I want to say again: let's make progress here, this is a very important feature. Recently the AWS folks wanted to add more things (the storefiles list) to the meta table, which will make it even larger. So [~toffer], please do not say that you are just waiting; you are part of the community. You should show others your plan on how to make progress here. Thanks. > Splittable Meta > --- > > Key: HBASE-11288 > URL: https://issues.apache.org/jira/browse/HBASE-11288 > Project: HBase > Issue Type: Umbrella > Components: meta >Reporter: Francis Christopher Liu >Assignee: Francis Christopher Liu >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2121: Backport "HBASE-24295 [Chaos Monkey] abstract logging through the class hierarchy" and addendum to branch-1
Apache-HBase commented on pull request #2121: URL: https://github.com/apache/hbase/pull/2121#issuecomment-662768059 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 7m 2s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 37 new or modified test files. | ||| _ branch-1 Compile Tests _ | | +1 :green_heart: | mvninstall | 9m 55s | branch-1 passed | | +1 :green_heart: | compile | 0m 21s | branch-1 passed with JDK v1.8.0_262 | | +1 :green_heart: | compile | 0m 29s | branch-1 passed with JDK v1.7.0_272 | | +1 :green_heart: | checkstyle | 0m 42s | branch-1 passed | | +1 :green_heart: | shadedjars | 3m 1s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 30s | branch-1 passed with JDK v1.8.0_262 | | +1 :green_heart: | javadoc | 0m 17s | branch-1 passed with JDK v1.7.0_272 | | +0 :ok: | spotbugs | 3m 53s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 0s | branch-1 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 54s | the patch passed | | +1 :green_heart: | compile | 0m 20s | the patch passed with JDK v1.8.0_262 | | +1 :green_heart: | javac | 0m 20s | the patch passed | | +1 :green_heart: | compile | 0m 26s | the patch passed with JDK v1.7.0_272 | | +1 :green_heart: | javac | 0m 26s | the patch passed | | -1 :x: | checkstyle | 0m 31s | hbase-it: The patch generated 5 new + 7 unchanged - 0 fixed = 12 total (was 7) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. 
| | +1 :green_heart: | shadedjars | 2m 50s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 4m 41s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2. | | +1 :green_heart: | javadoc | 0m 12s | the patch passed with JDK v1.8.0_262 | | +1 :green_heart: | javadoc | 0m 16s | the patch passed with JDK v1.7.0_272 | | +1 :green_heart: | findbugs | 0m 0s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 4s | hbase-it in the patch passed. | | +1 :green_heart: | asflicense | 0m 22s | The patch does not generate ASF License warnings. | | | | 38m 32s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2121/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2121 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux bbc2628f57e8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-2121/out/precommit/personality/provided.sh | | git revision | branch-1 / 527e4a6 | | Default Java | 1.7.0_272 | | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_262 /usr/lib/jvm/zulu-7-amd64:1.7.0_272 | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2121/1/artifact/out/diff-checkstyle-hbase-it.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2121/1/testReport/ | | Max. process+thread count | 497 (vs. 
ulimit of 1) | | modules | C: hbase-it U: hbase-it | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2121/1/console | | versions | git=1.9.1 maven=3.0.5 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2113: HBASE-24286: HMaster won't become healthy after after cloning or crea…
Apache-HBase commented on pull request #2113: URL: https://github.com/apache/hbase/pull/2113#issuecomment-662762955 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 13s | branch-2 passed | | +1 :green_heart: | compile | 1m 29s | branch-2 passed | | +1 :green_heart: | shadedjars | 5m 48s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 19s | hbase-common in branch-2 failed. | | -0 :warning: | javadoc | 0m 42s | hbase-server in branch-2 failed. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 53s | the patch passed | | +1 :green_heart: | compile | 1m 28s | the patch passed | | +1 :green_heart: | javac | 1m 28s | the patch passed | | +1 :green_heart: | shadedjars | 5m 44s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 18s | hbase-common in the patch failed. | | -0 :warning: | javadoc | 0m 37s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 35s | hbase-common in the patch passed. | | -1 :x: | unit | 125m 46s | hbase-server in the patch failed. 
| | | | 156m 34s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2113 | | JIRA Issue | HBASE-24286 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 45f3be02e791 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / ce4e692699 | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/testReport/ | | Max. process+thread count | 4356 (vs. ulimit of 12500) | | modules | C: hbase-common hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
[GitHub] [hbase] ndimiduk commented on pull request #2121: Backport "HBASE-24295 [Chaos Monkey] abstract logging through the class hierarchy" and addendum to branch-1
ndimiduk commented on pull request #2121: URL: https://github.com/apache/hbase/pull/2121#issuecomment-662755944 The backport of HBASE-24295 as discussed on HBASE-24662.
[GitHub] [hbase] ndimiduk opened a new pull request #2121: Backport "HBASE-24295 [Chaos Monkey] abstract logging through the class hierarchy" and addendum to branch-1
ndimiduk opened a new pull request #2121: URL: https://github.com/apache/hbase/pull/2121
[jira] [Reopened] (HBASE-24295) [Chaos Monkey] abstract logging through the class hierarchy
[ https://issues.apache.org/jira/browse/HBASE-24295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk reopened HBASE-24295: -- Reopen for branch-1 backport. > [Chaos Monkey] abstract logging through the class hierarchy > --- > > Key: HBASE-24295 > URL: https://issues.apache.org/jira/browse/HBASE-24295 > Project: HBase > Issue Type: Task > Components: integration tests >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.0 > > > Running chaos monkey and watching the logs, it's very difficult to tell what > actions are actually running. There's lots of shared methods through the > class hierarchy that extends from {{abstract class Action}}, and each class > comes with its own {{Logger}}. As a result, the logs have useless stuff like > {noformat} > INFO actions.Action: Started regionserver... > {noformat} > Add {{protected abstract Logger getLogger()}} to the class's internal > interface, and have the concrete implementations provide their logger. -- This message was sent by Atlassian Jira (v8.3.4#803005)
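The pattern the issue describes can be sketched outside of HBase. The class and method names below are illustrative stand-ins for the chaos-monkey action hierarchy, not the actual HBASE-24295 code: the abstract base class declares `protected abstract Logger getLogger()`, and every concrete action returns a logger named after itself, so shared helper methods no longer log under the generic `actions.Action` name.

```java
import java.util.logging.Logger;

// Base class declares the accessor; shared code always logs through it.
abstract class Action {
  protected abstract Logger getLogger();

  // A shared helper: the log line is attributed to the concrete subclass.
  void perform() {
    getLogger().info("Started action");
  }
}

// Each concrete action supplies its own Logger (names are hypothetical).
class RestartRegionServerAction extends Action {
  private static final Logger LOG =
      Logger.getLogger(RestartRegionServerAction.class.getName());
  @Override protected Logger getLogger() { return LOG; }
}

class MoveRegionsAction extends Action {
  private static final Logger LOG =
      Logger.getLogger(MoveRegionsAction.class.getName());
  @Override protected Logger getLogger() { return LOG; }
}

public class LoggerPerSubclass {
  public static void main(String[] args) {
    Action a = new RestartRegionServerAction();
    Action b = new MoveRegionsAction();
    // Shared code now logs under distinct, meaningful logger names.
    System.out.println(a.getLogger().getName());
    System.out.println(b.getLogger().getName());
  }
}
```

With this shape, a log line from the shared `perform()` helper identifies which monkey action actually ran, which is exactly the readability problem the issue calls out.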
[GitHub] [hbase] Apache-HBase commented on pull request #2119: Backport "HBASE-24696 Include JVM information on Web UI under "Software Attributes"" to branch-2.2
Apache-HBase commented on pull request #2119: URL: https://github.com/apache/hbase/pull/2119#issuecomment-662751827 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -0 :warning: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-2.2 Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 5m 36s | branch-2.2 passed | | +1 :green_heart: | compile | 2m 1s | branch-2.2 passed | | +1 :green_heart: | checkstyle | 2m 28s | branch-2.2 passed | | +1 :green_heart: | shadedjars | 4m 23s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 26s | branch-2.2 passed | | +0 :ok: | spotbugs | 1m 34s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 5m 52s | branch-2.2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 5m 6s | the patch passed | | +1 :green_heart: | compile | 2m 13s | the patch passed | | +1 :green_heart: | javac | 2m 13s | the patch passed | | +1 :green_heart: | checkstyle | 2m 45s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 4m 50s | patch has no errors when building our shaded downstream artifacts. 
| | +1 :green_heart: | hadoopcheck | 28m 8s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 2.10.0 or 3.1.2 3.2.1. | | +1 :green_heart: | javadoc | 1m 38s | the patch passed | | +1 :green_heart: | findbugs | 6m 58s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 180m 47s | hbase-server in the patch failed. | | +1 :green_heart: | unit | 3m 50s | hbase-thrift in the patch passed. | | +1 :green_heart: | unit | 5m 1s | hbase-rest in the patch passed. | | +1 :green_heart: | asflicense | 1m 19s | The patch does not generate ASF License warnings. | | | | 273m 42s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hbase.quotas.TestRegionSizeUse | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2119/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2119 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 8fe20924b6bd 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-2119/out/precommit/personality/provided.sh | | git revision | branch-2.2 / 2c08876d70 | | Default Java | 1.8.0_181 | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2119/1/artifact/out/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2119/1/testReport/ | | Max. process+thread count | 5172 (vs. ulimit of 1) | | modules | C: hbase-server hbase-thrift hbase-rest U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2119/1/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. 
[jira] [Commented] (HBASE-24749) Direct insert HFiles and Persist in-memory HFile tracking
[ https://issues.apache.org/jira/browse/HBASE-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163136#comment-17163136 ] Tak-Lon (Stephen) Wu commented on HBASE-24749: -- Thanks Stack. HBASE-14090 and its [design doc|https://docs.google.com/document/d/10tSCSSWPwdFqOLLYtY2aVFe6iCIrsBk4Vqm8LSGUfhQ/edit#] seem to share several directions with this one, e.g. how to use an hbase:meta column to track storefiles and how to create a cleaner to efficiently remove leftover storefiles. So we will take another look at those design docs to see what we can pick up within this proposed scope. Also, as [~elserj] pointed out on the dev@ list, Accumulo also manages its data/RFiles with a metadata table. Without support from ZK, writing an extra edit to the WAL as a `commit` marker and reusing it for recovering the ROOT region (for hbase:meta and maybe the MasterRegion) makes sense: if the flush fails, we don't write a new marker, and we remove the storefile if it was written. And if any HFile is written successfully without such a marker, we probably need a repair hook (maybe HBCK) to consider including the written storefile back into tracking. > Direct insert HFiles and Persist in-memory HFile tracking > - > > Key: HBASE-24749 > URL: https://issues.apache.org/jira/browse/HBASE-24749 > Project: HBase > Issue Type: Umbrella > Components: Compaction, HFile >Affects Versions: 3.0.0-alpha-1 >Reporter: Tak-Lon (Stephen) Wu >Assignee: Tak-Lon (Stephen) Wu >Priority: Major > Labels: design, discussion, objectstore, storeFile, storeengine > Attachments: 1B100m-25m25m-performance.pdf, Apache HBase - Direct > insert HFiles and Persist in-memory HFile tracking.pdf > > > We propose a new feature (a new store engine) to remove the {{.tmp}} > directory used in the commit stage for common HFile operations such as flush > and compaction to improve the write throughput and latency on object stores. 
> Specifically for S3 filesystems, this will also mitigate read-after-write > inconsistencies caused by immediate HFiles validation after moving the > HFile(s) to the data directory. > Please see attached for this proposal and the initial result captured with > 25m (25m operations) and 1B (100m operations) YCSB workload A LOAD and RUN, > and workload C RUN result. > The goal of this JIRA is to discuss with the community if the proposed > improvement on the object stores use case makes sense and if we missed > anything that should be included. > Improvement Highlights > 1. Lower write latency, especially the p99+ > 2. Higher write throughput on flush and compaction > 3. Lower MTTR on region (re)open or assignment > 4. Remove consistency-check dependencies (e.g. DynamoDB) supported by file > system implementation -- This message was sent by Atlassian Jira (v8.3.4#803005)
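The commit-marker idea discussed in the comment above can be sketched as a toy protocol. All names here are hypothetical illustrations, not the HBASE-24749 API: a flush writes the HFile directly into place, then appends a COMMIT marker to the WAL; recovery only trusts HFiles whose marker made it in, and anything unmarked is left for a repair hook (e.g. HBCK) to reconcile.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CommitMarkerSketch {
  static final List<String> wal = new ArrayList<>();   // stands in for the WAL
  static final Set<String> dataDir = new HashSet<>();  // stands in for the store directory

  // Flush protocol: (1) write HFile in place, (2) append a COMMIT marker.
  // A crash between the two steps leaves an untracked file behind.
  static void flush(String hfile, boolean crashBeforeMarker) {
    dataDir.add(hfile);                // step 1: HFile written directly, no .tmp rename
    if (crashBeforeMarker) return;     // simulated failure between the two steps
    wal.add("COMMIT " + hfile);        // step 2: marker makes the HFile tracked
  }

  // Recovery: only HFiles with a COMMIT marker are considered part of the store.
  static Set<String> recover() {
    Set<String> tracked = new HashSet<>();
    for (String entry : wal) {
      if (entry.startsWith("COMMIT ")) tracked.add(entry.substring(7));
    }
    return tracked;
  }

  public static void main(String[] args) {
    flush("hfile-1", false);
    flush("hfile-2", true);            // crashed before its marker was written
    Set<String> tracked = recover();
    System.out.println("tracked=" + tracked);
    // The difference is what a repair hook would have to inspect.
    System.out.println("orphans=" + (dataDir.size() - tracked.size()));
  }
}
```

The sketch shows why the comment distinguishes the two failure shapes: a flush that dies before the marker simply leaves a removable orphan, while a successfully written file without a marker needs the repair hook to decide whether to re-include it.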
[jira] [Commented] (HBASE-24692) WebUI header bar overlaps page content when window is too narrow
[ https://issues.apache.org/jira/browse/HBASE-24692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163124#comment-17163124 ] Mingliang Liu commented on HBASE-24692: --- [~ndimiduk] Feel free to re-assign if you have a plan. When I took this up, I thought it could be fixed with a few quick adjustments. It makes perfect sense to upgrade Bootstrap to a newer version. Grouping items into sub-menus is also a good idea, since the menu is growing longer. I'll find time in the coming months to work on this if it has not been re-assigned by then. > WebUI header bar overlaps page content when window is too narrow > > > Key: HBASE-24692 > URL: https://issues.apache.org/jira/browse/HBASE-24692 > Project: HBase > Issue Type: Bug > Components: UI >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Mingliang Liu >Priority: Minor > Attachments: 24692-ex1.png, 24692-ex2.png, 24692-ex3.png, > 24692-ex4.png > > > It seems the CSS on our WebUI is such that the header will expand down > vertically as the content wraps dynamically. However, the page content does > not shift down along with it, resulting in the header overlapping the page > content. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24734) Wrong comparator opening Region when 'split-to-WAL' enabled.
[ https://issues.apache.org/jira/browse/HBASE-24734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163123#comment-17163123 ] Michael Stack commented on HBASE-24734: --- This is a super rare bug. It happens during verification of the recovered hfile on open, where we check that its first and last keys are inside the RegionInfo spec. By chance, the hfile's first or last key here happened to be 'in-range' if you used the right (meta) comparator, but out of range if you used the raw bytes comparator, as RegionInfo#containsRange does. Digging in, the RegionInfo#containsRange methods that have been around forever (weirdly duplicated: first in HRegionInfo, now deprecated, then moved to RegionInfoBuilder) do NOT support range checks for the meta table. Even the hardcoded FIRST_META_REGIONINFO fails to override containsRange in the Interface to provide versions that use the meta table comparator (the Cell#METACOMPARATOR doesn't offer a means of comparing byte arrays... just Cells and ByteBuffers and byte [] combinations). Let this issue be for fixing this (rangeCheck is also done verifying bulk load... we should at least fail bulk load if trying to bulk load meta... ). > Wrong comparator opening Region when 'split-to-WAL' enabled. 
> > > Key: HBASE-24734 > URL: https://issues.apache.org/jira/browse/HBASE-24734 > Project: HBase > Issue Type: Sub-task > Components: HFile, MTTR >Reporter: Michael Stack >Priority: Major > > Came across this when we were testing the 'split-to-hfile' feature running > ITBLL: > > {code:java} > 2020-07-10 10:16:49,983 INFO org.apache.hadoop.hbase.regionserver.HRegion: > Closing region hbase:meta,,1.15882307402020-07-10 10:16:49,997 INFO > org.apache.hadoop.hbase.regionserver.HRegion: Closed > hbase:meta,,1.15882307402020-07-10 10:16:49,998 WARN > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler: Fatal error > occurred while opening region hbase:meta,,1.1588230740, > aborting...java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. > at > org.apache.hadoop.hbase.client.RegionInfoBuilder$MutableRegionInfo.containsRange(RegionInfoBuilder.java:300) > at > org.apache.hadoop.hbase.regionserver.HStore.tryCommitRecoveredHFile(HStore.java:) > at > org.apache.hadoop.hbase.regionserver.HRegion.loadRecoveredHFilesIfAny(HRegion.java:5442) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1010) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:950) >at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7490) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7448) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7424) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7382) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7333) > at > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:135) > at > 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) > at java.base/java.lang.Thread.run(Thread.java:834)2020-07-10 > 10:16:50,005 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: * > ABORTING region server hbasedn149.example.org,16020,1594375563853: Failed to > open region hbase:meta,,1.1588230740 and can not recover > *java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. > {code} > Seems basic case of wrong comparator. Below passes if I use the meta > comparator > {code:java} > @Test > public void testBinaryKeys() throws Exception { > Set set = new TreeSet<>(CellComparatorImpl.COMPARATOR); > final byte [] fam = Bytes.toBytes("col"); > final byte [] qf = Bytes.toBytes("umn"); > final byte [] nb = new byte[0]; > Cell [] keys = { > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u\u,2"), fam, qf, 2, > nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u0001,3"), fam, qf, 3, nb)),
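The comparator mismatch described above can be reproduced in miniature. This is a toy model, not the HBase comparators: hbase:meta row keys have the shape "table,startkey,timestamp", and a raw lexicographic byte comparison (what the stack trace's `containsRange` effectively does) can order two such rows differently from a comparator that splits the key into table/startkey/timestamp fields first, because a start-key byte can sort below the ',' delimiter (0x2C).

```java
import java.nio.charset.StandardCharsets;

public class MetaKeyOrdering {

  // Plain lexicographic unsigned-byte comparison, like Bytes.compareTo.
  static int rawCompare(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0) return d;
    }
    return a.length - b.length;
  }

  // Simplified field-aware comparison: table name, then start key, then id.
  static int metaCompare(String a, String b) {
    String[] fa = split(a), fb = split(b);
    for (int i = 0; i < 3; i++) {
      int d = fa[i].compareTo(fb[i]);
      if (d != 0) return d;
    }
    return 0;
  }

  // Split "table,startkey,timestamp" at the first and last comma.
  static String[] split(String row) {
    int first = row.indexOf(','), last = row.lastIndexOf(',');
    return new String[] {
      row.substring(0, first), row.substring(first + 1, last), row.substring(last + 1)
    };
  }

  public static void main(String[] args) {
    // Empty start key (the table's first region) vs. a start key whose first
    // byte (0x10) sorts below the ',' delimiter (0x2C) -- like the
    // "\x10\x02J\xA1" start key in the stack trace above.
    String emptyStart = "T,,2";
    String lowByteStart = "T,\u0010abc,1";
    int raw = rawCompare(emptyStart.getBytes(StandardCharsets.ISO_8859_1),
                         lowByteStart.getBytes(StandardCharsets.ISO_8859_1));
    int meta = metaCompare(emptyStart, lowByteStart);
    // Raw bytes put the empty-start-key row AFTER; the field-aware
    // comparator puts it FIRST -- the two orderings disagree.
    System.out.println("raw=" + Integer.signum(raw) + " meta=" + Integer.signum(meta));
  }
}
```

The disagreement (`raw=1 meta=-1`) is exactly the shape of failure in the "Invalid range" exception: under the raw comparator the empty-start-key meta row appears to come after a row whose start key begins with a low byte, so the range check rejects a key that the meta comparator would accept.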
[jira] [Commented] (HBASE-22146) SpaceQuotaViolationPolicy Disable is not working in Namespace level
[ https://issues.apache.org/jira/browse/HBASE-22146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163121#comment-17163121 ] Hudson commented on HBASE-22146: Results for branch branch-2.2 [build #917 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/917/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/917//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/917//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/917//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > SpaceQuotaViolationPolicy Disable is not working in Namespace level > --- > > Key: HBASE-22146 > URL: https://issues.apache.org/jira/browse/HBASE-22146 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Uma Maheswari >Assignee: Surbhi Kochhar >Priority: Major > Labels: Quota, space > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0, 2.2.7 > > > SpaceQuotaViolationPolicy Disable is not working in Namespace level > PFB the steps: > * Create Namespace and set Quota violation policy as Disable > * Create tables under namespace and violate Quota > Expected result: Tables to get disabled > Actual Result: Tables are not getting disabled > Note: mutation operation is not allowed on the table -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24692) WebUI header bar overlaps page content when window is too narrow
[ https://issues.apache.org/jira/browse/HBASE-24692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163119#comment-17163119 ] Nick Dimiduk commented on HBASE-24692: -- I looked over this with a JS/CSS programmer friend. First suggestion is to upgrade from this ancient version of Bootstrap. Barring that, and coming directly from the Bootstrap documentation, we should reduce the number of items in the menus. The suggested approach is to group some of these together into sub-menus. > WebUI header bar overlaps page content when window is too narrow > > > Key: HBASE-24692 > URL: https://issues.apache.org/jira/browse/HBASE-24692 > Project: HBase > Issue Type: Bug > Components: UI >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Mingliang Liu >Priority: Minor > Attachments: 24692-ex1.png, 24692-ex2.png, 24692-ex3.png, > 24692-ex4.png > > > It seems the CSS on our WebUI is such that the header will expand down > vertically as the content wraps dynamically. However, the page content does > not shift down along with it, resulting in the header overlapping the page > content. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] taklwu commented on pull request #2113: HBASE-24286: HMaster won't become healthy after after cloning or crea…
taklwu commented on pull request #2113: URL: https://github.com/apache/hbase/pull/2113#issuecomment-662730825 While waiting for the unit test runs, I want to bring up two extra topics that may be followed up in new JIRA(s). 1. We reverted a [change](https://github.com/apache/hbase/commit/4d5efec76718032a1e55024fd5133409e4be3cb8#diff-21659161b1393e6632730dcbea205fd8R74-R75) in [HBASE-24471](https://github.com/apache/hbase/commit/4d5efec76718032a1e55024fd5133409e4be3cb8) that always deletes the existing meta table if we're restarting on a fresh cluster with no WALs and no ZK data. I wonder whether @Apache9 added this meta table removal for a special requirement on branch-2.3+; that was the major behavior change between branch-2.2 (which did not delete meta if it exists) and branch-2.3+. Here, should we add a feature flag to enable this meta directory removal? IMO, migration from a cluster with an existing meta table and other tables may fail and need HBCK to repair region states (pending the unit test suite completing to prove our change is safe). 2. With or without this PR, I found a potential master-initialization issue that could be a bug in a dynamic-hostname environment. If we only keep ZK data and have no WALs, the location of the meta table carries the old hostname, and the master hangs waiting for the meta region to come online on that old host. However, it cannot come online, because InitMetaProcedure cannot be submitted: the meta region is considered `OPEN` and is blocked by the condition [`if (rs != null && rs.isOffline()) {`](https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java#L1051-L1060). Normally, if WALs exist, the missing server will expire and the meta region will come online after the SCP handles that dead server. Is this behavior expected? Do you think we should support this corner case? ``` ### for case 2. 
2020-07-22 13:16:05,802 INFO [master/localhost:0:becomeActiveMaster] master.HMaster(1020): hbase:meta {1588230740 state=OPEN, ts=1595448965762, server=localhost,54945,1595448957980} ... 2020-07-22 15:04:33,802 WARN [master/localhost:0:becomeActiveMaster] master.HMaster(1230): hbase:meta,,1.1588230740 is NOT online; state={1588230740 state=OPEN, ts=1595455438210, server=localhost,62506,1595455430742}; ServerCrashProcedures=false. Master startup cannot progress, in holding-pattern until region onlined. ``` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
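The gating logic in that comment can be summarized with a toy model. All names here are simplified stand-ins, not the real HMaster code: master startup only submits an InitMetaProcedure when the recorded meta region state is missing or OFFLINE, and an SCP only fires when there are WALs pointing at the dead server. If ZK/meta state says OPEN on a server that no longer exists and there are no WALs, neither path runs and startup holds forever, which matches the log lines above.

```java
enum State { OFFLINE, OPEN }

class RegionStateNode {
  final State state;
  final String server;
  RegionStateNode(State s, String srv) { state = s; server = srv; }
  boolean isOffline() { return state == State.OFFLINE; }
}

public class MetaInitGate {
  // Simplified decision mirroring the three startup outcomes discussed above.
  static String decide(RegionStateNode rs, boolean serverAlive, boolean walsPresent) {
    if (rs == null || rs.isOffline()) {
      return "submit InitMetaProcedure";            // fresh-cluster path
    }
    if (!serverAlive && walsPresent) {
      return "schedule SCP for " + rs.server;       // normal crash recovery
    }
    return "hold: waiting for meta on " + rs.server; // the reported hang
  }

  public static void main(String[] args) {
    // ZK says meta is OPEN on a host that no longer exists, and no WALs remain.
    RegionStateNode rs = new RegionStateNode(State.OPEN, "localhost,54945,1595448957980");
    System.out.println(decide(rs, false, false));
  }
}
```

The sketch shows why the corner case only bites when both signals are gone: either an OFFLINE state (fresh cluster) or leftover WALs (crash recovery) would unblock startup.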
[GitHub] [hbase] Apache-HBase commented on pull request #2113: HBASE-24286: HMaster won't become healthy after after cloning or crea…
Apache-HBase commented on pull request #2113: URL: https://github.com/apache/hbase/pull/2113#issuecomment-662729548 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 4m 3s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 32s | branch-2 passed | | +1 :green_heart: | checkstyle | 1m 31s | branch-2 passed | | +1 :green_heart: | spotbugs | 2m 40s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 14s | the patch passed | | +1 :green_heart: | checkstyle | 1m 29s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 32s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 2m 55s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 26s | The patch does not generate ASF License warnings. 
| | | | 39m 19s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2113 | | JIRA Issue | HBASE-24286 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 95f531c7f9df 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / ce4e692699 | | Max. process+thread count | 94 (vs. ulimit of 12500) | | modules | C: hbase-common hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2113/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] taklwu commented on a change in pull request #2113: HBASE-24286: HMaster won't become healthy after cloning or crea…
taklwu commented on a change in pull request #2113: URL: https://github.com/apache/hbase/pull/2113#discussion_r459100872 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRecreateCluster.java ## @@ -0,0 +1,185 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.master; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; + +import java.io.IOException; +import java.time.Duration; +import java.util.List; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.HBaseClassTestRule; +import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.MiniHBaseCluster; +import org.apache.hadoop.hbase.StartMiniClusterOption; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.RegionInfo; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore; +import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.zookeeper.ZooKeeperHelper; +import org.apache.zookeeper.ZKUtil; +import org.apache.zookeeper.ZooKeeper; +import org.junit.ClassRule; +import org.junit.Rule; +import org.junit.Test; +import org.junit.experimental.categories.Category; +import org.junit.rules.TestName; + +/** + * Test reuse data directory when cluster failover with a set of new region servers with + * different hostnames. 
For any hbase system table and user table can be assigned normally after + * cluster restart + */ +@Category({ LargeTests.class }) +public class TestRecreateCluster { + @ClassRule + public static final HBaseClassTestRule CLASS_RULE = + HBaseClassTestRule.forClass(TestRecreateCluster.class); + + @Rule + public TestName name = new TestName(); + + private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); + private static final int NUM_RS = 3; + private static final long LIVE_REGION_SERVER_TIMEOUT_MS = Duration.ofMinutes(3).toMillis(); + + @Test + public void testRecreateCluster_UserTableDisabled() throws Exception { +TEST_UTIL.startMiniCluster(NUM_RS); +try { + TableName tableName = TableName.valueOf("t1"); + prepareDataBeforeRecreate(TEST_UTIL, tableName); + TEST_UTIL.getAdmin().disableTable(tableName); + TEST_UTIL.waitTableDisabled(tableName.getName()); + restartCluster(); + TEST_UTIL.getAdmin().enableTable(tableName); + validateDataAfterRecreate(TEST_UTIL, tableName); +} finally { + TEST_UTIL.shutdownMiniCluster(); +} + } + + @Test + public void testRecreateCluster_UserTableEnabled() throws Exception { +TEST_UTIL.startMiniCluster(NUM_RS); +try { + TableName tableName = TableName.valueOf("t1"); + prepareDataBeforeRecreate(TEST_UTIL, tableName); + restartCluster(); + validateDataAfterRecreate(TEST_UTIL, tableName); +} finally { + TEST_UTIL.shutdownMiniCluster(); +} + } + + private void restartCluster() throws Exception { +// flush cache so that everything is on disk +TEST_UTIL.getMiniHBaseCluster().flushcache(); + +// delete all wal data +Path walRootPath = TEST_UTIL.getMiniHBaseCluster().getMaster().getWALRootDir(); +TEST_UTIL.shutdownMiniHBaseCluster(); +TEST_UTIL.getDFSCluster().getFileSystem().delete( +new Path(walRootPath.toString(), HConstants.HREGION_LOGDIR_NAME), true); +TEST_UTIL.getDFSCluster().getFileSystem().delete( Review comment: refactor to wait will procedure completed and delete the WALs and Master procedure WALs. 
[GitHub] [hbase] taklwu commented on a change in pull request #2113: HBASE-24286: HMaster won't become healthy after cloning or crea…
taklwu commented on a change in pull request #2113: URL: https://github.com/apache/hbase/pull/2113#discussion_r459100536 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRecreateCluster.java ## @@ -0,0 +1,185 @@ +/** Review comment: fixed, sorry that I copied it from another file, and I found that many places in hbase have this header lol
[jira] [Assigned] (HBASE-24749) Direct insert HFiles and Persist in-memory HFile tracking
[ https://issues.apache.org/jira/browse/HBASE-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zach York reassigned HBASE-24749: - Assignee: Tak-Lon (Stephen) Wu > Direct insert HFiles and Persist in-memory HFile tracking > - > > Key: HBASE-24749 > URL: https://issues.apache.org/jira/browse/HBASE-24749 > Project: HBase > Issue Type: Umbrella > Components: Compaction, HFile >Affects Versions: 3.0.0-alpha-1 >Reporter: Tak-Lon (Stephen) Wu >Assignee: Tak-Lon (Stephen) Wu >Priority: Major > Labels: design, discussion, objectstore, storeFile, storeengine > Attachments: 1B100m-25m25m-performance.pdf, Apache HBase - Direct > insert HFiles and Persist in-memory HFile tracking.pdf > > > We propose a new feature (a new store engine) to remove the {{.tmp}} > directory used in the commit stage for common HFile operations such as flush > and compaction to improve the write throughput and latency on object stores. > Specifically for S3 filesystems, this will also mitigate read-after-write > inconsistencies caused by immediate HFiles validation after moving the > HFile(s) to data directory. > Please see attached for this proposal and the initial result captured with > 25m (25m operations) and 1B (100m operations) YCSB workload A LOAD and RUN, > and workload C RUN result. > The goal of this JIRA is to discuss with the community if the proposed > improvement on the object stores use case makes senses and if we miss > anything should be included. > Improvement Highlights > 1. Lower write latency, especially the p99+ > 2. Higher write throughput on flush and compaction > 3. Lower MTTR on region (re)open or assignment > 4. Remove consistent check dependencies (e.g. DynamoDB) supported by file > system implementation -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24749) Direct insert HFiles and Persist in-memory HFile tracking
[ https://issues.apache.org/jira/browse/HBASE-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163074#comment-17163074 ] Zach York commented on HBASE-24749: --- Hmm, perhaps a flush of the root table could include the currently tracked files (for root) in the flush edit, then replaying the WAL for the root would be a pretty good guarantee. If you don't have the WAL durability, there is a potential for dataloss anyways. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24749) Direct insert HFiles and Persist in-memory HFile tracking
[ https://issues.apache.org/jira/browse/HBASE-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163069#comment-17163069 ] Michael Stack commented on HBASE-24749: --- On where to write ROOT hfile list, when no zk, I suppose you can't adapt the ROOT Region to write a manifest file that gets updated as the file set changes, because you don't have sufficient guarantees from storage? Will your WAL filesystem have better semantics? On events such as flush and compaction, we write markers to the WAL w/ notes listing files that participated in the event. On recovery, we read these events, completing compactions if all participants are present and it looked like we crashed after the compaction completed but before we got to slot the new files into place and remove the old. Could you use this mechanism – aggregating the result of hfile list changes (flushes/compactions)? Or add an event of your own that would make your job easier? -- This message was sent by Atlassian Jira (v8.3.4#803005)
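The recovery rule Stack sketches can be illustrated roughly as follows. All names here are invented for illustration and this is not actual HBase recovery code: a compaction marker records which files went in and which file came out, and on replay the file swap is finished only when the marker's participants are still accounted for on disk.

```java
import java.util.List;
import java.util.Set;

// Illustrative sketch (invented names) of replaying a WAL compaction marker.
public class CompactionReplaySketch {
    // If the compacted output exists but the inputs were never removed, the
    // process crashed after the compaction completed and before the old files
    // were swapped out, so the swap should be finished during recovery.
    public static boolean shouldFinishSwap(List<String> markerInputs, String markerOutput,
            Set<String> filesOnDisk) {
        return filesOnDisk.contains(markerOutput) && filesOnDisk.containsAll(markerInputs);
    }
}
```

The same marker stream could, as the comment suggests, be folded together to reconstruct the current hfile list without a separate manifest.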
[jira] [Commented] (HBASE-21721) reduce write#syncs() times
[ https://issues.apache.org/jira/browse/HBASE-21721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163054#comment-17163054 ] Michael Stack commented on HBASE-21721: --- {quote}As I read this change, if a difference between highestUnsyncedSequence and currentSequence post-sync, then the syncfutures inside this difference will be released but their sync may not have come in so yes, less syncs but our accounting will be off. {quote} I see now how it could make a sync cover more edits; my bad (I read [~anoop.hbase] 's comments in PR... that helped). Looking at AsyncFSWAL, it does the math differently so this does not apply there. If an update, could test and apply; would help older hbases. Thanks.
> reduce write#syncs() times
> --
>
> Key: HBASE-21721
> URL: https://issues.apache.org/jira/browse/HBASE-21721
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 1.3.1, 2.1.1, master, 2.2.3
> Reporter: Bo Cui
> Priority: Major
>
> the number of write#syncs can be reduced by updating the highestUnsyncedSequence:
> before write#sync(), get the current highestUnsyncedSequence
> after write#sync(), highestSyncedSequence = highestUnsyncedSequence
>
> {code:title=FSHLog.java|borderStyle=solid}
> // Some comments here
> public void run() {
>   long currentSequence;
>   while (!isInterrupted()) {
>     int syncCount = 0;
>     try {
>       while (true) {
>         ...
>         try {
>           Trace.addTimelineAnnotation("syncing writer");
>           long unSyncedFlushSeq = highestUnsyncedSequence;
>           writer.sync();
>           Trace.addTimelineAnnotation("writer synced");
>           if (unSyncedFlushSeq > currentSequence) currentSequence = unSyncedFlushSeq;
>           currentSequence = updateHighestSyncedSequence(currentSequence);
>         } catch (IOException e) {
>           LOG.error("Error syncing, request close of WAL", e);
>           lastException = e;
>         } catch (Exception e) {
>           ...
>         }
>       }
>     }
>   }
> }
> {code}
> Added code:
> long unSyncedFlushSeq = highestUnsyncedSequence;
> if (unSyncedFlushSeq > currentSequence) currentSequence = unSyncedFlushSeq;
-- This message was sent by Atlassian Jira (v8.3.4#803005)
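The accounting the proposal relies on can be modeled with a toy class (invented names, not FSHLog itself): one sync() covers every edit appended before the sync started, so snapshotting highestUnsyncedSequence just before syncing lets a single sync release all futures up to that point.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of the accounting proposed in HBASE-21721, not actual FSHLog code.
public class SyncAccountingSketch {
    private final AtomicLong highestUnsyncedSequence = new AtomicLong();
    private long highestSyncedSequence;

    // An append assigns the next sequence number.
    public long append() {
        return highestUnsyncedSequence.incrementAndGet();
    }

    // One sync covers everything appended before it started; returns how many
    // sequence numbers this single sync released.
    public long syncOnce() {
        long unSyncedFlushSeq = highestUnsyncedSequence.get(); // snapshot pre-sync
        // writer.sync() would run here; edits appended after the snapshot are
        // not claimed and wait for the next sync.
        long released = unSyncedFlushSeq - highestSyncedSequence;
        highestSyncedSequence = unSyncedFlushSeq;
        return released;
    }
}
```

Stack's concern in the follow-up is exactly about this snapshot: any sequence inside the released window whose physical sync has not actually completed would be accounted as synced.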
[jira] [Commented] (HBASE-21721) reduce write#syncs() times
[ https://issues.apache.org/jira/browse/HBASE-21721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163041#comment-17163041 ] Michael Stack commented on HBASE-21721: --- As I read this change, if there is a difference between highestUnsyncedSequence and currentSequence post-sync, then the syncfutures inside this difference will be released but their sync may not have come in, so yes, fewer syncs, but our accounting will be off. Please correct me if I have it wrong. Code is different now in FSHLog and the default is async WAL; is this needed there? Thanks [~Bo Cui] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"
[ https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163036#comment-17163036 ] Nick Dimiduk commented on HBASE-24696: -- 2.2 and 2.1 backport PRs are up and waiting on precommit checks. https://github.com/apache/hbase/pull/2119 https://github.com/apache/hbase/pull/2120 > Include JVM information on Web UI under "Software Attributes" > - > > Key: HBASE-24696 > URL: https://issues.apache.org/jira/browse/HBASE-24696 > Project: HBase > Issue Type: Improvement > Components: UI >Reporter: Nick Dimiduk >Assignee: Mingliang Liu >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0 > > Attachments: Screen Shot 2020-07-17 at 10.55.56 PM.png > > > It's a small thing, but seems like an omission. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ndimiduk commented on a change in pull request #2034: Backport "HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length. (#1970)" to branch
ndimiduk commented on a change in pull request #2034: URL: https://github.com/apache/hbase/pull/2034#discussion_r459047138 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java ## @@ -85,6 +90,12 @@ public void sync(boolean forceSync) throws IOException { } else { fsdos.hflush(); } +AtomicUtils.updateMax(this.syncedLength, fsdos.getPos()); Review comment: @Apache9 I read your later comments about keeping `AtomicUtils` because the built-in `AtomicLong` does not provide the short-circuit semantics you have implemented there. Sorry, I cannot find that thread to reply there. I dug around and don't see any intrinsics in the JVM that would implement this for us, so I agree with keeping our implementation.
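For context, the short-circuit semantics under discussion can be sketched as a small CAS loop. This is an illustrative re-implementation, not the actual `AtomicUtils` source: it returns without attempting a CAS once the current value is already at least the candidate, whereas `AtomicLong.accumulateAndGet(x, Math::max)` still attempts a CAS even when the value would not change.

```java
import java.util.concurrent.atomic.AtomicLong;

public class UpdateMaxSketch {
    // Monotonically raise 'max' to 'value'; bail out early (no CAS attempt)
    // when the current value is already >= value. Illustrative sketch only.
    public static void updateMax(AtomicLong max, long value) {
        while (true) {
            long current = max.get();
            if (current >= value) {
                return; // short-circuit: nothing to do, skip the CAS entirely
            }
            if (max.compareAndSet(current, value)) {
                return; // we won the race and raised the maximum
            }
            // lost a race to another updater; re-read and retry
        }
    }
}
```

This matters for `syncedLength` because concurrent callers may observe positions out of order; the loop guarantees the stored length never moves backwards.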
[GitHub] [hbase] Apache-HBase commented on pull request #2118: HBASE-24758 Avoid flooding replication source RSes logs when no sinks…
Apache-HBase commented on pull request #2118: URL: https://github.com/apache/hbase/pull/2118#issuecomment-662653363 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 22s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 10s | master passed | | +1 :green_heart: | compile | 0m 57s | master passed | | +1 :green_heart: | shadedjars | 6m 19s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 42s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 8s | the patch passed | | +1 :green_heart: | compile | 1m 1s | the patch passed | | +1 :green_heart: | javac | 1m 1s | the patch passed | | +1 :green_heart: | shadedjars | 6m 13s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 38s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 216m 5s | hbase-server in the patch passed. | | | | 243m 38s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2118/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2118 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux a8ff26aeedda 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8191fbdd7d | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2118/1/testReport/ | | Max. process+thread count | 3945 (vs. 
ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2118/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] ndimiduk opened a new pull request #2120: Backport "HBASE-24696 Include JVM information on Web UI under "Software Attributes"" to branch-2.1
ndimiduk opened a new pull request #2120: URL: https://github.com/apache/hbase/pull/2120 Signed-off-by: Viraj Jasani Signed-off-by: Nick Dimiduk
[GitHub] [hbase] Apache-HBase commented on pull request #2118: HBASE-24758 Avoid flooding replication source RSes logs when no sinks…
Apache-HBase commented on pull request #2118: URL: https://github.com/apache/hbase/pull/2118#issuecomment-662634420 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 28s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 43s | master passed | | +1 :green_heart: | compile | 1m 9s | master passed | | +1 :green_heart: | shadedjars | 6m 19s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 44s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 29s | the patch passed | | +1 :green_heart: | compile | 1m 9s | the patch passed | | +1 :green_heart: | javac | 1m 9s | the patch passed | | +1 :green_heart: | shadedjars | 6m 23s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 41s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 190m 15s | hbase-server in the patch passed. 
| | | | 218m 10s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2118/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2118 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux dc199c8b1c94 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8191fbdd7d | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2118/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2118/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2118/1/testReport/ | | Max. process+thread count | 3390 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2118/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] ndimiduk opened a new pull request #2119: Backport "HBASE-24696 Include JVM information on Web UI under "Software Attributes"" to branch-2.2
ndimiduk opened a new pull request #2119: URL: https://github.com/apache/hbase/pull/2119 Signed-off-by: Viraj Jasani Signed-off-by: Nick Dimiduk
[jira] [Reopened] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"
[ https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk reopened HBASE-24696: -- Oops. We missed 2.2.x. Reopening. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"
[ https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163016#comment-17163016 ] Nick Dimiduk commented on HBASE-24696: -- [~liuml07], [~vjasani] thank you both! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] taklwu edited a comment on pull request #2113: HBASE-24286: HMaster won't become healthy after cloning or crea…
taklwu edited a comment on pull request #2113: URL: https://github.com/apache/hbase/pull/2113#issuecomment-662590771 Thanks Josh, and honestly I didn't know the logic till now. And here is the finding for both situations you're concerning: first case 1. hbase:meta has assigned regions to a set of RegionServers rs1 2. All hosts of rs1 are shutdown and destroyed (i.e. meta still contains references to them) 3. A new set of RegionServers are created, rs2, which have completely unique hostnames to rs1 4. All MasterProcWALs from the cluster with rs1 are lost. second case 1. I have a healthy cluster (1 master, many RS) 2. I stop the master 3. I kill one RS 3a. I do not restart that RS 4. I restart the master There is three Key parts in the normal system to handle `region server has been deleted`, MasterProcWALs/MasterRegion for `DEAD` server being tracked by SCP, Region servers name exists in WAL for `possibly live` servers. If MasterProcWALs/MasterRegion both exist after a cluster restarts, when `RegionServerTracker` starts, `RegionServerTracker` figures out all online servers, and if we don't have Znode (with same hostname when restart?) for `possibly live` servers, marked they are dead and scheduled SCP for it as well as continue the SCP for already dead servers. That would be normal cases. ``` 2020-07-22 09:55:24,729 INFO [master/localhost:0:becomeActiveMaster] master.RegionServerTracker(123): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 3 possibly 'live' servers, and 0 'splitting'. 
2020-07-22 09:55:24,730 DEBUG [master/localhost:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(183): Node /hbase/draining/localhost,55572,1595436917066 already deleted, retry=false 2020-07-22 09:55:24,730 INFO [master/localhost:0:becomeActiveMaster] master.ServerManager(585): Processing expiration of localhost,55572,1595436917066 on localhost,55667,1595436924374 2020-07-22 09:55:24,755 DEBUG [master/localhost:0:becomeActiveMaster] procedure2.ProcedureExecutor(1050): Stored pid=12, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure localhost,55572,1595436917066, splitWal=true, meta=true ``` Then in the case of deleting MasterProcWALs (or MasterRegion in branch-2.3+) and kept the ZK nodes, even there is no procedure MasterProcWALs restored from, as long as we have the WAL from for previous host, we can still schedule SCP for it. but if MasterProcWALs and WAL are deleted, neither of the first and second cases will not operating normally. The case we were originally trying to solve that is falling into the situation of MasterProcWALs and WAL are deleted after cluster restarted, we don't have the WAL, MasterProcWALs/MasterRegion and Zookeeper but HFiles, then those servers are under unknown and regions cannot be reassigned. About the unit tests failure, NowI'm hitting a strange issue, my tests works fine if I delete WAL, MasterProcWALs, and ZK baseZNode in branch-2.2. However, with the same setup in branch-2.3+ and master will hangs the master initialization if the ZK baseZNode is deleted with or without my changes. (what has been changed in branch-2.3? I found MasterRegion but not sure why that's related to ZK data, is it a bug? ) Interestingly, my fix works if keep the baseZnode, so, I'm trying to figure out a right way to cleanup zookeeper such it matched the one of the cloud use cases that WAL on HDFS and ZK are also deleted when HBase cluster terminated. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
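The restart-time recovery described above can be sketched as follows. This is a hypothetical, simplified model (the class and method names are illustrative, not the actual HMaster/RegionServerTracker internals): the master compares the servers that left WAL directories behind against the servers currently registered in ZooKeeper, and anything in the first set but not the second needs a ServerCrashProcedure.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the recovery check discussed above: servers that
// left WAL directories behind but have no live znode in ZooKeeper are the
// ones needing a ServerCrashProcedure (SCP). Names are illustrative only.
public class UnknownServerCheck {

  /** Servers with leftover WAL directories that are not currently online. */
  static List<String> serversNeedingScp(Set<String> walDirServers, Set<String> onlineServers) {
    List<String> dead = new ArrayList<>();
    for (String server : walDirServers) {
      if (!onlineServers.contains(server)) {
        dead.add(server); // in the real master, an SCP would be scheduled here
      }
    }
    Collections.sort(dead); // deterministic order for readability
    return dead;
  }

  public static void main(String[] args) {
    Set<String> walDirs = Set.of("rs1,16020,100", "rs2,16020,101");
    Set<String> online = Set.of("rs2,16020,101");
    // rs1 left a WAL directory but has no znode: it needs an SCP.
    System.out.println(serversNeedingScp(walDirs, online)); // [rs1,16020,100]
  }
}
```

The failure mode the comment describes is when the WAL directories themselves are gone: the first set is empty, nothing gets scheduled, and the regions of the vanished servers stay unassigned.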
[GitHub] [hbase] joshelser commented on a change in pull request #2094: HBASE-24680 Refactor the checkAndMutate code on the server side
joshelser commented on a change in pull request #2094: URL: https://github.com/apache/hbase/pull/2094#discussion_r458952500 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java ## @@ -759,12 +810,150 @@ default boolean postCheckAndDelete(ObserverContext * @param delete delete to commit if check succeeds * @param result from the CheckAndDelete * @return the possibly transformed returned value to return to client + * + * @deprecated since 3.0.0 and will be removed in 4.0.0. Use + * {@link #postCheckAndMutate(ObserverContext, CheckAndMutate, CheckAndMutateResult)} instead. */ + @Deprecated default boolean postCheckAndDelete(ObserverContext c, byte[] row, Filter filter, Delete delete, boolean result) throws IOException { return result; } + /** + * Called before checkAndMutate + * + * Call CoprocessorEnvironment#bypass to skip default actions. + * If 'bypass' is set, we skip out on calling any subsequent chained coprocessors. + * + * Note: Do not retain references to any Cells in actions beyond the life of this invocation. + * If need a Cell reference for later use, copy the cell and use that. + * @param c the environment provided by the region server + * @param checkAndMutate the CheckAndMutate object + * @param result the default value of the result + * @return the return value to return to client if bypassing default processing + * @throws IOException if an error occurred on the coprocessor + */ + default CheckAndMutateResult preCheckAndMutate(ObserverContext c, +CheckAndMutate checkAndMutate, CheckAndMutateResult result) throws IOException { +if (checkAndMutate.getAction() instanceof Put) { Review comment: Every time I see this, I want to suggest you use a [visitor pattern](https://en.wikipedia.org/wiki/Visitor_pattern) to reduce the boilerplate, but that would require putting more logic on Put/Delete, which is not worth it. 
:shrug: ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java ## @@ -3543,4 +3546,72 @@ public static RSGroupInfo toGroupInfo(RSGroupProtos.RSGroupInfo proto) { return RSGroupProtos.RSGroupInfo.newBuilder().setName(pojo.getName()).addAllServers(hostports) .addAllTables(tables).addAllConfiguration(configuration).build(); } + + public static CheckAndMutate toCheckAndMutate(ClientProtos.Condition condition, +MutationProto mutation, CellScanner cellScanner) throws IOException { +byte[] row = condition.getRow().toByteArray(); +CheckAndMutate.Builder builder = CheckAndMutate.newBuilder(row); +Filter filter = condition.hasFilter() ? ProtobufUtil.toFilter(condition.getFilter()) : null; +if (filter != null) { + builder.ifMatches(filter); +} else { + builder.ifMatches(condition.getFamily().toByteArray(), +condition.getQualifier().toByteArray(), +CompareOperator.valueOf(condition.getCompareType().name()), +ProtobufUtil.toComparator(condition.getComparator()).getValue()); +} +TimeRange timeRange = condition.hasTimeRange() ? + ProtobufUtil.toTimeRange(condition.getTimeRange()) : TimeRange.allTime(); +builder.timeRange(timeRange); + +try { + MutationType type = mutation.getMutateType(); + switch (type) { +case PUT: + return builder.build(ProtobufUtil.toPut(mutation, cellScanner)); +case DELETE: + return builder.build(ProtobufUtil.toDelete(mutation, cellScanner)); +default: + throw new DoNotRetryIOException("Unsupported mutate type: " + type.name()); + } +} catch (IllegalArgumentException e) { + throw new DoNotRetryIOException(e.getMessage()); +} + } + + public static CheckAndMutate toCheckAndMutate(ClientProtos.Condition condition, +List mutations) throws IOException { +assert mutations.size() > 0; +byte[] row = condition.getRow().toByteArray(); +CheckAndMutate.Builder builder = CheckAndMutate.newBuilder(row); +Filter filter = condition.hasFilter() ? 
ProtobufUtil.toFilter(condition.getFilter()) : null; +if (filter != null) { + builder.ifMatches(filter); +} else { + builder.ifMatches(condition.getFamily().toByteArray(), +condition.getQualifier().toByteArray(), +CompareOperator.valueOf(condition.getCompareType().name()), +ProtobufUtil.toComparator(condition.getComparator()).getValue()); +} +TimeRange timeRange = condition.hasTimeRange() ? + ProtobufUtil.toTimeRange(condition.getTimeRange()) : TimeRange.allTime(); +builder.timeRange(timeRange); + +try { + if (mutations.size() == 1) { +if (mutations.get(0) instanceof Put) { Review comment: nit: `Mutation m = mutations.get(0)` and then use `m` for the rest of this block. ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/regio
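As an aside, the visitor pattern joshelser mentions would replace the `instanceof` chain with double dispatch. A minimal sketch with stand-in `Put`/`Delete` types (the real HBase `Mutation` classes do not expose an `accept()` method, which is exactly the extra logic he judges not worth adding):

```java
// Stand-in types, NOT the real org.apache.hadoop.hbase.client classes: each
// subtype routes itself to the matching visitor method (double dispatch),
// replacing chains like `if (action instanceof Put) ... else if (...)`.
interface MutationVisitor<R> {
  R visitPut(Put put);
  R visitDelete(Delete delete);
}

abstract class Mutation {
  abstract <R> R accept(MutationVisitor<R> visitor);
}

class Put extends Mutation {
  <R> R accept(MutationVisitor<R> v) { return v.visitPut(this); }
}

class Delete extends Mutation {
  <R> R accept(MutationVisitor<R> v) { return v.visitDelete(this); }
}

public class VisitorSketch {
  // One visitor carries the per-type logic instead of instanceof checks.
  static String describe(Mutation m) {
    return m.accept(new MutationVisitor<String>() {
      public String visitPut(Put p) { return "pre-check for Put"; }
      public String visitDelete(Delete d) { return "pre-check for Delete"; }
    });
  }

  public static void main(String[] args) {
    System.out.println(describe(new Put()));    // pre-check for Put
    System.out.println(describe(new Delete())); // pre-check for Delete
  }
}
```

The trade-off named in the review is visible here: the dispatch boilerplate disappears from the caller, but only because `accept()` was added to every mutation type.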
[GitHub] [hbase] Apache-HBase commented on pull request #2055: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length(addendum)
Apache-HBase commented on pull request #2055: URL: https://github.com/apache/hbase/pull/2055#issuecomment-662606021 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 14s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 59s | master passed | | +1 :green_heart: | compile | 1m 16s | master passed | | +1 :green_heart: | shadedjars | 6m 8s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 52s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 43s | the patch passed | | +1 :green_heart: | compile | 1m 17s | the patch passed | | +1 :green_heart: | javac | 1m 17s | the patch passed | | +1 :green_heart: | shadedjars | 6m 4s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 51s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 41s | hbase-asyncfs in the patch passed. | | +1 :green_heart: | unit | 203m 35s | hbase-server in the patch passed. 
| | | | 233m 12s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2055 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 01330bad0ec0 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8191fbdd7d | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/3/testReport/ | | Max. process+thread count | 3642 (vs. ulimit of 12500) | | modules | C: hbase-asyncfs hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/3/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] liuml07 commented on pull request #2117: HBASE-24696 Include JVM information on Web UI under "Software Attributes"
liuml07 commented on pull request #2117: URL: https://github.com/apache/hbase/pull/2117#issuecomment-662591070 Thank you @virajjasani ! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2055: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length(addendum)
Apache-HBase commented on pull request #2055: URL: https://github.com/apache/hbase/pull/2055#issuecomment-662569286 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 13s | master passed | | +1 :green_heart: | compile | 1m 24s | master passed | | +1 :green_heart: | shadedjars | 5m 52s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 19s | hbase-asyncfs in master failed. | | -0 :warning: | javadoc | 0m 39s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 5s | the patch passed | | +1 :green_heart: | compile | 1m 24s | the patch passed | | +1 :green_heart: | javac | 1m 24s | the patch passed | | +1 :green_heart: | shadedjars | 5m 48s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 17s | hbase-asyncfs in the patch failed. | | -0 :warning: | javadoc | 0m 40s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 26s | hbase-asyncfs in the patch passed. | | +1 :green_heart: | unit | 129m 4s | hbase-server in the patch passed. 
| | | | 158m 41s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2055 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 1b3d18e6402b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8191fbdd7d | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-asyncfs.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-asyncfs.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/3/testReport/ | | Max. process+thread count | 4290 (vs. ulimit of 12500) | | modules | C: hbase-asyncfs hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/3/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-11288) Splittable Meta
[ https://issues.apache.org/jira/browse/HBASE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162937#comment-17162937 ] Michael Stack commented on HBASE-11288: --- {quote}I seem to have made a mistake and thought that we'd come to an agreement before moving forward so I've just been waiting. Looks like people are doing what they mentioned they wanted to do, so to move things forward on my end let me go ahead then and run ITBLL. 2-tiered assignment may not be the most contentious thing right now but it sounded like it was still important? So I plan to first run ITBLL on unchanged master to get a baseline and then run with the patch. Has anyone run ITBLL on master? Is there a particular commit that I should be using, etc? {quote} Understood. There was no agreement on a path forward, just a reset: a listing of what we want from this issue as suggested by [~apurtell]. [~zhangduo] repeated a suggestion he'd made a few times. There was no sign-off. I did the piece that had been suggested of me – the ROOT load distribution discussion -- in an effort at moving this project forward, since this was posited as a blocker (though I thought otherwise). The ITBLL testing, which you suggested you'd do to demonstrate that tiered assign is (perhaps) not a problem – the main objection to the ROOT-as-general-Region approach – would seem like a useful exercise, but it seems that whatever the result, it won't change Duo's mind (as he states above – correct me if I have this wrong), which seems like a problem. On ITBLL against master, I'm not sure it even passes. It works on branch-2 most of the time; there is an odd failure that needs looking into. I've not tried master. 
> Splittable Meta > --- > > Key: HBASE-11288 > URL: https://issues.apache.org/jira/browse/HBASE-11288 > Project: HBase > Issue Type: Umbrella > Components: meta >Reporter: Francis Christopher Liu >Assignee: Francis Christopher Liu >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (HBASE-24757) ReplicationSink should limit the batch size for batch mutations based on hbase.rpc.rows.warning.threshold
[ https://issues.apache.org/jira/browse/HBASE-24757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162919#comment-17162919 ] Viraj Jasani edited comment on HBASE-24757 at 7/22/20, 4:30 PM: HBASE-18027 is about keeping size limit for replication RPCs to 95% of the max request size derived from hbase.ipc.max.request.size. However, this size limit is for request payload in bytes at source side. The proposal of this Jira is to limit no of rows that is sent by sink as part of batch mutate call. We already have byte size limit, now we need limit for count of rows in batch. was (Author: vjasani): HBASE-18027 is about limiting size limit for replication RPCs to 95% of the max request size derived from hbase.ipc.max.request.size. However, this size limit is for request payload in bytes at source side. The proposal of this Jira is to limit no of rows that is sent by sink as part of batch mutate call. We already have byte size limit, now we need limit for count of rows in batch. > ReplicationSink should limit the batch size for batch mutations based on > hbase.rpc.rows.warning.threshold > - > > Key: HBASE-24757 > URL: https://issues.apache.org/jira/browse/HBASE-24757 > Project: HBase > Issue Type: Improvement >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > > At times there are quite a large no of WAL Edits to ship as part of > Replication and sometimes replication queues accumulate huge list of Edits to > process. ReplicationSink at the sink server usually goes through all Edits > and creates map of table -> list of rows grouped by clusterIds, and performs > batch mutation of all rows per table level. However, there is no limit to no > of Rows that are sent as part of batch mutate call. If no of rows > limit > threshold defined by hbase.rpc.rows.warning.threshold, we usually get warn > "Large batch operation detected". 
If hbase.rpc.rows.size.threshold.reject is > turned on, RS will reject the whole batch without processing. > We should let Replication Sink honour this threshold value and accordingly > keep the size lower per batch mutation call. > Replication triggered batch mutations should always be consumed but keeping > limit of mutation low enough will let the system function at the same pace > and without triggering warnings. This will also restrict exploitation of heap > and cpu cycles at the destination. -- This message was sent by Atlassian Jira (v8.3.4#803005)
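The row-count cap proposed in this Jira amounts to chunking the grouped rows before each batch-mutate call. A rough sketch under simplified assumptions (this is not the actual ReplicationSink code, and the threshold would come from hbase.rpc.rows.warning.threshold):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the proposal: instead of one batch-mutate call per
// table carrying an unbounded number of rows, split the rows into chunks no
// larger than the configured row threshold.
public class BatchLimiter {

  /** Splits rows into consecutive chunks of at most maxRowsPerBatch each. */
  static <T> List<List<T>> chunk(List<T> rows, int maxRowsPerBatch) {
    List<List<T>> batches = new ArrayList<>();
    for (int i = 0; i < rows.size(); i += maxRowsPerBatch) {
      batches.add(rows.subList(i, Math.min(i + maxRowsPerBatch, rows.size())));
    }
    return batches;
  }

  public static void main(String[] args) {
    List<Integer> rows = Arrays.asList(1, 2, 3, 4, 5);
    // With a threshold of 2 this yields [[1, 2], [3, 4], [5]]: three calls,
    // none of which would trip a "Large batch operation detected" warning.
    System.out.println(chunk(rows, 2));
  }
}
```

Each chunk stays under the warning threshold, so the sink never triggers the reject path even when hbase.rpc.rows.size.threshold.reject is enabled.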
[GitHub] [hbase] virajjasani commented on a change in pull request #2118: HBASE-24758 Avoid flooding replication source RSes logs when no sinks…
virajjasani commented on a change in pull request #2118: URL: https://github.com/apache/hbase/pull/2118#discussion_r458921944 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java ## @@ -127,6 +127,7 @@ private boolean dropOnDeletedTables; private boolean dropOnDeletedColumnFamilies; private boolean isSerial = false; + private long lastSinkFetchTime; Review comment: +1 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
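The `lastSinkFetchTime` field under review here implements a time-based log throttle. A minimal, self-contained sketch of that pattern (field and class names are illustrative; the PR tracks the interval as `maxRetriesMultiplier` seconds):

```java
// Sketch of the throttle discussed in this review: emit the warning at most
// once per interval, tracking the time of the last emission. The field starts
// at zero, so with real wall-clock times the very first call always logs --
// the JVM zero-initialization behavior joshelser points out below.
public class LogThrottle {
  private final long intervalMillis;
  private long lastLoggedMillis; // 0 initially: first call logs

  LogThrottle(long intervalMillis) {
    this.intervalMillis = intervalMillis;
  }

  /** Returns true (and re-arms the timer) if the interval has elapsed. */
  synchronized boolean shouldLog(long nowMillis) {
    if (nowMillis - lastLoggedMillis >= intervalMillis) {
      lastLoggedMillis = nowMillis;
      return true;
    }
    return false;
  }

  public static void main(String[] args) {
    LogThrottle t = new LogThrottle(1000);
    System.out.println(t.shouldLog(10_000)); // true  (first call logs)
    System.out.println(t.shouldLog(10_500)); // false (within the interval)
    System.out.println(t.shouldLog(11_000)); // true  (interval elapsed)
  }
}
```

Passing the clock value in explicitly (rather than calling System.currentTimeMillis() inside) is a small deviation from the PR that makes the behavior testable.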
[GitHub] [hbase] virajjasani commented on a change in pull request #2118: HBASE-24758 Avoid flooding replication source RSes logs when no sinks…
virajjasani commented on a change in pull request #2118: URL: https://github.com/apache/hbase/pull/2118#discussion_r458920644 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSinkManager.java ## @@ -150,9 +151,14 @@ public synchronized void reportSinkSuccess(SinkPeer sinkPeer) { */ public synchronized void chooseSinks() { List slaveAddresses = endpoint.getRegionServers(); Review comment: Further down in `setRegionServers(fetchSlavesAddresses(this.getZkw()))`, `fetchSlavesAddresses()` is not returning null, but an empty list. So maybe we can safely log a warning for `if(slaveAddresses.size()==0)` ? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
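The `chooseSinks()` logic being discussed shuffles the candidate region servers and keeps a `ceil(size * ratio)` prefix. A simplified stand-in showing that selection together with the empty-list guard proposed here (not the real ReplicationSinkManager internals):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Sketch of the guarded sink selection under discussion: shuffle the
// candidates, keep a ceil(size * ratio) prefix, and log instead of crashing
// when the list comes back empty (or null, for custom endpoints).
public class SinkChooser {

  static <T> List<T> chooseSinks(List<T> slaveAddresses, double ratio, Random rng) {
    if (slaveAddresses == null || slaveAddresses.isEmpty()) {
      System.err.println("No sinks available at peer");
      return Collections.emptyList();
    }
    List<T> shuffled = new ArrayList<>(slaveAddresses);
    Collections.shuffle(shuffled, rng); // copy first: shuffle mutates in place
    int numSinks = (int) Math.ceil(shuffled.size() * ratio);
    return shuffled.subList(0, numSinks);
  }

  public static void main(String[] args) {
    List<String> slaves = Arrays.asList("rs1", "rs2", "rs3", "rs4");
    // ratio 0.5 keeps ceil(4 * 0.5) = 2 of the 4 candidates.
    System.out.println(chooseSinks(slaves, 0.5, new Random(42)).size()); // 2
  }
}
```

Shuffling a defensive copy also avoids mutating the caller's list, a subtlety the original `Collections.shuffle(slaveAddresses, ...)` call glosses over.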
[GitHub] [hbase] joshelser commented on a change in pull request #2118: HBASE-24758 Avoid flooding replication source RSes logs when no sinks…
joshelser commented on a change in pull request #2118: URL: https://github.com/apache/hbase/pull/2118#discussion_r458916606 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java ## @@ -127,6 +127,7 @@ private boolean dropOnDeletedTables; private boolean dropOnDeletedColumnFamilies; private boolean isSerial = false; + private long lastSinkFetchTime; Review comment: JVM will initialize this to zero, which is of consequence to the log message (it will make sure that you get the `LOG.warn` the first time). Explicitly initialize it here with a comment so we know that? ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSinkManager.java ## @@ -150,9 +151,14 @@ public synchronized void reportSinkSuccess(SinkPeer sinkPeer) { */ public synchronized void chooseSinks() { List slaveAddresses = endpoint.getRegionServers(); -Collections.shuffle(slaveAddresses, ThreadLocalRandom.current()); -int numSinks = (int) Math.ceil(slaveAddresses.size() * ratio); -sinks = slaveAddresses.subList(0, numSinks); +if(slaveAddresses==null){ Review comment: Doesn't look like HBaseReplicationEndpoint ever returns null. Guarding against custom endpoint implementations? We should expose `getRegionServers` on a base class or interface and explicitly say that we expect a non-null answer. Follow-on: if easy, it would be good to write a quick unit test to cover this method. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java ## @@ -513,8 +514,14 @@ public boolean replicate(ReplicateContext replicateContext) { int numSinks = replicationSinkMgr.getNumSinks(); if (numSinks == 0) { - LOG.warn("{} No replication sinks found, returning without replicating. 
" -+ "The source should retry with the same set of edits.", logPeerId()); + if((System.currentTimeMillis() - lastSinkFetchTime) >= (maxRetriesMultiplier*1000)) { +LOG.warn( + "No replication sinks found, returning without replicating. " ++ "The source should retry with the same set of edits. Not logging this again for " ++ "the next " + maxRetriesMultiplier + " seconds."); Review comment: nit, parameterized logging ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java ## @@ -513,8 +514,14 @@ public boolean replicate(ReplicateContext replicateContext) { int numSinks = replicationSinkMgr.getNumSinks(); if (numSinks == 0) { - LOG.warn("{} No replication sinks found, returning without replicating. " -+ "The source should retry with the same set of edits.", logPeerId()); + if((System.currentTimeMillis() - lastSinkFetchTime) >= (maxRetriesMultiplier*1000)) { +LOG.warn( + "No replication sinks found, returning without replicating. " ++ "The source should retry with the same set of edits. Not logging this again for " ++ "the next " + maxRetriesMultiplier + " seconds."); +lastSinkFetchTime = System.currentTimeMillis(); + } + sleepForRetries("No sinks available at peer", sleepMultiplier); Review comment: Might it be helpful to include which peer (in the case that we have multiple) here? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24757) ReplicationSink should limit the batch size for batch mutations based on hbase.rpc.rows.warning.threshold
[ https://issues.apache.org/jira/browse/HBASE-24757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162919#comment-17162919 ] Viraj Jasani commented on HBASE-24757: -- HBASE-18027 is about limiting size limit for replication RPCs to 95% of the max request size derived from hbase.ipc.max.request.size. However, this size limit is for request payload in bytes at source side. The proposal of this Jira is to limit no of rows that is sent by sink as part of batch mutate call. We already have byte size limit, now we need limit for count of rows in batch. > ReplicationSink should limit the batch size for batch mutations based on > hbase.rpc.rows.warning.threshold > - > > Key: HBASE-24757 > URL: https://issues.apache.org/jira/browse/HBASE-24757 > Project: HBase > Issue Type: Improvement >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > > At times there are quite a large no of WAL Edits to ship as part of > Replication and sometimes replication queues accumulate huge list of Edits to > process. ReplicationSink at the sink server usually goes through all Edits > and creates map of table -> list of rows grouped by clusterIds, and performs > batch mutation of all rows per table level. However, there is no limit to no > of Rows that are sent as part of batch mutate call. If no of rows > limit > threshold defined by hbase.rpc.rows.warning.threshold, we usually get warn > "Large batch operation detected". If hbase.rpc.rows.size.threshold.reject is > turned on, RS will reject the whole batch without processing. > We should let Replication Sink honour this threshold value and accordingly > keep the size lower per batch mutation call. > Replication triggered batch mutations should always be consumed but keeping > limit of mutation low enough will let the system function at the same pace > and without triggering warnings. This will also restrict exploitation of heap > and cpu cycles at the destination. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
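The row-count cap proposed in HBASE-24757 amounts to partitioning an oversized batch into sub-batches that each stay at or under the threshold. This is a hypothetical illustration of that idea (the class name and signature are invented, not the eventual patch):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the HBASE-24757 proposal: cap the number of rows per
// batch-mutate call at a configured threshold by splitting larger batches.
public class BatchSplitter {
    /** Splits `rows` into consecutive sub-batches of at most `rowLimit` entries. */
    public static <T> List<List<T>> split(List<T> rows, int rowLimit) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += rowLimit) {
            // subList is a view; each sub-batch covers [i, i + rowLimit)
            batches.add(rows.subList(i, Math.min(i + rowLimit, rows.size())));
        }
        return batches;
    }
}
```

With a `rowLimit` derived from `hbase.rpc.rows.warning.threshold`, every sub-batch would stay below the warning (and rejection) threshold while all edits are still eventually applied.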
[GitHub] [hbase] Apache-HBase commented on pull request #2118: HBASE-24758 Avoid flooding replication source RSes logs when no sinks…
Apache-HBase commented on pull request #2118: URL: https://github.com/apache/hbase/pull/2118#issuecomment-662540504 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 36s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 55s | master passed | | +1 :green_heart: | checkstyle | 1m 7s | master passed | | +1 :green_heart: | spotbugs | 2m 3s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 30s | the patch passed | | +1 :green_heart: | checkstyle | 1m 2s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 53s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 2m 12s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 16s | The patch does not generate ASF License warnings. | | | | 33m 57s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2118/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2118 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux f67b86594775 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8191fbdd7d | | Max. process+thread count | 94 (vs. 
ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2118/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24757) ReplicationSink should limit the batch size for batch mutations based on hbase.rpc.rows.warning.threshold
[ https://issues.apache.org/jira/browse/HBASE-24757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162908#comment-17162908 ] Wellington Chevreuil commented on HBASE-24757: -- Wasn't this supposedly addressed by HBASE-18027? > ReplicationSink should limit the batch size for batch mutations based on > hbase.rpc.rows.warning.threshold > - > > Key: HBASE-24757 > URL: https://issues.apache.org/jira/browse/HBASE-24757 > Project: HBase > Issue Type: Improvement >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > > At times there are quite a large no of WAL Edits to ship as part of > Replication and sometimes replication queues accumulate huge list of Edits to > process. ReplicationSink at the sink server usually goes through all Edits > and creates map of table -> list of rows grouped by clusterIds, and performs > batch mutation of all rows per table level. However, there is no limit to no > of Rows that are sent as part of batch mutate call. If no of rows > limit > threshold defined by hbase.rpc.rows.warning.threshold, we usually get warn > "Large batch operation detected". If hbase.rpc.rows.size.threshold.reject is > turned on, RS will reject the whole batch without processing. > We should let Replication Sink honour this threshold value and accordingly > keep the size lower per batch mutation call. > Replication triggered batch mutations should always be consumed but keeping > limit of mutation low enough will let the system function at the same pace > and without triggering warnings. This will also restrict exploitation of heap > and cpu cycles at the destination. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] wchevreuil opened a new pull request #2118: HBASE-24758 Avoid flooding replication source RSes logs when no sinks…
wchevreuil opened a new pull request #2118: URL: https://github.com/apache/hbase/pull/2118 … are available This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-24758) Avoid flooding replication source RSes logs when no sinks are available
Wellington Chevreuil created HBASE-24758: Summary: Avoid flooding replication source RSes logs when no sinks are available Key: HBASE-24758 URL: https://issues.apache.org/jira/browse/HBASE-24758 Project: HBase Issue Type: Improvement Reporter: Wellington Chevreuil Assignee: Wellington Chevreuil On HBaseInterClusterReplicationEndpoint.replicate, if no sinks are returned by ReplicationSinkManager (say, the remote peer is not available), we log the message below and return false to the source shipper thread, which then keeps retrying, flooding the source RS log with the following messages: {noformat} WARN org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint: No replication sinks found, returning without replicating. The source should retry with the same set of edits. {noformat} This condition could also cause ReplicationSinkManager.chooseSinks to throw an NPE. -- This message was sent by Atlassian Jira (v8.3.4#803005)
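The NPE scenario described above can be guarded with a null/empty check that simply yields no sinks, leaving the caller to retry. This is an illustrative sketch mirroring the shape of `chooseSinks`, not the committed fix; the name `SinkChooser` and the generic element type are assumptions:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of guarding sink selection against a null server list
// (the NPE scenario in HBASE-24758). Names are illustrative, not the patch.
public class SinkChooser {
    /** Picks ceil(size * ratio) random sinks; empty result means "retry later". */
    static <T> List<T> chooseSinks(List<T> slaveAddresses, float ratio) {
        if (slaveAddresses == null || slaveAddresses.isEmpty()) {
            return Collections.emptyList(); // no sinks available; caller retries
        }
        // Shuffle a copy so the caller's list is left untouched.
        List<T> shuffled = new ArrayList<>(slaveAddresses);
        Collections.shuffle(shuffled);
        int numSinks = (int) Math.ceil(shuffled.size() * ratio);
        return shuffled.subList(0, numSinks);
    }
}
```

Returning an empty list instead of dereferencing null converts the NPE into the existing "no sinks, retry" path, which the log-flooding fix then rate-limits.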
[jira] [Assigned] (HBASE-24757) ReplicationSink should limit the batch size for batch mutations based on hbase.rpc.rows.warning.threshold
[ https://issues.apache.org/jira/browse/HBASE-24757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani reassigned HBASE-24757: Assignee: Viraj Jasani > ReplicationSink should limit the batch size for batch mutations based on > hbase.rpc.rows.warning.threshold > - > > Key: HBASE-24757 > URL: https://issues.apache.org/jira/browse/HBASE-24757 > Project: HBase > Issue Type: Improvement >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > > At times there are quite a large no of WAL Edits to ship as part of > Replication and sometimes replication queues accumulate huge list of Edits to > process. ReplicationSink at the sink server usually goes through all Edits > and creates map of table -> list of rows grouped by clusterIds, and performs > batch mutation of all rows per table level. However, there is no limit to no > of Rows that are sent as part of batch mutate call. If no of rows > limit > threshold defined by hbase.rpc.rows.warning.threshold, we usually get warn > "Large batch operation detected". If hbase.rpc.rows.size.threshold.reject is > turned on, RS will reject the whole batch without processing. > We should let Replication Sink honour this threshold value and accordingly > keep the size lower per batch mutation call. > Replication triggered batch mutations should always be consumed but keeping > limit of mutation low enough will let the system function at the same pace > and without triggering warnings. This will also restrict exploitation of heap > and cpu cycles at the destination. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24757) ReplicationSink should limit the batch size for batch mutations based on hbase.rpc.rows.warning.threshold
Viraj Jasani created HBASE-24757: Summary: ReplicationSink should limit the batch size for batch mutations based on hbase.rpc.rows.warning.threshold Key: HBASE-24757 URL: https://issues.apache.org/jira/browse/HBASE-24757 Project: HBase Issue Type: Improvement Reporter: Viraj Jasani At times there are quite a large no of WAL Edits to ship as part of Replication and sometimes replication queues accumulate huge list of Edits to process. ReplicationSink at the sink server usually goes through all Edits and creates map of table -> list of rows grouped by clusterIds, and performs batch mutation of all rows per table level. However, there is no limit to no of Rows that are sent as part of batch mutate call. If no of rows > limit threshold defined by hbase.rpc.rows.warning.threshold, we usually get warn "Large batch operation detected". If hbase.rpc.rows.size.threshold.reject is turned on, RS will reject the whole batch without processing. We should let Replication Sink honour this threshold value and accordingly keep the size lower per batch mutation call. Replication triggered batch mutations should always be consumed but keeping limit of mutation low enough will let the system function at the same pace and without triggering warnings. This will also restrict exploitation of heap and cpu cycles at the destination. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2055: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length(addendum)
Apache-HBase commented on pull request #2055: URL: https://github.com/apache/hbase/pull/2055#issuecomment-662502589 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 8s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 58s | master passed | | +1 :green_heart: | checkstyle | 1m 24s | master passed | | +1 :green_heart: | spotbugs | 2m 38s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 43s | the patch passed | | -0 :warning: | checkstyle | 1m 12s | hbase-server: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 12m 13s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 2m 56s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 22s | The patch does not generate ASF License warnings. 
| | | | 38m 16s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2055 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 6d0e4ebc0716 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8191fbdd7d | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 84 (vs. ulimit of 12500) | | modules | C: hbase-asyncfs hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/3/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2055: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length(addendum)
Apache-HBase commented on pull request #2055: URL: https://github.com/apache/hbase/pull/2055#issuecomment-662479415 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 28s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 5m 14s | master passed | | +1 :green_heart: | compile | 1m 45s | master passed | | +1 :green_heart: | shadedjars | 7m 26s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 20s | hbase-asyncfs in master failed. | | -0 :warning: | javadoc | 0m 48s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 5m 13s | the patch passed | | +1 :green_heart: | compile | 1m 45s | the patch passed | | +1 :green_heart: | javac | 1m 45s | the patch passed | | +1 :green_heart: | shadedjars | 7m 10s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 18s | hbase-asyncfs in the patch failed. | | -0 :warning: | javadoc | 0m 45s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 44s | hbase-asyncfs in the patch passed. | | -1 :x: | unit | 219m 2s | hbase-server in the patch failed. 
| | | | 255m 46s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2055 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 97014b589767 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8191fbdd7d | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-asyncfs.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-asyncfs.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/2/testReport/ | | Max. process+thread count | 2992 (vs. ulimit of 12500) | | modules | C: hbase-asyncfs hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2055: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length(addendum)
Apache-HBase commented on pull request #2055: URL: https://github.com/apache/hbase/pull/2055#issuecomment-662475702 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 40s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 12s | master passed | | +1 :green_heart: | compile | 1m 31s | master passed | | +1 :green_heart: | shadedjars | 6m 48s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 4s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 25s | the patch passed | | +1 :green_heart: | compile | 1m 27s | the patch passed | | +1 :green_heart: | javac | 1m 27s | the patch passed | | +1 :green_heart: | shadedjars | 6m 11s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 49s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 44s | hbase-asyncfs in the patch passed. | | +1 :green_heart: | unit | 216m 25s | hbase-server in the patch passed. 
| | | | 249m 12s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2055 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 2350e6e60a68 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8191fbdd7d | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/2/testReport/ | | Max. process+thread count | 3075 (vs. ulimit of 12500) | | modules | C: hbase-asyncfs hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2055/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24662) Update DumpClusterStatusAction to notice changes in region server count
[ https://issues.apache.org/jira/browse/HBASE-24662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162824#comment-17162824 ] Hudson commented on HBASE-24662: Results for branch master [build #1791 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Update DumpClusterStatusAction to notice changes in region server count > --- > > Key: HBASE-24662 > URL: https://issues.apache.org/jira/browse/HBASE-24662 > Project: HBase > Issue Type: Task > Components: integration tests >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0 > > > Sometimes running chaos monkey, I've found that we lose accounting of region > servers. I've taken to a manual process of checking the reported list against > a known reference. It occurs to me that ChaosMonkey has a known reference, > and it can do this accounting for me. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-22146) SpaceQuotaViolationPolicy Disable is not working in Namespace level
[ https://issues.apache.org/jira/browse/HBASE-22146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162825#comment-17162825 ] Hudson commented on HBASE-22146: Results for branch master [build #1791 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > SpaceQuotaViolationPolicy Disable is not working in Namespace level > --- > > Key: HBASE-22146 > URL: https://issues.apache.org/jira/browse/HBASE-22146 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Uma Maheswari >Assignee: Surbhi Kochhar >Priority: Major > Labels: Quota, space > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0, 2.2.7 > > > SpaceQuotaViolationPolicy Disable is not working in Namespace level > PFB the steps: > * Create Namespace and set Quota violation policy as Disable > * Create tables under namespace and violate Quota > Expected result: Tables to get disabled > Actual Result: Tables are not getting disabled > Note: mutation operation is not allowed on the table -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24555) Clear the description of hbase.hregion.max.filesize
[ https://issues.apache.org/jira/browse/HBASE-24555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162820#comment-17162820 ] Hudson commented on HBASE-24555: Results for branch master [build #1791 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Clear the description of hbase.hregion.max.filesize > --- > > Key: HBASE-24555 > URL: https://issues.apache.org/jira/browse/HBASE-24555 > Project: HBase > Issue Type: Improvement > Components: documentation >Affects Versions: 3.0.0-alpha-1 >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Minor > Fix For: 3.0.0-alpha-1 > > > After we improve the splitpolicy in HBASE-24664, seems it is better to clear > this option's meaning. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24718) Generic NamedQueue framework for recent in-memory history (refactor slowlog)
[ https://issues.apache.org/jira/browse/HBASE-24718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162823#comment-17162823 ] Hudson commented on HBASE-24718: Results for branch master [build #1791 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Generic NamedQueue framework for recent in-memory history (refactor slowlog) > > > Key: HBASE-24718 > URL: https://issues.apache.org/jira/browse/HBASE-24718 > Project: HBase > Issue Type: Sub-task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > Attachments: Screen Shot 2020-07-20 at 2.50.34 PM.png > > > As per the discussion on parent jira, we should come up with named queue > (online ring buffer) to serve recent history for multiple use-cases like > slowlog, balancer decision, other region activities e.g flush, compaction, > split, merge etc. 
> Since we already have slow/large rpc logs in ring buffer (HBASE-22978), as > part of this Jira, the proposal is to refactor slowlog provider to get > generic payload for ring buffer and based on event type (slow_log is the only > one for now), we can have separate internal in-memory queues. > After this refactor, it should be relatively simpler to use the same > framework and create more cases like parent Jira (balancer decision in ring > buffer). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24743) Reject to add a peer which replicate to itself earlier
[ https://issues.apache.org/jira/browse/HBASE-24743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162822#comment-17162822 ] Hudson commented on HBASE-24743: Results for branch master [build #1791 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Reject to add a peer which replicate to itself earlier > -- > > Key: HBASE-24743 > URL: https://issues.apache.org/jira/browse/HBASE-24743 > Project: HBase > Issue Type: Improvement >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > Now there are one check in ReplicationSource#initialize method > {code:java} > // In rare case, zookeeper setting may be messed up. 
That leads to the > incorrect > // peerClusterId value, which is the same as the source clusterId > if (clusterId.equals(peerClusterId) && > !replicationEndpoint.canReplicateToSameCluster()) { > this.terminate("ClusterId " + clusterId + " is replicating to itself: > peerClusterId " > + peerClusterId + " which is not allowed by ReplicationEndpoint:" > + replicationEndpoint.getClass().getName(), null, false); > this.manager.removeSource(this); > return; > } > {code} > This check should move to AddPeerProcedure's precheck. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"
[ https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162821#comment-17162821 ] Hudson commented on HBASE-24696: Results for branch master [build #1791 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/master/1791/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Include JVM information on Web UI under "Software Attributes" > - > > Key: HBASE-24696 > URL: https://issues.apache.org/jira/browse/HBASE-24696 > Project: HBase > Issue Type: Improvement > Components: UI >Reporter: Nick Dimiduk >Assignee: Mingliang Liu >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0 > > Attachments: Screen Shot 2020-07-17 at 10.55.56 PM.png > > > It's a small thing, but seems like an omission. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24718) Generic NamedQueue framework for recent in-memory history (refactor slowlog)
[ https://issues.apache.org/jira/browse/HBASE-24718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162803#comment-17162803 ] Hudson commented on HBASE-24718: Results for branch branch-2 [build #2756 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Generic NamedQueue framework for recent in-memory history (refactor slowlog) > > > Key: HBASE-24718 > URL: https://issues.apache.org/jira/browse/HBASE-24718 > Project: HBase > Issue Type: Sub-task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > Attachments: Screen Shot 2020-07-20 at 2.50.34 PM.png > > > As per the discussion on parent jira, we should come up with named queue > (online ring buffer) to serve recent history for multiple use-cases like > slowlog, balancer decision, other region activities e.g flush, compaction, > split, merge etc. 
> Since we already have slow/large rpc logs in ring buffer (HBASE-22978), as > part of this Jira, the proposal is to refactor slowlog provider to get > generic payload for ring buffer and based on event type (slow_log is the only > one for now), we can have separate internal in-memory queues. > After this refactor, it should be relatively simpler to use the same > framework and create more cases like parent Jira (balancer decision in ring > buffer). -- This message was sent by Atlassian Jira (v8.3.4#803005)
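The refactor described above, one generic framework with separate in-memory queues per event type, can be sketched as a bounded buffer keyed by event type. All names here (`NamedQueueSketch`, `Event`) are hypothetical; this is not the HBASE-24718 code, which builds on the existing ring-buffer machinery.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of a generic "named queue": each event type gets its
// own bounded in-memory buffer, so slow-log, balancer decisions, and other
// region activities could share one framework.
public class NamedQueueSketch {
  public enum Event { SLOW_LOG, BALANCER_DECISION }

  private final int capacity;
  private final Map<Event, Deque<String>> queues = new ConcurrentHashMap<>();

  public NamedQueueSketch(int capacity) {
    this.capacity = capacity;
  }

  /** Appends a payload; the oldest entry is evicted once capacity is hit. */
  public synchronized void add(Event type, String payload) {
    Deque<String> q = queues.computeIfAbsent(type, t -> new ArrayDeque<>());
    if (q.size() == capacity) {
      q.pollFirst(); // ring-buffer semantics: drop the oldest entry
    }
    q.addLast(payload);
  }

  /** Snapshot of the recent history for one event type, oldest first. */
  public synchronized List<String> recent(Event type) {
    return new ArrayList<>(queues.getOrDefault(type, new ArrayDeque<>()));
  }

  public static void main(String[] args) {
    NamedQueueSketch nq = new NamedQueueSketch(2);
    nq.add(Event.SLOW_LOG, "rpc-1");
    nq.add(Event.SLOW_LOG, "rpc-2");
    nq.add(Event.SLOW_LOG, "rpc-3"); // evicts rpc-1
    System.out.println(nq.recent(Event.SLOW_LOG)); // [rpc-2, rpc-3]
  }
}
```

Adding a new use case such as balancer decisions then only requires a new event type and payload, not a new buffering mechanism, which is the point of the proposed refactor.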
[jira] [Commented] (HBASE-22146) SpaceQuotaViolationPolicy Disable is not working in Namespace level
[ https://issues.apache.org/jira/browse/HBASE-22146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162802#comment-17162802 ] Hudson commented on HBASE-22146: Results for branch branch-2 [build #2756 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > SpaceQuotaViolationPolicy Disable is not working in Namespace level > --- > > Key: HBASE-22146 > URL: https://issues.apache.org/jira/browse/HBASE-22146 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Uma Maheswari >Assignee: Surbhi Kochhar >Priority: Major > Labels: Quota, space > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0, 2.2.7 > > > SpaceQuotaViolationPolicy Disable is not working in Namespace level > PFB the steps: > * Create Namespace and set Quota violation policy as Disable > * Create tables under namespace and violate Quota > Expected result: Tables to get disabled > Actual Result: Tables are not getting disabled > Note: mutation operation is not allowed on the table -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24662) Update DumpClusterStatusAction to notice changes in region server count
[ https://issues.apache.org/jira/browse/HBASE-24662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162801#comment-17162801 ] Hudson commented on HBASE-24662: Results for branch branch-2 [build #2756 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Update DumpClusterStatusAction to notice changes in region server count > --- > > Key: HBASE-24662 > URL: https://issues.apache.org/jira/browse/HBASE-24662 > Project: HBase > Issue Type: Task > Components: integration tests >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0 > > > Sometimes running chaos monkey, I've found that we lose accounting of region > servers. I've taken to a manual process of checking the reported list against > a known reference. It occurs to me that ChaosMonkey has a known reference, > and it can do this accounting for me. 
[jira] [Commented] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"
[ https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162793#comment-17162793 ] Hudson commented on HBASE-24696: Results for branch branch-2.3 [build #189 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Include JVM information on Web UI under "Software Attributes" > - > > Key: HBASE-24696 > URL: https://issues.apache.org/jira/browse/HBASE-24696 > Project: HBase > Issue Type: Improvement > Components: UI >Reporter: Nick Dimiduk >Assignee: Mingliang Liu >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0 > > Attachments: Screen Shot 2020-07-17 at 10.55.56 PM.png > > > It's a small thing, but seems like an omission. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24662) Update DumpClusterStatusAction to notice changes in region server count
[ https://issues.apache.org/jira/browse/HBASE-24662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162794#comment-17162794 ] Hudson commented on HBASE-24662: Results for branch branch-2.3 [build #189 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Update DumpClusterStatusAction to notice changes in region server count > --- > > Key: HBASE-24662 > URL: https://issues.apache.org/jira/browse/HBASE-24662 > Project: HBase > Issue Type: Task > Components: integration tests >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0 > > > Sometimes running chaos monkey, I've found that we lose accounting of region > servers. I've taken to a manual process of checking the reported list against > a known reference. It occurs to me that ChaosMonkey has a known reference, > and it can do this accounting for me. 
[jira] [Commented] (HBASE-24742) Improve performance of SKIP vs SEEK logic
[ https://issues.apache.org/jira/browse/HBASE-24742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162795#comment-17162795 ] Hudson commented on HBASE-24742: Results for branch branch-2.3 [build #189 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Improve performance of SKIP vs SEEK logic > - > > Key: HBASE-24742 > URL: https://issues.apache.org/jira/browse/HBASE-24742 > Project: HBase > Issue Type: Bug > Components: Performance, regionserver >Affects Versions: 3.0.0-alpha-1, 1.7.0, 2.4.0 >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0, 2.1.10, 2.2.6 > > Attachments: 24742-master.txt, hbase-1.6-regression-flame-graph.png, > hbase-24742-branch-1.txt > > > In our testing of HBase 1.3 against the current tip of branch-1 we saw a 30% > slowdown in scanning scenarios. > We tracked it back to HBASE-17958 and HBASE-19863. 
> Both add comparisons to one of the tightest loops HBase has. > [~bharathv] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-22146) SpaceQuotaViolationPolicy Disable is not working in Namespace level
[ https://issues.apache.org/jira/browse/HBASE-22146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162796#comment-17162796 ] Hudson commented on HBASE-22146: Results for branch branch-2.3 [build #189 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > SpaceQuotaViolationPolicy Disable is not working in Namespace level > --- > > Key: HBASE-22146 > URL: https://issues.apache.org/jira/browse/HBASE-22146 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Uma Maheswari >Assignee: Surbhi Kochhar >Priority: Major > Labels: Quota, space > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0, 2.2.7 > > > SpaceQuotaViolationPolicy Disable is not working in Namespace level > PFB the steps: > * Create Namespace and set Quota violation policy as Disable > * Create tables under namespace and violate Quota > Expected result: Tables to get disabled > Actual Result: Tables are not getting disabled > Note: mutation operation is not allowed on the table -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24710) Incorrect checksum calculation in saveVersion.sh
[ https://issues.apache.org/jira/browse/HBASE-24710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162792#comment-17162792 ] Hudson commented on HBASE-24710: Results for branch branch-2.3 [build #189 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/189/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Incorrect checksum calculation in saveVersion.sh > > > Key: HBASE-24710 > URL: https://issues.apache.org/jira/browse/HBASE-24710 > Project: HBase > Issue Type: Bug > Components: scripts >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0 > > > The saveVersion.sh file does not parse the srcChecksum correctly during the > releasing process when dev-support/create-release/hbase-rm/Dockerfile is > used. This results in missing Source Checksum. > Master UI displays this: > |HBase Source Checksum|(stdin)=| -- This message was sent by Atlassian Jira (v8.3.4#803005)
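One plausible reading of the `(stdin)=` symptom above: checksum tools print stdin digests in different shapes, e.g. GNU coreutils style `<hex>  -` versus `openssl dgst` style `(stdin)= <hex>`, so a parser written for one shape can capture the `(stdin)=` label and lose the digest. The sketch below tolerates both shapes; it is an assumption-laden illustration, not the actual saveVersion.sh fix.

```java
// Hedged sketch: extract the hex digest from checksum-tool output
// regardless of which of the two common shapes the tool emits.
// This is illustrative only, not the HBASE-24710 patch.
public class ChecksumParse {

  /** Returns the first hex token of length >= 32, or "" if none found. */
  public static String extractDigest(String toolOutput) {
    for (String token : toolOutput.trim().split("\\s+")) {
      if (token.matches("[0-9a-fA-F]{32,}")) {
        return token;
      }
    }
    return ""; // no digest found; matches the "missing checksum" symptom
  }

  public static void main(String[] args) {
    String coreutilsStyle = "9e107d9d372bb6826bd81d3542a419d6  -";
    String opensslStyle = "(stdin)= 9e107d9d372bb6826bd81d3542a419d6";
    // Both shapes yield the same bare digest.
    System.out.println(extractDigest(coreutilsStyle));
    System.out.println(extractDigest(opensslStyle));
  }
}
```

A parser that keys on token position instead of content would break on whichever shape it was not written for, which is consistent with the UI showing only `(stdin)=`.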