[GitHub] [hbase] Apache-HBase commented on pull request #2275: HBASE-24884 BulkLoadHFilesTool/LoadIncrementalHFiles should accept -D…
Apache-HBase commented on pull request #2275:
URL: https://github.com/apache/hbase/pull/2275#issuecomment-675861618

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
| :---: | ---: | :--- | :--- |
| +0 :ok: | reexec | 1m 9s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 41s | branch-2 passed |
| +1 :green_heart: | checkstyle | 1m 10s | branch-2 passed |
| +1 :green_heart: | spotbugs | 2m 0s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 12s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 5s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 11m 33s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 2m 8s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 16s | The patch does not generate ASF License warnings. |
| | | 33m 33s | |

| Subsystem | Report/Notes |
| ---: | :--- |
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2275/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2275 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
| uname | Linux dd32c50ff6ca 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 7335dbc834 |
| Max. process+thread count | 94 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2275/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2273: Backport "HBASE-24627 Normalize one table at a time" to branch-2
Apache-HBase commented on pull request #2273:
URL: https://github.com/apache/hbase/pull/2273#issuecomment-675854086

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
| :---: | ---: | :--- | :--- |
| +0 :ok: | reexec | 1m 22s | Docker mode activated. |
| -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 10s | branch-2 passed |
| +1 :green_heart: | compile | 3m 18s | branch-2 passed |
| +1 :green_heart: | shadedjars | 5m 41s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 59s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 38s | the patch passed |
| +1 :green_heart: | compile | 3m 6s | the patch passed |
| +1 :green_heart: | javac | 3m 6s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 46s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 2m 10s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 44s | hbase-protocol-shaded in the patch passed. |
| +1 :green_heart: | unit | 2m 37s | hbase-client in the patch passed. |
| +1 :green_heart: | unit | 202m 40s | hbase-server in the patch passed. |
| +1 :green_heart: | unit | 5m 24s | hbase-thrift in the patch passed. |
| +1 :green_heart: | unit | 9m 51s | hbase-shell in the patch passed. |
| | | 255m 32s | |

| Subsystem | Report/Notes |
| ---: | :--- |
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2273 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 7fb9d879 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 7335dbc834 |
| Default Java | 1.8.0_232 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/testReport/ |
| Max. process+thread count | 2339 (vs. ulimit of 12500) |
| modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift hbase-shell U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] taklwu commented on a change in pull request #2237: HBASE-24833: Bootstrap should not delete the META table directory if …
taklwu commented on a change in pull request #2237:
URL: https://github.com/apache/hbase/pull/2237#discussion_r472680071

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/InitMetaProcedure.java

```diff
@@ -167,4 +171,35 @@ protected void completionCleanup(MasterProcedureEnv env) {
   public void await() throws InterruptedException {
     latch.await();
   }
+
+  private static boolean deleteMetaTableDirectoryIfPartial(FileSystem rootDirectoryFs,
+      Path metaTableDir) throws IOException {
+    boolean isPartial = true;
+    try {
+      TableDescriptor metaDescriptor =
+        FSTableDescriptors.getTableDescriptorFromFs(rootDirectoryFs, metaTableDir);
+      // when entering the state of INIT_META_WRITE_FS_LAYOUT, if a meta table directory is found,
+      // the meta table should not have any useful data and is considered partial.
+      // if we find any valid HFiles, the operator should fix the meta, e.g. via HBCK.
+      if (metaDescriptor != null && metaDescriptor.getColumnFamilyCount() > 0) {
+        RemoteIterator<LocatedFileStatus> iterator = rootDirectoryFs.listFiles(metaTableDir, true);
+        while (iterator.hasNext()) {
+          LocatedFileStatus status = iterator.next();
+          if (StoreFileInfo.isHFile(status.getPath())
+              && HFile.isHFileFormat(rootDirectoryFs, status.getPath())) {
+            isPartial = false;
+            break;
+          }
+        }
+      }
+    } finally {
+      if (!isPartial) {
+        throw new IOException("Meta table is not partial, please sideline this meta directory "
+          + "or run HBCK to fix this meta table, e.g. rebuild the server hostname in ZNode for the "
+          + "meta region");
+      }
```

Review comment:
[updated] Yeah, I saw the `UnsupportedOperationException` and the timeout in the unit test logs, and the master didn't stop, so I added a section in HMaster to catch this exception and fail the master startup with another IOException. Did I do it wrong? Or any idea how I can fail the procedure right?

```java
// wait meta to be initialized after we start procedure executor
if (initMetaProc != null) {
  initMetaProc.await();
  if (initMetaProc.isFailed() && initMetaProc.hasException()) {
    throw new IOException("Failed to initialize meta table", initMetaProc.getException());
  }
}
```
[jira] [Updated] (HBASE-24884) BulkLoadHFilesTool/LoadIncrementalHFiles should accept -D options from command line parameters
[ https://issues.apache.org/jira/browse/HBASE-24884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Toshihiro Suzuki updated HBASE-24884:
-------------------------------------
    Fix Version/s: 3.0.0-alpha-1

> BulkLoadHFilesTool/LoadIncrementalHFiles should accept -D options from
> command line parameters
> ----------------------------------------------------------------------
>
>                 Key: HBASE-24884
>                 URL: https://issues.apache.org/jira/browse/HBASE-24884
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Toshihiro Suzuki
>            Assignee: Toshihiro Suzuki
>            Priority: Minor
>             Fix For: 3.0.0-alpha-1
>
> Currently, BulkLoadHFilesTool/LoadIncrementalHFiles doesn't accept -D options
> from command line parameters. It should support them.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
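For background on what this kind of fix usually involves: Hadoop-style tools get `-D key=value` support by running through `ToolRunner`, which strips the generic options into the tool's `Configuration` before `run(args)` is called. The stand-alone sketch below mimics just that stripping step in plain Java; the `DOptionParser` class is hypothetical and is not the actual BulkLoadHFilesTool/LoadIncrementalHFiles code.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for what GenericOptionsParser does with -D options:
// each "-Dkey=value" argument is moved into a configuration map, and the
// remaining arguments are handed to the tool unchanged.
public class DOptionParser {
  public static Map<String, String> parse(String[] args, List<String> remaining) {
    Map<String, String> conf = new LinkedHashMap<>();
    for (int i = 0; i < args.length; i++) {
      String a = args[i];
      if (a.startsWith("-D") && a.length() > 2) {
        // inline form: -Dkey=value
        String kv = a.substring(2);
        int eq = kv.indexOf('=');
        conf.put(kv.substring(0, eq), kv.substring(eq + 1));
      } else if (a.equals("-D") && i + 1 < args.length) {
        // split form: -D key=value
        String kv = args[++i];
        int eq = kv.indexOf('=');
        conf.put(kv.substring(0, eq), kv.substring(eq + 1));
      } else {
        remaining.add(a); // positional argument for the tool itself
      }
    }
    return conf;
  }

  public static void main(String[] args) {
    List<String> rest = new ArrayList<>();
    Map<String, String> conf = parse(
        new String[] {"-Dhbase.rootdir=/hbase", "/bulk/out", "mytable"}, rest);
    System.out.println(conf.get("hbase.rootdir")); // /hbase
    System.out.println(rest);                      // [/bulk/out, mytable]
  }
}
```

In the real tool the equivalent effect comes from invoking it via `ToolRunner.run(conf, tool, args)` rather than parsing `-D` by hand.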
[GitHub] [hbase] brfrn169 commented on pull request #2275: HBASE-24884 BulkLoadHFilesTool/LoadIncrementalHFiles should accept -D…
brfrn169 commented on pull request #2275:
URL: https://github.com/apache/hbase/pull/2275#issuecomment-675850891

Waiting for QA. I will commit this if the QA is okay.
[jira] [Updated] (HBASE-24902) CLUSTER quota
[ https://issues.apache.org/jira/browse/HBASE-24902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

xuqinya updated HBASE-24902:
----------------------------
    Summary: CLUSTER quota  (was: localRegionSize is incorrect in CLUSTER quota )

> CLUSTER quota
> -------------
>
>                 Key: HBASE-24902
>                 URL: https://issues.apache.org/jira/browse/HBASE-24902
>             Project: HBase
>          Issue Type: Bug
>            Reporter: xuqinya
>            Assignee: xuqinya
>            Priority: Major
[GitHub] [hbase] brfrn169 opened a new pull request #2275: HBASE-24884 BulkLoadHFilesTool/LoadIncrementalHFiles should accept -D…
brfrn169 opened a new pull request #2275:
URL: https://github.com/apache/hbase/pull/2275

… options from command line parameters

Signed-off-by: Peter Somogyi
[jira] [Updated] (HBASE-24902) localRegionSize is incorrect in CLUSTER quota
[ https://issues.apache.org/jira/browse/HBASE-24902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

xuqinya updated HBASE-24902:
----------------------------
    Description:     (was: In CLUSTER quota, [ClusterLimit / TotalTableRegionNum * MachineTableRegionNum] is used as the machine limit. So localRegionSize should be the number of regions on the local machine, not the number of regions in the table.)

> localRegionSize is incorrect in CLUSTER quota
> ---------------------------------------------
>
>                 Key: HBASE-24902
>                 URL: https://issues.apache.org/jira/browse/HBASE-24902
>             Project: HBase
>          Issue Type: Bug
>            Reporter: xuqinya
>            Assignee: xuqinya
>            Priority: Major
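The machine-limit formula quoted in the removed description can be made concrete with a small worked example. The class, method name, and numbers below are purely illustrative, not HBase's quota code:

```java
// Illustration of the formula from the issue:
//   machineLimit = clusterLimit / totalTableRegionNum * machineTableRegionNum
// The bug report's point: the last factor should count regions hosted on the
// local machine, not regions of the whole table (which would always yield the
// full cluster limit). All names and numbers here are hypothetical.
public class ClusterQuotaSketch {
  static double machineLimit(double clusterLimit, int totalTableRegions, int localRegions) {
    return clusterLimit / totalTableRegions * localRegions;
  }

  public static void main(String[] args) {
    // 100 req/s cluster limit, 10 regions in the table, 2 hosted locally:
    System.out.println(machineLimit(100.0, 10, 2));  // 20.0 req/s for this server
    // Using the table's region count instead of the local count is the bug:
    System.out.println(machineLimit(100.0, 10, 10)); // 100.0 -- no per-machine share
  }
}
```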
[GitHub] [hbase] Apache-HBase commented on pull request #2222: HBASE-23834 HBase fails to run on Hadoop 3.3.0/3.2.2/3.1.4 due to jet…
Apache-HBase commented on pull request #:
URL: https://github.com/apache/hbase/pull/#issuecomment-675849699

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
| :---: | ---: | :--- | :--- |
| +0 :ok: | reexec | 1m 5s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | shelldocs | 0m 0s | Shelldocs was not available. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 33s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 39s | master passed |
| +1 :green_heart: | checkstyle | 1m 59s | master passed |
| +1 :green_heart: | spotbugs | 13m 10s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 41s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 24s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 59s | root: The patch generated 0 new + 123 unchanged - 3 fixed = 123 total (was 126) |
| +1 :green_heart: | shellcheck | 0m 1s | There were no new shellcheck issues. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 13s | The patch has no ill-formed XML file. |
| +1 :green_heart: | hadoopcheck | 11m 38s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 13m 54s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 2m 30s | The patch does not generate ASF License warnings. |
| | | 64m 6s | |

| Subsystem | Report/Notes |
| ---: | :--- |
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-/6/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/ |
| Optional Tests | dupname asflicense hadoopcheck xml spotbugs hbaseanti checkstyle shellcheck shelldocs |
| uname | Linux 9c089db284a2 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 1231ac0784 |
| Max. process+thread count | 137 (vs. ulimit of 12500) |
| modules | C: hbase-resource-bundle hbase-http hbase-server hbase-thrift hbase-it hbase-rest hbase-shaded hbase-shaded/hbase-shaded-client hbase-shaded/hbase-shaded-testing-util hbase-shaded/hbase-shaded-check-invariants hbase-shaded/hbase-shaded-with-hadoop-check-invariants . U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-/6/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) shellcheck=0.4.6 spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[jira] [Updated] (HBASE-24903) 'scandetail' log message is missing when responseTooSlow happens in the rpc that closes the scanner
[ https://issues.apache.org/jira/browse/HBASE-24903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Javier Akira Luca de Tena updated HBASE-24903:
----------------------------------------------
    Affects Version/s: 1.7.0

> 'scandetail' log message is missing when responseTooSlow happens in the rpc
> that closes the scanner
> ---------------------------------------------------------------------------
>
>                 Key: HBASE-24903
>                 URL: https://issues.apache.org/jira/browse/HBASE-24903
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 1.7.0, 1.4.13
>            Reporter: Javier Akira Luca de Tena
>            Priority: Minor
>
> The 'scandetail' log message is missing when responseTooSlow happens in the rpc
> that closes the scanner: RSRpcServices.closeScanner is called before the slowLog
> is written in RpcServer.logResponse. Since closeScanner removes the scanner entry
> from the scanners map, logResponse can't find the scan details when calling
> RSRpcServices.getScanDetailsWithId.
>
> I have reproduced it by exhausting the region (no more results in the region),
> which sets moreResultsInRegion = false and causes closeScanner to run in the
> same rpc:
> https://github.com/apache/hbase/blob/c2e0cf989e4a86169219161d4d889db80288e636/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java#L3235-L3237
> At least, this seems to be the behavior in branch-1.
>
> I think this bug was introduced by https://issues.apache.org/jira/browse/HBASE-17489.
> Note that this is a completely different case from
> https://issues.apache.org/jira/browse/HBASE-24282.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
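The ordering problem the issue describes reduces to a generic pattern: once the scanner entry is removed from the map, a later lookup of its details returns nothing. The sketch below models that with a plain `HashMap`; it is illustrative only and is not the RSRpcServices implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of the reported bug: per-scanner details, keyed by scanner id,
// are deleted by closeScanner() before logResponse() tries to read them.
public class ScannerLogOrdering {
  static Map<Long, String> scanners = new HashMap<>();

  static void closeScanner(long id) {
    scanners.remove(id); // entry is gone after this point
  }

  static String getScanDetails(long id) {
    return scanners.get(id); // null once the scanner is closed
  }

  public static void main(String[] args) {
    scanners.put(42L, "table=t1, startRow=a");

    // Buggy order (what the issue describes): close first, then try to log.
    closeScanner(42L);
    System.out.println(getScanDetails(42L)); // null -> 'scandetail' missing

    // Fixed order: capture the details before closing the scanner.
    scanners.put(42L, "table=t1, startRow=a");
    String details = getScanDetails(42L);
    closeScanner(42L);
    System.out.println(details); // details survive for the slow-response log
  }
}
```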
[GitHub] [hbase] Apache-HBase commented on pull request #2249: HBASE-24871 Replication may loss data when refresh recovered replicat…
Apache-HBase commented on pull request #2249:
URL: https://github.com/apache/hbase/pull/2249#issuecomment-675848054

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
| :---: | ---: | :--- | :--- |
| +0 :ok: | reexec | 1m 41s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 17s | master passed |
| +1 :green_heart: | checkstyle | 1m 16s | master passed |
| +1 :green_heart: | spotbugs | 2m 36s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 35s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 23s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 14m 6s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 2m 49s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 15s | The patch does not generate ASF License warnings. |
| | | 41m 55s | |

| Subsystem | Report/Notes |
| ---: | :--- |
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2249/2/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2249 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
| uname | Linux edf6d42d7b0f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 1231ac0784 |
| Max. process+thread count | 84 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2249/2/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] brfrn169 merged pull request #2260: HBASE-24884 BulkLoadHFilesTool/LoadIncrementalHFiles should accept -D…
brfrn169 merged pull request #2260:
URL: https://github.com/apache/hbase/pull/2260
[GitHub] [hbase] Apache-HBase commented on pull request #2223: HBASE-24837 Ignore... check what fails when zk-based WAL splitter ena…
Apache-HBase commented on pull request #2223:
URL: https://github.com/apache/hbase/pull/2223#issuecomment-675844225

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
| :---: | ---: | :--- | :--- |
| +0 :ok: | reexec | 0m 40s | Docker mode activated. |
| -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 5m 4s | branch-2 passed |
| +1 :green_heart: | compile | 1m 53s | branch-2 passed |
| +1 :green_heart: | shadedjars | 7m 44s | branch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 22s | hbase-common in branch-2 failed. |
| -0 :warning: | javadoc | 0m 49s | hbase-server in branch-2 failed. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 5m 13s | the patch passed |
| +1 :green_heart: | compile | 2m 6s | the patch passed |
| +1 :green_heart: | javac | 2m 6s | the patch passed |
| +1 :green_heart: | shadedjars | 7m 31s | patch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 17s | hbase-common in the patch failed. |
| -0 :warning: | javadoc | 0m 40s | hbase-server in the patch failed. |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 44s | hbase-common in the patch passed. |
| +1 :green_heart: | unit | 135m 39s | hbase-server in the patch passed. |
| | | 172m 55s | |

| Subsystem | Report/Notes |
| ---: | :--- |
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2223/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2223 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 7d8a4ea3661d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 7335dbc834 |
| Default Java | 2020-01-14 |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2223/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2223/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2223/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2223/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2223/2/testReport/ |
| Max. process+thread count | 4326 (vs. ulimit of 12500) |
| modules | C: hbase-common hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2223/2/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[jira] [Created] (HBASE-24903) 'scandetail' log message is missing when responseTooSlow happens in the rpc that closes the scanner
Javier Akira Luca de Tena created HBASE-24903:
-------------------------------------------------

             Summary: 'scandetail' log message is missing when responseTooSlow happens in the rpc that closes the scanner
                 Key: HBASE-24903
                 URL: https://issues.apache.org/jira/browse/HBASE-24903
             Project: HBase
          Issue Type: Bug
    Affects Versions: 1.4.13
            Reporter: Javier Akira Luca de Tena

The 'scandetail' log message is missing when responseTooSlow happens in the rpc that closes the scanner: RSRpcServices.closeScanner is called before the slowLog is written in RpcServer.logResponse. Since closeScanner removes the scanner entry from the scanners map, logResponse can't find the scan details when calling RSRpcServices.getScanDetailsWithId.

I have reproduced it by exhausting the region (no more results in the region), which sets moreResultsInRegion = false and causes closeScanner to run in the same rpc:
https://github.com/apache/hbase/blob/c2e0cf989e4a86169219161d4d889db80288e636/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java#L3235-L3237
At least, this seems to be the behavior in branch-1.

I think this bug was introduced by https://issues.apache.org/jira/browse/HBASE-17489. Note that this is a completely different case from https://issues.apache.org/jira/browse/HBASE-24282.
[jira] [Commented] (HBASE-24898) Can not set all hours as offpeak hour now
[ https://issues.apache.org/jira/browse/HBASE-24898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180251#comment-17180251 ] Zheng Wang commented on HBASE-24898: Thanks for the suggestion, will dig more this afternoon. > Can not set all hours as offpeak hour now > - > > Key: HBASE-24898 > URL: https://issues.apache.org/jira/browse/HBASE-24898 > Project: HBase > Issue Type: Bug > Components: tooling >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > > The valid number in OffPeakHours is 0-23, and the end hour not included, so > we can not set all hours as offpeak hour now. > It is not useful for users in general, but useful for unit test, eg: > TestStochasticLoadBalancer.testMoveCostMultiplier, in this case, the > multiplier of move cost should be a lower value in offpeak, and we expect it > always as offpeak hour no matter when it runs. > My proposal is just change the valid number from 0-23 to 0-24, then we can > easily apply this pr to all active branchs, and folks do not need to change > them configuration. -- This message was sent by Atlassian Jira (v8.3.4#803005)
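A small sketch may make the report concrete (this is not HBase's actual OffPeakHours code; the shape of the range check is assumed from the description above): with an exclusive end hour capped at 23, hour 23 can never be off-peak, so the whole day cannot be covered, while accepting 24 as an end value fixes that.

```java
// Illustrative range check showing why an exclusive end hour capped at 23
// cannot mark every hour as off-peak, while allowing 24 as the end can.
public class OffPeakSketch {
    static boolean isOffPeak(int hour, int start, int end) {
        if (start <= end) {
            return hour >= start && hour < end;  // end hour is exclusive
        }
        return hour >= start || hour < end;      // window wraps past midnight
    }

    public static void main(String[] args) {
        System.out.println(isOffPeak(23, 0, 23)); // false: hour 23 is excluded
        System.out.println(isOffPeak(23, 0, 24)); // true: 0-24 covers the day
    }
}
```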
[GitHub] [hbase] Apache-HBase commented on pull request #2010: HBASE-24391 Implement meta split
Apache-HBase commented on pull request #2010: URL: https://github.com/apache/hbase/pull/2010#issuecomment-675842357 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 41s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ HBASE-11288.splittable-meta Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 32s | HBASE-11288.splittable-meta passed | | +1 :green_heart: | checkstyle | 2m 17s | HBASE-11288.splittable-meta passed | | +1 :green_heart: | spotbugs | 4m 36s | HBASE-11288.splittable-meta passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 10s | the patch passed | | -0 :warning: | checkstyle | 1m 13s | hbase-server: The patch generated 3 new + 240 unchanged - 3 fixed = 243 total (was 243) | | -0 :warning: | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | hadoopcheck | 14m 43s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 4m 3s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 37s | The patch does not generate ASF License warnings. 
| | | | 46m 57s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2010/8/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2010 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 1392667ea604 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-11288.splittable-meta / 52ba06a707 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2010/8/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | whitespace | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2010/8/artifact/yetus-general-check/output/whitespace-eol.txt | | Max. process+thread count | 94 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-balancer hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2010/8/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2261: HBASE-24528 : BalancerDecision queue implementation in HMaster with Admin API
Apache-HBase commented on pull request #2261: URL: https://github.com/apache/hbase/pull/2261#issuecomment-675841830 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 22s | master passed | | +1 :green_heart: | compile | 2m 57s | master passed | | +1 :green_heart: | shadedjars | 5m 59s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 15s | root in master failed. | | -0 :warning: | javadoc | 0m 24s | hbase-client in master failed. | | -0 :warning: | javadoc | 0m 18s | hbase-common in master failed. | | -0 :warning: | javadoc | 0m 19s | hbase-hadoop-compat in master failed. | | -0 :warning: | javadoc | 0m 38s | hbase-server in master failed. | | -0 :warning: | javadoc | 0m 50s | hbase-thrift in master failed. | | -0 :warning: | patch | 10m 16s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 4s | the patch passed | | +1 :green_heart: | compile | 2m 49s | the patch passed | | +1 :green_heart: | javac | 2m 49s | the patch passed | | +1 :green_heart: | shadedjars | 5m 53s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 17s | hbase-common in the patch failed. | | -0 :warning: | javadoc | 0m 18s | hbase-hadoop-compat in the patch failed. | | -0 :warning: | javadoc | 0m 24s | hbase-client in the patch failed. 
| | -0 :warning: | javadoc | 0m 39s | hbase-server in the patch failed. | | -0 :warning: | javadoc | 0m 53s | hbase-thrift in the patch failed. | | -0 :warning: | javadoc | 0m 15s | root in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 165m 20s | root in the patch passed. | | | | 203m 57s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2261 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 578ffe23e4c4 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 98e35842eb | | Default Java | 2020-01-14 | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-root.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-hadoop-compat.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-thrift.txt | | javadoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-hadoop-compat.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt | | javadoc |
[GitHub] [hbase] Apache-HBase commented on pull request #2223: HBASE-24837 Ignore... check what fails when zk-based WAL splitter ena…
Apache-HBase commented on pull request #2223: URL: https://github.com/apache/hbase/pull/2223#issuecomment-675841075 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 35s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 34s | branch-2 passed | | +1 :green_heart: | compile | 1m 18s | branch-2 passed | | +1 :green_heart: | shadedjars | 5m 5s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 57s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 12s | the patch passed | | +1 :green_heart: | compile | 1m 18s | the patch passed | | +1 :green_heart: | javac | 1m 18s | the patch passed | | +1 :green_heart: | shadedjars | 5m 5s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 57s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 20s | hbase-common in the patch passed. | | +1 :green_heart: | unit | 136m 3s | hbase-server in the patch passed. 
| | | | 162m 17s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2223/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2223 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux dc044948689a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 7335dbc834 | | Default Java | 1.8.0_232 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2223/2/testReport/ | | Max. process+thread count | 4051 (vs. ulimit of 12500) | | modules | C: hbase-common hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2223/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-24902) localRegionSize is incorrect in CLUSTER quota
[ https://issues.apache.org/jira/browse/HBASE-24902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xuqinya updated HBASE-24902: Description: In Cluster quota, use [ClusterLimit / TotalTableRegionNum * MachineTableRegionNum] as the machine limit. So, localRegionSize should be the number of regions in the local machine. It should not be the number of regions in the table. (was: In Cluster quota , use [ClusterLimit / TotalTableRegionNum * MachineTableRegionNum] as machine limit. So, localRegionSize should be the the number of region in the local machine. It should not be the number of regions in the table.) > localRegionSize is incorrect in CLUSTER quota > -- > > Key: HBASE-24902 > URL: https://issues.apache.org/jira/browse/HBASE-24902 > Project: HBase > Issue Type: Bug >Reporter: xuqinya >Assignee: xuqinya >Priority: Major > > In Cluster quota, use [ClusterLimit / TotalTableRegionNum * > MachineTableRegionNum] as the machine limit. So, localRegionSize should be > the number of regions in the local machine. It should not be the number of > regions in the table. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2274: HBASE-24896 'Stuck' in static initialization creating RegionInfo inst…
Apache-HBase commented on pull request #2274: URL: https://github.com/apache/hbase/pull/2274#issuecomment-675837123 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 33s | Docker mode activated. | | -0 :warning: | yetus | 0m 8s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.3 Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 39s | branch-2.3 passed | | +1 :green_heart: | compile | 1m 42s | branch-2.3 passed | | +1 :green_heart: | shadedjars | 5m 3s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 17s | branch-2.3 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 26s | the patch passed | | +1 :green_heart: | compile | 1m 40s | the patch passed | | +1 :green_heart: | javac | 1m 40s | the patch passed | | +1 :green_heart: | shadedjars | 5m 6s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 15s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 2m 16s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 0m 42s | hbase-zookeeper in the patch passed. | | +1 :green_heart: | unit | 153m 48s | hbase-server in the patch passed. 
| | | | 184m 51s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2274 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux ce005da07678 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 6202ee6b64 | | Default Java | 1.8.0_232 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/testReport/ | | Max. process+thread count | 3615 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-zookeeper hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2274: HBASE-24896 'Stuck' in static initialization creating RegionInfo inst…
Apache-HBase commented on pull request #2274: URL: https://github.com/apache/hbase/pull/2274#issuecomment-675835722 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 24s | Docker mode activated. | | -0 :warning: | yetus | 0m 8s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.3 Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 26s | branch-2.3 passed | | +1 :green_heart: | compile | 1m 58s | branch-2.3 passed | | +1 :green_heart: | shadedjars | 6m 5s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 31s | hbase-client in branch-2.3 failed. | | -0 :warning: | javadoc | 0m 41s | hbase-server in branch-2.3 failed. | | -0 :warning: | javadoc | 0m 16s | hbase-zookeeper in branch-2.3 failed. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 2s | the patch passed | | +1 :green_heart: | compile | 1m 54s | the patch passed | | +1 :green_heart: | javac | 1m 54s | the patch passed | | +1 :green_heart: | shadedjars | 6m 11s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 31s | hbase-client in the patch failed. | | -0 :warning: | javadoc | 0m 17s | hbase-zookeeper in the patch failed. | | -0 :warning: | javadoc | 0m 49s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 2m 39s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 0m 49s | hbase-zookeeper in the patch passed. | | +1 :green_heart: | unit | 142m 39s | hbase-server in the patch passed. 
| | | | 178m 27s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2274 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 75e9a9fe9270 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 6202ee6b64 | | Default Java | 2020-01-14 | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-zookeeper.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-zookeeper.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/testReport/ | | Max. process+thread count | 3908 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-zookeeper hbase-server U: . 
| | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-24902) localRegionSize is incorrect in CLUSTER quota
xuqinya created HBASE-24902: --- Summary: localRegionSize is incorrect in CLUSTER quota Key: HBASE-24902 URL: https://issues.apache.org/jira/browse/HBASE-24902 Project: HBase Issue Type: Bug Reporter: xuqinya Assignee: xuqinya In Cluster quota, use [ClusterLimit / TotalTableRegionNum * MachineTableRegionNum] as the machine limit. So, localRegionSize should be the number of regions in the local machine. It should not be the number of regions in the table. -- This message was sent by Atlassian Jira (v8.3.4#803005)
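A worked example with made-up numbers may make the quoted formula concrete; the method name below is illustrative, not HBase's quota API:

```java
// Worked example (made-up numbers) of the machine-limit formula from the
// report: clusterLimit / totalTableRegionNum * machineRegionNum.
// The point of the bug: the last factor must count the regions hosted on the
// local machine, not the regions of the whole table.
public class ClusterQuotaSketch {
    static double machineLimit(double clusterLimit, int totalTableRegions, int machineRegions) {
        return clusterLimit / totalTableRegions * machineRegions;
    }

    public static void main(String[] args) {
        // Cluster limit 100 MB/s, table split into 10 regions, 3 on this machine:
        System.out.println(machineLimit(100.0, 10, 3));  // 30.0
        // Using the table's own region count instead defeats the division:
        System.out.println(machineLimit(100.0, 10, 10)); // 100.0
    }
}
```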
[jira] [Resolved] (HBASE-24799) Do not call make_binary_release for hbase-thirdparty in release scripts
[ https://issues.apache.org/jira/browse/HBASE-24799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-24799. --- Fix Version/s: 3.0.0-alpha-1 Hadoop Flags: Reviewed Release Note: Skip make_binary_release call for hbase-thirdparty in release scripts as we only publish src tarballs for hbase-thirdparty. Resolution: Fixed Merged to master. Thanks [~psomogyi] for reviewing. > Do not call make_binary_release for hbase-thirdparty in release scripts > --- > > Key: HBASE-24799 > URL: https://issues.apache.org/jira/browse/HBASE-24799 > Project: HBase > Issue Type: Bug > Components: scripts >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0-alpha-1 > > > As we only public src tarballs for hbase-thirdparty. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache9 merged pull request #2177: HBASE-24799 Do not call make_binary_release for hbase-thirdparty in release scripts
Apache9 merged pull request #2177: URL: https://github.com/apache/hbase/pull/2177 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on a change in pull request #2237: HBASE-24833: Bootstrap should not delete the META table directory if …
Apache9 commented on a change in pull request #2237: URL: https://github.com/apache/hbase/pull/2237#discussion_r472640228 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/InitMetaProcedure.java ## @@ -167,4 +171,35 @@ protected void completionCleanup(MasterProcedureEnv env) { public void await() throws InterruptedException { latch.await(); } + + private static boolean deleteMetaTableDirectoryIfPartial(FileSystem rootDirectoryFs, +Path metaTableDir) throws IOException { +boolean isPartial = true; +try { + TableDescriptor metaDescriptor = +FSTableDescriptors.getTableDescriptorFromFs(rootDirectoryFs, metaTableDir); + // when entering the state of INIT_META_WRITE_FS_LAYOUT, if a meta table directory is found, + // the meta table should not have any useful data and considers as partial. + // if we find any valid HFiles, operator should fix the meta e.g. via HBCK. + if (metaDescriptor != null && metaDescriptor.getColumnFamilyCount() > 0) { +RemoteIterator iterator = rootDirectoryFs.listFiles(metaTableDir, true); +while (iterator.hasNext()) { + LocatedFileStatus status = iterator.next(); + if (StoreFileInfo.isHFile(status.getPath()) && HFile +.isHFileFormat(rootDirectoryFs, status.getPath())) { +isPartial = false; +break; + } +} + } +} finally { + if (!isPartial) { +throw new IOException("Meta table is not partial, please sideline this meta directory " + + "or run HBCK to fix this meta table, e.g. rebuild the server hostname in ZNode for the " + + "meta region"); + } Review comment: In general the approach is fine. The only concern is the implementation. I do not think InitMetaProcedure support rollback, so what will happen if we throw exception here? You will see a ERROR log in the output to say that the procedure does not support rollback? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on a change in pull request #2255: HBASE-24877 Add option to avoid aborting RS process upon uncaught exc…
Apache9 commented on a change in pull request #2255: URL: https://github.com/apache/hbase/pull/2255#discussion_r472636537 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceShipper.java ## @@ -290,7 +290,22 @@ private boolean updateLogPosition(WALEntryBatch batch) { public void startup(UncaughtExceptionHandler handler) { String name = Thread.currentThread().getName(); Threads.setDaemonThreadRunning(this, - name + ".replicationSource.shipper" + walGroupId + "," + source.getQueueId(), handler); + name + ".replicationSource.shipper" + walGroupId + "," + source.getQueueId(), + (t,e) -> { Review comment: OK, the code is almost the same... Then I think we could move the logic into the uncaughtException method? If abortOnError is true, we abort, otherwise we will try to refresh the source. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java ## @@ -373,7 +389,21 @@ private void tryStartNewShipper(String walGroupId, PriorityBlockingQueue q Threads.setDaemonThreadRunning( walReader, Thread.currentThread().getName() + ".replicationSource.wal-reader." + walGroupId + "," + queueId, -(t,e) -> { Review comment: So here it is for wal reader. I think refreshSources and retry is an acceptable way. Then let's just test the abortOnError flag here? If it is true, we will call uncaughtException, otherwise we will try to refresh the replication source.
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java ## @@ -583,16 +617,27 @@ private void initialize() { PriorityBlockingQueue queue = entry.getValue(); tryStartNewShipper(walGroupId, queue); } +this.startupOngoing.set(false); } @Override public void startup() { // mark we are running now this.sourceRunning = true; -initThread = new Thread(this::initialize); -Threads.setDaemonThreadRunning(initThread, - Thread.currentThread().getName() + ".replicationSource," + this.queueId, - this::uncaughtException); +this.retryStartup.set(true); Review comment: This flag is only used in this method? Let's use a local var instead of a class member field? ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java ## @@ -583,16 +617,27 @@ private void initialize() { PriorityBlockingQueue queue = entry.getValue(); tryStartNewShipper(walGroupId, queue); } +this.startupOngoing.set(false); } @Override public void startup() { // mark we are running now this.sourceRunning = true; -initThread = new Thread(this::initialize); -Threads.setDaemonThreadRunning(initThread, - Thread.currentThread().getName() + ".replicationSource," + this.queueId, - this::uncaughtException); +this.retryStartup.set(true); +do { + if(retryStartup.get()) { +retryStartup.set(false); +startupOngoing.set(true); Review comment: So this one is exactly the same with source.isActive? Can we just make use of that flag instead of introducing a new one? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
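The handler shape the reviewer suggests — a single UncaughtExceptionHandler that tests an abortOnError flag and either aborts or refreshes the replication source — could be sketched as follows. The flag and both callbacks are hypothetical stand-ins, not the actual HBASE-24877 patch:

```java
// Hypothetical sketch of the review suggestion: one handler that aborts when
// abortOnError is set and otherwise refreshes the replication source.
// The flag and the Runnable callbacks are stand-ins, not HBase's API.
public class ShipperHandlerSketch {
    static Thread.UncaughtExceptionHandler handler(boolean abortOnError,
            Runnable abort, Runnable refreshSource) {
        return (t, e) -> {
            if (abortOnError) {
                abort.run();          // previous behavior: abort the RS process
            } else {
                refreshSource.run();  // tolerant behavior: restart the source
            }
        };
    }

    public static void main(String[] args) {
        Thread.UncaughtExceptionHandler h = handler(false,
                () -> System.out.println("abort"),
                () -> System.out.println("refresh"));
        h.uncaughtException(Thread.currentThread(), new RuntimeException("boom"));
        // prints "refresh"
    }
}
```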
[jira] [Commented] (HBASE-24898) Can not set all hours as offpeak hour now
[ https://issues.apache.org/jira/browse/HBASE-24898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180233#comment-17180233 ] Duo Zhang commented on HBASE-24898: --- So let's use this issue to reimplement CurrentHourProvider? And then we can rewrite TestStochasticLoadBalancer.testMoveCostMultiplier to fix it. > Can not set all hours as offpeak hour now > - > > Key: HBASE-24898 > URL: https://issues.apache.org/jira/browse/HBASE-24898 > Project: HBase > Issue Type: Bug > Components: tooling >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > > The valid number in OffPeakHours is 0-23, and the end hour not included, so > we can not set all hours as offpeak hour now. > It is not useful for users in general, but useful for unit test, eg: > TestStochasticLoadBalancer.testMoveCostMultiplier, in this case, the > multiplier of move cost should be a lower value in offpeak, and we expect it > always as offpeak hour no matter when it runs. > My proposal is just change the valid number from 0-23 to 0-24, then we can > easily apply this pr to all active branchs, and folks do not need to change > them configuration. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24898) Can not set all hours as offpeak hour now
[ https://issues.apache.org/jira/browse/HBASE-24898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180232#comment-17180232 ] Duo Zhang commented on HBASE-24898: --- And maybe it is incorrect... {code}
public static int getCurrentHour() {
  Tick tick = CurrentHourProvider.tick;
  if (System.currentTimeMillis() < tick.expirationTimeInMillis) {
    return tick.currentHour;
  }
  CurrentHourProvider.tick = tick = nextTick();
  return tick.currentHour;
}
{code} Suppose you call this method every 2 hours, the returned hour will always be less than the correct value... > Can not set all hours as offpeak hour now > - > > Key: HBASE-24898 > URL: https://issues.apache.org/jira/browse/HBASE-24898 > Project: HBase > Issue Type: Bug > Components: tooling >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > > The valid number in OffPeakHours is 0-23, and the end hour not included, so > we can not set all hours as offpeak hour now. > It is not useful for users in general, but useful for unit test, eg: > TestStochasticLoadBalancer.testMoveCostMultiplier, in this case, the > multiplier of move cost should be a lower value in offpeak, and we expect it > always as offpeak hour no matter when it runs. > My proposal is just change the valid number from 0-23 to 0-24, then we can > easily apply this pr to all active branchs, and folks do not need to change > them configuration. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24898) Can not set all hours as offpeak hour now
[ https://issues.apache.org/jira/browse/HBASE-24898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180230#comment-17180230 ] Duo Zhang commented on HBASE-24898: --- I think we'd better reimplement CurrentHourProvider, to let it use EnvironmentEdge.currentTime instead of System.currentTimeMillis so we could control the hour we return. And the implementation itself is not thread safe? > Can not set all hours as offpeak hour now > - > > Key: HBASE-24898 > URL: https://issues.apache.org/jira/browse/HBASE-24898 > Project: HBase > Issue Type: Bug > Components: tooling >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > > The valid number in OffPeakHours is 0-23, and the end hour not included, so > we can not set all hours as offpeak hour now. > It is not useful for users in general, but useful for unit test, eg: > TestStochasticLoadBalancer.testMoveCostMultiplier, in this case, the > multiplier of move cost should be a lower value in offpeak, and we expect it > always as offpeak hour no matter when it runs. > My proposal is just change the valid number from 0-23 to 0-24, then we can > easily apply this pr to all active branchs, and folks do not need to change > them configuration. -- This message was sent by Atlassian Jira (v8.3.4#803005)
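The reimplementation proposed in this comment could be sketched along these lines: derive the hour from an injectable clock so tests can pin the time, with no cached tick to go stale or to race on. This is a sketch under assumptions, not the eventual HBase change — it uses a plain LongSupplier as a stand-in for EnvironmentEdge.currentTime, and UTC purely to keep the example deterministic:

```java
import java.util.Calendar;
import java.util.TimeZone;
import java.util.function.LongSupplier;

// Sketch of the proposed direction: compute the hour from an injectable clock
// (a stand-in for EnvironmentEdge.currentTime) so tests can control the
// returned hour. With no cached tick there is nothing to expire or to race.
// UTC is used here only so the sketch behaves the same on every machine.
public class CurrentHourSketch {
    private final LongSupplier clock;   // injectable time source, in millis

    CurrentHourSketch(LongSupplier clock) {
        this.clock = clock;
    }

    int getCurrentHour() {
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        cal.setTimeInMillis(clock.getAsLong());
        return cal.get(Calendar.HOUR_OF_DAY);
    }

    public static void main(String[] args) {
        // The injected clock pins the time to epoch + 3 hours:
        CurrentHourSketch provider = new CurrentHourSketch(() -> 3L * 3600 * 1000);
        System.out.println(provider.getCurrentHour()); // 3
    }
}
```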
[jira] [Commented] (HBASE-24898) Can not set all hours as offpeak hour now
[ https://issues.apache.org/jira/browse/HBASE-24898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180228#comment-17180228 ] Duo Zhang commented on HBASE-24898:
---
I do not think we should change our logic only because we need to pass a UT, unless the change is trivial and does not introduce new problems. Let me take a look at TestStochasticLoadBalancer.testMoveCostMultiplier to see if we can find another way to implement the test.

> Can not set all hours as offpeak hour now
> -----------------------------------------
>
> Key: HBASE-24898
> URL: https://issues.apache.org/jira/browse/HBASE-24898
> Project: HBase
> Issue Type: Bug
> Components: tooling
> Reporter: Zheng Wang
> Assignee: Zheng Wang
> Priority: Major
>
> The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set all hours as off-peak hours now.
> This is not generally useful for users, but it is useful for unit tests, e.g. TestStochasticLoadBalancer.testMoveCostMultiplier: in this case, the multiplier of the move cost should be a lower value during off-peak hours, and we expect it to always be an off-peak hour no matter when the test runs.
> My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] nyl3532016 commented on pull request #2260: HBASE-24884 BulkLoadHFilesTool/LoadIncrementalHFiles should accept -D…
nyl3532016 commented on pull request #2260: URL: https://github.com/apache/hbase/pull/2260#issuecomment-675820400 +1 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-24898) Can not set all hours as offpeak hour now
[ https://issues.apache.org/jira/browse/HBASE-24898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Wang updated HBASE-24898:
---
Description:
The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set all hours as off-peak hours now.
This is not generally useful for users, but it is useful for unit tests, e.g. TestStochasticLoadBalancer.testMoveCostMultiplier: in this case, the multiplier of the move cost should be a lower value during off-peak hours, and we expect it to always be an off-peak hour no matter when the test runs.
My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

was:
The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set all hours as off-peak hours now.
This is not generally useful for users, but it is useful for unit tests, e.g. TestStochasticLoadBalancer.testMoveCostMultiplier: in this case, the multiplier of the move cost should be a lower value during off-peak hours, and we expect it to always be an off-peak hour no matter when the test runs.
My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

> Can not set all hours as offpeak hour now
> -----------------------------------------
>
> Key: HBASE-24898
> URL: https://issues.apache.org/jira/browse/HBASE-24898
> Project: HBase
> Issue Type: Bug
> Components: tooling
> Reporter: Zheng Wang
> Assignee: Zheng Wang
> Priority: Major
>
> The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set all hours as off-peak hours now.
> This is not generally useful for users, but it is useful for unit tests, e.g. TestStochasticLoadBalancer.testMoveCostMultiplier: in this case, the multiplier of the move cost should be a lower value during off-peak hours, and we expect it to always be an off-peak hour no matter when the test runs.
> My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24898) Can not set all hours as offpeak hour now
[ https://issues.apache.org/jira/browse/HBASE-24898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Wang updated HBASE-24898:
---
Description:
The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set all hours as off-peak hours now.
This is not generally useful for users, but it is useful for unit tests, e.g. TestStochasticLoadBalancer.testMoveCostMultiplier: in this case, the multiplier of the move cost should be a lower value during off-peak hours, and we expect it to always be an off-peak hour no matter when the test runs.
My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

was:
The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set all hours as off-peak hours now.
This is not generally useful for users, but it is useful for unit tests, e.g.:
My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

> Can not set all hours as offpeak hour now
> -----------------------------------------
>
> Key: HBASE-24898
> URL: https://issues.apache.org/jira/browse/HBASE-24898
> Project: HBase
> Issue Type: Bug
> Components: tooling
> Reporter: Zheng Wang
> Assignee: Zheng Wang
> Priority: Major
>
> The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set all hours as off-peak hours now.
> This is not generally useful for users, but it is useful for unit tests, e.g. TestStochasticLoadBalancer.testMoveCostMultiplier: in this case, the multiplier of the move cost should be a lower value during off-peak hours, and we expect it to always be an off-peak hour no matter when the test runs.
> My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24886) Remove deprecated methods in RowMutations
[ https://issues.apache.org/jira/browse/HBASE-24886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-24886: -- Hadoop Flags: Incompatible change,Reviewed Release Note: Removed RowMutations.add(Put) and RowMutations.add(Delete). Use RowMutations.add(Mutation) directly. > Remove deprecated methods in RowMutations > - > > Key: HBASE-24886 > URL: https://issues.apache.org/jira/browse/HBASE-24886 > Project: HBase > Issue Type: Sub-task > Components: Client >Reporter: Duo Zhang >Assignee: niuyulin >Priority: Major > > Such as add(Put) and add(Delete). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24898) Can not set all hours as offpeak hour now
[ https://issues.apache.org/jira/browse/HBASE-24898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Wang updated HBASE-24898:
---
Description:
The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set all hours as off-peak hours now.
This is not generally useful for users, but it is useful for unit tests, e.g.:
My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

was:
The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set all hours as off-peak hours now.
My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

> Can not set all hours as offpeak hour now
> -----------------------------------------
>
> Key: HBASE-24898
> URL: https://issues.apache.org/jira/browse/HBASE-24898
> Project: HBase
> Issue Type: Bug
> Components: tooling
> Reporter: Zheng Wang
> Assignee: Zheng Wang
> Priority: Major
>
> The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set all hours as off-peak hours now.
> This is not generally useful for users, but it is useful for unit tests, e.g.:
> My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-24886) Remove deprecated methods in RowMutations
[ https://issues.apache.org/jira/browse/HBASE-24886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-24886. --- Fix Version/s: 3.0.0-alpha-1 Resolution: Fixed Merged to master. Thanks [~niuyulin] for contributing. > Remove deprecated methods in RowMutations > - > > Key: HBASE-24886 > URL: https://issues.apache.org/jira/browse/HBASE-24886 > Project: HBase > Issue Type: Sub-task > Components: Client >Reporter: Duo Zhang >Assignee: niuyulin >Priority: Major > Fix For: 3.0.0-alpha-1 > > > Such as add(Put) and add(Delete). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2261: HBASE-24528 : BalancerDecision queue implementation in HMaster with Admin API
Apache-HBase commented on pull request #2261: URL: https://github.com/apache/hbase/pull/2261#issuecomment-675817866 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 3m 26s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | prototool | 0m 0s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 11s | master passed | | +1 :green_heart: | checkstyle | 2m 24s | master passed | | +0 :ok: | refguide | 5m 28s | branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | | +1 :green_heart: | spotbugs | 20m 58s | master passed | | -0 :warning: | patch | 2m 27s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 19s | the patch passed | | -0 :warning: | checkstyle | 2m 49s | root: The patch generated 26 new + 437 unchanged - 0 fixed = 463 total (was 437) | | -0 :warning: | rubocop | 0m 26s | The patch generated 5 new + 594 unchanged - 2 fixed = 599 total (was 596) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +0 :ok: | refguide | 6m 17s | patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | | +1 :green_heart: | hadoopcheck | 13m 31s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. 
| | +1 :green_heart: | hbaseprotoc | 8m 17s | the patch passed | | +1 :green_heart: | spotbugs | 23m 57s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 1m 34s | The patch does not generate ASF License warnings. | | | | 108m 37s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2261 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle refguide xml cc hbaseprotoc prototool rubocop | | uname | Linux fa91c148e4a2 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 98e35842eb | | refguide | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-general-check/output/branch-site/book.html | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-general-check/output/diff-checkstyle-root.txt | | rubocop | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-general-check/output/diff-patch-rubocop.txt | | refguide | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/artifact/yetus-general-check/output/patch-site/book.html | | Max. process+thread count | 122 (vs. ulimit of 12500) | | modules | C: hbase-protocol-shaded hbase-common hbase-hadoop-compat hbase-client hbase-server hbase-thrift hbase-shell . U: . 
| | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/4/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 rubocop=0.80.0 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-24898) Can not set all hours as offpeak hour now
[ https://issues.apache.org/jira/browse/HBASE-24898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Wang updated HBASE-24898:
---
Summary: Can not set all hours as offpeak hour now (was: Can not set 23:00~24:00 as offpeak hour now)

> Can not set all hours as offpeak hour now
> -----------------------------------------
>
> Key: HBASE-24898
> URL: https://issues.apache.org/jira/browse/HBASE-24898
> Project: HBase
> Issue Type: Bug
> Components: tooling
> Reporter: Zheng Wang
> Assignee: Zheng Wang
> Priority: Major
>
> The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set 23:00-24:00 as an off-peak window now.
> My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24898) Can not set all hours as offpeak hour now
[ https://issues.apache.org/jira/browse/HBASE-24898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Wang updated HBASE-24898:
---
Description:
The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set all hours as off-peak hours now.
My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

was:
The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set 23:00-24:00 as an off-peak window now.
My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

> Can not set all hours as offpeak hour now
> -----------------------------------------
>
> Key: HBASE-24898
> URL: https://issues.apache.org/jira/browse/HBASE-24898
> Project: HBase
> Issue Type: Bug
> Components: tooling
> Reporter: Zheng Wang
> Assignee: Zheng Wang
> Priority: Major
>
> The valid numbers in OffPeakHours are 0-23, and the end hour is not included, so we cannot set all hours as off-peak hours now.
> My proposal is to just change the valid range from 0-23 to 0-24; then we can easily apply this PR to all active branches, and folks do not need to change their configuration.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
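The proposal above can be illustrated with a small sketch (hypothetical, not HBase's actual OffPeakHours implementation): a half-open [startHour, endHour) membership test with wrap-around. With the end hour capped at 23 the widest expressible window is 23 hours, while allowing 24 makes startHour=0, endHour=24 cover the whole day.

```java
// Hedged illustration, not HBase's actual OffPeakHours code: a half-open
// [startHour, endHour) membership test with wrap-around past midnight.
// If endHour may be at most 23, the widest expressible window is 23 hours;
// once 24 is allowed, startHour=0 and endHour=24 cover the whole day.
final class OffPeakRangeSketch {
  static boolean isOffPeak(int startHour, int endHour, int hour) {
    if (startHour <= endHour) {
      return hour >= startHour && hour < endHour; // plain window
    }
    return hour >= startHour || hour < endHour;   // wraps past midnight
  }
}
```

Because the end hour is exclusive, isOffPeak(0, 24, h) is true for every h in 0-23, whereas isOffPeak(0, 23, 23) is false, which is exactly the gap the issue describes.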
[jira] [Commented] (HBASE-24880) Remove ReplicationPeerConfigUpgrader
[ https://issues.apache.org/jira/browse/HBASE-24880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180218#comment-17180218 ] Duo Zhang commented on HBASE-24880: --- Please fill in the release note, as this is an incompatible change. We need to tell users what we have broken. > Remove ReplicationPeerConfigUpgrader > > > Key: HBASE-24880 > URL: https://issues.apache.org/jira/browse/HBASE-24880 > Project: HBase > Issue Type: Sub-task > Components: Replication >Reporter: Duo Zhang >Assignee: niuyulin >Priority: Major > Fix For: 3.0.0-alpha-1 > > > The comment says it will be removed in 3.x. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24880) Remove ReplicationPeerConfigUpgrader
[ https://issues.apache.org/jira/browse/HBASE-24880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-24880: -- Hadoop Flags: Incompatible change,Reviewed (was: Reviewed) > Remove ReplicationPeerConfigUpgrader > > > Key: HBASE-24880 > URL: https://issues.apache.org/jira/browse/HBASE-24880 > Project: HBase > Issue Type: Sub-task > Components: Replication >Reporter: Duo Zhang >Assignee: niuyulin >Priority: Major > Fix For: 3.0.0-alpha-1 > > > The comment says it will be removed in 3.x. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24887) Remove Row.compareTo
[ https://issues.apache.org/jira/browse/HBASE-24887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-24887: -- Hadoop Flags: Incompatible change,Reviewed (was: Reviewed) > Remove Row.compareTo > > > Key: HBASE-24887 > URL: https://issues.apache.org/jira/browse/HBASE-24887 > Project: HBase > Issue Type: Sub-task >Reporter: Duo Zhang >Assignee: Junhong Xu >Priority: Major > Fix For: 3.0.0-alpha-1 > > > Which means Row will not extend Comparable any more. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24887) Remove Row.compareTo
[ https://issues.apache.org/jira/browse/HBASE-24887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180217#comment-17180217 ] Duo Zhang commented on HBASE-24887: --- Please fill in the release note? [~Joseph295]. > Remove Row.compareTo > > > Key: HBASE-24887 > URL: https://issues.apache.org/jira/browse/HBASE-24887 > Project: HBase > Issue Type: Sub-task >Reporter: Duo Zhang >Assignee: Junhong Xu >Priority: Major > Fix For: 3.0.0-alpha-1 > > > Which means Row will not extend Comparable any more. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache9 merged pull request #2267: HBASE-24886 Remove deprecated methods in RowMutations
Apache9 merged pull request #2267: URL: https://github.com/apache/hbase/pull/2267 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on pull request #2274: HBASE-24896 'Stuck' in static initialization creating RegionInfo inst…
Apache9 commented on pull request #2274: URL: https://github.com/apache/hbase/pull/2274#issuecomment-675811729 And in general, I do not think we want users to make use of RegionInfo.UNDEFINED directly? It should have been put in RegionInfoBuilder in the first place... This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2223: HBASE-24837 Ignore... check what fails when zk-based WAL splitter ena…
Apache-HBase commented on pull request #2223: URL: https://github.com/apache/hbase/pull/2223#issuecomment-675809131 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 17s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 14s | branch-2 passed | | +1 :green_heart: | checkstyle | 1m 38s | branch-2 passed | | +1 :green_heart: | spotbugs | 3m 34s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 24s | the patch passed | | +1 :green_heart: | checkstyle | 1m 54s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. | | +1 :green_heart: | hadoopcheck | 14m 26s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 4m 6s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 29s | The patch does not generate ASF License warnings. 
| | | | 45m 34s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2223/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2223 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle xml | | uname | Linux 58e8f91db37a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 7335dbc834 | | Max. process+thread count | 84 (vs. ulimit of 12500) | | modules | C: hbase-common hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2223/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2273: Backport "HBASE-24627 Normalize one table at a time" to branch-2
Apache-HBase commented on pull request #2273: URL: https://github.com/apache/hbase/pull/2273#issuecomment-675807417 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 44s | Docker mode activated. | | -0 :warning: | yetus | 0m 8s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 19s | branch-2 passed | | +1 :green_heart: | compile | 3m 37s | branch-2 passed | | +1 :green_heart: | shadedjars | 6m 5s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 29s | hbase-client in branch-2 failed. | | -0 :warning: | javadoc | 0m 40s | hbase-server in branch-2 failed. | | -0 :warning: | javadoc | 0m 49s | hbase-thrift in branch-2 failed. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 9s | the patch passed | | +1 :green_heart: | compile | 3m 36s | the patch passed | | +1 :green_heart: | javac | 3m 36s | the patch passed | | +1 :green_heart: | shadedjars | 6m 4s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 30s | hbase-client in the patch failed. | | -0 :warning: | javadoc | 0m 42s | hbase-server in the patch failed. | | -0 :warning: | javadoc | 0m 52s | hbase-thrift in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 52s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 2m 32s | hbase-client in the patch passed. | | -1 :x: | unit | 7m 17s | hbase-server in the patch failed. | | +1 :green_heart: | unit | 4m 12s | hbase-thrift in the patch passed. | | +1 :green_heart: | unit | 8m 0s | hbase-shell in the patch passed. 
| | | | 60m 32s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2273 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux ef37df607fd4 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 7335dbc834 | | Default Java | 2020-01-14 | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-thrift.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-thrift.txt | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/testReport/ | | Max. process+thread count | 2349 (vs. 
ulimit of 12500) | | modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-24872) refactor valueOf PoolType
[ https://issues.apache.org/jira/browse/HBASE-24872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] niuyulin resolved HBASE-24872. -- Resolution: Fixed > refactor valueOf PoolType > - > > Key: HBASE-24872 > URL: https://issues.apache.org/jira/browse/HBASE-24872 > Project: HBase > Issue Type: Improvement > Components: Client >Reporter: niuyulin >Assignee: niuyulin >Priority: Minor > Fix For: 3.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2273: Backport "HBASE-24627 Normalize one table at a time" to branch-2
Apache-HBase commented on pull request #2273: URL: https://github.com/apache/hbase/pull/2273#issuecomment-675803796 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | prototool | 0m 0s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 33s | branch-2 passed | | +1 :green_heart: | checkstyle | 2m 51s | branch-2 passed | | +1 :green_heart: | spotbugs | 6m 52s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 10s | the patch passed | | -0 :warning: | checkstyle | 1m 8s | hbase-server: The patch generated 1 new + 107 unchanged - 2 fixed = 108 total (was 109) | | -0 :warning: | rubocop | 0m 12s | The patch generated 5 new + 364 unchanged - 0 fixed = 369 total (was 364) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 24s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | hbaseprotoc | 2m 44s | the patch passed | | +1 :green_heart: | spotbugs | 7m 28s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 1m 3s | The patch does not generate ASF License warnings. 
| | | | 53m 37s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2273 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle cc hbaseprotoc prototool rubocop | | uname | Linux 6801b0b72bd8 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 7335dbc834 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | rubocop | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/artifact/yetus-general-check/output/diff-patch-rubocop.txt | | Max. process+thread count | 94 (vs. ulimit of 12500) | | modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2273/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 rubocop=0.80.0 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-24872) refactor valueOf PoolType
[ https://issues.apache.org/jira/browse/HBASE-24872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] niuyulin updated HBASE-24872: - Component/s: Client Fix Version/s: 3.0.0-alpha-1 > refactor valueOf PoolType > - > > Key: HBASE-24872 > URL: https://issues.apache.org/jira/browse/HBASE-24872 > Project: HBase > Issue Type: Improvement > Components: Client >Reporter: niuyulin >Assignee: niuyulin >Priority: Minor > Fix For: 3.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] nyl3532016 commented on pull request #2250: HBASE-24872 refactor valueOf PoolType
nyl3532016 commented on pull request #2250: URL: https://github.com/apache/hbase/pull/2250#issuecomment-675803464 Thanks for review @huaxiangsun This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2274: HBASE-24896 'Stuck' in static initialization creating RegionInfo inst…
Apache-HBase commented on pull request #2274: URL: https://github.com/apache/hbase/pull/2274#issuecomment-675802441 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 29s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2.3 Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 36s | branch-2.3 passed | | +1 :green_heart: | checkstyle | 2m 7s | branch-2.3 passed | | +1 :green_heart: | spotbugs | 3m 39s | branch-2.3 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 10s | the patch passed | | -0 :warning: | checkstyle | 0m 41s | hbase-client: The patch generated 1 new + 318 unchanged - 0 fixed = 319 total (was 318) | | -0 :warning: | checkstyle | 1m 16s | hbase-server: The patch generated 2 new + 412 unchanged - 16 fixed = 414 total (was 428) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 17m 0s | Patch does not cause any errors with Hadoop 2.10.0 or 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 5m 22s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 40s | The patch does not generate ASF License warnings. 
| | | | 48m 45s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2274 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux a09bab3a56bf 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 6202ee6b64 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-client.txt | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 94 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-zookeeper hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2274/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] saintstack commented on pull request #2274: HBASE-24896 'Stuck' in static initialization creating RegionInfo inst…
saintstack commented on pull request #2274: URL: https://github.com/apache/hbase/pull/2274#issuecomment-675774822 Your suggestion is better, @bharathv: more code change, but it looks safe to me. RegionInfoBuilder, the host for FIRST_META_REGIONINFO, is @InterfaceAudience.Private, so moving it is 'allowed'. I thought of adding back something like the below to RegionInfoBuilder in case of any downstream references... public static final RegionInfo FIRST_META_REGIONINFO = RegionInfo.FIRST_META_REGIONINFO; ... but that might bring back the static-initialization deadlock in a new guise. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] saintstack commented on a change in pull request #2274: HBASE-24896 'Stuck' in static initialization creating RegionInfo inst…
saintstack commented on a change in pull request #2274: URL: https://github.com/apache/hbase/pull/2274#discussion_r472543730 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionInfo.java ## @@ -69,7 +65,14 @@ */ @InterfaceAudience.Public public interface RegionInfo extends Comparable<RegionInfo> { - RegionInfo UNDEFINED = RegionInfoBuilder.newBuilder(TableName.valueOf("__UNDEFINED__")).build(); + /** + * Do not use. + * @deprecated Since 2.3.1; to be removed in 4.0.0 with no replacement. + */ + // Removed because creation was creating a static deadlock, HBASE-24896 + @Deprecated + RegionInfo UNDEFINED = null; Review comment: I'd glanced at it in passing, but there are ~100 refs to the old location. Let me try it... and make sure we are not just moving the problem. I agree it would be better if that worked and we could avoid this deprecation. @bharathv This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-24764) Add support of adding default peer configs via hbase-site.xml for all replication peers.
[ https://issues.apache.org/jira/browse/HBASE-24764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ankit Jain updated HBASE-24764: --- Summary: Add support of adding default peer configs via hbase-site.xml for all replication peers. (was: Add support of enabling default peer configs via hbase-site.xml for all replication peers.) > Add support of adding default peer configs via hbase-site.xml for all > replication peers. > > > Key: HBASE-24764 > URL: https://issues.apache.org/jira/browse/HBASE-24764 > Project: HBase > Issue Type: Improvement >Reporter: Ankit Jain >Assignee: Ankit Jain >Priority: Minor > > Currently, if a user needs to apply some common peer configs to all the > default replication peers, the only way is to execute update_peer_config via > CLI which requires manual intervention and can be tedious in case of large > deployment fleet. > As part of this JIRA, we plan to add the support to have default replication > peer configs as part of hbase-site.xml like > hbase.replication.peer.default.config="k1=v1;k2=v2.." which can be just > applied by a rolling restart. Example below: > > hbase.replication.peer.default.configs > hbase.replication.source.custom.walentryfilters=x,y,z;hbase.rpc.protection=abc;hbase.xxx.custom_property=123 > > This will be empty by default, but one can override to have default configs > in place. > The final peer configuration would be a merge of this default config + > whatever users override during the peer creation/update (if any). > Related Jira: https://issues.apache.org/jira/browse/HBASE-17543. HBASE-17543 > added the support to add the WALEntryFilters to default endpoint via peer > configuration. By this new Jira we are extending the support to update peer > configs via hbase-site.xml. -- This message was sent by Atlassian Jira (v8.3.4#803005)
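[Editor's note] The merge semantics described above (site-wide defaults from hbase-site.xml, with per-peer settings winning on conflict) can be sketched as follows. The property name `hbase.replication.peer.default.configs` and the `k1=v1;k2=v2` syntax come from the proposal; the helper class and method names here are illustrative, not the patch's actual API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of HBASE-24764's proposed merge: parse the semicolon-delimited
// site-wide default peer configs, then overlay the per-peer overrides so
// that values supplied at peer creation/update take precedence.
public class PeerConfigDefaults {

  // Parses "k1=v1;k2=v2" into an ordered map; malformed pairs are skipped.
  static Map<String, String> parse(String spec) {
    Map<String, String> configs = new LinkedHashMap<>();
    if (spec == null || spec.isEmpty()) {
      return configs;
    }
    for (String pair : spec.split(";")) {
      int eq = pair.indexOf('=');
      if (eq > 0) {
        configs.put(pair.substring(0, eq).trim(), pair.substring(eq + 1).trim());
      }
    }
    return configs;
  }

  // Final peer config = site defaults overlaid with per-peer overrides.
  static Map<String, String> merge(String siteDefaults, Map<String, String> peerOverrides) {
    Map<String, String> merged = parse(siteDefaults); // start from defaults
    merged.putAll(peerOverrides);                     // per-peer values win
    return merged;
  }
}
```

With this shape, an empty default (the proposed out-of-the-box value) leaves existing peer configs untouched, and a rolling restart is enough to pick up new site-wide defaults.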
[jira] [Updated] (HBASE-24764) Add support of enabling default peer configs via hbase-site.xml for all replication peers.
[ https://issues.apache.org/jira/browse/HBASE-24764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ankit Jain updated HBASE-24764: --- Summary: Add support of enabling default peer configs via hbase-site.xml for all replication peers. (was: Add support of enabling default peer configs via hbase-site.xml for all default replication peers.) > Add support of enabling default peer configs via hbase-site.xml for all > replication peers. > -- > > Key: HBASE-24764 > URL: https://issues.apache.org/jira/browse/HBASE-24764 > Project: HBase > Issue Type: Improvement >Reporter: Ankit Jain >Assignee: Ankit Jain >Priority: Minor > > Currently, if a user needs to apply some common peer configs to all the > default replication peers, the only way is to execute update_peer_config via > CLI which requires manual intervention and can be tedious in case of large > deployment fleet. > As part of this JIRA, we plan to add the support to have default replication > peer configs as part of hbase-site.xml like > hbase.replication.peer.default.config="k1=v1;k2=v2.." which can be just > applied by a rolling restart. Example below: > > hbase.replication.peer.default.configs > hbase.replication.source.custom.walentryfilters=x,y,z;hbase.rpc.protection=abc;hbase.xxx.custom_property=123 > > This will be empty by default, but one can override to have default configs > in place. > The final peer configuration would be a merge of this default config + > whatever users override during the peer creation/update (if any). > Related Jira: https://issues.apache.org/jira/browse/HBASE-17543. HBASE-17543 > added the support to add the WALEntryFilters to default endpoint via peer > configuration. By this new Jira we are extending the support to update peer > configs via hbase-site.xml. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] bharathv commented on a change in pull request #2274: HBASE-24896 'Stuck' in static initialization creating RegionInfo inst…
bharathv commented on a change in pull request #2274: URL: https://github.com/apache/hbase/pull/2274#discussion_r472538396 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionInfo.java ## @@ -69,7 +65,14 @@ */ @InterfaceAudience.Public public interface RegionInfo extends Comparable<RegionInfo> { - RegionInfo UNDEFINED = RegionInfoBuilder.newBuilder(TableName.valueOf("__UNDEFINED__")).build(); + /** + * Do not use. + * @deprecated Since 2.3.1; to be removed in 4.0.0 with no replacement. + */ + // Removed because creation was creating a static deadlock, HBASE-24896 + @Deprecated + RegionInfo UNDEFINED = null; Review comment: > You suggesting move FIRST_META_REGIONINFO define into RegionInfo? Exactly. I think that's one way to break the loop without this deprecation schedule. I don't know if it has other implications (especially if it breaks the semantics described in HBASE-17980). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] saintstack commented on a change in pull request #2274: HBASE-24896 'Stuck' in static initialization creating RegionInfo inst…
saintstack commented on a change in pull request #2274: URL: https://github.com/apache/hbase/pull/2274#discussion_r472510700 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionInfo.java ## @@ -69,7 +65,14 @@ */ @InterfaceAudience.Public public interface RegionInfo extends Comparable<RegionInfo> { - RegionInfo UNDEFINED = RegionInfoBuilder.newBuilder(TableName.valueOf("__UNDEFINED__")).build(); + /** + * Do not use. + * @deprecated Since 2.3.1; to be removed in 4.0.0 with no replacement. + */ + // Removed because creation was creating a static deadlock, HBASE-24896 + @Deprecated + RegionInfo UNDEFINED = null; Review comment: Thanks for the review, @bharathv. Say more: are you suggesting we move the FIRST_META_REGIONINFO define into RegionInfo? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] bharathv commented on a change in pull request #2274: HBASE-24896 'Stuck' in static initialization creating RegionInfo inst…
bharathv commented on a change in pull request #2274: URL: https://github.com/apache/hbase/pull/2274#discussion_r472503645 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionInfo.java ## @@ -69,7 +65,14 @@ */ @InterfaceAudience.Public public interface RegionInfo extends Comparable<RegionInfo> { - RegionInfo UNDEFINED = RegionInfoBuilder.newBuilder(TableName.valueOf("__UNDEFINED__")).build(); + /** + * Do not use. + * @deprecated Since 2.3.1; to be removed in 4.0.0 with no replacement. + */ + // Removed because creation was creating a static deadlock, HBASE-24896 + @Deprecated + RegionInfo UNDEFINED = null; Review comment: Was wondering if we can move the following into RegionInfo to avoid this deprecation stuff. For that we will have to switch to the RegionInfoBuilder c'tor because the following usage is a private c'tor. Curious if you considered that and didn't want to do it for some reason. The c'tor was specifically designed for this use case, so I may be missing some context here. public static final RegionInfo FIRST_META_REGIONINFO = new MutableRegionInfo(1L, TableName.META_TABLE_NAME, RegionInfo.DEFAULT_REPLICA_ID); This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
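[Editor's note] The deadlock under discussion in HBASE-24896 is the classic JVM class-initialization cycle: each class's static initializer forces initialization of the other, so two threads that begin the two initializations concurrently block forever on the JVM's per-class init locks. A minimal standalone sketch of the pattern (class names `A` and `B` are hypothetical, not HBase classes; the sleep only widens the race window so the hang reproduces reliably):

```java
import java.util.concurrent.TimeUnit;

// Demonstrates a static-initialization deadlock: A's <clinit> triggers B's
// <clinit> and vice versa. One thread doing both would complete (the JLS
// allows recursive init to see a partially initialized class), but two
// threads starting from opposite ends deadlock on the init locks.
public class StaticInitDeadlock {
  static class A {
    static { pause(); forceInit(B.class); } // A's <clinit> needs B initialized
  }
  static class B {
    static { pause(); forceInit(A.class); } // B's <clinit> needs A initialized
  }

  static void forceInit(Class<?> c) {
    try {
      // Class.forName with initialize=true runs the class's <clinit> if needed.
      Class.forName(c.getName(), true, c.getClassLoader());
    } catch (ClassNotFoundException e) {
      throw new AssertionError(e);
    }
  }

  static void pause() {
    try { TimeUnit.MILLISECONDS.sleep(300); } catch (InterruptedException ignored) { }
  }

  // Returns true when both initializer threads are still stuck after the timeout.
  public static boolean demo() throws InterruptedException {
    Thread t1 = new Thread(() -> forceInit(A.class));
    Thread t2 = new Thread(() -> forceInit(B.class));
    t1.setDaemon(true); // daemons, so the JVM can still exit despite the hang
    t2.setDaemon(true);
    t1.start(); t2.start();
    t1.join(2000); t2.join(2000);
    return t1.isAlive() && t2.isAlive();
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println("deadlocked=" + demo());
  }
}
```

This is why the patch breaks the cycle structurally (moving the define, or nulling `UNDEFINED`) rather than adding locking: the init locks involved belong to the JVM, not to user code.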
[GitHub] [hbase] taklwu commented on a change in pull request #2237: HBASE-24833: Bootstrap should not delete the META table directory if …
taklwu commented on a change in pull request #2237: URL: https://github.com/apache/hbase/pull/2237#discussion_r472482587 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/InitMetaProcedure.java ## @@ -167,4 +171,35 @@ protected void completionCleanup(MasterProcedureEnv env) { public void await() throws InterruptedException { latch.await(); } + + private static boolean deleteMetaTableDirectoryIfPartial(FileSystem rootDirectoryFs, +Path metaTableDir) throws IOException { +boolean isPartial = true; +try { + TableDescriptor metaDescriptor = +FSTableDescriptors.getTableDescriptorFromFs(rootDirectoryFs, metaTableDir); + // when entering the state of INIT_META_WRITE_FS_LAYOUT, if a meta table directory is found, + // the meta table should not have any useful data and is considered partial. + // if we find any valid HFiles, the operator should fix the meta, e.g. via HBCK. + if (metaDescriptor != null && metaDescriptor.getColumnFamilyCount() > 0) { +RemoteIterator<LocatedFileStatus> iterator = rootDirectoryFs.listFiles(metaTableDir, true); +while (iterator.hasNext()) { + LocatedFileStatus status = iterator.next(); + if (StoreFileInfo.isHFile(status.getPath()) && HFile +.isHFileFormat(rootDirectoryFs, status.getPath())) { +isPartial = false; +break; + } +} + } +} finally { + if (!isPartial) { +throw new IOException("Meta table is not partial, please sideline this meta directory " + + "or run HBCK to fix this meta table, e.g. rebuild the server hostname in ZNode for the " + + "meta region"); + } Review comment: @Apache9, we're failing the `InitMetaProcedure` with an `IOException`, and HMaster will fail the master startup if `InitMetaProcedure` is `FAILED` with an exception. Still, alternatively, we could continue the bootstrap without throwing (but this is not good, as you noted). So, do you think this change aligns with your comments? This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24901) Create versatile hbase-shell table formatter
[ https://issues.apache.org/jira/browse/HBASE-24901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180063#comment-17180063 ] Michael Stack commented on HBASE-24901: --- Need pictures! > Create versatile hbase-shell table formatter > > > Key: HBASE-24901 > URL: https://issues.apache.org/jira/browse/HBASE-24901 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 3.0.0-alpha-1 >Reporter: Elliot Miller >Assignee: Elliot Miller >Priority: Major > > As a user, I would like a simple interface for shell output that can be > expressed as a table (i.e. output with a fixed number of columns and > potentially many rows). To be clear, this new formatter is not specifically > for HBase "tables." Table is used in the broader sense here. > Goals > - Do not require more than one output cell loaded in memory at a time > - Support many implementations like aligned human-friendly tables, unaligned > delimited output, and JSON > Non-goals > - Don't load all the headers into memory at once. > - This may seem like a goal with merit, but we are unlikely to find a use > case for this formatter with many columns. For example: since HBase tables > aren't relational, our scan output will not have an output column for every > HBase column. Instead, each output row will correspond to an HBase cell. -- This message was sent by Atlassian Jira (v8.3.4#803005)
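[Editor's note] The one-cell-at-a-time goal stated in the issue can be captured by a small streaming contract: the formatter receives headers once and each row as it is produced, retaining nothing between calls. The interface and class names below (`TableFormatter`, `DelimitedFormatter`) are hypothetical, not the API HBASE-24901 eventually settled on, and the sketch is in Java rather than the shell's JRuby for illustration:

```java
import java.util.List;

// Sketch of a streaming table-formatter contract: aligned, delimited, and
// JSON implementations can all sit behind the same row-at-a-time interface.
public class DelimitedFormatterDemo {

  interface TableFormatter {
    void header(List<String> columns);
    void row(List<String> cells); // called once per row; nothing is retained
    String finish();              // flush and return any buffered tail output
  }

  // Unaligned delimited output: each row is emitted as soon as it arrives,
  // so memory use is bounded by a single row, per the issue's stated goal.
  static class DelimitedFormatter implements TableFormatter {
    private final StringBuilder out = new StringBuilder();
    private final String sep;

    DelimitedFormatter(String sep) { this.sep = sep; }

    @Override public void header(List<String> columns) { row(columns); }

    @Override public void row(List<String> cells) {
      out.append(String.join(sep, cells)).append('\n');
    }

    @Override public String finish() { return out.toString(); }
  }
}
```

An aligned human-friendly implementation would need to buffer column widths, which is exactly the trade-off the issue's goals force each implementation to make explicit.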
[jira] [Resolved] (HBASE-24806) Small Updates to Functionality of Shell IRB Workspace
[ https://issues.apache.org/jira/browse/HBASE-24806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-24806. --- Hadoop Flags: Reviewed Resolution: Fixed Merged to master. Thanks for the cleanup Elliot. > Small Updates to Functionality of Shell IRB Workspace > - > > Key: HBASE-24806 > URL: https://issues.apache.org/jira/browse/HBASE-24806 > Project: HBase > Issue Type: Sub-task > Components: shell >Affects Versions: 3.0.0-alpha-1 >Reporter: Elliot Miller >Assignee: Elliot Miller >Priority: Major > Fix For: 3.0.0-alpha-1 > > > There are a few amendments I want to make to the first patch for shell IRB > workspaces: > # Hide the new warning: "irb: warn: can't alias help from irb_help." > # Split Shell::Shell#eval_io into eval_io and exception_handler. This will > be a better separation of concerns for both usage and testing. > ## Why is this change so important? At the moment, eval_io may raise > SystemExit, which would cause the ruby test executor to quit without running > all tests. The method eval_io also used to refer to a global variable > $fullTraceback, which is a poor separation of concerns. > # Allow finding script2run in the load path. While undocumented, the 2.x > shell did this, so we may need to do this for compatibility. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack merged pull request #2232: HBASE-24806 Small Updates to Functionality of Shell IRB Workspace
saintstack merged pull request #2232: URL: https://github.com/apache/hbase/pull/2232 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-24896) 'Stuck' in static initialization creating RegionInfo instance
[ https://issues.apache.org/jira/browse/HBASE-24896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-24896: -- Hadoop Flags: Incompatible change,Reviewed Release Note: Deprecates and sets the RegionInfo#UNDEFINED static to null to avoid deadlock during static initialization. This define was added for internal use by HBASE-22723 for 3.0.0-alpha-1, 2.3.0, 2.0.6, 2.2.1, 2.1.6. > 'Stuck' in static initialization creating RegionInfo instance > - > > Key: HBASE-24896 > URL: https://issues.apache.org/jira/browse/HBASE-24896 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.1 >Reporter: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.2 > > Attachments: hbasedn192-jstack-0.webarchive, > hbasedn192-jstack-1.webarchive, hbasedn192-jstack-2.webarchive > > > We ran into the following deadlocked server in testing. The priority handlers > seem stuck across multiple thread dumps. Seven of the ten total priority > threads have this state: > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16020" #82 daemon > prio=5 os_prio=0 cpu=0.70ms elapsed=315627.86s allocated=3744B > defined_classes=0 tid=0x7f3da0983040 nid=0x62d9 in Object.wait() > [0x7f3d9bc8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:3143) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3478) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44858) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) > {code} > The anomalous three are as follows: > h3. #1 > {code:java} > "RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=16020" #77 daemon > prio=5 os_prio=0 cpu=175.98ms elapsed=315627.86s allocated=2153K > defined_classes=14 tid=0x7f3da0ae6ec0 nid=0x62d4 in Object.wait() > [0x7f3d9c19] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfo.<clinit>(RegionInfo.java:72) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2912) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44856) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318){code} > ...which is the creation of the UNDEFINED in RegionInfo here: > @InterfaceAudience.Public > public interface RegionInfo extends Comparable<RegionInfo> { > RegionInfo UNDEFINED = > RegionInfoBuilder.newBuilder(TableName.valueOf("__UNDEFINED__")).build(); > > h3.
#2 > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=4,queue=1,port=16020" #81 daemon > prio=5 os_prio=0 cpu=53.85ms elapsed=315627.86s allocated=81984B > defined_classes=3 tid=0x7f3da0981590 nid=0x62d8 in Object.wait() > [0x7f3d9bd8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfoBuilder.<clinit>(RegionInfoBuilder.java:49) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toRegionInfo(ProtobufUtil.java:3231) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.executeOpenRegionProcedures(RSRpcServices.java:3755) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.lambda$executeProcedures$2(RSRpcServices.java:3827) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices$$Lambda$173/0x0017c0e40040.accept(Unknown > Source) > at java.util.ArrayList.forEach(java.base@11.0.6/ArrayList.java:1540) > at > java.util.Collections$UnmodifiableCollection.forEach(java.base@11.0.6/Collections.java:1085) > at >
[jira] [Resolved] (HBASE-24874) Fix hbase-shell access to ModifiableTableDescriptor methods
[ https://issues.apache.org/jira/browse/HBASE-24874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-24874. --- Fix Version/s: 3.0.0-alpha-1 Hadoop Flags: Reviewed Resolution: Fixed Thanks for the fix [~bitoffdev] > Fix hbase-shell access to ModifiableTableDescriptor methods > --- > > Key: HBASE-24874 > URL: https://issues.apache.org/jira/browse/HBASE-24874 > Project: HBase > Issue Type: Bug > Components: shell >Affects Versions: 3.0.0-alpha-1 >Reporter: Elliot Miller >Assignee: Elliot Miller >Priority: Major > Fix For: 3.0.0-alpha-1 > > > HBASE-20819 prepared us for HBase 3.x by removing usages of the deprecated > HTableDescriptor and HColumnDescriptor classes from the shell. However, it > did use two methods from the ModifiableTableDescriptor, which was only public > for compatibility/migration and was marked with > {{@InterfaceAudience.Private}}. When {{ModifiableTableDescriptor}} was made > private last week by HBASE-24507 it broke two hbase-shell commands > (*describe* and *alter* when used to set a coprocessor) that were using > methods from {{ModifiableTableDescriptor}} (these methods are not present on > the general {{TableDescriptor}} interface). > This story will remove the two references in hbase-shell to methods on the > now-private {{ModifiableTableDescriptor}} class and will find appropriate > replacements for the calls. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack merged pull request #2268: HBASE-24874 Fix hbase-shell access to ModifiableTableDescriptor methods
saintstack merged pull request #2268: URL: https://github.com/apache/hbase/pull/2268 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24896) 'Stuck' in static initialization creating RegionInfo instance
[ https://issues.apache.org/jira/browse/HBASE-24896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180045#comment-17180045 ] Michael Stack commented on HBASE-24896: --- I put up a PR that undoes some of the circular references static initializing... > 'Stuck' in static initialization creating RegionInfo instance > - > > Key: HBASE-24896 > URL: https://issues.apache.org/jira/browse/HBASE-24896 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.1 >Reporter: Michael Stack >Priority: Major > Attachments: hbasedn192-jstack-0.webarchive, > hbasedn192-jstack-1.webarchive, hbasedn192-jstack-2.webarchive > > > We ran into the following deadlocked server in testing. The priority handlers > seem stuck across multiple thread dumps. Seven of the ten total priority > threads have this state: > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16020" #82 daemon > prio=5 os_prio=0 cpu=0.70ms elapsed=315627.86s allocated=3744B > defined_classes=0 tid=0x7f3da0983040 nid=0x62d9 in Object.wait() > [0x7f3d9bc8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:3143) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3478) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44858) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) > {code} > The anomalous three are as follows: > h3. 
#1 > {code:java} > "RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=16020" #77 daemon > prio=5 os_prio=0 cpu=175.98ms elapsed=315627.86s allocated=2153K > defined_classes=14 tid=0x7f3da0ae6ec0 nid=0x62d4 in Object.wait() > [0x7f3d9c19] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfo.<clinit>(RegionInfo.java:72) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2912) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44856) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318){code} > ...which is the creation of the UNDEFINED in RegionInfo here: > @InterfaceAudience.Public > public interface RegionInfo extends Comparable<RegionInfo> { > RegionInfo UNDEFINED = > RegionInfoBuilder.newBuilder(TableName.valueOf("__UNDEFINED__")).build(); > > h3.
#2 > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=4,queue=1,port=16020" #81 daemon > prio=5 os_prio=0 cpu=53.85ms elapsed=315627.86s allocated=81984B > defined_classes=3 tid=0x7f3da0981590 nid=0x62d8 in Object.wait() > [0x7f3d9bd8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfoBuilder.<clinit>(RegionInfoBuilder.java:49) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toRegionInfo(ProtobufUtil.java:3231) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.executeOpenRegionProcedures(RSRpcServices.java:3755) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.lambda$executeProcedures$2(RSRpcServices.java:3827) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices$$Lambda$173/0x0017c0e40040.accept(Unknown > Source) > at java.util.ArrayList.forEach(java.base@11.0.6/ArrayList.java:1540) > at > java.util.Collections$UnmodifiableCollection.forEach(java.base@11.0.6/Collections.java:1085) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3827) > at >
[jira] [Updated] (HBASE-24896) 'Stuck' in static initialization creating RegionInfo instance
[ https://issues.apache.org/jira/browse/HBASE-24896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-24896: -- Fix Version/s: 2.3.2 3.0.0-alpha-1 > 'Stuck' in static initialization creating RegionInfo instance > - > > Key: HBASE-24896 > URL: https://issues.apache.org/jira/browse/HBASE-24896 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.1 >Reporter: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.2 > > Attachments: hbasedn192-jstack-0.webarchive, > hbasedn192-jstack-1.webarchive, hbasedn192-jstack-2.webarchive > > > We ran into the following deadlocked server in testing. The priority handlers > seem stuck across multiple thread dumps. Seven of the ten total priority > threads have this state: > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16020" #82 daemon > prio=5 os_prio=0 cpu=0.70ms elapsed=315627.86s allocated=3744B > defined_classes=0 tid=0x7f3da0983040 nid=0x62d9 in Object.wait() > [0x7f3d9bc8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:3143) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3478) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44858) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) > {code} > The anomalous three are as follows: > h3. 
#1 > {code:java} > "RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=16020" #77 daemon > prio=5 os_prio=0 cpu=175.98ms elapsed=315627.86s allocated=2153K > defined_classes=14 tid=0x7f3da0ae6ec0 nid=0x62d4 in Object.wait() > [0x7f3d9c19] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfo.<clinit>(RegionInfo.java:72) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2912) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44856) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318){code} > ...which is the creation of the UNDEFINED in RegionInfo here: > {code:java} > @InterfaceAudience.Public > public interface RegionInfo extends Comparable<RegionInfo> { > RegionInfo UNDEFINED = > RegionInfoBuilder.newBuilder(TableName.valueOf("__UNDEFINED__")).build(); > {code} > h3.
#2 > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=4,queue=1,port=16020" #81 daemon > prio=5 os_prio=0 cpu=53.85ms elapsed=315627.86s allocated=81984B > defined_classes=3 tid=0x7f3da0981590 nid=0x62d8 in Object.wait() > [0x7f3d9bd8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfoBuilder.(RegionInfoBuilder.java:49) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toRegionInfo(ProtobufUtil.java:3231) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.executeOpenRegionProcedures(RSRpcServices.java:3755) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.lambda$executeProcedures$2(RSRpcServices.java:3827) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices$$Lambda$173/0x0017c0e40040.accept(Unknown > Source) > at java.util.ArrayList.forEach(java.base@11.0.6/ArrayList.java:1540) > at > java.util.Collections$UnmodifiableCollection.forEach(java.base@11.0.6/Collections.java:1085) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3827) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:34896)
[GitHub] [hbase] saintstack commented on pull request #2274: HBASE-24896 'Stuck' in static initialization creating RegionInfo inst…
saintstack commented on pull request #2274: URL: https://github.com/apache/hbase/pull/2274#issuecomment-675680734 Hard to reproduce so no test. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] saintstack opened a new pull request #2274: HBASE-24896 'Stuck' in static initialization creating RegionInfo inst…
saintstack opened a new pull request #2274: URL: https://github.com/apache/hbase/pull/2274 …ance Patch deprecates and nulls RegionInfo#UNDEFINED (added by HBASE-22723) so as to break possible static initialization deadlock. Adds a local UNDEFINED to the only place where it is used, in CatalogJanitor doing fixup. Cleans up checkstyle complaints in RegionInfo. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
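The idea behind the patch is to stop running the sentinel's builder inside the interface's own static initialization, so no other class's `<clinit>` can re-enter it. As a hypothetical illustration only (the committed patch instead deprecates/nulls the constant and declares a local UNDEFINED in CatalogJanitor), the lazy-holder idiom is one standard way to keep such a constant without an eager initialization cycle; `RegionInfoLike` and `Holder` below are invented names, not HBase classes:

```java
// Hypothetical sketch: the sentinel is created on first use instead of during
// the interface's own <clinit>, so the initialization cycle cannot form.
interface RegionInfoLike extends Comparable<RegionInfoLike> {
    static RegionInfoLike undefined() {
        return Holder.UNDEFINED; // Holder's <clinit> runs lazily, on first call
    }

    // Nested holder class: initialized only when undefined() is first invoked.
    final class Holder {
        static final RegionInfoLike UNDEFINED = new RegionInfoLike() {
            @Override
            public int compareTo(RegionInfoLike other) {
                return 0; // placeholder ordering for the sentinel
            }
        };

        private Holder() {}
    }
}
```

Callers that previously read the eager `UNDEFINED` field would call `undefined()` instead; repeated calls return the same cached instance.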
[jira] [Updated] (HBASE-24896) 'Stuck' in static initialization creating RegionInfo instance
[ https://issues.apache.org/jira/browse/HBASE-24896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-24896: -- Summary: 'Stuck' in static initialization creating RegionInfo instance (was: 'Stuck' creating RegionInfo instance) > 'Stuck' in static initialization creating RegionInfo instance > - > > Key: HBASE-24896 > URL: https://issues.apache.org/jira/browse/HBASE-24896 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.1 >Reporter: Michael Stack >Priority: Major > Attachments: hbasedn192-jstack-0.webarchive, > hbasedn192-jstack-1.webarchive, hbasedn192-jstack-2.webarchive > > > We ran into the following deadlocked server in testing. The priority handlers > seem stuck across multiple thread dumps. Seven of the ten total priority > threads have this state: > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16020" #82 daemon > prio=5 os_prio=0 cpu=0.70ms elapsed=315627.86s allocated=3744B > defined_classes=0 tid=0x7f3da0983040 nid=0x62d9 in Object.wait() > [0x7f3d9bc8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:3143) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3478) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44858) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) > {code} > The anomalous three are as follows: > h3. 
#1 > {code:java} > "RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=16020" #77 daemon > prio=5 os_prio=0 cpu=175.98ms elapsed=315627.86s allocated=2153K > defined_classes=14 tid=0x7f3da0ae6ec0 nid=0x62d4 in Object.wait() > [0x7f3d9c19] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfo.<clinit>(RegionInfo.java:72) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2912) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44856) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318){code} > ...which is the creation of the UNDEFINED in RegionInfo here: > {code:java} > @InterfaceAudience.Public > public interface RegionInfo extends Comparable<RegionInfo> { > RegionInfo UNDEFINED = > RegionInfoBuilder.newBuilder(TableName.valueOf("__UNDEFINED__")).build(); > {code} > h3.
#2 > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=4,queue=1,port=16020" #81 daemon > prio=5 os_prio=0 cpu=53.85ms elapsed=315627.86s allocated=81984B > defined_classes=3 tid=0x7f3da0981590 nid=0x62d8 in Object.wait() > [0x7f3d9bd8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfoBuilder.(RegionInfoBuilder.java:49) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toRegionInfo(ProtobufUtil.java:3231) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.executeOpenRegionProcedures(RSRpcServices.java:3755) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.lambda$executeProcedures$2(RSRpcServices.java:3827) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices$$Lambda$173/0x0017c0e40040.accept(Unknown > Source) > at java.util.ArrayList.forEach(java.base@11.0.6/ArrayList.java:1540) > at > java.util.Collections$UnmodifiableCollection.forEach(java.base@11.0.6/Collections.java:1085) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3827) > at >
[jira] [Updated] (HBASE-24901) Create versatile hbase-shell table formatter
[ https://issues.apache.org/jira/browse/HBASE-24901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliot Miller updated HBASE-24901: -- Description: As a user, I would like a simple interface for shell output that can be expressed as a table (ie. output with a fixed number of columns and potentially many rows). To be clear, this new formatter is not specifically for HBase "tables." Table is used in the broader sense here. Goals - Do not require more than one output cell loaded in memory at a time - Support many implementations like aligned human-friendly tables, unaligned delimited, and JSON Non-goals - Don't load all the headers into memory at once. - This may seem like a goal with merit, but we are unlikely to find a use case for this formatter with many columns. For example: since HBase tables aren't relational, our scan output will not have an output column for every HBase column. Instead, each output row will correspond to an HBase cell. was: As a user, I would like a simple interface for shell output that can be expressed as a table (ie. output with a fixed number of columns and potentially many rows). To be clear, this new formatter is not specifically for HBase "tables." Table is used in the broader sense here. Goals - Do not require more than one output cell loaded in memory at a time - Support many implementations like aligned human-friendly tables, unaligned delimited, and JSON Non-goals - Don't load all the headers into memory at once. - This may seem like a goal with merit, but we are unlikely to find a use case for this formatter with many columns. For example: since HBase tables aren't relational, our scan output will not have an output column for every HBase column. Instead, each output row will correspond to an HBase cell. 
> Create versatile hbase-shell table formatter > > > Key: HBASE-24901 > URL: https://issues.apache.org/jira/browse/HBASE-24901 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 3.0.0-alpha-1 >Reporter: Elliot Miller >Assignee: Elliot Miller >Priority: Major > > As a user, I would like a simple interface for shell output that can be > expressed as a table (ie. output with a fixed number of columns and > potentially many rows). To be clear, this new formatter is not specifically > for HBase "tables." Table is used in the broader sense here. > Goals > - Do not require more than one output cell loaded in memory at a time > - Support many implementations like aligned human-friendly tables, unaligned > delimited, and JSON > Non-goals > - Don't load all the headers into memory at once. > - This may seem like a goal with merit, but we are unlikely to find a use > case for this formatter with many columns. For example: since HBase tables > aren't relational, our scan output will not have an output column for every > HBase column. Instead, each output row will correspond to an HBase cell. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24901) Create versatile hbase-shell table formatter
Elliot Miller created HBASE-24901: - Summary: Create versatile hbase-shell table formatter Key: HBASE-24901 URL: https://issues.apache.org/jira/browse/HBASE-24901 Project: HBase Issue Type: Improvement Components: shell Affects Versions: 3.0.0-alpha-1 Reporter: Elliot Miller Assignee: Elliot Miller As a user, I would like a simple interface for shell output that can be expressed as a table (ie. output with a fixed number of columns and potentially many rows). To be clear, this new formatter is not specifically for HBase "tables." Table is used in the broader sense here. Goals - Do not require more than one output cell loaded in memory at a time - Support many implementations like aligned human-friendly tables, unaligned delimited, and JSON Non-goals - Don't load all the headers into memory at once. - This may seem like a goal with merit, but we are unlikely to find a use case for this formatter with many columns. For example: since HBase tables aren't relational, our scan output will not have an output column for every HBase column. Instead, each output row will correspond to an HBase cell. -- This message was sent by Atlassian Jira (v8.3.4#803005)
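The goals above (hold at most one output cell in memory, support multiple output strategies) can be sketched as a small push-style interface. This is a hypothetical illustration in Java, not the eventual hbase-shell (Ruby) implementation; all names here are invented:

```java
import java.io.PrintStream;
import java.util.List;

public class TableFormatterSketch {
  // Cells are pushed one at a time, so a formatter never needs to buffer more
  // than the current cell (plus the header row) in memory.
  interface TableFormatter {
    void header(List<String> columnNames);
    void startRow();
    void cell(String value);
    void endRow();
    void close();
  }

  // One concrete strategy: unaligned, tab-delimited output written through
  // immediately. Aligned human-friendly tables or JSON would be siblings.
  static class DelimitedFormatter implements TableFormatter {
    private final PrintStream out;
    private boolean firstCell = true;

    DelimitedFormatter(PrintStream out) { this.out = out; }

    public void header(List<String> names) { startRow(); names.forEach(this::cell); endRow(); }
    public void startRow() { firstCell = true; }
    public void cell(String v) { if (!firstCell) out.print('\t'); out.print(v); firstCell = false; }
    public void endRow() { out.println(); }
    public void close() { out.flush(); }
  }

  public static void main(String[] args) {
    TableFormatter f = new DelimitedFormatter(System.out);
    f.header(List.of("ROW", "CELL"));
    f.startRow();
    f.cell("r1");
    f.cell("column=cf:q, value=a"); // each output row corresponds to one HBase cell
    f.endRow();
    f.close();
  }
}
```

Because each cell is written through as it arrives, a scan over a very wide or very long table streams in constant memory regardless of the chosen format.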
[GitHub] [hbase] Apache-HBase commented on pull request #2272: HBASE-24898 Can not set 23:00~24:00 as offpeak hour now
Apache-HBase commented on pull request #2272: URL: https://github.com/apache/hbase/pull/2272#issuecomment-675662971 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 34s | master passed | | +1 :green_heart: | compile | 2m 19s | master passed | | +1 :green_heart: | shadedjars | 5m 39s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 55s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 17s | the patch passed | | +1 :green_heart: | compile | 2m 16s | the patch passed | | +1 :green_heart: | javac | 2m 16s | the patch passed | | +1 :green_heart: | shadedjars | 5m 35s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 54s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 320m 30s | root in the patch failed. 
| | | | 352m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2272/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2272 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 698ce5c67287 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ea26463a33 | | Default Java | 1.8.0_232 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2272/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-root.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2272/1/testReport/ | | Max. process+thread count | 5535 (vs. ulimit of 12500) | | modules | C: hbase-common hbase-server . U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2272/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
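For context on the PR title: off-peak compaction hours are configured as a start/end hour pair, and the report is that an end hour of 24 (i.e. midnight, making 23:00~24:00 the last hour of the day) was being rejected. Below is a hedged sketch of the kind of range check involved; it is NOT the actual HBase OffPeakHours code, and the names and semantics are illustrative only:

```java
// Hypothetical off-peak window check: hours form a half-open range
// [start, end), end may be 24 ("end of day"), and the window may wrap
// past midnight. Not the real org.apache.hadoop.hbase OffPeakHours class.
public class OffPeakSketch {

  static boolean isValidHour(int h) {
    return h >= 0 && h <= 24; // accepting 24 as end-of-day is the point of the fix
  }

  static boolean isOffPeak(int hourOfDay, int start, int end) {
    if (!isValidHour(start) || !isValidHour(end) || start == end) {
      return false; // degenerate or invalid window: nothing is off-peak
    }
    return start < end
        ? hourOfDay >= start && hourOfDay < end  // plain window, e.g. 23~24
        : hourOfDay >= start || hourOfDay < end; // wraps midnight, e.g. 22~2
  }

  public static void main(String[] args) {
    System.out.println(isOffPeak(23, 23, 24)); // the case from the issue title
    System.out.println(isOffPeak(1, 22, 2));   // wrap-around window
    System.out.println(isOffPeak(12, 23, 24)); // noon is not off-peak here
  }
}
```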
[jira] [Commented] (HBASE-24896) 'Stuck' creating RegionInfo instance
[ https://issues.apache.org/jira/browse/HBASE-24896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180012#comment-17180012 ] Michael Stack commented on HBASE-24896: --- Thank you for taking a look [~bharathv] Helps. I had a suspicion that this was some strange loading issue but you call it better. It does look like the server came up and just locked up... Hard to tell for sure since it was a test run over the w/e and it was found stuck Monday. > 'Stuck' creating RegionInfo instance > > > Key: HBASE-24896 > URL: https://issues.apache.org/jira/browse/HBASE-24896 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.1 >Reporter: Michael Stack >Priority: Major > Attachments: hbasedn192-jstack-0.webarchive, > hbasedn192-jstack-1.webarchive, hbasedn192-jstack-2.webarchive > > > We ran into the following deadlocked server in testing. The priority handlers > seem stuck across multiple thread dumps. Seven of the ten total priority > threads have this state: > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16020" #82 daemon > prio=5 os_prio=0 cpu=0.70ms elapsed=315627.86s allocated=3744B > defined_classes=0 tid=0x7f3da0983040 nid=0x62d9 in Object.wait() > [0x7f3d9bc8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:3143) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3478) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44858) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) > {code} > The anomalous three are as follows: > h3. #1 > {code:java} > "RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=16020" #77 daemon > prio=5 os_prio=0 cpu=175.98ms elapsed=315627.86s allocated=2153K > defined_classes=14 tid=0x7f3da0ae6ec0 nid=0x62d4 in Object.wait() > [0x7f3d9c19] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfo.<clinit>(RegionInfo.java:72) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2912) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44856) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318){code} > ...which is the creation of the UNDEFINED in RegionInfo here: > {code:java} > @InterfaceAudience.Public > public interface RegionInfo extends Comparable<RegionInfo> { > RegionInfo UNDEFINED = > RegionInfoBuilder.newBuilder(TableName.valueOf("__UNDEFINED__")).build(); > {code} > h3.
#2 > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=4,queue=1,port=16020" #81 daemon > prio=5 os_prio=0 cpu=53.85ms elapsed=315627.86s allocated=81984B > defined_classes=3 tid=0x7f3da0981590 nid=0x62d8 in Object.wait() > [0x7f3d9bd8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfoBuilder.(RegionInfoBuilder.java:49) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toRegionInfo(ProtobufUtil.java:3231) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.executeOpenRegionProcedures(RSRpcServices.java:3755) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.lambda$executeProcedures$2(RSRpcServices.java:3827) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices$$Lambda$173/0x0017c0e40040.accept(Unknown > Source) > at java.util.ArrayList.forEach(java.base@11.0.6/ArrayList.java:1540) > at > java.util.Collections$UnmodifiableCollection.forEach(java.base@11.0.6/Collections.java:1085) > at >
[GitHub] [hbase] ndimiduk opened a new pull request #2273: Backport "HBASE-24627 Normalize one table at a time" to branch-2
ndimiduk opened a new pull request #2273: URL: https://github.com/apache/hbase/pull/2273 Introduce an additional method to our Admin interface that allows an operator to selectively run the normalizer. The IPC protocol supports general table name selection via a compound filter. Signed-off-by: Sean Busbey Signed-off-by: Viraj Jasani This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2261: HBASE-24528 : BalancerDecision queue implementation in HMaster with Admin API
Apache-HBase commented on pull request #2261: URL: https://github.com/apache/hbase/pull/2261#issuecomment-675631929 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 23s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 4s | master passed | | +1 :green_heart: | compile | 2m 39s | master passed | | +1 :green_heart: | shadedjars | 6m 5s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 5m 5s | master passed | | -0 :warning: | patch | 12m 15s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 55s | the patch passed | | +1 :green_heart: | compile | 2m 58s | the patch passed | | +1 :green_heart: | javac | 2m 58s | the patch passed | | +1 :green_heart: | shadedjars | 6m 49s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 5m 10s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 320m 9s | root in the patch failed. 
| | | | 361m 48s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2261 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 35e267241f42 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ea26463a33 | | Default Java | 1.8.0_232 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-root.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/testReport/ | | Max. process+thread count | 2859 (vs. ulimit of 12500) | | modules | C: hbase-protocol-shaded hbase-common hbase-hadoop-compat hbase-client hbase-server hbase-thrift hbase-shell . U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-24900) Make retain assignment configurable during SCP
Pankaj Kumar created HBASE-24900: Summary: Make retain assignment configurable during SCP Key: HBASE-24900 URL: https://issues.apache.org/jira/browse/HBASE-24900 Project: HBase Issue Type: Sub-task Reporter: Pankaj Kumar Assignee: Pankaj Kumar HBASE-23035 changed the "retain" assignment to round-robin assignment during SCP, which makes failover faster and certainly improves availability, but it can hurt scan performance in non-cloud scenarios. This jira will make the assignment plan configurable. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2272: HBASE-24898 Can not set 23:00~24:00 as offpeak hour now
Apache-HBase commented on pull request #2272: URL: https://github.com/apache/hbase/pull/2272#issuecomment-675627638 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 16s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 38s | master passed | | +1 :green_heart: | compile | 3m 2s | master passed | | +1 :green_heart: | shadedjars | 6m 20s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 15s | root in master failed. | | -0 :warning: | javadoc | 0m 15s | hbase-common in master failed. | | -0 :warning: | javadoc | 0m 40s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 29s | the patch passed | | +1 :green_heart: | compile | 3m 7s | the patch passed | | +1 :green_heart: | javac | 3m 7s | the patch passed | | +1 :green_heart: | shadedjars | 6m 45s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 20s | hbase-common in the patch failed. | | -0 :warning: | javadoc | 0m 47s | hbase-server in the patch failed. | | -0 :warning: | javadoc | 0m 19s | root in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 242m 12s | root in the patch passed. 
| | | | 278m 5s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2272/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2272 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 04ffc70b2c56 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ea26463a33 | | Default Java | 2020-01-14 | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2272/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-root.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2272/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2272/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2272/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2272/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2272/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-root.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2272/1/testReport/ | | Max. process+thread count | 4889 (vs. ulimit of 12500) | | modules | C: hbase-common hbase-server . U: . 
| | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2272/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-14847) Add FIFO compaction section to HBase book
[ https://issues.apache.org/jira/browse/HBASE-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17179985#comment-17179985 ] Vladimir Rodionov commented on HBASE-14847: --- Sure, go ahead. > Add FIFO compaction section to HBase book > - > > Key: HBASE-14847 > URL: https://issues.apache.org/jira/browse/HBASE-14847 > Project: HBase > Issue Type: Task > Components: documentation >Affects Versions: 2.0.0 >Reporter: Vladimir Rodionov >Priority: Major > > HBASE-14468 introduced new compaction policy. Book needs to be updated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24896) 'Stuck' creating RegionInfo instance
[ https://issues.apache.org/jira/browse/HBASE-24896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17179974#comment-17179974 ] Bharath Vissapragada commented on HBASE-24896: -- This happened right after startup? I skimmed through the jstacks and the code and am wondering if we are running into some circular static-block dependency that is causing a deadlock. Simpler example [here|http://ternarysearch.blogspot.com/2013/07/static-initialization-deadlock.html]. In this case, the dependency seems to be among RegionInfo -> RegionInfoBuilder -> MutableRegionInfo (c'tor) -> RegionInfo. Maybe we need to unnest those? If we look at the jstacks from the above blog post, they are also stuck in Object.wait(), hence my strong suspicion of these dependencies. > 'Stuck' creating RegionInfo instance > > > Key: HBASE-24896 > URL: https://issues.apache.org/jira/browse/HBASE-24896 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.1 >Reporter: Michael Stack >Priority: Major > Attachments: hbasedn192-jstack-0.webarchive, > hbasedn192-jstack-1.webarchive, hbasedn192-jstack-2.webarchive > > > We ran into the following deadlocked server in testing. The priority handlers > seem stuck across multiple thread dumps. 
Seven of the ten total priority > threads have this state: > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16020" #82 daemon > prio=5 os_prio=0 cpu=0.70ms elapsed=315627.86s allocated=3744B > defined_classes=0 tid=0x7f3da0983040 nid=0x62d9 in Object.wait() > [0x7f3d9bc8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:3143) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3478) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44858) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) > {code} > The anomalous three are as follows: > h3. 
#1 > {code:java} > "RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=16020" #77 daemon > prio=5 os_prio=0 cpu=175.98ms elapsed=315627.86s allocated=2153K > defined_classes=14 tid=0x7f3da0ae6ec0 nid=0x62d4 in Object.wait() > [0x7f3d9c19] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfo.<clinit>(RegionInfo.java:72) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2912) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44856) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318){code} > ...which is the creation of UNDEFINED in RegionInfo here: > {code:java} > @InterfaceAudience.Public > public interface RegionInfo extends Comparable<RegionInfo> { >   RegionInfo UNDEFINED = > RegionInfoBuilder.newBuilder(TableName.valueOf("__UNDEFINED__")).build(); > {code} > h3. 
#2 > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=4,queue=1,port=16020" #81 daemon > prio=5 os_prio=0 cpu=53.85ms elapsed=315627.86s allocated=81984B > defined_classes=3 tid=0x7f3da0981590 nid=0x62d8 in Object.wait() > [0x7f3d9bd8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfoBuilder.<clinit>(RegionInfoBuilder.java:49) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toRegionInfo(ProtobufUtil.java:3231) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.executeOpenRegionProcedures(RSRpcServices.java:3755) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.lambda$executeProcedures$2(RSRpcServices.java:3827) > at >
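The circular static-initializer dependency described in the comment above (RegionInfo -> RegionInfoBuilder -> MutableRegionInfo -> RegionInfo) can be reduced to a minimal sketch. The class names below are illustrative, not HBase's. Single-threaded, the JVM resolves the cycle by letting a class whose initialization is already in progress on the current thread hand back default field values (JLS §12.4.2); if two threads trigger the two initializations concurrently, each can block inside the other's class-init lock, and thread dumps show them in `Object.wait()` while reported as RUNNABLE, matching the jstacks on this issue.

```java
// Minimal sketch (illustrative names, not HBase code) of a circular
// static-initializer dependency: A's <clinit> reads B, B's <clinit> reads A.
public class StaticInitDemo {
  static class A {
    // Not a compile-time constant, so it is assigned during A.<clinit>.
    static final int VALUE = B.VALUE + 1;
  }

  static class B {
    static final int VALUE = A.VALUE + 1;
  }

  public static void main(String[] args) {
    // Touching A first starts A.<clinit>, which triggers B.<clinit>,
    // which reads A.VALUE while A is still initializing. Since it is the
    // same thread, the JVM returns the default value 0 instead of blocking:
    // B.VALUE becomes 1, then A.VALUE becomes 2.
    System.out.println(A.VALUE + " " + B.VALUE); // prints "2 1"
    // If instead one thread touched A while another concurrently touched B,
    // each thread could hold one class's initialization lock while waiting
    // forever on the other's -- a deadlock that thread dumps report as
    // Object.wait() with state RUNNABLE, as in the attached jstacks.
  }
}
```

Hoisting the mutually dependent static fields out of the interface (the "unnesting" suggested above) breaks the cycle, because no class's `<clinit>` then needs another in-flight `<clinit>` to complete.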
[GitHub] [hbase] virajjasani commented on pull request #2261: HBASE-24528 : BalancerDecision queue implementation in HMaster with Admin API
virajjasani commented on pull request #2261: URL: https://github.com/apache/hbase/pull/2261#issuecomment-675607377 @apurtell Update so far: Addressed all concerns including generic Admin API for future use-cases. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2261: HBASE-24528 : BalancerDecision queue implementation in HMaster with Admin API
Apache-HBase commented on pull request #2261: URL: https://github.com/apache/hbase/pull/2261#issuecomment-675593553 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 17s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 27s | master passed | | +1 :green_heart: | compile | 3m 4s | master passed | | +1 :green_heart: | shadedjars | 6m 24s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 15s | hbase-common in master failed. | | -0 :warning: | javadoc | 0m 17s | hbase-hadoop-compat in master failed. | | -0 :warning: | javadoc | 0m 24s | hbase-client in master failed. | | -0 :warning: | javadoc | 0m 42s | hbase-server in master failed. | | -0 :warning: | javadoc | 0m 57s | hbase-thrift in master failed. | | -0 :warning: | javadoc | 0m 13s | root in master failed. | | -0 :warning: | patch | 10m 43s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 39s | the patch passed | | +1 :green_heart: | compile | 3m 7s | the patch passed | | +1 :green_heart: | javac | 3m 7s | the patch passed | | +1 :green_heart: | shadedjars | 6m 24s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 15s | hbase-common in the patch failed. | | -0 :warning: | javadoc | 0m 17s | hbase-hadoop-compat in the patch failed. | | -0 :warning: | javadoc | 0m 25s | hbase-client in the patch failed. 
| | -0 :warning: | javadoc | 0m 40s | hbase-server in the patch failed. | | -0 :warning: | javadoc | 0m 58s | hbase-thrift in the patch failed. | | -0 :warning: | javadoc | 0m 13s | root in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 243m 40s | root in the patch passed. | | | | 285m 9s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2261 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 753447dee88d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ea26463a33 | | Default Java | 2020-01-14 | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-hadoop-compat.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-thrift.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-root.txt | | javadoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-hadoop-compat.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2261/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt | | javadoc |
[jira] [Updated] (HBASE-24627) Normalize one table at a time
[ https://issues.apache.org/jira/browse/HBASE-24627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-24627: - Fix Version/s: 3.0.0-alpha-1 > Normalize one table at a time > - > > Key: HBASE-24627 > URL: https://issues.apache.org/jira/browse/HBASE-24627 > Project: HBase > Issue Type: Improvement > Components: Normalizer >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0-alpha-1 > > > Our API and shell command around the Normalizer are an all-or-nothing > invocation. We should support an operator requesting to normalize one table at a time. > One use-case is someone wanting to enable the normalizer for the first > time. It would be nice to do a controlled roll-out of the normalizer: keeping > it disabled at first, calling normalize one table at a time, and then turning > it on after all tables have been normalized. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ndimiduk merged pull request #2215: HBASE-24627 Normalize one table at a time
ndimiduk merged pull request #2215: URL: https://github.com/apache/hbase/pull/2215
[jira] [Resolved] (HBASE-24563) Make hbck chore aware of replica region and check/fix replica region consistency
[ https://issues.apache.org/jira/browse/HBASE-24563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huaxiang Sun resolved HBASE-24563. -- Resolution: Duplicate It is covered by other jiras, no need for this one. > Make hbck chore aware of replica region and check/fix replica region > consistency > > > Key: HBASE-24563 > URL: https://issues.apache.org/jira/browse/HBASE-24563 > Project: HBase > Issue Type: Improvement > Components: read replicas >Affects Versions: 2.3.0 >Reporter: Huaxiang Sun >Assignee: Huaxiang Sun >Priority: Major > > Hbck1 checks/fixes only primary region consistency and ignores replica > regions. In hbase 2, the hbck chore needs to be aware of replica regions and > check their consistency as well. Hbck2 needs to fix replica region > inconsistency.
[GitHub] [hbase] huaxiangsun merged pull request #2250: HBASE-24872 refactor valueOf PoolType
huaxiangsun merged pull request #2250: URL: https://github.com/apache/hbase/pull/2250
[GitHub] [hbase] huaxiangsun commented on pull request #2250: HBASE-24872 refactor valueOf PoolType
huaxiangsun commented on pull request #2250: URL: https://github.com/apache/hbase/pull/2250#issuecomment-675584851 Thanks for explaining, @nyl3532016. So basically, after the Reusable pool type is removed, there is no need to check allowedPoolTypes anymore. Looks good to me.
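The simplification being agreed to above can be sketched roughly as follows. The enum constants and the `fuzzyMatch` method name are illustrative stand-ins, not HBase's actual `PoolMap.PoolType` code: once the Reusable type is gone, every remaining constant is allowed, so a name-to-type lookup no longer needs an allowed-types whitelist and can simply fall back to a default.

```java
// Illustrative sketch (not HBase's actual implementation) of a valueOf-style
// lookup that needs no allowedPoolTypes filter: with the Reusable type
// removed, all remaining constants are valid, so unknown or missing names
// just fall back to the supplied default.
enum PoolType {
  ROUND_ROBIN, THREAD_LOCAL;

  static PoolType fuzzyMatch(String name, PoolType defaultType) {
    if (name == null) {
      return defaultType; // unset configuration value
    }
    try {
      return valueOf(name.trim().toUpperCase());
    } catch (IllegalArgumentException e) {
      return defaultType; // unknown pool type name
    }
  }
}
```

With this shape, a caller that previously filtered the parsed value against a set of allowed types reduces to a single `fuzzyMatch(configuredName, defaultType)` call.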
[GitHub] [hbase] huaxiangsun commented on pull request #2271: HBASE-24897 Assign primary regions firstly when create table with reg…
huaxiangsun commented on pull request #2271: URL: https://github.com/apache/hbase/pull/2271#issuecomment-675574773 Catch the exception here? Wondering if this could happen after the regions are assigned as well.
[GitHub] [hbase] Apache-HBase commented on pull request #2271: HBASE-24897 Assign primary regions firstly when create table with reg…
Apache-HBase commented on pull request #2271: URL: https://github.com/apache/hbase/pull/2271#issuecomment-675573450 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 2s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | prototool | 0m 0s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ branch-2.2 Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 11s | branch-2.2 passed | | +1 :green_heart: | compile | 1m 34s | branch-2.2 passed | | +1 :green_heart: | checkstyle | 1m 30s | branch-2.2 passed | | +1 :green_heart: | shadedjars | 4m 4s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 52s | branch-2.2 passed | | +0 :ok: | spotbugs | 3m 15s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 5m 41s | branch-2.2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 54s | the patch passed | | +1 :green_heart: | compile | 1m 36s | the patch passed | | +1 :green_heart: | cc | 1m 36s | the patch passed | | +1 :green_heart: | javac | 1m 36s | the patch passed | | +1 :green_heart: | checkstyle | 1m 29s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 4m 5s | patch has no errors when building our shaded downstream artifacts. 
| | +1 :green_heart: | hadoopcheck | 25m 9s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 2.10.0 or 3.1.2 3.2.1. | | +1 :green_heart: | hbaseprotoc | 1m 28s | the patch passed | | +1 :green_heart: | javadoc | 0m 50s | the patch passed | | +1 :green_heart: | findbugs | 5m 40s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 41s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 160m 40s | hbase-server in the patch passed. | | +1 :green_heart: | asflicense | 1m 8s | The patch does not generate ASF License warnings. | | | | 231m 34s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2271/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2271 | | Optional Tests | dupname asflicense cc unit hbaseprotoc prototool javac javadoc spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 57f30e136875 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-2271/out/precommit/personality/provided.sh | | git revision | branch-2.2 / 8329591b45 | | Default Java | 1.8.0_181 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2271/1/testReport/ | | Max. process+thread count | 3699 (vs. ulimit of 12500) | | modules | C: hbase-protocol-shaded hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2271/1/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
[GitHub] [hbase] Apache-HBase commented on pull request #2268: HBASE-24874 Fix hbase-shell access to ModifiableTableDescriptor methods
Apache-HBase commented on pull request #2268: URL: https://github.com/apache/hbase/pull/2268#issuecomment-675539262 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 28s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 51s | master passed | | +1 :green_heart: | compile | 0m 49s | master passed | | +1 :green_heart: | shadedjars | 5m 41s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 26s | hbase-client in master failed. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 3s | the patch passed | | +1 :green_heart: | compile | 0m 50s | the patch passed | | +1 :green_heart: | javac | 0m 50s | the patch passed | | +1 :green_heart: | shadedjars | 5m 45s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 25s | hbase-client in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 9s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 7m 4s | hbase-shell in the patch passed. 
| | | | 32m 37s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2268/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2268 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux b8fe12441cd1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ea26463a33 | | Default Java | 2020-01-14 | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2268/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2268/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2268/3/testReport/ | | Max. process+thread count | 2306 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2268/3/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2268: HBASE-24874 Fix hbase-shell access to ModifiableTableDescriptor methods
Apache-HBase commented on pull request #2268: URL: https://github.com/apache/hbase/pull/2268#issuecomment-675538148 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 19s | master passed | | +1 :green_heart: | compile | 0m 46s | master passed | | +1 :green_heart: | shadedjars | 5m 38s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 35s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 23s | the patch passed | | +1 :green_heart: | compile | 0m 45s | the patch passed | | +1 :green_heart: | javac | 0m 45s | the patch passed | | +1 :green_heart: | shadedjars | 5m 35s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 36s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 1s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 6m 58s | hbase-shell in the patch passed. 
| | | | 30m 41s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2268/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2268 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux dded57bf35ca 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ea26463a33 | | Default Java | 1.8.0_232 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2268/3/testReport/ | | Max. process+thread count | 2343 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2268/3/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2268: HBASE-24874 Fix hbase-shell access to ModifiableTableDescriptor methods
Apache-HBase commented on pull request #2268: URL: https://github.com/apache/hbase/pull/2268#issuecomment-675537428 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 23s | master passed | | +1 :green_heart: | checkstyle | 0m 39s | master passed | | +1 :green_heart: | spotbugs | 0m 56s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 20s | the patch passed | | +1 :green_heart: | checkstyle | 0m 37s | the patch passed | | -0 :warning: | rubocop | 0m 18s | The patch generated 31 new + 546 unchanged - 5 fixed = 577 total (was 551) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 10m 57s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 1m 6s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 24s | The patch does not generate ASF License warnings. 
| | | | 29m 28s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2268/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2268 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle rubocop | | uname | Linux 3e67e7c2aca1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ea26463a33 | | rubocop | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2268/3/artifact/yetus-general-check/output/diff-patch-rubocop.txt | | Max. process+thread count | 94 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2268/3/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 rubocop=0.80.0 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24896) 'Stuck' creating RegionInfo instance
[ https://issues.apache.org/jira/browse/HBASE-24896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17179686#comment-17179686 ] Michael Stack commented on HBASE-24896: --- Attached thread dumps. Odd that all threads are in RUNNABLE state, none BLOCKED. Also odd at the moment to me is that we are stuck getting from a Map whose key is a String, but the thread dump shows us doing construction on an interface (RegionInfo).
[jira] [Updated] (HBASE-24896) Deadlock creating RegionInfo instance
[ https://issues.apache.org/jira/browse/HBASE-24896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-24896: -- Attachment: hbasedn192-jstack-2.webarchive hbasedn192-jstack-1.webarchive hbasedn192-jstack-0.webarchive > Deadlock creating RegionInfo instance > - > > Key: HBASE-24896 > URL: https://issues.apache.org/jira/browse/HBASE-24896 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.1 >Reporter: Michael Stack >Priority: Major > Attachments: hbasedn192-jstack-0.webarchive, > hbasedn192-jstack-1.webarchive, hbasedn192-jstack-2.webarchive > > > We ran into the following deadlocked server in testing. The priority handlers > seem stuck across multiple thread dumps. Seven of the ten total priority > threads have this state: > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16020" #82 daemon > prio=5 os_prio=0 cpu=0.70ms elapsed=315627.86s allocated=3744B > defined_classes=0 tid=0x7f3da0983040 nid=0x62d9 in Object.wait() > [0x7f3d9bc8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:3143) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3478) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44858) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) > {code} > The anomalous three are as follows: > h3. 
#1 > {code:java} > "RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=16020" #77 daemon > prio=5 os_prio=0 cpu=175.98ms elapsed=315627.86s allocated=2153K > defined_classes=14 tid=0x7f3da0ae6ec0 nid=0x62d4 in Object.wait() > [0x7f3d9c19] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfo.(RegionInfo.java:72) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3327) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1491) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2912) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44856) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318){code} > ...which is the creation of the UNDEFINED in RegionInfo here: > {color:#808000}@InterfaceAudience.Public{color}{color:#80}public > interface {color}RegionInfo {color:#80}extends > {color}Comparable { > RegionInfo {color:#660e7a}UNDEFINED {color}= > RegionInfoBuilder.newBuilder(TableName.valueOf({color:#008000}"__UNDEFINED__"{color})).build(); > > h3. 
#2 > {code:java} > "RpcServer.priority.RWQ.Fifo.read.handler=4,queue=1,port=16020" #81 daemon > prio=5 os_prio=0 cpu=53.85ms elapsed=315627.86s allocated=81984B > defined_classes=3 tid=0x7f3da0981590 nid=0x62d8 in Object.wait() > [0x7f3d9bd8c000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hbase.client.RegionInfoBuilder.<clinit>(RegionInfoBuilder.java:49) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toRegionInfo(ProtobufUtil.java:3231) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.executeOpenRegionProcedures(RSRpcServices.java:3755) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.lambda$executeProcedures$2(RSRpcServices.java:3827) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices$$Lambda$173/0x0017c0e40040.accept(Unknown > Source) > at java.util.ArrayList.forEach(java.base@11.0.6/ArrayList.java:1540) > at > java.util.Collections$UnmodifiableCollection.forEach(java.base@11.0.6/Collections.java:1085) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.executeProcedures(RSRpcServices.java:3827) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:34896) > at
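The pattern in the dumps above is consistent with a JVM class-initialization cycle: RegionInfo's static UNDEFINED field runs RegionInfoBuilder's `<clinit>`, while the builder's own static initialization refers back to the first class. A minimal sketch of that hazard follows; the classes here are hypothetical stand-ins, not HBase code. Threads stuck on a class-initialization lock show up in HotSpot dumps as RUNNABLE "in Object.wait()", matching the handlers above.

```java
public class InitCycleDemo {
  static class RegionInfoLike {
    // <clinit> of RegionInfoLike requires BuilderLike to be initialized.
    static final RegionInfoLike UNDEFINED = BuilderLike.build("__UNDEFINED__");
    final String name;
    RegionInfoLike(String name) { this.name = name; }
  }

  static class BuilderLike {
    // <clinit> of BuilderLike constructs a RegionInfoLike, which requires
    // RegionInfoLike's initialization: a cycle between the two
    // class-initialization locks.
    static final RegionInfoLike DEFAULT = new RegionInfoLike("default");
    static RegionInfoLike build(String name) { return new RegionInfoLike(name); }
  }

  public static void main(String[] args) {
    // From a single thread the JVM permits recursive initialization, so this
    // completes and prints "__UNDEFINED__". If two threads race into the two
    // <clinit>s concurrently, each can block on the other's init lock forever.
    System.out.println(RegionInfoLike.UNDEFINED.name);
  }
}
```

Running this single-threaded is safe; the deadlock only arises when two threads begin initializing the two classes at the same time, which is exactly what concurrent RPC handlers can do on a fresh regionserver.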
[jira] [Updated] (HBASE-24896) 'Stuck' creating RegionInfo instance
[ https://issues.apache.org/jira/browse/HBASE-24896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-24896: -- Summary: 'Stuck' creating RegionInfo instance (was: Deadlock creating RegionInfo instance) > 'Stuck' creating RegionInfo instance > > > Key: HBASE-24896 > URL: https://issues.apache.org/jira/browse/HBASE-24896 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.1 >Reporter: Michael Stack >Priority: Major > Attachments: hbasedn192-jstack-0.webarchive, > hbasedn192-jstack-1.webarchive, hbasedn192-jstack-2.webarchive
[jira] [Commented] (HBASE-24898) Can not set 23:00~24:00 as offpeak hour now
[ https://issues.apache.org/jira/browse/HBASE-24898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17179680#comment-17179680 ] Zheng Wang commented on HBASE-24898: {quote}Just use 0-0? And I think you could use n-n, where n is [0, 23]? {quote} 0 <= targetHour && targetHour < 0? That does not seem to make sense; the condition can never be true. {quote}And what is the usage of setting all hours as off peak? Isn't it the same with all hours as peak? {quote} It is used in TestStochasticLoadBalancer.testMoveCostMultiplier, which should use a lower multiplier in offpeak. > Can not set 23:00~24:00 as offpeak hour now > --- > > Key: HBASE-24898 > URL: https://issues.apache.org/jira/browse/HBASE-24898 > Project: HBase > Issue Type: Bug > Components: tooling >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > > The valid number in OffPeakHours is 0-23, and the end hour is not included, so > we cannot set 23:00-24:00 as an offpeak hour now. > My proposal is to just change the valid number from 0-23 to 0-24; then we can > easily apply this PR to all active branches, and folks do not need to change > their configuration. -- This message was sent by Atlassian Jira (v8.3.4#803005)
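The half-open check discussed above can be sketched as follows. This is an illustrative simplification, not the actual OffPeakHours implementation: with the end hour capped at 23 and the end exclusive, hour 23 can never satisfy `targetHour < endHour`, while allowing 24 as an end value fixes it.

```java
public class OffPeakSketch {
  // Returns true if targetHour falls in [startHour, endHour),
  // wrapping past midnight when startHour > endHour (e.g. 22-2).
  static boolean isOffPeak(int startHour, int endHour, int targetHour) {
    if (startHour <= endHour) {
      return startHour <= targetHour && targetHour < endHour;
    }
    return targetHour >= startHour || targetHour < endHour;
  }

  public static void main(String[] args) {
    // With end capped at 23, hour 23 is unreachable: 23 < 23 is false.
    System.out.println(isOffPeak(0, 23, 23));  // false
    // Permitting 24 as the exclusive end expresses 23:00-24:00.
    System.out.println(isOffPeak(23, 24, 23)); // true
  }
}
```

This also shows why "0-0" does not mean "all hours": with an exclusive end, start == end describes an empty window, not a full one.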