[GitHub] [hadoop] hadoop-yetus commented on pull request #5660: HDFS-17014. HttpFS Add Support getStatus API
hadoop-yetus commented on PR #5660: URL: https://github.com/apache/hadoop/pull/5660#issuecomment-1555658752

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 12m 11s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 4s | | trunk passed |
| +1 :green_heart: | compile | 0m 30s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 0m 28s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | checkstyle | 0m 36s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 44s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 33s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 1m 3s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 27s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 20s | | the patch passed |
| +1 :green_heart: | compile | 0m 21s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 0m 21s | | the patch passed |
| +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | javac | 0m 19s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 19s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 27s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 0m 48s | | the patch passed |
| +1 :green_heart: | shadedclient | 19m 55s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 5m 38s | | hadoop-hdfs-httpfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. |
| | | | 103m 3s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5660/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5660 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 5c277d873bdc 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 87679d5277e49ed4ef1d39d766da452cb25aa095 |
| Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5660/5/testReport/ |
| Max. process+thread count | 839 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5660/5/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] haiyang1987 commented on pull request #5667: HDFS-17017. Fix the issue of arguments number limit in report command in DFSAdmin
haiyang1987 commented on PR #5667: URL: https://github.com/apache/hadoop/pull/5667#issuecomment-1555609612

Updated the PR. Please @ayushtkn @virajjasani help review it again, thanks!

- To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #5678: HDFS-17022. Fix the exception message to print the Identifier pattern
hadoop-yetus commented on PR #5678: URL: https://github.com/apache/hadoop/pull/5678#issuecomment-155575

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 16m 38s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 36m 10s | | trunk passed |
| +1 :green_heart: | compile | 0m 25s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 0m 24s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | checkstyle | 0m 28s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 38s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 23s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 0m 57s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 19s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 19s | | the patch passed |
| +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 0m 19s | | the patch passed |
| +1 :green_heart: | compile | 0m 17s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | javac | 0m 17s | | the patch passed |
| +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 14s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 26s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 17s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 0m 48s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 11s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 5m 14s | | hadoop-hdfs-httpfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. |
| | | | 115m 31s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5678/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5678 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux a0e92d075eeb 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 422d63c373655d99f00dc5af4a1a573e8e335387 |
| Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5678/2/testReport/ |
| Max. process+thread count | 815 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5678/2/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] zhtttylz commented on a diff in pull request #5660: HDFS-17014. HttpFS Add Support getStatus API
zhtttylz commented on code in PR #5660: URL: https://github.com/apache/hadoop/pull/5660#discussion_r1199540321

## hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java:

@@ -2081,6 +2085,32 @@ private void testGetFileLinkStatus() throws Exception {
     assertTrue(fs.getFileLinkStatus(linkToFile).isSymlink());
   }

+  private void testGetStatus() throws Exception {
+    if (isLocalFS()) {
+      // do not test the getStatus for local FS.
+      return;
+    }
+    final Path path = new Path("/foo");
+    FileSystem fs = FileSystem.get(path.toUri(), this.getProxiedFSConf());
+    if (fs instanceof DistributedFileSystem) {
+      DistributedFileSystem dfs =
+          (DistributedFileSystem) FileSystem.get(path.toUri(), this.getProxiedFSConf());
+      FileSystem httpFs = this.getHttpFSFileSystem();
+
+      FsStatus dfsFsStatus = dfs.getStatus(path);
+      FsStatus httpFsStatus = httpFs.getStatus(path);
+
+      // Validate used, free and capacity are the same as DistributedFileSystem
+      assertEquals(dfsFsStatus.getUsed(), httpFsStatus.getUsed());
+      assertEquals(dfsFsStatus.getRemaining(), httpFsStatus.getRemaining());
+      assertEquals(dfsFsStatus.getCapacity(), httpFsStatus.getCapacity());
+      httpFs.close();
+      dfs.close();
+    } else {
+      Assert.fail(fs.getClass().getSimpleName() + " is not of type DistributedFileSystem.");
+    }

Review Comment: Thank you very much for your valuable suggestion. I genuinely appreciate it and will make the necessary adjustments to the code promptly.
[jira] [Commented] (HADOOP-18709) Add curator based ZooKeeper communication support over SSL/TLS into the common library
[ https://issues.apache.org/jira/browse/HADOOP-18709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724475#comment-17724475 ] ASF GitHub Bot commented on HADOOP-18709: - szilard-nemeth commented on PR #5638: URL: https://github.com/apache/hadoop/pull/5638#issuecomment-1555435928

Hi @ferdelyi, I added a couple of review comments. Could you please also add comments to the test class (e.g. in the javadoc) about how the added certificate, keystore and truststore files were generated? For example, you could add the commands that created those files. As a reader of the test class, I wouldn't have any idea how those files got there, and if any issue comes up in the future, the javadoc would tell. Thanks.

> Add curator based ZooKeeper communication support over SSL/TLS into the common library
>
>                 Key: HADOOP-18709
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18709
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Ferenc Erdelyi
>            Assignee: Ferenc Erdelyi
>            Priority: Major
>              Labels: pull-request-available
>
> With HADOOP-16579 the ZooKeeper client is capable of securing communication with SSL.
> To follow the convention introduced in HADOOP-14741, proposing to add to the core-default.xml the following configurations, as the groundwork for the components to enable encrypted communication between the individual components and ZooKeeper:
> * hadoop.zk.ssl.keystore.location
> * hadoop.zk.ssl.keystore.password
> * hadoop.zk.ssl.truststore.location
> * hadoop.zk.ssl.truststore.password
> These parameters, along with the component-specific ssl.client.enable option (e.g. yarn.zookeeper.ssl.client.enable), should be passed to the ZKCuratorManager to build the CuratorFramework. The ZKCuratorManager needs a new overloaded start() method to build the encrypted communication.
> * The secured ZK Client uses Netty, hence the dependency is included in the pom.xml. Added netty-handler and netty-transport-native-epoll dependencies to the pom.xml based on ZOOKEEPER-3494 - "No need to depend on netty-all (SSL)".
> * The change was exclusively tested with the unit test, which is a kind of integration test, as a ZK Server was brought up and the communication tested between the client and the server.
> * This code change is in the common code base and there is no component calling it yet. Once YARN-11468 - "Zookeeper SSL/TLS support" is implemented, we can test it in a real cluster environment.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
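As a sketch, the four keys proposed in the issue description might look like this in core-default.xml. The property names come from the issue; the empty default values and the description wording are illustrative assumptions, not the committed text:

```xml
<!-- Sketch only: property names taken from the HADOOP-18709 description;
     empty defaults and description text are illustrative, not the committed wording. -->
<property>
  <name>hadoop.zk.ssl.keystore.location</name>
  <value></value>
  <description>Keystore location for the ZooKeeper client connection,
  used when the component-specific ssl.client.enable flag is set.</description>
</property>
<property>
  <name>hadoop.zk.ssl.keystore.password</name>
  <value></value>
</property>
<property>
  <name>hadoop.zk.ssl.truststore.location</name>
  <value></value>
</property>
<property>
  <name>hadoop.zk.ssl.truststore.password</name>
  <value></value>
</property>
```

A component would then pass these values (plus its own ssl.client.enable setting) to ZKCuratorManager when building the CuratorFramework, per the description above.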
[jira] [Commented] (HADOOP-18709) Add curator based ZooKeeper communication support over SSL/TLS into the common library
[ https://issues.apache.org/jira/browse/HADOOP-18709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724474#comment-17724474 ] ASF GitHub Bot commented on HADOOP-18709: - szilard-nemeth commented on PR #5638: URL: https://github.com/apache/hadoop/pull/5638#issuecomment-1555433823

> Thank you Szilard for the CR.
>
> The change was exclusively tested with the unit test, which is a kind of integration test, as a ZK Server was brought up and the communication tested between the client and the server.
>
> This code change is in the common code base and there is no component calling it yet. Once [YARN-11468](https://issues.apache.org/jira/browse/YARN-11468) [Zookeeper SSL/TLS support] is implemented, we can test it in a real cluster environment.
>
> Wondering if we should update the [hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs-rbf/dependency-analysis.html](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs-rbf/dependency-analysis.html) page with the Netty dependency? The parameter descriptions are added to the commit to the core-default.xml.

I see, thanks for the info. Didn't know about the YARN jira. I don't think you need to update the dependency report; TBH I never updated it and I don't know how it's generated. Probably copied from the output of some script? Our codebase might have a reference to this somewhere, in markdown files.
[GitHub] [hadoop] szilard-nemeth commented on a diff in pull request #5638: HADOOP-18709. Add curator based ZooKeeper communication support over…
szilard-nemeth commented on code in PR #5638: URL: https://github.com/apache/hadoop/pull/5638#discussion_r1199527654

## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ZKCuratorManager.java:

@@ -503,4 +644,50 @@ private void setJaasConfiguration(ZKClientConfig zkClientConfig) throws IOExcept
     zkClientConfig.setProperty(ZKClientConfig.LOGIN_CONTEXT_NAME_KEY, JAAS_CLIENT_ENTRY);
     }
   }
-}
\ No newline at end of file
+
+  /**
+   * Helper class to contain the Truststore/Keystore paths for the ZK client connection over
+   * SSL/TLS.
+   */
+  public static class TruststoreKeystore {
+    private static String keystoreLocation;
+    private static String keystorePassword;
+    private static String truststoreLocation;
+    private static String truststorePassword;
+    /** Configuration for the ZooKeeper connection when SSL/TLS is enabled.
+     * When a value is not configured, ensure that empty string is set instead of null.
+     * @param conf ZooKeeper Client configuration
+     */
+    public TruststoreKeystore(Configuration conf) {
+
+      keystoreLocation =
+          StringUtils.defaultString(conf.get(CommonConfigurationKeys.ZK_SSL_KEYSTORE_LOCATION,

Review Comment: Why the StringUtils.defaultString is needed? I mean, conf.get() will return an empty string if the config is not found, given that you passed empty strings for all conf.get calls already.

## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/curator/TestSecureZKCuratorManager.java:

@@ -0,0 +1,192 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.util.curator;
+
+import org.apache.curator.test.InstanceSpec;
+import org.apache.curator.test.TestingServer;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.zookeeper.ZooKeeper;
+import org.apache.zookeeper.client.ZKClientConfig;
+import org.apache.zookeeper.common.ClientX509Util;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.File;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.apache.hadoop.fs.FileContext.LOG;
+import static org.junit.Assert.assertEquals;
+
+
+/**
+ * Test the manager for ZooKeeper Curator when SSL/TLS is enabled for the ZK server-client
+ * connection.
+ */
+public class TestSecureZKCuratorManager {
+
+  private TestingServer server;
+  private ZKCuratorManager curator;
+  private Configuration hadoopConf;
+  static final Integer SECURE_CLIENT_PORT = 2281;
+  static final Integer JUTE_MAXBUFFER = 4;
+  static final File ZK_DATA_DIR = new File("testZkSSLClientConnectionDataDir");
+
+  @Before
+  public void setup() throws Exception {
+    Integer defaultValue = -1;
+    Map customConfiguration = new HashMap<>();
+    customConfiguration.put("secureClientPort", SECURE_CLIENT_PORT.toString());
+    customConfiguration.put("audit.enable", true);
+    this.hadoopConf = setUpSecure();
+    InstanceSpec spec = new InstanceSpec(ZK_DATA_DIR, SECURE_CLIENT_PORT,
+        defaultValue,
+        defaultValue,
+        true,

Review Comment: Extracting these (at least 1, 100, and 10) to static finals would make this more readable and straightforward.

## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ZKCuratorManager.java:

@@ -452,21 +502,50 @@ public static class HadoopZookeeperFactory implements ZookeeperFactory {
     private final String zkPrincipal;
     private final String kerberosPrincipal;
     private final String kerberosKeytab;
+    private final Boolean sslEnabled;
+    /**
+     * Constructor for the helper class to configure the ZooKeeper client connection.
+     * @param zkPrincipal Optional.
+     */
     public HadoopZookeeperFactory(String zkPrincipal) {
       this(zkPrincipal, null, null);
     }
-
+    /**
+     * Constructor for the helper class to configure the ZooKeeper client connection.
+     * @param zkPrincipal Optional.
+     * @param kerberosPrincipal Optional. Use along with kerberosKeytab.
+     * @param kerbero
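The first review question above, about StringUtils.defaultString being redundant, comes down to how a get-with-default behaves. A small plain-Java stand-in (the `get` helper below is hypothetical and only mimics a Configuration-style lookup, it is not Hadoop's class) shows why the extra null-to-empty guard can never fire when the supplied default is already an empty string:

```java
import java.util.Properties;

public class DefaultStringRedundancy {

    // Hypothetical stand-in for a Configuration-style get(key, defaultValue):
    // when the key is absent, the caller-supplied default is returned directly.
    static String get(Properties conf, String key, String defaultValue) {
        String value = conf.getProperty(key);
        return value != null ? value : defaultValue;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();  // no ZK SSL keys configured
        String keystoreLocation = get(conf, "hadoop.zk.ssl.keystore.location", "");
        // The "" default already guarantees a non-null result, so wrapping the
        // call in a null-to-empty helper such as StringUtils.defaultString is a no-op.
        System.out.println(keystoreLocation.isEmpty());  // prints "true"
    }
}
```

Since the lookup can only return the stored value or the literal `""` passed in, the outer defaultString wrapper adds no behavior, which is the reviewer's point.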
[jira] [Commented] (HADOOP-18709) Add curator based ZooKeeper communication support over SSL/TLS into the common library
[ https://issues.apache.org/jira/browse/HADOOP-18709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724469#comment-17724469 ]

ASF GitHub Bot commented on HADOOP-18709:
-

szilard-nemeth commented on code in PR #5638:
URL: https://github.com/apache/hadoop/pull/5638#discussion_r1199521345

## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ZKCuratorManager.java:
## @@ -478,10 +558,53 @@ public ZooKeeper newZooKeeper(String connectString, int sessionTimeout,
       if (zkClientConfig.isSaslClientEnabled() && !isJaasConfigurationSet(zkClientConfig)) {
         setJaasConfiguration(zkClientConfig);
       }
+      if (sslEnabled) {
+        setSslConfiguration(zkClientConfig);
+      }
       return new ZooKeeper(connectString, sessionTimeout, watcher, canBeReadOnly,
           zkClientConfig);
     }
+
+    /**
+     * Configure ZooKeeper Client with SSL/TLS connection.
+     * @param zkClientConfig ZooKeeper Client configuration
+     * */
+    private void setSslConfiguration(ZKClientConfig zkClientConfig) throws ConfigurationException {
+      this.setSslConfiguration(zkClientConfig, new ClientX509Util());
+    }
+
+    public void setSslConfiguration(ZKClientConfig zkClientConfig, ClientX509Util x509Util)

Review Comment:
   There was also a missing one before the method validateSslConfiguration.
   Can you fix the javadoc, as it is not starting with /** (but with /*)? Something must be odd with your formatter.

> Add curator based ZooKeeper communication support over SSL/TLS into the
> common library
> --
>
>                 Key: HADOOP-18709
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18709
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Ferenc Erdelyi
>            Assignee: Ferenc Erdelyi
>            Priority: Major
>              Labels: pull-request-available
>
> With HADOOP-16579 the ZooKeeper client is capable of securing communication
> with SSL.
> To follow the convention introduced in HADOOP-14741, proposing to add to the
> core-default.xml the following configurations, as the groundwork for the
> components to enable encrypted communication between the individual
> components and ZooKeeper:
> * hadoop.zk.ssl.keystore.location
> * hadoop.zk.ssl.keystore.password
> * hadoop.zk.ssl.truststore.location
> * hadoop.zk.ssl.truststore.password
> These parameters along with the component-specific ssl.client.enable option
> (e.g. yarn.zookeeper.ssl.client.enable) should be passed to the
> ZKCuratorManager to build the CuratorFramework. The ZKCuratorManager needs a
> new overloaded start() method to build the encrypted communication.
> * The secured ZK Client uses Netty, hence the dependency is included in the
> pom.xml. Added netty-handler and netty-transport-native-epoll dependency to
> the pom.xml based on ZOOKEEPER-3494 - "No need to depend on netty-all (SSL)".
> * The change was exclusively tested with the unit test, which is a kind of
> integration test, as a ZK Server was brought up and the communication tested
> between the client and the server.
> * This code change is in the common code base and there is no component
> calling it yet. Once YARN-11468 - "Zookeeper SSL/TLS support" is implemented,
> we can test it in a real cluster environment.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
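The workflow in the issue description — a component reads its own ssl.client.enable flag plus the four hadoop.zk.ssl.* keys, then asks ZKCuratorManager's new overloaded start() for an encrypted connection — can be sketched as follows. This is an illustration only: the `SecureZkStarter` class and the use of `java.util.Properties` as a stand-in for Hadoop's `Configuration` are assumptions, not Hadoop API; only the key names come from the description above.

```java
import java.util.Properties;

// Hypothetical sketch of the wiring described in the issue. Only the four
// hadoop.zk.ssl.* key names are taken from the description; everything else
// here is an illustrative stand-in.
class SecureZkStarter {
  static final String[] SSL_KEYS = {
      "hadoop.zk.ssl.keystore.location",
      "hadoop.zk.ssl.keystore.password",
      "hadoop.zk.ssl.truststore.location",
      "hadoop.zk.ssl.truststore.password",
  };

  /** Would the overloaded start(..., sslEnabled) be called with sslEnabled=true? */
  static boolean shouldStartSecure(Properties conf, String componentEnableKey) {
    return Boolean.parseBoolean(conf.getProperty(componentEnableKey, "false"));
  }

  /** Returns the first SSL key that is unset or empty, or null if all four are set. */
  static String firstMissingSslKey(Properties conf) {
    for (String key : SSL_KEYS) {
      String value = conf.getProperty(key);
      if (value == null || value.isEmpty()) {
        return key;
      }
    }
    return null;
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    conf.setProperty("yarn.zookeeper.ssl.client.enable", "true");
    System.out.println(shouldStartSecure(conf, "yarn.zookeeper.ssl.client.enable"));
    System.out.println(firstMissingSslKey(conf));
  }
}
```

Keeping the enable flag component-specific (e.g. yarn.zookeeper.ssl.client.enable) while sharing the four store keys is what lets each component opt in independently.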
[jira] [Commented] (HADOOP-18709) Add curator based ZooKeeper communication support over SSL/TLS into the common library
[ https://issues.apache.org/jira/browse/HADOOP-18709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724468#comment-17724468 ]

ASF GitHub Bot commented on HADOOP-18709:
-

szilard-nemeth commented on code in PR #5638:
URL: https://github.com/apache/hadoop/pull/5638#discussion_r1199522108

## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/curator/TestSecureZKCuratorManager.java:
## @@ -0,0 +1,157 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.util.curator;
+
+import org.apache.curator.test.InstanceSpec;
+import org.apache.curator.test.TestingServer;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.zookeeper.ZooKeeper;
+import org.apache.zookeeper.client.ZKClientConfig;
+import org.apache.zookeeper.common.ClientX509Util;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.File;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.apache.hadoop.fs.FileContext.LOG;
+import static org.junit.Assert.assertEquals;
+
+
+/**
+ * Test the manager for ZooKeeper Curator when SSL/TLS is enabled for the ZK
+ * server-client connection.
+ */
+public class TestSecureZKCuratorManager {
+
+  private TestingServer server;
+  private ZKCuratorManager curator;
+  private Configuration hadoopConf;
+  private Integer secureClientPort = 2281;
+  private File zkDataDir = new File("testZkSSLClientConnectionDataDir");
+
+  @Before
+  public void setup() throws Exception {
+    //set zkServer
+    this.hadoopConf = setUpSecure();
+    Map customConfiguration = new HashMap<>();
+    customConfiguration.put("secureClientPort", this.secureClientPort.toString());
+    customConfiguration.put("audit.enable", true);
+
+    InstanceSpec spec = new InstanceSpec(
+        this.zkDataDir,
+        this.secureClientPort,
+        -1,
+        -1,
+        true,
+        1,
+        100,
+        10,
+        customConfiguration);
+    this.server = new TestingServer(spec, true);
+    hadoopConf.set(CommonConfigurationKeys.ZK_ADDRESS, this.server.getConnectString());
+    this.curator = new ZKCuratorManager(hadoopConf);
+    this.curator.start(new ArrayList<>(), true);
+  }
+
+  public Configuration setUpSecure() throws Exception {
+    Configuration hadoopConf = new Configuration();
+    String testDataPath = "src/test/java/org/apache/hadoop/util/curator/resources/data";
+    System.setProperty("zookeeper.serverCnxnFactory", "org.apache.zookeeper.server.NettyServerCnxnFactory");
+    //System.setProperty("zookeeper.client.secure", "true");
+
+    System.setProperty("zookeeper.ssl.keyStore.location", testDataPath + "/ssl/keystore.jks");
+    System.setProperty("zookeeper.ssl.keyStore.password", "password");
+    System.setProperty("zookeeper.ssl.trustStore.location", testDataPath + "/ssl/truststore.jks");
+    System.setProperty("zookeeper.ssl.trustStore.password", "password");
+    System.setProperty("zookeeper.request.timeout", "12345");
+
+    System.setProperty("jute.maxbuffer", "469296129");
+
+    System.setProperty("javax.net.debug", "ssl");
+    System.setProperty("zookeeper.authProvider.x509", "org.apache.zookeeper.server.auth.X509AuthenticationProvider");
+
+    hadoopConf.set(CommonConfigurationKeys.ZK_SSL_KEYSTORE_LOCATION, testDataPath + "/ssl/keystore.jks");
+    hadoopConf.set(CommonConfigurationKeys.ZK_SSL_KEYSTORE_PASSWORD, "password");
+    hadoopConf.set(CommonConfigurationKeys.ZK_SSL_TRUSTSTORE_LOCATION, testDataPath + "/ssl/truststore.jks");
+    hadoopConf.set(CommonConfigurationKeys.ZK_SSL_TRUSTSTORE_PASSWORD, "password");
+    return hadoopConf;
+  }
+
+  @After
+  public void teardown() throws Exception {
+    this.curator.close();
+    if (this.server != null) {
+      this.server.close();
+      this.server = null;

Review Comment:
   As the setup is annotated with @Before, it always initializes this.server with an Object. I don't think setting it to null makes anything better. Let's leave it as it is, though.
[jira] [Commented] (HADOOP-18709) Add curator based ZooKeeper communication support over SSL/TLS into the common library
[ https://issues.apache.org/jira/browse/HADOOP-18709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724467#comment-17724467 ]

ASF GitHub Bot commented on HADOOP-18709:
-

szilard-nemeth commented on code in PR #5638:
URL: https://github.com/apache/hadoop/pull/5638#discussion_r1199521669

## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/curator/TestSecureZKCuratorManager.java:
## @@ -0,0 +1,157 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.util.curator;
+
+import org.apache.curator.test.InstanceSpec;
+import org.apache.curator.test.TestingServer;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.zookeeper.ZooKeeper;
+import org.apache.zookeeper.client.ZKClientConfig;
+import org.apache.zookeeper.common.ClientX509Util;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.File;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.apache.hadoop.fs.FileContext.LOG;
+import static org.junit.Assert.assertEquals;
+
+
+/**
+ * Test the manager for ZooKeeper Curator when SSL/TLS is enabled for the ZK
+ * server-client connection.
+ */
+public class TestSecureZKCuratorManager {
+
+  private TestingServer server;
+  private ZKCuratorManager curator;
+  private Configuration hadoopConf;
+  private Integer secureClientPort = 2281;
+  private File zkDataDir = new File("testZkSSLClientConnectionDataDir");
+
+  @Before
+  public void setup() throws Exception {
+    //set zkServer
+    this.hadoopConf = setUpSecure();
+    Map customConfiguration = new HashMap<>();
+    customConfiguration.put("secureClientPort", this.secureClientPort.toString());
+    customConfiguration.put("audit.enable", true);
+
+    InstanceSpec spec = new InstanceSpec(
+        this.zkDataDir,
+        this.secureClientPort,
+        -1,
+        -1,
+        true,
+        1,
+        100,
+        10,
+        customConfiguration);
+    this.server = new TestingServer(spec, true);
+    hadoopConf.set(CommonConfigurationKeys.ZK_ADDRESS, this.server.getConnectString());
+    this.curator = new ZKCuratorManager(hadoopConf);
+    this.curator.start(new ArrayList<>(), true);
+  }
+
+  public Configuration setUpSecure() throws Exception {
+    Configuration hadoopConf = new Configuration();
+    String testDataPath = "src/test/java/org/apache/hadoop/util/curator/resources/data";
+    System.setProperty("zookeeper.serverCnxnFactory", "org.apache.zookeeper.server.NettyServerCnxnFactory");
+    //System.setProperty("zookeeper.client.secure", "true");
+
+    System.setProperty("zookeeper.ssl.keyStore.location", testDataPath + "/ssl/keystore.jks");
+    System.setProperty("zookeeper.ssl.keyStore.password", "password");
+    System.setProperty("zookeeper.ssl.trustStore.location", testDataPath + "/ssl/truststore.jks");
+    System.setProperty("zookeeper.ssl.trustStore.password", "password");
+    System.setProperty("zookeeper.request.timeout", "12345");
+
+    System.setProperty("jute.maxbuffer", "469296129");

Review Comment:
   I see. thanks
[jira] [Commented] (HADOOP-18709) Add curator based ZooKeeper communication support over SSL/TLS into the common library
[ https://issues.apache.org/jira/browse/HADOOP-18709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724466#comment-17724466 ]

ASF GitHub Bot commented on HADOOP-18709:
-

szilard-nemeth commented on code in PR #5638:
URL: https://github.com/apache/hadoop/pull/5638#discussion_r1199521447

## hadoop-common-project/hadoop-common/pom.xml:
## @@ -342,6 +342,14 @@
+

Review Comment:
   Cool, thanks :)
[jira] [Commented] (HADOOP-18709) Add curator based ZooKeeper communication support over SSL/TLS into the common library
[ https://issues.apache.org/jira/browse/HADOOP-18709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724465#comment-17724465 ]

ASF GitHub Bot commented on HADOOP-18709:
-

szilard-nemeth commented on code in PR #5638:
URL: https://github.com/apache/hadoop/pull/5638#discussion_r1199521345

## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ZKCuratorManager.java:
## @@ -478,10 +558,53 @@ public ZooKeeper newZooKeeper(String connectString, int sessionTimeout,
       if (zkClientConfig.isSaslClientEnabled() && !isJaasConfigurationSet(zkClientConfig)) {
         setJaasConfiguration(zkClientConfig);
       }
+      if (sslEnabled) {
+        setSslConfiguration(zkClientConfig);
+      }
       return new ZooKeeper(connectString, sessionTimeout, watcher, canBeReadOnly,
           zkClientConfig);
     }
+
+    /**
+     * Configure ZooKeeper Client with SSL/TLS connection.
+     * @param zkClientConfig ZooKeeper Client configuration
+     * */
+    private void setSslConfiguration(ZKClientConfig zkClientConfig) throws ConfigurationException {
+      this.setSslConfiguration(zkClientConfig, new ClientX509Util());
+    }
+
+    public void setSslConfiguration(ZKClientConfig zkClientConfig, ClientX509Util x509Util)

Review Comment:
   There was also a missing one before the method validateSslConfiguration.
   I fixed the javadoc, as it was not starting with /** (but with /*), before committing the change. Something must be odd with your formatter.
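For context on the formatting point in the review above: only comments opening with /** are treated as Javadoc; a comment opening with /* compiles identically but is silently ignored by the javadoc tool. A minimal illustration (the class and method names are hypothetical):

```java
// Illustrative only: the first comment is real Javadoc (delimiter /**),
// the second is a plain block comment (delimiter /*) that documentation
// tooling ignores even though it looks like Javadoc.
class JavadocStyle {

  /**
   * Picked up by the javadoc tool: the comment starts with slash-star-star.
   * @param zkClientConfig shown in the generated documentation
   */
  static String documented(String zkClientConfig) {
    return zkClientConfig;
  }

  /*
   * NOT picked up by the javadoc tool: the comment starts with slash-star only.
   * This is the kind of comment the review above asks to fix.
   */
  static String undocumented(String zkClientConfig) {
    return zkClientConfig;
  }

  public static void main(String[] args) {
    System.out.println(documented("ok"));
  }
}
```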
[jira] [Commented] (HADOOP-18709) Add curator based ZooKeeper communication support over SSL/TLS into the common library
[ https://issues.apache.org/jira/browse/HADOOP-18709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724464#comment-17724464 ] ASF GitHub Bot commented on HADOOP-18709: - szilard-nemeth commented on code in PR #5638: URL: https://github.com/apache/hadoop/pull/5638#discussion_r1199521127 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ZKCuratorManager.java: ## @@ -157,12 +175,44 @@ public void start(List authInfos) throws IOException { authInfos.add(new AuthInfo(zkAuth.getScheme(), zkAuth.getAuth())); } +/* Pre-check on SSL/TLS client connection requirements to emit the name of the +configuration missing. It improves supportability. */ +if(sslEnabled) { + if (StringUtils.isEmpty(conf.get(CommonConfigurationKeys.ZK_SSL_KEYSTORE_LOCATION))) { +throw new ConfigurationException( Review Comment: I meant a method called something like validateX(String confKey) that throws the exception. The exception message is repeated 4 times, but it's not the end of the world if we don't do this. It's okay how it is now :) > Add curator based ZooKeeper communication support over SSL/TLS into the > common library > -- > > Key: HADOOP-18709 > URL: https://issues.apache.org/jira/browse/HADOOP-18709 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ferenc Erdelyi >Assignee: Ferenc Erdelyi >Priority: Major > Labels: pull-request-available > > With HADOOP-16579 the ZooKeeper client is capable of securing communication > with SSL. 
> To follow the convention introduced in HADOOP-14741, proposing to add to the
> core-default.xml the following configurations, as the groundwork for the
> components to enable encrypted communication between the individual
> components and ZooKeeper:
> * hadoop.zk.ssl.keystore.location
> * hadoop.zk.ssl.keystore.password
> * hadoop.zk.ssl.truststore.location
> * hadoop.zk.ssl.truststore.password
> These parameters along with the component-specific ssl.client.enable option
> (e.g. yarn.zookeeper.ssl.client.enable) should be passed to the
> ZKCuratorManager to build the CuratorFramework. The ZKCuratorManager needs a
> new overloaded start() method to build the encrypted communication.
> * The secured ZK Client uses Netty, hence the dependency is included in the
> pom.xml. Added netty-handler and netty-transport-native-epoll dependency to
> the pom.xml based on ZOOKEEPER-3494 - "No need to depend on netty-all (SSL)".
> * The change was exclusively tested with the unit test, which is a kind of
> integration test, as a ZK Server was brought up and the communication tested
> between the client and the server.
> * This code change is in the common code base and there is no component
> calling it yet. Once YARN-11468 - "Zookeeper SSL/TLS support" is implemented,
> we can test it in a real cluster environment.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
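The reviewer's suggestion above — a single validateX(String confKey)-style helper so the exception message is written once instead of four times — could be sketched as follows. The class, method, and message text are illustrative, not the code that was committed; the real patch throws ConfigurationException, while this self-contained version uses IllegalArgumentException:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the review suggestion: one helper that throws for
// a missing configuration key, so the message is not repeated per key.
public class ZkSslConfCheck {

    static void validateConfPresent(Map<String, String> conf, String confKey) {
        String value = conf.get(confKey);
        if (value == null || value.isEmpty()) {
            // Emitting the key name improves supportability: the operator
            // sees exactly which configuration is missing.
            throw new IllegalArgumentException(
                "SSL/TLS is enabled for ZooKeeper but " + confKey + " is not set");
        }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("hadoop.zk.ssl.keystore.location", "/path/keystore.jks");
        validateConfPresent(conf, "hadoop.zk.ssl.keystore.location"); // passes
        try {
            validateConfPresent(conf, "hadoop.zk.ssl.truststore.location");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The caller then becomes four one-line calls, one per required key.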
[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5660: HDFS-17014. HttpFS Add Support getStatus API
ayushtkn commented on code in PR #5660: URL: https://github.com/apache/hadoop/pull/5660#discussion_r1199520649

## hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java:

```
@@ -2081,6 +2085,32 @@ private void testGetFileLinkStatus() throws Exception {
     assertTrue(fs.getFileLinkStatus(linkToFile).isSymlink());
   }

+  private void testGetStatus() throws Exception {
+    if (isLocalFS()) {
+      // do not test the getStatus for local FS.
+      return;
+    }
+    final Path path = new Path("/foo");
+    FileSystem fs = FileSystem.get(path.toUri(), this.getProxiedFSConf());
+    if (fs instanceof DistributedFileSystem) {
+      DistributedFileSystem dfs =
+          (DistributedFileSystem) FileSystem.get(path.toUri(), this.getProxiedFSConf());
+      FileSystem httpFs = this.getHttpFSFileSystem();
+
+      FsStatus dfsFsStatus = dfs.getStatus(path);
+      FsStatus httpFsStatus = httpFs.getStatus(path);
+
+      //Validate used free and capacity are the same as DistributedFileSystem
+      assertEquals(dfsFsStatus.getUsed(), httpFsStatus.getUsed());
+      assertEquals(dfsFsStatus.getRemaining(), httpFsStatus.getRemaining());
+      assertEquals(dfsFsStatus.getCapacity(), httpFsStatus.getCapacity());
+      httpFs.close();
+      dfs.close();
+    }else{
+      Assert.fail(fs.getClass().getSimpleName() + " is not of type DistributedFileSystem.");
+    }
```

Review Comment: It is still not formatted; there should be a space before and after else:
```
} else {
```
[jira] [Updated] (HADOOP-18745) Fix the exception message to print the Identifier pattern
[ https://issues.apache.org/jira/browse/HADOOP-18745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena updated HADOOP-18745 (Description: added the reference to the original discussion):

In case of an incorrect string passed as value, it would throw an exception, but the message doesn't print the identifier pattern.

{code:java}
java.lang.IllegalArgumentException: [] = [[a] must be {2}{code}
instead of
{code:java}
java.lang.IllegalArgumentException: [] = [[a] must be [a-zA-Z_][a-zA-Z0-9_\-]*{code}
Ref to original discussion: https://github.com/apache/hadoop/pull/5669#discussion_r1198937053

> Fix the exception message to print the Identifier pattern
> ---------------------------------------------------------
>
> Key: HADOOP-18745
> URL: https://issues.apache.org/jira/browse/HADOOP-18745
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Nishtha Shah
> Assignee: Nishtha Shah
> Priority: Minor
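The stray {2} in the broken message is consistent with a java.text.MessageFormat pattern being formatted with too few arguments: MessageFormat leaves a placeholder verbatim in the output when its index has no corresponding argument. A self-contained illustration — the pattern string mirrors the messages above, it is not the exact Hadoop source line:

```java
import java.text.MessageFormat;

public class MessageFormatDemo {
    public static void main(String[] args) {
        // Only two arguments for three placeholders: MessageFormat leaves
        // {2} in the output verbatim when the index has no argument.
        String broken = MessageFormat.format(
            "[{0}] = [{1}] must be {2}", "", "[a");
        System.out.println(broken); // [] = [[a] must be {2}

        // Passing the identifier pattern as the third argument yields the
        // intended message. Arguments are inserted verbatim, so the regex
        // metacharacters need no MessageFormat escaping.
        String fixed = MessageFormat.format(
            "[{0}] = [{1}] must be {2}", "", "[a", "[a-zA-Z_][a-zA-Z0-9_\\-]*");
        System.out.println(fixed); // [] = [[a] must be [a-zA-Z_][a-zA-Z0-9_\-]*
    }
}
```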
[jira] [Commented] (HADOOP-17518) Usage of incorrect regex range A-z
[ https://issues.apache.org/jira/browse/HADOOP-17518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724459#comment-17724459 ]

Ayush Saxena commented on HADOOP-17518:

Committed to trunk. Thanx [~nishtha11shah] for the contribution!!!

> Usage of incorrect regex range A-z
> ----------------------------------
>
> Key: HADOOP-17518
> URL: https://issues.apache.org/jira/browse/HADOOP-17518
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Marcono1234
> Assignee: Nishtha Shah
> Priority: Minor
> Labels: pull-request-available
>
> There are two cases where the regex {{A-z}} is used. I assume that is a typo
> (and should be {{A-Z}}) because {{A-z}} matches:
> - {{A-Z}}
> - {{\[}}, {{\}}, {{\]}}, {{^}}, {{_}}, {{`}}
> - {{a-z}}
> Affected:
> - https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/util/Check.java#L109
> (and https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/util/Check.java#L115)
> - https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/resourcetypes/ResourceTypesTestHelper.java#L38
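The pitfall is easy to demonstrate: [A-z] is a range over raw ASCII codes (65 through 122), not over two alphabets, so it also admits the six punctuation characters that sit between 'Z' and 'a':

```java
import java.util.regex.Pattern;

public class RegexRangeDemo {
    public static void main(String[] args) {
        // [A-z] covers ASCII 65..122, which includes the six characters
        // between 'Z' (90) and 'a' (97): [ \ ] ^ _ `
        System.out.println(Pattern.matches("[A-z]+", "abc_DEF")); // true
        System.out.println(Pattern.matches("[A-z]+", "^`["));     // true (surprising)

        // The corrected class [A-Z] (or [a-zA-Z]) rejects them; '_' must
        // then be listed explicitly where it is wanted, e.g. [a-zA-Z0-9_].
        System.out.println(Pattern.matches("[A-Z]+", "^`["));     // false
    }
}
```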
[jira] [Commented] (HADOOP-17518) Usage of incorrect regex range A-z
[ https://issues.apache.org/jira/browse/HADOOP-17518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724460#comment-17724460 ]

ASF subversion and git services commented on HADOOP-17518:

Commit 5272ed86708afdce8bd878f7bd734ce4c0c369cf in hadoop's branch refs/heads/trunk from NishthaShah [ https://gitbox.apache.org/repos/asf?p=hadoop.git;h=5272ed86708 ]

HADOOP-17518. Update the regex to A-Z (#5669). Contributed by Nishtha Shah.

Signed-off-by: Ayush Saxena
[jira] [Updated] (HADOOP-17518) Usage of incorrect regex range A-z
[ https://issues.apache.org/jira/browse/HADOOP-17518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HADOOP-17518: Labels: pull-request-available (was: )
[jira] [Resolved] (HADOOP-17518) Usage of incorrect regex range A-z
[ https://issues.apache.org/jira/browse/HADOOP-17518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena resolved HADOOP-17518. Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed
[jira] [Commented] (HADOOP-17518) Usage of incorrect regex range A-z
[ https://issues.apache.org/jira/browse/HADOOP-17518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724457#comment-17724457 ]

ASF GitHub Bot commented on HADOOP-17518: ayushtkn merged PR #5669: URL: https://github.com/apache/hadoop/pull/5669
[GitHub] [hadoop] ayushtkn merged pull request #5669: HADOOP-17518. Update the regex to A-Z
ayushtkn merged PR #5669: URL: https://github.com/apache/hadoop/pull/5669
[jira] [Commented] (HADOOP-18709) Add curator based ZooKeeper communication support over SSL/TLS into the common library
[ https://issues.apache.org/jira/browse/HADOOP-18709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724450#comment-17724450 ] ASF GitHub Bot commented on HADOOP-18709: - hadoop-yetus commented on PR #5638: URL: https://github.com/apache/hadoop/pull/5638#issuecomment-1555369000 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 50s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 8 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 10s | | trunk passed | | +1 :green_heart: | compile | 15m 35s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 14m 30s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 16s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 35s | | trunk passed | | +1 :green_heart: | javadoc | 1m 17s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 51s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 2m 39s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 21s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 50s | | the patch passed | | +1 :green_heart: | compile | 15m 0s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 15m 0s | | the patch passed | | +1 :green_heart: | compile | 14m 30s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 14m 30s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 8s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 32s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | javadoc | 1m 8s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 52s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 2m 33s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 27s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 9s | | hadoop-common in the patch passed. | | -1 :x: | asflicense | 1m 0s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5638/13/artifact/out/results-asflicense.txt) | The patch generated 5 ASF License warnings. 
| | | | 181m 47s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5638/13/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5638 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle shellcheck shelldocs | | uname | Linux 13421f96a9da 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 521931a58471fe5da6a2fd792f7550f5b737ef46 | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5638/13/testReport/ | | Max. process+thread count | 1332 (vs. ulimit of 5500) |
[GitHub] [hadoop] hadoop-yetus commented on pull request #5676: YARN-6648. BackPort [GPG] Add SubClusterCleaner in Global Policy Generator.
hadoop-yetus commented on PR #5676: URL: https://github.com/apache/hadoop/pull/5676#issuecomment-1555355369 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 18m 33s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 28s | | trunk passed | | +1 :green_heart: | compile | 7m 9s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 6m 22s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 48s | | trunk passed | | +1 :green_heart: | mvnsite | 5m 35s | | trunk passed | | +1 :green_heart: | javadoc | 6m 0s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 5m 9s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 16m 22s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 53s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 40s | | the patch passed | | +1 :green_heart: | compile | 6m 49s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 6m 49s | | the patch passed | | +1 :green_heart: | compile | 6m 38s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 6m 38s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 38s | | the patch passed | | +1 :green_heart: | mvnsite | 5m 14s | | the patch passed | | +1 :green_heart: | javadoc | 5m 30s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 4m 45s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 16m 43s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 45s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 234m 8s | | hadoop-yarn in the patch passed. | | +1 :green_heart: | unit | 1m 15s | | hadoop-yarn-api in the patch passed. | | +1 :green_heart: | unit | 5m 48s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 3m 40s | | hadoop-yarn-server-common in the patch passed. | | +1 :green_heart: | unit | 0m 39s | | hadoop-yarn-server-globalpolicygenerator in the patch passed. | | +1 :green_heart: | asflicense | 1m 7s | | The patch does not generate ASF License warnings. 
| | | | 433m 49s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5676/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5676 | | Optional Tests | dupname asflicense codespell detsecrets xmllint compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle | | uname | Linux b01c19df6afc 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 91484a0a50db781772b1f66ca4a02059bb8b8464 | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5676/2/testReport/ | | Max. process+thread count | 2700 (vs. ulimit of 55
[jira] [Commented] (HADOOP-18207) Introduce hadoop-logging module
[ https://issues.apache.org/jira/browse/HADOOP-18207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724432#comment-17724432 ] ASF GitHub Bot commented on HADOOP-18207: - hadoop-yetus commented on PR #5503: URL: https://github.com/apache/hadoop/pull/5503#issuecomment-1555322970 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 3s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 80 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 0s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 35s | | trunk passed | | +1 :green_heart: | compile | 16m 52s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 15m 41s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 4m 3s | | trunk passed | | +1 :green_heart: | mvnsite | 22m 44s | | trunk passed | | +1 :green_heart: | javadoc | 19m 5s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 17m 12s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +0 :ok: | spotbugs | 0m 40s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 21m 5s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 21m 28s | | Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 18s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 16m 27s | | the patch passed | | +1 :green_heart: | compile | 16m 27s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 16m 27s | | the patch passed | | +1 :green_heart: | compile | 15m 19s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 15m 19s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 39s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5503/10/artifact/out/results-checkstyle-root.txt) | root: The patch generated 31 new + 1168 unchanged - 43 fixed = 1199 total (was 1211) | | +1 :green_heart: | mvnsite | 21m 11s | | the patch passed | | +1 :green_heart: | javadoc | 19m 24s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 17m 29s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +0 :ok: | spotbugs | 0m 23s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 21m 23s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 24s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 0m 29s | | hadoop-logging in the patch passed. | | +1 :green_heart: | unit | 0m 35s | | hadoop-minikdc in the patch passed. | | +1 :green_heart: | unit | 26m 51s | | hadoop-common-project in the patch passed. | | +1 :green_heart: | unit | 234m 0s | | hadoop-yarn in the patch passed. | | +1 :green_heart: | unit | 3m 29s | | hadoop-auth in the patch passed. 
| | +1 :green_heart: | unit | 0m 45s | | hadoop-auth-examples in the patch passed. | | +1 :green_heart: | unit | 19m 3s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 3m 50s | | hadoop-kms in the patch passed. | | -1 :x: | unit | 210m 4s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5503/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed.
[jira] [Commented] (HADOOP-18207) Introduce hadoop-logging module
[ https://issues.apache.org/jira/browse/HADOOP-18207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724431#comment-17724431 ] ASF GitHub Bot commented on HADOOP-18207: - hadoop-yetus commented on PR #5503: URL: https://github.com/apache/hadoop/pull/5503#issuecomment-1555322944 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 3s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 80 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 6s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 47s | | trunk passed | | +1 :green_heart: | compile | 16m 53s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 15m 51s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 3m 55s | | trunk passed | | +1 :green_heart: | mvnsite | 22m 58s | | trunk passed | | +1 :green_heart: | javadoc | 19m 5s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 17m 12s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +0 :ok: | spotbugs | 0m 40s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 20m 23s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 20m 44s | | Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 16m 42s | | the patch passed | | +1 :green_heart: | compile | 16m 17s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 16m 17s | | the patch passed | | +1 :green_heart: | compile | 15m 29s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 15m 29s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 49s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5503/9/artifact/out/results-checkstyle-root.txt) | root: The patch generated 31 new + 1168 unchanged - 43 fixed = 1199 total (was 1211) | | +1 :green_heart: | mvnsite | 21m 37s | | the patch passed | | +1 :green_heart: | javadoc | 19m 22s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 17m 30s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +0 :ok: | spotbugs | 0m 24s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 21m 26s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 26s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 0m 27s | | hadoop-logging in the patch passed. | | +1 :green_heart: | unit | 0m 35s | | hadoop-minikdc in the patch passed. | | +1 :green_heart: | unit | 26m 51s | | hadoop-common-project in the patch passed. | | +1 :green_heart: | unit | 234m 10s | | hadoop-yarn in the patch passed. | | +1 :green_heart: | unit | 3m 26s | | hadoop-auth in the patch passed. 
| | +1 :green_heart: | unit | 0m 45s | | hadoop-auth-examples in the patch passed. | | +1 :green_heart: | unit | 19m 4s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 3m 48s | | hadoop-kms in the patch passed. | | +1 :green_heart: | unit | 208m 11s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 2m 54s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | unit | 21m 39s | | hadoop-hdfs-rbf in the pa
[GitHub] [hadoop] hadoop-yetus commented on pull request #5503: HADOOP-18207. Introduce hadoop-logging module
hadoop-yetus commented on PR #5503: URL: https://github.com/apache/hadoop/pull/5503#issuecomment-1555322944 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 3s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 80 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 6s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 47s | | trunk passed | | +1 :green_heart: | compile | 16m 53s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 15m 51s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 3m 55s | | trunk passed | | +1 :green_heart: | mvnsite | 22m 58s | | trunk passed | | +1 :green_heart: | javadoc | 19m 5s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 17m 12s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +0 :ok: | spotbugs | 0m 40s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 20m 23s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 20m 44s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 16m 42s | | the patch passed | | +1 :green_heart: | compile | 16m 17s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 16m 17s | | the patch passed | | +1 :green_heart: | compile | 15m 29s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 15m 29s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 49s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5503/9/artifact/out/results-checkstyle-root.txt) | root: The patch generated 31 new + 1168 unchanged - 43 fixed = 1199 total (was 1211) | | +1 :green_heart: | mvnsite | 21m 37s | | the patch passed | | +1 :green_heart: | javadoc | 19m 22s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 17m 30s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +0 :ok: | spotbugs | 0m 24s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 21m 26s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 26s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 0m 27s | | hadoop-logging in the patch passed. | | +1 :green_heart: | unit | 0m 35s | | hadoop-minikdc in the patch passed. | | +1 :green_heart: | unit | 26m 51s | | hadoop-common-project in the patch passed. | | +1 :green_heart: | unit | 234m 10s | | hadoop-yarn in the patch passed. | | +1 :green_heart: | unit | 3m 26s | | hadoop-auth in the patch passed. | | +1 :green_heart: | unit | 0m 45s | | hadoop-auth-examples in the patch passed. 
| | +1 :green_heart: | unit | 19m 4s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 3m 48s | | hadoop-kms in the patch passed. | | +1 :green_heart: | unit | 208m 11s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 2m 54s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | unit | 21m 39s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | unit | 9m 1s | | hadoop-mapreduce-client-app in the patch passed. | | +1 :green_heart: | unit | 7m 26s | | hadoop-mapreduce-client-core in the patch passed. | | -1 :x: | unit | 141m 47s | [/patch
[GitHub] [hadoop] hadoop-yetus commented on pull request #5332: YARN-11041. Replace all occurences of queuePath with the new QueuePath class - followup
hadoop-yetus commented on PR #5332: URL: https://github.com/apache/hadoop/pull/5332#issuecomment-1555215001 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 58s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 81 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 46s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 14s | | trunk passed | | +1 :green_heart: | compile | 7m 50s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 6m 46s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 53s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 13s | | trunk passed | | +1 :green_heart: | javadoc | 2m 13s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 56s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 4m 14s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 29s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 24m 49s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 24s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 29s | | the patch passed | | +1 :green_heart: | compile | 6m 49s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 6m 49s | | the patch passed | | +1 :green_heart: | compile | 6m 42s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 6m 42s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 47s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5332/11/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt) | hadoop-yarn-project/hadoop-yarn: The patch generated 7 new + 1648 unchanged - 204 fixed = 1655 total (was 1852) | | +1 :green_heart: | mvnsite | 2m 1s | | the patch passed | | +1 :green_heart: | javadoc | 1m 54s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 46s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 4m 11s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 53s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 101m 2s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 27m 48s | | hadoop-yarn-client in the patch passed. | | +1 :green_heart: | unit | 0m 38s | | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 0m 51s | | The patch does not generate ASF License warnings. 
| | | | 277m 46s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5332/11/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5332 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 2e03a55b7538 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 5a6251dd5737de21f2f3ca5f384cefdb39807e8c | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:P
[GitHub] [hadoop] hadoop-yetus commented on pull request #5672: YARN-7720. Race condition between second app attempt and UAM timeout when first attempt node is down.
hadoop-yetus commented on PR #5672: URL: https://github.com/apache/hadoop/pull/5672#issuecomment-1555201867 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 48s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 46s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 55s | | trunk passed | | +1 :green_heart: | compile | 7m 32s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 6m 44s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 45s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 40s | | trunk passed | | +1 :green_heart: | javadoc | 2m 36s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 2m 22s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 5m 55s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 8s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 52s | | the patch passed | | +1 :green_heart: | compile | 6m 56s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 6m 56s | | the patch passed | | +1 :green_heart: | compile | 6m 41s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 6m 41s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 39s | | the patch passed | | +1 :green_heart: | mvnsite | 2m 28s | | the patch passed | | +1 :green_heart: | javadoc | 2m 20s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 2m 10s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 6m 5s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 33s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 3s | | hadoop-yarn-api in the patch passed. | | +1 :green_heart: | unit | 5m 20s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 101m 28s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. 
| | | | 262m 22s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5672/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5672 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint | | uname | Linux 2fef0dde8e6f 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 830d7af6ef27524f201f1581cc17e5508146acc7 | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5672/3/testReport/ | | Max. process+thread count | 899 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-ya
[GitHub] [hadoop] goiri commented on pull request #5663: YARN-11478. [Federation] SQLFederationStateStore Support Store ApplicationSubmitData.
goiri commented on PR #5663: URL: https://github.com/apache/hadoop/pull/5663#issuecomment-1555198815 Let's fix the checkstyles and get a clean build (the failed unit tests look unrelated). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] goiri commented on a diff in pull request #5676: YARN-6648. BackPort [GPG] Add SubClusterCleaner in Global Policy Generator.
goiri commented on code in PR #5676: URL: https://github.com/apache/hadoop/pull/5676#discussion_r1199338094

## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/subclustercleaner/SubClusterCleaner.java:
## @@ -0,0 +1,112 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator.subclustercleaner;
+
+import java.util.Date;
+import java.util.Map;
+
+import org.apache.commons.lang.time.DurationFormatUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterState;
+import org.apache.hadoop.yarn.server.globalpolicygenerator.GPGContext;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The sub-cluster cleaner is one of the GPG's services that periodically checks
+ * the membership table in FederationStateStore and mark sub-clusters that have
+ * not sent a heartbeat in certain amount of time as LOST.
+ */
+public class SubClusterCleaner implements Runnable {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(SubClusterCleaner.class);
+
+  private GPGContext gpgContext;
+  private long heartbeatExpirationMillis;
+
+  /**
+   * The sub-cluster cleaner runnable is invoked by the sub cluster cleaner
+   * service to check the membership table and remove sub clusters that have not
+   * sent a heart beat in some amount of time.
+   *
+   * @param conf configuration.
+   * @param gpgContext GPGContext.
+   */
+  public SubClusterCleaner(Configuration conf, GPGContext gpgContext) {
+    this.heartbeatExpirationMillis =
+        conf.getLong(YarnConfiguration.GPG_SUBCLUSTER_EXPIRATION_MS,

Review Comment: getTimeDuration()

## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java:
## @@ -4326,6 +4326,24 @@ public static boolean isAclEnabled(Configuration conf) {
   public static final boolean DEFAULT_ROUTER_WEBAPP_PARTIAL_RESULTS_ENABLED = false;
+
+  private static final String FEDERATION_GPG_PREFIX =
+      FEDERATION_PREFIX + "gpg.";
+
+  // The number of threads to use for the GPG scheduled executor service
+  public static final String GPG_SCHEDULED_EXECUTOR_THREADS =
+      FEDERATION_GPG_PREFIX + "scheduled.executor.threads";
+  public static final int DEFAULT_GPG_SCHEDULED_EXECUTOR_THREADS = 10;
+
+  // The interval at which the subcluster cleaner runs, -1 means disabled
+  public static final String GPG_SUBCLUSTER_CLEANER_INTERVAL_MS =
+      FEDERATION_GPG_PREFIX + "subcluster.cleaner.interval-ms";
+  public static final long DEFAULT_GPG_SUBCLUSTER_CLEANER_INTERVAL_MS = -1;
+
+  // The expiration time for a subcluster heartbeat, default is 30 minutes
+  public static final String GPG_SUBCLUSTER_EXPIRATION_MS =
+      FEDERATION_GPG_PREFIX + "subcluster.heartbeat.expiration-ms";
+  public static final long DEFAULT_GPG_SUBCLUSTER_EXPIRATION_MS = 180;

Review Comment: Make it getTimeDuration() friendly.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
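Both review comments above point at `Configuration#getTimeDuration()`, which, unlike `getLong()`, accepts human-readable suffixes (`30m`, `1800000ms`, ...) and normalizes the value to a caller-chosen unit, so the patch could read the expiration as roughly `conf.getTimeDuration(YarnConfiguration.GPG_SUBCLUSTER_EXPIRATION_MS, DEFAULT_GPG_SUBCLUSTER_EXPIRATION_MS, TimeUnit.MILLISECONDS)`. As a self-contained sketch of the suffix handling such a call performs (the parser below is an illustrative simplification written for this digest, not Hadoop's implementation, and covers only a few suffixes):

```java
import java.util.concurrent.TimeUnit;

public class TimeDurationSketch {
    // Illustrative simplification of Configuration#getTimeDuration:
    // strip a unit suffix if present, otherwise fall back to the
    // caller-supplied default unit, and normalize to milliseconds.
    public static long parseDuration(String value, TimeUnit defaultUnit) {
        String v = value.trim().toLowerCase();
        TimeUnit unit = defaultUnit;
        if (v.endsWith("ms")) {
            unit = TimeUnit.MILLISECONDS;
            v = v.substring(0, v.length() - 2);
        } else if (v.endsWith("s")) {
            unit = TimeUnit.SECONDS;
            v = v.substring(0, v.length() - 1);
        } else if (v.endsWith("m")) {
            unit = TimeUnit.MINUTES;
            v = v.substring(0, v.length() - 1);
        } else if (v.endsWith("h")) {
            unit = TimeUnit.HOURS;
            v = v.substring(0, v.length() - 1);
        }
        return TimeUnit.MILLISECONDS.convert(Long.parseLong(v.trim()), unit);
    }

    public static void main(String[] args) {
        System.out.println(parseDuration("30m", TimeUnit.MILLISECONDS));     // 1800000
        System.out.println(parseDuration("1800000", TimeUnit.MILLISECONDS)); // 1800000
    }
}
```

A duration-typed default would also sidestep the mismatch the second hunk shows, where the comment says "default is 30 minutes" but the literal default is `180`.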
[GitHub] [hadoop] simbadzina commented on a diff in pull request #5674: HDFS-17020. RBF: mount table addAll should print failed records in std error
simbadzina commented on code in PR #5674: URL: https://github.com/apache/hadoop/pull/5674#discussion_r1199299711

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java:
## @@ -1918,6 +1920,9 @@ public void testAddMultipleMountPointsFailure() throws Exception {
         "-faulttolerant"};
     // mount points were already added
     assertNotEquals(0, ToolRunner.run(admin, argv));
+
+    assertTrue("The error message should return failed entries",
+        err.toString().contains("Cannot add mount points: [0SLASH0testAddMultiMountPoints-01"));

Review Comment: `/` and `:` are replaced here: https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreSerializableImpl.java#L104 Could you please create a small function to reverse that so the keys in the error message are readable.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
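The small reversal function the reviewer asks for could be as simple as undoing the marker substitutions. A minimal sketch: the `0SLASH0` marker for `/` appears in the quoted assertion, while the `0COLON0` marker for `:` is an assumption based on the linked `StateStoreSerializableImpl` source, and the class and method names here are invented for illustration:

```java
public class MountKeyDecoder {
    // Undo the key-escaping applied when mount-table entries are serialized:
    // "/" is stored as "0SLASH0" and ":" as "0COLON0" (the colon marker is
    // an assumption from the linked source, not quoted in this thread).
    public static String decodePrimaryKey(String key) {
        return key.replace("0SLASH0", "/").replace("0COLON0", ":");
    }

    public static void main(String[] args) {
        // Turns the opaque key from the test assertion back into a readable path.
        System.out.println(decodePrimaryKey("0SLASH0testAddMultiMountPoints-01"));
        // -> /testAddMultiMountPoints-01
    }
}
```

Routing the failed-entry keys through such a helper before building the "Cannot add mount points" message would make the std error output readable.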
[GitHub] [hadoop] hadoop-yetus commented on pull request #5353: HDFS-16909. Make judging null statment out from for loop in ReplicaMap#mergeAll method.
hadoop-yetus commented on PR #5353: URL: https://github.com/apache/hadoop/pull/5353#issuecomment-1555072013 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 37m 26s | | trunk passed | | +1 :green_heart: | compile | 1m 17s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 1m 11s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 5s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 20s | | trunk passed | | +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 34s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 3m 23s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 46s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 6s | | the patch passed | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | compile | 1m 2s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 1m 2s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 50s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 9s | | the patch passed | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 27s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 3m 34s | | the patch passed | | +1 :green_heart: | shadedclient | 25m 29s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 208m 3s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5353/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. 
| | | | 315m 52s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestObserverNode | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5353/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5353 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 45a823ce41ef 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 74b01d43007c7de41265f2700d00de7534fe16ef | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5353/2/testReport/ | | Max. process+thread count | 2979 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5353/2/console | | versions | git=2
[GitHub] [hadoop] hadoop-yetus commented on pull request #5672: YARN-7720. Race condition between second app attempt and UAM timeout when first attempt node is down.
hadoop-yetus commented on PR #5672: URL: https://github.com/apache/hadoop/pull/5672#issuecomment-1555055872 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 9s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 56s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 45s | | trunk passed | | +1 :green_heart: | compile | 7m 40s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 6m 53s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 49s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 41s | | trunk passed | | +1 :green_heart: | javadoc | 2m 40s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 2m 21s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 5m 59s | | trunk passed | | +1 :green_heart: | shadedclient | 26m 57s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 29s | [/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5672/2/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch failed. | | -1 :x: | compile | 2m 34s | [/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5672/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt) | hadoop-yarn in the patch failed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1. | | -1 :x: | javac | 2m 34s | [/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5672/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt) | hadoop-yarn in the patch failed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1. | | -1 :x: | compile | 2m 16s | [/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5672/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt) | hadoop-yarn in the patch failed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09. 
| | -1 :x: | javac | 2m 16s | [/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5672/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt) | hadoop-yarn in the patch failed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 27s | | the patch passed | | -1 :x: | mvnsite | 0m 32s | [/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5672/2/artifact/out/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch failed. | | +1 :green_heart: | javadoc | 1m 55s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1
[GitHub] [hadoop] hadoop-yetus commented on pull request #5643: HDFS-17003. Erasure coding: invalidate wrong block after reporting bad blocks from datanode
hadoop-yetus commented on PR #5643: URL: https://github.com/apache/hadoop/pull/5643#issuecomment-1555054384 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 35s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 29s | | trunk passed | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 1m 11s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 7s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 18s | | trunk passed | | +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 34s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 3m 15s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 34s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 5s | | the patch passed | | +1 :green_heart: | compile | 1m 8s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 1m 8s | | the patch passed | | +1 :green_heart: | compile | 1m 3s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 1m 3s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 53s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5643/8/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 79 unchanged - 0 fixed = 85 total (was 79) | | +1 :green_heart: | mvnsite | 1m 10s | | the patch passed | | +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 3m 24s | | the patch passed | | +1 :green_heart: | shadedclient | 25m 16s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 204m 29s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5643/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. 
| | | | 307m 40s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5643/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5643 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 1a8d1fdd7f16 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / ccf3a53aa5050c4259ff85b6970b8f60b5410f50 | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5643/8/testReport/ | | Max. process+thread count | 3332 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: ha
[GitHub] [hadoop] virajjasani commented on pull request #5503: HADOOP-18207. Introduce hadoop-logging module
virajjasani commented on PR #5503: URL: https://github.com/apache/hadoop/pull/5503#issuecomment-1555018381 no problem @Hexiaoqiao, thank you
[GitHub] [hadoop] hadoop-yetus commented on pull request #5332: YARN-11041. Replace all occurrences of queuePath with the new QueuePath class - followup
hadoop-yetus commented on PR #5332: URL: https://github.com/apache/hadoop/pull/5332#issuecomment-1555013561 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 52s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 81 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 9s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 44s | | trunk passed | | +1 :green_heart: | compile | 7m 41s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 6m 48s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 56s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 15s | | trunk passed | | +1 :green_heart: | javadoc | 2m 10s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 53s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 4m 14s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 41s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 25m 2s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 34s | [/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5332/10/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch failed. | | -1 :x: | mvninstall | 0m 22s | [/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5332/10/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt) | hadoop-yarn-client in the patch failed. | | -1 :x: | mvninstall | 0m 20s | [/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5332/10/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt) | hadoop-yarn-server-router in the patch failed. | | -1 :x: | compile | 2m 32s | [/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5332/10/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt) | hadoop-yarn in the patch failed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1. | | -1 :x: | javac | 2m 32s | [/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5332/10/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt) | hadoop-yarn in the patch failed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1. 
| | -1 :x: | compile | 2m 15s | [/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5332/10/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt) | hadoop-yarn in the patch failed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09. | | -1 :x: | javac | 2m 15s | [/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5332/10/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt) | hadoop-yarn in the patch failed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09. | | +1 :green_heart: | blanks
[GitHub] [hadoop] hadoop-yetus commented on pull request #5669: HADOOP-17518. Update the regex to A-Z
hadoop-yetus commented on PR #5669: URL: https://github.com/apache/hadoop/pull/5669#issuecomment-1554896812 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 20m 14s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 48s | | trunk passed | | +1 :green_heart: | compile | 15m 32s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 14m 14s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 3m 45s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 55s | | trunk passed | | +1 :green_heart: | javadoc | 1m 49s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 44s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 3m 0s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 36s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 54s | | the patch passed | | +1 :green_heart: | compile | 14m 42s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 14m 42s | | the patch passed | | +1 :green_heart: | compile | 14m 20s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 14m 20s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 3m 36s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 50s | | the patch passed | | +1 :green_heart: | javadoc | 1m 46s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 44s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 3m 9s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 41s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 6m 1s | | hadoop-hdfs-httpfs in the patch passed. | | +1 :green_heart: | unit | 5m 46s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | asflicense | 1m 4s | | The patch does not generate ASF License warnings. 
| | | | 187m 5s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5669/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5669 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux f53ae00e5c0d 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 5f68adc824107aee36c41369a41bad936bb2801e | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5669/3/testReport/ | | Max. process+thread count | 847 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5669/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automa
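For context on the regex change PR #5669 tests: in a character class, the range `A-z` spans ASCII 65 through 122, which silently admits the six punctuation characters that sit between 'Z' (90) and 'a' (97): `[ \ ] ^ _` and the backtick. A quick demonstration of the difference between the sloppy and corrected classes:

```java
import java.util.regex.Pattern;

class RangeDemo {
    // "A-z" covers ASCII 65..122, so it also matches [ \ ] ^ _ `
    static final Pattern SLOPPY = Pattern.compile("[a-zA-z]");
    // "A-Z" covers only the uppercase letters, as intended
    static final Pattern STRICT = Pattern.compile("[a-zA-Z]");

    public static void main(String[] args) {
        System.out.println(SLOPPY.matcher("^").matches()); // prints true
        System.out.println(STRICT.matcher("^").matches()); // prints false
    }
}
```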
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5672: YARN-7720. Race condition between second app attempt and UAM timeout when first attempt node is down.
slfan1989 commented on code in PR #5672: URL: https://github.com/apache/hadoop/pull/5672#discussion_r1199137369

## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java:

@@ -703,6 +703,24 @@ protected static void validateConfigs(Configuration conf) {
           + ", " + YarnConfiguration.RM_NM_HEARTBEAT_INTERVAL_MS + "=" + heartbeatIntvl);
     }
+
+    if (HAUtil.isFederationEnabled(conf)) {
+      /*
+       * In Yarn Federation, we need UAMs in secondary sub-clusters to stay
+       * alive when the next attempt AM in home sub-cluster gets launched. If
+       * the previous AM died because the node is lost after NM timeout. It will
+       * already be too late if AM timeout is even shorter.
+       */
+      int amExpireIntvl = conf.getInt(YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS,

Review Comment: @goiri We can first check if the user's parameter consists entirely of numbers. If it does, we will use getInt for parsing. If it does not consist entirely of digits, we will use getTimeDuration for parsing. Can we rewrite the code like this?

```
String rmAmExpiryIntervalMS = conf.get(YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS);
long amExpireIntvl;
if (NumberUtils.isDigits(rmAmExpiryIntervalMS)) {
  amExpireIntvl = conf.getLong(YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS,
      YarnConfiguration.DEFAULT_RM_AM_EXPIRY_INTERVAL_MS);
} else {
  amExpireIntvl = conf.getTimeDuration(YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS,
      YarnConfiguration.DEFAULT_RM_AM_EXPIRY_INTERVAL_MS, TimeUnit.MILLISECONDS);
}
```

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
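The dispatch proposed in the review above can be sketched standalone. This is an illustrative reconstruction, not Hadoop code: the `Configuration` lookup is replaced by a raw string, and the unit-suffix handling is a minimal stand-in for `Configuration.getTimeDuration` covering only `ms`/`s`/`m` suffixes:

```java
class ExpiryParseSketch {
    // Purely numeric values are read as raw milliseconds (the legacy
    // getInt/getLong path); suffixed values such as "600s" go through
    // a duration-style parse. Only ms/s/m suffixes are modeled here.
    static long parseMillis(String raw, long defaultMs) {
        if (raw == null || raw.isEmpty()) {
            return defaultMs;
        }
        if (raw.chars().allMatch(Character::isDigit)) {
            return Long.parseLong(raw);                              // e.g. "600000"
        }
        if (raw.endsWith("ms")) {
            return Long.parseLong(raw.substring(0, raw.length() - 2));
        }
        if (raw.endsWith("s")) {
            return Long.parseLong(raw.substring(0, raw.length() - 1)) * 1000L;
        }
        if (raw.endsWith("m")) {
            return Long.parseLong(raw.substring(0, raw.length() - 1)) * 60_000L;
        }
        return defaultMs;
    }

    public static void main(String[] args) {
        System.out.println(parseMillis("600000", 0L)); // prints 600000
        System.out.println(parseMillis("600s", 0L));   // prints 600000
    }
}
```

Design note: Hadoop's real `Configuration.getTimeDuration` already treats a suffix-less numeric value as being in the supplied default unit, so on real code the `getTimeDuration` branch alone may be sufficient; the explicit digits check mainly documents the intent.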
[GitHub] [hadoop] hadoop-yetus commented on pull request #5660: HDFS-17014. HttpFS Add Support getStatus API
hadoop-yetus commented on PR #5660: URL: https://github.com/apache/hadoop/pull/5660#issuecomment-1554801530 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 41s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 37m 41s | | trunk passed | | +1 :green_heart: | compile | 0m 27s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 0m 26s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 0m 36s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 45s | | trunk passed | | +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 1m 2s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 12s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 22s | | the patch passed | | +1 :green_heart: | compile | 0m 21s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 0m 21s | | the patch passed | | +1 :green_heart: | compile | 0m 20s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 0m 20s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 25s | | the patch passed | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 0m 48s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 11s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 5m 43s | | hadoop-hdfs-httpfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. 
| | | | 98m 38s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5660/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5660 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux ff067653a591 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f410191abc25157ea27f3e61ecbc3f15164da154 | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5660/4/testReport/ | | Max. process+thread count | 827 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: hadoop-hdfs-project/hadoop-hdfs-httpfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5660/4/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #5678: HADOOP-18745. Fix the exception message to print the Identifier pattern
hadoop-yetus commented on PR #5678: URL: https://github.com/apache/hadoop/pull/5678#issuecomment-1554750410 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 52s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 38m 16s | | trunk passed | | +1 :green_heart: | compile | 0m 25s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 0m 23s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 0m 28s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 39s | | trunk passed | | +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 0m 58s | | trunk passed | | -1 :x: | shadedclient | 7m 26s | | branch has errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 19s | | the patch passed | | +1 :green_heart: | compile | 0m 20s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 0m 20s | | the patch passed | | +1 :green_heart: | compile | 0m 17s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 0m 17s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 14s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 24s | | the patch passed | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 18s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 0m 47s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 56s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 5m 17s | | hadoop-hdfs-httpfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 85m 47s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5678/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5678 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 0a75c21f8ca5 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 422d63c373655d99f00dc5af4a1a573e8e335387 | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5678/1/testReport/ | | Max. process+thread count | 813 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: hadoop-hdfs-project/hadoop-hdfs-httpfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5678/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the
[GitHub] [hadoop] NishthaShah opened a new pull request, #5678: HADOOP-18745. Fix the exception message to print the Identifier pattern
NishthaShah opened a new pull request, #5678: URL: https://github.com/apache/hadoop/pull/5678 ### Description of PR Print the identifier pattern in the exception thrown if the string doesn't match the pattern. java.lang.IllegalArgumentException: [] = [!] must be "[a-zA-z_][a-zA-Z0-9_\-]*" ### How was this patch tested? Tested the patch by running the testcases in TestCheck and removing the expected exception thrown in case of invalid string ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] haiyang1987 commented on pull request #5667: HDFS-17017. Fix the issue of arguments number limit in report command in DFSAdmin
haiyang1987 commented on PR #5667: URL: https://github.com/apache/hadoop/pull/5667#issuecomment-1554608527 > > a miss on the actual PR. "Period" > > I agree. > > @haiyang1987 for this PR, since you already have the opportunity, I would like to propose these changes so that any new argument in future will not have to go through the same fate: > > ``` > diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java > index d717476dded..c25e2cf3579 100644 > --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java > +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java > @@ -489,7 +489,11 @@ public DFSAdmin(Configuration conf) { >protected DistributedFileSystem getDFS() throws IOException { > return AdminHelper.checkAndGetDFS(getFS(), getConf()); >} > - > + > + public static final String[] DFS_REPORT_ARGS = > + new String[] {"-live", "-dead", "-decommissioning", "-enteringmaintenance", "-inmaintenance", > + "-slownodes"}; > + >/** > * Gives a report on how the FileSystem is doing. > * @exception IOException if the filesystem does not exist. 
> @@ -581,16 +585,16 @@ public void report(String[] argv, int i) throws IOException { > List args = Arrays.asList(argv); > // Truncate already handled arguments before parsing report()-specific ones > args = new ArrayList(args.subList(i, args.size())); > -final boolean listLive = StringUtils.popOption("-live", args); > -final boolean listDead = StringUtils.popOption("-dead", args); > +final boolean listLive = StringUtils.popOption(DFS_REPORT_ARGS[0], args); > +final boolean listDead = StringUtils.popOption(DFS_REPORT_ARGS[1], args); > final boolean listDecommissioning = > -StringUtils.popOption("-decommissioning", args); > +StringUtils.popOption(DFS_REPORT_ARGS[2], args); > final boolean listEnteringMaintenance = > -StringUtils.popOption("-enteringmaintenance", args); > +StringUtils.popOption(DFS_REPORT_ARGS[3], args); > final boolean listInMaintenance = > -StringUtils.popOption("-inmaintenance", args); > +StringUtils.popOption(DFS_REPORT_ARGS[4], args); > final boolean listSlowNodes = > -StringUtils.popOption("-slownodes", args); > +StringUtils.popOption(DFS_REPORT_ARGS[5], args); > > > // If no filter flags are found, then list all DN types > @@ -2399,7 +2403,7 @@ public int run(String[] argv) { > return exitCode; >} > } else if ("-report".equals(cmd)) { > - if (argv.length > 6) { > + if (argv.length > 7) { > printUsage(cmd); > return exitCode; >} > diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java > index d81aebf3c2e..eaa7a88ca0d 100644 > --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java > +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java > @@ -795,6 +795,16 @@ public void testReportCommand() throws Exception { >resetStream(); >assertEquals(0, ToolRunner.run(dfsAdmin, new String[] {"-report"})); 
>verifyNodesAndCorruptBlocks(numDn, numDn - 1, 1, 1, client, 0L, 0L); > + > + // verify report command for list all DN types > + resetStream(); > + String[] reportWithArg = new String[DFSAdmin.DFS_REPORT_ARGS.length + 1]; > + reportWithArg[0] = "-report"; > + int k=1; > + for (int i = 0; i < DFSAdmin.DFS_REPORT_ARGS.length; i++) { > +reportWithArg[k++] = DFSAdmin.DFS_REPORT_ARGS[i]; > + } > + assertEquals(0, ToolRunner.run(dfsAdmin, reportWithArg)); > } >} > ``` > > once again, thanks for the PR. Thanks for the suggestion, I think this change will solve the problem better. I'll update the PR later.
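The idea behind the suggestion above — keeping the recognized report flags in one array so both the option parsing and the `argv.length` guard derive from it — can be sketched outside Hadoop as follows. This is a minimal standalone illustration under stated assumptions: `popOption` here is a simplified stand-in for Hadoop's `StringUtils.popOption`, and the class name is invented for the example.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ReportArgsSketch {
    // Single source of truth for the -report filter flags.
    static final String[] DFS_REPORT_ARGS = {
        "-live", "-dead", "-decommissioning", "-enteringmaintenance",
        "-inmaintenance", "-slownodes"};

    // Simplified stand-in for StringUtils.popOption: remove the flag if
    // present and report whether it was there.
    static boolean popOption(String flag, List<String> args) {
        return args.remove(flag);
    }

    public static void main(String[] ignored) {
        String[] argv = {"-report", "-live", "-slownodes"};
        // Deriving the limit from the array length means adding a new flag to
        // DFS_REPORT_ARGS automatically loosens the guard, instead of having
        // to bump a hard-coded "argv.length > 6" to "> 7" by hand.
        int maxArgs = DFS_REPORT_ARGS.length + 1; // "-report" itself plus every flag
        if (argv.length > maxArgs) {
            throw new IllegalArgumentException("too many arguments for -report");
        }
        List<String> args = new ArrayList<>(Arrays.asList(argv).subList(1, argv.length));
        boolean listLive = popOption(DFS_REPORT_ARGS[0], args);
        boolean listSlow = popOption(DFS_REPORT_ARGS[5], args);
        System.out.println(listLive + " " + listSlow); // true true
    }
}
```

The design choice is the same one the suggested patch makes: a single constant array removes the duplicated literals and the hand-maintained argument count.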
[GitHub] [hadoop] haiyang1987 commented on pull request #5667: HDFS-17017. Fix the issue of arguments number limit in report command in DFSAdmin
haiyang1987 commented on PR #5667: URL: https://github.com/apache/hadoop/pull/5667#issuecomment-1554602389 > In case you find the numbers messed up for any of the other commands as well, can you raise a ticket to fix it as well? yeah, I will review the code and if I find that the other commands have the same problem, I will fix them as well.
[jira] [Assigned] (HADOOP-18745) Fix the exception message to print the Identifier pattern
[ https://issues.apache.org/jira/browse/HADOOP-18745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishtha Shah reassigned HADOOP-18745: - Assignee: Nishtha Shah > Fix the exception message to print the Identifier pattern > - > > Key: HADOOP-18745 > URL: https://issues.apache.org/jira/browse/HADOOP-18745 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Nishtha Shah >Assignee: Nishtha Shah >Priority: Minor > > In case of an incorrect string passed as value, it would throw an exception, > but the message doesn't print the identifier pattern. > {code:java} > java.lang.IllegalArgumentException: [] = [[a] must be {2}{code} > instead of > {code:java} > java.lang.IllegalArgumentException: [] = [[a] must be > [a-zA-Z_][a-zA-Z0-9_\-]*{code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-18745) Fix the exception message to print the Identifier pattern
Nishtha Shah created HADOOP-18745: - Summary: Fix the exception message to print the Identifier pattern Key: HADOOP-18745 URL: https://issues.apache.org/jira/browse/HADOOP-18745 Project: Hadoop Common Issue Type: Improvement Reporter: Nishtha Shah In case of an incorrect string passed as value, it would throw an exception, but the message doesn't print the identifier pattern. {code:java} java.lang.IllegalArgumentException: [] = [[a] must be {2}{code} instead of {code:java} java.lang.IllegalArgumentException: [] = [[a] must be [a-zA-Z_][a-zA-Z0-9_\-]*{code}
[GitHub] [hadoop] haiyang1987 commented on pull request #5667: HDFS-17017. Fix the issue of arguments number limit in report command in DFSAdmin
haiyang1987 commented on PR #5667: URL: https://github.com/apache/hadoop/pull/5667#issuecomment-1554599509 Thanks @ayushtkn @virajjasani for helping me review this PR.
[GitHub] [hadoop] hadoop-yetus commented on pull request #5676: YARN-6648. BackPort [GPG] Add SubClusterCleaner in Global Policy Generator.
hadoop-yetus commented on PR #5676: URL: https://github.com/apache/hadoop/pull/5676#issuecomment-1554594208 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 55s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 20s | | trunk passed | | +1 :green_heart: | compile | 7m 14s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 7m 3s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 46s | | trunk passed | | +1 :green_heart: | mvnsite | 5m 39s | | trunk passed | | +1 :green_heart: | javadoc | 5m 49s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 5m 16s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 17m 33s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 10s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 42s | | the patch passed | | +1 :green_heart: | compile | 6m 57s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 6m 57s | | the patch passed | | +1 :green_heart: | compile | 6m 34s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 6m 34s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 35s | | the patch passed | | +1 :green_heart: | mvnsite | 5m 18s | | the patch passed | | -1 :x: | javadoc | 3m 19s | [/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5676/1/artifact/out/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt) | hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 generated 2 new + 356 unchanged - 0 fixed = 358 total (was 356) | | -1 :x: | javadoc | 0m 19s | [/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-globalpolicygenerator-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5676/1/artifact/out/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-globalpolicygenerator-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt) | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-globalpolicygenerator-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | -1 :x: | javadoc | 2m 50s | 
[/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5676/1/artifact/out/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt) | hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 generated 2 new + 584 unchanged - 0 fixed = 586 total (was 584) | | -1 :x: | javadoc | 0m 16s | [/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-globalpolicygenerator-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5676/1/artifact/out/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-glo
[GitHub] [hadoop] NishthaShah commented on a diff in pull request #5669: HADOOP-17518. Update the regex to A-Z
NishthaShah commented on code in PR #5669: URL: https://github.com/apache/hadoop/pull/5669#discussion_r1198949950 ## hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/util/TestCheck.java: ## @@ -116,6 +116,16 @@ public void validIdentifierInvalid3() throws Exception { Check.validIdentifier("1", 1, ""); } + @Test(expected = IllegalArgumentException.class) + public void validIdentifierInvalid4() throws Exception { +Check.validIdentifier("`a", 1, ""); + } + + @Test(expected = IllegalArgumentException.class) + public void validIdentifierInvalid5() throws Exception { +Check.validIdentifier("[a", 1, ""); + } + Review Comment: Sure, thanks @ayushtkn. Let me fix the current maxLength, and I will fix the other issue as well next.
[GitHub] [hadoop] hfutatzhanghb commented on a diff in pull request #5353: HDFS-16909. Make judging null statment out from for loop in ReplicaMap#mergeAll method.
hfutatzhanghb commented on code in PR #5353: URL: https://github.com/apache/hadoop/pull/5353#discussion_r1198943812 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java: ## @@ -178,13 +178,13 @@ void mergeAll(ReplicaMap other) { for (ReplicaInfo replicaInfo : replicaInfos) { replicaSet.add(replicaInfo); } +if (curSet == null) { Review Comment: @Hexiaoqiao , done. Thanks a lot for this suggestion.
[GitHub] [hadoop] ayushtkn merged pull request #5569: HDFS-16697.Add code to check the minimumRedundantVolumes value and add related log messages.
ayushtkn merged PR #5569: URL: https://github.com/apache/hadoop/pull/5569
[GitHub] [hadoop] hfutatzhanghb commented on a diff in pull request #5643: HDFS-17003. Erasure coding: invalidate wrong block after reporting bad blocks from datanode
hfutatzhanghb commented on code in PR #5643: URL: https://github.com/apache/hadoop/pull/5643#discussion_r1198937622 ## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithDecoding.java: ## @@ -169,6 +171,108 @@ public void testInvalidateBlock() throws IOException, InterruptedException { } } + @Test Review Comment: @Hexiaoqiao , Sir, I have added some java doc. Thanks for your suggestions.
[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5669: HADOOP-17518. Update the regex to A-Z
ayushtkn commented on code in PR #5669: URL: https://github.com/apache/hadoop/pull/5669#discussion_r1198937053 ## hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/util/TestCheck.java: ## @@ -116,6 +116,16 @@ public void validIdentifierInvalid3() throws Exception { Check.validIdentifier("1", 1, ""); } + @Test(expected = IllegalArgumentException.class) + public void validIdentifierInvalid4() throws Exception { +Check.validIdentifier("`a", 1, ""); + } + + @Test(expected = IllegalArgumentException.class) + public void validIdentifierInvalid5() throws Exception { +Check.validIdentifier("[a", 1, ""); + } + Review Comment: Debugging further. It is because you specified max length as 1; change it to 2 and it should work the way we want. Else it is still IAE ``` java.lang.IllegalArgumentException: [] = [`a] exceeds max len [1] at org.apache.hadoop.lib.util.Check.validIdentifier(Check.java:129) at org.apache.hadoop.lib.util.TestCheck.validIdentifierInvalid4(TestCheck.java:121) ``` So increase the param to 2. Btw. there is one more bug in the code here: ``` throw new IllegalArgumentException( MessageFormat.format("[{0}] = [{1}] must be '{2}'", name, value, IDENTIFIER_PATTERN_STR)); ``` the value of {2} doesn't get printed, mostly because of the single quotes; removing them fixes that. In case you're interested, you can raise a ticket and fix it as well, will be happy to review :-)
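The single-quote behaviour ayushtkn points out is standard `java.text.MessageFormat` semantics: inside a quoted section, `{2}` is literal text, so the placeholder is never substituted — which matches the bare `{2}` seen in the reported exception message. A small self-contained demonstration (the pattern string mirrors the one in the discussion; the class name is invented for the example):

```java
import java.text.MessageFormat;

public class MessageFormatQuoting {
    public static void main(String[] args) {
        String pattern = "[a-zA-Z_][a-zA-Z0-9_\\-]*";
        // With '{2}' in single quotes, MessageFormat treats {2} as literal
        // text, which is why the exception message ends in a bare "{2}".
        String quoted = MessageFormat.format(
            "[{0}] = [{1}] must be '{2}'", "", "[a", pattern);
        // Double quotes are not special to MessageFormat, so this variant
        // lets the third argument through.
        String fixed = MessageFormat.format(
            "[{0}] = [{1}] must be \"{2}\"", "", "[a", pattern);
        System.out.println(quoted); // [] = [[a] must be {2}
        System.out.println(fixed);  // [] = [[a] must be "[a-zA-Z_][a-zA-Z0-9_\-]*"
    }
}
```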
[GitHub] [hadoop] Hexiaoqiao commented on pull request #5503: HADOOP-18207. Introduce hadoop-logging module
Hexiaoqiao commented on PR #5503: URL: https://github.com/apache/hadoop/pull/5503#issuecomment-1554533102 > FYI @Hexiaoqiao if you have bandwidth to review. Thanks Sorry for the late response. I'm not familiar with the log module; I think involving @Apache9 or @jojochuang would be better here.
[GitHub] [hadoop] ashutoshcipher commented on pull request #5028: MAPREDUCE-7419. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-common
ashutoshcipher commented on PR #5028: URL: https://github.com/apache/hadoop/pull/5028#issuecomment-1554532846 Will make changes in my next commit. Thanks
[GitHub] [hadoop] ayushtkn commented on pull request #5028: MAPREDUCE-7419. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-common
ayushtkn commented on PR #5028: URL: https://github.com/apache/hadoop/pull/5028#issuecomment-1554522912 Hi @ashutoshcipher, I just meant don't use * there; rather, expand: ``` diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java index 6da0867f411..349528c7731 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java @@ -18,7 +18,13 @@ package org.apache.hadoop.mapreduce.v2.util; -import static org.junit.jupiter.api.Assertions.*; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; ``` And the second point was: if we don't need to remove public and things can work as they are, we should let it stay; it will reduce our code changes, and there is less chance of people blaming us if something breaks around that. There is a code formatter for Hadoop in case you're interested: https://github.com/apache/hadoop/blob/trunk/dev-support/code-formatter/hadoop_idea_formatter.xml
[GitHub] [hadoop] ashutoshcipher commented on pull request #5028: MAPREDUCE-7419. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-common
ashutoshcipher commented on PR #5028: URL: https://github.com/apache/hadoop/pull/5028#issuecomment-1554514446 > Have triggered the build again, test were failing due to unable to create native thread. Changes lgtm @ashutoshcipher you missed answering/adressing [#5028 (comment)](https://github.com/apache/hadoop/pull/5028#discussion_r1192841709) [#5028 (comment)](https://github.com/apache/hadoop/pull/5028#discussion_r1192843639) Sorry @ayushtkn for missing it. Can you please help with the import order thing?
[GitHub] [hadoop] ashutoshcipher commented on a diff in pull request #5028: MAPREDUCE-7419. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-common
ashutoshcipher commented on code in PR #5028: URL: https://github.com/apache/hadoop/pull/5028#discussion_r1198912667 ## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapred/TestJobClientGetJob.java: ## @@ -18,15 +18,15 @@ package org.apache.hadoop.mapred; -import static org.junit.Assert.assertNotNull; - import java.io.IOException; import org.apache.hadoop.conf.Configuration; + +import static org.junit.jupiter.api.Assertions.assertNotNull; Review Comment: Hi @ayushtkn. I used Intellij to optimize imports. Can you share what the ideal order should be here? Thanks
[GitHub] [hadoop] hfutatzhanghb commented on a diff in pull request #5643: HDFS-17003. Erasure coding: invalidate wrong block after reporting bad blocks from datanode
hfutatzhanghb commented on code in PR #5643: URL: https://github.com/apache/hadoop/pull/5643#discussion_r1198906389 ## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithDecoding.java: ## @@ -169,6 +171,108 @@ public void testInvalidateBlock() throws IOException, InterruptedException { } } + @Test + public void testCorruptionECBlockInvalidate() throws Exception { + +final Path file = new Path("/invalidate_corrupted"); +final int length = BLOCK_SIZE * NUM_DATA_UNITS; +final byte[] bytes = StripedFileTestUtil.generateBytes(length); +DFSTestUtil.writeFile(dfs, file, bytes); + +int dnIndex = findFirstDataNode(cluster, dfs, file, +CELL_SIZE * NUM_DATA_UNITS); +int dnIndex2 = findDataNodeAtIndex(cluster, dfs, file, +CELL_SIZE * NUM_DATA_UNITS, 2); +Assert.assertNotEquals(-1, dnIndex); +Assert.assertNotEquals(-1, dnIndex2); + +LocatedStripedBlock slb = (LocatedStripedBlock) dfs.getClient() +.getLocatedBlocks(file.toString(), 0, CELL_SIZE * NUM_DATA_UNITS) +.get(0); +final LocatedBlock[] blks = StripedBlockUtil.parseStripedBlockGroup(slb, +CELL_SIZE, NUM_DATA_UNITS, NUM_PARITY_UNITS); + +final Block b = blks[0].getBlock().getLocalBlock(); +final Block b2 = blks[1].getBlock().getLocalBlock(); + +// find the first block file Review Comment: Fixed. Thanks.
[GitHub] [hadoop] NishthaShah commented on a diff in pull request #5669: HADOOP-17518. Update the regex to A-Z
NishthaShah commented on code in PR #5669: URL: https://github.com/apache/hadoop/pull/5669#discussion_r1198905374 ## hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/util/TestCheck.java: ## @@ -116,6 +116,16 @@ public void validIdentifierInvalid3() throws Exception { Check.validIdentifier("1", 1, ""); } + @Test(expected = IllegalArgumentException.class) + public void validIdentifierInvalid4() throws Exception { +Check.validIdentifier("`a", 1, ""); + } + + @Test(expected = IllegalArgumentException.class) + public void validIdentifierInvalid5() throws Exception { +Check.validIdentifier("[a", 1, ""); + } + Review Comment: Just to clarify my understanding, I have added tests [expecting it would throw exceptions](https://github.com/apache/hadoop/pull/5669/files#diff-2704e2ed55a403b0863f01a628e5b9d8c19db8f58752d758bacc6dd8777b863dR119) in case the first character is not from a-z, A-Z and _ with my fix. So these test cases should fail without my fix. Are you seeing something different?
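For reference, the difference between the buggy `a-zA-z` range (what HADOOP-17518 corrects to `a-zA-Z`) can be checked directly: in ASCII, the span between 'Z' (0x5A) and 'a' (0x61) contains '[', '\', ']', '^', '_' and '`', so the range `A-z` accidentally admits exactly the characters these new tests probe. A standalone sketch (class name invented for the example; the patterns mirror the ones in the discussion):

```java
import java.util.regex.Pattern;

public class IdentifierPatternSketch {
    // Corrected pattern: first char must be a letter (either case) or '_'.
    static final Pattern FIXED = Pattern.compile("[a-zA-Z_][a-zA-Z0-9_\\-]*");
    // Buggy variant: the A-z range spans ASCII 0x41..0x7A, which also
    // covers '[', '\\', ']', '^', '_' and '`'.
    static final Pattern BUGGY = Pattern.compile("[a-zA-z_][a-zA-Z0-9_\\-]*");

    public static void main(String[] args) {
        System.out.println(FIXED.matcher("`a").matches());     // false
        System.out.println(BUGGY.matcher("`a").matches());     // true: '`' falls in A-z
        System.out.println(FIXED.matcher("[a").matches());     // false
        System.out.println(BUGGY.matcher("[a").matches());     // true: '[' falls in A-z
        System.out.println(FIXED.matcher("_abc-1").matches()); // true
    }
}
```

This is why `"`a"` and `"[a"` are good probe values: they pass the buggy pattern but are rejected by the corrected one.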
[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #5643: HDFS-17003. Erasure coding: invalidate wrong block after reporting bad blocks from datanode
Hexiaoqiao commented on code in PR #5643: URL: https://github.com/apache/hadoop/pull/5643#discussion_r1198901155

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithDecoding.java:

@@ -169,6 +171,108 @@ public void testInvalidateBlock() throws IOException, InterruptedException { } }

```java
  @Test
  public void testCorruptionECBlockInvalidate() throws Exception {

    final Path file = new Path("/invalidate_corrupted");
    final int length = BLOCK_SIZE * NUM_DATA_UNITS;
    final byte[] bytes = StripedFileTestUtil.generateBytes(length);
    DFSTestUtil.writeFile(dfs, file, bytes);

    int dnIndex = findFirstDataNode(cluster, dfs, file,
        CELL_SIZE * NUM_DATA_UNITS);
    int dnIndex2 = findDataNodeAtIndex(cluster, dfs, file,
        CELL_SIZE * NUM_DATA_UNITS, 2);
    Assert.assertNotEquals(-1, dnIndex);
    Assert.assertNotEquals(-1, dnIndex2);

    LocatedStripedBlock slb = (LocatedStripedBlock) dfs.getClient()
        .getLocatedBlocks(file.toString(), 0, CELL_SIZE * NUM_DATA_UNITS)
        .get(0);
    final LocatedBlock[] blks = StripedBlockUtil.parseStripedBlockGroup(slb,
        CELL_SIZE, NUM_DATA_UNITS, NUM_PARITY_UNITS);

    final Block b = blks[0].getBlock().getLocalBlock();
    final Block b2 = blks[1].getBlock().getLocalBlock();

    // find the first block file
```

Review Comment: Please start each sentence with a capital letter and end it with a period, in all comments.

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithDecoding.java:

@@ -169,6 +171,108 @@ public void testInvalidateBlock() throws IOException, InterruptedException { } }

```java
  @Test
```

Review Comment: I also suggest adding some Javadoc to the new unit test describing the case it is intended to cover.
[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #5353: HDFS-16909. Make judging null statement out from for loop in ReplicaMap#mergeAll method.
Hexiaoqiao commented on code in PR #5353: URL: https://github.com/apache/hadoop/pull/5353#discussion_r1198896990

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java:

@@ -178,13 +178,13 @@ void mergeAll(ReplicaMap other) { for (ReplicaInfo replicaInfo : replicaInfos) { replicaSet.add(replicaInfo); } +if (curSet == null) {

Review Comment: This condition should be the following:

```java
if (curSet == null && !replicaSet.isEmpty()) {
  // Add an entry for block pool if it does not exist already
  curSet = new LightWeightResizableGSet<>();
  map.put(bp, curSet);
}
```
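The suggested guard can be illustrated with a minimal stand-alone sketch. Plain `HashMap`/`HashSet` stand in for the real `LightWeightResizableGSet`, and `merge` is a hypothetical simplification of `ReplicaMap#mergeAll`, not the actual method: an entry for a block pool is created only when the incoming replica set is non-empty, so an empty merge does not add a spurious map entry.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class MergeAllSketch {
    // Hypothetical simplification of ReplicaMap: block pool id -> replicas.
    private final Map<String, Set<String>> map = new HashMap<>();

    void merge(String bp, Set<String> replicaSet) {
        Set<String> curSet = map.get(bp);
        // The guard suggested in the review: only create an entry for the
        // block pool when there is actually something to merge.
        if (curSet == null && !replicaSet.isEmpty()) {
            curSet = new HashSet<>();
            map.put(bp, curSet);
        }
        if (curSet != null) {
            curSet.addAll(replicaSet);
        }
    }

    boolean hasPool(String bp) {
        return map.containsKey(bp);
    }

    public static void main(String[] args) {
        MergeAllSketch sketch = new MergeAllSketch();
        sketch.merge("bp-empty", new HashSet<>());   // empty: no entry created
        Set<String> replicas = new HashSet<>();
        replicas.add("blk_1");
        sketch.merge("bp-data", replicas);           // non-empty: entry created
        if (sketch.hasPool("bp-empty") || !sketch.hasPool("bp-data")) {
            throw new AssertionError("merge guard behaved unexpectedly");
        }
        System.out.println("merge sketch passed");
    }
}
```

The same effect as the review suggestion: moving the null check out of the per-replica loop while avoiding map entries for block pools that contribute nothing.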
[GitHub] [hadoop] hadoop-yetus commented on pull request #5638: HADOOP-18709. Add curator based ZooKeeper communication support over…
hadoop-yetus commented on PR #5638: URL: https://github.com/apache/hadoop/pull/5638#issuecomment-1554469170 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 8 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 35s | | trunk passed | | +1 :green_heart: | compile | 15m 28s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 14m 24s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 14s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 34s | | trunk passed | | +1 :green_heart: | javadoc | 1m 17s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 52s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 2m 37s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 10s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 51s | | the patch passed | | +1 :green_heart: | compile | 15m 1s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 15m 1s | | the patch passed | | +1 :green_heart: | compile | 14m 22s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 14m 22s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 8s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 33s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 1s | | No new issues. | | +1 :green_heart: | javadoc | 1m 7s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 52s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 2m 35s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 21s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 4s | | hadoop-common in the patch passed. | | -1 :x: | asflicense | 1m 3s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5638/12/artifact/out/results-asflicense.txt) | The patch generated 5 ASF License warnings. 
| | | | 174m 30s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5638/12/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5638 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle shellcheck shelldocs | | uname | Linux 663beea57d0a 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 521931a58471fe5da6a2fd792f7550f5b737ef46 | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5638/12/testReport/ | | Max. process+thread count | 1322 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5638/12/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 shellcheck=0
[GitHub] [hadoop] hadoop-yetus commented on pull request #5569: HDFS-16697. Add code to check the minimumRedundantVolumes value and add related log messages.
hadoop-yetus commented on PR #5569: URL: https://github.com/apache/hadoop/pull/5569#issuecomment-1554419828 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 36m 25s | | trunk passed | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 1m 14s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 8s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 23s | | trunk passed | | +1 :green_heart: | javadoc | 1m 14s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 35s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 3m 40s | | trunk passed | | +1 :green_heart: | shadedclient | 26m 31s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 11s | | the patch passed | | +1 :green_heart: | compile | 1m 16s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 1m 16s | | the patch passed | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 58s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 17s | | the patch passed | | +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 32s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 3m 30s | | the patch passed | | +1 :green_heart: | shadedclient | 26m 16s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 244m 8s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5569/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. 
| | | | 356m 51s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5569/11/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5569 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux a568938a93ce 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 2e3f9f7c36768a338eff18649c3d8f287cb67d1f | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5569/11/testReport/ | | Max. process+thread count | 2154 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5569/11/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This messag
[GitHub] [hadoop] hadoop-yetus commented on pull request #5569: HDFS-16697. Add code to check the minimumRedundantVolumes value and add related log messages.
hadoop-yetus commented on PR #5569: URL: https://github.com/apache/hadoop/pull/5569#issuecomment-1554417724 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 5s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 58s | | trunk passed | | +1 :green_heart: | compile | 1m 19s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 1m 12s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 7s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 24s | | trunk passed | | +1 :green_heart: | javadoc | 1m 11s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 37s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 3m 44s | | trunk passed | | +1 :green_heart: | shadedclient | 26m 54s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 11s | | the patch passed | | +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 6s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 1m 6s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 54s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 14s | | the patch passed | | +1 :green_heart: | javadoc | 0m 56s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 29s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 3m 38s | | the patch passed | | +1 :green_heart: | shadedclient | 26m 21s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 245m 40s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5569/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 5s | | The patch does not generate ASF License warnings. 
| | | | 358m 42s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestRollingUpgrade | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5569/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5569 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 4eb6e1effbc8 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 2e3f9f7c36768a338eff18649c3d8f287cb67d1f | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5569/10/testReport/ | | Max. process+thread count | 2058 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5569/10/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https:/
[GitHub] [hadoop] hadoop-yetus commented on pull request #5677: [YARN-11496] Improve TimelineService log format.
hadoop-yetus commented on PR #5677: URL: https://github.com/apache/hadoop/pull/5677#issuecomment-1554388820 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 32s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 8s | | trunk passed | | +1 :green_heart: | compile | 0m 27s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 0m 26s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 31s | | trunk passed | | +1 :green_heart: | javadoc | 0m 37s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 0m 59s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 32s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 20s | | the patch passed | | +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 0m 19s | | the patch passed | | +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 0m 19s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 14s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 20s | | the patch passed | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 18s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 0m 46s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 11s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 42s | | hadoop-yarn-server-timelineservice in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. 
| | | | 86m 4s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5677/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5677 | | JIRA Issue | YARN-11496 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux b4c5da6e953a 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / c428455f04c79355fb0f4274b7b5d4c5b7ca1e5d | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5677/2/testReport/ | | Max. process+thread count | 559 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5677/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was aut
[GitHub] [hadoop] rohit-kb commented on pull request #5639: HADOOP-18711. Upgrade nimbus jwt jar due to issues in its embedded shaded json-smart code
rohit-kb commented on PR #5639: URL: https://github.com/apache/hadoop/pull/5639#issuecomment-1554375086

Hi @ayushtkn, following up on the above comment, I think there are two options to proceed:
1. Cherry-pick [HADOOP-18131](https://issues.apache.org/jira/browse/HADOOP-18131), adding **log4j** exclusions because **trunk** uses **log4j** while **branch-3.3** uses **reload4j**.
2. Retain the filters in this PR.

I was thinking of going with the second one for the moment, as the first might complicate things further. Please provide your input on the same.
[GitHub] [hadoop] hadoop-yetus commented on pull request #5677: [YARN-11496] Improve TimelineService log format.
hadoop-yetus commented on PR #5677: URL: https://github.com/apache/hadoop/pull/5677#issuecomment-1554372162 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 57s | | trunk passed | | +1 :green_heart: | compile | 0m 30s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 0m 26s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 0m 30s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 31s | | trunk passed | | +1 :green_heart: | javadoc | 0m 37s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 0m 58s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 11s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 19s | | the patch passed | | +1 :green_heart: | compile | 0m 20s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 0m 20s | | the patch passed | | +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 0m 19s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 15s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5677/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice: The patch generated 2 new + 38 unchanged - 0 fixed = 40 total (was 38) | | +1 :green_heart: | mvnsite | 0m 22s | | the patch passed | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 0m 48s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 53s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 36s | | hadoop-yarn-server-timelineservice in the patch passed. | | +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. 
| | | | 85m 45s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5677/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5677 | | JIRA Issue | YARN-11496 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux f9a99e89315f 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 630c030ed580343a252abb1802d2b80363d347d8 | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5677/1/testReport/ | | Max. process+thread count | 703 (vs. ulimit of 5500) | | modu
[GitHub] [hadoop] leixm opened a new pull request, #5677: [YARN-11496] Improve TimelineService log format.
leixm opened a new pull request, #5677: URL: https://github.com/apache/hadoop/pull/5677

### Description of PR
Improve TimelineService log format.

### How was this patch tested?
Existing UTs.
[GitHub] [hadoop] hadoop-yetus commented on pull request #5667: HDFS-17017. Fix the issue of arguments number limit in report command in DFSAdmin
hadoop-yetus commented on PR #5667: URL: https://github.com/apache/hadoop/pull/5667#issuecomment-1554227074 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 55s | | trunk passed | | +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 1m 10s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 8s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 18s | | trunk passed | | +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 31s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 3m 16s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 12s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 6s | | the patch passed | | +1 :green_heart: | compile | 1m 6s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 1m 6s | | the patch passed | | +1 :green_heart: | compile | 1m 2s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 1m 2s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 53s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 11s | | the patch passed | | +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 23s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 3m 2s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 35s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 212m 46s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5667/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. 
| | | | 313m 13s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestObserverNode | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5667/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5667 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 79e2051688e4 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9041764e27f8556cc14acdfda60b479cda172a11 | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5667/2/testReport/ | | Max. process+thread count | 3737 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5667/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was
[GitHub] [hadoop] zhangshuyan0 commented on pull request #5353: HDFS-16909. Make judging null statement out from for loop in ReplicaMap#mergeAll method.
zhangshuyan0 commented on PR #5353: URL: https://github.com/apache/hadoop/pull/5353#issuecomment-1554198518

+1. @Hexiaoqiao Would you mind taking another look?