Apache Hadoop qbt Report: branch-3.2+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/

[Oct 20, 2023, 10:05:54 AM] (github) YARN-11578. Cache fs supports chmod in LogAggregationFileController. (#6120) (#6143)

-1 overall

The following subsystems voted -1:
    asflicense blanks hadolint pathlen unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml

    Failed junit tests :
       hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
       hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
       hadoop.mapred.uploader.TestFrameworkUploader
       hadoop.yarn.sls.appmaster.TestAMSimulator

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/results-compile-cc-root.txt [48K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/results-compile-javac-root.txt [332K]

   blanks:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/blanks-eol.txt [13M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/blanks-tabs.txt [2.0M]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/results-checkstyle-root.txt [14M]

   hadolint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/results-hadolint.txt [8.0K]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/results-pathlen.txt [16K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/results-pylint.txt [148K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/results-shellcheck.txt [20K]

   xml:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/xml.txt [16K]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/results-javadoc-javadoc-root.txt [1.7M]

   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [528K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-uploader.txt [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]

   asflicense:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/122/artifact/out/results-asflicense.txt [4.0K]

Powered by Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-18952) FsCommand Stat class sets the timeZone "UTC", which is different from the machine's timeZone
liang yu created HADOOP-18952:
----------------------------------

             Summary: FsCommand Stat class sets the timeZone "UTC", which is different from the machine's timeZone
                 Key: HADOOP-18952
                 URL: https://issues.apache.org/jira/browse/HADOOP-18952
             Project: Hadoop Common
          Issue Type: Bug
         Environment: Using Hadoop 3.3.4-release
            Reporter: liang yu
         Attachments: image-2023-10-26-10-00-19-636.png

Using Hadoop version 3.3.4. When executing the Ls command and the Stat command on the same Hadoop file, I get two different timestamps.

!image-2023-10-26-10-00-19-636.png!

I am in China, where the timezone is "UTC+8", so the timestamp from the Ls command is correct and the timestamp from the Stat command is wrong.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
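The mismatch described above can be reproduced outside Hadoop with a minimal sketch. The date pattern, class name, and sample timestamp below are illustrative, not copied from the FsShell Stat/Ls source; the point is that a formatter pinned to UTC and a formatter using the machine's zone render the same file mtime eight hours apart for a UTC+8 user.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class StatZoneDemo {
    public static void main(String[] args) {
        long mtime = 1698285600000L; // sample modification time, epoch millis

        // A formatter pinned to UTC, as the Stat command reportedly does.
        SimpleDateFormat statStyle = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        statStyle.setTimeZone(TimeZone.getTimeZone("UTC"));

        // A formatter using a local zone (UTC+8 here, matching the reporter).
        SimpleDateFormat lsStyle = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        lsStyle.setTimeZone(TimeZone.getTimeZone("GMT+8"));

        // The same mtime prints as two different wall-clock times.
        System.out.println("stat-style: " + statStyle.format(new Date(mtime))); // 2023-10-26 02:00:00
        System.out.println("ls-style:   " + lsStyle.format(new Date(mtime)));   // 2023-10-26 10:00:00
    }
}
```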
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/

No changes

-1 overall

The following subsystems voted -1:
    compile golang hadolint mvninstall mvnsite pathlen unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

   mvninstall:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-mvninstall-root.txt [56K]

   compile:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-compile-root.txt [32K]

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-compile-root.txt [32K]

   golang:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-compile-root.txt [32K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-compile-root.txt [32K]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/buildtool-patch-checkstyle-root.txt [20K]

   hadolint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/diff-patch-hadolint.txt [4.0K]

   mvnsite:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-mvnsite-root.txt [20K]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/pathlen.txt [12K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/diff-patch-shellcheck.txt [72K]

   whitespace:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/whitespace-tabs.txt [1.3M]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-javadoc-root.txt [40K]

   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-assemblies.txt [4.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-build-tools.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-common-project_hadoop-annotations.txt [4.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-common-project_hadoop-auth.txt [8.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-common-project_hadoop-auth-examples.txt [4.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [4.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt [8.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-common-project_hadoop-minikdc.txt [4.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-common-project_hadoop-nfs.txt [4.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [4.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt [4.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [8.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt [4.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-nfs.txt [8.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [8.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1191/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [4.0K]
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/572/

[Oct 23, 2023, 2:35:33 PM] (Ayush Saxena) HDFS-17228. Improve documentation related to BlockManager. (#6195). Contributed by JiangHua Zhu.
[Oct 23, 2023, 2:42:39 PM] (github) HDFS-17235. Fix javadoc errors in BlockManager (#6214). Contributed by Haiyang Hu.
[Oct 23, 2023, 6:03:15 PM] (github) HADOOP-18919. Zookeeper SSL/TLS support in HDFS ZKFC (#6194)
[Oct 23, 2023, 9:06:02 PM] (github) HADOOP-18868. Optimize the configuration and use of callqueue overflow trigger failover (#5998)
[Oct 24, 2023, 1:28:05 AM] (github) YARN-11500. [Addendum] Fix typos in hadoop-yarn-server-common#federation. (#6212) Contributed by Shilun Fan.
[Oct 24, 2023, 1:36:06 AM] (github) YARN-11576. Improve FederationInterceptorREST AuditLog. (#6117) Contributed by Shilun Fan.
[Oct 24, 2023, 11:28:40 AM] (github) HADOOP-18949. upgrade maven dependency plugin due to CVE-2021-26291. (#6219)
[Oct 24, 2023, 5:17:52 PM] (github) HADOOP-18889. Third party storage followup. (#6186)
[Oct 24, 2023, 8:39:03 PM] (github) HDFS-17237. Remove IPCLoggerChannelMetrics when the logger is closed (#6217)
[Oct 25, 2023, 3:43:12 AM] (github) HDFS-17231. HA: Safemode should exit when resources are from low to available. (#6207). Contributed by Gu Peng.

-1 overall

The following subsystems voted -1:
    blanks hadolint mvnsite pathlen spotbugs unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs :

       module:hadoop-hdfs-project/hadoop-hdfs
       Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)) Redundant null check at DataStorage.java:[line 695]
       Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) Redundant null check at MappableBlockLoader.java:[line 138]
       Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at MemoryMappableBlockLoader.java:[line 75]
       Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at NativePmemMappableBlockLoader.java:[line 85]
       Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$$PmemMappedRegion,, long, FileInputStream, FileChannel, String) Redundant null check at
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1390/

[Oct 24, 2023, 11:28:40 AM] (github) HADOOP-18949. upgrade maven dependency plugin due to CVE-2021-26291. (#6219)
[Oct 24, 2023, 5:17:52 PM] (github) HADOOP-18889. Third party storage followup. (#6186)
[Oct 24, 2023, 8:39:03 PM] (github) HDFS-17237. Remove IPCLoggerChannelMetrics when the logger is closed (#6217)

-1 overall

The following subsystems voted -1:
    blanks hadolint pathlen xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1390/artifact/out/results-compile-cc-root.txt [96K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1390/artifact/out/results-compile-javac-root.txt [12K]

   blanks:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1390/artifact/out/blanks-eol.txt [15M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1390/artifact/out/blanks-tabs.txt [2.0M]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1390/artifact/out/results-checkstyle-root.txt [13M]

   hadolint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1390/artifact/out/results-hadolint.txt [20K]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1390/artifact/out/results-pathlen.txt [16K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1390/artifact/out/results-pylint.txt [20K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1390/artifact/out/results-shellcheck.txt [24K]

   xml:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1390/artifact/out/xml.txt [24K]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1390/artifact/out/results-javadoc-javadoc-root.txt [244K]
[jira] [Created] (HADOOP-18951) S3A third party: document "Certificate doesn't match"
Steve Loughran created HADOOP-18951:
---------------------------------------

             Summary: S3A third party: document "Certificate doesn't match"
                 Key: HADOOP-18951
                 URL: https://issues.apache.org/jira/browse/HADOOP-18951
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: documentation, fs/s3
    Affects Versions: 3.3.6, 3.4.0
            Reporter: Steve Loughran

A recurrent problem with third-party stores is that the user gets an error message about HTTPS certificates:

{code}
Unable to execute HTTP request: Certificate for doesn't match any of the subject alternative names: [*.dev.net]
{code}

This is happening because
# the store uses HTTPS and there is an organization certificate
# the store does support virtual hostname access, but it does not match the HTTPS wildcard

Fix: switch to path style access.

*Add this detail to the third party store doc.*
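The "switch to path style access" fix above is a client-side configuration change. A minimal sketch of the corresponding core-site.xml entry, using the standard S3A property (shown as an illustration, not a quote from the docs being written):

```xml
<!-- Force path-style requests (https://endpoint/bucket/key) instead of
     virtual-hosted-style (https://bucket.endpoint/key), so the request
     hostname stays within the store's certificate wildcard. -->
<property>
  <name>fs.s3a.path.style.access</name>
  <value>true</value>
</property>
```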
[jira] [Resolved] (HADOOP-18948) S3A. Add option fs.s3a.directory.operations.purge.uploads to purge on rename/delete
[ https://issues.apache.org/jira/browse/HADOOP-18948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-18948.
-------------------------------------
    Fix Version/s: 3.4.0
     Release Note: S3A directory delete and rename will optionally abort all pending uploads under the to-be-deleted paths when fs.s3a.directory.operations.purge.uploads is true. It is off by default.
       Resolution: Fixed

> S3A. Add option fs.s3a.directory.operations.purge.uploads to purge on rename/delete
> -----------------------------------------------------------------------------------
>
>                 Key: HADOOP-18948
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18948
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
> On third-party stores without lifecycle rules it is possible to accrue many GB of pending multipart uploads, including from:
> * magic committer jobs where the Spark driver/MR AM failed before commit/abort
> * distcp jobs which time out and get aborted
> * any client code writing datasets which is interrupted before close
> Although there is a purge-pending-uploads option, that is dangerous because if any fs is instantiated with it, it can destroy in-flight work. Otherwise, the "hadoop s3guard uploads" command does work but needs scheduling/manual execution.
> Proposed: add a new property {{fs.s3a.directory.operations.purge.uploads}} which will automatically cancel all pending uploads under a path:
> * delete: everything under the dir
> * rename: all under the source dir
> This will be done in parallel to the normal operation, with no attempt to post abortMultipartUploads in different threads. The assumption here is that this is rare. And it'll be off by default, as on AWS people should have rules for these things.
> + doc (third_party?)
> + add new counter/metric for abort operations, count and duration
> + test to include cost assertions
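Per the release note above, the new behaviour is opt-in. A minimal core-site.xml sketch enabling it (the property name comes from the issue title; everything else about the snippet is illustrative):

```xml
<!-- Abort pending multipart uploads under a directory when that
     directory is deleted or renamed. Off by default; intended for
     third-party stores without S3 lifecycle rules. -->
<property>
  <name>fs.s3a.directory.operations.purge.uploads</name>
  <value>true</value>
</property>
```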
[jira] [Resolved] (HADOOP-18933) upgrade netty to 4.1.100 due to CVE
[ https://issues.apache.org/jira/browse/HADOOP-18933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-18933.
-------------------------------------
    Resolution: Fixed

> upgrade netty to 4.1.100 due to CVE
> -----------------------------------
>
>                 Key: HADOOP-18933
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18933
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: build
>            Reporter: PJ Fanning
>            Assignee: PJ Fanning
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
> Follow-up to https://issues.apache.org/jira/browse/HADOOP-18783
> https://netty.io/news/2023/10/10/4-1-100-Final.html
> Security advisory: https://github.com/netty/netty/security/advisories/GHSA-xpw8-rcwv-8f8p
> "HTTP/2 Rapid Reset Attack - DDoS vector in the HTTP/2 protocol due to RST frames"
[jira] [Created] (HADOOP-18950) upgrade avro to 1.11.3 due to CVE
Xuze Yang created HADOOP-18950:
----------------------------------

             Summary: upgrade avro to 1.11.3 due to CVE
                 Key: HADOOP-18950
                 URL: https://issues.apache.org/jira/browse/HADOOP-18950
             Project: Hadoop Common
          Issue Type: Bug
          Components: common
            Reporter: Xuze Yang

https://nvd.nist.gov/vuln/detail/CVE-2023-39410

When deserializing untrusted or corrupted data, it is possible for a reader to consume memory beyond the allowed constraints and thus lead to out of memory on the system. This issue affects Java applications using Apache Avro Java SDK up to and including 1.11.2. Users should update to apache-avro version 1.11.3 which addresses this issue.
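For downstream applications that cannot wait for a Hadoop release carrying the bump, the patched Avro can be pinned directly. A sketch of a Maven dependencyManagement override (the coordinates are the standard Avro ones; whether an override is safe depends on your dependency tree):

```xml
<!-- Pin Avro to the release that fixes CVE-2023-39410. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.avro</groupId>
      <artifactId>avro</artifactId>
      <version>1.11.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```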
[jira] [Resolved] (HADOOP-18932) Upgrade AWS v2 SDK to 2.20.160 and v1 to 1.12.565
[ https://issues.apache.org/jira/browse/HADOOP-18932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-18932.
-------------------------------------
    Fix Version/s: 3.4.0
         Assignee: Steve Loughran
       Resolution: Fixed

> Upgrade AWS v2 SDK to 2.20.160 and v1 to 1.12.565
> -------------------------------------------------
>
>                 Key: HADOOP-18932
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18932
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
> Bump up the SDK versions for both...even if we don't ship v1 it helps us qualify releases with newer versions, and means that an upgrade of that alone to branch-3.3 will be in sync.
[jira] [Resolved] (HADOOP-18920) RPC Metrics : Optimize logic for log slow RPCs
[ https://issues.apache.org/jira/browse/HADOOP-18920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ZanderXu resolved HADOOP-18920.
-------------------------------
    Fix Version/s: 3.4.0
     Hadoop Flags: Reviewed
       Resolution: Fixed

> RPC Metrics : Optimize logic for log slow RPCs
> ----------------------------------------------
>
>                 Key: HADOOP-18920
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18920
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
> HADOOP-12325 implemented a capability where "slow" RPCs are logged in the NN log. The current processing logic declares "slow" RPCs to be those whose processing time is more than 3 standard deviations out.
> However, in practice many slow-RPC log entries are output, and sometimes RPCs with a processing time of 1 ms are also declared slow, which is not in line with actual expectations.
> Therefore, consider optimizing the logic conditions for slow RPCs and adding a `logSlowRPCThresholdMs` variable to judge whether the current RPC is slow, so that only the expected slow RPCs are logged.
> For `logSlowRPCThresholdMs`, we can support dynamic refresh to facilitate adjustments based on the actual operating conditions of the HDFS cluster.
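The proposed condition can be sketched in a few lines. This is a hypothetical illustration of the idea in the issue, not the actual Hadoop implementation; the class, method, and parameter names are invented for the example. An RPC is only reported as slow when it is both a statistical outlier under the existing 3-sigma rule and above the new absolute threshold, so a 1 ms call can never be flagged.

```java
public class SlowRpcCheck {
    static boolean isSlowRpc(double processingMs, double mean, double stdDev,
                             long logSlowRPCThresholdMs) {
        boolean outlier = processingMs > mean + 3 * stdDev;        // existing 3-sigma rule
        boolean aboveFloor = processingMs > logSlowRPCThresholdMs; // proposed absolute floor
        return outlier && aboveFloor;
    }

    public static void main(String[] args) {
        // 1 ms is an outlier on a very fast cluster, but stays below the floor.
        System.out.println(isSlowRpc(1.0, 0.2, 0.1, 100));      // false
        // A genuinely slow call passes both conditions.
        System.out.println(isSlowRpc(450.0, 50.0, 100.0, 100)); // true
    }
}
```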