Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/822/

[Mar 27, 2022 1:23:48 PM] (noreply) HDFS-16355. Improve the description of dfs.block.scanner.volume.bytes.per.second (#3724)

-1 overall

The following subsystems voted -1:
    blanks pathlen spotbugs unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs :

       module:hadoop-tools/hadoop-sls
         org.apache.hadoop.yarn.sls.AMRunner.setInputTraces(String[]) may expose internal representation by storing an externally mutable object into AMRunner.inputTraces At AMRunner.java:[line 267]
         Write to static field org.apache.hadoop.yarn.sls.AMRunner.REMAINING_APPS from instance method org.apache.hadoop.yarn.sls.AMRunner.startAM() At AMRunner.java:[line 116]

       module:hadoop-tools
         org.apache.hadoop.yarn.sls.AMRunner.setInputTraces(String[]) may expose internal representation by storing an externally mutable object into AMRunner.inputTraces At AMRunner.java:[line 267]
         Write to static field org.apache.hadoop.yarn.sls.AMRunner.REMAINING_APPS from instance method org.apache.hadoop.yarn.sls.AMRunner.startAM() At AMRunner.java:[line 116]

       module:root
         org.apache.hadoop.yarn.sls.AMRunner.setInputTraces(String[]) may expose internal representation by storing an externally mutable object into AMRunner.inputTraces At AMRunner.java:[line 267]
         Write to static field org.apache.hadoop.yarn.sls.AMRunner.REMAINING_APPS from instance method org.apache.hadoop.yarn.sls.AMRunner.startAM() At AMRunner.java:[line 116]

    Failed junit tests :
       hadoop.yarn.conf.TestYarnConfigurationFields
       hadoop.yarn.sls.TestSLSRunner
       hadoop.yarn.sls.TestSLSGenericSynth
       hadoop.yarn.sls.TestSLSStreamAMSynth

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/822/artifact/out/results-compile-cc-root.txt [96K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/822/artifact/out/results-compile-javac-root.txt [340K]

   blanks:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/822/artifact/out/blanks-eol.txt [13M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/822/artifact/out/blanks-tabs.txt [2.0M]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/822/artifact/out/results-checkstyle-root.txt [14M]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/822/artifact/out/results-pathlen.txt [16K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/822/artifact/out/results-pylint.txt [20K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/822/artifact/out/results-shellcheck.txt [28K]

   xml:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/822/artifact/out/xml.txt [24K]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/822/artifact/out/results-javadoc-javadoc-root.txt [400K]

   spotbugs:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/822/artifact/out/branch-spotbugs-hadoop-tools_hadoop-sls-warnings.html [8.0K]
Re: [E] [NOTICE] Attaching patches in JIRA issue no longer works
If we're not using patches on JIRA anymore, why are we using JIRA at all? Why don't we just use GitHub Issues? Using JIRA only to redirect to GitHub seems unintuitive and will fracture the information between two different places. Do the conversations happen on JIRA or on a GitHub PR? Having conversations in both places is confusing and splits the information. I would rather use JIRA with patches or GitHub Issues with PRs. I think anything in between splits information and makes it hard to find.

Eric

On Sun, Mar 27, 2022 at 1:25 PM Akira Ajisaka wrote:
> Dear Hadoop developers,
>
> I've disabled the Precommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs.
> If you attach a patch to a JIRA issue, the Jenkins precommit job won't run.
> Please use GitHub PRs for code review.
>
> Background:
> - https://issues.apache.org/jira/browse/HADOOP-17798
> - https://lists.apache.org/thread/6g3n4wo3b3tpq2qxyyth3y8m9z4mcj8p
>
> Thanks and regards,
> Akira
[jira] [Created] (HADOOP-18184) s3a prefetching stream to support unbuffer()
Steve Loughran created HADOOP-18184:
---------------------------------------

             Summary: s3a prefetching stream to support unbuffer()
                 Key: HADOOP-18184
                 URL: https://issues.apache.org/jira/browse/HADOOP-18184
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
    Affects Versions: 3.4.0
            Reporter: Steve Loughran


Apache Impala uses unbuffer() to free up all client-side resources held by a stream, allowing it to keep a map of available (path -> stream) objects retained across queries. This saves having to reopen the files, at the cost of HEAD checks etc.

S3AInputStream just closes its HTTP connection. Here there is a lot more state to discard, but all memory and file storage must be freed.

Until this is done, ITestS3AContractUnbuffer must skip when the prefetch stream is used. It's notable that the other tests don't fail even though the stream doesn't implement the interface; the graceful degradation handles that. It should fail if the test XML resource says the stream supports it but the stream capabilities say it doesn't.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
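To make the contract concrete, here is a minimal, self-contained sketch of what unbuffer() asks of a prefetching stream: drop every buffer (and, in real code, any spilled disk blocks and HTTP state) while keeping the path and position so a later read can lazily re-acquire resources. The class and field names (PrefetchingStream, prefetchBuffer) are hypothetical illustrations, not the actual S3A implementation.

```java
/**
 * Illustrative sketch only, not S3A code. unbuffer() must free all
 * client-side resources but leave the stream usable: path and position
 * survive, and the next read() lazily re-acquires what it needs.
 */
class PrefetchingStream {
  private final String path;
  private long pos;
  private byte[] prefetchBuffer; // stands in for block cache + disk spill

  PrefetchingStream(String path) {
    this.path = path;
    this.prefetchBuffer = new byte[8192]; // eagerly "prefetched" data
  }

  /** Free all client-side resources; the stream stays open and usable. */
  public void unbuffer() {
    prefetchBuffer = null; // real code would also delete spill files
  }

  public boolean hasBufferedData() {
    return prefetchBuffer != null;
  }

  public int read() {
    if (prefetchBuffer == null) { // lazily re-acquire after unbuffer()
      prefetchBuffer = new byte[8192];
    }
    pos++;
    return prefetchBuffer[0];
  }

  public long getPos() { return pos; }
}
```

A caller like Impala would then hold a (path -> stream) map and call unbuffer() on each entry between queries, paying only a re-acquisition cost on the next read instead of a full reopen.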
[jira] [Created] (HADOOP-18183) s3a audit logs to publish range start/end of GET requests in audit header
Steve Loughran created HADOOP-18183:
---------------------------------------

             Summary: s3a audit logs to publish range start/end of GET requests in audit header
                 Key: HADOOP-18183
                 URL: https://issues.apache.org/jira/browse/HADOOP-18183
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
    Affects Versions: 3.3.2
            Reporter: Steve Loughran


We don't get the range of ranged GET requests in S3 server logs, because the AWS S3 log doesn't record that information. We can see it's a partial GET from the 206 response, but the length of data retrieved is lost.

LoggingAuditor.beforeExecution() would need to recognise a ranged GET and determine the extra key-value pairs for range start and end (rs & re?).

We might need to modify {{HttpReferrerAuditHeader.buildHttpReferrer()}} to take a map of extra values so it can dynamically create a header for each request; currently that is not in there.
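As a rough illustration of the proposal, the sketch below builds a referrer-header-style query string from a base map plus the suggested range keys. The "rs"/"re" key names come from the issue text, but the class and method here are hypothetical, not the actual LoggingAuditor or HttpReferrerAuditHeader API.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

/**
 * Hypothetical sketch: assemble audit-header query parameters from a map,
 * adding range start/end keys ("rs"/"re") for a ranged GET. Not the real
 * s3a auditor API; names are assumptions for illustration.
 */
class RangeAuditParams {
  static String buildQuery(long rangeStart, long rangeEnd,
                           Map<String, String> base) {
    // LinkedHashMap keeps insertion order so the header is deterministic.
    Map<String, String> params = new LinkedHashMap<>(base);
    params.put("rs", Long.toString(rangeStart));
    params.put("re", Long.toString(rangeEnd));
    return params.entrySet().stream()
        .map(e -> e.getKey() + "=" + e.getValue())
        .collect(Collectors.joining("&"));
  }
}
```

The point of passing a map rather than fixed fields is exactly what the issue hints at: each request can contribute its own dynamic key-value pairs without changing the header-building code again.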
[jira] [Created] (HADOOP-18182) S3File to store reference to active S3Object in a field.
Steve Loughran created HADOOP-18182:
---------------------------------------

             Summary: S3File to store reference to active S3Object in a field.
                 Key: HADOOP-18182
                 URL: https://issues.apache.org/jira/browse/HADOOP-18182
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
    Affects Versions: 3.4.0
            Reporter: Steve Loughran


HADOOP-17338 showed us how {{S3Object.finalize()}} can call stream.close() and so close an active stream if a GC happens during a read. Replicate the same fix here.
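The hazard and the fix pattern can be shown in a few lines. If code keeps only the inner InputStream, the owning response object can become unreachable mid-read and its finalizer may close the stream underneath the reader; holding a field reference to the owner pins it for the holder's lifetime. This is a generic sketch of the HADOOP-17338 pattern with hypothetical names, not the actual S3File code.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

/**
 * Sketch of the finalize-safety pattern: keep a strong reference to the
 * object that owns the stream, so GC cannot finalize it (and close the
 * stream) while reads are still in flight.
 */
class S3ObjectHolder {
  // Strong reference: the owner stays reachable as long as this holder is.
  private final Object owningResponse;
  private final InputStream wrapped;

  S3ObjectHolder(Object owningResponse, InputStream wrapped) {
    this.owningResponse = owningResponse;
    this.wrapped = wrapped;
  }

  int read() {
    try {
      // owningResponse cannot be finalized while this holder is reachable,
      // so the wrapped stream cannot be closed under us by a GC.
      return wrapped.read();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  void close() {
    try {
      wrapped.close(); // explicit close is the only shutdown path
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```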
[jira] [Created] (HADOOP-18181) move org.apache.hadoop.fs.common package into hadoop-common module
Steve Loughran created HADOOP-18181:
---------------------------------------

             Summary: move org.apache.hadoop.fs.common package into hadoop-common module
                 Key: HADOOP-18181
                 URL: https://issues.apache.org/jira/browse/HADOOP-18181
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
    Affects Versions: 3.4.0
            Reporter: Steve Loughran


Move the org.apache.hadoop.fs.common package from hadoop-aws, along with any tests, into the hadoop-common jar and the package org.apache.hadoop.fs.impl (except for any bits we find are broadly useful for applications using any new APIs, in which case somewhere more public, such as o.a.h.util.functional for the futures work).
[jira] [Resolved] (HADOOP-18176) s3a prefetching stream to move off twitter FuturePool
[ https://issues.apache.org/jira/browse/HADOOP-18176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-18176.
-------------------------------------
    Resolution: Duplicate

Duplicated by HADOOP-18180, which comes with a PR.

> s3a prefetching stream to move off twitter FuturePool
> -----------------------------------------------------
>
>                 Key: HADOOP-18176
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18176
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Priority: Major
>
> This has to be a blocker for the merge I'm afraid: move off twitter's util
> lib and its future pool.
> It's not just another jar, it's a full Scala runtime, and as we know, that's
> a very brittle runtime. For existing Scala code, that's their problem; we
> don't want to get involved in this.
> {code}
> [INFO] +- com.twitter:util-core_2.11:jar:21.2.0:compile
> [INFO] |  +- org.scala-lang:scala-library:jar:2.11.12:compile
> [INFO] |  +- com.twitter:util-function_2.11:jar:21.2.0:compile
> [INFO] |  +- org.scala-lang.modules:scala-collection-compat_2.11:jar:2.1.2:compile
> [INFO] |  +- org.scala-lang:scala-reflect:jar:2.11.12:compile
> [INFO] |  \- org.scala-lang.modules:scala-parser-combinators_2.11:jar:1.1.2:compile
> {code}
[jira] [Created] (HADOOP-18180) Remove use of scala jar twitter util-core
PJ Fanning created HADOOP-18180:
-----------------------------------

             Summary: Remove use of scala jar twitter util-core
                 Key: HADOOP-18180
                 URL: https://issues.apache.org/jira/browse/HADOOP-18180
             Project: Hadoop Common
          Issue Type: Sub-task
            Reporter: PJ Fanning


This jar will cause trouble for Scala projects like Spark that use a different Scala version from the Scala 2.11 used in twitter util-core.
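The replacement itself is small: the JDK's own concurrency machinery covers what the prefetching code uses com.twitter.util.FuturePool for, with no Scala runtime on the classpath. This is a sketch of the general approach, not the merged patch; the class name JdkFuturePool is an assumption for illustration.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

/**
 * Sketch: a drop-in stand-in for com.twitter.util.FuturePool built purely
 * on java.util.concurrent, so no Scala 2.11 jars are pulled in.
 */
class JdkFuturePool {
  private final ExecutorService pool = Executors.newFixedThreadPool(4);

  /** Submit work, mirroring FuturePool.apply(() -> ...). */
  <T> CompletableFuture<T> apply(Supplier<T> task) {
    return CompletableFuture.supplyAsync(task, pool);
  }

  void shutdown() {
    pool.shutdown();
  }
}
```

Because CompletableFuture composes (thenApply, thenCompose, exceptionally), downstream prefetch logic can chain work on the result just as it could on a Twitter Future, and Spark or any other Scala-versioned consumer of hadoop-aws is unaffected.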
[ANNOUNCE] Apache Hadoop 3.2.3 release
Hi all,

It gives me great pleasure to announce that the Apache Hadoop community has released Apache Hadoop 3.2.3.

This is the third stable release of the Apache Hadoop 3.2 line. It contains 328 bug fixes, improvements and enhancements since 3.2.2.

For details of the bug fixes, improvements, and other enhancements since the previous 3.2.2 release, please check the release notes [1] and changelog [2].

[1]: https://hadoop.apache.org/docs/r3.2.3/hadoop-project-dist/hadoop-common/release/3.2.3/RELEASENOTES.3.2.3.html
[2]: https://hadoop.apache.org/docs/r3.2.3/hadoop-project-dist/hadoop-common/release/3.2.3/CHANGELOG.3.2.3.html

Best Regards,
Masatake Iwasaki
[jira] [Created] (HADOOP-18179) Boost S3A Stream Read Performance
Steve Loughran created HADOOP-18179:
---------------------------------------

             Summary: Boost S3A Stream Read Performance
                 Key: HADOOP-18179
                 URL: https://issues.apache.org/jira/browse/HADOOP-18179
             Project: Hadoop Common
          Issue Type: Improvement
          Components: fs/s3
    Affects Versions: 3.3.2
            Reporter: Steve Loughran
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/

No changes

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests :
       hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
       hadoop.fs.TestFileUtil
       hadoop.hdfs.server.datanode.TestDirectoryScanner
       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
       hadoop.hdfs.server.federation.router.TestRouterQuota
       hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
       hadoop.hdfs.server.federation.router.TestRouterAllResolver
       hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
       hadoop.yarn.server.resourcemanager.TestClientRMService
       hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
       hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
       hadoop.mapreduce.lib.input.TestLineRecordReader
       hadoop.mapred.TestLineRecordReader
       hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator
       hadoop.tools.TestDistCpSystem
       hadoop.yarn.sls.TestSLSRunner
       hadoop.resourceestimator.solver.impl.TestLpSolver
       hadoop.resourceestimator.service.TestResourceEstimatorService

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/diff-compile-javac-root.txt [472K]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/diff-checkstyle-root.txt [14M]

   hadolint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/diff-patch-hadolint.txt [4.0K]

   mvnsite:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/patch-mvnsite-root.txt [560K]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/pathlen.txt [12K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/diff-patch-shellcheck.txt [72K]

   whitespace:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/whitespace-tabs.txt [1.3M]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/patch-javadoc-root.txt [40K]

   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [224K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [428K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [40K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [112K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/614/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [36K]