Found this on dev@hadoop -> Moving to common-dev (the ML we use)
I think there was some initiative to enable the Windows pre-commit build for every PR, and that seems to have gone wild: either the number of PRs raised is way beyond the capacity the nodes can handle, or something got misconfigured in the j
For more details, see
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/667/
[Apr 26, 2024, 11:21:29 AM] (github) YARN-11690. Update container executor to
use CGROUP2_SUPER_MAGIC in cgroup 2 scenarios (#6771)
[Apr 26, 2024, 1:00:00 PM] (github) YARN-11674. Add CPUResourceHand
For more details, see
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1572/
[Apr 26, 2024, 11:21:29 AM] (github) YARN-11690. Update container executor to
use CGROUP2_SUPER_MAGIC in cgroup 2 scenarios (#6771)
[Apr 26, 2024, 1:00:00 PM] (github) YARN-11674. Add CPUResourceHand
Xi Chen created HADOOP-19159:
Summary: Fix hadoop-aws document for fs.s3a.committer.abort.pending.uploads
Key: HADOOP-19159
URL: https://issues.apache.org/jira/browse/HADOOP-19159
Project: Hadoop Common
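
For context (not part of the ticket text): fs.s3a.committer.abort.pending.uploads is a boolean S3A committer option, and a sketch of how such a setting would be placed in core-site.xml looks like the fragment below. The value shown is only an illustration; the actual default and semantics are what the hadoop-aws committer documentation (the subject of HADOOP-19159) describes.

```xml
<!-- Illustrative core-site.xml fragment, not taken from the ticket.
     Controls whether the S3A committer aborts pending multipart uploads. -->
<property>
  <name>fs.s3a.committer.abort.pending.uploads</name>
  <value>true</value>
</property>
```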
For more details, see
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1375/
No changes
-1 overall
The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit
The following subsystems voted -1 but were configured to be filtered/ignored:
cc