[ https://issues.apache.org/jira/browse/HADOOP-17386?focusedWorklogId=712102&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-712102 ]
ASF GitHub Bot logged work on HADOOP-17386:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 20/Jan/22 14:29
Start Date: 20/Jan/22 14:29
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #3908:
URL: https://github.com/apache/hadoop/pull/3908#issuecomment-1017566251
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 54s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 49m 43s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 24m 37s | | trunk passed |
| +1 :green_heart: | compile | 24m 18s | | trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 20m 49s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | mvnsite | 2m 27s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 39s | | trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 18s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | shadedclient | 149m 3s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 23s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 33s | | the patch passed |
| +1 :green_heart: | compile | 23m 23s | | the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 23m 23s | | the patch passed |
| +1 :green_heart: | compile | 20m 44s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 20m 44s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3908/1/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | mvnsite | 2m 29s | | the patch passed |
| +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 1m 35s | | the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 14s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | shadedclient | 28m 13s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 17m 14s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 2m 28s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. |
| | | 249m 11s | | |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3908/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3908 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml markdownlint |
| uname | Linux 8bb10d40c351 4.15.0-162-generic #170-Ubuntu SMP Mon Oct 18 11:38:05 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c0cef333911b3a0c49bc8939da259458faac7d8d |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3908/1/testReport/ |
| Max. process+thread count | 1252 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3908/1/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 712102)
Time Spent: 20m (was: 10m)
> fs.s3a.buffer.dir to be under Yarn container path on yarn applications
> ----------------------------------------------------------------------
>
> Key: HADOOP-17386
> URL: https://issues.apache.org/jira/browse/HADOOP-17386
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.3.0
> Reporter: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Time Spent: 20m
> Remaining Estimate: 0h
>
> # fs.s3a.buffer.dir defaults to hadoop.tmp.dir, which is /tmp or similar
> # we use this for storing file blocks during upload
> # staging committers use it for all files in a task, which can be a lot more
> # a lot of systems don't clean up /tmp until reboot, and if they stay up for a long time they accrue files written through the s3a staging committer from spark containers which fail
> Fix: use ${env.LOCAL_DIRS:-${hadoop.tmp.dir}}/s3a as the option so that if env.LOCAL_DIRS is set, it is used instead of hadoop.tmp.dir. YARN-deployed apps will use that for the buffer dir; when the app container is destroyed, so is the directory.
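As a sketch of what the proposed fix describes (the exact wording of the shipped default is in the PR's core-default.xml change, not reproduced here), the option could be set in core-site.xml like this. Hadoop's Configuration variable expansion resolves `${env.LOCAL_DIRS:-...}` to the environment variable when set, falling back to the value after `:-` otherwise:

```xml
<property>
  <name>fs.s3a.buffer.dir</name>
  <!-- Under YARN, env.LOCAL_DIRS points at the container's local dirs,
       which are deleted when the container is destroyed; elsewhere the
       expression falls back to hadoop.tmp.dir. -->
  <value>${env.LOCAL_DIRS:-${hadoop.tmp.dir}}/s3a</value>
</property>
```

Outside YARN (e.g. a local client) LOCAL_DIRS is normally unset, so buffering still lands under hadoop.tmp.dir/s3a as before.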
--
This message was sent by Atlassian Jira
(v8.20.1#820001)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]