[GitHub] [hadoop] hadoop-yetus commented on pull request #3170: HDFS-16107. Split RPC configuration to isolate RPC.

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3170:
URL: https://github.com/apache/hadoop/pull/3170#issuecomment-24322


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  4s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/6/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 4 new + 261 
unchanged - 0 fixed = 265 total (was 261)  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  17m 24s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/6/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 186m 56s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3170 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 6910a0b0a0d6 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cad27bd53fce9ae759ffa0ab8e714139b6922548 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/6/testReport/ |
   | Max. process+thread count | 1267 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 

[jira] [Work logged] (HADOOP-17682) ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17682?focusedWorklogId=631004&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631004
 ]

ASF GitHub Bot logged work on HADOOP-17682:
---

Author: ASF GitHub Bot
Created on: 29/Jul/21 05:51
Start Date: 29/Jul/21 05:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2975:
URL: https://github.com/apache/hadoop/pull/2975#issuecomment-23045


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  15m 19s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/19/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 4 unchanged - 0 
fixed = 5 total (was 4)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 59s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  77m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2975 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux df0867049b15 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 79e8f8e5bd1f34fc375cea1bbc2eff75b23025ee |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2975: HADOOP-17682. ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #2975:
URL: https://github.com/apache/hadoop/pull/2975#issuecomment-23045


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  15m 19s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/19/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 4 unchanged - 0 
fixed = 5 total (was 4)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 59s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  77m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2975 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux df0867049b15 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 79e8f8e5bd1f34fc375cea1bbc2eff75b23025ee |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/19/testReport/ |
   | Max. process+thread count | 546 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/19/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   

[GitHub] [hadoop] hadoop-yetus commented on pull request #1808: MAPREDUCE-7258. HistoryServerRest.html#Task_Counters_API, modify the jobTaskCounters's itemName from taskcounterGroup to taskCounterGroup

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #1808:
URL: https://github.com/apache/hadoop/pull/1808#issuecomment-19372


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  72m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  97m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1808/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1808 |
   | Optional Tests | dupname asflicense mvnsite codespell markdownlint |
   | uname | Linux 7e8754657748 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e226bc4f1c0e2b20caf2d095df6ed58eceaff347 |
   | Max. process+thread count | 555 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1808/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals

2021-07-28 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina updated HADOOP-12491:

Attachment: HADOOP-12491-HADOOP-17800.003.patch

> Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 
> literals
> ---
>
> Key: HADOOP-12491
> URL: https://issues.apache.org/jira/browse/HADOOP-12491
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-11890
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>Priority: Major
>  Labels: ipv6
> Fix For: HADOOP-11890
>
> Attachments: HADOOP-12491-HADOOP-11890.1.patch, 
> HADOOP-12491-HADOOP-11890.2.patch, HADOOP-12491-HADOOP-17800.001.patch, 
> HADOOP-12491-HADOOP-17800.002.patch, HADOOP-12491-HADOOP-17800.003.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hadoop-common portion of HADOOP-12122
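
To make the concern concrete, here is a minimal, illustrative Java sketch (not the attached patch) of why splitting a host:port field on ':' breaks once the host can be an IPv6 literal, and one bracket-aware way to separate host and port. The class and method names are hypothetical.

{code:java}
// Minimal, illustrative sketch (not the attached patch): why splitting a
// host:port string on ':' is unsafe once the host may be an IPv6 literal,
// and one bracket-aware way to separate the two.
public final class HostPortSplitExample {

  /** Returns {host, port}; port is empty when none was given. */
  static String[] splitHostPort(String addr) {
    if (addr.startsWith("[")) {
      // Bracketed IPv6 literal, e.g. "[2001:db8::1]:8020".
      int close = addr.indexOf(']');
      String host = addr.substring(1, close);
      boolean hasPort = close + 1 < addr.length() && addr.charAt(close + 1) == ':';
      return new String[] {host, hasPort ? addr.substring(close + 2) : ""};
    }
    int first = addr.indexOf(':');
    int last = addr.lastIndexOf(':');
    if (first != last) {
      // Multiple colons without brackets: treat as a bare IPv6 literal, no port.
      return new String[] {addr, ""};
    }
    return first < 0
        ? new String[] {addr, ""}
        : new String[] {addr.substring(0, first), addr.substring(first + 1)};
  }

  public static void main(String[] args) {
    // A plain addr.split(":") would shred the literal into "2001", "db8", "", "1", ...
    System.out.println(java.util.Arrays.toString(splitHostPort("[2001:db8::1]:8020")));
    System.out.println(java.util.Arrays.toString(splitHostPort("2001:db8::1")));
    System.out.println(java.util.Arrays.toString(splitHostPort("namenode.example.com:8020")));
  }
}
{code}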



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang merged pull request #2836: HDFS-15936. Solve BlockSender#sendPacket() does not record SocketTimeout exception.

2021-07-28 Thread GitBox


jojochuang merged pull request #2836:
URL: https://github.com/apache/hadoop/pull/2836


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on pull request #2836: HDFS-15936. Solve BlockSender#sendPacket() does not record SocketTimeout exception.

2021-07-28 Thread GitBox


jojochuang commented on pull request #2836:
URL: https://github.com/apache/hadoop/pull/2836#issuecomment-00975


   Merging it per Viraj and cxorm's review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17682) ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17682?focusedWorklogId=630980&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630980
 ]

ASF GitHub Bot logged work on HADOOP-17682:
---

Author: ASF GitHub Bot
Created on: 29/Jul/21 04:36
Start Date: 29/Jul/21 04:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2975:
URL: https://github.com/apache/hadoop/pull/2975#issuecomment-888794912


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   0m 25s | 
[/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-azure in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 23s | 
[/branch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-mvnsite-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javadoc  |   0m 23s | 
[/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-azure in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  spotbugs  |   0m 22s | 
[/branch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-spotbugs-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   2m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |   2m 41s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   0m 22s | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2975: HADOOP-17682. ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #2975:
URL: https://github.com/apache/hadoop/pull/2975#issuecomment-888794912


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   0m 25s | 
[/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-azure in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  The patch fails to run checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 23s | 
[/branch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-mvnsite-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javadoc  |   0m 23s | 
[/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-azure in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  spotbugs  |   0m 22s | 
[/branch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/branch-spotbugs-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   2m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |   2m 41s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   0m 22s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2975/18/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   0m 23s | 

[jira] [Work logged] (HADOOP-17817) HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17817?focusedWorklogId=630973&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630973
 ]

ASF GitHub Bot logged work on HADOOP-17817:
---

Author: ASF GitHub Bot
Created on: 29/Jul/21 03:58
Start Date: 29/Jul/21 03:58
Worklog Time Spent: 10m 
  Work Description: mehakmeet commented on pull request #3239:
URL: https://github.com/apache/hadoop/pull/3239#issuecomment-888781408


   I ran the tests with S3-CSE ON + S3Guard ON, S3-CSE OFF + S3Guard ON, and 
S3-CSE OFF + S3Guard OFF. It was my mistake to assume I had also run the 
S3-CSE ON + S3Guard OFF test suite. 
   That's true; I don't think anyone would see these failures, since you have to 
set up the CSE configs to cover this type of testing. 
   What about the case where we have both S3Guard ON and S3-CSE ON, by the way? 
Then all tests would fail; should they all be skipped as well?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630973)
Time Spent: 1h 50m  (was: 1h 40m)

> HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled
> -
>
> Key: HADOOP-17817
> URL: https://issues.apache.org/jira/browse/HADOOP-17817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Throw an exception if S3Guard and S3 Client-side encryption are enabled on a 
> bucket. Follow-up to HADOOP-13887. 
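
As a rough, illustrative sketch of the check described above (not the actual patch), assuming the configuration keys `fs.s3a.metadatastore.impl` for S3Guard and `fs.s3a.encryption.algorithm` for client-side encryption; the real check lives in S3AFileSystem initialization and may differ in names and details.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

// Illustrative guard only: fail fast when a bucket is configured with both a
// non-null S3Guard metadata store and S3 client-side encryption (CSE-*).
public final class CseS3GuardGuard {

  private CseS3GuardGuard() {
  }

  public static void rejectCseWithS3Guard(Configuration conf) throws IOException {
    String store = conf.getTrimmed("fs.s3a.metadatastore.impl", "");
    boolean s3guardEnabled =
        !store.isEmpty() && !store.endsWith("NullMetadataStore");
    boolean cseEnabled =
        conf.getTrimmed("fs.s3a.encryption.algorithm", "").startsWith("CSE");
    if (s3guardEnabled && cseEnabled) {
      throw new IOException(
          "S3 client-side encryption cannot be used together with S3Guard;"
              + " disable one of them for this bucket");
    }
  }
}
{code}

In the test suites, the same condition would more likely be turned into a JUnit Assume-based skip rather than a hard failure, which is the skipping behaviour discussed in the comment above.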



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet commented on pull request #3239: HADOOP-17817. Throw an exception if S3 client-side encryption is enabled on S3Guard enabled bucket

2021-07-28 Thread GitBox


mehakmeet commented on pull request #3239:
URL: https://github.com/apache/hadoop/pull/3239#issuecomment-888781408


   I ran the tests with S3-CSE ON + S3Guard ON, S3-CSE OFF + S3Guard ON, and 
S3-CSE OFF + S3Guard OFF. It was my mistake to assume I had also run the 
S3-CSE ON + S3Guard OFF test suite. 
   That's true; I don't think anyone would see these failures, since you have to 
set up the CSE configs to cover this type of testing. 
   What about the case where we have both S3Guard ON and S3-CSE ON, by the way? 
Then all tests would fail; should they all be skipped as well?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17812?focusedWorklogId=630971&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630971
 ]

ASF GitHub Bot logged work on HADOOP-17812:
---

Author: ASF GitHub Bot
Created on: 29/Jul/21 03:44
Start Date: 29/Jul/21 03:44
Worklog Time Spent: 10m 
  Work Description: wbo4958 commented on pull request #3222:
URL: https://github.com/apache/hadoop/pull/3222#issuecomment-888777338


   Thx @steveloughran, I re-ran the integration tests in AWS, and the result 
can be found at 
https://issues.apache.org/jira/browse/HADOOP-17812?focusedCommentId=17389239&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17389239
   
   Tested
   s3: us-west-2


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630971)
Time Spent: 3h 10m  (was: 3h)

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Bobby Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: failsafe-report.html.gz, s3a-test.tar.gz
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> when [reading from S3a 
> storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450],
>  SSLException (which extends IOException) happens, which will trigger 
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen". it will first close the original 
> *wrappedStream* and set *wrappedStream = null*, and then it will try to 
> [re-get 
> *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
>  But what if the previous code [obtaining 
> S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183]
>  throw exception, then "wrappedStream" will be null.
> And the 
> [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446]
>  mechanism may re-execute the 
> [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450]
>  and cause NPE.
>  
> For more details, please refer to 
> [https://github.com/NVIDIA/spark-rapids/issues/2915]
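
To make the described failure mode concrete, a condensed, hypothetical sketch (names and structure are illustrative, not the actual S3AInputStream code): once the old stream is dropped, a failed re-open leaves the field null, so a retried read has to re-check for null before dereferencing it.

{code:java}
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch of the hazard; not the real S3AInputStream.
public class ReopenRetrySketch {

  private InputStream wrappedStream;

  // Mirrors the described reopen(): the old stream is discarded first, so if
  // opening the new one throws, wrappedStream is left null.
  private void reopen() throws IOException {
    wrappedStream = null;
    wrappedStream = openNewStream();
  }

  // Retry loop: without the explicit null re-check below, a retried read after
  // a failed reopen() would dereference null and surface an NPE instead of an
  // IOException.
  public int readWithRetry() throws IOException {
    for (int attempt = 0; attempt < 2; attempt++) {
      try {
        if (wrappedStream == null) {
          reopen();
        }
        return wrappedStream.read();
      } catch (IOException e) {
        try {
          reopen();
        } catch (IOException reopenFailure) {
          // reopen failed: wrappedStream stays null; the next iteration must
          // re-check it rather than call read() blindly.
        }
      }
    }
    throw new IOException("read failed after retries");
  }

  // Stand-in for "GET the object again and wrap its content stream".
  private InputStream openNewStream() throws IOException {
    throw new IOException("simulated connection failure");
  }
}
{code}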



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] wbo4958 commented on pull request #3222: HADOOP-17812. NPE in S3AInputStream read() after failure to reconnect to store

2021-07-28 Thread GitBox


wbo4958 commented on pull request #3222:
URL: https://github.com/apache/hadoop/pull/3222#issuecomment-888777338


   Thx @steveloughran, I re-ran the integration tests in AWS, and the result 
can be found at 
https://issues.apache.org/jira/browse/HADOOP-17812?focusedCommentId=17389239&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17389239
   
   Tested
   s3: us-west-2


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-07-28 Thread Bobby Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17389239#comment-17389239
 ] 

Bobby Wang commented on HADOOP-17812:
-

Hi Steve,

I just got an AWS account from my colleague and ran the integration tests. The 
result can be found in the attachment named [^failsafe-report.html.gz].

 

The auth-keys.xml I used looks like this:

 

 
{code:xml}
<configuration>

  <property>
    <name>test.fs.s3a.name</name>
    <value>s3a://testawss3a/</value>
  </property>

  <property>
    <name>fs.contract.test.fs.s3a</name>
    <value>${test.fs.s3a.name}</value>
  </property>

  <property>
    <name>fs.s3a.access.key</name>
    <description>AWS access key ID. Omit for IAM role-based authentication.</description>
    <value>XXX</value>
  </property>

  <property>
    <name>fs.s3a.secret.key</name>
    <description>AWS secret key. Omit for IAM role-based authentication.</description>
    <value>X</value>
  </property>

  <property>
    <name>fs.s3a.scale.test.csvfile</name>
    <value>s3a://landsat-pds/scene_list.gz</value>
  </property>

  <property>
    <name>test.sts.endpoint</name>
    <description>Specific endpoint to use for STS requests.</description>
    <value>sts.amazonaws.com</value>
  </property>

  <property>
    <name>fs.s3a.path.style.access</name>
    <value>true</value>
  </property>

</configuration>
{code}
 

 

The failures in *ITestS3ADeleteCost*, *ITestS3ARenameCost*, and 
*ITestS3AFileOperationCost* are because of `DynamoDB table 'testawss3a' does 
not exist in region us-west-2; auto-creation is turned off`.

 

One test, *testCustomSignerAndInitializer*, failed because of an NPE:

java.lang.NullPointerException
 at org.apache.hadoop.fs.s3a.auth.ITestCustomSigner$CustomSignerInitializer$StoreValue.access$200(ITestCustomSigner.java:255)
 at org.apache.hadoop.fs.s3a.auth.ITestCustomSigner$CustomSigner.sign(ITestCustomSigner.java:187)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1305)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
 at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
 at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
 at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5437)
 at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5384)
 at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5378)
 at com.amazonaws.services.s3.AmazonS3Client.listObjectsV2(AmazonS3Client.java:970)
 at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listObjects$11(S3AFileSystem.java:2490)
 at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:499)
 at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:414)
 at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:377)
 at org.apache.hadoop.fs.s3a.S3AFileSystem.listObjects(S3AFileSystem.java:2481)
 at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3720)
 at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3583)
 at org.apache.hadoop.fs.s3a.S3AFileSystem$MkdirOperationCallbacksImpl.probePathStatus(S3AFileSystem.java:3350)
 at org.apache.hadoop.fs.s3a.impl.MkdirOperation.probePathStatusOrNull(MkdirOperation.java:135)
 at org.apache.hadoop.fs.s3a.impl.MkdirOperation.getPathStatusExpectingDir(MkdirOperation.java:150)
 at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:80)
 at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:45)

 

And *testDistCpWithIterator* failed because of a timeout.

The detailed information can be found in the attachment.

 

Please help to check it. I really appreciate it.

 

Thx

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Bobby Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: failsafe-report.html.gz, s3a-test.tar.gz
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> when [reading from S3a 
> storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450],
>  SSLException (which extends IOException) happens, which will trigger 
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen". it will first close the original 
> 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#issuecomment-888775284


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 428m  6s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/15/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 524m 41s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus 
|
   |   | hadoop.hdfs.TestViewDistributedFileSystemContract |
   |   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3235 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 485687b991ab 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b5554b6e2732d30569e19d2e5be7ece2d8ff7c68 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/15/testReport/ |
   | Max. process+thread count | 2687 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 

[jira] [Updated] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-07-28 Thread Bobby Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bobby Wang updated HADOOP-17812:

Attachment: failsafe-report.html.gz

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Bobby Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: failsafe-report.html.gz, s3a-test.tar.gz
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> when [reading from S3a 
> storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450],
>  SSLException (which extends IOException) happens, which will trigger 
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen". it will first close the original 
> *wrappedStream* and set *wrappedStream = null*, and then it will try to 
> [re-get 
> *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
>  But what if the previous code [obtaining 
> S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183]
>  throw exception, then "wrappedStream" will be null.
> And the 
> [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446]
>  mechanism may re-execute the 
> [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450]
>  and cause NPE.
>  
> For more details, please refer to 
> [https://github.com/NVIDIA/spark-rapids/issues/2915]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17763) DistCp job fails when AM is killed

2021-07-28 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17389217#comment-17389217
 ] 

Bilwa S T commented on HADOOP-17763:


Hi [~ayushtkn] [~epayne]
can you please take a look at updated patch ?

> DistCp job fails when AM is killed
> --
>
> Key: HADOOP-17763
> URL: https://issues.apache.org/jira/browse/HADOOP-17763
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: HADOOP-17763.001.patch, HADOOP-17763.002.patch
>
>
> Job fails as tasks fail with below exception
> {code:java}
> 2021-06-11 18:48:47,047 | ERROR | IPC Server handler 0 on 27101 | Task: 
> attempt_1623387358383_0006_m_00_1000 - exited : 
> java.io.FileNotFoundException: File does not exist: 
> hdfs://hacluster/staging-dir/dsperf/.staging/_distcp-646531269/fileList.seq
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1637)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1630)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1645)
>  at org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1863)
>  at org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1886)
>  at 
> org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.initialize(SequenceFileRecordReader.java:54)
>  at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:560)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:798)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>  at org.apache.hadoop.mapred.YarnChild$1.run(YarnChild.java:183)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:177)
>  | TaskAttemptListenerImpl.java:304{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jianghuazhu commented on pull request #2836: HDFS-15936. Solve BlockSender#sendPacket() does not record SocketTimeout exception.

2021-07-28 Thread GitBox


jianghuazhu commented on pull request #2836:
URL: https://github.com/apache/hadoop/pull/2836#issuecomment-888762016


   @cxorm, thank you very much for your work. Can this PR be merged into the 
trunk branch? If any other work is still needed, I am willing to contribute.
   Thank you very much.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#issuecomment-888751847


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  2s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 355m 25s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/17/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 456m 45s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/17/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3235 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 68198bb52dd2 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 
01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b5554b6e2732d30569e19d2e5be7ece2d8ff7c68 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/17/testReport/ |
   | Max. process+thread count | 1931 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/17/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

[GitHub] [hadoop] hadoop-yetus commented on pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#issuecomment-888747021


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 351m 50s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/16/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 443m 30s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3235 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux b17f3991cb8c 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b5554b6e2732d30569e19d2e5be7ece2d8ff7c68 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/16/testReport/ |
   | Max. process+thread count | 2221 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/16/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   

[GitHub] [hadoop] hadoop-yetus commented on pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#issuecomment-888701605


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 232m 14s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 316m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3235 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 568954c467a5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b5554b6e2732d30569e19d2e5be7ece2d8ff7c68 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/18/testReport/ |
   | Max. process+thread count | 3330 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/18/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (HADOOP-17819) Add extensions to ProtobufRpcEngine RequestHeaderProto

2021-07-28 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HADOOP-17819.
--
Fix Version/s: 3.2.3, 2.10.2, 3.4.0
 Hadoop Flags: Reviewed
 Assignee: Hector Sandoval Chaverri
 Resolution: Fixed

I just committed this. Thank you, [~hchaverri].

> Add extensions to ProtobufRpcEngine RequestHeaderProto
> --
>
> Key: HADOOP-17819
> URL: https://issues.apache.org/jira/browse/HADOOP-17819
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Hector Sandoval Chaverri
>Assignee: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.3
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The header used in ProtobufRpcEngine messages doesn't allow for new 
> properties to be added by child classes. We can add a range of extensions 
> that can be useful for proto classes that need to extend RequestHeaderProto.
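
As a rough illustration of the idea above (a sketch, not the actual Hadoop
change): a proto2 message that declares an extension range lets downstream
.proto files attach extra fields without modifying the base header, and the
generated Java API reads them back through an ExtensionRegistry. The generated
class and field names below (HeaderProtos, HeaderExtProtos, clientTag) are
hypothetical stand-ins.

```java
// Hypothetical sketch only. Assumes proto2 definitions along these lines:
//
//   message RequestHeaderProto {
//     required string methodName = 1;
//     // ... existing fields ...
//     extensions 1000 to 9999;        // the range this issue proposes
//   }
//
//   extend RequestHeaderProto {       // in a downstream .proto file
//     optional string clientTag = 1000;
//   }
import com.google.protobuf.ExtensionRegistry;

public class RequestHeaderExtensionSketch {
  public static void main(String[] args) throws Exception {
    // Writer side: attach the extension field to the header.
    HeaderProtos.RequestHeaderProto header =
        HeaderProtos.RequestHeaderProto.newBuilder()
            .setMethodName("getFileInfo")
            .setExtension(HeaderExtProtos.clientTag, "router-A")
            .build();

    // Reader side: extensions must be registered before parsing.
    ExtensionRegistry registry = ExtensionRegistry.newInstance();
    HeaderExtProtos.registerAllExtensions(registry);
    HeaderProtos.RequestHeaderProto parsed =
        HeaderProtos.RequestHeaderProto.parseFrom(header.toByteArray(), registry);

    if (parsed.hasExtension(HeaderExtProtos.clientTag)) {
      System.out.println(parsed.getExtension(HeaderExtProtos.clientTag));
    }
  }
}
```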



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17819) Add extensions to ProtobufRpcEngine RequestHeaderProto

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17819?focusedWorklogId=630816=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630816
 ]

ASF GitHub Bot logged work on HADOOP-17819:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 22:28
Start Date: 28/Jul/21 22:28
Worklog Time Spent: 10m 
  Work Description: shvachko commented on pull request #3242:
URL: https://github.com/apache/hadoop/pull/3242#issuecomment-888661912


   +1. Will be committing this shortly.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630816)
Time Spent: 0.5h  (was: 20m)

> Add extensions to ProtobufRpcEngine RequestHeaderProto
> --
>
> Key: HADOOP-17819
> URL: https://issues.apache.org/jira/browse/HADOOP-17819
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The header used in ProtobufRpcEngine messages doesn't allow for new 
> properties to be added by child classes. We can add a range of extensions 
> that can be useful for proto classes that need to extend RequestHeaderProto.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] shvachko commented on pull request #3242: HADOOP-17819. Add extensions to ProtobufRpcEngine RequestHeaderProto

2021-07-28 Thread GitBox


shvachko commented on pull request #3242:
URL: https://github.com/apache/hadoop/pull/3242#issuecomment-888661912


   +1. Will be committing this shortly.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17811) ABFS ExponentialRetryPolicy doesn't pick up configuration values

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17811?focusedWorklogId=630799=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630799
 ]

ASF GitHub Bot logged work on HADOOP-17811:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 21:31
Start Date: 28/Jul/21 21:31
Worklog Time Spent: 10m 
  Work Description: brianloss commented on pull request #3221:
URL: https://github.com/apache/hadoop/pull/3221#issuecomment-888635485


   > Can you do the cherrypick and test of the branch-3.3 locally, let me know 
if it's all good there and I'll merge it in there too.
   
   @steveloughran my local cherry-pick against branch-3.3 had no problems 
running tests in East US. There were two failures that passed when run 
individually: ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds 
(HNS-SharedKey) and 
ITestAbfsListStatusRemoteIterator.testWithAbfsIteratorDisabledWithoutHasNext 
(NonHNS-SharedKey).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630799)
Time Spent: 2h 50m  (was: 2h 40m)

> ABFS ExponentialRetryPolicy doesn't pick up configuration values
> 
>
> Key: HADOOP-17811
> URL: https://issues.apache.org/jira/browse/HADOOP-17811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/azure
>Reporter: Brian Frank Loss
>Assignee: Brian Frank Loss
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The ABFS driver uses ExponentialRetryPolicy to handle throttling by the ADLS 
> Gen 2 API. The number of retries can already be configured by setting the 
> property fs.azure.io.retry.max.retries. However, none of the other properties 
> on ExponentialRetryPolicy can be set even though configuration properties 
> have already been defined for them. Allow the additional properties (min/max 
> retry wait, default wait) to be configured.
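
A minimal sketch of what setting these knobs could look like from client code
(not the committed change itself). Only fs.azure.io.retry.max.retries is named
in the description; the other key names here are assumptions for illustration
and should be checked against the ABFS documentation shipped with the fix.

```java
import org.apache.hadoop.conf.Configuration;

public class AbfsRetryConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Already configurable today, per the description above.
    conf.setInt("fs.azure.io.retry.max.retries", 10);

    // Assumed key names for the remaining ExponentialRetryPolicy knobs
    // (min/max/default backoff, in milliseconds).
    conf.setInt("fs.azure.io.retry.min.backoff.interval", 500);
    conf.setInt("fs.azure.io.retry.max.backoff.interval", 30000);
    conf.setInt("fs.azure.io.retry.backoff.interval", 3000);

    System.out.println("max retries = "
        + conf.getInt("fs.azure.io.retry.max.retries", -1));
  }
}
```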



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brianloss commented on pull request #3221: HADOOP-17811: ABFS ExponentialRetryPolicy doesn't pick up configuration values

2021-07-28 Thread GitBox


brianloss commented on pull request #3221:
URL: https://github.com/apache/hadoop/pull/3221#issuecomment-888635485


   > Can you do the cherrypick and test of the branch-3.3 locally, let me know 
if it's all good there and I'll merge it in there too.
   
   @steveloughran my local cherry-pick against branch-3.3 had no problems 
running tests in East US. There were two failures that passed when run 
individually: ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds 
(HNS-SharedKey) and 
ITestAbfsListStatusRemoteIterator.testWithAbfsIteratorDisabledWithoutHasNext 
(NonHNS-SharedKey).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3246: YARN-10848. Vcore allocation problem with DefaultResourceCalculator

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3246:
URL: https://github.com/apache/hadoop/pull/3246#issuecomment-888614796


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 115m 39s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3246/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 192m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerMultiNodes
 |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
   |   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoCreatedQueuePreemption
 |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAsyncScheduling
 |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
   |   | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
   |   | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivitiesWithMultiNodesEnabled
 |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption
 |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerLazyPreemption
 |
   |   | hadoop.yarn.server.resourcemanager.scheduler.TestSchedulerHealth |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
   |   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestReservations 
|
   |   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2971: MAPREDUCE-7341. Intermediate Manifest Committer

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #2971:
URL: https://github.com/apache/hadoop/pull/2971#issuecomment-888603337


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 23 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  9s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 41s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  14m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  20m 36s | 
[/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/27/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 4 new + 1912 unchanged - 1 
fixed = 1916 total (was 1913)  |
   | +1 :green_heart: |  compile  |  18m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  18m 26s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/27/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 4 new + 1788 
unchanged - 1 fixed = 1792 total (was 1789)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/27/artifact/out/blanks-eol.txt)
 |  The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   4m  0s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/27/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 36 new + 0 unchanged - 0 fixed = 36 total (was 0) 
 |
   | +1 :green_heart: |  mvnsite  |   4m  4s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m 10s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   3m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  hadoop-project has no data from 
spotbugs  |
   | -1 :x: |  spotbugs  |   1m 44s | 
[/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/27/artifact/out/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html)
 |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 37s |  |  hadoop-project in the patch passed.  |

[jira] [Work logged] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17812?focusedWorklogId=630731=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630731
 ]

ASF GitHub Bot logged work on HADOOP-17812:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 19:45
Start Date: 28/Jul/21 19:45
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3222:
URL: https://github.com/apache/hadoop/pull/3222#issuecomment-888573781


   bq. BTW, I will upload the integration test results in JIRA later.
   
   
   
   Don't need the full results, just add a comment here about where you tested 
(e.g. https://github.com/apache/hadoop/pull/3240#issuecomment-887549690 and 
https://github.com/apache/hadoop/pull/3240#issuecomment-887737121 ).
   
   Regarding those failures - your AWS credentials are for some private store, 
aren't they? So all tests referencing landsat data are failing as you are not 
authed by AWS.
   
   The testing.md doc shows how you can switch to a different large file in 
your own store, after which it turns off a set of tests which won't be valid.
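
For reference, a sketch of the switch being described, expressed on a Hadoop
Configuration. The property name fs.s3a.scale.test.csvfile is my recollection
of the key testing.md uses for the large read-only CSV file, and the bucket
path is made up; in practice the value normally goes into the test
auth-keys.xml rather than code.

```java
import org.apache.hadoop.conf.Configuration;

public class S3AScaleTestCsvSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Point the read tests at a large file in your own store instead of the
    // public landsat data (assumed property name, hypothetical path).
    conf.set("fs.s3a.scale.test.csvfile",
        "s3a://my-private-bucket/datasets/big.csv.gz");
    System.out.println(conf.get("fs.s3a.scale.test.csvfile"));
  }
}
```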


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630731)
Time Spent: 3h  (was: 2h 50m)

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Bobby Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: s3a-test.tar.gz
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> when [reading from S3a 
> storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450],
>  SSLException (which extends IOException) happens, which will trigger 
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen". it will first close the original 
> *wrappedStream* and set *wrappedStream = null*, and then it will try to 
> [re-get 
> *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
>  But what if the previous code [obtaining 
> S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183]
>  throw exception, then "wrappedStream" will be null.
> And the 
> [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446]
>  mechanism may re-execute the 
> [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450]
>  and cause NPE.
>  
> For more details, please refer to 
> [https://github.com/NVIDIA/spark-rapids/issues/2915]
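
To make the failure mode above concrete, a self-contained toy sketch (not the
real S3AInputStream code): a retry loop dereferences a stream field that a
failed reopen left null, and a null guard before the read avoids the NPE.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Toy model of the bug described above, not S3AInputStream itself.
public class ReopenNpeSketch {
  private InputStream wrapped;      // becomes null when reopen() fails part-way
  private int reopenAttempts = 0;

  private void reopen() throws IOException {
    if (wrapped != null) {
      wrapped.close();
      wrapped = null;               // the old stream is gone from this point on
    }
    if (reopenAttempts++ == 0) {
      // Simulates the re-fetch of the object failing after the close.
      throw new IOException("simulated failure while reopening");
    }
    wrapped = new ByteArrayInputStream(new byte[] {1, 2, 3});
  }

  public int readWithRetry() throws IOException {
    for (int attempt = 0; attempt < 3; attempt++) {
      try {
        if (wrapped == null) {      // the guard: reopen before dereferencing
          reopen();
        }
        return wrapped.read();      // without the guard this line can NPE
      } catch (IOException e) {
        // swallow and retry, as a retry policy would
      }
    }
    throw new IOException("out of retries");
  }

  public static void main(String[] args) throws IOException {
    ReopenNpeSketch s = new ReopenNpeSketch();
    System.out.println(s.readWithRetry());  // prints 1 after one failed reopen
  }
}
```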



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3222: HADOOP-17812. NPE in S3AInputStream read() after failure to reconnect to store

2021-07-28 Thread GitBox


steveloughran commented on pull request #3222:
URL: https://github.com/apache/hadoop/pull/3222#issuecomment-888573781


   bq. BTW, I will upload the integration test results in JIRA later.
   
   
   
   Don't need the full results, just add a comment here about where you tested 
(e.g. https://github.com/apache/hadoop/pull/3240#issuecomment-887549690 and 
https://github.com/apache/hadoop/pull/3240#issuecomment-887737121 ).
   
   Regarding those failures - your AWS credentials are for some private store, 
aren't they? So all tests referencing landsat data are failing as you are not 
authed by AWS.
   
   The testing.md doc shows how you can switch to a different large file in 
your own store, after which it turns off a set of tests which won't be valid.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=630730=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630730
 ]

ASF GitHub Bot logged work on HADOOP-17139:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 19:41
Start Date: 28/Jul/21 19:41
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#3101:
URL: https://github.com/apache/hadoop/pull/3101#discussion_r678596071



##
File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
##
@@ -1427,57 +1427,91 @@ set to TRUE. If destination already exists, and the 
destination contents must be
 then `overwrite` flag must be set to TRUE.
 
  Preconditions
+Source and destination must be different
+```python
+if src = dest : raise FileExistsException
+```
 
-The source file or directory must exist:
+Destination and source must not be descendants one another
+```python
+if isDescendant(src, dest) or isDescendant(dest, src) : TODO
+```
 
-if not exists(FS, src) : raise FileNotFoundException
+The source file or directory must exist locally:
+```python
+if not exists(LocalFS, src) : raise FileNotFoundException
+```
 
 Directories cannot be copied into files regardless to what the overwrite flag 
is set to:
 
-if isDir(FS, src) and isFile(FS, dst) : raise PathExistsException
+```python
+if isDir(LocalFS, src) and isFile(FS, dst) : raise PathExistsException
+```
 
 For all cases, except the one for which the above precondition throws, the 
overwrite flag must be
-set to TRUE for the operation to succeed. This will also overwrite any files / 
directories at the
-destination:
-
-if exists(FS, dst) && not overwrite : raise PathExistsException
-
- Postconditions
-Copying a file into an existing directory at destination with a non-existing 
file at destination
-
-if isFile(fs, src) and not exists(FS, dst) => success
-
-Copying a file into an existing directory at destination with an existing file 
at destination and
-overwrite set to TRUE
-
-if isFile(FS, src) and overwrite and exists(FS, dst) => success
-
+set to TRUE for the operation to succeed if destination exists. This will also 
overwrite any files
+ / directories at the destination:
 
-Copying a file into a non-existent directory. POSIX file systems would fail 
this operation, HDFS
-allows this to happen creating all the directories in the destination path.
-
-if isFile(FS, src) and not exists(FS, parent(dst)) => success
-
-Copying directory into destination directory - last part of the destination 
path doesn't exist e.g.
-`/src/bar/ -> /dst/foo/ => /dst/foo/` with the precondition that `/dst/` 
exists but `/dst/foo/`
-doesn't:
-
-if isDir(FS, src) and not exists(FS, dst) => success
-
-Copying directory into destination directory - last part of the destination 
path exists e.g.
-`/src/bar/ -> /dst/foo/ => /dst/foo/bar/` with the precondition that 
`/dst/foo/` exists but
-`/dst/foo/bar/` doesn't:
-
-if isDir(FS, src) and exists(FS, dst) => success
+```python
+if exists(FS, dst) and not overwrite : raise PathExistsException
+```
 
-Copying a directory into a destination directory - last part of destination 
path and source directory
-name exist e.g. `/src/foo/ -> /dst/` with the precondition that `/dst/foo/` 
exists. This operation
-will only succeed if the overwrite flag is set to TRUE
+ Determining the final name of the copy
+Given a base path on the source `base` and a child path `child` where `base` 
is in
+`ancestors(child) + child`:
 
-if isDir(FS, src) and exists(FS, dst) and overwrite => success
+```python
+def final_name(base, child, dest):
+is base = child:
+return dest
+else:
+return dest + childElements(base, child)
+```
 
-For all operations if the `delSrc` flag is set to TRUE then the source will be 
deleted. If source
-is a directory then it will be recursively deleted.
+ Outcome where source is a file `isFile(LocalFS, src)`
+For a file, data at destination becomes that of the source. All ancestors are 
directories.
+```python
+if isFile(LocalFS, src) and (not exists(FS, dest) or (exists(FS, dest) and 
overwrite)):
+FS' = FS where:
+FS'.Files[dest] = LocalFS.Files[src]
+FS'.Directories = FS.Directories + ancestors(FS, dest)
+LocalFS' = LocalFS where
+not delSrc or (delSrc = true and delete(LocalFS, src, false))
+else if isFile(LocalFS, src) and isDir(FS, dest):
+FS' = FS where:
+let d = final_name(src, dest)
+FS'.Files[d] = LocalFS.Files[src]
+LocalFS' = LocalFS where:
+not delSrc or (delSrc = true and delete(LocalFS, src, false))
+```
+There are no expectations that the file changes are atomic for both local 
`LocalFS` and remote `FS`.
+
+ Outcome where source is a directory `isDir(LocalFS, src)`
+```python
+if is Dir(LocalFS, 

[GitHub] [hadoop] steveloughran commented on a change in pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-28 Thread GitBox


steveloughran commented on a change in pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101#discussion_r678596071



##
File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
##
@@ -1427,57 +1427,91 @@ set to TRUE. If destination already exists, and the 
destination contents must be
 then `overwrite` flag must be set to TRUE.
 
  Preconditions
+Source and destination must be different
+```python
+if src = dest : raise FileExistsException
+```
 
-The source file or directory must exist:
+Destination and source must not be descendants one another
+```python
+if isDescendant(src, dest) or isDescendant(dest, src) : TODO
+```
 
-if not exists(FS, src) : raise FileNotFoundException
+The source file or directory must exist locally:
+```python
+if not exists(LocalFS, src) : raise FileNotFoundException
+```
 
 Directories cannot be copied into files regardless to what the overwrite flag 
is set to:
 
-if isDir(FS, src) and isFile(FS, dst) : raise PathExistsException
+```python
+if isDir(LocalFS, src) and isFile(FS, dst) : raise PathExistsException
+```
 
 For all cases, except the one for which the above precondition throws, the 
overwrite flag must be
-set to TRUE for the operation to succeed. This will also overwrite any files / 
directories at the
-destination:
-
-if exists(FS, dst) && not overwrite : raise PathExistsException
-
- Postconditions
-Copying a file into an existing directory at destination with a non-existing 
file at destination
-
-if isFile(fs, src) and not exists(FS, dst) => success
-
-Copying a file into an existing directory at destination with an existing file 
at destination and
-overwrite set to TRUE
-
-if isFile(FS, src) and overwrite and exists(FS, dst) => success
-
+set to TRUE for the operation to succeed if destination exists. This will also 
overwrite any files
+ / directories at the destination:
 
-Copying a file into a non-existent directory. POSIX file systems would fail 
this operation, HDFS
-allows this to happen creating all the directories in the destination path.
-
-if isFile(FS, src) and not exists(FS, parent(dst)) => success
-
-Copying directory into destination directory - last part of the destination 
path doesn't exist e.g.
-`/src/bar/ -> /dst/foo/ => /dst/foo/` with the precondition that `/dst/` 
exists but `/dst/foo/`
-doesn't:
-
-if isDir(FS, src) and not exists(FS, dst) => success
-
-Copying directory into destination directory - last part of the destination 
path exists e.g.
-`/src/bar/ -> /dst/foo/ => /dst/foo/bar/` with the precondition that 
`/dst/foo/` exists but
-`/dst/foo/bar/` doesn't:
-
-if isDir(FS, src) and exists(FS, dst) => success
+```python
+if exists(FS, dst) and not overwrite : raise PathExistsException
+```
 
-Copying a directory into a destination directory - last part of destination 
path and source directory
-name exist e.g. `/src/foo/ -> /dst/` with the precondition that `/dst/foo/` 
exists. This operation
-will only succeed if the overwrite flag is set to TRUE
+ Determining the final name of the copy
+Given a base path on the source `base` and a child path `child` where `base` 
is in
+`ancestors(child) + child`:
 
-if isDir(FS, src) and exists(FS, dst) and overwrite => success
+```python
+def final_name(base, child, dest):
+is base = child:
+return dest
+else:
+return dest + childElements(base, child)
+```
 
-For all operations if the `delSrc` flag is set to TRUE then the source will be 
deleted. If source
-is a directory then it will be recursively deleted.
+ Outcome where source is a file `isFile(LocalFS, src)`
+For a file, data at destination becomes that of the source. All ancestors are 
directories.
+```python
+if isFile(LocalFS, src) and (not exists(FS, dest) or (exists(FS, dest) and 
overwrite)):
+FS' = FS where:
+FS'.Files[dest] = LocalFS.Files[src]
+FS'.Directories = FS.Directories + ancestors(FS, dest)
+LocalFS' = LocalFS where
+not delSrc or (delSrc = true and delete(LocalFS, src, false))
+else if isFile(LocalFS, src) and isDir(FS, dest):
+FS' = FS where:
+let d = final_name(src, dest)
+FS'.Files[d] = LocalFS.Files[src]
+LocalFS' = LocalFS where:
+not delSrc or (delSrc = true and delete(LocalFS, src, false))
+```
+There are no expectations that the file changes are atomic for both local 
`LocalFS` and remote `FS`.
+
+ Outcome where source is a directory `isDir(LocalFS, src)`
+```python
+if is Dir(LocalFS, src) and (isFile(FS, dest) or isFile(FS, dest + 
childElements(src))):
+raise FileAlreadyExistsException
+else if isDir(LocalFS, src):
+dest' = dest

Review comment:
   move to the else of the `if` below so it can be final

##
File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
##
@@ -1427,57 +1427,91 @@ set to TRUE. If destination already exists, and the 

[jira] [Updated] (HADOOP-17811) ABFS ExponentialRetryPolicy doesn't pick up configuration values

2021-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17811:

Summary: ABFS ExponentialRetryPolicy doesn't pick up configuration values  
(was: ABFS: Allow all ExponentialRetryPolicy properties to be configured)

> ABFS ExponentialRetryPolicy doesn't pick up configuration values
> 
>
> Key: HADOOP-17811
> URL: https://issues.apache.org/jira/browse/HADOOP-17811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/azure
>Reporter: Brian Frank Loss
>Assignee: Brian Frank Loss
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The ABFS driver uses ExponentialRetryPolicy to handle throttling by the ADLS 
> Gen 2 API. The number of retries can already be configured by setting the 
> property fs.azure.io.retry.max.retries. However, none of the other properties 
> on ExponentialRetryPolicy can be set even though configuration properties 
> have already been defined for them. Allow the additional properties (min/max 
> retry wait, default wait) to be configured.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17811) ABFS: Allow all ExponentialRetryPolicy properties to be configured

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17811?focusedWorklogId=630729=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630729
 ]

ASF GitHub Bot logged work on HADOOP-17811:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 19:28
Start Date: 28/Jul/21 19:28
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3221:
URL: https://github.com/apache/hadoop/pull/3221#issuecomment-888563527


   > ITestAzureBlobFileSystemLease 
   Talk to @billierinaldi there.
   
   w.r.t. ITestAbfsStreamStatistics, if there's no JIRA on that test failure, 
file one, with the stack trace. It'll inevitably be some counting mismatch.
   
   Can you do the cherrypick and test of the branch-3.3 locally, let me know if 
it's all good there and I'll merge it in there too.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630729)
Time Spent: 2h 40m  (was: 2.5h)

> ABFS: Allow all ExponentialRetryPolicy properties to be configured
> --
>
> Key: HADOOP-17811
> URL: https://issues.apache.org/jira/browse/HADOOP-17811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/azure
>Reporter: Brian Frank Loss
>Assignee: Brian Frank Loss
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The ABFS driver uses ExponentialRetryPolicy to handle throttling by the ADLS 
> Gen 2 API. The number of retries can already be configured by setting the 
> property fs.azure.io.retry.max.retries. However, none of the other properties 
> on ExponentialRetryPolicy can be set even though configuration properties 
> have already been defined for them. Allow the additional properties (min/max 
> retry wait, default wait) to be configured.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3221: HADOOP-17811: ABFS ExponentialRetryPolicy doesn't pick up configuration values

2021-07-28 Thread GitBox


steveloughran commented on pull request #3221:
URL: https://github.com/apache/hadoop/pull/3221#issuecomment-888563527


   > ITestAzureBlobFileSystemLease 
   Talk to @billierinaldi there.
   
   w.r.t. ITestAbfsStreamStatistics, if there's no JIRA on that test failure, 
file one, with the stack trace. It'll inevitably be some counting mismatch.
   
   Can you do the cherrypick and test of the branch-3.3 locally, let me know if 
it's all good there and I'll merge it in there too.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17612) Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0

2021-07-28 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17388979#comment-17388979
 ] 

Viraj Jasani edited comment on HADOOP-17612 at 7/28/21, 7:24 PM:
-

Thank you [~eolivelli] for the Curator release!

Here is the PR to bump Zookeeper and Curator to 3.6.3 and 5.2.0 respectively: 
[https://github.com/apache/hadoop/pull/3241]

We have 2 full build results and I don't see any ZK-related test failures. 
There are some Javac warnings because we use PathChildrenCache in 
ZKDelegationTokenSecretManager and it is deprecated from 5.0.0 onwards 
(superseded by persistent recursive watchers, CURATOR-549).

 

Edit: I can create a follow-up Jira to clean up the deprecated usage of 
PathChildrenCache in ZKDelegationTokenSecretManager 
(ZKDelegationTokenSecretManager seems to be in the critical path of Web 
AuthenticationFilter).
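
For anyone following up on that deprecation, a minimal generic sketch of the
Curator 5.x replacement (CuratorCache/CuratorCacheListener instead of
PathChildrenCache). This is not the ZKDelegationTokenSecretManager code; the
connect string and path are placeholders and a reachable ZooKeeper is assumed.

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.cache.CuratorCache;
import org.apache.curator.framework.recipes.cache.CuratorCacheListener;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorCacheSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder connect string and path; adjust for your setup.
    try (CuratorFramework client = CuratorFrameworkFactory.newClient(
        "localhost:2181", new ExponentialBackoffRetry(1000, 3))) {
      client.start();

      // CuratorCache + CuratorCacheListener replace the deprecated
      // PathChildrenCache/PathChildrenCacheListener pair in Curator 5.x.
      try (CuratorCache cache = CuratorCache.build(client, "/tokens")) {
        CuratorCacheListener listener = CuratorCacheListener.builder()
            .forCreates(node -> System.out.println("added:   " + node.getPath()))
            .forChanges((old, node) ->
                System.out.println("updated: " + node.getPath()))
            .forDeletes(node -> System.out.println("removed: " + node.getPath()))
            .build();
        cache.listenable().addListener(listener);
        cache.start();

        Thread.sleep(60_000);  // keep watching for a minute in this demo
      }
    }
  }
}
```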


was (Author: vjasani):
Thank you [~eolivelli] for Curator release!

Here is the PR to bump Zookeeper and Curator to 3.6.3 and 5.2.0 respectively: 
[https://github.com/apache/hadoop/pull/3241]

We have 2 full build results and I don't see any ZK related test failures. 
There are some Javac warnings because we use PathChildrenCache and it is 
deprecated (in the support of persistent recursive watchers CURATOR-549) in 
5.0.0 onwards.

> Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0
> ---
>
> Key: HADOOP-17612
> URL: https://issues.apache.org/jira/browse/HADOOP-17612
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We can bump Zookeeper version to 3.7.0 for trunk.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17811) ABFS: Allow all ExponentialRetryPolicy properties to be configured

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17811?focusedWorklogId=630728=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630728
 ]

ASF GitHub Bot logged work on HADOOP-17811:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 19:24
Start Date: 28/Jul/21 19:24
Worklog Time Spent: 10m 
  Work Description: brianloss commented on pull request #3221:
URL: https://github.com/apache/hadoop/pull/3221#issuecomment-888561184


   > +1 for this, merging to trunk and then soon 3.3.2.
   > 
   > interesting you are seeing all the test failures. I don't go near Oauth 
myself (long story).
   
   Yeah, strange behavior. I rebuilt my storage accounts in a different tenant, 
and it's doing better now. I'm still seeing some weirdness in a few tests. I 
can get ITestAzureBlobFileSystemLease to pass if I don't use the runtests.sh 
script and instead set all of the right properties in azure-auth-keys.xml. I 
can't get ITestAbfsStreamStatistics to pass in the appendblob configuration 
(even on trunk). But other than those, everything else is running now. Thanks 
for merging!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630728)
Time Spent: 2.5h  (was: 2h 20m)

> ABFS: Allow all ExponentialRetryPolicy properties to be configured
> --
>
> Key: HADOOP-17811
> URL: https://issues.apache.org/jira/browse/HADOOP-17811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/azure
>Reporter: Brian Frank Loss
>Assignee: Brian Frank Loss
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The ABFS driver uses ExponentialRetryPolicy to handle throttling by the ADLS 
> Gen 2 API. The number of retries can already be configured by setting the 
> property fs.azure.io.retry.max.retries. However, none of the other properties 
> on ExponentialRetryPolicy can be set even though configuration properties 
> have already been defined for them. Allow the additional properties (min/max 
> retry wait, default wait) to be configured.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brianloss commented on pull request #3221: HADOOP-17811: ABFS ExponentialRetryPolicy doesn't pick up configuration values

2021-07-28 Thread GitBox


brianloss commented on pull request #3221:
URL: https://github.com/apache/hadoop/pull/3221#issuecomment-888561184


   > +1 for this, merging to trunk and then soon 3.3.2.
   > 
   > interesting you are seeing all the test failures. I don't go near Oauth 
myself (long story).
   
   Yeah, strange behavior. I rebuilt my storage accounts in a different tenant, 
and it's doing better now. I'm still seeing some weirdness in a few tests. I 
can get ITestAzureBlobFileSystemLease to pass if I don't use the runtests.sh 
script and instead set all of the right properties in azure-auth-keys.xml. I 
can't get ITestAbfsStreamStatistics to pass in the appendblob configuration 
(even on trunk). But other than those, everything else is running now. Thanks 
for merging!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17811) ABFS: Allow all ExponentialRetryPolicy properties to be configured

2021-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17811:

Fix Version/s: 3.4.0

> ABFS: Allow all ExponentialRetryPolicy properties to be configured
> --
>
> Key: HADOOP-17811
> URL: https://issues.apache.org/jira/browse/HADOOP-17811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/azure
>Reporter: Brian Frank Loss
>Assignee: Brian Frank Loss
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The ABFS driver uses ExponentialRetryPolicy to handle throttling by the ADLS 
> Gen 2 API. The number of retries can already be configured by setting the 
> property fs.azure.io.retry.max.retries. However, none of the other properties 
> on ExponentialRetryPolicy can be set even though configuration properties 
> have already been defined for them. Allow the additional properties (min/max 
> retry wait, default wait) to be configured.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17811) ABFS: Allow all ExponentialRetryPolicy properties to be configured

2021-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17811.
-
Resolution: Fixed

> ABFS: Allow all ExponentialRetryPolicy properties to be configured
> --
>
> Key: HADOOP-17811
> URL: https://issues.apache.org/jira/browse/HADOOP-17811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/azure
>Reporter: Brian Frank Loss
>Assignee: Brian Frank Loss
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The ABFS driver uses ExponentialRetryPolicy to handle throttling by the ADLS 
> Gen 2 API. The number of retries can already be configured by setting the 
> property fs.azure.io.retry.max.retries. However, none of the other properties 
> on ExponentialRetryPolicy can be set even though configuration properties 
> have already been defined for them. Allow the additional properties (min/max 
> retry wait, default wait) to be configured.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17811) ABFS: Allow all ExponentialRetryPolicy properties to be configured

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17811?focusedWorklogId=630726=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630726
 ]

ASF GitHub Bot logged work on HADOOP-17811:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 19:23
Start Date: 28/Jul/21 19:23
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #3221:
URL: https://github.com/apache/hadoop/pull/3221


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630726)
Time Spent: 2h 20m  (was: 2h 10m)

> ABFS: Allow all ExponentialRetryPolicy properties to be configured
> --
>
> Key: HADOOP-17811
> URL: https://issues.apache.org/jira/browse/HADOOP-17811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/azure
>Reporter: Brian Frank Loss
>Assignee: Brian Frank Loss
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The ABFS driver uses ExponentialRetryPolicy to handle throttling by the ADLS 
> Gen 2 API. The number of retries can already be configured by setting the 
> property fs.azure.io.retry.max.retries. However, none of the other properties 
> on ExponentialRetryPolicy can be set even though configuration properties 
> have already been defined for them. Allow the additional properties (min/max 
> retry wait, default wait) to be configured.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #3221: HADOOP-17811: ABFS ExponentialRetryPolicy doesn't pick up configuration values

2021-07-28 Thread GitBox


steveloughran merged pull request #3221:
URL: https://github.com/apache/hadoop/pull/3221


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17811) ABFS: Allow all ExponentialRetryPolicy properties to be configured

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17811?focusedWorklogId=630722=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630722
 ]

ASF GitHub Bot logged work on HADOOP-17811:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 19:20
Start Date: 28/Jul/21 19:20
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#3221:
URL: https://github.com/apache/hadoop/pull/3221#discussion_r678585653



##
File path: hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md
##
@@ -448,7 +448,7 @@ use requires the presence of secret credentials, where 
tests may be slow,
 and where finding out why something failed from nothing but the test output
 is critical.
 
- Subclasses Existing Shared Base Blasses
+ Subclasses Existing Shared Base Classes

Review comment:
   thx for this; always good to keep the docs up to date




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630722)
Time Spent: 2h 10m  (was: 2h)

> ABFS: Allow all ExponentialRetryPolicy properties to be configured
> --
>
> Key: HADOOP-17811
> URL: https://issues.apache.org/jira/browse/HADOOP-17811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/azure
>Reporter: Brian Frank Loss
>Assignee: Brian Frank Loss
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> The ABFS driver uses ExponentialRetryPolicy to handle throttling by the ADLS 
> Gen 2 API. The number of retries can already be configured by setting the 
> property fs.azure.io.retry.max.retries. However, none of the other properties 
> on ExponentialRetryPolicy can be set even though configuration properties 
> have already been defined for them. Allow the additional properties (min/max 
> retry wait, default wait) to be configured.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #3221: HADOOP-17811: Configure ExponentialRetryPolicy

2021-07-28 Thread GitBox


steveloughran commented on a change in pull request #3221:
URL: https://github.com/apache/hadoop/pull/3221#discussion_r678585653



##
File path: hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md
##
@@ -448,7 +448,7 @@ use requires the presence of secret credentials, where 
tests may be slow,
 and where finding out why something failed from nothing but the test output
 is critical.
 
- Subclasses Existing Shared Base Blasses
+ Subclasses Existing Shared Base Classes

Review comment:
   thx for this; always good to keep the docs up to date




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17612) Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0

2021-07-28 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17388979#comment-17388979
 ] 

Viraj Jasani commented on HADOOP-17612:
---

Thank you [~eolivelli] for Curator release!

Here is the PR to bump Zookeeper and Curator to 3.6.3 and 5.2.0 respectively: 
[https://github.com/apache/hadoop/pull/3241]

We have 2 full build results and I don't see any ZK related test failures. 
There are some javac warnings because we use PathChildrenCache, which is 
deprecated from 5.0.0 onwards (in favour of the persistent recursive watcher 
support added in CURATOR-549).
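
(For illustration only: a minimal sketch, assuming Curator 5.2.0, of the CuratorCache recipe that replaces the deprecated PathChildrenCache; the connect string and path are placeholders.)

{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.cache.CuratorCache;
import org.apache.curator.framework.recipes.cache.CuratorCacheListener;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorCacheSketch {
  public static void main(String[] args) throws Exception {
    CuratorFramework client = CuratorFrameworkFactory.newClient(
        "localhost:2181", new ExponentialBackoffRetry(1000, 3));
    client.start();

    // CuratorCache is built on persistent recursive watchers, which is why
    // PathChildrenCache was deprecated in Curator 5.x.
    try (CuratorCache cache = CuratorCache.build(client, "/hadoop-ha")) {
      cache.listenable().addListener(
          CuratorCacheListener.builder()
              .forCreates(node -> System.out.println("created: " + node.getPath()))
              .forChanges((old, node) -> System.out.println("changed: " + node.getPath()))
              .forDeletes(node -> System.out.println("deleted: " + node.getPath()))
              .build());
      cache.start();
      Thread.sleep(60_000); // keep the process alive long enough to see events
    }
    client.close();
  }
}
{code}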

> Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0
> ---
>
> Key: HADOOP-17612
> URL: https://issues.apache.org/jira/browse/HADOOP-17612
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We can bump Zookeeper version to 3.7.0 for trunk.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17817) HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17817?focusedWorklogId=630718=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630718
 ]

ASF GitHub Bot logged work on HADOOP-17817:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 19:12
Start Date: 28/Jul/21 19:12
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3239:
URL: https://github.com/apache/hadoop/pull/3239#issuecomment-888553613


   hmm. Yes, that'll be an interesting problem. 
   
   Either test setup() checks for CSE being on and skips the S3Guard-enabled 
tests, or we catch the raised PathIOE and convert that to the skip call. That 
strategy might work well everywhere, including all contract tests.
   
   Also: did you forget to run the tests, or is it just that your test setup 
isn't S3-CSE? This is where we need broader test configuration coverage, don't 
we?
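
(For illustration only: a rough sketch of the second option above, catching the PathIOE in setup() and converting it to a test skip. It is not the real S3A test base class hierarchy.)

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathIOException;
import org.junit.Assume;
import org.junit.Before;

public abstract class ConvertPathIOEToSkipSketch {

  protected FileSystem fs;

  /** Subclasses supply the test path and configuration. */
  protected abstract Path getTestPath();
  protected abstract Configuration createConfiguration();

  @Before
  public void setup() throws Exception {
    try {
      // If the filesystem rejects the configuration (e.g. CSE + S3Guard both
      // enabled), turn the PathIOException into a skipped test, not a failure.
      fs = FileSystem.newInstance(getTestPath().toUri(), createConfiguration());
    } catch (PathIOException e) {
      Assume.assumeNoException(
          "Skipping: filesystem rejected this configuration (CSE + S3Guard?)", e);
    }
  }
}
{code}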


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630718)
Time Spent: 1h 40m  (was: 1.5h)

> HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled
> -
>
> Key: HADOOP-17817
> URL: https://issues.apache.org/jira/browse/HADOOP-17817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Throw an exception if S3Guard and S3 Client-side encryption are enabled on a 
> bucket. Follow-up to HADOOP-13887. 
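
(For illustration only: a rough sketch of the kind of guard the summary describes. The configuration keys and class names are assumptions and may differ from the actual S3A constants.)

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.PathIOException;

public class CseS3GuardGuardSketch {
  // Illustrative key and class names; the real S3A constants may differ by branch.
  private static final String ENCRYPTION_ALGORITHM = "fs.s3a.encryption.algorithm";
  private static final String METADATA_STORE_IMPL = "fs.s3a.metadatastore.impl";
  private static final String NULL_STORE =
      "org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore";

  static void checkCseAndS3GuardCompatible(String bucket, Configuration conf)
      throws IOException {
    boolean cse = conf.getTrimmed(ENCRYPTION_ALGORITHM, "").startsWith("CSE");
    String store = conf.getTrimmed(METADATA_STORE_IMPL, NULL_STORE);
    boolean s3guard = !store.isEmpty() && !store.equals(NULL_STORE);
    if (cse && s3guard) {
      // Fail fast at filesystem initialization rather than later, mid-operation.
      throw new PathIOException("s3a://" + bucket,
          "S3 client-side encryption cannot be used with S3Guard enabled");
    }
  }
}
{code}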



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3239: HADOOP-17817. Throw an exception if S3 client-side encryption is enabled on S3Guard enabled bucket

2021-07-28 Thread GitBox


steveloughran commented on pull request #3239:
URL: https://github.com/apache/hadoop/pull/3239#issuecomment-888553613


   hmm. Yes, that'll be an interesting problem. 
   
   Either test setup() checks for CSE being on and skips the S3Guard-enabled 
tests, or we catch the raised PathIOE and convert that to the skip call. That 
strategy might work well everywhere, including all contract tests.
   
   Also: did you forget to run the tests, or is it just that your test setup 
isn't S3-CSE? This is where we need broader test configuration coverage, don't 
we?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17784) hadoop-aws landsat-pds test bucket will be deleted after Jul 1, 2021

2021-07-28 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17388963#comment-17388963
 ] 

Steve Loughran commented on HADOOP-17784:
-

Update: we're trying to convince the relevant AWS team to leave the landsat 
file behind, even after they delete everything else.

> hadoop-aws landsat-pds test bucket will be deleted after Jul 1, 2021
> 
>
> Key: HADOOP-17784
> URL: https://issues.apache.org/jira/browse/HADOOP-17784
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3, test
>Reporter: Leona Yoda
>Priority: Major
> Attachments: org.apache.hadoop.fs.s3a.select.ITestS3SelectMRJob.txt
>
>
> I found an announcement that the landsat-pds bucket will be deleted on July 1, 2021
> (https://registry.opendata.aws/landsat-8/),
> and I think this bucket is used in the tests of the hadoop-aws module; see
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestConstants.java#L93]
>  
> At this time I can still access the bucket, but we might have to change the 
> test bucket someday.
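
(For illustration only: a minimal sketch of pointing the hadoop-aws tests at a different CSV file. The fs.s3a.scale.test.csvfile property name follows the hadoop-aws testing docs and should be verified; the bucket shown is a placeholder.)

{code:java}
import org.apache.hadoop.conf.Configuration;

public class LandsatCsvOverrideSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Redirect the CSV-file-based tests away from landsat-pds once it is gone.
    // Property name assumed from the hadoop-aws testing docs; the target must
    // be a publicly readable gzipped CSV file.
    conf.set("fs.s3a.scale.test.csvfile", "s3a://example-public-bucket/sample.csv.gz");
  }
}
{code}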



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2971: MAPREDUCE-7341. Intermediate Manifest Committer

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #2971:
URL: https://github.com/apache/hadoop/pull/2971#issuecomment-888535151


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m  9s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 23 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 10s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 43s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  14m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  20m 34s | 
[/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/26/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 4 new + 1912 unchanged - 1 
fixed = 1916 total (was 1913)  |
   | +1 :green_heart: |  compile  |  18m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  18m 47s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/26/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 4 new + 1788 
unchanged - 1 fixed = 1792 total (was 1789)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/26/artifact/out/blanks-eol.txt)
 |  The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 45s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/26/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 44 new + 0 unchanged - 0 fixed = 44 total (was 0) 
 |
   | +1 :green_heart: |  mvnsite  |   4m  3s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  9s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  hadoop-project has no data from 
spotbugs  |
   | -1 :x: |  spotbugs  |   1m 45s | 
[/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/26/artifact/out/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html)
 |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 36s |  |  hadoop-project in 

[GitHub] [hadoop] minni31 opened a new pull request #3246: YARN-10848. Vcore allocation problem with DefaultResourceCalculator

2021-07-28 Thread GitBox


minni31 opened a new pull request #3246:
URL: https://github.com/apache/hadoop/pull/3246


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17814) Provide fallbacks for identity/cost providers and backoff enable

2021-07-28 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-17814:
--
Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Provide fallbacks for identity/cost providers and backoff enable
> 
>
> Key: HADOOP-17814
> URL: https://issues.apache.org/jira/browse/HADOOP-17814
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This sub-task is to provide default properties for identity-provider.impl, 
> cost-provider.impl and backoff.enable such that if the per-port properties 
> are not configured, we can fall back to the default (port-less) properties.
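
(For illustration only: roughly, the fallback described above amounts to a lookup like the sketch below. The ipc.* key names and the default class name are illustrative assumptions, not necessarily the exact ones the patch uses.)

{code:java}
import org.apache.hadoop.conf.Configuration;

public class PortFallbackLookupSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    int port = 8020;

    // Prefer the per-port property; fall back to the port-less one,
    // then to a hard-coded default (names here are illustrative only).
    String perPortKey = "ipc." + port + ".identity-provider.impl";
    String fallbackKey = "ipc.identity-provider.impl";
    String provider = conf.get(perPortKey,
        conf.get(fallbackKey, "org.apache.hadoop.ipc.UserIdentityProvider"));

    System.out.println("identity provider = " + provider);
  }
}
{code}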



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17814) Provide fallbacks for identity/cost providers and backoff enable

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17814?focusedWorklogId=630654=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630654
 ]

ASF GitHub Bot logged work on HADOOP-17814:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 17:10
Start Date: 28/Jul/21 17:10
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #3230:
URL: https://github.com/apache/hadoop/pull/3230#issuecomment-888477003


   Thanks for your contribution, @virajjasani! Thanks for your review, 
@jojochuang!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630654)
Time Spent: 1h  (was: 50m)

> Provide fallbacks for identity/cost providers and backoff enable
> 
>
> Key: HADOOP-17814
> URL: https://issues.apache.org/jira/browse/HADOOP-17814
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This sub-task is to provide default properties for identity-provider.impl, 
> cost-provider.impl and backoff.enable such that if the per-port properties 
> are not configured, we can fall back to the default (port-less) properties.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tasanuma commented on pull request #3230: HADOOP-17814. Provide fallbacks for identity/cost providers and backoff enable

2021-07-28 Thread GitBox


tasanuma commented on pull request #3230:
URL: https://github.com/apache/hadoop/pull/3230#issuecomment-888477003


   Thanks for your contribution, @virajjasani! Thanks for your review, 
@jojochuang!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17814) Provide fallbacks for identity/cost providers and backoff enable

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17814?focusedWorklogId=630653=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630653
 ]

ASF GitHub Bot logged work on HADOOP-17814:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 17:10
Start Date: 28/Jul/21 17:10
Worklog Time Spent: 10m 
  Work Description: tasanuma merged pull request #3230:
URL: https://github.com/apache/hadoop/pull/3230


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630653)
Time Spent: 50m  (was: 40m)

> Provide fallbacks for identity/cost providers and backoff enable
> 
>
> Key: HADOOP-17814
> URL: https://issues.apache.org/jira/browse/HADOOP-17814
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> This sub-task is to provide default properties for identity-provider.impl, 
> cost-provider.impl and backoff.enable such that if the per-port properties 
> are not configured, we can fall back to the default (port-less) properties.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tasanuma merged pull request #3230: HADOOP-17814. Provide fallbacks for identity/cost providers and backoff enable

2021-07-28 Thread GitBox


tasanuma merged pull request #3230:
URL: https://github.com/apache/hadoop/pull/3230


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#issuecomment-888463500


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 490m 41s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/14/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 580m 52s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus 
|
   |   | hadoop.hdfs.TestViewDistributedFileSystemContract |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
   |   | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestBlocksScheduledCounter |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3235 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 7052f9cdb77d 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 99c7d0e5b1fc8352816a7b1c659c51de56c71993 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 

[jira] [Work logged] (HADOOP-17628) Distcp contract test is really slow with ABFS and S3A; timing out

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17628?focusedWorklogId=630639=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630639
 ]

ASF GitHub Bot logged work on HADOOP-17628:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 16:33
Start Date: 28/Jul/21 16:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-888452283


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 43s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  20m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  23m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  21m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 57s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   4m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 13s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  20m  1s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 30s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 15s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 254m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3240/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3240 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 572cbaae0ad3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / de1cfdb43770696929466725e839ffbe7c14883d |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3240: HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out.

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-888452283


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 43s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  20m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  23m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  21m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 57s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   4m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 13s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  20m  1s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 30s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 15s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 254m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3240/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3240 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 572cbaae0ad3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / de1cfdb43770696929466725e839ffbe7c14883d |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3240/4/testReport/ |
   | Max. process+thread count | 2341 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-tools/hadoop-distcp hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: 
. |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3240/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT 

[jira] [Work logged] (HADOOP-17817) HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17817?focusedWorklogId=630632=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630632
 ]

ASF GitHub Bot logged work on HADOOP-17817:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 16:23
Start Date: 28/Jul/21 16:23
Worklog Time Spent: 10m 
  Work Description: mehakmeet edited a comment on pull request #3239:
URL: https://github.com/apache/hadoop/pull/3239#issuecomment-888421643


   Seems like I've broken tests for S3-CSE ON, because a lot of S3Guard tests 
don't require your bucket to be S3Guard enabled and force the metastore to be 
DynamoDB. My lapse in testing for S3-CSE ON and S3Guard OFF. I think we 
should've skipped the S3Guard tests for S3-CSE anyway, so I'll skip all of 
them in a follow-up PR. The failure is valid, but we should still skip; what 
do you think, @steveloughran? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630632)
Time Spent: 1.5h  (was: 1h 20m)

> HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled
> -
>
> Key: HADOOP-17817
> URL: https://issues.apache.org/jira/browse/HADOOP-17817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Throw an exception if S3Guard and S3 Client-side encryption are enabled on a 
> bucket. Follow-up to HADOOP-13887. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet edited a comment on pull request #3239: HADOOP-17817. Throw an exception if S3 client-side encryption is enabled on S3Guard enabled bucket

2021-07-28 Thread GitBox


mehakmeet edited a comment on pull request #3239:
URL: https://github.com/apache/hadoop/pull/3239#issuecomment-888421643


   Seems like I've broken tests for S3-CSE ON, because a lot of S3Guard tests 
don't require your bucket to be S3Guard enabled and force the metastore to be 
DynamoDB. My lapse in testing for S3-CSE ON and S3Guard OFF. I think we 
should've skipped the S3Guard tests for S3-CSE anyway, so I'll skip all of 
them in a follow-up PR. The failure is valid, but we should still skip; what 
do you think, @steveloughran? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17817) HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17817?focusedWorklogId=630607=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630607
 ]

ASF GitHub Bot logged work on HADOOP-17817:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 15:51
Start Date: 28/Jul/21 15:51
Worklog Time Spent: 10m 
  Work Description: mehakmeet commented on pull request #3239:
URL: https://github.com/apache/hadoop/pull/3239#issuecomment-888421643


   Seems like I've broken tests for S3-CSE ON, because some S3Guard tests 
don't require your bucket to be S3Guard enabled and force the metastore to be 
DynamoDB. My lapse in testing for S3-CSE ON and S3Guard OFF. I think we 
should've skipped the S3Guard tests for S3-CSE anyway, so I'll skip all of 
them in a follow-up PR. The failure is valid, but we should still skip; what 
do you think, @steveloughran? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630607)
Time Spent: 1h 20m  (was: 1h 10m)

> HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled
> -
>
> Key: HADOOP-17817
> URL: https://issues.apache.org/jira/browse/HADOOP-17817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Throw an exception if S3Guard and S3 Client-side encryption are enabled on a 
> bucket. Follow-up to HADOOP-13887. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet commented on pull request #3239: HADOOP-17817. Throw an exception if S3 client-side encryption is enabled on S3Guard enabled bucket

2021-07-28 Thread GitBox


mehakmeet commented on pull request #3239:
URL: https://github.com/apache/hadoop/pull/3239#issuecomment-888421643


   Seems like I've broken tests for S3-CSE ON, because some S3Guard tests 
don't require your bucket to be S3Guard enabled and force the metastore to be 
DynamoDB. My lapse in testing for S3-CSE ON and S3Guard OFF. I think we 
should've skipped the S3Guard tests for S3-CSE anyway, so I'll skip all of 
them in a follow-up PR. The failure is valid, but we should still skip; what 
do you think, @steveloughran? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#issuecomment-888419375


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   3m 51s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/11/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   2m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   1m  0s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/11/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 23s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/11/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 48s | 
[/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/11/artifact/out/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 99 new + 0 unchanged 
- 0 fixed = 99 total (was 0)  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 483m 39s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 548m 30s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus 
|
   |   | hadoop.hdfs.TestViewDistributedFileSystemContract |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks |
   |   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
   |   | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | 

[jira] [Resolved] (HADOOP-17789) S3 read performance with Spark with Hadoop 3.3.1 is slower than older Hadoop

2021-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17789.
-
Resolution: Works for Me

> S3 read performance with Spark with Hadoop 3.3.1 is slower than older Hadoop
> 
>
> Key: HADOOP-17789
> URL: https://issues.apache.org/jira/browse/HADOOP-17789
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Arghya Saha
>Priority: Minor
> Attachments: storediag.log
>
>
> This issue is a continuation of 
> https://issues.apache.org/jira/browse/HADOOP-17755
> The input data reported by Spark (Hadoop 3.3.1) was almost double, and read 
> runtime also increased (around 20%), compared to Spark (Hadoop 3.2.0) with the 
> exact same amount of resources and the same configuration. This is also 
> happening with other jobs that were not impacted by the read fully error noted above.
> *I was having the exact same issue when I was using the workaround 
> fs.s3a.readahead.range = 1G with Hadoop 3.2.0*
> Further details below:
>  
> |Hadoop Version|Actual size of the files (in SQL Tab)|Reported size of the file (in Stages)|Time to complete the Stage|fs.s3a.readahead.range|
> |Hadoop 3.2.0|29.3 GiB|29.3 GiB|23 min|64K|
> |Hadoop 3.3.1|29.3 GiB|*58.7 GiB*|*27 min*|64K|
> |Hadoop 3.2.0|29.3 GiB|*58.7 GiB*|*~27 min*|1G|
>  * *Shuffle Write* is the same (95.9 GiB) for all three cases above
> I was expecting some improvement (or at least the same as 3.2.0) with Hadoop 3.3.1 for 
> read operations; please suggest how to approach and resolve this.
> I have used the default s3a config along with the settings below, on an EKS cluster
> {code:java}
> spark.hadoop.fs.s3a.committer.magic.enabled: 'true'
> spark.hadoop.fs.s3a.committer.name: magic
> spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a: 
> org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
> spark.hadoop.fs.s3a.downgrade.syncable.exceptions: "true"{code}
>  * I did not use 
> {code:java}
> spark.hadoop.fs.s3a.experimental.input.fadvise=random{code}
> As already mentioned, I have used the same Spark, the same amount of resources, and 
> the same config. The only change is Hadoop 3.2.0 to Hadoop 3.3.1 (built with Spark 
> using ./dev/make-distribution.sh --name spark-patched --pip -Pkubernetes 
> -Phive -Phive-thriftserver -Dhadoop.version="3.3.1")
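
For anyone reproducing this setup, a minimal sketch of the same committer settings 
applied through Spark's Java SparkConf API. The spark-defaults entries quoted above 
are the source of truth; the class name below exists only for the example.

{code:java}
import org.apache.spark.SparkConf;

/** Hypothetical helper mirroring the quoted spark-defaults entries. */
public final class S3aMagicCommitterConf {

  private S3aMagicCommitterConf() {
  }

  public static SparkConf magicCommitterConf() {
    // Every Spark property value is set as a string, including the booleans.
    return new SparkConf()
        .set("spark.hadoop.fs.s3a.committer.magic.enabled", "true")
        .set("spark.hadoop.fs.s3a.committer.name", "magic")
        .set("spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a",
            "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory")
        .set("spark.hadoop.fs.s3a.downgrade.syncable.exceptions", "true");
  }
}
{code}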



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17789) S3 read performance with Spark with Hadoop 3.3.1 is slower than older Hadoop

2021-07-28 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388847#comment-17388847
 ] 

Steve Loughran commented on HADOOP-17789:
-

bq. Can we use the latest wildfly-openssl 2.1.x?

The one shipped in Hadoop is the one tested. Anything else: you are on your own.

My recommendation: check out the Hadoop trunk source, change the version in the 
poms, rebuild and retest everything to see what S3A and ABFS do, then create a 
release build and retest in a test cluster you've created where the container 
images all use the native OpenSSL version you intend to use. If all works, then 
submit the Hadoop PR, which, if taken up, means more people will test it and it 
will get supported.

Closing this JIRA as invalid, as it was a configuration issue.


> S3 read performance with Spark with Hadoop 3.3.1 is slower than older Hadoop
> 
>
> Key: HADOOP-17789
> URL: https://issues.apache.org/jira/browse/HADOOP-17789
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Arghya Saha
>Priority: Minor
> Attachments: storediag.log
>
>
> This issue is a continuation of 
> https://issues.apache.org/jira/browse/HADOOP-17755
> The input data reported by Spark (Hadoop 3.3.1) was almost double, and the read 
> runtime also increased (around 20%) compared to Spark (Hadoop 3.2.0) with the 
> exact same amount of resources and the same configuration. This is happening with 
> other jobs as well, which were not impacted by the read fully error stated above.
> *I was having the exact same issue when I was using the workaround 
> fs.s3a.readahead.range = 1G with Hadoop 3.2.0*
> Below are further details:
>  
> |Hadoop Version|Actual size of the files(in SQL Tab)|Reported size of the 
> file(In Stages)|Time to complete the Stage|fs.s3a.readahead.range|
> |Hadoop 3.2.0|29.3 GiB|29.3 GiB|23 min|64K|
> |Hadoop 3.3.1|29.3 GiB|*{color:#ff}58.7 GiB{color}*|*{color:#ff}27 
> min{color}*|{color:#172b4d}64K{color}|
> |Hadoop 3.2.0|29.3 GiB|*{color:#ff}58.7 GiB{color}*|*{color:#ff}~27 
> min{color}*|{color:#172b4d}1G{color}|
>  * *Shuffle Write* is the same (95.9 GiB) for all three cases above
> I was expecting some improvement (or at least the same as 3.2.0) with Hadoop 3.3.1 for 
> read operations; please suggest how to approach and resolve this.
> I have used the default s3a config along with the settings below, on an EKS cluster
> {code:java}
> spark.hadoop.fs.s3a.committer.magic.enabled: 'true'
> spark.hadoop.fs.s3a.committer.name: magic
> spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a: 
> org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
> spark.hadoop.fs.s3a.downgrade.syncable.exceptions: "true"{code}
>  * I did not use 
> {code:java}
> spark.hadoop.fs.s3a.experimental.input.fadvise=random{code}
> As already mentioned, I have used the same Spark, the same amount of resources, and 
> the same config. The only change is Hadoop 3.2.0 to Hadoop 3.3.1 (built with Spark 
> using ./dev/make-distribution.sh --name spark-patched --pip -Pkubernetes 
> -Phive -Phive-thriftserver -Dhadoop.version="3.3.1")



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#issuecomment-888374231


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 15s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 393m  1s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/13/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +0 :ok: |  asflicense  |   0m 33s |  |  ASF License check generated no 
output?  |
   |  |   | 488m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.server.namenode.TestGetContentSummaryWithPermission |
   |   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
   |   | hadoop.hdfs.server.namenode.TestCacheDirectivesWithViewDFS |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.namenode.TestFSImage |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRecovery |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3235 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 239e6b94c876 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 99c7d0e5b1fc8352816a7b1c659c51de56c71993 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Resolved] (HADOOP-17817) HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled

2021-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17817.
-
Resolution: Fixed

> HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled
> -
>
> Key: HADOOP-17817
> URL: https://issues.apache.org/jira/browse/HADOOP-17817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Throw an exception if S3Guard and S3 Client-side encryption are enabled on a 
> bucket. Follow-up to HADOOP-13887. 
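
For illustration only, a minimal sketch of the fail-fast check described above, 
assuming plain booleans for the two feature switches. The helper class and method 
names are invented for this example and are not the actual S3AFileSystem code.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.PathIOException;

/** Hypothetical helper; only PathIOException is real Hadoop API here. */
final class CseS3GuardCompatibilityCheck {

  private CseS3GuardCompatibilityCheck() {
  }

  /**
   * Fail fast when S3 client-side encryption and S3Guard are both enabled.
   * @param bucket bucket name, used only in the error text
   * @param cseEnabled whether S3-CSE is enabled for the bucket
   * @param s3guardEnabled whether an S3Guard metadata store is configured
   * @throws IOException if the two incompatible features are combined
   */
  static void checkCseAndS3Guard(String bucket,
      boolean cseEnabled,
      boolean s3guardEnabled) throws IOException {
    if (cseEnabled && s3guardEnabled) {
      throw new PathIOException("s3a://" + bucket,
          "S3 client-side encryption (S3-CSE) cannot be used"
              + " when an S3Guard metadata store is enabled");
    }
  }
}
{code}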



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17817) HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled

2021-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17817:

Component/s: fs/s3

> HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled
> -
>
> Key: HADOOP-17817
> URL: https://issues.apache.org/jira/browse/HADOOP-17817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Throw an exception if S3Guard and S3 Client-side encryption are enabled on a 
> bucket. Follow-up to HADOOP-13887. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13887) Encrypt S3A data client-side with AWS SDK (S3-CSE)

2021-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13887:

Description: 
Expose the client-side encryption option documented in Amazon S3 documentation  
- http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html

When backporting, include HADOOP-17817

  was:
Expose the client-side encryption option documented in Amazon S3 documentation  
- http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html

Currently this is not exposed in Hadoop but it is exposed as an option in AWS 
Java SDK, which Hadoop currently includes.


> Encrypt S3A data client-side with AWS SDK (S3-CSE)
> --
>
> Key: HADOOP-13887
> URL: https://issues.apache.org/jira/browse/HADOOP-13887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Jeeyoung Kim
>Assignee: Mehakmeet Singh
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-13887-002.patch, HADOOP-13887-007.patch, 
> HADOOP-13887-branch-2-003.patch, HADOOP-13897-branch-2-004.patch, 
> HADOOP-13897-branch-2-005.patch, HADOOP-13897-branch-2-006.patch, 
> HADOOP-13897-branch-2-008.patch, HADOOP-13897-branch-2-009.patch, 
> HADOOP-13897-branch-2-010.patch, HADOOP-13897-branch-2-012.patch, 
> HADOOP-13897-branch-2-014.patch, HADOOP-13897-trunk-011.patch, 
> HADOOP-13897-trunk-013.patch, HADOOP-14171-001.patch, S3-CSE Proposal.pdf
>
>  Time Spent: 11h 40m
>  Remaining Estimate: 0h
>
> Expose the client-side encryption option documented in Amazon S3 
> documentation  - 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
> When backporting, include HADOOP-17817
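
As a hedged sketch of what enabling the option could look like from Java: the 
property names below ("fs.s3a.encryption.algorithm" / "fs.s3a.encryption.key") are 
assumptions based on the generic encryption keys associated with this work, so 
verify them against the S3A encryption documentation for the release in use.

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Illustrative only; verify the property names against the S3A docs. */
public final class S3aCseConfigExample {

  private S3aCseConfigExample() {
  }

  public static Configuration withClientSideEncryption(String kmsKeyArn) {
    Configuration conf = new Configuration();
    // Select client-side encryption backed by AWS KMS.
    conf.set("fs.s3a.encryption.algorithm", "CSE-KMS");
    // KMS key used to protect the per-object data keys.
    conf.set("fs.s3a.encryption.key", kmsKeyArn);
    return conf;
  }
}
{code}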



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17817) HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled

2021-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17817:

Fix Version/s: 3.4.0

> HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled
> -
>
> Key: HADOOP-17817
> URL: https://issues.apache.org/jira/browse/HADOOP-17817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Throw an exception if S3Guard and S3 Client-side encryption are enabled on a 
> bucket. Follow-up to HADOOP-13887. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17817) HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled

2021-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17817:

Priority: Major  (was: Minor)

> HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled
> -
>
> Key: HADOOP-17817
> URL: https://issues.apache.org/jira/browse/HADOOP-17817
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Throw an exception if S3Guard and S3 Client-side encryption are enabled on a 
> bucket. Follow-up to HADOOP-13887. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17817) HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled

2021-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17817:

Affects Version/s: 3.4.0

> HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled
> -
>
> Key: HADOOP-17817
> URL: https://issues.apache.org/jira/browse/HADOOP-17817
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Throw an exception if S3Guard and S3 Client-side encryption are enabled on a 
> bucket. Follow-up to HADOOP-13887. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17817) HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled

2021-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17817:

Priority: Minor  (was: Major)

> HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled
> -
>
> Key: HADOOP-17817
> URL: https://issues.apache.org/jira/browse/HADOOP-17817
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Throw an exception if S3Guard and S3 Client-side encryption are enabled on a 
> bucket. Follow-up to HADOOP-13887. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17817) HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled

2021-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17817:

Summary: HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled  
(was: Throw an exception if S3 client-side encryption is enabled on S3Guard 
enabled bucket)

> HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled
> -
>
> Key: HADOOP-17817
> URL: https://issues.apache.org/jira/browse/HADOOP-17817
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Throw an exception if S3Guard and S3 Client-side encryption are enabled on a 
> bucket. Follow-up to HADOOP-13887. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17817) Throw an exception if S3 client-side encryption is enabled on S3Guard enabled bucket

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17817?focusedWorklogId=630561&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630561
 ]

ASF GitHub Bot logged work on HADOOP-17817:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 14:34
Start Date: 28/Jul/21 14:34
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #3239:
URL: https://github.com/apache/hadoop/pull/3239


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630561)
Time Spent: 1h 10m  (was: 1h)

> Throw an exception if S3 client-side encryption is enabled on S3Guard enabled 
> bucket
> 
>
> Key: HADOOP-17817
> URL: https://issues.apache.org/jira/browse/HADOOP-17817
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Throw an exception if S3Guard and S3 Client-side encryption are enabled on a 
> bucket. Follow-up to HADOOP-13887. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #3239: HADOOP-17817. Throw an exception if S3 client-side encryption is enabled on S3Guard enabled bucket

2021-07-28 Thread GitBox


steveloughran merged pull request #3239:
URL: https://github.com/apache/hadoop/pull/3239


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3243: HDFS-14529. SetTimes to throw FileNotFoundException if inode is not found

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3243:
URL: https://github.com/apache/hadoop/pull/3243#issuecomment-888337965


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 58s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3243/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 1 unchanged - 
0 fixed = 2 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 246m 54s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3243/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 340m 48s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3243/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3243 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 1d6b73d9265a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9458f1506d31fb1e5158b962a06a5730409784f1 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3243/2/testReport/ |
   | Max. process+thread count | 3183 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output 

[jira] [Commented] (HADOOP-17789) S3 read performance with Spark with Hadoop 3.3.1 is slower than older Hadoop

2021-07-28 Thread Arghya Saha (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388790#comment-17388790
 ] 

Arghya Saha commented on HADOOP-17789:
--

[~ste...@apache.org] Thanks again for your suggestions; the performance 
improved by around 10-30% after applying the suggested configurations. We still have 
not tried wildfly.jar, as we were unsure which version is compatible with Hadoop 
3.3.1; the one I found in the Hadoop package is the older wildfly-openssl 1.x. 
Can we use the latest wildfly-openssl 2.1.x?

I think we are good to close the Jira as well.

> S3 read performance with Spark with Hadoop 3.3.1 is slower than older Hadoop
> 
>
> Key: HADOOP-17789
> URL: https://issues.apache.org/jira/browse/HADOOP-17789
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Arghya Saha
>Priority: Minor
> Attachments: storediag.log
>
>
> This issue is a continuation of 
> https://issues.apache.org/jira/browse/HADOOP-17755
> The input data reported by Spark (Hadoop 3.3.1) was almost double, and the read 
> runtime also increased (around 20%) compared to Spark (Hadoop 3.2.0) with the 
> exact same amount of resources and the same configuration. This is happening with 
> other jobs as well, which were not impacted by the read fully error stated above.
> *I was having the exact same issue when I was using the workaround 
> fs.s3a.readahead.range = 1G with Hadoop 3.2.0*
> Below are further details:
>  
> |Hadoop Version|Actual size of the files(in SQL Tab)|Reported size of the 
> file(In Stages)|Time to complete the Stage|fs.s3a.readahead.range|
> |Hadoop 3.2.0|29.3 GiB|29.3 GiB|23 min|64K|
> |Hadoop 3.3.1|29.3 GiB|*{color:#ff}58.7 GiB{color}*|*{color:#ff}27 
> min{color}*|{color:#172b4d}64K{color}|
> |Hadoop 3.2.0|29.3 GiB|*{color:#ff}58.7 GiB{color}*|*{color:#ff}~27 
> min{color}*|{color:#172b4d}1G{color}|
>  * *Shuffle Write* is the same (95.9 GiB) for all three cases above
> I was expecting some improvement (or at least the same as 3.2.0) with Hadoop 3.3.1 for 
> read operations; please suggest how to approach and resolve this.
> I have used the default s3a config along with the settings below, on an EKS cluster
> {code:java}
> spark.hadoop.fs.s3a.committer.magic.enabled: 'true'
> spark.hadoop.fs.s3a.committer.name: magic
> spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a: 
> org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
> spark.hadoop.fs.s3a.downgrade.syncable.exceptions: "true"{code}
>  * I did not use 
> {code:java}
> spark.hadoop.fs.s3a.experimental.input.fadvise=random{code}
> As already mentioned, I have used the same Spark, the same amount of resources, and 
> the same config. The only change is Hadoop 3.2.0 to Hadoop 3.3.1 (built with Spark 
> using ./dev/make-distribution.sh --name spark-patched --pip -Pkubernetes 
> -Phive -Phive-thriftserver -Dhadoop.version="3.3.1")



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3220: YARN-10355. Refactor NM ContainerLaunch.java#orderEnvByDependencies

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3220:
URL: https://github.com/apache/hadoop/pull/3220#issuecomment-888301811


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 35s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  20m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 34s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  19m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  27m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  27m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  23m 48s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 28s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  20m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 29s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  24m 32s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 223m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3220/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3220 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell xml spotbugs checkstyle |
   | uname | Linux 00e7d0292931 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8251f55ee2b5150dab9fc59709996908ceaeb37c |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3220/6/testReport/ |
   | Max. process+thread count | 574 (vs. ulimit of 5500) |
   | modules | C: hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3220/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3195: YARN-10459. containerLaunchedOnNode method not need to hold scheduler…

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3195:
URL: https://github.com/apache/hadoop/pull/3195#issuecomment-888281625


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  2s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 44s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   9m 53s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3195/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +0 :ok: |  asflicense  |   0m 26s |  |  ASF License check generated no 
output?  |
   |  |   | 119m 57s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestContainerResourceUsage |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3195/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3195 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 6aab5bd94c6b 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fd61e90198f119c67716ea5402243bb5084ef6e9 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3195/1/testReport/ |
   | Max. process+thread count | 804 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console 

[jira] [Work logged] (HADOOP-17628) Distcp contract test is really slow with ABFS and S3A; timing out

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17628?focusedWorklogId=630485&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630485
 ]

ASF GitHub Bot logged work on HADOOP-17628:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 12:35
Start Date: 28/Jul/21 12:35
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-888247513


   Latest release
   
   * Address review comments
   * log IOStats after each test case.
 Important: as the cached FS retains statistics, the numbers
 get bigger over time.
   * HDFS test is now reinstated, as we've identified that most
 of its long execution time is from the large file upload/download
 suites. Disable them and its execution time drops from 4m to 30s,
 which means it can then be used to make sure the contract suite
 is consistent between HDFS and the object stores.
   
   
   IOStats of full suite against S3 london (1:43s)
   
   ```
   2021-07-28 12:40:48,632 [setup] INFO  statistics.IOStatisticsLogging 
(IOStatisticsLogging.java:logIOStatisticsAtLevel(269)) - IOStatistics: 
counters=((action_executor_acquired=47)
   (action_http_get_request=38)
   (action_http_head_request=111)
   (audit_request_execution=420)
   (audit_span_creation=483)
   (directories_created=38)
   (directories_deleted=1)
   (fake_directories_deleted=485)
   (files_copied=2)
   (files_copied_bytes=264)
   (files_created=47)
   (files_deleted=48)
   (ignored_errors=14)
   (object_bulk_delete_request=88)
   (object_copy_requests=2)
   (object_delete_objects=534)
   (object_delete_request=5)
   (object_list_request=89)
   (object_metadata_request=111)
   (object_put_bytes=18880752)
   (object_put_request=85)
   (object_put_request_completed=85)
   (op_create=47)
   (op_delete=14)
   (op_exists=13)
   (op_exists.failures=3)
   (op_get_file_status=194)
   (op_get_file_status.failures=44)
   (op_glob_status=25)
   (op_is_file=1)
   (op_list_files=9)
   (op_list_status=60)
   (op_mkdirs=64)
   (op_open=39)
   (op_rename=2)
   (s3guard_metadatastore_initialization=1)
   (s3guard_metadatastore_put_path_request=103)
   (s3guard_metadatastore_record_deletes=2)
   (s3guard_metadatastore_record_reads=1473)
   (s3guard_metadatastore_record_writes=350)
   (store_io_request=422)
   (stream_read_bytes=18878052)
   (stream_read_close_operations=39)
   (stream_read_closed=38)
   (stream_read_opened=38)
   (stream_read_operations=2742)
   (stream_read_operations_incomplete=1639)
   (stream_read_seek_policy_changed=39)
   (stream_read_total_bytes=18878052)
   (stream_write_block_uploads=47)
   (stream_write_bytes=18880752)
   (stream_write_total_data=37761504));
   
   gauges=((stream_write_block_uploads_pending=47));
   
   minimums=((action_executor_acquired.min=0)
   (action_http_get_request.min=31)
   (action_http_head_request.min=22)
   (object_bulk_delete_request.min=45)
   (object_delete_request.min=34)
   (object_list_request.min=28)
   (object_put_request.min=42)
   (op_create.min=16)
   (op_delete.min=53)
   (op_exists.failures.min=16)
   (op_exists.min=15)
   (op_get_file_status.failures.min=16)
   (op_get_file_status.min=15)
   (op_glob_status.min=15)
   (op_is_file.min=43)
   (op_list_files.min=176)
   (op_list_status.min=64)
   (op_mkdirs.min=16)
   (op_rename.min=967));
   
   maximums=((action_executor_acquired.max=0)
   (action_http_get_request.max=123)
   (action_http_head_request.max=317)
   (object_bulk_delete_request.max=384)
   (object_delete_request.max=91)
   (object_list_request.max=202)
   (object_put_request.max=2083)
   (op_create.max=129)
   (op_delete.max=2196)
   (op_exists.failures.max=45)
   (op_exists.max=43)
   (op_get_file_status.failures.max=29)
   (op_get_file_status.max=341)
   (op_glob_status.max=192)
   (op_is_file.max=43)
   (op_list_files.max=589)
   (op_list_status.max=260)
   (op_mkdirs.max=729)
   (op_rename.max=1199));
   
   means=((action_executor_acquired.mean=(samples=47, sum=0, mean=0.))
   (action_http_get_request.mean=(samples=38, sum=1490, mean=39.2105))
   (action_http_head_request.mean=(samples=111, sum=4311, mean=38.8378))
   (object_bulk_delete_request.mean=(samples=88, sum=12810, mean=145.5682))
   (object_delete_request.mean=(samples=5, sum=260, mean=52.))
   (object_list_request.mean=(samples=89, sum=4988, mean=56.0449))
   (object_put_request.mean=(samples=85, sum=17463, mean=205.4471))
   (op_create.mean=(samples=47, sum=1160, mean=24.6809))
   (op_delete.mean=(samples=14, sum=11257, mean=804.0714))
   (op_exists.failures.mean=(samples=3, sum=80, mean=26.6667))
   (op_exists.mean=(samples=10, sum=250, mean=25.))
   (op_get_file_status.failures.mean=(samples=44, sum=876, mean=19.9091))
   (op_get_file_status.mean=(samples=150, sum=6404, mean=42.6933))
   (op_glob_status.mean=(samples=25, sum=1826, 

[GitHub] [hadoop] steveloughran commented on pull request #3240: HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out.

2021-07-28 Thread GitBox


steveloughran commented on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-888247513


   Latest release
   
   * Address review comments
   * log IOStats after each test case (see the teardown sketch after this list).
 Important: as the cached FS retains statistics, the numbers
 get bigger over time.
   * HDFS test is now reinstated, as we've identified that most
 of its long execution time is from the large file upload/download
 suites. Disable them and its execution time drops from 4m to 30s,
 which means it can then be used to make sure the contract suite
 is consistent between HDFS and the object stores.
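   
   For reference, a minimal teardown sketch of the "log IOStats after each test
   case" point above. It assumes JUnit 4, SLF4J, and a base class that exposes
   the (cached) FileSystem under test; IOStatisticsLogging.ioStatisticsSourceToString
   is the only Hadoop call used, and it should be verified against the release in use.
   
   ```java
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.statistics.IOStatisticsLogging;
   import org.junit.After;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   public abstract class IOStatsLoggingTestBase {
     private static final Logger LOG =
         LoggerFactory.getLogger(IOStatsLoggingTestBase.class);
   
     /** Subclasses return the (possibly cached) filesystem under test. */
     protected abstract FileSystem getFileSystem();
   
     @After
     public void logIOStatisticsOfFileSystem() {
       // The FS instance is cached across test cases, so these counters are
       // cumulative: every test adds to the totals logged here.
       LOG.info("IOStatistics: {}",
           IOStatisticsLogging.ioStatisticsSourceToString(getFileSystem()));
     }
   }
   ```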
   
   
   IOStats of full suite against S3 london (1:43s)
   
   ```
   2021-07-28 12:40:48,632 [setup] INFO  statistics.IOStatisticsLogging 
(IOStatisticsLogging.java:logIOStatisticsAtLevel(269)) - IOStatistics: 
counters=((action_executor_acquired=47)
   (action_http_get_request=38)
   (action_http_head_request=111)
   (audit_request_execution=420)
   (audit_span_creation=483)
   (directories_created=38)
   (directories_deleted=1)
   (fake_directories_deleted=485)
   (files_copied=2)
   (files_copied_bytes=264)
   (files_created=47)
   (files_deleted=48)
   (ignored_errors=14)
   (object_bulk_delete_request=88)
   (object_copy_requests=2)
   (object_delete_objects=534)
   (object_delete_request=5)
   (object_list_request=89)
   (object_metadata_request=111)
   (object_put_bytes=18880752)
   (object_put_request=85)
   (object_put_request_completed=85)
   (op_create=47)
   (op_delete=14)
   (op_exists=13)
   (op_exists.failures=3)
   (op_get_file_status=194)
   (op_get_file_status.failures=44)
   (op_glob_status=25)
   (op_is_file=1)
   (op_list_files=9)
   (op_list_status=60)
   (op_mkdirs=64)
   (op_open=39)
   (op_rename=2)
   (s3guard_metadatastore_initialization=1)
   (s3guard_metadatastore_put_path_request=103)
   (s3guard_metadatastore_record_deletes=2)
   (s3guard_metadatastore_record_reads=1473)
   (s3guard_metadatastore_record_writes=350)
   (store_io_request=422)
   (stream_read_bytes=18878052)
   (stream_read_close_operations=39)
   (stream_read_closed=38)
   (stream_read_opened=38)
   (stream_read_operations=2742)
   (stream_read_operations_incomplete=1639)
   (stream_read_seek_policy_changed=39)
   (stream_read_total_bytes=18878052)
   (stream_write_block_uploads=47)
   (stream_write_bytes=18880752)
   (stream_write_total_data=37761504));
   
   gauges=((stream_write_block_uploads_pending=47));
   
   minimums=((action_executor_acquired.min=0)
   (action_http_get_request.min=31)
   (action_http_head_request.min=22)
   (object_bulk_delete_request.min=45)
   (object_delete_request.min=34)
   (object_list_request.min=28)
   (object_put_request.min=42)
   (op_create.min=16)
   (op_delete.min=53)
   (op_exists.failures.min=16)
   (op_exists.min=15)
   (op_get_file_status.failures.min=16)
   (op_get_file_status.min=15)
   (op_glob_status.min=15)
   (op_is_file.min=43)
   (op_list_files.min=176)
   (op_list_status.min=64)
   (op_mkdirs.min=16)
   (op_rename.min=967));
   
   maximums=((action_executor_acquired.max=0)
   (action_http_get_request.max=123)
   (action_http_head_request.max=317)
   (object_bulk_delete_request.max=384)
   (object_delete_request.max=91)
   (object_list_request.max=202)
   (object_put_request.max=2083)
   (op_create.max=129)
   (op_delete.max=2196)
   (op_exists.failures.max=45)
   (op_exists.max=43)
   (op_get_file_status.failures.max=29)
   (op_get_file_status.max=341)
   (op_glob_status.max=192)
   (op_is_file.max=43)
   (op_list_files.max=589)
   (op_list_status.max=260)
   (op_mkdirs.max=729)
   (op_rename.max=1199));
   
   means=((action_executor_acquired.mean=(samples=47, sum=0, mean=0.))
   (action_http_get_request.mean=(samples=38, sum=1490, mean=39.2105))
   (action_http_head_request.mean=(samples=111, sum=4311, mean=38.8378))
   (object_bulk_delete_request.mean=(samples=88, sum=12810, mean=145.5682))
   (object_delete_request.mean=(samples=5, sum=260, mean=52.))
   (object_list_request.mean=(samples=89, sum=4988, mean=56.0449))
   (object_put_request.mean=(samples=85, sum=17463, mean=205.4471))
   (op_create.mean=(samples=47, sum=1160, mean=24.6809))
   (op_delete.mean=(samples=14, sum=11257, mean=804.0714))
   (op_exists.failures.mean=(samples=3, sum=80, mean=26.6667))
   (op_exists.mean=(samples=10, sum=250, mean=25.))
   (op_get_file_status.failures.mean=(samples=44, sum=876, mean=19.9091))
   (op_get_file_status.mean=(samples=150, sum=6404, mean=42.6933))
   (op_glob_status.mean=(samples=25, sum=1826, mean=73.0400))
   (op_is_file.mean=(samples=1, sum=43, mean=43.))
   (op_list_files.mean=(samples=9, sum=3218, mean=357.5556))
   (op_list_status.mean=(samples=60, sum=7084, mean=118.0667))
   (op_mkdirs.mean=(samples=64, sum=15375, mean=240.2344))
   (op_rename.mean=(samples=2, sum=2166, mean=1083.)));
   ```
   
   IOStats of full suite against AWS cardiff (1:28). That region is about 30 
miles away from here, 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#issuecomment-888263127


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 238m 57s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/12/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 337m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3235 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux c07e97aaa89d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 99c7d0e5b1fc8352816a7b1c659c51de56c71993 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/12/testReport/ |
   | Max. process+thread count | 3522 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/12/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log 

[jira] [Work logged] (HADOOP-17628) Distcp contract test is really slow with ABFS and S3A; timing out

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17628?focusedWorklogId=630471&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-630471
 ]

ASF GitHub Bot logged work on HADOOP-17628:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 12:32
Start Date: 28/Jul/21 12:32
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#3240:
URL: https://github.com/apache/hadoop/pull/3240#discussion_r678227498



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -532,13 +549,15 @@ private Path distCpDeepDirectoryStructure(FileSystem 
srcFS,
*/
   private void largeFiles(FileSystem srcFS, Path srcDir, FileSystem dstFS,
   Path dstDir) throws Exception {
+int fileSizeKb = conf.getInt(SCALE_TEST_DISTCP_FILE_SIZE_KB,
+DEFAULT_DISTCP_SIZE_KB);
+if (fileSizeKb < 1) {
+  skip("File size in " + SCALE_TEST_DISTCP_FILE_SIZE_KB + " too small");

Review comment:
   Now:
   
   "File size in " + SCALE_TEST_DISTCP_FILE_SIZE_KB + " is zero"
   
   It's not a bug, just a fact... for the HDFS suite it'll be zero by default now

##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -612,6 +634,9 @@ public void testDirectWrite() throws Exception {
 
   @Test
   public void testNonDirectWrite() throws Exception {
+if (directWriteAlways()) {
+  skip("not needed");

Review comment:
   Actually, it should be in the previous test, so I moved it up. Thanks for drawing my attention to it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 630471)
Time Spent: 2h 40m  (was: 2.5h)

> Distcp contract test is really slow with ABFS and S3A; timing out
> -
>
> Key: HADOOP-17628
> URL: https://issues.apache.org/jira/browse/HADOOP-17628
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, fs/s3, test, tools/distcp
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The test case testDistCpWithIterator in AbstractContractDistCpTest is 
> consistently timing out.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #3240: HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out.

2021-07-28 Thread GitBox


steveloughran commented on a change in pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#discussion_r678227498



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -532,13 +549,15 @@ private Path distCpDeepDirectoryStructure(FileSystem 
srcFS,
*/
   private void largeFiles(FileSystem srcFS, Path srcDir, FileSystem dstFS,
   Path dstDir) throws Exception {
+int fileSizeKb = conf.getInt(SCALE_TEST_DISTCP_FILE_SIZE_KB,
+DEFAULT_DISTCP_SIZE_KB);
+if (fileSizeKb < 1) {
+  skip("File size in " + SCALE_TEST_DISTCP_FILE_SIZE_KB + " too small");

Review comment:
   now
   
   "File size in " + SCALE_TEST_DISTCP_FILE_SIZE_KB + " is zero
   
   It's not a bug, just a fact... for the HDFS suite it'll be zero by default now.

##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -612,6 +634,9 @@ public void testDirectWrite() throws Exception {
 
   @Test
   public void testNonDirectWrite() throws Exception {
+if (directWriteAlways()) {
+  skip("not needed");

Review comment:
   Actually, it should be in the previous test, so I moved it up. Thanks for drawing my attention to it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17628) Distcp contract test is really slow with ABFS and S3A; timing out

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17628?focusedWorklogId=629905&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-629905
 ]

ASF GitHub Bot logged work on HADOOP-17628:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 12:00
Start Date: 28/Jul/21 12:00
Worklog Time Spent: 10m 
  Work Description: mehakmeet commented on a change in pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#discussion_r678208587



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -532,13 +549,15 @@ private Path distCpDeepDirectoryStructure(FileSystem 
srcFS,
*/
   private void largeFiles(FileSystem srcFS, Path srcDir, FileSystem dstFS,
   Path dstDir) throws Exception {
+int fileSizeKb = conf.getInt(SCALE_TEST_DISTCP_FILE_SIZE_KB,
+DEFAULT_DISTCP_SIZE_KB);
+if (fileSizeKb < 1) {
+  skip("File size in " + SCALE_TEST_DISTCP_FILE_SIZE_KB + " too small");

Review comment:
   Nit: maybe the message should either say "make the file size in `SCALE_TEST_DISTCP_FILE_SIZE_KB` greater than or equal to 1", or "file size in `SCALE_TEST_DISTCP_FILE_SIZE_KB` is smaller than 1".

##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -612,6 +634,9 @@ public void testDirectWrite() throws Exception {
 
   @Test
   public void testNonDirectWrite() throws Exception {
+if (directWriteAlways()) {
+  skip("not needed");

Review comment:
   Nit: maybe move it below `describe()`, or mention in the skip message what is being skipped.
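   
   A minimal sketch of that arrangement, assuming the `describe()`, `directWriteAlways()` and `skip()` helpers of the surrounding test class; the description and skip messages are illustrative only:
   
   ```java
@Test
public void testNonDirectWrite() throws Exception {
  // describe() first, so the intent is logged even when the test is skipped.
  describe("distcp without the -direct option");
  if (directWriteAlways()) {
    // Give an explicit reason so the skip is self-explanatory in test reports.
    skip("store always uses direct write; the non-direct variant is not applicable");
  }
  // ... original test body continues here ...
}
   ```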




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 629905)
Time Spent: 2.5h  (was: 2h 20m)

> Distcp contract test is really slow with ABFS and S3A; timing out
> -
>
> Key: HADOOP-17628
> URL: https://issues.apache.org/jira/browse/HADOOP-17628
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, fs/s3, test, tools/distcp
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The test case testDistCpWithIterator in AbstractContractDistCpTest is 
> consistently timing out.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet commented on a change in pull request #3240: HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out.

2021-07-28 Thread GitBox


mehakmeet commented on a change in pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#discussion_r678208587



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -532,13 +549,15 @@ private Path distCpDeepDirectoryStructure(FileSystem 
srcFS,
*/
   private void largeFiles(FileSystem srcFS, Path srcDir, FileSystem dstFS,
   Path dstDir) throws Exception {
+int fileSizeKb = conf.getInt(SCALE_TEST_DISTCP_FILE_SIZE_KB,
+DEFAULT_DISTCP_SIZE_KB);
+if (fileSizeKb < 1) {
+  skip("File size in " + SCALE_TEST_DISTCP_FILE_SIZE_KB + " too small");

Review comment:
   Nit: maybe the message should either say "make the file size in `SCALE_TEST_DISTCP_FILE_SIZE_KB` greater than or equal to 1", or "file size in `SCALE_TEST_DISTCP_FILE_SIZE_KB` is smaller than 1".

##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -612,6 +634,9 @@ public void testDirectWrite() throws Exception {
 
   @Test
   public void testNonDirectWrite() throws Exception {
+if (directWriteAlways()) {
+  skip("not needed");

Review comment:
   Nit: maybe move it below `describe()`, or mention in the skip message what is being skipped.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17628) Distcp contract test is really slow with ABFS and S3A; timing out

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17628?focusedWorklogId=629408&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-629408
 ]

ASF GitHub Bot logged work on HADOOP-17628:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 11:32
Start Date: 28/Jul/21 11:32
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#3240:
URL: https://github.com/apache/hadoop/pull/3240#discussion_r678208290



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -758,7 +828,7 @@ public void testDistCpWithUpdateExistFile() throws 
Exception {
 verifyPathExists(remoteFS, "", source);
 verifyPathExists(localFS, "", dest);
 DistCpTestUtils.assertRunDistCp(DistCpConstants.SUCCESS, source.toString(),
-dest.toString(), "-delete -update", conf);
+dest.toString(), "-delete -update" + getDefaultCLIOptions(), conf);

Review comment:
   I did think about it, but I also felt it might be prudent to have tests where the CLI options get parsed, to make sure that -direct is handled properly there. We've been hit in the past by some failures with S3Guard and the FS shell because we were always working at the API level.
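   
   For illustration, the pattern being defended is roughly the following fragment from inside such a test method, assuming (as the diff above implies) that `getDefaultCLIOptions()` returns " -direct" when direct write is forced and an empty string otherwise:
   
   ```java
// Keep the option string going through DistCp's own CLI parsing,
// so flags such as -direct are exercised end to end rather than only via the API.
String options = "-delete -update" + getDefaultCLIOptions();
DistCpTestUtils.assertRunDistCp(DistCpConstants.SUCCESS,
    source.toString(), dest.toString(), options, conf);
   ```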




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 629408)
Time Spent: 2h 20m  (was: 2h 10m)

> Distcp contract test is really slow with ABFS and S3A; timing out
> -
>
> Key: HADOOP-17628
> URL: https://issues.apache.org/jira/browse/HADOOP-17628
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, fs/s3, test, tools/distcp
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The test case testDistCpWithIterator in AbstractContractDistCpTest is 
> consistently timing out.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #3240: HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out.

2021-07-28 Thread GitBox


steveloughran commented on a change in pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#discussion_r678208290



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -758,7 +828,7 @@ public void testDistCpWithUpdateExistFile() throws 
Exception {
 verifyPathExists(remoteFS, "", source);
 verifyPathExists(localFS, "", dest);
 DistCpTestUtils.assertRunDistCp(DistCpConstants.SUCCESS, source.toString(),
-dest.toString(), "-delete -update", conf);
+dest.toString(), "-delete -update" + getDefaultCLIOptions(), conf);

Review comment:
   I did think about it, but I also felt it might be prudent to have tests where the CLI options get parsed, to make sure that -direct is handled properly there. We've been hit in the past by some failures with S3Guard and the FS shell because we were always working at the API level.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17628) Distcp contract test is really slow with ABFS and S3A; timing out

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17628?focusedWorklogId=629248&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-629248
 ]

ASF GitHub Bot logged work on HADOOP-17628:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 11:23
Start Date: 28/Jul/21 11:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-887887092


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m  3s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  0s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 53s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 42s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 50s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   4m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 33s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  26m 27s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 30s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 250m 56s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3240/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3240 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 0840ba8d81b5 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8c9d528468f03424dfa16650c0b67b71651c4bf6 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3240: HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out.

2021-07-28 Thread GitBox


hadoop-yetus removed a comment on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-887887092


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m  3s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  0s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 53s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 42s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 50s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   4m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 33s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  26m 27s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 30s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 250m 56s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3240/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3240 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 0840ba8d81b5 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8c9d528468f03424dfa16650c0b67b71651c4bf6 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3240/3/testReport/ |
   | Max. process+thread count | 2645 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-tools/hadoop-distcp hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: 
. |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3240/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 

[jira] [Work logged] (HADOOP-17628) Distcp contract test is really slow with ABFS and S3A; timing out

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17628?focusedWorklogId=629011&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-629011
 ]

ASF GitHub Bot logged work on HADOOP-17628:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 11:05
Start Date: 28/Jul/21 11:05
Worklog Time Spent: 10m 
  Work Description: steveloughran edited a comment on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-888217775


   OptionalTestHDFSContractDistCp runs fine from the IDE; 4 minutes.
   
   All the performance issues of the HDFS contract suite come from the large file tests: 2 minutes for one, 1:30 for the other. If those tests were turned off, the suite could always be run, which would give us better regression checks that the object store behaviours match HDFS.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 629011)
Time Spent: 2h  (was: 1h 50m)

> Distcp contract test is really slow with ABFS and S3A; timing out
> -
>
> Key: HADOOP-17628
> URL: https://issues.apache.org/jira/browse/HADOOP-17628
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, fs/s3, test, tools/distcp
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The test case testDistCpWithIterator in AbstractContractDistCpTest is 
> consistently timing out.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran edited a comment on pull request #3240: HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out.

2021-07-28 Thread GitBox


steveloughran edited a comment on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-888217775


   OptionalTestHDFSContractDistCp runs fine from the IDE; 4 minutes.
   
   All the performance issues of the HDFS contract suite come from the large file tests: 2 minutes for one, 1:30 for the other. If those tests were turned off, the suite could always be run, which would give us better regression checks that the object store behaviours match HDFS.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17628) Distcp contract test is really slow with ABFS and S3A; timing out

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17628?focusedWorklogId=629009&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-629009
 ]

ASF GitHub Bot logged work on HADOOP-17628:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 11:00
Start Date: 28/Jul/21 11:00
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-888217775


   OptionalTestHDFSContractDistCp runs fine from the IDE; 4 minutes. Probably MiniDFSCluster overheads.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 629009)
Time Spent: 1h 50m  (was: 1h 40m)

> Distcp contract test is really slow with ABFS and S3A; timing out
> -
>
> Key: HADOOP-17628
> URL: https://issues.apache.org/jira/browse/HADOOP-17628
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, fs/s3, test, tools/distcp
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The test case testDistCpWithIterator in AbstractContractDistCpTest is 
> consistently timing out.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3240: HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out.

2021-07-28 Thread GitBox


steveloughran commented on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-888217775


   OptionalTestHDFSContractDistCp runs fine from the IDE; 4 minutes. Probably MiniDFSCluster overheads.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17628) Distcp contract test is really slow with ABFS and S3A; timing out

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17628?focusedWorklogId=629008&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-629008
 ]

ASF GitHub Bot logged work on HADOOP-17628:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 10:58
Start Date: 28/Jul/21 10:58
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-888216640


   Will add to release notes the fact you can turn off the large file uploads 
through
   
   ```xml
<property>
  <name>scale.test.distcp.file.size.kb</name>
  <value>0</value>
</property>
   ```
   This is useful for anyone doing testing from home on a network with slower 
upload speeds


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 629008)
Time Spent: 1h 40m  (was: 1.5h)

> Distcp contract test is really slow with ABFS and S3A; timing out
> -
>
> Key: HADOOP-17628
> URL: https://issues.apache.org/jira/browse/HADOOP-17628
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, fs/s3, test, tools/distcp
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The test case testDistCpWithIterator in AbstractContractDistCpTest is 
> consistently timing out.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3240: HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out.

2021-07-28 Thread GitBox


steveloughran commented on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-888216640


   Will add to release notes the fact you can turn off the large file uploads 
through
   
   ```xml
<property>
  <name>scale.test.distcp.file.size.kb</name>
  <value>0</value>
</property>
   ```
   This is useful for anyone doing testing from home on a network with slower 
upload speeds


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17817) Throw an exception if S3 client-side encryption is enabled on S3Guard enabled bucket

2021-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17817?focusedWorklogId=628995&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-628995
 ]

ASF GitHub Bot logged work on HADOOP-17817:
---

Author: ASF GitHub Bot
Created on: 28/Jul/21 10:39
Start Date: 28/Jul/21 10:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3239:
URL: https://github.com/apache/hadoop/pull/3239#issuecomment-888206357


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 16s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  81m 27s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3239/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3239 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 9072a5b349ac 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 40f1a6ab1e694dd8112483623a425300213291d0 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3239/2/testReport/ |
   | Max. process+thread count | 749 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3239: HADOOP-17817. Throw an exception if S3 client-side encryption is enabled on S3Guard enabled bucket

2021-07-28 Thread GitBox


hadoop-yetus commented on pull request #3239:
URL: https://github.com/apache/hadoop/pull/3239#issuecomment-888206357


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 16s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  81m 27s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3239/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3239 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 9072a5b349ac 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 40f1a6ab1e694dd8112483623a425300213291d0 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3239/2/testReport/ |
   | Max. process+thread count | 749 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3239/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

[jira] [Commented] (HADOOP-17612) Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0

2021-07-28 Thread Enrico Olivelli (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17388672#comment-17388672
 ] 

Enrico Olivelli commented on HADOOP-17612:
--

Curator 5.2.0 has been released, with ZooKeeper 3.6.3 support. ZooKeeper 3.7 should work well, but we do not have automated tests for it.
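
For reference, a bump of this kind is usually a small edit to the dependency-version properties in hadoop-project/pom.xml; the property names below are an assumption about that convention, not a verified patch:

```xml
<!-- Sketch only: assumes hadoop-project/pom.xml defines these version properties. -->
<properties>
  <zookeeper.version>3.6.3</zookeeper.version>
  <curator.version>5.2.0</curator.version>
</properties>
```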

> Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0
> ---
>
> Key: HADOOP-17612
> URL: https://issues.apache.org/jira/browse/HADOOP-17612
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We can bump Zookeeper version to 3.7.0 for trunk.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


