[jira] [Assigned] (HADOOP-17726) Replace Sets#newHashSet() and newTreeSet() with constructors directly

2022-05-24 Thread Samrat Deb (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samrat Deb reassigned HADOOP-17726:
---

Assignee: Samrat Deb

> Replace Sets#newHashSet() and newTreeSet() with constructors directly
> -
>
> Key: HADOOP-17726
> URL: https://issues.apache.org/jira/browse/HADOOP-17726
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Samrat Deb
>Priority: Major
>  Labels: beginner, beginner-friendly, newbie
>
> As per the guidelines provided by Guava for Sets#newHashSet() and 
> Sets#newTreeSet(), we should get rid of them and use new HashSet<>() and 
> new TreeSet<>() directly.
> Once HADOOP-17115, HADOOP-17721, HADOOP-17722 and HADOOP-17720 are fixed, 
> please feel free to take this up.
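The replacement the issue asks for is mechanical. A minimal sketch for illustration (class and variable names are hypothetical, not from the Hadoop patch):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class SetsReplacementSketch {
    public static void main(String[] args) {
        // Before (Guava): Set<String> s = Sets.newHashSet();
        // After: the plain JDK constructor with the diamond operator,
        // which has been just as concise since Java 7.
        Set<String> hashSet = new HashSet<>();
        hashSet.add("a");
        hashSet.add("b");

        // Likewise Sets.newTreeSet() becomes new TreeSet<>().
        Set<String> treeSet = new TreeSet<>(hashSet);

        System.out.println(hashSet.size());            // 2
        System.out.println(treeSet.iterator().next()); // a (sorted order)
    }
}
```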



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on pull request #3806: HDFS-16386.Reduce DataNode load when FsDatasetAsyncDiskService is working.

2022-05-24 Thread GitBox


ZanderXu commented on PR #3806:
URL: https://github.com/apache/hadoop/pull/3806#issuecomment-1136727763

   Thanks, and I will create a new PR to do it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jianghuazhu commented on pull request #3806: HDFS-16386.Reduce DataNode load when FsDatasetAsyncDiskService is working.

2022-05-24 Thread GitBox


jianghuazhu commented on PR #3806:
URL: https://github.com/apache/hadoop/pull/3806#issuecomment-1136708070

   @ZanderXu, nice to communicate with you.
   I suggest that the number of active threads here should be set according 
to the load capacity of the cluster.





[GitHub] [hadoop] hadoop-yetus commented on pull request #4341: YARN-10487. Support getQueueUserAcls, listReservations, getApplicationAttempts, getContainerReport, getContainers, getResourceTypeInfo A

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4341:
URL: https://github.com/apache/hadoop/pull/4341#issuecomment-1136702589

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:---:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  24m 13s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/4/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 49s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 102m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4341 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 8f271dd2ac65 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c4475d7f9f3752ed7484f1087e5291b390ed5542 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/4/testReport/ |
   | Max. process+thread count | 840 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 

[GitHub] [hadoop] zhengchenyu commented on pull request #4308: YARN-11148. In federation and security mode, nm recover may fail.

2022-05-24 Thread GitBox


zhengchenyu commented on PR #4308:
URL: https://github.com/apache/hadoop/pull/4308#issuecomment-1136701598

   > @zhengchenyu Thanks for your contribution. It makes sense at first glance. 
How about to add unit test to cover this case?
   
   I think we should apply https://issues.apache.org/jira/browse/YARN-6539 
first, and then I will add a UT, because if the YARN router does not support 
security mode, this PR is meaningless. (Note: in our cluster, YARN-6539 is 
applied.)
   
   I don't know why YARN-6539 has not been merged into trunk.





[GitHub] [hadoop] hadoop-yetus commented on pull request #4341: YARN-10487. Support getQueueUserAcls, listReservations, getApplicationAttempts, getContainerReport, getContainers, getResourceTypeInfo A

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4341:
URL: https://github.com/apache/hadoop/pull/4341#issuecomment-1136699843

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m  8s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 23s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/5/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 57s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  94m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4341 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux b09a6b352a05 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c4475d7f9f3752ed7484f1087e5291b390ed5542 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/5/testReport/ |
   | Max. process+thread count | 846 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 

[GitHub] [hadoop] ZanderXu commented on pull request #3806: HDFS-16386.Reduce DataNode load when FsDatasetAsyncDiskService is working.

2022-05-24 Thread GitBox


ZanderXu commented on PR #3806:
URL: https://github.com/apache/hadoop/pull/3806#issuecomment-1136695695

   Thanks @jianghuazhu for your comment.
   - I have a question: if the queue is unbounded, will the number of active 
threads in the ThreadPool ever be greater than the number of core threads?
   - I think we need to support dynamically adjusting the number of core 
threads, so that we can adapt in time to different loads and achieve the best 
result.
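For context on the first question: with `ThreadPoolExecutor`, an unbounded work queue means the pool never grows past `corePoolSize`, because threads beyond the core count are only created when the queue rejects an offer, which an unbounded `LinkedBlockingQueue` never does. A small demonstration (the pool sizes here are illustrative, not the values HDFS uses):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
    public static void main(String[] args) {
        // core=4, max=8, unbounded queue: the pool still tops out at 4 threads.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 8, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 20; i++) {
            // Block every task so all 20 submissions are in flight at once.
            pool.execute(() -> {
                try {
                    release.await();
                } catch (InterruptedException ignored) {
                }
            });
        }
        int active = pool.getPoolSize(); // 4: the 16 extra tasks just queue up
        System.out.println("pool size with 20 pending tasks: " + active);
        release.countDown();
        pool.shutdown();
    }
}
```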
   
   





[GitHub] [hadoop] hadoop-yetus commented on pull request #4216: HDFS-16555. rename mixlead method name in DistCpOptions

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4216:
URL: https://github.com/apache/hadoop/pull/4216#issuecomment-1136691451

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:---:|
   | +0 :ok: |  reexec  |  19m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  47m 16s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 165m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4216/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4216 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 7072f516dc64 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 68e501c931618a65d46a9eebc2b12ffd0dabadc5 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4216/6/testReport/ |
   | Max. process+thread count | 606 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4216/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on pull request #4341: YARN-10487. Support getQueueUserAcls, listReservations, getApplicationAttempts, getContainerReport, getContainers, getResourceTypeInfo A

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4341:
URL: https://github.com/apache/hadoop/pull/4341#issuecomment-1136684524

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:---:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  24m 36s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   2m 51s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/3/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  hadoop-yarn-server-router in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 102m 51s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.router.clientrm.TestFederationClientInterceptor |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4341 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux d6c2c698785b 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 699d7d12380b31f0742e1ec98159de66b68ed839 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/3/testReport/ |
   | Max. process+thread count | 1046 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 

[GitHub] [hadoop] ZanderXu opened a new pull request, #4353: HDFS-16593. Correct inaccurate BlocksRemoved metric on DataNode side

2022-05-24 Thread GitBox


ZanderXu opened a new pull request, #4353:
URL: https://github.com/apache/hadoop/pull/4353

   Correct inaccurate BlocksRemoved metric on DataNode side





[GitHub] [hadoop] jianghuazhu commented on pull request #3806: HDFS-16386.Reduce DataNode load when FsDatasetAsyncDiskService is working.

2022-05-24 Thread GitBox


jianghuazhu commented on PR #3806:
URL: https://github.com/apache/hadoop/pull/3806#issuecomment-1136659630

   Thanks @ZanderXu for following up.
   Here are some explanations:
   1. The main job of FsDatasetAsyncDiskService is to delete replica files, 
synchronously or asynchronously. The replica files to be deleted are all 
files on the local DataNode, and their number is limited. Although the thread 
pool uses an unbounded queue, the queue will not grow without bound, because 
it is continuously consumed. And these replicas have already been loaded into 
memory while the DataNode is running, so the probability of OOM here is very 
low.
   2. If a replica is deleted asynchronously, the thread pool does the work. 
Each disk corresponds to one thread pool, and each pool has at most 4 fixed 
threads; this limit is hard-coded. In our cluster, DataNodes have different 
numbers of disks: 12, 36, and 60 disks all exist. Taking a DataNode with 36 
or 60 disks as an example, during peak hours the DataNode needs to start a 
lot of threads. Adjusting the number of threads flexibly would reduce the 
workload of the DataNode.
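On point 2, `ThreadPoolExecutor` already supports resizing a live pool via `setCorePoolSize`/`setMaximumPoolSize`, so a per-disk pool could be made reconfigurable along these lines (a sketch only; the class and method names are hypothetical, and this is not what FsDatasetAsyncDiskService actually does today):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AdjustableDiskPool {
    // One pool per disk, 4 fixed threads to start, as described above.
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

    /** Resize the live pool, e.g. from a reconfiguration handler. */
    public void setThreads(int n) {
        // corePoolSize must never exceed maximumPoolSize, so the order of
        // the two setters depends on whether we are growing or shrinking.
        if (n > pool.getMaximumPoolSize()) {
            pool.setMaximumPoolSize(n);
            pool.setCorePoolSize(n);
        } else {
            pool.setCorePoolSize(n);
            pool.setMaximumPoolSize(n);
        }
    }

    public int currentThreads() {
        return pool.getCorePoolSize();
    }

    public static void main(String[] args) {
        AdjustableDiskPool p = new AdjustableDiskPool();
        p.setThreads(8);           // grow for peak hours
        int up = p.currentThreads();
        p.setThreads(2);           // shrink when the DataNode is busy elsewhere
        int down = p.currentThreads();
        System.out.println(up + " -> " + down);
        p.pool.shutdown();
    }
}
```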





[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4341: YARN-10487. Support getQueueUserAcls, listReservations, getApplicationAttempts, getContainerReport, getContainers, getResourceTyp

2022-05-24 Thread GitBox


slfan1989 commented on code in PR #4341:
URL: https://github.com/apache/hadoop/pull/4341#discussion_r881159169


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java:
##
@@ -435,13 +437,10 @@ public void testGetApplicationEmptyRequest()
   @Test
   public void testGetApplicationAttemptReport()
   throws YarnException, IOException, InterruptedException {
-LOG.info("Test FederationClientInterceptor: " +
-"Get ApplicationAttempt Report");
+LOG.info("Test FederationClientInterceptor: Get ApplicationAttempt 
Report.");
 
 ApplicationId appId =
-ApplicationId.newInstance(System.currentTimeMillis(), 1);
-ApplicationAttemptId appAttemptId =
-ApplicationAttemptId.newInstance(appId, 1);

Review Comment:
   ApplicationAttemptId needs to be queried from YARN and cannot be generated 
directly; it was found during testing that the query would fail otherwise.






[jira] [Work logged] (HADOOP-18107) Vectored IO support for large S3 files.

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18107?focusedWorklogId=774330=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774330
 ]

ASF GitHub Bot logged work on HADOOP-18107:
---

Author: ASF GitHub Bot
Created on: 25/May/22 02:07
Start Date: 25/May/22 02:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#issuecomment-1136634303

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ feature-vectored-io Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 56s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 52s |  |  feature-vectored-io 
passed  |
   | +1 :green_heart: |  compile  |  25m 12s |  |  feature-vectored-io passed 
with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  22m 22s |  |  feature-vectored-io passed 
with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 19s |  |  feature-vectored-io 
passed  |
   | +1 :green_heart: |  mvnsite  |   3m 21s |  |  feature-vectored-io passed  |
   | -1 :x: |  javadoc  |   1m 35s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4273/7/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in feature-vectored-io failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javadoc  |   1m  7s | 
[/branch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4273/7/artifact/out/branch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-aws in feature-vectored-io failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  spotbugs  |   4m 58s |  |  feature-vectored-io passed  
|
   | +1 :green_heart: |  shadedclient  |  23m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  2s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 40s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 40s |  |  
root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 1815 unchanged - 2 
fixed = 1815 total (was 1817)  |
   | +1 :green_heart: |  compile  |  21m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 36s |  |  
root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 0 new + 1690 unchanged - 
2 fixed = 1690 total (was 1692)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 15s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4273/7/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 3 new + 12 unchanged - 0 fixed = 15 total (was 
12)  |
   | +1 :green_heart: |  mvnsite  |   3m 24s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 37s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4273/7/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javadoc  |   1m 15s | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4273: HADOOP-18107 Adding scale test for vectored reads for large file

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#issuecomment-1136634303


[jira] [Work logged] (HADOOP-18107) Vectored IO support for large S3 files.

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18107?focusedWorklogId=774319&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774319
 ]

ASF GitHub Bot logged work on HADOOP-18107:
---

Author: ASF GitHub Bot
Created on: 25/May/22 01:13
Start Date: 25/May/22 01:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#issuecomment-1136593177

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ feature-vectored-io Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 28s |  |  feature-vectored-io 
passed  |
   | +1 :green_heart: |  compile  |  23m 55s |  |  feature-vectored-io passed 
with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  22m  9s |  |  feature-vectored-io passed 
with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   3m 59s |  |  feature-vectored-io 
passed  |
   | +1 :green_heart: |  mvnsite  |   3m 21s |  |  feature-vectored-io passed  |
   | -1 :x: |  javadoc  |   1m 39s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4273/6/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in feature-vectored-io failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javadoc  |   1m 40s | 
[/branch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4273/6/artifact/out/branch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-aws in feature-vectored-io failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  spotbugs  |   4m 45s |  |  feature-vectored-io passed  
|
   | +1 :green_heart: |  shadedclient  |  23m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 47s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 47s |  |  
root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 1822 unchanged - 2 
fixed = 1822 total (was 1824)  |
   | +1 :green_heart: |  compile  |  21m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 31s |  |  
root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 0 new + 1701 unchanged - 
2 fixed = 1701 total (was 1703)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 41s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4273/6/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 3 new + 12 unchanged - 0 fixed = 15 total (was 
12)  |
   | +1 :green_heart: |  mvnsite  |   3m 35s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 31s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4273/6/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javadoc  |   1m  7s | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4273: HADOOP-18107 Adding scale test for vectored reads for large file

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#issuecomment-1136593177


[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4341: YARN-10487. Support getQueueUserAcls, listReservations, getApplicationAttempts, getContainerReport, getContainers, getResourceTyp

2022-05-24 Thread GitBox


slfan1989 commented on code in PR #4341:
URL: https://github.com/apache/hadoop/pull/4341#discussion_r881106167


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterMetrics.java:
##
@@ -238,6 +255,16 @@ public long getNumSucceededAppAttemptReportRetrieved(){
 return totalSucceededAppAttemptReportRetrieved.lastStat().numSamples();
   }
 
+  @VisibleForTesting
+  public long getNumSucceededGetQueueUserAclsRetrieved(){

Review Comment:
   Thanks for the suggestion, I will add a test.
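For a counter getter like this, the test being requested usually just drives the success path once and asserts the getter advanced. A minimal sketch of that pattern — the class and method names below are hypothetical stand-ins, not the real `RouterMetrics` (which backs its counters with Hadoop's `MutableRate`):

```java
// Hedged sketch of the metrics test goiri asked for: increment the success
// counter once and assert the @VisibleForTesting getter reflects it.
public class RouterMetricsCounterSketch {
    private long succeededGetQueueUserAcls;

    // Stand-in for the method the interceptor calls on a successful RPC.
    public void succeededGetQueueUserAclsRetrieved(long durationMillis) {
        succeededGetQueueUserAcls++;
    }

    // Stand-in for the getter under review.
    public long getNumSucceededGetQueueUserAclsRetrieved() {
        return succeededGetQueueUserAcls;
    }

    public static void main(String[] args) {
        RouterMetricsCounterSketch metrics = new RouterMetricsCounterSketch();
        long before = metrics.getNumSucceededGetQueueUserAclsRetrieved();
        metrics.succeededGetQueueUserAclsRetrieved(150);
        if (metrics.getNumSucceededGetQueueUserAclsRetrieved() != before + 1) {
            throw new AssertionError("success counter did not advance");
        }
    }
}
```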



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4341: YARN-10487. Support getQueueUserAcls, listReservations, getApplicationAttempts, getContainerReport, getContainers, getResourceTyp

2022-05-24 Thread GitBox


slfan1989 commented on code in PR #4341:
URL: https://github.com/apache/hadoop/pull/4341#discussion_r881103578


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java:
##
@@ -295,5 +297,25 @@ public static GetClusterNodeLabelsResponse 
mergeClusterNodeLabelsResponse(
 nodeLabelsResponse.setNodeLabelList(new ArrayList<>(nodeLabelsList));
 return nodeLabelsResponse;
   }
+
+  /**
+   * Merges a list of GetQueueUserAclsInfoResponse.
+   *
+   * @param responses a list of GetQueueUserAclsInfoResponse to merge.
+   * @return the merged GetQueueUserAclsInfoResponse.
+   */
+  public static GetQueueUserAclsInfoResponse mergeQueueUserAcls(

Review Comment:
   Hi @goiri, thanks for reviewing the code; I will add a test.
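The merge under review fans a request out to every subcluster and concatenates the per-subcluster ACL lists into one response. A minimal sketch of that pattern — plain `String` entries stand in for YARN's `QueueUserACLInfo`, and the real method would wrap the result in `GetQueueUserAclsInfoResponse`, so all names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of mergeQueueUserAcls: keep every subcluster's ACL entries
// by concatenating the per-subcluster lists into a single merged list.
public class QueueUserAclsMergeSketch {
    public static List<String> mergeQueueUserAcls(List<List<String>> responses) {
        List<String> merged = new ArrayList<>();
        for (List<String> subclusterAcls : responses) {
            merged.addAll(subclusterAcls); // no entry from any subcluster is dropped
        }
        return merged;
    }

    public static void main(String[] args) {
        List<List<String>> responses = List.of(
            List.of("root.a: SUBMIT_APPLICATIONS"),
            List.of("root.b: ADMINISTER_QUEUE"));
        if (mergeQueueUserAcls(responses).size() != 2) {
            throw new AssertionError("merge lost entries");
        }
    }
}
```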



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] GuoPhilipse commented on a diff in pull request #4216: HDFS-16555. rename mixlead method name in DistCpOptions

2022-05-24 Thread GitBox


GuoPhilipse commented on code in PR #4216:
URL: https://github.com/apache/hadoop/pull/4216#discussion_r881089014


##
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java:
##
@@ -684,11 +684,23 @@ public Builder withAppend(boolean newAppend) {
   return this;
 }
 
+/**
+ * Whether to skip CRC checks when copying.
+ * @param newSkipCRC whether to skip the CRC check
+ * @return this Builder
+ * @deprecated Use {@link #withSkipCRC(boolean)} instead.
+ */
+@Deprecated
 public Builder withCRC(boolean newSkipCRC) {
   this.skipCRC = newSkipCRC;
   return this;
 }
 
+public Builder withSkipCRC(boolean newSkipCRC) {

Review Comment:
   > copy the javadocs from above, now you've written them
   
   Thanks @steveloughran for the reminder; I have just updated it.
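The rename pattern in this patch — add the clearly named method and keep the old, misleadingly named one as a deprecated alias that delegates — can be sketched as follows. The class name is a hypothetical stand-in for the real `DistCpOptions.Builder`:

```java
// Hedged sketch of the withCRC -> withSkipCRC rename: the deprecated alias
// delegates to the new method so the two can never drift apart in behaviour.
public class SkipCrcBuilderSketch {
    private boolean skipCRC;

    /**
     * Whether to skip CRC checks during the copy.
     * @param newSkipCRC true to skip CRC verification
     * @return this builder
     */
    public SkipCrcBuilderSketch withSkipCRC(boolean newSkipCRC) {
        this.skipCRC = newSkipCRC;
        return this;
    }

    /** @deprecated Use {@link #withSkipCRC(boolean)} instead. */
    @Deprecated
    public SkipCrcBuilderSketch withCRC(boolean newSkipCRC) {
        return withSkipCRC(newSkipCRC); // delegate, do not duplicate the assignment
    }

    public boolean isSkipCRC() {
        return skipCRC;
    }

    public static void main(String[] args) {
        if (!new SkipCrcBuilderSketch().withCRC(true).isSkipCRC()) {
            throw new AssertionError("deprecated alias must delegate");
        }
    }
}
```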



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a diff in pull request #4341: YARN-10487. Support getQueueUserAcls, listReservations, getApplicationAttempts, getContainerReport, getContainers, getResourceTypeInf

2022-05-24 Thread GitBox


goiri commented on code in PR #4341:
URL: https://github.com/apache/hadoop/pull/4341#discussion_r881077343


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java:
##
@@ -295,5 +297,25 @@ public static GetClusterNodeLabelsResponse 
mergeClusterNodeLabelsResponse(
 nodeLabelsResponse.setNodeLabelList(new ArrayList<>(nodeLabelsList));
 return nodeLabelsResponse;
   }
+
+  /**
+   * Merges a list of GetQueueUserAclsInfoResponse.
+   *
+   * @param responses a list of GetQueueUserAclsInfoResponse to merge.
+   * @return the merged GetQueueUserAclsInfoResponse.
+   */
+  public static GetQueueUserAclsInfoResponse mergeQueueUserAcls(

Review Comment:
   Can we add some test here?



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterMetrics.java:
##
@@ -238,6 +255,16 @@ public long getNumSucceededAppAttemptReportRetrieved(){
 return totalSucceededAppAttemptReportRetrieved.lastStat().numSamples();
   }
 
+  @VisibleForTesting
+  public long getNumSucceededGetQueueUserAclsRetrieved(){

Review Comment:
   Add tests



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri merged pull request #4336: YARN-11137. Improve log message in FederationClientInterceptor

2022-05-24 Thread GitBox


goiri merged PR #4336:
URL: https://github.com/apache/hadoop/pull/4336


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4336: YARN-11137. Improve log message in FederationClientInterceptor

2022-05-24 Thread GitBox


slfan1989 commented on PR #4336:
URL: https://github.com/apache/hadoop/pull/4336#issuecomment-1136480996

   Hi @goiri @tomscut, please help review the code. Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4347: HDFS-16586. Purge FsDatasetAsyncDiskService threadgroup; it causes BP…

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4347:
URL: https://github.com/apache/hadoop/pull/4347#issuecomment-1136471915

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 10s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 59s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 41s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  28m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 219m 51s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4347/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 333m 19s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithShortCircuitRead |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4347/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4347 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 44f8f8cbf6c6 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 9b929f5863b9789ca429ee1cf3d7f44d9b349991 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4347/3/testReport/ |
   | Max. process+thread count | 2321 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4347/3/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18255) fsdatainputstreambuilder.md refers to hadoop 3.3.3, when it shouldn't

2022-05-24 Thread Ashutosh Gupta (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Gupta reassigned HADOOP-18255:
---

Assignee: Ashutosh Gupta

> fsdatainputstreambuilder.md refers to hadoop 3.3.3, when it shouldn't
> -
>
> Key: HADOOP-18255
> URL: https://issues.apache.org/jira/browse/HADOOP-18255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Ashutosh Gupta
>Priority: Minor
>
> fsdatainputstreambuilder.md refers to hadoop 3.3.3, when it means whatever 
> ships off hadoop branch-3.3



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4349: HDFS-16590. Fix Junit Test Deprecated assertThat

2022-05-24 Thread GitBox


slfan1989 commented on PR #4349:
URL: https://github.com/apache/hadoop/pull/4349#issuecomment-1136477439

   I ran this JUnit test and it passed.
   
   ```
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   [INFO] Running org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
61.054 s - in org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
   [INFO] 
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
   ```
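The deprecation this PR's title points at is JUnit 4.13 marking `org.junit.Assert.assertThat` deprecated; the usual fix is a one-line import swap to Hamcrest's own `MatcherAssert`, with call sites left unchanged. A hedged sketch (the `checkEquals` helper below only mimics what `MatcherAssert.assertThat` does, to keep the example dependency-free):

```java
// Typical migration, as an import swap:
//   - import static org.junit.Assert.assertThat;      // deprecated in 4.13
//   + import static org.hamcrest.MatcherAssert.assertThat;
// Call sites such as assertThat(actual, is(expected)) stay unchanged.
public class AssertThatMigrationSketch {
    // Stand-in for MatcherAssert.assertThat: compare and fail with a message.
    static void checkEquals(Object expected, Object actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }

    public static void main(String[] args) {
        checkEquals(3, 1 + 2); // passes silently, like assertThat(1 + 2, is(3))
    }
}
```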
   
   I found the following information in the test report
   ```
   org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
   ExecutionException The forked VM terminated without properly saying goodbye. 
VM crash or System.exit called?
   Command was /bin/sh -c cd 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs
 && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xmx2048m 
-XX:+HeapDumpOnOutOfMemoryError -DminiClusterDedicatedDirs=true -jar 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter4063913185211432001.jar
 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire
 2022-05-24T06-25-53_745-jvmRun1 surefire5817288024834392866tmp 
surefire_3427099837231563976684tmp
   Error occurred in starting fork, check output in log
   Process Exit Code: 1
   Crashed tests:
   org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter
   ExecutionException The forked VM terminated without properly saying goodbye. 
VM crash or System.exit called?
   Command was /bin/sh -c cd 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs
 && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xmx2048m 
-XX:+HeapDumpOnOutOfMemoryError -DminiClusterDedicatedDirs=true -jar 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter8070136178396967706.jar
 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire
 2022-05-24T06-25-53_745-jvmRun1 surefire722018860489712113tmp 
surefire_4633368331952091928634tmp
   Error occurred in starting fork, check output in log
   Process Exit Code: 1
   Crashed tests:
   org.apache.hadoop.hdfs.server.blockmanagement.TestSequentialBlockId
   ExecutionException The forked VM terminated without properly saying goodbye. 
VM crash or System.exit called?
   Command was /bin/sh -c cd 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs
 && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xmx2048m 
-XX:+HeapDumpOnOutOfMemoryError -DminiClusterDedicatedDirs=true -jar 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter4034957644830842693.jar
 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire
 2022-05-24T06-25-53_745-jvmRun2 surefire8748449737419433268tmp 
surefire_5939041876649571544204tmp
   Error occurred in starting fork, check output in log
   Process Exit Code: 1
   Crashed tests:
   org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal
   ExecutionException The forked VM terminated without properly saying goodbye. 
VM crash or System.exit called?
   Command was /bin/sh -c cd 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs
 && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xmx2048m 
-XX:+HeapDumpOnOutOfMemoryError -DminiClusterDedicatedDirs=true -jar 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter5086311145150655861.jar
 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire
 2022-05-24T06-25-53_745-jvmRun1 surefire2732095862261680658tmp 
surefire_668579198910796942474tmp
   Error occurred in starting fork, check output in log
   Process Exit Code: 1
   Crashed tests:
   org.apache.hadoop.hdfs.TestAclsEndToEnd
   org.apache.maven.surefire.booter.SurefireBooterForkException: 
ExecutionException The forked VM terminated without properly saying goodbye. VM 
crash or System.exit called?
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: 

[jira] [Work logged] (HADOOP-18242) ABFS Rename Failure when tracking metadata is in incomplete state

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18242?focusedWorklogId=774253&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774253
 ]

ASF GitHub Bot logged work on HADOOP-18242:
---

Author: ASF GitHub Bot
Created on: 24/May/22 21:59
Start Date: 24/May/22 21:59
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on code in PR #4331:
URL: https://github.com/apache/hadoop/pull/4331#discussion_r880982514


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemRename.java:
##
@@ -167,4 +169,30 @@ public void testPosixRenameDirectory() throws Exception {
 new Path(testDir2 + "/test1/test2/test3"));
   }
 
+  @Test
+  public void testRenameWithNoDestinationParentDir() throws Exception {

Review Comment:
   add a similar test case for the resilient rename api, here or in 
ITestAbfsManifestStoreOperations



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -538,6 +544,23 @@ public Pair renamePath(
 if (!op.hasResult()) {
   throw e;
 }
+

Review Comment:
   have renamePath return a struct rather than a pair, with one of the fields 
being the "retried for metadata issue" flag. this can be passed up to the 
private resilient rename interface, and then to the manifest committer. it 
could then include this in its statistic reports
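A minimal sketch of that suggestion (all names here are hypothetical illustrations, not the actual AbfsClient API; the real field and type names would be chosen in the patch):

```java
// Hypothetical sketch: replace the Pair return with a small result type so
// the "recovered from incomplete metadata" flag is named, and can be passed
// up to the resilient rename interface and counted in committer statistics.
public class RenameResultSketch {

  /** Immutable result of a rename attempt. */
  public static final class RenameResult {
    private final boolean renameRecovered;

    public RenameResult(boolean renameRecovered) {
      this.renameRecovered = renameRecovered;
    }

    /** Did the rename only succeed after a metadata-recovery retry? */
    public boolean isRenameRecovered() {
      return renameRecovered;
    }
  }

  // A caller (e.g. a committer building statistics) reads the flag by name
  // rather than remembering which side of a Pair it was stored in.
  static long countRecovered(RenameResult... results) {
    long recovered = 0;
    for (RenameResult r : results) {
      if (r.isRenameRecovered()) {
        recovered++;
      }
    }
    return recovered;
  }

  public static void main(String[] args) {
    System.out.println(
        countRecovered(new RenameResult(true), new RenameResult(false))); // 1
  }
}
```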





Issue Time Tracking
---

Worklog Id: (was: 774253)
Time Spent: 50m  (was: 40m)

> ABFS Rename Failure when tracking metadata is in incomplete state
> -
>
> Key: HADOOP-18242
> URL: https://issues.apache.org/jira/browse/HADOOP-18242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> If a node in the datacenter crashes while processing an operation, 
> occasionally it can leave the Storage-internal blob tracking metadata in an 
> incomplete state.  We expect this to happen occasionally, and so all API’s 
> are designed in such a way that if this incomplete state is observed on a 
> blob, the situation is resolved before the current operation proceeds.  
> However, this incident has exposed a bug specifically with the Rename API, 
> where the incomplete state fails to resolve, leading to this incorrect 
> failure.  As a temporary mitigation, if any other operation is performed on 
> this blob – GetBlobProperties, GetBlob, GetFileProperties, SetFileProperties, 
> etc – it should resolve the incomplete state, and rename will no longer hit 
> this issue.
> StackTrace:
> {code:java}
> 2022-03-22 17:52:19,789 DEBUG [regionserver/euwukwlss-hg50:16020.logRoller] 
> services.AbfsClient: HttpRequest: 
> 404,RenameDestinationParentPathNotFound,cid=ef5cbf0f-5d4a-4630-8a59-3d559077fc24,rid=35fef164-101f-000b-1b15-3ed81800,sent=0,recv=212,PUT,https://euwqdaotdfdls03.dfs.core.windows.net/eykbssc/apps/hbase/data/oldWALs/euwukwlss-hg50.tdf.qa%252C16020%252C1647949929877.1647967939315?timeout=90
>{code}






[GitHub] [hadoop] steveloughran commented on a diff in pull request #4216: HDFS-16555. rename mixlead method name in DistCpOptions

2022-05-24 Thread GitBox


steveloughran commented on code in PR #4216:
URL: https://github.com/apache/hadoop/pull/4216#discussion_r880984625


##
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java:
##
@@ -684,11 +684,23 @@ public Builder withAppend(boolean newAppend) {
   return this;
 }
 
+/**
+ * whether builder with crc.
+ * @param newSkipCRC whether to skip crc check
+ * @return  Builder object whether to skip crc check
+ * @deprecated Use {@link #withSkipCRC(boolean)} instead.
+ */
+@Deprecated
 public Builder withCRC(boolean newSkipCRC) {
   this.skipCRC = newSkipCRC;
   return this;
 }
 
+public Builder withSkipCRC(boolean newSkipCRC) {

Review Comment:
   copy the javadocs from above, now you've written them
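For illustration, the deprecated-alias pattern being reviewed can be sketched like this (a hypothetical builder, not the actual DistCpOptions code):

```java
// Hypothetical minimal builder: the old withCRC(...) name remains as a
// deprecated alias that delegates to the better-named withSkipCRC(...),
// so both entry points share one implementation and one set of javadocs.
public class SkipCrcBuilder {
  private boolean skipCRC;

  /**
   * Whether to skip CRC checks between source and target paths.
   * @param newSkipCRC whether to skip the CRC check
   * @return this builder, for chaining
   * @deprecated Use {@link #withSkipCRC(boolean)} instead.
   */
  @Deprecated
  public SkipCrcBuilder withCRC(boolean newSkipCRC) {
    // Delegate so behaviour stays identical across both names.
    return withSkipCRC(newSkipCRC);
  }

  /**
   * Whether to skip CRC checks between source and target paths.
   * @param newSkipCRC whether to skip the CRC check
   * @return this builder, for chaining
   */
  public SkipCrcBuilder withSkipCRC(boolean newSkipCRC) {
    this.skipCRC = newSkipCRC;
    return this;
  }

  public boolean isSkipCRC() {
    return skipCRC;
  }

  public static void main(String[] args) {
    System.out.println(new SkipCrcBuilder().withSkipCRC(true).isSkipCRC()); // true
  }
}
```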






[GitHub] [hadoop] steveloughran commented on a diff in pull request #4331: HADOOP-18242. ABFS Rename Failure when tracking metadata is in an incomplete state

2022-05-24 Thread GitBox


steveloughran commented on code in PR #4331:
URL: https://github.com/apache/hadoop/pull/4331#discussion_r880982514


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemRename.java:
##
@@ -167,4 +169,30 @@ public void testPosixRenameDirectory() throws Exception {
 new Path(testDir2 + "/test1/test2/test3"));
   }
 
+  @Test
+  public void testRenameWithNoDestinationParentDir() throws Exception {

Review Comment:
   add a similar test case for the resilient rename api, here or in 
ITestAbfsManifestStoreOperations



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -538,6 +544,23 @@ public Pair renamePath(
 if (!op.hasResult()) {
   throw e;
 }
+

Review Comment:
   have renamePath return a struct rather than a pair, with one of the fields 
being the "retried for metadata issue" flag. this can be passed up to the 
private resilient rename interface, and then to the manifest committer. it 
could then include this in its statistic reports






[GitHub] [hadoop] hadoop-yetus commented on pull request #4332: HDFS-16583. DatanodeAdminDefaultMonitor can get stuck in an infinite loop holding the write lock

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4332:
URL: https://github.com/apache/hadoop/pull/4332#issuecomment-1136461239

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 46s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 377m 24s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4332/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 493m 51s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestReplaceDatanodeFailureReplication |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4332/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4332 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 87c9decf67e9 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2fe9c5ebb481a05cf95df0da4e8bc115bce68959 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4332/3/testReport/ |
   | Max. process+thread count | 1911 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4332/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

[GitHub] [hadoop] hadoop-yetus commented on pull request #4348: HDFS-16586. Purge FsDatasetAsyncDiskService threadgroup; it causes BP…

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4348:
URL: https://github.com/apache/hadoop/pull/4348#issuecomment-1136459189

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m  7s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 49s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  19m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 224m  3s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4348/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 315m 20s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4348/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4348 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 8941ba45faa6 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / a622237ea99ca1bcd38f3fb118fc85e04244f9f7 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4348/3/testReport/ |
   | Max. process+thread count | 1908 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4348/3/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Work logged] (HADOOP-18107) Vectored IO support for large S3 files.

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18107?focusedWorklogId=774245&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774245
 ]

ASF GitHub Bot logged work on HADOOP-18107:
---

Author: ASF GitHub Bot
Created on: 24/May/22 21:39
Start Date: 24/May/22 21:39
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#issuecomment-1136457258

   
   > (btw, suggest another rebase to get rid of those javadoc issues)
   
   Will do the rebase of the feature branch in one go after the smaller patches 
are merged.
   
   




Issue Time Tracking
---

Worklog Id: (was: 774245)
Time Spent: 3h  (was: 2h 50m)

> Vectored IO support for large S3 files. 
> 
>
> Key: HADOOP-18107
> URL: https://issues.apache.org/jira/browse/HADOOP-18107
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> This effort would mostly be adding more tests for large files under scale 
> tests and see if any new issue surfaces. 






[GitHub] [hadoop] mukund-thakur commented on pull request #4273: HADOOP-18107 Adding scale test for vectored reads for large file

2022-05-24 Thread GitBox


mukund-thakur commented on PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#issuecomment-1136457258

   
   > (btw, suggest another rebase to get rid of those javadoc issues)
   
   Will do the rebase of the feature branch in one go after the smaller patches 
are merged.
   
   





[jira] [Commented] (HADOOP-17726) Replace Sets#newHashSet() and newTreeSet() with constructors directly

2022-05-24 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17541718#comment-17541718
 ] 

Viraj Jasani commented on HADOOP-17726:
---

Sure [~samrat007] please go ahead, no concerns from my side.

> Replace Sets#newHashSet() and newTreeSet() with constructors directly
> -
>
> Key: HADOOP-17726
> URL: https://issues.apache.org/jira/browse/HADOOP-17726
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Priority: Major
>  Labels: beginner, beginner-friendly, newbie
>
> As per the guidelines provided by Guava Sets#newHashSet() and 
> Sets#newTreeSet(), we should get rid of them and use newHashSet<>() and 
> newTreeSet<>() directly.
> Once HADOOP-17115, HADOOP-17721, HADOOP-17722 and HADOOP-17720 are fixed, 
> please feel free to take this up.
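For illustration, the change this ticket asks for is mechanical; a minimal before/after sketch (the class below is hypothetical, not actual Hadoop code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

// The Guava-style factory helpers predate Java 7's diamond operator; plain
// constructors now express the same thing with no helper class involved.
public class SetsMigration {

  static Set<String> sortedCopy(Set<String> input) {
    // Before: Set<String> sorted = Sets.newTreeSet(); sorted.addAll(input);
    // After:  the TreeSet constructor with the diamond operator.
    Set<String> sorted = new TreeSet<>(input);
    return sorted;
  }

  public static void main(String[] args) {
    // Before: Set<String> names = Sets.newHashSet();
    Set<String> names = new HashSet<>(Arrays.asList("b", "a", "b"));
    System.out.println(sortedCopy(names)); // [a, b]
  }
}
```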






[jira] [Created] (HADOOP-18255) fsdatainputstreambuilder.md refers to hadoop 3.3.3, when it shouldn't

2022-05-24 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18255:
---

 Summary: fsdatainputstreambuilder.md refers to hadoop 3.3.3, when 
it shouldn't
 Key: HADOOP-18255
 URL: https://issues.apache.org/jira/browse/HADOOP-18255
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.3.4
Reporter: Steve Loughran


fsdatainputstreambuilder.md refers to hadoop 3.3.3, when it means whatever 
ships off hadoop branch-3.3






[jira] [Work logged] (HADOOP-18249) Fix getUri() in HttpRequest has been deprecated

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18249?focusedWorklogId=774233&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774233
 ]

ASF GitHub Bot logged work on HADOOP-18249:
---

Author: ASF GitHub Bot
Created on: 24/May/22 21:15
Start Date: 24/May/22 21:15
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4335:
URL: https://github.com/apache/hadoop/pull/4335#issuecomment-1136439304

   @hemanthboyina Thank you very much for your help reviewing the code!




Issue Time Tracking
---

Worklog Id: (was: 774233)
Time Spent: 1h 20m  (was: 1h 10m)

> Fix getUri() in HttpRequest has been deprecated
> ---
>
> Key: HADOOP-18249
> URL: https://issues.apache.org/jira/browse/HADOOP-18249
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: getUri() deprecated -1.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When reading the code, I found that the method used has been deprecated due 
> to the upgrade of the netty component. The main methods are as follows:
> io.netty.handler.codec.http#HttpRequest
> @Deprecated
> HttpMethod getMethod();
> Deprecated. Use method() instead.
> @Deprecated
> String getUri()
> Deprecated. Use uri() instead.
> io.netty.handler.codec.http#DefaultHttpResponse
> @Deprecated
> public HttpResponseStatus getStatus()
> {         return this.status(); }
> Deprecated. Use status()  instead.
>  
> WebHdfsHandler.java:125:35:[deprecation] getUri() in HttpRequest has been 
> deprecated
> HostRestrictingAuthorizationFilterHandler.java:200:27:[deprecation] getUri() 
> in HttpRequest has been deprecated
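As an illustration of the migration, here is a miniature stand-in for the renamed-accessor pattern (this models netty 4's deprecation style but is not netty itself; the interface below is written purely for illustration):

```java
// Minimal local model of the accessor renaming described above
// (getUri() -> uri()): the old name survives only as a deprecated alias
// that delegates to the new one, so migrated callers behave identically.
public class UriMigration {

  interface HttpRequestLike {
    String uri(); // preferred accessor

    /** @deprecated use {@link #uri()} instead. */
    @Deprecated
    default String getUri() {
      return uri(); // delegating alias kept for source compatibility
    }
  }

  // Migrated caller: req.getUri() becomes req.uri().
  static String requestPath(HttpRequestLike req) {
    return req.uri();
  }

  public static void main(String[] args) {
    HttpRequestLike req = () -> "/webhdfs/v1/tmp?op=OPEN";
    System.out.println(requestPath(req)); // /webhdfs/v1/tmp?op=OPEN
  }
}
```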






[GitHub] [hadoop] slfan1989 commented on pull request #4335: HADOOP-18249. Fix getUri() in HttpRequest has been deprecated.

2022-05-24 Thread GitBox


slfan1989 commented on PR #4335:
URL: https://github.com/apache/hadoop/pull/4335#issuecomment-1136439304

   @hemanthboyina Thank you very much for your help reviewing the code!





[jira] [Work logged] (HADOOP-18105) Implement a variant of ElasticByteBufferPool which uses weak references for garbage collection.

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18105?focusedWorklogId=774231&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774231
 ]

ASF GitHub Bot logged work on HADOOP-18105:
---

Author: ASF GitHub Bot
Created on: 24/May/22 21:10
Start Date: 24/May/22 21:10
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on code in PR #4263:
URL: https://github.com/apache/hadoop/pull/4263#discussion_r880947333


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WeakReferencedElasticByteBufferPool.java:
##
@@ -0,0 +1,126 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io;
+
+import java.lang.ref.WeakReference;
+import java.nio.ByteBuffer;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+
+/**
+ * Buffer pool implementation which uses weak references to store
+ * buffers in the pool, such that they are garbage collected when
+ * there are no references to the buffer during a gc run. This is
+ * important as direct buffers don't get garbage collected automatically
+ * during a gc run as they are not stored on heap memory.
+ * Also the buffers are stored in a tree map which helps in returning
+ * smallest buffer whose size is just greater than requested length.
+ * This is a thread safe implementation.
+ */
+public final class WeakReferencedElasticByteBufferPool extends 
ElasticByteBufferPool {
+
+  private final TreeMap> directBuffers =

Review Comment:
   1. add javadocs here and below, mention use must be in synchronized blocks
   2. field should be of type Map<>, unless it has to be explicitly a tree map
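For reference, the weak-reference pooling idea under review can be sketched in miniature (single-threaded and illustrative only; the real class also splits heap vs direct pools, handles capacity collisions, and synchronizes access):

```java
import java.lang.ref.WeakReference;
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.TreeMap;

// Buffers are held only through WeakReference, so the GC may reclaim them
// once callers drop their references; a TreeMap keyed by capacity lets
// getBuffer() return the smallest pooled buffer that is at least as large
// as the request, via ceilingEntry().
public class WeakBufferPoolSketch {

  private final TreeMap<Integer, WeakReference<ByteBuffer>> pool = new TreeMap<>();

  public ByteBuffer getBuffer(int length) {
    Map.Entry<Integer, WeakReference<ByteBuffer>> e = pool.ceilingEntry(length);
    if (e != null) {
      pool.remove(e.getKey());
      ByteBuffer buffer = e.getValue().get();
      if (buffer != null) { // the referent may already have been collected
        buffer.clear();
        return buffer;
      }
    }
    return ByteBuffer.allocate(length); // pool miss: allocate fresh
  }

  public void putBuffer(ByteBuffer buffer) {
    pool.put(buffer.capacity(), new WeakReference<>(buffer));
  }

  public static void main(String[] args) {
    WeakBufferPoolSketch pool = new WeakBufferPoolSketch();
    ByteBuffer pooled = ByteBuffer.allocate(1024); // strong ref kept here
    pool.putBuffer(pooled);
    System.out.println(pool.getBuffer(512).capacity()); // 1024
  }
}
```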



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ByteBufferPool.java:
##
@@ -45,4 +45,6 @@ public interface ByteBufferPool {
* @param buffera direct bytebuffer
*/
   void putBuffer(ByteBuffer buffer);
+
+  default void release() { }

Review Comment:
   javadoc?



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WeakReferencedElasticByteBufferPool.java:
##
@@ -0,0 +1,123 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io;
+
+import java.lang.ref.WeakReference;
+import java.nio.ByteBuffer;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+
+/**
+ * Buffer pool implementation which uses weak references to store
+ * buffers in the pool, such that they are garbage collected when
+ * there are no references to the buffer during a gc run. This is
+ * important as direct buffers don't get garbage collected automatically
+ * during a gc run as they are not stored on heap memory.
+ * Also the buffers are stored in a tree map which helps in returning
+ * smallest buffer whose size is just greater than requested length.
+ * This is a thread safe implementation.
+ */
+public final class WeakReferencedElasticByteBufferPool extends
+    ElasticByteBufferPool {
+
+  private final TreeMap<Integer, WeakReference<ByteBuffer>> directBuffers =
+      new TreeMap<>();
+
+  private final TreeMap<Integer, WeakReference<ByteBuffer>> heapBuffers =
+      new TreeMap<>();
+
+  private TreeMap<Integer, WeakReference<ByteBuffer>> getBufferTree(boolean isDirect) {
+    return isDirect ? directBuffers : heapBuffers;
+  }
+
+  /**
+   * {@inheritDoc}
+   *
+   * 
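The class quoted above keeps WeakReference-wrapped buffers in a TreeMap keyed by capacity, so the GC can reclaim idle direct buffers and a ceiling lookup returns the smallest buffer at least as large as the request. A minimal standalone sketch of that idea (the class and method names here are illustrative, not the Hadoop API; the real pool also separates direct and heap buffers):

```java
import java.lang.ref.WeakReference;
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch only -- not org.apache.hadoop.io.WeakReferencedElasticByteBufferPool.
public class WeakBufferPoolSketch {
  // Pool keyed by capacity; WeakReference lets the GC reclaim idle buffers.
  private final TreeMap<Integer, WeakReference<ByteBuffer>> pool = new TreeMap<>();

  /** Return the smallest pooled buffer with capacity >= length, else allocate. */
  public synchronized ByteBuffer getBuffer(int length) {
    Map.Entry<Integer, WeakReference<ByteBuffer>> entry = pool.ceilingEntry(length);
    if (entry != null) {
      pool.remove(entry.getKey());
      ByteBuffer buf = entry.getValue().get(); // null if already collected
      if (buf != null) {
        buf.clear();
        return buf;
      }
    }
    return ByteBuffer.allocate(length);
  }

  /** Return a buffer to the pool, keyed under its capacity. */
  public synchronized void putBuffer(ByteBuffer buffer) {
    pool.put(buffer.capacity(), new WeakReference<>(buffer));
  }

  public static void main(String[] args) {
    WeakBufferPoolSketch p = new WeakBufferPoolSketch();
    ByteBuffer b = ByteBuffer.allocate(64);
    p.putBuffer(b);
    System.out.println(p.getBuffer(32) == b);       // ceiling lookup reuses the 64-byte buffer
    System.out.println(p.getBuffer(32).capacity()); // pool now empty: a fresh 32-byte buffer
  }
}
```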

[GitHub] [hadoop] steveloughran commented on a diff in pull request #4263: HADOOP-18105 Implement buffer pooling with weak references

2022-05-24 Thread GitBox


steveloughran commented on code in PR #4263:
URL: https://github.com/apache/hadoop/pull/4263#discussion_r880947333


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WeakReferencedElasticByteBufferPool.java:
##
@@ -0,0 +1,126 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io;
+
+import java.lang.ref.WeakReference;
+import java.nio.ByteBuffer;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+
+/**
+ * Buffer pool implementation which uses weak references to store
+ * buffers in the pool, such that they are garbage collected when
+ * there are no references to the buffer during a gc run. This is
+ * important as direct buffers don't get garbage collected automatically
+ * during a gc run as they are not stored on heap memory.
+ * Also the buffers are stored in a tree map which helps in returning
+ * smallest buffer whose size is just greater than requested length.
+ * This is a thread safe implementation.
+ */
+public final class WeakReferencedElasticByteBufferPool extends
+    ElasticByteBufferPool {
+
+  private final TreeMap<Integer, WeakReference<ByteBuffer>> directBuffers =

Review Comment:
   1. add javadocs here and below, mention use must be in synchronized blocks
   2. field should be of type Map<>, unless it has to be explicitly a tree map



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ByteBufferPool.java:
##
@@ -45,4 +45,6 @@ public interface ByteBufferPool {
* @param buffer a direct bytebuffer
*/
   void putBuffer(ByteBuffer buffer);
+
+  default void release() { }

Review Comment:
   javadoc?



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WeakReferencedElasticByteBufferPool.java:
##
@@ -0,0 +1,123 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io;
+
+import java.lang.ref.WeakReference;
+import java.nio.ByteBuffer;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+
+/**
+ * Buffer pool implementation which uses weak references to store
+ * buffers in the pool, such that they are garbage collected when
+ * there are no references to the buffer during a gc run. This is
+ * important as direct buffers don't get garbage collected automatically
+ * during a gc run as they are not stored on heap memory.
+ * Also the buffers are stored in a tree map which helps in returning
+ * smallest buffer whose size is just greater than requested length.
+ * This is a thread safe implementation.
+ */
+public final class WeakReferencedElasticByteBufferPool extends
+    ElasticByteBufferPool {
+
+  private final TreeMap<Integer, WeakReference<ByteBuffer>> directBuffers =
+      new TreeMap<>();
+
+  private final TreeMap<Integer, WeakReference<ByteBuffer>> heapBuffers =
+      new TreeMap<>();
+
+  private TreeMap<Integer, WeakReference<ByteBuffer>> getBufferTree(boolean isDirect) {
+    return isDirect ? directBuffers : heapBuffers;
+  }
+
+  /**
+   * {@inheritDoc}
+   *
+   * @param direct whether we want a direct byte buffer or a heap one.
+   * @param length length of requested buffer.
+   * @return returns equal or next greater than capacity buffer from
+   * pool if already available and not garbage collected else creates
+   * a new buffer and return it.
+   */
+  @Override
+  public synchronized ByteBuffer getBuffer(boolean direct, int length) {
+    TreeMap<Integer, WeakReference<ByteBuffer>> buffersTree = 

[GitHub] [hadoop] slfan1989 commented on pull request #4349: HDFS-16590. Fix Junit Test Deprecated assertThat

2022-05-24 Thread GitBox


slfan1989 commented on PR #4349:
URL: https://github.com/apache/hadoop/pull/4349#issuecomment-1136434600

   > The new javac warnings make sense, so they are OK to me.
   > 
   > Unit test failures need looking at, I'm not sure if they are occasional 
issues. I don't usually review HDFS patches.
   
   @dannycjones Thank you very much, I will follow up on this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18249) Fix getUri() in HttpRequest has been deprecated

2022-05-24 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541702#comment-17541702
 ] 

fanshilun commented on HADOOP-18249:


Hi [~hemanthboyina], thank you very much!

> Fix getUri() in HttpRequest has been deprecated
> ---
>
> Key: HADOOP-18249
> URL: https://issues.apache.org/jira/browse/HADOOP-18249
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: getUri() deprecated -1.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When reading the code, I found that several methods in use have been 
> deprecated due to the upgrade of the netty component. The main methods are as follows:
> io.netty.handler.codec.http#HttpRequest
> @Deprecated
> HttpMethod getMethod();
> Deprecated. Use method() instead.
> @Deprecated
> String getUri()
> Deprecated. Use uri() instead.
> io.netty.handler.codec.http#DefaultHttpResponse
> @Deprecated
> public HttpResponseStatus getStatus()
> {         return this.status(); }
> Deprecated. Use status()  instead.
>  
> WebHdfsHandler.java:125:35:[deprecation] getUri() in HttpRequest has been 
> deprecated
> HostRestrictingAuthorizationFilterHandler.java:200:27:[deprecation] getUri() 
> in HttpRequest has been deprecated



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
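The migration described in HADOOP-18249 above is a rename of netty's accessors. A sketch of the before/after pattern; to stay self-contained this uses a nested stub interface in place of io.netty.handler.codec.http.HttpRequest (the method names match the deprecation notes quoted in the issue, everything else is illustrative):

```java
// Sketch of the getUri() -> uri() migration; the nested HttpRequest interface
// is a stand-in for io.netty.handler.codec.http.HttpRequest, not the real type.
public class DeprecatedAccessorMigration {

  interface HttpRequest {
    String uri();                               // replacement accessor
    @Deprecated
    default String getUri() { return uri(); }   // old accessor, now deprecated
  }

  static String requestPath(HttpRequest req) {
    // Before: String path = req.getUri();  // javac: [deprecation] warning
    return req.uri();                       // After: no warning
  }

  public static void main(String[] args) {
    HttpRequest req = () -> "/webhdfs/v1/tmp?op=LISTSTATUS";
    System.out.println(requestPath(req));
  }
}
```

The same one-line substitution applies to `getMethod()` -> `method()` and `getStatus()` -> `status()`.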



[jira] [Work logged] (HADOOP-18107) Vectored IO support for large S3 files.

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18107?focusedWorklogId=774225=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774225
 ]

ASF GitHub Bot logged work on HADOOP-18107:
---

Author: ASF GitHub Bot
Created on: 24/May/22 20:59
Start Date: 24/May/22 20:59
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#issuecomment-1136425987

   +1 pending that move into a constant, just for clarity. thanks
   
   (btw, suggest another rebase to get rid of those javadoc issues)




Issue Time Tracking
---

Worklog Id: (was: 774225)
Time Spent: 2h 50m  (was: 2h 40m)

> Vectored IO support for large S3 files. 
> 
>
> Key: HADOOP-18107
> URL: https://issues.apache.org/jira/browse/HADOOP-18107
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> This effort would mostly be adding more tests for large files under scale 
> tests and see if any new issue surfaces. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4273: HADOOP-18107 Adding scale test for vectored reads for large file

2022-05-24 Thread GitBox


steveloughran commented on PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#issuecomment-1136425987

   +1 pending that move into a constant, just for clarity. thanks
   
   (btw, suggest another rebase to get rid of those javadoc issues)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18107) Vectored IO support for large S3 files.

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18107?focusedWorklogId=774224=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774224
 ]

ASF GitHub Bot logged work on HADOOP-18107:
---

Author: ASF GitHub Bot
Created on: 24/May/22 20:58
Start Date: 24/May/22 20:58
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on code in PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#discussion_r880944821


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java:
##
@@ -1095,6 +1102,54 @@ public static void validateFileContent(byte[] concat, 
byte[][] bytes) {
 mismatch);
   }
 
+  /**
+   * Utility to validate vectored read results.
+   * @param fileRanges input ranges.
+   * @param originalData original data.
+   * @throws IOException any ioe.
+   */
+  public static void validateVectoredReadResult(List<FileRange> fileRanges,
+                                                byte[] originalData)
+      throws IOException, TimeoutException {
+    CompletableFuture<?>[] completableFutures =
+        new CompletableFuture<?>[fileRanges.size()];
+    int i = 0;
+    for (FileRange res : fileRanges) {
+      completableFutures[i++] = res.getData();
+    }
+    CompletableFuture<Void> combinedFuture =
+        CompletableFuture.allOf(completableFutures);
+    FutureIO.awaitFuture(combinedFuture, 5, TimeUnit.MINUTES);
+
+    for (FileRange res : fileRanges) {
+      CompletableFuture<ByteBuffer> data = res.getData();
+      ByteBuffer buffer = FutureIO.awaitFuture(data, 5, TimeUnit.MINUTES);

Review Comment:
   ok





Issue Time Tracking
---

Worklog Id: (was: 774224)
Time Spent: 2h 40m  (was: 2.5h)

> Vectored IO support for large S3 files. 
> 
>
> Key: HADOOP-18107
> URL: https://issues.apache.org/jira/browse/HADOOP-18107
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This effort would mostly be adding more tests for large files under scale 
> tests and see if any new issue surfaces. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a diff in pull request #4273: HADOOP-18107 Adding scale test for vectored reads for large file

2022-05-24 Thread GitBox


steveloughran commented on code in PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#discussion_r880944821


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java:
##
@@ -1095,6 +1102,54 @@ public static void validateFileContent(byte[] concat, 
byte[][] bytes) {
 mismatch);
   }
 
+  /**
+   * Utility to validate vectored read results.
+   * @param fileRanges input ranges.
+   * @param originalData original data.
+   * @throws IOException any ioe.
+   */
+  public static void validateVectoredReadResult(List<FileRange> fileRanges,
+                                                byte[] originalData)
+      throws IOException, TimeoutException {
+    CompletableFuture<?>[] completableFutures =
+        new CompletableFuture<?>[fileRanges.size()];
+    int i = 0;
+    for (FileRange res : fileRanges) {
+      completableFutures[i++] = res.getData();
+    }
+    CompletableFuture<Void> combinedFuture =
+        CompletableFuture.allOf(completableFutures);
+    FutureIO.awaitFuture(combinedFuture, 5, TimeUnit.MINUTES);
+
+    for (FileRange res : fileRanges) {
+      CompletableFuture<ByteBuffer> data = res.getData();
+      ByteBuffer buffer = FutureIO.awaitFuture(data, 5, TimeUnit.MINUTES);

Review Comment:
   ok



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
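The helper under review combines the per-range futures with CompletableFuture.allOf and bounds the total wait before collecting each result. The same pattern in plain JDK terms (FutureIO.awaitFuture is Hadoop-specific, so Future.get with a timeout stands in here; the class and method names are illustrative):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Illustrative JDK-only version of the await-then-collect pattern above.
public class AwaitAllSketch {

  /** Wait for every future under one overall deadline, then collect results. */
  static List<ByteBuffer> awaitAll(List<CompletableFuture<ByteBuffer>> futures,
                                   long timeout, TimeUnit unit) throws Exception {
    // allOf completes when all members complete; it surfaces the first failure.
    CompletableFuture<Void> combined =
        CompletableFuture.allOf(futures.toArray(new CompletableFuture<?>[0]));
    combined.get(timeout, unit); // throws TimeoutException past the deadline

    List<ByteBuffer> results = new ArrayList<>();
    for (CompletableFuture<ByteBuffer> f : futures) {
      results.add(f.join()); // already complete, so join() returns immediately
    }
    return results;
  }

  public static void main(String[] args) throws Exception {
    List<CompletableFuture<ByteBuffer>> fs = new ArrayList<>();
    fs.add(CompletableFuture.completedFuture(ByteBuffer.allocate(8)));
    fs.add(CompletableFuture.completedFuture(ByteBuffer.allocate(16)));
    System.out.println(awaitAll(fs, 5, TimeUnit.SECONDS).size());
  }
}
```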



[jira] [Work logged] (HADOOP-18107) Vectored IO support for large S3 files.

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18107?focusedWorklogId=774223=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774223
 ]

ASF GitHub Bot logged work on HADOOP-18107:
---

Author: ASF GitHub Bot
Created on: 24/May/22 20:57
Start Date: 24/May/22 20:57
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on code in PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#discussion_r880943791


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java:
##
@@ -1095,6 +1102,54 @@ public static void validateFileContent(byte[] concat, 
byte[][] bytes) {
 mismatch);
   }
 
+  /**
+   * Utility to validate vectored read results.
+   * @param fileRanges input ranges.
+   * @param originalData original data.
+   * @throws IOException any ioe.
+   */
+  public static void validateVectoredReadResult(List<FileRange> fileRanges,
+                                                byte[] originalData)
+      throws IOException, TimeoutException {
+    CompletableFuture<?>[] completableFutures =
+        new CompletableFuture<?>[fileRanges.size()];
+    int i = 0;
+    for (FileRange res : fileRanges) {
+      completableFutures[i++] = res.getData();
+    }
+    CompletableFuture<Void> combinedFuture =
+        CompletableFuture.allOf(completableFutures);
+    FutureIO.awaitFuture(combinedFuture, 5, TimeUnit.MINUTES);

Review Comment:
   yes, just pulling into the file as a static with javadoc





Issue Time Tracking
---

Worklog Id: (was: 774223)
Time Spent: 2.5h  (was: 2h 20m)

> Vectored IO support for large S3 files. 
> 
>
> Key: HADOOP-18107
> URL: https://issues.apache.org/jira/browse/HADOOP-18107
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> This effort would mostly be adding more tests for large files under scale 
> tests and see if any new issue surfaces. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a diff in pull request #4273: HADOOP-18107 Adding scale test for vectored reads for large file

2022-05-24 Thread GitBox


steveloughran commented on code in PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#discussion_r880943791


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java:
##
@@ -1095,6 +1102,54 @@ public static void validateFileContent(byte[] concat, 
byte[][] bytes) {
 mismatch);
   }
 
+  /**
+   * Utility to validate vectored read results.
+   * @param fileRanges input ranges.
+   * @param originalData original data.
+   * @throws IOException any ioe.
+   */
+  public static void validateVectoredReadResult(List<FileRange> fileRanges,
+                                                byte[] originalData)
+      throws IOException, TimeoutException {
+    CompletableFuture<?>[] completableFutures =
+        new CompletableFuture<?>[fileRanges.size()];
+    int i = 0;
+    for (FileRange res : fileRanges) {
+      completableFutures[i++] = res.getData();
+    }
+    CompletableFuture<Void> combinedFuture =
+        CompletableFuture.allOf(completableFutures);
+    FutureIO.awaitFuture(combinedFuture, 5, TimeUnit.MINUTES);

Review Comment:
   yes, just pulling into the file as a static with javadoc



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17726) Replace Sets#newHashSet() and newTreeSet() with constructors directly

2022-05-24 Thread Samrat Deb (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541692#comment-17541692
 ] 

Samrat Deb edited comment on HADOOP-17726 at 5/24/22 8:24 PM:
--

Hi [~vjasani],
I am willing to pick up this task as a newbie!
Can I go ahead with this task,
given that the dependent tasks (HADOOP-17115, HADOOP-17721, HADOOP-17722
and HADOOP-17720) are done?


was (Author: samrat007):
Hi [~vjasani],
I am willing to pick up this task as a newbie!
I have assigned it to me; if this is fine, can I go ahead with this task,
given that the dependent tasks (HADOOP-17115, HADOOP-17721, HADOOP-17722
and HADOOP-17720) are done?

> Replace Sets#newHashSet() and newTreeSet() with constructors directly
> -
>
> Key: HADOOP-17726
> URL: https://issues.apache.org/jira/browse/HADOOP-17726
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Priority: Major
>  Labels: beginner, beginner-friendly, newbie
>
> As per the guidelines provided by Guava Sets#newHashSet() and 
> Sets#newTreeSet(), we should get rid of them and use new HashSet<>() and 
> new TreeSet<>() directly.
> Once HADOOP-17115, HADOOP-17721, HADOOP-17722 and HADOOP-17720 are fixed, 
> please feel free to take this up.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
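The replacement HADOOP-17726 asks for can be sketched as a before/after pair; the Guava factory calls appear only in comments, and the plain-JDK constructors are what the ticket proposes:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

// Guava's Sets.newHashSet()/newTreeSet() predate the diamond operator and
// are no longer needed on Java 7+; the constructors are equivalent.
public class SetsMigration {
  public static void main(String[] args) {
    // Before (Guava): Set<String> s = Sets.newHashSet();
    Set<String> s = new HashSet<>();

    // Before (Guava): Set<String> t = Sets.newTreeSet();
    Set<String> t = new TreeSet<>();

    // Seeded variant maps to the copy constructor:
    // Before (Guava): Sets.newHashSet("a", "b")
    Set<String> seeded = new HashSet<>(Arrays.asList("a", "b"));

    System.out.println(s.isEmpty() && t.isEmpty()); // both start empty
    System.out.println(seeded.size());              // two seeded elements
  }
}
```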



[jira] [Assigned] (HADOOP-17726) Replace Sets#newHashSet() and newTreeSet() with constructors directly

2022-05-24 Thread Samrat Deb (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samrat Deb reassigned HADOOP-17726:
---

Assignee: (was: Samrat Deb)

> Replace Sets#newHashSet() and newTreeSet() with constructors directly
> -
>
> Key: HADOOP-17726
> URL: https://issues.apache.org/jira/browse/HADOOP-17726
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Priority: Major
>  Labels: beginner, beginner-friendly, newbie
>
> As per the guidelines provided by Guava Sets#newHashSet() and 
> Sets#newTreeSet(), we should get rid of them and use new HashSet<>() and 
> new TreeSet<>() directly.
> Once HADOOP-17115, HADOOP-17721, HADOOP-17722 and HADOOP-17720 are fixed, 
> please feel free to take this up.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17726) Replace Sets#newHashSet() and newTreeSet() with constructors directly

2022-05-24 Thread Samrat Deb (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541692#comment-17541692
 ] 

Samrat Deb commented on HADOOP-17726:
-

Hi [~vjasani],
I am willing to pick up this task as a newbie!
I have assigned it to me; if this is fine, can I go ahead with this task,
given that the dependent tasks (HADOOP-17115, HADOOP-17721, HADOOP-17722
and HADOOP-17720) are done?

> Replace Sets#newHashSet() and newTreeSet() with constructors directly
> -
>
> Key: HADOOP-17726
> URL: https://issues.apache.org/jira/browse/HADOOP-17726
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Samrat Deb
>Priority: Major
>  Labels: beginner, beginner-friendly, newbie
>
> As per the guidelines provided by Guava Sets#newHashSet() and 
> Sets#newTreeSet(), we should get rid of them and use new HashSet<>() and 
> new TreeSet<>() directly.
> Once HADOOP-17115, HADOOP-17721, HADOOP-17722 and HADOOP-17720 are fixed, 
> please feel free to take this up.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18140) Update default value of hadoop.ssl.enabled.protocols in the EncryptedShuffle doc

2022-05-24 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-18140.
--
Resolution: Duplicate

> Update default value of hadoop.ssl.enabled.protocols in the EncryptedShuffle 
> doc
> 
>
> Key: HADOOP-18140
> URL: https://issues.apache.org/jira/browse/HADOOP-18140
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.9.0, 2.8.4
>Reporter: Wei-Chiu Chuang
>Assignee: Samrat Deb
>Priority: Trivial
>  Labels: newbie
>
> The default value of hadoop.ssl.enabled.protocols was updated several times 
> but our doc was never updated.
> https://hadoop.apache.org/docs/r3.1.3/hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17726) Replace Sets#newHashSet() and newTreeSet() with constructors directly

2022-05-24 Thread Samrat Deb (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samrat Deb reassigned HADOOP-17726:
---

Assignee: Samrat Deb

> Replace Sets#newHashSet() and newTreeSet() with constructors directly
> -
>
> Key: HADOOP-17726
> URL: https://issues.apache.org/jira/browse/HADOOP-17726
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Samrat Deb
>Priority: Major
>  Labels: beginner, beginner-friendly, newbie
>
> As per the guidelines provided by Guava Sets#newHashSet() and 
> Sets#newTreeSet(), we should get rid of them and use new HashSet<>() and 
> new TreeSet<>() directly.
> Once HADOOP-17115, HADOOP-17721, HADOOP-17722 and HADOOP-17720 are fixed, 
> please feel free to take this up.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18140) Update default value of hadoop.ssl.enabled.protocols in the EncryptedShuffle doc

2022-05-24 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541689#comment-17541689
 ] 

Wei-Chiu Chuang commented on HADOOP-18140:
--

Uh. you're right. This is fixed by HADOOP-16549.
Thanks for spending the time to help us with the doc.

> Update default value of hadoop.ssl.enabled.protocols in the EncryptedShuffle 
> doc
> 
>
> Key: HADOOP-18140
> URL: https://issues.apache.org/jira/browse/HADOOP-18140
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.9.0, 2.8.4
>Reporter: Wei-Chiu Chuang
>Assignee: Samrat Deb
>Priority: Trivial
>  Labels: newbie
>
> The default value of hadoop.ssl.enabled.protocols was updated several times 
> but our doc was never updated.
> https://hadoop.apache.org/docs/r3.1.3/hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18140) Update default value of hadoop.ssl.enabled.protocols in the EncryptedShuffle doc

2022-05-24 Thread Samrat Deb (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541677#comment-17541677
 ] 

Samrat Deb commented on HADOOP-18140:
-

I checked the code. It looks like the file in GitHub trunk is updated with the 
correct default value for `hadoop.ssl.enabled.protocols`.

Also, the current documentation shows the updated default value (`TLSv1.2`).
reference -> 
[https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html]

Please correct me if this ticket is for correcting the doc for previous 
versions like 3.1.3?
[~weichiu] [~aajisaka] 

> Update default value of hadoop.ssl.enabled.protocols in the EncryptedShuffle 
> doc
> 
>
> Key: HADOOP-18140
> URL: https://issues.apache.org/jira/browse/HADOOP-18140
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.9.0, 2.8.4
>Reporter: Wei-Chiu Chuang
>Assignee: Samrat Deb
>Priority: Trivial
>  Labels: newbie
>
> The default value of hadoop.ssl.enabled.protocols was updated several times 
> but our doc was never updated.
> https://hadoop.apache.org/docs/r3.1.3/hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
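For reference, the property discussed in HADOOP-18140 is set in Hadoop's SSL configuration. A hedged fragment, assuming the post-HADOOP-16549 default; the exact value varies by release, so check your version's core-default.xml before relying on it:

```xml
<!-- Assumption: TLSv1.2 is the default after HADOOP-16549; older releases
     listed additional protocols such as TLSv1 and SSLv2Hello. -->
<property>
  <name>hadoop.ssl.enabled.protocols</name>
  <value>TLSv1.2</value>
</property>
```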



[jira] [Commented] (HADOOP-17343) Upgrade aws-java-sdk to 1.11.901

2022-05-24 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541666#comment-17541666
 ] 

Steve Loughran commented on HADOOP-17343:
-

bq. Can we backport to branch-3.2?

what is your pressing need? and any reason why you can't upgrade to 3.3.x?

> Upgrade aws-java-sdk to 1.11.901
> 
>
> Key: HADOOP-17343
> URL: https://issues.apache.org/jira/browse/HADOOP-17343
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Dongjoon Hyun
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Upgrade AWS SDK to most recent version



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17461) Add thread-level IOStatistics Context

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17461?focusedWorklogId=774151=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774151
 ]

ASF GitHub Bot logged work on HADOOP-17461:
---

Author: ASF GitHub Bot
Created on: 24/May/22 17:41
Start Date: 24/May/22 17:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4352:
URL: https://github.com/apache/hadoop/pull/4352#issuecomment-1136252339

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 24s |  |  Maven dependency ordering for branch  |
[jira] [Commented] (HADOOP-18249) Fix getUri() in HttpRequest has been deprecated

2022-05-24 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541638#comment-17541638
 ] 

Hemanth Boyina commented on HADOOP-18249:
-

Committed to trunk.

Thanks for the contribution, [~slfan1989]

> Fix getUri() in HttpRequest has been deprecated
> ---
>
> Key: HADOOP-18249
> URL: https://issues.apache.org/jira/browse/HADOOP-18249
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: getUri() deprecated -1.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When reading the code, I found that the method used has been deprecated due 
> to the upgrade of the netty component. The main methods are as follows:
> io.netty.handler.codec.http#HttpRequest
> @Deprecated
> HttpMethod getMethod();
> Deprecated. Use method() instead.
> @Deprecated
> String getUri()
> Deprecated. Use uri() instead.
> io.netty.handler.codec.http#DefaultHttpResponse
> @Deprecated
> public HttpResponseStatus getStatus()
> {         return this.status(); }
> Deprecated. Use status()  instead.
>  
> WebHdfsHandler.java:125:35:[deprecation] getUri() in HttpRequest has been 
> deprecated
> HostRestrictingAuthorizationFilterHandler.java:200:27:[deprecation] getUri() 
> in HttpRequest has been deprecated
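The migration the issue describes is mechanical. A minimal, self-contained sketch of the delegation pattern behind these deprecations (the `HttpRequest` interface here is a hypothetical stand-in mirroring io.netty.handler.codec.http.HttpRequest, not the real netty type):

```java
// Hypothetical stand-in mirroring the netty pattern: the deprecated
// accessor survives as an alias that delegates to its replacement.
interface HttpRequest {
    String uri(); // replacement for getUri()

    @Deprecated
    default String getUri() {
        return uri(); // deprecated alias, kept for source compatibility
    }
}

public class DeprecationDemo {
    public static void main(String[] args) {
        HttpRequest req = () -> "/webhdfs/v1/tmp"; // sample request URI
        // Callers migrate from getUri() to uri(); both return the same value.
        System.out.println(req.uri().equals(req.getUri()));
    }
}
```

The caller-side half of the fix in WebHdfsHandler and HostRestrictingAuthorizationFilterHandler is the same idea in reverse: replace req.getUri() with req.uri(), and likewise getMethod() with method() and getStatus() with status().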



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18249) Fix getUri() in HttpRequest has been deprecated

2022-05-24 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina resolved HADOOP-18249.
-
Resolution: Fixed

> Fix getUri() in HttpRequest has been deprecated
> ---
>
> Key: HADOOP-18249
> URL: https://issues.apache.org/jira/browse/HADOOP-18249
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: getUri() deprecated -1.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When reading the code, I found that the method used has been deprecated due 
> to the upgrade of the netty component. The main methods are as follows:
> io.netty.handler.codec.http#HttpRequest
> @Deprecated
> HttpMethod getMethod();
> Deprecated. Use method() instead.
> @Deprecated
> String getUri()
> Deprecated. Use uri() instead.
> io.netty.handler.codec.http#DefaultHttpResponse
> @Deprecated
> public HttpResponseStatus getStatus()
> {         return this.status(); }
> Deprecated. Use status()  instead.
>  
> WebHdfsHandler.java:125:35:[deprecation] getUri() in HttpRequest has been 
> deprecated
> HostRestrictingAuthorizationFilterHandler.java:200:27:[deprecation] getUri() 
> in HttpRequest has been deprecated



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18249) Fix getUri() in HttpRequest has been deprecated

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18249?focusedWorklogId=774150=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774150
 ]

ASF GitHub Bot logged work on HADOOP-18249:
---

Author: ASF GitHub Bot
Created on: 24/May/22 17:41
Start Date: 24/May/22 17:41
Worklog Time Spent: 10m 
  Work Description: hemanthboyina merged PR #4335:
URL: https://github.com/apache/hadoop/pull/4335




Issue Time Tracking
---

Worklog Id: (was: 774150)
Time Spent: 1h 10m  (was: 1h)

> Fix getUri() in HttpRequest has been deprecated
> ---
>
> Key: HADOOP-18249
> URL: https://issues.apache.org/jira/browse/HADOOP-18249
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: getUri() deprecated -1.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When reading the code, I found that the method used has been deprecated due 
> to the upgrade of the netty component. The main methods are as follows:
> io.netty.handler.codec.http#HttpRequest
> @Deprecated
> HttpMethod getMethod();
> Deprecated. Use method() instead.
> @Deprecated
> String getUri()
> Deprecated. Use uri() instead.
> io.netty.handler.codec.http#DefaultHttpResponse
> @Deprecated
> public HttpResponseStatus getStatus()
> {         return this.status(); }
> Deprecated. Use status()  instead.
>  
> WebHdfsHandler.java:125:35:[deprecation] getUri() in HttpRequest has been 
> deprecated
> HostRestrictingAuthorizationFilterHandler.java:200:27:[deprecation] getUri() 
> in HttpRequest has been deprecated



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4352: HADOOP-17461. Thread-level IOStatistics in S3A

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4352:
URL: https://github.com/apache/hadoop/pull/4352#issuecomment-1136252339

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 24s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  2s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 59s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 35s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 10s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 38s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 15s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4352/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 3 new + 6 unchanged - 0 fixed = 9 total (was 6)  |
   | +1 :green_heart: |  mvnsite  |   3m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 53s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | -1 :x: |  javadoc  |   1m 19s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4352/1/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 
with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 2 new + 
38 unchanged - 0 fixed = 40 total (was 38)  |
   | +1 :green_heart: |  spotbugs  |   5m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 30s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m  2s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 241m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4352/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4352 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 10f5099454c2 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cedc0a9b7a47ca9434fc73ffc915ae1209f9499f |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 

[GitHub] [hadoop] hemanthboyina merged pull request #4335: HADOOP-18249. Fix getUri() in HttpRequest has been deprecated.

2022-05-24 Thread GitBox


hemanthboyina merged PR #4335:
URL: https://github.com/apache/hadoop/pull/4335


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17935) Spark job stuck in S3A StagingCommitter::setupJob

2022-05-24 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541615#comment-17541615
 ] 

Steve Loughran commented on HADOOP-17935:
-

Revisiting:
* the staging path "spark.hadoop.fs.s3a.committer.staging.tmp.path" MUST NOT be on S3: it relies on the FileOutputCommitter v1 commit algorithm, which isn't safe there
* that stack trace is from an older release which doesn't schedule the work in a separate (unbounded) thread pool

I have vague memories of deadlock here if work in the bounded thread pool was using the same pool, which could happen if an S3A output stream was being written in the bounded thread pool itself.

[~brandonvin] are you seeing this after upgrading to a Hadoop 3.3.x release (ideally 3.3.3)?

If so, enable debug logging on org.apache.hadoop.fs.s3a.Invoker and see if you see messages about retries; that will let us know whether it is a retry problem or some deadlock.
Currently we are only using that thread pool for rename copy operations, so I don't think it is happening here.
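The constraint in the first bullet can be checked mechanically. A minimal sketch, assuming a helper like this (isSafeStagingPath is hypothetical, not S3A code):

```java
import java.net.URI;

public class StagingPathCheck {
    // The staging tmp path must live on a real cluster filesystem
    // (local disk, HDFS, ...), never on S3, because the v1
    // FileOutputCommitter algorithm it relies on is not safe there.
    static boolean isSafeStagingPath(String path) {
        String scheme = URI.create(path).getScheme();
        return scheme == null || !(scheme.equals("s3") || scheme.equals("s3a"));
    }

    public static void main(String[] args) {
        System.out.println(isSafeStagingPath("hdfs://nn:8020/tmp/staging")); // true
        System.out.println(isSafeStagingPath("s3a://bucket/tmp/staging"));   // false
    }
}
```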

I'm going to close this as a

> Spark job stuck in S3A StagingCommitter::setupJob
> -
>
> Key: HADOOP-17935
> URL: https://issues.apache.org/jira/browse/HADOOP-17935
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.1
> Environment: Spark 2.4.4
> Hadoop 3.2.1
> "spark.hadoop.fs.s3a.committer.name": "directory"
>Reporter: Brandon
>Priority: Major
>
> This is using the S3A directory staging committer, the Spark driver gets 
> stuck in a retry loop inside setupJob. Here's a stack trace:
> {noformat}
> org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
> org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
> org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:290)
> org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
> org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
> org.apache.spark.sql.execution.SQLExecution$$$Lambda$1753/2105635903.apply(Unknown
>  Source)
> org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:78)
> org.apache.spark.sql.DataFrameWriter$$Lambda$1752/114484787.apply(Unknown 
> Source)
> org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:676)
> org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:85)
> org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:85)
>  => holding Monitor(org.apache.spark.sql.execution.QueryExecution@705144571})
> org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
> org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> org.apache.spark.sql.execution.SparkPlan$$Lambda$1574/1384254911.apply(Unknown
>  Source)
> org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:155)
> org.apache.spark.sql.execution.SparkPlan$$Lambda$1573/696771575.apply(Unknown 
> Source)
> org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:131)
> org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
> org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
> org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
>  => holding 
> Monitor(org.apache.spark.sql.execution.command.DataWritingCommandExec@539925125})
> org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:170)
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:139)
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupJob(HadoopMapReduceCommitProtocol.scala:163)
> org.apache.hadoop.fs.s3a.commit.staging.DirectoryStagingCommitter.setupJob(DirectoryStagingCommitter.java:65)
> org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.setupJob(StagingCommitter.java:458)
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:355)
> org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2275)
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:2062)
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2129)
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2808)
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2833)
> org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
> 

[jira] [Commented] (HADOOP-17063) S3A deleteObjects hanging/retrying forever

2022-05-24 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541607#comment-17541607
 ] 

Steve Loughran commented on HADOOP-17063:
-

Stack trace with line endings fixed up for IDEA to parse:

{code}
sun.misc.Unsafe.park(Native Method) 
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
 com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:523) 
 
com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
 
 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:446)
 
 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365)
 
 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
 
 org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) 
 
org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
 
 org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685) 
 
org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)
 
 
org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
 
 
org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
 
 
org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:57)
 
 
org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:74)
 
 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
 
 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
 
 
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
 
 
org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
 
 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
 
 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
 
 org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
 org.apache.spark.scheduler.Task.run(Task.scala:123) 
 
org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
 
 org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) 
 org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) 
 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 java.lang.Thread.run(Thread.java:748) 

{code}


> S3A deleteObjects hanging/retrying forever
> --
>
> Key: HADOOP-17063
> URL: https://issues.apache.org/jira/browse/HADOOP-17063
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
> Environment: hadoop 3.2.1
> spark 2.4.4
>  
>Reporter: Dyno
>Priority: Minor
> Attachments: jstack_exec-34.log, jstack_exec-40.log, 
> jstack_exec-74.log
>
>
> {code}
> sun.misc.Unsafe.park(Native Method) 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:523) 
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
>  
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:446)
>  
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365)
>  
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>  org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) 
> org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
>  org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685) 
> org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)
>  
> org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
>  
> org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
>  
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:57)
>  
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:74)
>  
> 

[GitHub] [hadoop] dannycjones commented on pull request #4349: HDFS-16590. Fix Junit Test Deprecated assertThat

2022-05-24 Thread GitBox


dannycjones commented on PR #4349:
URL: https://github.com/apache/hadoop/pull/4349#issuecomment-1136121490

   The new javac warnings make sense, so they are OK to me.
   
   Unit test failures need looking at, I'm not sure if they are occasional 
issues. I don't usually review HDFS patches.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4349: HDFS-16590. Fix Junit Test Deprecated assertThat

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4349:
URL: https://github.com/apache/hadoop/pull/4349#issuecomment-1136104471

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 60 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  17m 59s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  8s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  15m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  14m  0s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |  13m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  21m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 35s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   7m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 15s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | -1 :x: |  javac  |  22m 15s | 
[/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 4 new + 2460 unchanged - 451 
fixed = 2464 total (was 2911)  |
   | +1 :green_heart: |  compile  |  20m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  javac  |  20m 43s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 4 new + 2255 
unchanged - 451 fixed = 2259 total (was 2706)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 17s |  |  root: The patch generated 
0 new + 688 unchanged - 2 fixed = 688 total (was 690)  |
   | +1 :green_heart: |  mvnsite  |  15m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |  13m 50s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |  13m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  23m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   4m 23s |  |  hadoop-auth in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  18m 48s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 454m 38s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   4m 43s |  |  hadoop-hdfs-nfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 58s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  24m 48s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  unit  |  47m 29s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  22m 32s |  |  hadoop-yarn-services-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   3m 46s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   

[GitHub] [hadoop] Hexiaoqiao commented on pull request #4342: HDFS-16588.Backport HDFS-16584 to branch-3.3.

2022-05-24 Thread GitBox


Hexiaoqiao commented on PR #4342:
URL: https://github.com/apache/hadoop/pull/4342#issuecomment-1136099049

   Committed to branch-3.3. Thanks @jianghuazhu for your contribution. Thanks 
@tomscut for your review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao merged pull request #4342: HDFS-16588.Backport HDFS-16584 to branch-3.3.

2022-05-24 Thread GitBox


Hexiaoqiao merged PR #4342:
URL: https://github.com/apache/hadoop/pull/4342


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18254) Add in configuration option to enable prefetching

2022-05-24 Thread Ahmar Suhail (Jira)
Ahmar Suhail created HADOOP-18254:
-

 Summary: Add in configuration option to enable prefetching
 Key: HADOOP-18254
 URL: https://issues.apache.org/jira/browse/HADOOP-18254
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmar Suhail


Currently prefetching is enabled by default; we should instead add a config 
option to enable it.
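A sketch of the requested gate. The key name fs.s3a.prefetch.enabled and the false default are assumptions for illustration only; the issue asks merely for "a config option":

```java
import java.util.Properties;

public class PrefetchGate {
    // Hypothetical config key: the issue does not name one, so the key
    // and the default chosen here are assumptions, not the final API.
    static final String PREFETCH_KEY = "fs.s3a.prefetch.enabled";

    static boolean prefetchEnabled(Properties conf) {
        return Boolean.parseBoolean(conf.getProperty(PREFETCH_KEY, "false"));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(prefetchEnabled(conf)); // false until explicitly set
        conf.setProperty(PREFETCH_KEY, "true");
        System.out.println(prefetchEnabled(conf)); // true
    }
}
```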



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4341: YARN-10487. Support getQueueUserAcls, listReservations, getApplicationAttempts, getContainerReport, getContainers, getResourceTypeInfo A

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4341:
URL: https://github.com/apache/hadoop/pull/4341#issuecomment-1136052655

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  51m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 45s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |  23m 49s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 29s | 
[/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/2/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  hadoop-yarn-server-router in the patch failed.  |
   | -1 :x: |  compile  |   0m 30s | 
[/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-yarn-server-router in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javac  |   0m 30s | 
[/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-yarn-server-router in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |   0m 29s | 
[/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-yarn-server-router in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  javac  |   0m 29s | 
[/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-yarn-server-router in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 27s | 
[/buildtool-patch-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4341/2/artifact/out/buildtool-patch-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  The patch fails to run checkstyle in hadoop-yarn-server-router  |
   | -1 :x: |  mvnsite  |   0m 29s | 

[jira] [Work logged] (HADOOP-18249) Fix getUri() in HttpRequest has been deprecated

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18249?focusedWorklogId=774058&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774058
 ]

ASF GitHub Bot logged work on HADOOP-18249:
---

Author: ASF GitHub Bot
Created on: 24/May/22 14:59
Start Date: 24/May/22 14:59
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4335:
URL: https://github.com/apache/hadoop/pull/4335#issuecomment-1136041900

   @hemanthboyina please help me to review the code again, thank you very much.




Issue Time Tracking
---

Worklog Id: (was: 774058)
Time Spent: 1h  (was: 50m)

> Fix getUri() in HttpRequest has been deprecated
> ---
>
> Key: HADOOP-18249
> URL: https://issues.apache.org/jira/browse/HADOOP-18249
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: getUri() deprecated -1.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When reading the code, I found that the methods in use have been deprecated 
> following the upgrade of the Netty component. The main methods are as follows:
> io.netty.handler.codec.http#HttpRequest
> @Deprecated
> HttpMethod getMethod();
> Deprecated. Use method() instead.
> @Deprecated
> String getUri()
> Deprecated. Use uri() instead.
> io.netty.handler.codec.http#DefaultHttpResponse
> @Deprecated
> public HttpResponseStatus getStatus()
> {         return this.status(); }
> Deprecated. Use status()  instead.
>  
> WebHdfsHandler.java:125:35:[deprecation] getUri() in HttpRequest has been 
> deprecated
> HostRestrictingAuthorizationFilterHandler.java:200:27:[deprecation] getUri() 
> in HttpRequest has been deprecated
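As a rough sketch of the migration the issue describes, the stand-in interface below mimics the shape of Netty's `HttpRequest`: `HttpRequestLike` and `Req` are hypothetical types, not Netty classes. Because the deprecated accessors simply delegate to their replacements, switching callers from `getUri()`/`getMethod()` to `uri()`/`method()` changes no behaviour.

```java
// Sketch of the HADOOP-18249 migration using a stand-in interface.
// HttpRequestLike and Req are hypothetical, not Netty types; they model
// how the deprecated accessors delegate to their replacements.
public class NettyAccessorMigration {

    interface HttpRequestLike {
        String uri();     // replacement for the deprecated getUri()
        String method();  // replacement for the deprecated getMethod()

        @Deprecated
        default String getUri() { return uri(); }

        @Deprecated
        default String getMethod() { return method(); }
    }

    static class Req implements HttpRequestLike {
        @Override public String uri() { return "/webhdfs/v1/tmp"; }
        @Override public String method() { return "GET"; }
    }

    public static void main(String[] args) {
        HttpRequestLike req = new Req();
        // Before: req.getUri() / req.getMethod() -- both emit deprecation warnings.
        // After:
        System.out.println(req.method() + " " + req.uri()); // GET /webhdfs/v1/tmp
    }
}
```

The same one-for-one rename applies to `DefaultHttpResponse.getStatus()` versus `status()`.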






[GitHub] [hadoop] slfan1989 commented on pull request #4335: HADOOP-18249. Fix getUri() in HttpRequest has been deprecated.

2022-05-24 Thread GitBox


slfan1989 commented on PR #4335:
URL: https://github.com/apache/hadoop/pull/4335#issuecomment-1136041900

   @hemanthboyina please help me to review the code again, thank you very much.





[jira] [Resolved] (HADOOP-18252) Hadoop 3.3.3 Spark write Mode.Overwrite breaks partitioned tables

2022-05-24 Thread Aaron Whiteway (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Whiteway resolved HADOOP-18252.
-
Resolution: Invalid

> Hadoop 3.3.3 Spark write Mode.Overwrite breaks partitioned tables
> -
>
> Key: HADOOP-18252
> URL: https://issues.apache.org/jira/browse/HADOOP-18252
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Aaron Whiteway
>Priority: Major
>
> While testing Hadoop 3.3.3 with S3A with versioning enabled, we ran into an 
> issue where Spark/Hadoop tries to load partitions that no longer exist.
>  
> {noformat}
> ---
> Py4JJavaError Traceback (most recent call last)
>  in 
> > 1 test_load = spark.read.parquet(test_loc)
> /usr/local/spark/python/pyspark/sql/readwriter.py in parquet(self, *paths, 
> **options)
> 456modifiedAfter=modifiedAfter)
> 457 
> --> 458 return 
> self._df(self._jreader.parquet(_to_seq(self._spark._sc, paths)))
> 459 
> 460 def text(self, paths, wholetext=False, lineSep=None, 
> pathGlobFilter=None,
> /usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py in 
> __call__(self, *args)
>1302 
>1303 answer = self.gateway_client.send_command(command)
> -> 1304 return_value = get_return_value(
>1305 answer, self.gateway_client, self.target_id, self.name)
>1306 
> /usr/local/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
> 109 def deco(*a, **kw):
> 110 try:
> --> 111 return f(*a, **kw)
> 112 except py4j.protocol.Py4JJavaError as e:
> 113 converted = convert_exception(e.java_exception)
> /usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py in 
> get_return_value(answer, gateway_client, target_id, name)
> 324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
> 325 if answer[1] == REFERENCE_TYPE:
> --> 326 raise Py4JJavaError(
> 327 "An error occurred while calling {0}{1}{2}.\n".
> 328 format(target_id, ".", name), value)
> Py4JJavaError: An error occurred while calling o183.parquet.
> : java.io.FileNotFoundException: No such file or directory: 
> s3a://test/s32/singleday_parts_simple2/Part=TESTING_1
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2269)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2163)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2102)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1903)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$9(S3AFileSystem.java:1882)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1882)
>   at 
> org.apache.spark.util.HadoopFSUtils$.listLeafFiles(HadoopFSUtils.scala:225)
>   at 
> org.apache.spark.util.HadoopFSUtils$.$anonfun$listLeafFiles$7(HadoopFSUtils.scala:281)
>   at 
> scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245)
>   at 
> scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
>   at 
> scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
>   at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
>   at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
>   at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:198)
>   at 
> org.apache.spark.util.HadoopFSUtils$.listLeafFiles(HadoopFSUtils.scala:271)
>   at 
> org.apache.spark.util.HadoopFSUtils$.$anonfun$parallelListLeafFilesInternal$1(HadoopFSUtils.scala:95)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:238)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:108)
>   at 
> org.apache.spark.util.HadoopFSUtils$.parallelListLeafFilesInternal(HadoopFSUtils.scala:85)
>   at 
> org.apache.spark.util.HadoopFSUtils$.parallelListLeafFiles(HadoopFSUtils.scala:69)

[jira] [Resolved] (HADOOP-18253) Hadoop 3.3.3 Spark write Mode.Overwrite breaks partitioned tables

2022-05-24 Thread Aaron Whiteway (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Whiteway resolved HADOOP-18253.
-
Resolution: Invalid

> Hadoop 3.3.3 Spark write Mode.Overwrite breaks partitioned tables
> -
>
> Key: HADOOP-18253
> URL: https://issues.apache.org/jira/browse/HADOOP-18253
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Aaron Whiteway
>Priority: Major
>
> While testing Hadoop 3.3.3 with S3A with versioning enabled, we ran into an 
> issue where Spark/Hadoop tries to load partitions that no longer exist.
>  
> {noformat}
> ---
> Py4JJavaError Traceback (most recent call last)
>  in 
> > 1 test_load = spark.read.parquet(test_loc)
> /usr/local/spark/python/pyspark/sql/readwriter.py in parquet(self, *paths, 
> **options)
> 456modifiedAfter=modifiedAfter)
> 457 
> --> 458 return 
> self._df(self._jreader.parquet(_to_seq(self._spark._sc, paths)))
> 459 
> 460 def text(self, paths, wholetext=False, lineSep=None, 
> pathGlobFilter=None,
> /usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py in 
> __call__(self, *args)
>1302 
>1303 answer = self.gateway_client.send_command(command)
> -> 1304 return_value = get_return_value(
>1305 answer, self.gateway_client, self.target_id, self.name)
>1306 
> /usr/local/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
> 109 def deco(*a, **kw):
> 110 try:
> --> 111 return f(*a, **kw)
> 112 except py4j.protocol.Py4JJavaError as e:
> 113 converted = convert_exception(e.java_exception)
> /usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py in 
> get_return_value(answer, gateway_client, target_id, name)
> 324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
> 325 if answer[1] == REFERENCE_TYPE:
> --> 326 raise Py4JJavaError(
> 327 "An error occurred while calling {0}{1}{2}.\n".
> 328 format(target_id, ".", name), value)
> Py4JJavaError: An error occurred while calling o183.parquet.
> : java.io.FileNotFoundException: No such file or directory: 
> s3a://test/s32/singleday_parts_simple2/Part=TESTING_1
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2269)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2163)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2102)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1903)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$9(S3AFileSystem.java:1882)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1882)
>   at 
> org.apache.spark.util.HadoopFSUtils$.listLeafFiles(HadoopFSUtils.scala:225)
>   at 
> org.apache.spark.util.HadoopFSUtils$.$anonfun$listLeafFiles$7(HadoopFSUtils.scala:281)
>   at 
> scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245)
>   at 
> scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
>   at 
> scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
>   at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
>   at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
>   at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:198)
>   at 
> org.apache.spark.util.HadoopFSUtils$.listLeafFiles(HadoopFSUtils.scala:271)
>   at 
> org.apache.spark.util.HadoopFSUtils$.$anonfun$parallelListLeafFilesInternal$1(HadoopFSUtils.scala:95)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:238)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:108)
>   at 
> org.apache.spark.util.HadoopFSUtils$.parallelListLeafFilesInternal(HadoopFSUtils.scala:85)
>   at 
> org.apache.spark.util.HadoopFSUtils$.parallelListLeafFiles(HadoopFSUtils.scala:69)

[jira] [Commented] (HADOOP-18245) Extend KMS related exceptions that get mapped to ConnectException

2022-05-24 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17541471#comment-17541471
 ] 

Steve Loughran commented on HADOOP-18245:
-

ok to pull into branch-3.3?

> Extend KMS related exceptions that get mapped to ConnectException 
> --
>
> Key: HADOOP-18245
> URL: https://issues.apache.org/jira/browse/HADOOP-18245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Ritesh H Shukla
>Assignee: Ritesh H Shukla
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Based on production workloads, we found that it is not enough to map just 
> SSLHandshakeException to ConnectException in the load-balancing KMS client; 
> the mapping needs to be extended to SSLException and SocketException.
> Sample JDK code that can raise these exceptions: 
> https://github.com/openjdk/jdk/blob/jdk-18%2B32/src/java.base/share/classes/sun/security/ssl/SSLSocketImpl.java#L1409-L1428
> Sample Exception backtrace: 
> 22/04/13 16:25:53 WARN kms.LoadBalancingKMSClientProvider: KMS provider at 
> [https://bdgtr041x10h5.nam.nsroot.net:16001/kms/v1/] threw an IOException:
> javax.net.ssl.SSLHandshakeException: Remote host terminated the handshake
> at sun.security.ssl.SSLSocketImpl.handleEOF(SSLSocketImpl.java:1470)
> at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1298)
> at 
> sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1199)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:373)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:587)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDe
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at 
> sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:480)
> at 
> sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:469)
> ... 59 more
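A hedged sketch of the widened mapping described above: since SSLHandshakeException is a subclass of SSLException, checking for SSLException and SocketException covers the old case plus the new ones. The method name here is illustrative; the actual logic lives in LoadBalancingKMSClientProvider.

```java
import java.io.IOException;
import java.net.ConnectException;
import java.net.SocketException;
import javax.net.ssl.SSLException;
import javax.net.ssl.SSLHandshakeException;

// Illustrative sketch, not the actual LoadBalancingKMSClientProvider code.
public class KmsExceptionMapping {

    static IOException toConnectException(IOException e) {
        if (e instanceof SSLException || e instanceof SocketException) {
            // Treat TLS and socket-level failures as connectivity problems
            // so the load-balancing client can fail over to another KMS.
            ConnectException ce = new ConnectException(e.getMessage());
            ce.initCause(e);
            return ce;
        }
        return e; // other IOExceptions propagate unchanged
    }

    public static void main(String[] args) {
        IOException mapped = toConnectException(
            new SSLHandshakeException("Remote host terminated the handshake"));
        System.out.println(mapped.getClass().getSimpleName()); // ConnectException
    }
}
```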






[jira] [Work logged] (HADOOP-17461) Add thread-level IOStatistics Context

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17461?focusedWorklogId=774010&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774010
 ]

ASF GitHub Bot logged work on HADOOP-17461:
---

Author: ASF GitHub Bot
Created on: 24/May/22 13:39
Start Date: 24/May/22 13:39
Worklog Time Spent: 10m 
  Work Description: mehakmeet opened a new pull request, #4352:
URL: https://github.com/apache/hadoop/pull/4352

   ### Description of PR
   Adding thread-level IOStatistics in hadoop-common and implementing it in S3A 
streams.
   
   ### How was this patch tested?
   Region: ap-south-1
   `mvn clean verify -Dparallel-tests -DtestsThreadCount=4 -Dscale`
   
   All tests ran fine.
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [X] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [X] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [X] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
---

Worklog Id: (was: 774010)
Remaining Estimate: 0h
Time Spent: 10m

> Add thread-level IOStatistics Context
> -
>
> Key: HADOOP-17461
> URL: https://issues.apache.org/jira/browse/HADOOP-17461
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Mehakmeet Singh
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For effective reporting of the iostatistics of individual worker threads, we 
> need a thread-level context which IO components update.
> * this context needs to be passed into the background threads performing work 
> on behalf of a task.
> * IO Components (streams, iterators, filesystems) need to update this context 
> statistics as they perform work
> * Without double counting anything.
> I imagine a ThreadLocal IOStatisticContext which will be updated in the 
> FileSystem API Calls. This context MUST be passed into the background threads 
> used by a task, so that IO is correctly aggregated.
> I don't want streams, listIterators  to do the updating as there is more 
> risk of double counting. However, we need to see their statistics if we want 
> to know things like "bytes discarded in backwards seeks". And I don't want to 
> be updating a shared context object on every read() call.
> If all we want is store IO (HEAD, GET, DELETE, list performance etc) then the 
> FS is sufficient. 
> If we do want the stream-specific detail, then I propose
> * caching the context in the constructor
> * updating it only in close() or unbuffer() (as we do from S3AInputStream to 
> S3AInstrumentation)
> * excluding those we know the FS already collects.
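The ThreadLocal-context idea sketched above can be illustrated as follows. This is a minimal sketch, not Hadoop's actual IOStatisticsContext API: each thread owns a counter map, and a worker thread must explicitly adopt the submitting thread's context, which is exactly the hand-off the issue says tasks must perform.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Minimal illustration of a per-thread statistics context; names and
// counter keys are invented for the example.
public class IOStatsContextSketch {

    static final ThreadLocal<Map<String, AtomicLong>> CONTEXT =
        ThreadLocal.withInitial(ConcurrentHashMap::new);

    static void increment(String counter, long by) {
        CONTEXT.get().computeIfAbsent(counter, k -> new AtomicLong()).addAndGet(by);
    }

    public static void main(String[] args) throws InterruptedException {
        increment("stream_read_bytes", 4096);

        // Capture the task thread's context and hand it to the worker so the
        // worker's IO is aggregated with the task's, not the pool thread's.
        Map<String, AtomicLong> taskContext = CONTEXT.get();
        Thread worker = new Thread(() -> {
            CONTEXT.set(taskContext); // adopt the caller's context
            increment("stream_read_bytes", 1024);
        });
        worker.start();
        worker.join();

        System.out.println(CONTEXT.get().get("stream_read_bytes")); // 5120
    }
}
```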






[jira] [Updated] (HADOOP-17461) Add thread-level IOStatistics Context

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17461:

Labels: pull-request-available  (was: )

> Add thread-level IOStatistics Context
> -
>
> Key: HADOOP-17461
> URL: https://issues.apache.org/jira/browse/HADOOP-17461
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For effective reporting of the iostatistics of individual worker threads, we 
> need a thread-level context which IO components update.
> * this context needs to be passed into the background threads performing work 
> on behalf of a task.
> * IO Components (streams, iterators, filesystems) need to update this context 
> statistics as they perform work
> * Without double counting anything.
> I imagine a ThreadLocal IOStatisticContext which will be updated in the 
> FileSystem API Calls. This context MUST be passed into the background threads 
> used by a task, so that IO is correctly aggregated.
> I don't want streams, listIterators  to do the updating as there is more 
> risk of double counting. However, we need to see their statistics if we want 
> to know things like "bytes discarded in backwards seeks". And I don't want to 
> be updating a shared context object on every read() call.
> If all we want is store IO (HEAD, GET, DELETE, list performance etc) then the 
> FS is sufficient. 
> If we do want the stream-specific detail, then I propose
> * caching the context in the constructor
> * updating it only in close() or unbuffer() (as we do from S3AInputStream to 
> S3AInstrumentation)
> * excluding those we know the FS already collects.






[GitHub] [hadoop] mehakmeet opened a new pull request, #4352: HADOOP-17461. Thread-level IOStatistics in S3A

2022-05-24 Thread GitBox


mehakmeet opened a new pull request, #4352:
URL: https://github.com/apache/hadoop/pull/4352

   ### Description of PR
   Adding thread-level IOStatistics in hadoop-common and implementing it in S3A 
streams.
   
   ### How was this patch tested?
   Region: ap-south-1
   `mvn clean verify -Dparallel-tests -DtestsThreadCount=4 -Dscale`
   
   All tests ran fine.
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [X] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [X] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [X] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[jira] [Created] (HADOOP-18252) Hadoop 3.3.3 Spark write Mode.Overwrite breaks partitioned tables

2022-05-24 Thread Aaron Whiteway (Jira)
Aaron Whiteway created HADOOP-18252:
---

 Summary: Hadoop 3.3.3 Spark write Mode.Overwrite breaks 
partitioned tables
 Key: HADOOP-18252
 URL: https://issues.apache.org/jira/browse/HADOOP-18252
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Aaron Whiteway


While testing Hadoop 3.3.3 with S3A with versioning enabled, we ran into an issue 
where Spark/Hadoop tries to load partitions that no longer exist.

 
{noformat}
---
Py4JJavaError Traceback (most recent call last)
 in 
> 1 test_load = spark.read.parquet(test_loc)

/usr/local/spark/python/pyspark/sql/readwriter.py in parquet(self, *paths, 
**options)
456modifiedAfter=modifiedAfter)
457 
--> 458 return self._df(self._jreader.parquet(_to_seq(self._spark._sc, 
paths)))
459 
460 def text(self, paths, wholetext=False, lineSep=None, 
pathGlobFilter=None,

/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py in 
__call__(self, *args)
   1302 
   1303 answer = self.gateway_client.send_command(command)
-> 1304 return_value = get_return_value(
   1305 answer, self.gateway_client, self.target_id, self.name)
   1306 

/usr/local/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
109 def deco(*a, **kw):
110 try:
--> 111 return f(*a, **kw)
112 except py4j.protocol.Py4JJavaError as e:
113 converted = convert_exception(e.java_exception)

/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py in 
get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling o183.parquet.
: java.io.FileNotFoundException: No such file or directory: 
s3a://test/s32/singleday_parts_simple2/Part=TESTING_1
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2269)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2163)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2102)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1903)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$9(S3AFileSystem.java:1882)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1882)
at 
org.apache.spark.util.HadoopFSUtils$.listLeafFiles(HadoopFSUtils.scala:225)
at 
org.apache.spark.util.HadoopFSUtils$.$anonfun$listLeafFiles$7(HadoopFSUtils.scala:281)
at 
scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245)
at 
scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
at 
scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:198)
at 
org.apache.spark.util.HadoopFSUtils$.listLeafFiles(HadoopFSUtils.scala:271)
at 
org.apache.spark.util.HadoopFSUtils$.$anonfun$parallelListLeafFilesInternal$1(HadoopFSUtils.scala:95)
at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at 
scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at 
org.apache.spark.util.HadoopFSUtils$.parallelListLeafFilesInternal(HadoopFSUtils.scala:85)
at 
org.apache.spark.util.HadoopFSUtils$.parallelListLeafFiles(HadoopFSUtils.scala:69)
at 
org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.bulkListLeafFiles(InMemoryFileIndex.scala:158)
at 
org.apache.spark.sql.execution.datasources.InMemoryFileIndex.listLeafFiles(InMemoryFileIndex.scala:131)
at 
org.apache.spark.sql.execution.datasources.InMemoryFileIndex.refresh0(InMemoryFileIndex.scala:94)
 

[jira] [Created] (HADOOP-18253) Hadoop 3.3.3 Spark write Mode.Overwrite breaks partitioned tables

2022-05-24 Thread Aaron Whiteway (Jira)
Aaron Whiteway created HADOOP-18253:
---

 Summary: Hadoop 3.3.3 Spark write Mode.Overwrite breaks 
partitioned tables
 Key: HADOOP-18253
 URL: https://issues.apache.org/jira/browse/HADOOP-18253
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Aaron Whiteway


While testing Hadoop 3.3.3 with S3A with versioning enabled, we ran into an issue 
where Spark/Hadoop tries to load partitions that no longer exist.

 
{noformat}
---
Py4JJavaError Traceback (most recent call last)
 in 
> 1 test_load = spark.read.parquet(test_loc)

/usr/local/spark/python/pyspark/sql/readwriter.py in parquet(self, *paths, **options)
    456                 modifiedAfter=modifiedAfter)
    457 
--> 458         return self._df(self._jreader.parquet(_to_seq(self._spark._sc, paths)))
    459 
    460     def text(self, paths, wholetext=False, lineSep=None, pathGlobFilter=None,

/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1302 
   1303         answer = self.gateway_client.send_command(command)
-> 1304         return_value = get_return_value(
   1305             answer, self.gateway_client, self.target_id, self.name)
   1306 

/usr/local/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
    109     def deco(*a, **kw):
    110         try:
--> 111             return f(*a, **kw)
    112         except py4j.protocol.Py4JJavaError as e:
    113             converted = convert_exception(e.java_exception)

/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    324             value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325             if answer[1] == REFERENCE_TYPE:
--> 326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
    328                     format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling o183.parquet.
: java.io.FileNotFoundException: No such file or directory: s3a://test/s32/singleday_parts_simple2/Part=TESTING_1
        at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2269)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2163)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2102)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1903)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$9(S3AFileSystem.java:1882)
        at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1882)
        at org.apache.spark.util.HadoopFSUtils$.listLeafFiles(HadoopFSUtils.scala:225)
        at org.apache.spark.util.HadoopFSUtils$.$anonfun$listLeafFiles$7(HadoopFSUtils.scala:281)
        at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245)
        at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
        at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
        at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
        at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
        at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:198)
        at org.apache.spark.util.HadoopFSUtils$.listLeafFiles(HadoopFSUtils.scala:271)
        at org.apache.spark.util.HadoopFSUtils$.$anonfun$parallelListLeafFilesInternal$1(HadoopFSUtils.scala:95)
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at scala.collection.TraversableLike.map(TraversableLike.scala:238)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
        at scala.collection.AbstractTraversable.map(Traversable.scala:108)
        at org.apache.spark.util.HadoopFSUtils$.parallelListLeafFilesInternal(HadoopFSUtils.scala:85)
        at org.apache.spark.util.HadoopFSUtils$.parallelListLeafFiles(HadoopFSUtils.scala:69)
        at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.bulkListLeafFiles(InMemoryFileIndex.scala:158)
        at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.listLeafFiles(InMemoryFileIndex.scala:131)
        at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.refresh0(InMemoryFileIndex.scala:94)
 

[jira] [Commented] (HADOOP-17695) adls test suite TestAdlContractGetFileStatusLive failing with no assertJ on the classpath

2022-05-24 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541453#comment-17541453
 ] 

Steve Loughran commented on HADOOP-17695:
-

this happens because test artifact exports are *not* transitive; contract 
test implementations will need to explicitly import assertj
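
A sketch of what that explicit declaration could look like in a downstream module's pom.xml (the version and the exact artifact coordinates here are illustrative assumptions, not taken from the Hadoop build; check the parent pom's dependencyManagement):

```xml
<!-- Hypothetical pom.xml fragment: a module that runs the fs contract tests
     declares assertj itself, because test-jar dependencies are not transitive. -->
<dependency>
  <groupId>org.assertj</groupId>
  <artifactId>assertj-core</artifactId>
  <version>3.12.2</version> <!-- illustrative version only -->
  <scope>test</scope>
</dependency>
```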

> adls test suite TestAdlContractGetFileStatusLive failing with no assertJ on 
> the classpath
> -
>
> Key: HADOOP-17695
> URL: https://issues.apache.org/jira/browse/HADOOP-17695
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Priority: Major
>
> Reported on PR #2482: https://github.com/apache/hadoop/pull/2842 ; CNFE on 
> assertJ assertions in adls test runs. 
> Cause will be HADOOP-17281, which added the asserts to the existing fs 
> contract test. We need to mark assertJ as an export of the hadoop-common 
> suite, or work out why hadoop-azuredatalake isn't picking it up



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on pull request #3806: HDFS-16386.Reduce DataNode load when FsDatasetAsyncDiskService is working.

2022-05-24 Thread GitBox


ZanderXu commented on PR #3806:
URL: https://github.com/apache/hadoop/pull/3806#issuecomment-1135792522

   ```java
   ThreadPoolExecutor executor = new ThreadPoolExecutor(
       CORE_THREADS_PER_VOLUME, maxNumThreadsPerVolume,
       THREADS_KEEP_ALIVE_SECONDS, TimeUnit.SECONDS,
       new LinkedBlockingQueue<Runnable>(), threadFactory);
   ```
   
   The ThreadPoolExecutor uses an unbounded LinkedBlockingQueue, so the actual 
thread count will be less than or equal to corePoolSize. When the NN needs 
one DN to delete a large number of blocks, that DN will create a large number 
of ReplicaFileDeleteTasks and store them all in the LinkedBlockingQueue of the 
ThreadPoolExecutor, resulting in increased memory usage or even OOM.
   
   Feel free to correct me if there are mistakes.
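
   The queueing behaviour described above can be checked with a small standalone sketch. The pool sizes and task counts below are arbitrary illustrations, not the DataNode's actual configuration:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Core 2, max 8, unbounded queue: the pool never grows past the core
        // size, because the unbounded queue accepts every task before the
        // executor would create extra threads -- surplus tasks accumulate
        // in memory instead.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 100; i++) {
            executor.execute(() -> {
                try {
                    release.await();   // keep the core threads busy
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        // Give the executor a moment to start its core threads.
        Thread.sleep(200);
        System.out.println("poolSize=" + executor.getPoolSize());
        System.out.println("queued=" + executor.getQueue().size());
        release.countDown();
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

   With 100 blocking tasks and a core size of 2, only 2 worker threads ever start and the remaining 98 tasks sit in the queue, which is the memory-growth pattern the comment describes.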


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on pull request #3806: HDFS-16386.Reduce DataNode load when FsDatasetAsyncDiskService is working.

2022-05-24 Thread GitBox


ZanderXu commented on PR #3806:
URL: https://github.com/apache/hadoop/pull/3806#issuecomment-1135779919

   @jianghuazhu I'm sorry to bring this issue up again.
   Can setting a smaller MAX THREAD value reduce memory usage? 
[HDFS-16386](https://issues.apache.org/jira/browse/HDFS-16386)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18140) Update default value of hadoop.ssl.enabled.protocols in the EncryptedShuffle doc

2022-05-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-18140:
--

Assignee: Samrat Deb

> Update default value of hadoop.ssl.enabled.protocols in the EncryptedShuffle 
> doc
> 
>
> Key: HADOOP-18140
> URL: https://issues.apache.org/jira/browse/HADOOP-18140
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.9.0, 2.8.4
>Reporter: Wei-Chiu Chuang
>Assignee: Samrat Deb
>Priority: Trivial
>  Labels: newbie
>
> The default value of hadoop.ssl.enabled.protocols was updated several times 
> but our doc was never updated.
> https://hadoop.apache.org/docs/r3.1.3/hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18140) Update default value of hadoop.ssl.enabled.protocols in the EncryptedShuffle doc

2022-05-24 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541415#comment-17541415
 ] 

Akira Ajisaka commented on HADOOP-18140:


Sure. Assigned.

> Update default value of hadoop.ssl.enabled.protocols in the EncryptedShuffle 
> doc
> 
>
> Key: HADOOP-18140
> URL: https://issues.apache.org/jira/browse/HADOOP-18140
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.9.0, 2.8.4
>Reporter: Wei-Chiu Chuang
>Assignee: Samrat Deb
>Priority: Trivial
>  Labels: newbie
>
> The default value of hadoop.ssl.enabled.protocols was updated several times 
> but our doc was never updated.
> https://hadoop.apache.org/docs/r3.1.3/hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4347: HDFS-16586. Purge FsDatasetAsyncDiskService threadgroup; it causes BP…

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4347:
URL: https://github.com/apache/hadoop/pull/4347#issuecomment-1135730139

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  9s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 21s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 43s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  28m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 227m 56s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4347/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 344m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4347/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4347 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 5504327bf8bc 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / f3f74013f773e0dbe599a80cee462c73b230946a |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4347/2/testReport/ |
   | Max. process+thread count | 2138 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4347/2/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18140) Update default value of hadoop.ssl.enabled.protocols in the EncryptedShuffle doc

2022-05-24 Thread Samrat Deb (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541404#comment-17541404
 ] 

Samrat Deb commented on HADOOP-18140:
-

can i pick this task up ? 
[~weichiu] 

> Update default value of hadoop.ssl.enabled.protocols in the EncryptedShuffle 
> doc
> 
>
> Key: HADOOP-18140
> URL: https://issues.apache.org/jira/browse/HADOOP-18140
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.9.0, 2.8.4
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: newbie
>
> The default value of hadoop.ssl.enabled.protocols was updated several times 
> but our doc was never updated.
> https://hadoop.apache.org/docs/r3.1.3/hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jianghuazhu commented on pull request #4351: HDFS-16592.Fix typo for BalancingPolicy.

2022-05-24 Thread GitBox


jianghuazhu commented on PR #4351:
URL: https://github.com/apache/hadoop/pull/4351#issuecomment-1135682996

   Here are some failing unit tests such as:
   TestWebHDFS
   TestUnderReplicatedBlocks
   TestExternalStoragePolicySatisfier
   TestIncrementalBlockReports
   TestRedudantBlocks
   
   It looks like these failures have little to do with the code I submitted.
   @aajisaka  @ferhui , can you help review this pr?
   Thank you very much.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4351: HDFS-16592.Fix typo for BalancingPolicy.

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4351:
URL: https://github.com/apache/hadoop/pull/4351#issuecomment-1135664263

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 255m 28s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4351/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 361m 33s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4351/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4351 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 9fc2a6dd2321 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8081ef3ae322798ebaf16b7d054212ce50077d8a |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4351/1/testReport/ |
   | Max. process+thread count | 4088 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4351/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

[GitHub] [hadoop] hadoop-yetus commented on pull request #4348: HDFS-16586. Purge FsDatasetAsyncDiskService threadgroup; it causes BP…

2022-05-24 Thread GitBox


hadoop-yetus commented on PR #4348:
URL: https://github.com/apache/hadoop/pull/4348#issuecomment-1135653728

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 18s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  18m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 205m 33s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4348/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 291m 36s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithShortCircuitRead |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4348/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4348 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 8a88be9e620b 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / 5b9672b89d60bb6b55b2dd5d9b38ce28d10adac2 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4348/2/testReport/ |
   | Max. process+thread count | 2267 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4348/2/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18244) Fix Hadoop-Common JavaDoc Error on branch-3.3

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18244?focusedWorklogId=773926=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-773926
 ]

ASF GitHub Bot logged work on HADOOP-18244:
---

Author: ASF GitHub Bot
Created on: 24/May/22 09:00
Start Date: 24/May/22 09:00
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4327:
URL: https://github.com/apache/hadoop/pull/4327#issuecomment-1135601667

   Hi, @steveloughran, please help to review the code, thank you very much!




Issue Time Tracking
---

Worklog Id: (was: 773926)
Time Spent: 1h  (was: 50m)

> Fix Hadoop-Common JavaDoc Error on branch-3.3
> -
>
> Key: HADOOP-18244
> URL: https://issues.apache.org/jira/browse/HADOOP-18244
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0, 3.3.1, 3.3.2, 3.3.3
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Fix Hadoop-Common JavaDoc Error on branch-3.3.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4327: HADOOP-18244. Fix Hadoop-Common JavaDoc Error on branch-3.3

2022-05-24 Thread GitBox


slfan1989 commented on PR #4327:
URL: https://github.com/apache/hadoop/pull/4327#issuecomment-1135601667

   Hi, @steveloughran, please help to review the code, thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4303: MAPREDUCE-7378. Change job temporary dir name to avoid delete by other jobs

2022-05-24 Thread GitBox


steveloughran commented on PR #4303:
URL: https://github.com/apache/hadoop/pull/4303#issuecomment-1135590790

   thanks for your work anyway. we do plan to make a hadoop release with the 
new committer later this year (it's shipping in cloudera cloud releases in 
preview mode, so i'll be fielding support calls there on any issues)
   
   any changes you can see there to support multi-job queries are welcome. you can 
download and build hadoop branch-3.3 to test it in your environment.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18248) Fix Junit Test Deprecated assertThat

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18248?focusedWorklogId=773917=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-773917
 ]

ASF GitHub Bot logged work on HADOOP-18248:
---

Author: ASF GitHub Bot
Created on: 24/May/22 08:25
Start Date: 24/May/22 08:25
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4334:
URL: https://github.com/apache/hadoop/pull/4334#issuecomment-1135564137

   > @slfan1989 sounds good. I assume the patch in the new PR will be similar - 
if so, feel free to mention me and I will try and review in good time.
   
   Hi @dannycjones, thank you. New PR: HDFS-16590. Fix Junit Test Deprecated 
assertThat (#4349)
   




Issue Time Tracking
---

Worklog Id: (was: 773917)
Time Spent: 4.5h  (was: 4h 20m)

> Fix Junit Test Deprecated assertThat
> 
>
> Key: HADOOP-18248
> URL: https://issues.apache.org/jira/browse/HADOOP-18248
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> javac will give a warning for compilation, as follows:
> org.junit.Assert.assertThat Deprecated. use 
> org.hamcrest.MatcherAssert.assertThat()
> {code:java}
> TestIncrementalBrVariations.java:141:4:[deprecation] 
> assertThat(T,Matcher) in Assert has been deprecated {code}
> a related issue will be resolved in HDFS-16590.
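
The migration the warning asks for is essentially a static-import change. A minimal sketch, assuming hamcrest is on the test classpath (as it normally is for JUnit 4 projects; the class and matcher names below are standard hamcrest API):

```java
// Hedged sketch of the assertThat migration: the matcher call site is
// identical, only the static import changes from org.junit.Assert to
// org.hamcrest.MatcherAssert.
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;

public class AssertThatMigration {
    public static void main(String[] args) {
        // Before (deprecated): import static org.junit.Assert.assertThat;
        // After: import static org.hamcrest.MatcherAssert.assertThat;
        int sum = 2 + 2;
        assertThat(sum, is(4));
        System.out.println("assertion passed");
    }
}
```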



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4334: HADOOP-18248. Fix Junit Test Deprecated assertThat.

2022-05-24 Thread GitBox


slfan1989 commented on PR #4334:
URL: https://github.com/apache/hadoop/pull/4334#issuecomment-1135564137

   > @slfan1989 sounds good. I assume the patch in the new PR will be similar - 
if so, feel free to mention me and I will try and review in good time.
   
   Hi @dannycjones, thank you. New PR: HDFS-16590. Fix Junit Test Deprecated 
assertThat (#4349)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18248) Fix Junit Test Deprecated assertThat

2022-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18248?focusedWorklogId=773909=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-773909
 ]

ASF GitHub Bot logged work on HADOOP-18248:
---

Author: ASF GitHub Bot
Created on: 24/May/22 08:13
Start Date: 24/May/22 08:13
Worklog Time Spent: 10m 
  Work Description: dannycjones commented on PR #4334:
URL: https://github.com/apache/hadoop/pull/4334#issuecomment-1135552621

   @slfan1989 sounds good. I assume the patch in the new PR will be similar - 
if so, feel free to mention me and I will try and review in good time.




Issue Time Tracking
---

Worklog Id: (was: 773909)
Time Spent: 4h 20m  (was: 4h 10m)

> Fix Junit Test Deprecated assertThat
> 
>
> Key: HADOOP-18248
> URL: https://issues.apache.org/jira/browse/HADOOP-18248
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> javac will give a warning for compilation, as follows:
> org.junit.Assert.assertThat Deprecated. use 
> org.hamcrest.MatcherAssert.assertThat()
> {code:java}
> TestIncrementalBrVariations.java:141:4:[deprecation] 
> assertThat(T,Matcher) in Assert has been deprecated {code}
> a related issue will be resolved in HDFS-16590.






[GitHub] [hadoop] dannycjones commented on pull request #4334: HADOOP-18248. Fix Junit Test Deprecated assertThat.

2022-05-24 Thread GitBox


dannycjones commented on PR #4334:
URL: https://github.com/apache/hadoop/pull/4334#issuecomment-1135552621

   @slfan1989 sounds good. I assume the patch in the new PR will be similar - 
if so, feel free to mention me and I will try and review in good time.





[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2022-05-24 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17541344#comment-17541344
 ] 

Masatake Iwasaki commented on HADOOP-10738:
---

Updated the target version in preparation for the 2.10.2 release.

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.
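The layering the issue asks for can be sketched in plain Java. This is a rough stand-in, not Hadoop code: `java.util.Properties` models Hadoop's `Configuration`, which loads `*-default.xml` resources from the jar and then lets a `*-site.xml` on the classpath override them; the property key names are illustrative.

```java
import java.util.Properties;

// Stand-in for the distcp-default.xml / distcp-site.xml layering proposed
// in this issue. Unset keys fall through to the jar defaults; keys present
// in the site layer win.
public class DistcpSiteLayering {

    static Properties layer(Properties jarDefaults, Properties siteOverrides) {
        // Properties(defaults) consults the defaults for any key not set here.
        Properties effective = new Properties(jarDefaults);
        effective.putAll(siteOverrides);
        return effective;
    }

    public static void main(String[] args) {
        // Defaults shipped inside hadoop-distcp.jar (hard to change today).
        Properties defaults = new Properties();
        defaults.setProperty("distcp.dynamic.split.ratio", "2");
        defaults.setProperty("distcp.example.buffer.size", "8192");

        // A hypothetical distcp-site.xml supplied by the operator.
        Properties site = new Properties();
        site.setProperty("distcp.dynamic.split.ratio", "5");

        Properties effective = layer(defaults, site);
        System.out.println(effective.getProperty("distcp.dynamic.split.ratio")); // 5
        System.out.println(effective.getProperty("distcp.example.buffer.size")); // 8192
    }
}
```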






[jira] [Updated] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2022-05-24 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-10738:
--
Target Version/s: 2.10.3  (was: 2.10.2)

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.






[jira] [Updated] (HADOOP-13091) DistCp masks potential CRC check failures

2022-05-24 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-13091:
--
Target Version/s: 2.10.3  (was: 2.10.2)

> DistCp masks potential CRC check failures
> -
>
> Key: HADOOP-13091
> URL: https://issues.apache.org/jira/browse/HADOOP-13091
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Elliot West
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: distcp
> Attachments: HADOOP-13091.003.patch, HADOOP-13091.004.patch, 
> HDFS-10338.001.patch, HDFS-10338.002.patch
>
>
> There appear to be edge cases whereby CRC checks may be circumvented when 
> requests for checksums from the source or target file system fail. In this 
> event CRCs could differ between the source and target and yet the DistCp copy 
> would succeed, even when the 'skip CRC check' option is not being used.
> The code in question is contained in the method 
> [{{org.apache.hadoop.tools.util.DistCpUtils#checksumsAreEqual(...)}}|https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L457]
> Specifically this code block suggests that if there is a failure when trying 
> to read the source or target checksum then the method will return {{true}} 
> (i.e.  the checksums are equal), implying that the check succeeded. In actual 
> fact we just failed to obtain the checksum and could not perform the check.
> {code}
> try {
>   sourceChecksum = sourceChecksum != null ? sourceChecksum : 
> sourceFS.getFileChecksum(source);
>   targetChecksum = targetFS.getFileChecksum(target);
> } catch (IOException e) {
>   LOG.error("Unable to retrieve checksum for " + source + " or "
> + target, e);
> }
> return (sourceChecksum == null || targetChecksum == null ||
>   sourceChecksum.equals(targetChecksum));
> {code}
> I believe that at the very least the caught {{IOException}} should be 
> re-thrown. If this is not deemed desirable then I believe an option 
> ({{--strictCrc}}?) should be added to enforce a strict check where we require 
> that both the source and target CRCs are retrieved, are not null, and are 
> then compared for equality. If for any reason either of the CRCs retrievals 
> fail then an exception is thrown.
> Clearly some {{FileSystems}} do not support CRCs and invocations to 
> {{FileSystem.getFileChecksum(...)}} return {{null}} in these instances. I 
> would suggest that these should fail a strict CRC check to prevent users 
> developing a false sense of security in their copy pipeline.
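The strict behaviour proposed above can be sketched as follows. This is a simplified stand-in, not the Hadoop implementation: checksums are modeled as plain `Object`s rather than `org.apache.hadoop.fs.FileChecksum`, and the method name is hypothetical. The point is the control flow: a checksum that could not be retrieved (null) raises an `IOException` instead of silently counting as "equal".

```java
import java.io.IOException;
import java.util.Objects;

// Sketch of the --strictCrc semantics suggested in this issue: a missing
// checksum fails the check rather than letting the copy pass as verified.
public class StrictChecksumCheck {

    static boolean checksumsAreEqualStrict(Object sourceChecksum,
                                           Object targetChecksum)
            throws IOException {
        if (sourceChecksum == null || targetChecksum == null) {
            // Unsupported filesystem or failed retrieval: fail loudly.
            throw new IOException(
                "Checksum unavailable for source or target; cannot verify copy");
        }
        return Objects.equals(sourceChecksum, targetChecksum);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(checksumsAreEqualStrict("abc123", "abc123")); // true
        try {
            checksumsAreEqualStrict("abc123", null);
        } catch (IOException e) {
            System.out.println("strict check refused missing checksum");
        }
    }
}
```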






[jira] [Commented] (HADOOP-13091) DistCp masks potential CRC check failures

2022-05-24 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17541343#comment-17541343
 ] 

Masatake Iwasaki commented on HADOOP-13091:
---

Updated the target version in preparation for the 2.10.2 release.

> DistCp masks potential CRC check failures
> -
>
> Key: HADOOP-13091
> URL: https://issues.apache.org/jira/browse/HADOOP-13091
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Elliot West
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: distcp
> Attachments: HADOOP-13091.003.patch, HADOOP-13091.004.patch, 
> HDFS-10338.001.patch, HDFS-10338.002.patch
>
>
> There appear to be edge cases whereby CRC checks may be circumvented when 
> requests for checksums from the source or target file system fail. In this 
> event CRCs could differ between the source and target and yet the DistCp copy 
> would succeed, even when the 'skip CRC check' option is not being used.
> The code in question is contained in the method 
> [{{org.apache.hadoop.tools.util.DistCpUtils#checksumsAreEqual(...)}}|https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L457]
> Specifically this code block suggests that if there is a failure when trying 
> to read the source or target checksum then the method will return {{true}} 
> (i.e.  the checksums are equal), implying that the check succeeded. In actual 
> fact we just failed to obtain the checksum and could not perform the check.
> {code}
> try {
>   sourceChecksum = sourceChecksum != null ? sourceChecksum : 
> sourceFS.getFileChecksum(source);
>   targetChecksum = targetFS.getFileChecksum(target);
> } catch (IOException e) {
>   LOG.error("Unable to retrieve checksum for " + source + " or "
> + target, e);
> }
> return (sourceChecksum == null || targetChecksum == null ||
>   sourceChecksum.equals(targetChecksum));
> {code}
> I believe that at the very least the caught {{IOException}} should be 
> re-thrown. If this is not deemed desirable then I believe an option 
> ({{--strictCrc}}?) should be added to enforce a strict check where we require 
> that both the source and target CRCs are retrieved, are not null, and are 
> then compared for equality. If for any reason either of the CRCs retrievals 
> fail then an exception is thrown.
> Clearly some {{FileSystems}} do not support CRCs and invocations to 
> {{FileSystem.getFileChecksum(...)}} return {{null}} in these instances. I 
> would suggest that these should fail a strict CRC check to prevent users 
> developing a false sense of security in their copy pipeline.






[jira] [Commented] (HADOOP-16039) backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2

2022-05-24 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17541342#comment-17541342
 ] 

Masatake Iwasaki commented on HADOOP-16039:
---

Updated the target version in preparation for the 2.10.2 release.

> backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2
> --
>
> Key: HADOOP-16039
> URL: https://issues.apache.org/jira/browse/HADOOP-16039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16039-001.patch, HADOOP-16039-branch-2-001.patch
>
>
> Backport the ADLS SDK 2.3.3 update to branch-2, retest. etc





