[GitHub] [hadoop] yb12138 opened a new pull request, #4726: YARN-11191 Global Scheduler refreshQueue cause deadLock

2022-08-09 Thread GitBox


yb12138 opened a new pull request, #4726:
URL: https://github.com/apache/hadoop/pull/4726

   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   



[GitHub] [hadoop] ferhui commented on pull request #4628: HDFS-16689. NameNode may crash when transitioning to Active with in-progress tailer if there are some abnormal JNs.

2022-08-09 Thread GitBox


ferhui commented on PR #4628:
URL: https://github.com/apache/hadoop/pull/4628#issuecomment-1210072454

   I'm not sure. How about adding a test utility class to that package?



[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4711: YARN-11236. Implement FederationReservationHomeSubClusterStore With MemoryStore.

2022-08-09 Thread GitBox


slfan1989 commented on code in PR #4711:
URL: https://github.com/apache/hadoop/pull/4711#discussion_r941870965


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java:
##
@@ -312,4 +324,45 @@ public Version loadVersion() {
 return null;
   }
 
+  @Override
+  public AddReservationHomeSubClusterResponse addReservationHomeSubCluster(
+  AddReservationHomeSubClusterRequest request) throws YarnException {
+FederationReservationHomeSubClusterStoreInputValidator.validate(request);
+ReservationId reservationId =

Review Comment:
   I will fix it.




[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4701: YARN-10885. Make FederationStateStoreFacade#getApplicationHomeSubCluster use JCache.

2022-08-09 Thread GitBox


slfan1989 commented on code in PR #4701:
URL: https://github.com/apache/hadoop/pull/4701#discussion_r941869543


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java:
##
@@ -513,6 +524,25 @@ public Map invoke(
 return cacheRequest;
   }
 
+  private Object buildGetApplicationHomeSubClusterRequest(ApplicationId 
applicationId) {
+final String cacheKey = buildCacheKey(getClass().getSimpleName(),
+GET_APPLICATION_HOME_SUBCLUSTER_CACHEID, applicationId.toString());
+CacheRequest cacheRequest = new CacheRequest<>(
+cacheKey,
+input -> {
+GetApplicationHomeSubClusterResponse response =
+stateStore.getApplicationHomeSubCluster(
+
GetApplicationHomeSubClusterRequest.newInstance(applicationId));

Review Comment:
   Thanks for your suggestion, I will fix it.




[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4701: YARN-10885. Make FederationStateStoreFacade#getApplicationHomeSubCluster use JCache.

2022-08-09 Thread GitBox


slfan1989 commented on code in PR #4701:
URL: https://github.com/apache/hadoop/pull/4701#discussion_r941869799


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java:
##
@@ -513,6 +524,25 @@ public Map invoke(
 return cacheRequest;
   }
 
+  private Object buildGetApplicationHomeSubClusterRequest(ApplicationId 
applicationId) {
+final String cacheKey = buildCacheKey(getClass().getSimpleName(),
+GET_APPLICATION_HOME_SUBCLUSTER_CACHEID, applicationId.toString());
+CacheRequest cacheRequest = new CacheRequest<>(
+cacheKey,
+input -> {
+GetApplicationHomeSubClusterResponse response =
+stateStore.getApplicationHomeSubCluster(
+
GetApplicationHomeSubClusterRequest.newInstance(applicationId));
+
+ApplicationHomeSubCluster applicationHomeSubCluster =

Review Comment:
   I will fix it.




[GitHub] [hadoop] hadoop-yetus commented on pull request #4723: HDFS-16684. Exclude the current JournalNode

2022-08-09 Thread GitBox


hadoop-yetus commented on PR #4723:
URL: https://github.com/apache/hadoop/pull/4723#issuecomment-1209972092

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 37s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 50s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  1s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4723/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 4 unchanged - 
0 fixed = 5 total (was 4)  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 389m 45s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 503m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4723/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4723 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1f65809806fa 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 08eba303a1b7532e36a291b8854fa5bd7cadbdbe |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4723/1/testReport/ |
   | Max. process+thread count | 2415 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4723/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

[GitHub] [hadoop] slfan1989 commented on pull request #4712: YARN-6539. Create SecureLogin inside Router.

2022-08-09 Thread GitBox


slfan1989 commented on PR #4712:
URL: https://github.com/apache/hadoop/pull/4712#issuecomment-1209969456

   @goiri Please help review the code again, thank you very much! I want to 
follow up on YARN-11158, which needs this PR.



[GitHub] [hadoop] slfan1989 commented on pull request #4594: YARN-6572. Refactoring Router services to use common util classes for pipeline creations.

2022-08-09 Thread GitBox


slfan1989 commented on PR #4594:
URL: https://github.com/apache/hadoop/pull/4594#issuecomment-1209964235

   @goiri Thank you very much for your help reviewing the code!



[GitHub] [hadoop] hadoop-yetus commented on pull request #4725: HDFS-16688. Unresolved Hosts during startup are not synced by JournalNodes

2022-08-09 Thread GitBox


hadoop-yetus commented on PR #4725:
URL: https://github.com/apache/hadoop/pull/4725#issuecomment-1209963377

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 41s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  0s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4725/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 45 unchanged - 
0 fixed = 50 total (was 45)  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 356m  2s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4725/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 473m 54s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNode |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4725/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4725 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 4ecd0eb1d290 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6387637b18b62e31558b684d4658f03dd51cbeff |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4725/1/testReport/ |
   | Max. process+thread count | 2171 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |

[GitHub] [hadoop] hadoop-yetus commented on pull request #4724: HDFS-16686. GetJournalEditServlet fails to authorize valid Kerberos request

2022-08-09 Thread GitBox


hadoop-yetus commented on PR #4724:
URL: https://github.com/apache/hadoop/pull/4724#issuecomment-1209945972

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 42s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 336m 29s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 455m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4724/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4724 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b818aa8dcecf 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7bef774d6187d3241ca7310333239f32b42f16a4 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4724/1/testReport/ |
   | Max. process+thread count | 1994 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4724/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[jira] [Commented] (HADOOP-18397) Shutdown AWSSecurityTokenService when its resources are no longer in use

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17577645#comment-17577645
 ] 

ASF GitHub Bot commented on HADOOP-18397:
-----------------------------------------

virajjasani commented on PR #4722:
URL: https://github.com/apache/hadoop/pull/4722#issuecomment-1209935012

   Re-ran all the tests with the `scale` profile; all tests pass (three tests 
flaked, but they passed when run individually).
   
   ```
   [INFO] -

> Shutdown AWSSecurityTokenService when its resources are no longer in use
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-18397
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18397
>             Project: Hadoop Common
>          Issue Type: Task
>          Components: fs/s3
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> AWSSecurityTokenService resources can be released whenever they are no longer
> in use. The documentation of AWSSecurityTokenService#shutdown says that while
> it is not mandatory for a client to shut down the token service, the client
> can release the service's resources early once it no longer needs them. We
> achieve this by making the STS client closable, so we can use it in all
> places where that is suitable.
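
A minimal sketch of the closable-client idea described above, assuming the AWS
SDK v1 `AWSSecurityTokenService` interface; the `ClosableSTSClient` wrapper
below is illustrative, not the actual HADOOP-18397 patch:

```java
import java.io.Closeable;

import com.amazonaws.services.securitytoken.AWSSecurityTokenService;

// Hypothetical wrapper: scoping the STS client with try-with-resources
// releases its resources early instead of holding them until process exit.
public class ClosableSTSClient implements Closeable {

  private final AWSSecurityTokenService stsClient;

  public ClosableSTSClient(AWSSecurityTokenService stsClient) {
    this.stsClient = stsClient;
  }

  public AWSSecurityTokenService getStsClient() {
    return stsClient;
  }

  @Override
  public void close() {
    // Per the SDK docs, shutdown() is not mandatory, but calling it
    // releases the client's resources as soon as the caller is done.
    stsClient.shutdown();
  }
}
```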




[GitHub] [hadoop] virajjasani commented on pull request #4722: HADOOP-18397. Shutdown AWSSecurityTokenService when its resources are no longer in use

2022-08-09 Thread GitBox


virajjasani commented on PR #4722:
URL: https://github.com/apache/hadoop/pull/4722#issuecomment-1209935012

   Re-ran all the tests with the `scale` profile; all tests pass (three tests 
flaked, but they passed when run individually).
   
   ```
   [INFO] --- maven-failsafe-plugin:3.0.0-M1:integration-test 
(default-integration-test) @ hadoop-aws ---
   [INFO] 
   [INFO] -------------------------------------------------------
   [INFO]  T E S T S
   [INFO] -------------------------------------------------------
   
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   
ITestS3AContractUnbuffer>AbstractContractUnbufferTest.testMultipleUnbuffers:100->AbstractContractUnbufferTest.validateFullFileContents:132->AbstractContractUnbufferTest.validateFileContents:139->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
 failed to read expected number of bytes from stream. This may be transient 
expected:<1024> but was:<605>
   [ERROR]   
ITestS3AContractUnbuffer>AbstractContractUnbufferTest.testUnbufferAfterRead:53->AbstractContractUnbufferTest.validateFullFileContents:132->AbstractContractUnbufferTest.validateFileContents:139->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
 failed to read expected number of bytes from stream. This may be transient 
expected:<1024> but was:<605>
   [INFO] 
   [ERROR] Tests run: 1139, Failures: 2, Errors: 0, Skipped: 146
   [INFO] 
   
   [INFO] --- maven-failsafe-plugin:3.0.0-M1:integration-test 
(sequential-integration-tests) @ hadoop-aws ---
   [INFO] 
   [INFO] -------------------------------------------------------
   [INFO]  T E S T S
   [INFO] -------------------------------------------------------
   
   [INFO] Results:
   [INFO] 
   [ERROR] Errors: 
   [ERROR]   
ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRecursiveRootListing:267
 » TestTimedOut
   [INFO] 
   [ERROR] Tests run: 124, Failures: 0, Errors: 1, Skipped: 10
   [INFO] 
   
   ```
   
   All tests in `ITestS3AContractUnbuffer` and `ITestS3AContractRootDir` passed 
when run individually.
   



[GitHub] [hadoop] goiri commented on pull request #4606: HDFS-16678. RBF should supports disable getNodeUsage() in RBFMetrics

2022-08-09 Thread GitBox


goiri commented on PR #4606:
URL: https://github.com/apache/hadoop/pull/4606#issuecomment-1209932600

   @slfan1989 please take another look



[GitHub] [hadoop] goiri commented on a diff in pull request #4701: YARN-10885. Make FederationStateStoreFacade#getApplicationHomeSubCluster use JCache.

2022-08-09 Thread GitBox


goiri commented on code in PR #4701:
URL: https://github.com/apache/hadoop/pull/4701#discussion_r941833661


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java:
##
@@ -513,6 +524,25 @@ public Map invoke(
 return cacheRequest;
   }
 
+  private Object buildGetApplicationHomeSubClusterRequest(ApplicationId 
applicationId) {
+final String cacheKey = buildCacheKey(getClass().getSimpleName(),
+GET_APPLICATION_HOME_SUBCLUSTER_CACHEID, applicationId.toString());
+CacheRequest cacheRequest = new CacheRequest<>(
+cacheKey,
+input -> {
+GetApplicationHomeSubClusterResponse response =
+stateStore.getApplicationHomeSubCluster(
+
GetApplicationHomeSubClusterRequest.newInstance(applicationId));

Review Comment:
   Extract to req



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java:
##
@@ -513,6 +524,25 @@ public Map invoke(
 return cacheRequest;
   }
 
+  private Object buildGetApplicationHomeSubClusterRequest(ApplicationId 
applicationId) {
+final String cacheKey = buildCacheKey(getClass().getSimpleName(),
+GET_APPLICATION_HOME_SUBCLUSTER_CACHEID, applicationId.toString());
+CacheRequest cacheRequest = new CacheRequest<>(
+cacheKey,
+input -> {
+GetApplicationHomeSubClusterResponse response =
+stateStore.getApplicationHomeSubCluster(
+
GetApplicationHomeSubClusterRequest.newInstance(applicationId));
+
+ApplicationHomeSubCluster applicationHomeSubCluster =

Review Comment:
   appHomeSubCluster and one line
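
Taken together, the two suggestions above might look like the fragment below;
the exact types and the `getApplicationHomeSubCluster()` accessor on the
response are assumptions, since the quoted diff is truncated:

```java
// Sketch of the reviewer's suggestions, not the final patch:
// 1. "Extract to req": build the request in its own variable.
GetApplicationHomeSubClusterRequest req =
    GetApplicationHomeSubClusterRequest.newInstance(applicationId);
GetApplicationHomeSubClusterResponse response =
    stateStore.getApplicationHomeSubCluster(req);
// 2. Shorter local name, unpacked in a single statement.
ApplicationHomeSubCluster appHomeSubCluster = response.getApplicationHomeSubCluster();
```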




[GitHub] [hadoop] goiri commented on a diff in pull request #4711: YARN-11236. Implement FederationReservationHomeSubClusterStore With MemoryStore.

2022-08-09 Thread GitBox


goiri commented on code in PR #4711:
URL: https://github.com/apache/hadoop/pull/4711#discussion_r941830630


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java:
##
@@ -312,4 +324,45 @@ public Version loadVersion() {
 return null;
   }
 
+  @Override
+  public AddReservationHomeSubClusterResponse addReservationHomeSubCluster(
+  AddReservationHomeSubClusterRequest request) throws YarnException {
+FederationReservationHomeSubClusterStoreInputValidator.validate(request);
+ReservationId reservationId =
+request.getReservationHomeSubCluster().getReservationId();
+if (!reservations.containsKey(reservationId)) {
+  reservations.put(reservationId,
+  request.getReservationHomeSubCluster().getHomeSubCluster());
+}
+return 
AddReservationHomeSubClusterResponse.newInstance(reservations.get(reservationId));
+  }
+
+  @Override
+  public GetReservationHomeSubClusterResponse getReservationHomeSubCluster(
+  GetReservationHomeSubClusterRequest request) throws YarnException {
+FederationReservationHomeSubClusterStoreInputValidator.validate(request);
+ReservationId reservationId = request.getReservationId();
+if (!reservations.containsKey(reservationId)) {
+  throw new YarnException("Reservation " + reservationId + " does not 
exist");
+}
+SubClusterId subClusterId = reservations.get(reservationId);
+return GetReservationHomeSubClusterResponse.newInstance(
+ReservationHomeSubCluster.newInstance(reservationId, subClusterId));
+  }
+
+  @Override
+  public GetReservationsHomeSubClusterResponse getReservationsHomeSubCluster(
+  GetReservationsHomeSubClusterRequest request) throws YarnException {
+List result = new ArrayList<>();
+
+for (Entry entry : reservations.entrySet()) {
+  ReservationId key = entry.getKey();
+  SubClusterId value = entry.getValue();
+  ReservationHomeSubCluster homeSubCluster =

Review Comment:
   Single line



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java:
##
@@ -312,4 +324,45 @@ public Version loadVersion() {
 return null;
   }
 
+  @Override
+  public AddReservationHomeSubClusterResponse addReservationHomeSubCluster(
+  AddReservationHomeSubClusterRequest request) throws YarnException {
+FederationReservationHomeSubClusterStoreInputValidator.validate(request);
+ReservationId reservationId =
+request.getReservationHomeSubCluster().getReservationId();
+if (!reservations.containsKey(reservationId)) {
+  reservations.put(reservationId,
+  request.getReservationHomeSubCluster().getHomeSubCluster());
+}
+return 
AddReservationHomeSubClusterResponse.newInstance(reservations.get(reservationId));
+  }
+
+  @Override
+  public GetReservationHomeSubClusterResponse getReservationHomeSubCluster(
+  GetReservationHomeSubClusterRequest request) throws YarnException {
+FederationReservationHomeSubClusterStoreInputValidator.validate(request);
+ReservationId reservationId = request.getReservationId();
+if (!reservations.containsKey(reservationId)) {
+  throw new YarnException("Reservation " + reservationId + " does not 
exist");
+}
+SubClusterId subClusterId = reservations.get(reservationId);
+return GetReservationHomeSubClusterResponse.newInstance(
+ReservationHomeSubCluster.newInstance(reservationId, subClusterId));
+  }
+
+  @Override
+  public GetReservationsHomeSubClusterResponse getReservationsHomeSubCluster(
+  GetReservationsHomeSubClusterRequest request) throws YarnException {
+List result = new ArrayList<>();
+
+for (Entry entry : reservations.entrySet()) {
+  ReservationId key = entry.getKey();

Review Comment:
   better names than key/value; reservationId, subclusterId
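
A short sketch of the suggested renaming; the map's generic types and the
trailing `result.add(...)` call are assumptions, since the quoted diff cuts
off mid-statement:

```java
// Sketch only: descriptive names instead of raw entry.getKey()/getValue().
for (Map.Entry<ReservationId, SubClusterId> entry : reservations.entrySet()) {
  ReservationId reservationId = entry.getKey();
  SubClusterId subClusterId = entry.getValue();
  ReservationHomeSubCluster homeSubCluster =
      ReservationHomeSubCluster.newInstance(reservationId, subClusterId);
  result.add(homeSubCluster);
}
```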



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java:
##
@@ -312,4 +324,45 @@ public Version loadVersion() {
 return null;
   }
 
+  @Override
+  public AddReservationHomeSubClusterResponse addReservationHomeSubCluster(
+  AddReservationHomeSubClusterRequest request) throws YarnException {
+FederationReservationHomeSubClusterStoreInputValidator.validate(request);
+ReservationId reservationId =

Review Comment:
   Single line



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java:
##
@@ -312,4 +324,45 @@ public Version loadVersion() {
 return null;
   }
 
+  @Override
+  public AddReservationHomeSubClusterResponse 

[GitHub] [hadoop] goiri merged pull request #4594: YARN-6572. Refactoring Router services to use common util classes for pipeline creations.

2022-08-09 Thread GitBox


goiri merged PR #4594:
URL: https://github.com/apache/hadoop/pull/4594



[GitHub] [hadoop] goiri commented on pull request #4531: HDFS-13274. RBF: Extend RouterRpcClient to use multiple sockets

2022-08-09 Thread GitBox


goiri commented on PR #4531:
URL: https://github.com/apache/hadoop/pull/4531#issuecomment-1209924414

   Can we retrigger the build?



[GitHub] [hadoop] goiri commented on a diff in pull request #4719: HDFS-16724. RBF should support get the information about ancestor mount points

2022-08-09 Thread GitBox


goiri commented on code in PR #4719:
URL: https://github.com/apache/hadoop/pull/4719#discussion_r941824560


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -956,7 +959,7 @@ public HdfsFileStatus getFileInfo(String src) throws 
IOException {
   if (children != null && !children.isEmpty()) {
 Map dates = getMountPointDates(src);
 long date = 0;
-if (dates != null && dates.containsKey(src)) {
+if (dates.containsKey(src)) {

Review Comment:
   I would leave the null check; it is good practice.



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -966,6 +969,10 @@ public HdfsFileStatus getFileInfo(String src) throws 
IOException {
   }
 }
 
+if (ret == null && noLocationException != null) {

Review Comment:
   Add some comment describing this.



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NoLocationException.java:
##
@@ -0,0 +1,32 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.IOException;
+
+/**
+ * Exception when no location found.

Review Comment:
   Is there a better name that reflects this is from the mount point 
perspective?
   Extend the javadoc a little anyway.



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTableWithoutDefaultNS.java:
##
@@ -0,0 +1,154 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
+import 
org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster.RouterContext;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.server.federation.StateStoreDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.Collections;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Test a router end-to-end including the MountTable without default 
nameservice.
+ */
+public class TestRouterMountTableWithoutDefaultNS {
+  private static StateStoreDFSCluster cluster;
+  private static RouterContext 

[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17577632#comment-17577632
 ] 

ASF GitHub Bot commented on HADOOP-18340:
-----------------------------------------

hadoop-yetus commented on PR #4608:
URL: https://github.com/apache/hadoop/pull/4608#issuecomment-1209917783

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 59s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 116m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4608/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4608 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 02e1a93d5d2e 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fb2ae77ed7a2f598dcd24cae9d43c944045bb81f |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4608/8/testReport/ |
   | Max. process+thread count | 607 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4608/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> deleteOnExit does not 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4608: HADOOP-18340 deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread GitBox


hadoop-yetus commented on PR #4608:
URL: https://github.com/apache/hadoop/pull/4608#issuecomment-1209917783

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 59s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 116m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4608/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4608 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 02e1a93d5d2e 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fb2ae77ed7a2f598dcd24cae9d43c944045bb81f |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4608/8/testReport/ |
   | Max. process+thread count | 607 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4608/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17577630#comment-17577630
 ] 

ASF GitHub Bot commented on HADOOP-18340:
-----------------------------------------

hadoop-yetus commented on PR #4608:
URL: https://github.com/apache/hadoop/pull/4608#issuecomment-1209905685

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 47s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 107m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4608/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4608 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 107e0a86a207 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fb2ae77ed7a2f598dcd24cae9d43c944045bb81f |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4608/6/testReport/ |
   | Max. process+thread count | 597 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4608/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> deleteOnExit does not work 


[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577625#comment-17577625
 ] 

ASF GitHub Bot commented on HADOOP-18340:
-

hadoop-yetus commented on PR #4608:
URL: https://github.com/apache/hadoop/pull/4608#issuecomment-1209900584

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 29s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  98m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4608/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4608 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 98c795ea550b 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fb2ae77ed7a2f598dcd24cae9d43c944045bb81f |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4608/7/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4608/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> deleteOnExit does not work 


[jira] [Commented] (HADOOP-18345) Enhance client protocol to propagate last seen state IDs for multiple nameservices.

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577624#comment-17577624
 ] 

ASF GitHub Bot commented on HADOOP-18345:
-

hadoop-yetus commented on PR #4584:
URL: https://github.com/apache/hadoop/pull/4584#issuecomment-1209897390

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 30s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 16s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 18s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 28s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  cc  |  22m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  22m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  cc  |  20m 49s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  20m 49s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 13s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4584/7/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 8 new + 167 unchanged - 1 fixed = 175 total (was 
168)  |
   | +1 :green_heart: |  mvnsite  |   3m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m  8s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 44s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  22m 48s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 264m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4584/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4584 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint 
bufcompat |
   | uname | Linux bd22356db5f2 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2a76bb5fba6b6d052429c78ba8808d472b75482c |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |

[jira] [Commented] (HADOOP-18397) Shutdown AWSSecurityTokenService when it's resources are no longer in use

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577622#comment-17577622
 ] 

ASF GitHub Bot commented on HADOOP-18397:
-

hadoop-yetus commented on PR #4722:
URL: https://github.com/apache/hadoop/pull/4722#issuecomment-1209885597

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  0s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 107m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4722/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4722 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 457c4552f045 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f2afaf21e7e4d66209996ce7859bbedb52e622e2 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4722/2/testReport/ |
   | Max. process+thread count | 618 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4722/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Shutdown 


[GitHub] [hadoop] hadoop-yetus commented on pull request #4680: HDFS-16702. MiniDFSCluster should report cause of exception in assert…

2022-08-09 Thread GitBox


hadoop-yetus commented on PR #4680:
URL: https://github.com/apache/hadoop/pull/4680#issuecomment-1209824343

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 237m 43s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4680/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 348m 50s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4680/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4680 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 7c8a58f67762 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 46ebe2ee0097ff476d32c14f5d2837fad54bd312 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4680/2/testReport/ |
   | Max. process+thread count | 3025 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4680/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577601#comment-17577601
 ] 

ASF GitHub Bot commented on HADOOP-18340:
-

huaxiangsun commented on PR #4608:
URL: https://github.com/apache/hadoop/pull/4608#issuecomment-1209793570

   I also ran the new IT test without the change in S3AFileSystem; it failed as 
expected.
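   
   For readers skimming the digest, here is a minimal sketch of the scenario that 
IT test exercises (illustrative only, not code from the PR; the bucket URI comes 
from the stack trace quoted below and the class name is made up):
   
   ```java
   // Hypothetical repro: before the fix, a path registered via deleteOnExit()
   // survives close(), because FileSystem.processDeleteOnExit() probes
   // exists(), which trips S3A's "FileSystem is closed!" guard.
   import java.net.URI;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   
   public final class DeleteOnExitRepro {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       URI bucket = new URI("s3a://mock-bucket/");  // placeholder bucket
       Path scratch = new Path("/file");
   
       FileSystem fs = FileSystem.newInstance(bucket, conf);
       fs.create(scratch, true).close();  // create the object
       fs.deleteOnExit(scratch);          // register it for cleanup
       fs.close();                        // unpatched: the delete is skipped
   
       try (FileSystem probe = FileSystem.newInstance(bucket, conf)) {
         // Prints true on unpatched builds, false once the fix is in.
         System.out.println("still exists: " + probe.exists(scratch));
       }
     }
   }
   ```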




> deleteOnExit does not work with S3AFileSystem
> -
>
> Key: HADOOP-18340
> URL: https://issues.apache.org/jira/browse/HADOOP-18340
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.3
>Reporter: Huaxiang Sun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> When deleteOnExit is set on some paths, they are not removed when file system 
> object is closed. The following exception is logged when printing out the 
> exception in info log.
> {code:java}
> 2022-07-15 19:29:12,552 [main] INFO  fs.FileSystem 
> (FileSystem.java:processDeleteOnExit(1810)) - Ignoring failure to 
> deleteOnExit for path /file, exception {}
> java.io.IOException: s3a://mock-bucket: FileSystem is closed!
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.checkNotClosed(S3AFileSystem.java:3887)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2333)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2355)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.exists(S3AFileSystem.java:4402)
>         at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1805)
>         at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2669)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.close(S3AFileSystem.java:3830)
>         at 
> org.apache.hadoop.fs.s3a.TestS3AGetFileStatus.testFile(TestS3AGetFileStatus.java:87)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>         at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
>  {code}



--
This message was sent by Atlassian Jira

[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577600#comment-17577600
 ] 

ASF GitHub Bot commented on HADOOP-18340:
-

huaxiangsun commented on code in PR #4608:
URL: https://github.com/apache/hadoop/pull/4608#discussion_r941729359


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ADeleteOnExit.java:
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.s3a;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.io.IOUtils;
+
+/**
+ * Test deleteOnExit for S3A.
+ * The following cases for deleteOnExit are tested:
+ *  1. A nonexistent file, which is added to the deleteOnExit set.
+ *  2. An existing file.
+ *  3. A file that is added to the deleteOnExit set first, then created.
+ *  4. A directory with some files under it.
+ */
+public class ITestS3ADeleteOnExit extends AbstractS3ATestBase {
+
+  private static final String PARENT_DIR_PATH_STR = "testDeleteOnExitDir";
+  private static final String NON_EXIST_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/nonExistFile";
+  private static final String INORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/inOrderFile";
+  private static final String OUT_OF_ORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/outOfOrderFile";
+  private static final String SUBDIR_PATH_STR =
+  PARENT_DIR_PATH_STR + "/subDir";
+  private static final String FILE_UNDER_SUBDIR_PATH_STR =
+  SUBDIR_PATH_STR + "/subDirFile";
+
+  @Test
+  public void testDeleteOnExit() throws Exception {
+FileSystem fs = getFileSystem();
+
+// Get a new filesystem object which is the same as fs.
+FileSystem s3aFs = new S3AFileSystem();
+s3aFs.initialize(fs.getUri(), fs.getConf());
+Path nonExistFilePath = path(NON_EXIST_FILE_PATH_STR);
+Path inOrderFilePath = path(INORDER_FILE_PATH_STR);
+Path outOfOrderFilePath = path(OUT_OF_ORDER_FILE_PATH_STR);
+Path subDirPath = path(SUBDIR_PATH_STR);
+Path fileUnderSubDirPath = path(FILE_UNDER_SUBDIR_PATH_STR);
+// 1. Set up the test directory.
+Path dir = path("testDeleteOnExitDir");
+s3aFs.mkdirs(dir);
+
+// 2. Add a nonexistent file to the DeleteOnExit set.
+s3aFs.deleteOnExit(nonExistFilePath);
+ContractTestUtils.assertPathDoesNotExist(s3aFs,
+"File " + NON_EXIST_FILE_PATH_STR + " should not exist", 
nonExistFilePath);
+
+// 3. Create a file and then add it to the DeleteOnExit set.
+FSDataOutputStream stream = s3aFs.create(inOrderFilePath, true);
+byte[] data = ContractTestUtils.dataset(16, 'a', 26);
+try {
+  stream.write(data);
+} finally {
+  IOUtils.closeStream(stream);
+}
+
+ContractTestUtils.assertPathExists(s3aFs,
+"File " + INORDER_FILE_PATH_STR + " should exist", 
inOrderFilePath);
+
+s3aFs.deleteOnExit(inOrderFilePath);
+
+// 4. Add a path to the DeleteOnExit set first, then create it.
+s3aFs.deleteOnExit(outOfOrderFilePath);
+stream = s3aFs.create(outOfOrderFilePath, true);
+try {
+  stream.write(data);
+} finally {
+  IOUtils.closeStream(stream);
+}
+
+ContractTestUtils.assertPathExists(s3aFs,
+"File " + OUT_OF_ORDER_FILE_PATH_STR + " should exist", 
outOfOrderFilePath);
+
+// 5. Create a subdirectory and a file under it, and add the subdirectory 
to the DeleteOnExit set.
+s3aFs.mkdirs(subDirPath);
+s3aFs.deleteOnExit(subDirPath);
+
+stream = s3aFs.create(fileUnderSubDirPath, true);
+try {

Review Comment:
   Done.





> deleteOnExit does not work with S3AFileSystem
> -
>
> Key: HADOOP-18340
> URL: https://issues.apache.org/jira/browse/HADOOP-18340
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.3
>Reporter: Huaxiang Sun
>Priority: Minor
>  


[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577593#comment-17577593
 ] 

ASF GitHub Bot commented on HADOOP-18340:
-

huaxiangsun opened a new pull request, #4608:
URL: https://github.com/apache/hadoop/pull/4608

   
   
   ### Description of PR
   processDeleteOnExit() is overridden in S3AFileSystem; it skips the exists() 
check and deletes objects without checking whether the FileSystem is closed.
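   
   For context, a rough sketch of the shape such an override can take (a sketch 
under stated assumptions, not the actual patch: the real signature of 
processDeleteOnExit() and the S3A internals may differ, and registeredPaths / 
deleteWithoutCloseCheck() are hypothetical names):
   
   ```java
   // Illustrative sketch only. registeredPaths stands in for however the
   // filesystem tracks deleteOnExit paths; deleteWithoutCloseCheck() is a
   // hypothetical delete that bypasses the checkNotClosed() guard.
   protected void processDeleteOnExit() {
     synchronized (registeredPaths) {
       for (Path path : registeredPaths) {
         try {
           // Delete directly: skip the exists() probe, whose
           // checkNotClosed() guard throws "FileSystem is closed!"
           // once close() has started.
           deleteWithoutCloseCheck(path, true /* recursive */);
         } catch (IOException e) {
           LOG.info("Ignoring failure to deleteOnExit for path {}", path, e);
         }
       }
       registeredPaths.clear();
     }
   }
   ```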
   
   ### How was this patch tested?
   A new unit test case is added, and all unit tests under 
hadoop-tools/hadoop-aws passed.
   mvn -Dparallel-tests clean test
   
   Ran the S3A integration tests against the us-west-2 region; a few tests 
failed or errored. Running trunk without the patch produced the same 
errors/failures, so they are not caused by the patch and are probably due to 
a misconfiguration I could not track down.
   mvn -Dparallel-tests clean verify
   
   The result is
   `
   Tests | Errors | Failures | Skipped | Success Rate | Time
   

> deleteOnExit does not work with S3AFileSystem
> -
>
> Key: HADOOP-18340
> URL: https://issues.apache.org/jira/browse/HADOOP-18340
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.3
>Reporter: Huaxiang Sun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> When deleteOnExit is set on some paths, they are not removed when the file 
> system object is closed. The following exception is logged at info level.
> {code:java}
> 2022-07-15 19:29:12,552 [main] INFO  fs.FileSystem 
> (FileSystem.java:processDeleteOnExit(1810)) - Ignoring failure to 
> deleteOnExit for path /file, exception {}
> java.io.IOException: s3a://mock-bucket: FileSystem is closed!
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.checkNotClosed(S3AFileSystem.java:3887)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2333)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2355)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.exists(S3AFileSystem.java:4402)
>         at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1805)
>         at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2669)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.close(S3AFileSystem.java:3830)
>         at 
> org.apache.hadoop.fs.s3a.TestS3AGetFileStatus.testFile(TestS3AGetFileStatus.java:87)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>         at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>         at 
> 

[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577590#comment-17577590
 ] 

ASF GitHub Bot commented on HADOOP-18340:
-

huaxiangsun commented on code in PR #4608:
URL: https://github.com/apache/hadoop/pull/4608#discussion_r941726247


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ADeleteOnExit.java:
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.s3a;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.io.IOUtils;
+
+/**
+ * Test deleteOnExit for S3A.
+ * The following cases for deleteOnExit are tested:
+ *  1. A nonexistent file, which is added to the deleteOnExit set.
+ *  2. An existing file.
+ *  3. A file added to the deleteOnExit set first, then created.
+ *  4. A directory with some files under it.
+ */
+public class ITestS3ADeleteOnExit extends AbstractS3ATestBase {
+
+  private static final String PARENT_DIR_PATH_STR = "testDeleteOnExitDir";
+  private static final String NON_EXIST_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/nonExistFile";
+  private static final String INORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/inOrderFile";
+  private static final String OUT_OF_ORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/outOfOrderFile";
+  private static final String SUBDIR_PATH_STR =
+  PARENT_DIR_PATH_STR + "/subDir";
+  private static final String FILE_UNDER_SUBDIR_PATH_STR =
+  SUBDIR_PATH_STR + "/subDirFile";
+
+  @Test
+  public void testDeleteOnExit() throws Exception {
+FileSystem fs = getFileSystem();
+
+// Get a new filesystem object which is the same as fs.
+FileSystem s3aFs = new S3AFileSystem();
+s3aFs.initialize(fs.getUri(), fs.getConf());
+Path nonExistFilePath = path(NON_EXIST_FILE_PATH_STR);
+Path inOrderFilePath = path(INORDER_FILE_PATH_STR);
+Path outOfOrderFilePath = path(OUT_OF_ORDER_FILE_PATH_STR);
+Path subDirPath = path(SUBDIR_PATH_STR);
+Path fileUnderSubDirPath = path(FILE_UNDER_SUBDIR_PATH_STR);
+// 1. set up the test directory.
+Path dir = path("testDeleteOnExitDir");
+s3aFs.mkdirs(dir);
+
+// 2. Add a nonexistent file to the DeleteOnExit set.
+s3aFs.deleteOnExit(nonExistFilePath);
+ContractTestUtils.assertPathDoesNotExist(s3aFs,
+"File " + NON_EXIST_FILE_PATH_STR + " should not exist", 
nonExistFilePath);
+
+// 3. create a file and then add it to DeleteOnExit set.
+FSDataOutputStream stream = s3aFs.create(inOrderFilePath, true);
+byte[] data = ContractTestUtils.dataset(16, 'a', 26);
+try {
+  stream.write(data);
+} finally {
+  IOUtils.closeStream(stream);
+}
+
+ContractTestUtils.assertPathExists(s3aFs,
+"File " + INORDER_FILE_PATH_STR + " should exist", 
inOrderFilePath);
+
+s3aFs.deleteOnExit(inOrderFilePath);
+
+// 4. add a path to DeleteOnExit set first, then create it.
+s3aFs.deleteOnExit(outOfOrderFilePath);
+stream = s3aFs.create(outOfOrderFilePath, true);
+try {
+  stream.write(data);
+} finally {
+  IOUtils.closeStream(stream);
+}
+
+ContractTestUtils.assertPathExists(s3aFs,
+"File " + OUT_OF_ORDER_FILE_PATH_STR + " should exist", 
outOfOrderFilePath);
+
+// 5. create a subdirectory and a file under it, then add the subdirectory to the DeleteOnExit set.
+s3aFs.mkdirs(subDirPath);
+s3aFs.deleteOnExit(subDirPath);
+
+stream = s3aFs.create(fileUnderSubDirPath, true);
+try {
+  stream.write(data);
+} finally {
+  IOUtils.closeStream(stream);
+}
+
+ContractTestUtils.assertPathExists(s3aFs,

Review Comment:
   I found that I do not need ContractTestUtils#assertPathExists or 
ContractTestUtils#assertPathDoesNotExist; I can use 
AbstractFSContractTestBase#assertPathExists and assertPathDoesNotExist 
instead. Uploaded a new patch based on your feedback, thanks.
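
   For reference, a sketch of the swap (helper signatures assumed from the 
base test class; message strings are illustrative):
   
   ```java
   // before: static helper taking an explicit filesystem argument
   ContractTestUtils.assertPathExists(s3aFs, "inOrderFile should exist", inOrderFilePath);
   // after: inherited helper from AbstractFSContractTestBase
   assertPathExists("inOrderFile should exist", inOrderFilePath);
   ```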






[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577591#comment-17577591
 ] 

ASF GitHub Bot commented on HADOOP-18340:
-

huaxiangsun commented on PR #4608:
URL: https://github.com/apache/hadoop/pull/4608#issuecomment-1209789086

   > all the production code is good; some minor changes to the tests and it's 
ready for the next hadoop release
   
   





[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577592#comment-17577592
 ] 

ASF GitHub Bot commented on HADOOP-18340:
-

huaxiangsun closed pull request #4608: HADOOP-18340 deleteOnExit does not work 
with S3AFileSystem
URL: https://github.com/apache/hadoop/pull/4608







--
This message was sent by Atlassian Jira
(v8.20.10#820010)








[jira] [Commented] (HADOOP-18373) IOStatisticsContext tuning

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577585#comment-17577585
 ] 

ASF GitHub Bot commented on HADOOP-18373:
-

virajjasani commented on PR #4705:
URL: https://github.com/apache/hadoop/pull/4705#issuecomment-1209781608

   > what was it? test running as other user?
   
   Basically for this test, I manually did the `sts assume-role`, retrieved a 
session token with `aws sts get-session-token`, and added that token to 
auth-keys. This was the root cause of the 
`ITestS3ATemporaryCredentials#testSTS` failure, because calling 
GetSessionToken with session credentials (including a session token) does 
not allow the user to retrieve temporary creds. I had also kept 
`fs.s3a.aws.credentials.provider` as `TemporaryAWSCredentialsProvider`. 
After a bit of digging, once I realized that the temporary credential 
provider cannot call the above API with a session token, I removed the 
session token and also removed `fs.s3a.aws.credentials.provider` so the 
credential providers are picked up by default. After this change, the test 
ran smoothly.
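   
   For illustration, a minimal sketch of the failure mode (key values and the 
class name are placeholders, not part of any patch): GetSessionToken 
succeeds when the client is built from long-lived access keys, but STS 
rejects the same call when the client already holds session credentials.
   
   ```java
   import com.amazonaws.auth.AWSStaticCredentialsProvider;
   import com.amazonaws.auth.BasicAWSCredentials;
   import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
   import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
   import com.amazonaws.services.securitytoken.model.GetSessionTokenRequest;
   
   public class StsSessionTokenProbe {
     public static void main(String[] args) {
       // long-lived keys here; with session credentials (key + secret +
       // session token) the getSessionToken() call below is rejected.
       AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.standard()
           .withRegion("us-west-2")
           .withCredentials(new AWSStaticCredentialsProvider(
               new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")))
           .build();
       try {
         System.out.println(sts.getSessionToken(new GetSessionTokenRequest())
             .getCredentials().getExpiration());
       } finally {
         sts.shutdown();
       }
     }
   }
   ```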
   
   
   > fwiw i use a very restricted user/role for my s3 tests, so if that key 
ever leaked, it would limit the damage to accessing my test s3 buckets, assume 
one role, etc. not even start an EC2 vm
   
   Ah yes, this is definitely a very good suggestion, thanks!




> IOStatisticsContext tuning
> --
>
> Key: HADOOP-18373
> URL: https://issues.apache.org/jira/browse/HADOOP-18373
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.9
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Tuning of the IOStatisticsContext code
> h2. change property name to fs.iostatistics
> there are other fs.iostatistics options, the new one needs consistent naming
> h2. enable in hadoop-aws
> edit core-site.xml in hadoop-aws/test/resources to always collect context 
> IOStatistics
> This helps qualify the code
> {code}
> <property>
>   <name>fs.thread.level.iostatistics.enabled</name>
>   <value>true</value>
> </property>
> {code}
> h3. IOStatisticsContext to add a static probe to see if it is enabled.
> this lets apps know not to bother collecting/reporting
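> 
> A hypothetical sketch of the probe (the method names here are assumptions, 
> not the committed API):
> {code:java}
> if (IOStatisticsContext.enabled()) {
>   IOStatisticsContext ctx = IOStatisticsContext.getCurrentIOStatisticsContext();
>   // collect/report thread-level statistics only when it is worthwhile
> }
> {code}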



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Commented] (HADOOP-18397) Shutdown AWSSecurityTokenService when its resources are no longer in use

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577583#comment-17577583
 ] 

ASF GitHub Bot commented on HADOOP-18397:
-

virajjasani commented on PR #4722:
URL: https://github.com/apache/hadoop/pull/4722#issuecomment-1209774388

   Tested the new commit changes against endpoint `us-west-2`:
   
   ```
   $ mvn -Dparallel-tests -DtestsThreadCount=8 clean verify
   
   [INFO] -

> Shutdown AWSSecurityTokenService when its resources are no longer in use
> -
>
> Key: HADOOP-18397
> URL: https://issues.apache.org/jira/browse/HADOOP-18397
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> AWSSecurityTokenService resources can be released whenever they are no longer 
> in use. The documentation of AWSSecurityTokenService#shutdown says that while 
> clients are not required to shut down the token service explicitly, they can 
> release its resources early once they no longer need them. We achieve this by 
> making STSClient closeable, so it can be used wherever that is suitable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577582#comment-17577582
 ] 

ASF GitHub Bot commented on HADOOP-18340:
-

huaxiangsun commented on code in PR #4608:
URL: https://github.com/apache/hadoop/pull/4608#discussion_r941713106


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ADeleteOnExit.java:
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.s3a;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.io.IOUtils;
+
+/**
+ * Test deleteOnExit for S3A.
+ * The following cases for deleteOnExit are tested:
+ *  1. A nonexistent file, which is added to the deleteOnExit set.
+ *  2. An existing file.
+ *  3. A file added to the deleteOnExit set first, then created.
+ *  4. A directory with some files under it.
+ */
+public class ITestS3ADeleteOnExit extends AbstractS3ATestBase {
+
+  private static final String PARENT_DIR_PATH_STR = "testDeleteOnExitDir";
+  private static final String NON_EXIST_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/nonExistFile";
+  private static final String INORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/inOrderFile";
+  private static final String OUT_OF_ORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/outOfOrderFile";
+  private static final String SUBDIR_PATH_STR =
+  PARENT_DIR_PATH_STR + "/subDir";
+  private static final String FILE_UNDER_SUBDIR_PATH_STR =
+  SUBDIR_PATH_STR + "/subDirFile";
+
+  @Test
+  public void testDeleteOnExit() throws Exception {
+FileSystem fs = getFileSystem();
+
+// Get a new filesystem object which is the same as fs.
+FileSystem s3aFs = new S3AFileSystem();
+s3aFs.initialize(fs.getUri(), fs.getConf());
+Path nonExistFilePath = path(NON_EXIST_FILE_PATH_STR);
+Path inOrderFilePath = path(INORDER_FILE_PATH_STR);
+Path outOfOrderFilePath = path(OUT_OF_ORDER_FILE_PATH_STR);
+Path subDirPath = path(SUBDIR_PATH_STR);
+Path fileUnderSubDirPath = path(FILE_UNDER_SUBDIR_PATH_STR);
+// 1. set up the test directory.
+Path dir = path("testDeleteOnExitDir");
+s3aFs.mkdirs(dir);
+
+// 2. Add a nonexistent file to the DeleteOnExit set.
+s3aFs.deleteOnExit(nonExistFilePath);
+ContractTestUtils.assertPathDoesNotExist(s3aFs,
+"File " + NON_EXIST_FILE_PATH_STR + " should not exist", 
nonExistFilePath);
+
+// 3. create a file and then add it to DeleteOnExit set.
+FSDataOutputStream stream = s3aFs.create(inOrderFilePath, true);
+byte[] data = ContractTestUtils.dataset(16, 'a', 26);
+try {
+  stream.write(data);
+} finally {
+  IOUtils.closeStream(stream);
+}
+
+ContractTestUtils.assertPathExists(s3aFs,
+"File " + INORDER_FILE_PATH_STR + " should exist", 
inOrderFilePath);
+
+s3aFs.deleteOnExit(inOrderFilePath);
+
+// 4. add a path to DeleteOnExit set first, then create it.
+s3aFs.deleteOnExit(outOfOrderFilePath);
+stream = s3aFs.create(outOfOrderFilePath, true);

Review Comment:
   Done.






[GitHub] [hadoop] virajjasani commented on pull request #4722: HADOOP-18397. Shutdown AWSSecurityTokenService when its resources are no longer in use

2022-08-09 Thread GitBox


virajjasani commented on PR #4722:
URL: https://github.com/apache/hadoop/pull/4722#issuecomment-1209774388

   Tested the new commit changes against endpoint `us-west-2`:
   
   ```
   $ mvn -Dparallel-tests -DtestsThreadCount=8 clean verify
   
   [INFO] --- maven-dependency-plugin:3.0.2:copy-dependencies (copy) @ hadoop-aws ---
   [INFO] 
   [INFO] --- maven-surefire-plugin:3.0.0-M1:test (default-test) @ hadoop-aws ---
   [INFO] 
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 406, Failures: 0, Errors: 0, Skipped: 4
   [INFO] 
   [INFO] 
   
   
   [INFO] --- maven-failsafe-plugin:3.0.0-M1:integration-test (default-integration-test) @ hadoop-aws ---
   [INFO] 
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 1139, Failures: 0, Errors: 0, Skipped: 186
   [INFO] 
   
   
   [INFO] --- maven-failsafe-plugin:3.0.0-M1:integration-test (sequential-integration-tests) @ hadoop-aws ---
   [INFO] 
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 124, Failures: 0, Errors: 0, Skipped: 84
   [INFO] 
   
   
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Commented] (HADOOP-18397) Shutdown AWSSecurityTokenService when its resources are no longer in use

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577581#comment-17577581
 ] 

ASF GitHub Bot commented on HADOOP-18397:
-

virajjasani commented on PR #4722:
URL: https://github.com/apache/hadoop/pull/4722#issuecomment-1209772369

   > back when we did that the sts client shutdown just threw 
UnsupportedOperationException. is that no longer the case? if so, we should 
stop swallowing it in our STSClient
   
   That is true, we no longer need to catch `UnsupportedOperationException`. 
Based on the implementations of `AWSSecurityTokenService`, it seems that this 
exception is now only thrown by `AbstractAWSSecurityTokenService`, which we 
don't use anyway.
   
   `AbstractAWSSecurityTokenService`:
   ```java
   @Override
   public void shutdown() {
       throw new java.lang.UnsupportedOperationException();
   }
   ```
   
   `AWSSecurityTokenServiceAsyncClient`:
   ```java
   @Override
   public void shutdown() {
       super.shutdown();
       executorService.shutdownNow();
   }
   ```
   
   `AWSSecurityTokenServiceClient`:
   ```java
   @Override
   public void shutdown() {
       super.shutdown();
   }
   ```
   
   We are good here. Just ran all the tests against new commit. Sharing the 
test results in the next comment.
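   
   As a follow-up illustration, a minimal try-with-resources sketch of the 
closeable-wrapper idea (the class name and shape are illustrative, not the 
exact patch):
   
   ```java
   import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
   
   final class ClosableSTSClient implements AutoCloseable {
     private final AWSSecurityTokenService sts;
   
     ClosableSTSClient(AWSSecurityTokenService sts) {
       this.sts = sts;
     }
   
     AWSSecurityTokenService get() {
       return sts;
     }
   
     @Override
     public void close() {
       // release the client's resources early; no need to wait for JVM exit
       sts.shutdown();
     }
   }
   ```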







--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Updated] (HADOOP-18159) Certificate doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]

2022-08-09 Thread Igor Dvorzhak (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-18159:
---
Fix Version/s: 3.4.0
   (was: 3.3.9)

> Certificate doesn't match any of the subject alternative names: 
> [*.s3.amazonaws.com, s3.amazonaws.com]
> --
>
> Key: HADOOP-18159
> URL: https://issues.apache.org/jira/browse/HADOOP-18159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.1, 3.3.2, 3.3.3
> Environment: hadoop 3.3.1
> httpclient 4.5.13
> JDK8
>Reporter: André F.
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> h2. If you see this error message when trying to use s3a:// or gs:// URLs, 
> look for copies of cos_api-bundle.jar on your classpath and remove them.
> Libraries which include shaded apache httpclient libraries 
> (hadoop-client-runtime.jar, aws-java-sdk-bundle.jar, 
> gcs-connector-shaded.jar, cos_api-bundle.jar) all load and use the unshaded 
> resource mozilla/public-suffix-list.txt. If an out of date version of this is 
> found on the classpath first, attempts to negotiate TLS connections may fail 
> with the error "Certificate doesn't match any of the subject alternative 
> names". 
> In a hadoop installation, you can use the findclass tool to track down where 
> the public-suffix-list.txt is coming from.
> {code}
> hadoop org.apache.hadoop.util.FindClass locate mozilla/public-suffix-list.txt
> {code}
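> For a quick programmatic check, this hedged snippet (plain JDK calls only) 
> prints which classpath entry serves the resource:
> {code:java}
> java.net.URL source = Thread.currentThread().getContextClassLoader()
>     .getResource("mozilla/public-suffix-list.txt");
> System.out.println(source); // e.g. jar:file:/...!/mozilla/public-suffix-list.txt
> {code}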
> So far, the cos_api-bundle-5.6.19.jar appears to be the source of this 
> problem.
> 
> h2. bug report
> Trying to run any job after bumping our Spark version (which is now using 
> Hadoop 3.3.1) led us to the following exception while reading files on S3:
> {code:java}
> org.apache.hadoop.fs.s3a.AWSClientIOException: getFileStatus on 
> s3a:///.parquet: com.amazonaws.SdkClientException: Unable to 
> execute HTTP request: Certificate for  doesn't match 
> any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]: 
> Unable to execute HTTP request: Certificate for  doesn't match any of 
> the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com] at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:208) at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:170) at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3351)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.isDirectory(S3AFileSystem.java:4277) 
> at {code}
>  
> {code:java}
> Caused by: javax.net.ssl.SSLPeerUnverifiedException: Certificate for 
>  doesn't match any of the subject alternative names: 
> [*.s3.amazonaws.com, s3.amazonaws.com]
>   at 
> com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.verifyHostname(SSLConnectionSocketFactory.java:507)
>   at 
> com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:437)
>   at 
> com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:384)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
>   at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
>   at com.amazonaws.http.conn.$Proxy16.connect(Unknown Source)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
>   at 
> 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4628: HDFS-16689. NameNode may crash when transitioning to Active with in-progress tailer if there are some abnormal JNs.

2022-08-09 Thread GitBox


hadoop-yetus commented on PR #4628:
URL: https://github.com/apache/hadoop/pull/4628#issuecomment-1209714098

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 22s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 50s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 154 unchanged - 1 fixed = 154 total (was 155)  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 42s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 338m  6s |  |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 459m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4628/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4628 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 528bdc9a040f 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 209275054faaa7faf24781fb40e0a36079559706 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4628/3/testReport/ |
   | Max. process+thread count | 2251 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4628/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific 

[jira] [Commented] (HADOOP-18373) IOStatisticsContext tuning

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577565#comment-17577565
 ] 

ASF GitHub Bot commented on HADOOP-18373:
-

steveloughran commented on PR #4705:
URL: https://github.com/apache/hadoop/pull/4705#issuecomment-1209698915

   what was it? test running as other user?
   
   fwiw i use a very restricted user/role for my s3 tests, so if that key ever 
leaked, it would limit the damage to accessing my test s3 buckets, assume one 
role, etc. not even start an EC2 vm




> IOStatisticsContext tuning
> --
>
> Key: HADOOP-18373
> URL: https://issues.apache.org/jira/browse/HADOOP-18373
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.9
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Tuning of the IOStatisticsContext code
> h2. change property name to fs.iostatistics
> there are other fs.iostatistics options; the new one needs consistent naming
> h2. enable in hadoop-aws
> edit core-site.xml in hadoop-aws/test/resources to always collect context
> IOStatistics.
> This helps qualify the code:
> {code}
> <property>
>   <name>fs.thread.level.iostatistics.enabled</name>
>   <value>true</value>
> </property>
> {code}
> h3. IOStatisticsContext to add a static probe to see if it is enabled.
> lets apps know not to bother collecting/reporting
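
A minimal sketch of the intended usage (hedged: the probe method name
enabled() is an assumption based on the description above, not confirmed API):

{code:java}
import org.apache.hadoop.fs.statistics.IOStatisticsContext;

// Hypothetical guard: applications skip collection and reporting work
// entirely when thread-level statistics are disabled.
if (IOStatisticsContext.enabled()) {
  IOStatisticsContext ctx =
      IOStatisticsContext.getCurrentIOStatisticsContext();
  // ... aggregate and report ctx.getIOStatistics() here ...
}
{code}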



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4705: HADOOP-18373. IOStatisticsContext tuning

2022-08-09 Thread GitBox


steveloughran commented on PR #4705:
URL: https://github.com/apache/hadoop/pull/4705#issuecomment-1209698915

   what was it? test running as other user?
   
   fwiw i use a very restricted user/role for my s3 tests, so if that key ever 
leaked, it would limit the damage to accessing my test s3 buckets, assume one 
role, etc. not even start an EC2 vm


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a diff in pull request #4680: HDFS-16702. MiniDFSCluster should report cause of exception in assert…

2022-08-09 Thread GitBox


steveloughran commented on code in PR #4680:
URL: https://github.com/apache/hadoop/pull/4680#discussion_r941637121


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java:
##
@@ -2159,10 +2159,10 @@ public void shutdown(boolean deleteDfsDir, boolean 
closeFileSystem) {
 LOG.info("Shutting down the Mini HDFS Cluster");
 if (checkExitOnShutdown)  {
   if (ExitUtil.terminateCalled()) {
-LOG.error("Test resulted in an unexpected exit",
-ExitUtil.getFirstExitException());

Review Comment:
   having the full exception thrown helps with IDE integration...
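   
   As a hedged illustration only (not the actual patch), keeping the exception
   attached to the test failure might look like:
   
   ```java
   // Log and rethrow with the cause so IDEs and CI reports can render
   // the full stack trace of the unexpected exit.
   if (ExitUtil.terminateCalled()) {
     ExitUtil.ExitException ee = ExitUtil.getFirstExitException();
     LOG.error("Test resulted in an unexpected exit", ee);
     throw new AssertionError("Test resulted in an unexpected exit", ee);
   }
   ```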



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a diff in pull request #1747: HDFS-15042 Add more tests for ByteBufferPositionedReadable.

2022-08-09 Thread GitBox


steveloughran commented on code in PR #1747:
URL: https://github.com/apache/hadoop/pull/1747#discussion_r941634758


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferPositionedReadable.java:
##
@@ -54,12 +69,11 @@ public interface ByteBufferPositionedReadable {
* stream supports this interface, otherwise they might get a
* {@link UnsupportedOperationException}.
* 
-   * Implementations should treat 0-length requests as legitimate, and must not
+   * Implementations MUST treat 0-length requests as legitimate, and MUST NOT
* signal an error upon their receipt.
-   * 
-   * This does not change the current offset of a file, and is thread-safe.
-   *
-   * @param position position within file
+   * The {@code position} offset MUST BE zero or positive; if negative
+   * an EOFException SHALL BE raised.

Review Comment:
   well, i think that's broken. like you say, can't fix though. or we could, 
but that complicates code written against the eof raising impl running against 
older/external stuff
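   
   As a sketch of that contract (the method body and the `doRead` delegate are
   invented for illustration):
   
   ```java
   // Per the javadoc above: 0-length reads are legitimate no-ops and a
   // negative position raises EOFException.
   public int read(long position, ByteBuffer buf) throws IOException {
     if (position < 0) {
       throw new EOFException("negative position: " + position);
     }
     if (!buf.hasRemaining()) {
       return 0; // 0-length request: succeed without error
     }
     return doRead(position, buf); // hypothetical delegate
   }
   ```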



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577550#comment-17577550
 ] 

ASF GitHub Bot commented on HADOOP-18340:
-

steveloughran commented on code in PR #4608:
URL: https://github.com/apache/hadoop/pull/4608#discussion_r940572367


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ADeleteOnExit.java:
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.s3a;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.io.IOUtils;
+
+/**
+ * Test deleteOnExit for S3A.
+ * The following cases for deleteOnExit are tested:
+ *  1. A nonexistent file, which is added to the deleteOnExit set.
+ *  2. An existing file.
+ *  3. A file that is added to the deleteOnExit set first, then created.
+ *  4. A directory with some files under it.
+ */
+public class ITestS3ADeleteOnExit extends AbstractS3ATestBase {
+
+  private static final String PARENT_DIR_PATH_STR = "testDeleteOnExitDir";
+  private static final String NON_EXIST_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/nonExistFile";
+  private static final String INORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/inOrderFile";
+  private static final String OUT_OF_ORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/outOfOrderFile";
+  private static final String SUBDIR_PATH_STR =
+  PARENT_DIR_PATH_STR + "/subDir";
+  private static final String FILE_UNDER_SUBDIR_PATH_STR =
+  SUBDIR_PATH_STR + "/subDirFile";
+
+  @Test
+  public void testDeleteOnExit() throws Exception {
+FileSystem fs = getFileSystem();
+
+// Get a new filesystem object which is the same as fs.
+FileSystem s3aFs = new S3AFileSystem();
+s3aFs.initialize(fs.getUri(), fs.getConf());
+Path nonExistFilePath = path(NON_EXIST_FILE_PATH_STR);
+Path inOrderFilePath = path(INORDER_FILE_PATH_STR);
+Path outOfOrderFilePath = path(OUT_OF_ORDER_FILE_PATH_STR);
+Path subDirPath = path(SUBDIR_PATH_STR);
+Path fileUnderSubDirPath = path(FILE_UNDER_SUBDIR_PATH_STR);
+// 1. set up the test directory.
+Path dir = path("testDeleteOnExitDir");
+s3aFs.mkdirs(dir);
+
+// 2. Add a nonexistent file to the DeleteOnExit set.
+s3aFs.deleteOnExit(nonExistFilePath);
+ContractTestUtils.assertPathDoesNotExist(s3aFs,
+"File " + NON_EXIST_FILE_PATH_STR + " should not exist", 
nonExistFilePath);
+
+// 3. create a file and then add it to DeleteOnExit set.
+FSDataOutputStream stream = s3aFs.create(inOrderFilePath, true);
+byte[] data = ContractTestUtils.dataset(16, 'a', 26);
+try {
+  stream.write(data);
+} finally {
+  IOUtils.closeStream(stream);
+}
+
+ContractTestUtils.assertPathExists(s3aFs,
+"File " + INORDER_FILE_PATH_STR + " should exist", 
inOrderFilePath);
+
+s3aFs.deleteOnExit(inOrderFilePath);
+
+// 4. add a path to DeleteOnExit set first, then create it.
+s3aFs.deleteOnExit(outOfOrderFilePath);
+stream = s3aFs.create(outOfOrderFilePath, true);
+try {
+  stream.write(data);
+} finally {
+  IOUtils.closeStream(stream);
+}
+
+ContractTestUtils.assertPathExists(s3aFs,
+"File " + OUT_OF_ORDER_FILE_PATH_STR + " should exist", 
outOfOrderFilePath);
+
+// 5. create a subdirectory and a file under it, and add the subdirectory to the DeleteOnExit set.
+s3aFs.mkdirs(subDirPath);
+s3aFs.deleteOnExit(subDirPath);
+
+stream = s3aFs.create(fileUnderSubDirPath, true);
+try {

Review Comment:
   same thing
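   
   Presumably the same earlier suggestion; a hedged guess at what it refers
   to, replacing try/finally + IOUtils.closeStream with try-with-resources:
   
   ```java
   // Equivalent behaviour with automatic stream closing.
   try (FSDataOutputStream out = s3aFs.create(fileUnderSubDirPath, true)) {
     out.write(data);
   }
   ```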





> deleteOnExit does not work with S3AFileSystem
> -
>
> Key: HADOOP-18340
> URL: https://issues.apache.org/jira/browse/HADOOP-18340
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.3
>Reporter: Huaxiang Sun
>Priority: Minor
>  

[GitHub] [hadoop] steveloughran commented on a diff in pull request #4608: HADOOP-18340 deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread GitBox


steveloughran commented on code in PR #4608:
URL: https://github.com/apache/hadoop/pull/4608#discussion_r940572367


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ADeleteOnExit.java:
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.s3a;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.io.IOUtils;
+
+/**
+ * Test deleteOnExit for S3A.
+ * The following cases for deleteOnExit are tested:
+ *  1. A nonexistent file, which is added to the deleteOnExit set.
+ *  2. An existing file.
+ *  3. A file that is added to the deleteOnExit set first, then created.
+ *  4. A directory with some files under it.
+ */
+public class ITestS3ADeleteOnExit extends AbstractS3ATestBase {
+
+  private static final String PARENT_DIR_PATH_STR = "testDeleteOnExitDir";
+  private static final String NON_EXIST_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/nonExistFile";
+  private static final String INORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/inOrderFile";
+  private static final String OUT_OF_ORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/outOfOrderFile";
+  private static final String SUBDIR_PATH_STR =
+  PARENT_DIR_PATH_STR + "/subDir";
+  private static final String FILE_UNDER_SUBDIR_PATH_STR =
+  SUBDIR_PATH_STR + "/subDirFile";
+
+  @Test
+  public void testDeleteOnExit() throws Exception {
+FileSystem fs = getFileSystem();
+
+// Get a new filesystem object which is the same as fs.
+FileSystem s3aFs = new S3AFileSystem();
+s3aFs.initialize(fs.getUri(), fs.getConf());
+Path nonExistFilePath = path(NON_EXIST_FILE_PATH_STR);
+Path inOrderFilePath = path(INORDER_FILE_PATH_STR);
+Path outOfOrderFilePath = path(OUT_OF_ORDER_FILE_PATH_STR);
+Path subDirPath = path(SUBDIR_PATH_STR);
+Path fileUnderSubDirPath = path(FILE_UNDER_SUBDIR_PATH_STR);
+// 1. set up the test directory.
+Path dir = path("testDeleteOnExitDir");
+s3aFs.mkdirs(dir);
+
+// 2. Add a nonexistent file to the DeleteOnExit set.
+s3aFs.deleteOnExit(nonExistFilePath);
+ContractTestUtils.assertPathDoesNotExist(s3aFs,
+"File " + NON_EXIST_FILE_PATH_STR + " should not exist", 
nonExistFilePath);
+
+// 3. create a file and then add it to DeleteOnExit set.
+FSDataOutputStream stream = s3aFs.create(inOrderFilePath, true);
+byte[] data = ContractTestUtils.dataset(16, 'a', 26);
+try {
+  stream.write(data);
+} finally {
+  IOUtils.closeStream(stream);
+}
+
+ContractTestUtils.assertPathExists(s3aFs,
+"File " + INORDER_FILE_PATH_STR + " should exist", 
inOrderFilePath);
+
+s3aFs.deleteOnExit(inOrderFilePath);
+
+// 4. add a path to DeleteOnExit set first, then create it.
+s3aFs.deleteOnExit(outOfOrderFilePath);
+stream = s3aFs.create(outOfOrderFilePath, true);
+try {
+  stream.write(data);
+} finally {
+  IOUtils.closeStream(stream);
+}
+
+ContractTestUtils.assertPathExists(s3aFs,
+"File " + OUT_OF_ORDER_FILE_PATH_STR + " should exist", 
outOfOrderFilePath);
+
+// 5. create a subdirectory and a file under it, and add the subdirectory to the DeleteOnExit set.
+s3aFs.mkdirs(subDirPath);
+s3aFs.deleteOnExit(subDirPath);
+
+stream = s3aFs.create(fileUnderSubDirPath, true);
+try {

Review Comment:
   same thing



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18396) Issues running in dynamic / managed environments

2022-08-09 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577549#comment-17577549
 ] 

Steve Loughran commented on HADOOP-18396:
-

bq. Why didn't you file this in 2010?

nobody cared at that point

> Issues running in dynamic / managed environments
> 
>
> Key: HADOOP-18396
> URL: https://issues.apache.org/jira/browse/HADOOP-18396
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0, 3.3.9, 3.3.4
> Environment: Running an HA configuration in Kubernetes, using Java 11.
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>
> Running in dynamic or managed environments is a challenge because we can't 
> assume that all services will have DNS entries, will be started in a specific 
> order, will maintain constant IP addresses, etc.  I'm using the following 
> assumptions to guide the changes necessary to operate in this kind of 
> environment:
>  # The configuration files are an expression of desired state
>  # If a referenced service instance is not resolvable or reachable at a 
> moment in time, it will be eventually and should be able to participate in 
> the future, as if it had been there originally, without requiring manual 
> intervention
>  # IP address changes should be handled in a way that not only allows 
> distributed calls to continue to function, but avoids having to re-resolve 
> the address over and over
>  # Code that requires resolved names (Kerberos and DataNode registration) 
> should fall back to DNS reverse lookups to work around temporary issues 
> caused by caching.  Example: The DataNode registration is only performed at 
> startup, and yet the extra check that allows it to succeed in registering 
> with the NameNode isn’t performed
>  # If an HA system is supposed to only require a quorum, then we shouldn’t 
> require the full set, allowing the called service to bring the remaining 
> instances into compliance
>  # Managing a service should be independent of other services.  Example: You 
> should be able to perform a rolling restart of JournalNodes without worrying 
> about causing an issue with NameNodes as long as a quorum is present.
> A proof of these concepts would be the ability to:
>  * Starting with less than the full replica count of a service, while still 
> providing the required quorum or minimal count, should still allow a cluster 
> to start and function.  Example: 2 out of 3 configured JournalNodes should 
> still allow the NameNode to format, function, rollover to the standby, etc.
>  * Introduced missing instances should join the existing cluster without 
> manual intervention.  Example: Starting the 3rd JournalNode should 
> automatically be formatted and brought up to date
>  * Perform rolling restarts of individual services without negatively 
> impacting other services (causing failures, restarts, etc.).  Example: 
> Rolling restarts of JournalNodes shouldn't cause problems in NameNodes; 
> Rolling restarts of NameNodes shouldn't cause problems with DataNodes
>  * Logs should only report updated IP addresses once (per dependent), 
> avoiding costly re-resolution
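
As a rough sketch of assumption 4 above (illustrative only; not the change
proposed in this issue), a reverse-lookup fallback could look like:

{code:java}
import java.net.InetAddress;

// Fall back to reverse DNS on the peer's IP address when the configured
// forward name is unavailable (all names here are illustrative).
String hostnameFor(InetAddress peer, String configuredHost) {
  if (configuredHost == null || configuredHost.isEmpty()) {
    return peer.getCanonicalHostName(); // reverse lookup
  }
  return configuredHost;
}
{code}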



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18396) Issues running in dynamic / managed environments

2022-08-09 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577547#comment-17577547
 ] 

Nick Dimiduk commented on HADOOP-18396:
---

[~ste...@apache.org] your slide "Hadoop's Assumptions" looks like a nice list 
of milestones. Why didn't you file this in 2010? ::smile::

> Issues running in dynamic / managed environments
> 
>
> Key: HADOOP-18396
> URL: https://issues.apache.org/jira/browse/HADOOP-18396
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0, 3.3.9, 3.3.4
> Environment: Running an HA configuration in Kubernetes, using Java 11.
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>
> Running in dynamic or managed environments is a challenge because we can't 
> assume that all services will have DNS entries, will be started in a specific 
> order, will maintain constant IP addresses, etc.  I'm using the following 
> assumptions to guide the changes necessary to operate in this kind of 
> environment:
>  # The configuration files are an expression of desired state
>  # If a referenced service instance is not resolvable or reachable at a 
> moment in time, it will be eventually and should be able to participate in 
> the future, as if it had been there originally, without requiring manual 
> intervention
>  # IP address changes should be handled in a way that not only allows 
> distributed calls to continue to function, but avoids having to re-resolve 
> the address over and over
>  # Code that requires resolved names (Kerberos and DataNode registration) 
> should fall back to DNS reverse lookups to work around temporary issues 
> caused by caching.  Example: The DataNode registration is only performed at 
> startup, and yet the extra check that allows it to succeed in registering 
> with the NameNode isn’t performed
>  # If an HA system is supposed to only require a quorum, then we shouldn’t 
> require the full set, allowing the called service to bring the remaining 
> instances into compliance
>  # Managing a service should be independent of other services.  Example: You 
> should be able to perform a rolling restart of JournalNodes without worrying 
> about causing an issue with NameNodes as long as a quorum is present.
> A proof of these concepts would be the ability to:
>  * Starting with less than the full replica count of a service, while still 
> providing the required quorum or minimal count, should still allow a cluster 
> to start and function.  Example: 2 out of 3 configured JournalNodes should 
> still allow the NameNode to format, function, rollover to the standby, etc.
>  * Introduced missing instances should join the existing cluster without 
> manual intervention.  Example: Starting the 3rd JournalNode should 
> automatically be formatted and brought up to date
>  * Perform rolling restarts of individual services without negatively 
> impacting other services (causing failures, restarts, etc.).  Example: 
> Rolling restarts of JournalNodes shouldn't cause problems in NameNodes; 
> Rolling restarts of NameNodes shouldn't cause problems with DataNodes
>  * Logs should only report updated IP addresses once (per dependent), 
> avoiding costly re-resolution



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18333) hadoop-client-runtime impact by CVE-2022-2047 CVE-2022-2048 due to shaded jetty

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577538#comment-17577538
 ] 

ASF GitHub Bot commented on HADOOP-18333:
-

jojochuang commented on PR #4600:
URL: https://github.com/apache/hadoop/pull/4600#issuecomment-1209652615

   Let's see what it says.




> hadoop-client-runtime impact by CVE-2022-2047 CVE-2022-2048 due to shaded 
> jetty
> ---
>
> Key: HADOOP-18333
> URL: https://issues.apache.org/jira/browse/HADOOP-18333
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.3
>Reporter: phoebe chen
>Assignee: groot
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> CVE-2022-2047 and CVE-2022-2048 is recently found for Eclipse Jetty, and 
> impacts 9.4.0 thru 9.4.46.
> In latest 3.3.3 of hadoop-client-runtime, it shaded 9.4.43.v20210629 version 
> jetty which is impacted.
> In Trunk, Jetty is in version 9.4.44.v20210927, which is still impacted.
> Need to upgrade Jetty Version. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on pull request #4600: HADOOP-18333. Upgrade jetty version to 9.4.48.v20220622

2022-08-09 Thread GitBox


jojochuang commented on PR #4600:
URL: https://github.com/apache/hadoop/pull/4600#issuecomment-1209652615

   Let's see what it says.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18345) Enhance client protocol to propagate last seen state IDs for multiple nameservices.

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577528#comment-17577528
 ] 

ASF GitHub Bot commented on HADOOP-18345:
-

simbadzina commented on code in PR #4584:
URL: https://github.com/apache/hadoop/pull/4584#discussion_r941569147


##
hadoop-common-project/hadoop-common/src/main/proto/RpcHeader.proto:
##
@@ -91,6 +91,7 @@ message RpcRequestHeaderProto { // the header for the 
RpcRequest
   optional RPCTraceInfoProto traceInfo = 6; // tracing info
   optional RPCCallerContextProto callerContext = 7; // call context
   optional int64 stateId = 8; // The last seen Global State ID
+  optional bytes routerFederatedState = 9; // Alignment context info for use 
with routers.

Review Comment:
   Added comments.





> Enhance client protocol to propagate last seen state IDs for multiple 
> nameservices.
> ---
>
> Key: HADOOP-18345
> URL: https://issues.apache.org/jira/browse/HADOOP-18345
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The RPCHeader in the client protocol currently contains a single value to 
> indicate the last seen state ID for a namenode.
> {noformat}
> optional int64 stateId = 8; // The last seen Global State ID
> {noformat}
> When there are multiple namenodes, such as in router based federation, the 
> headers need to carry the state IDs for each of these nameservices that are 
> part of the federation.
> This change is a prerequisite for HDFS-13522: RBF: Support observer node from 
> Router-Based Federation



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18345) Enhance client protocol to propagate last seen state IDs for multiple nameservices.

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577527#comment-17577527
 ] 

ASF GitHub Bot commented on HADOOP-18345:
-

simbadzina commented on code in PR #4584:
URL: https://github.com/apache/hadoop/pull/4584#discussion_r941568764


##
hadoop-common-project/hadoop-common/src/main/proto/RpcHeader.proto:
##
@@ -157,6 +158,11 @@ message RpcResponseHeaderProto {
   optional bytes clientId = 7; // Globally unique client ID
   optional sint32 retryCount = 8 [default = -1];
   optional int64 stateId = 9; // The last written Global State ID
+  optional bytes routerFederatedState = 10; // Alignment context info for use 
with routers.
+}
+
+message RouterFederatedStateProto {

Review Comment:
   Moved to FederationProtocol.proto in the rbf module.





> Enhance client protocol to propagate last seen state IDs for multiple 
> nameservices.
> ---
>
> Key: HADOOP-18345
> URL: https://issues.apache.org/jira/browse/HADOOP-18345
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The RPCHeader in the client protocol currently contains a single value to 
> indicate the last seen state ID for a namenode.
> {noformat}
> optional int64 stateId = 8; // The last seen Global State ID
> {noformat}
> When there are multiple namenodes, such as in router based federation, the 
> headers need to carry the state IDs for each of these nameservices that are 
> part of the federation.
> This change is a prerequisite for HDFS-13522: RBF: Support observer node from 
> Router-Based Federation



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] simbadzina commented on a diff in pull request #4584: HADOOP-18345: Enhance client protocol to propagate last seen state IDs for multiple nameservices.

2022-08-09 Thread GitBox


simbadzina commented on code in PR #4584:
URL: https://github.com/apache/hadoop/pull/4584#discussion_r941569147


##
hadoop-common-project/hadoop-common/src/main/proto/RpcHeader.proto:
##
@@ -91,6 +91,7 @@ message RpcRequestHeaderProto { // the header for the 
RpcRequest
   optional RPCTraceInfoProto traceInfo = 6; // tracing info
   optional RPCCallerContextProto callerContext = 7; // call context
   optional int64 stateId = 8; // The last seen Global State ID
+  optional bytes routerFederatedState = 9; // Alignment context info for use 
with routers.

Review Comment:
   Added comments.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] simbadzina commented on a diff in pull request #4584: HADOOP-18345: Enhance client protocol to propagate last seen state IDs for multiple nameservices.

2022-08-09 Thread GitBox


simbadzina commented on code in PR #4584:
URL: https://github.com/apache/hadoop/pull/4584#discussion_r941568764


##
hadoop-common-project/hadoop-common/src/main/proto/RpcHeader.proto:
##
@@ -157,6 +158,11 @@ message RpcResponseHeaderProto {
   optional bytes clientId = 7; // Globally unique client ID
   optional sint32 retryCount = 8 [default = -1];
   optional int64 stateId = 9; // The last written Global State ID
+  optional bytes routerFederatedState = 10; // Alignment context info for use 
with routers.
+}
+
+message RouterFederatedStateProto {

Review Comment:
   Moved to FederationProtocol.proto in the rbf module.
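   
   For context, a hedged sketch of how per-nameservice state IDs might be
   packed into the new bytes field (the map field and the generated builder
   method names are assumptions):
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   import com.google.protobuf.ByteString; // or the shaded thirdparty class
   
   // Pack nameservice -> last seen state ID pairs for the RPC header.
   Map<String, Long> stateIds = new HashMap<>();
   stateIds.put("ns0", 42L);
   stateIds.put("ns1", 17L);
   RouterFederatedStateProto.Builder b = RouterFederatedStateProto.newBuilder();
   stateIds.forEach(b::putNamespaceStateIds); // assumed generated setter
   ByteString routerFederatedState = b.build().toByteString();
   ```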



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18345) Enhance client protocol to propagate last seen state IDs for multiple nameservices.

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577526#comment-17577526
 ] 

ASF GitHub Bot commented on HADOOP-18345:
-

simbadzina commented on code in PR #4584:
URL: https://github.com/apache/hadoop/pull/4584#discussion_r941567962


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcConstants.java:
##
@@ -37,7 +37,9 @@ private RpcConstants() {
   
   
   public static final int INVALID_RETRY_COUNT = -1;
-  
+  // Special state ID value to indicate client request header has 
routerFederatedState set.
+  public static final long REQUEST_HEADER_NAMESPACE_STATEIDS_SET = -2L;

Review Comment:
   I agree. Removed.





> Enhance client protocol to propagate last seen state IDs for multiple 
> nameservices.
> ---
>
> Key: HADOOP-18345
> URL: https://issues.apache.org/jira/browse/HADOOP-18345
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The RPCHeader in the client protocol currently contains a single value to 
> indicate the last seen state ID for a namenode.
> {noformat}
> optional int64 stateId = 8; // The last seen Global State ID
> {noformat}
> When there are multiple namenodes, such as in router based federation, the 
> headers need to carry the state IDs for each of these nameservices that are 
> part of the federation.
> This change is a prerequisite for HDFS-13522: RBF: Support observer node from 
> Router-Based Federation



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] simbadzina commented on a diff in pull request #4584: HADOOP-18345: Enhance client protocol to propagate last seen state IDs for multiple nameservices.

2022-08-09 Thread GitBox


simbadzina commented on code in PR #4584:
URL: https://github.com/apache/hadoop/pull/4584#discussion_r941567962


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcConstants.java:
##
@@ -37,7 +37,9 @@ private RpcConstants() {
   
   
   public static final int INVALID_RETRY_COUNT = -1;
-  
+  // Special state ID value to indicate client request header has 
routerFederatedState set.
+  public static final long REQUEST_HEADER_NAMESPACE_STATEIDS_SET = -2L;

Review Comment:
   I agree. Removed.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577520#comment-17577520
 ] 

ASF GitHub Bot commented on HADOOP-18340:
-

huaxiangsun commented on code in PR #4608:
URL: https://github.com/apache/hadoop/pull/4608#discussion_r941552836


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ADeleteOnExit.java:
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.s3a;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.io.IOUtils;
+
+/**
+ * Test deleteOnExit for S3A.
+ * The following cases for deleteOnExit are tested:
+ *  1. A nonexistent file, which is added to the deleteOnExit set.
+ *  2. An existing file.
+ *  3. A file that is added to the deleteOnExit set first, then created.
+ *  4. A directory with some files under it.
+ */
+public class ITestS3ADeleteOnExit extends AbstractS3ATestBase {
+
+  private static final String PARENT_DIR_PATH_STR = "testDeleteOnExitDir";
+  private static final String NON_EXIST_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/nonExistFile";
+  private static final String INORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/inOrderFile";
+  private static final String OUT_OF_ORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/outOfOrderFile";
+  private static final String SUBDIR_PATH_STR =
+  PARENT_DIR_PATH_STR + "/subDir";
+  private static final String FILE_UNDER_SUBDIR_PATH_STR =
+  SUBDIR_PATH_STR + "/subDirFile";
+
+  @Test
+  public void testDeleteOnExit() throws Exception {
+FileSystem fs = getFileSystem();
+
+// Get a new filesystem object which is the same as fs.
+FileSystem s3aFs = new S3AFileSystem();
+s3aFs.initialize(fs.getUri(), fs.getConf());
+Path nonExistFilePath = path(NON_EXIST_FILE_PATH_STR);
+Path inOrderFilePath = path(INORDER_FILE_PATH_STR);
+Path outOfOrderFilePath = path(OUT_OF_ORDER_FILE_PATH_STR);
+Path subDirPath = path(SUBDIR_PATH_STR);
+Path fileUnderSubDirPath = path(FILE_UNDER_SUBDIR_PATH_STR);
+// 1. set up the test directory.
+Path dir = path("testDeleteOnExitDir");
+s3aFs.mkdirs(dir);
+
+// 2. Add a nonexistent file to the DeleteOnExit set.
+s3aFs.deleteOnExit(nonExistFilePath);
+ContractTestUtils.assertPathDoesNotExist(s3aFs,
+"File " + NON_EXIST_FILE_PATH_STR + " should not exist", 
nonExistFilePath);
+
+// 3. create a file and then add it to DeleteOnExit set.
+FSDataOutputStream stream = s3aFs.create(inOrderFilePath, true);
+byte[] data = ContractTestUtils.dataset(16, 'a', 26);
+try {
+  stream.write(data);
+} finally {
+  IOUtils.closeStream(stream);
+}
+
+ContractTestUtils.assertPathExists(s3aFs,
+"File " + INORDER_FILE_PATH_STR + " should exist", 
inOrderFilePath);
+
+s3aFs.deleteOnExit(inOrderFilePath);
+
+// 4. add a path to DeleteOnExit set first, then create it.
+s3aFs.deleteOnExit(outOfOrderFilePath);
+stream = s3aFs.create(outOfOrderFilePath, true);

Review Comment:
   Will do.





> deleteOnExit does not work with S3AFileSystem
> -
>
> Key: HADOOP-18340
> URL: https://issues.apache.org/jira/browse/HADOOP-18340
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.3
>Reporter: Huaxiang Sun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> When deleteOnExit is set on some paths, they are not removed when the file 
> system object is closed. The following exception is logged at info level.
> {code:java}
> 2022-07-15 19:29:12,552 [main] INFO  fs.FileSystem 
> (FileSystem.java:processDeleteOnExit(1810)) - Ignoring failure to 
> deleteOnExit for path /file, exception {}
> 

[GitHub] [hadoop] huaxiangsun commented on a diff in pull request #4608: HADOOP-18340 deleteOnExit does not work with S3AFileSystem

2022-08-09 Thread GitBox


huaxiangsun commented on code in PR #4608:
URL: https://github.com/apache/hadoop/pull/4608#discussion_r941552836


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ADeleteOnExit.java:
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.s3a;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.io.IOUtils;
+
+/**
+ * Test deleteOnExit for S3A.
+ * The following cases for deleteOnExit are tested:
+ *  1. A nonexistent file, which is added to the deleteOnExit set.
+ *  2. An existing file.
+ *  3. A file that is added to the deleteOnExit set first, then created.
+ *  4. A directory with some files under it.
+ */
+public class ITestS3ADeleteOnExit extends AbstractS3ATestBase {
+
+  private static final String PARENT_DIR_PATH_STR = "testDeleteOnExitDir";
+  private static final String NON_EXIST_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/nonExistFile";
+  private static final String INORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/inOrderFile";
+  private static final String OUT_OF_ORDER_FILE_PATH_STR =
+  PARENT_DIR_PATH_STR + "/outOfOrderFile";
+  private static final String SUBDIR_PATH_STR =
+  PARENT_DIR_PATH_STR + "/subDir";
+  private static final String FILE_UNDER_SUBDIR_PATH_STR =
+  SUBDIR_PATH_STR + "/subDirFile";
+
+  @Test
+  public void testDeleteOnExit() throws Exception {
+FileSystem fs = getFileSystem();
+
+// Get a new filesystem object which is the same as fs.
+FileSystem s3aFs = new S3AFileSystem();
+s3aFs.initialize(fs.getUri(), fs.getConf());
+Path nonExistFilePath = path(NON_EXIST_FILE_PATH_STR);
+Path inOrderFilePath = path(INORDER_FILE_PATH_STR);
+Path outOfOrderFilePath = path(OUT_OF_ORDER_FILE_PATH_STR);
+Path subDirPath = path(SUBDIR_PATH_STR);
+Path fileUnderSubDirPath = path(FILE_UNDER_SUBDIR_PATH_STR);
+// 1. set up the test directory.
+Path dir = path("testDeleteOnExitDir");
+s3aFs.mkdirs(dir);
+
+// 2. Add a nonexistent file to the DeleteOnExit set.
+s3aFs.deleteOnExit(nonExistFilePath);
+ContractTestUtils.assertPathDoesNotExist(s3aFs,
+"File " + NON_EXIST_FILE_PATH_STR + " should not exist", 
nonExistFilePath);
+
+// 3. create a file and then add it to DeleteOnExit set.
+FSDataOutputStream stream = s3aFs.create(inOrderFilePath, true);
+byte[] data = ContractTestUtils.dataset(16, 'a', 26);
+try {
+  stream.write(data);
+} finally {
+  IOUtils.closeStream(stream);
+}
+
+ContractTestUtils.assertPathExists(s3aFs,
+"File " + INORDER_FILE_PATH_STR + " should exist", 
inOrderFilePath);
+
+s3aFs.deleteOnExit(inOrderFilePath);
+
+// 4. add a path to DeleteOnExit set first, then create it.
+s3aFs.deleteOnExit(outOfOrderFilePath);
+stream = s3aFs.create(outOfOrderFilePath, true);

Review Comment:
   Will do.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18382) Upgrade AWS SDK to V2 - Prerequisites

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577516#comment-17577516
 ] 

ASF GitHub Bot commented on HADOOP-18382:
-

hadoop-yetus commented on PR #4698:
URL: https://github.com/apache/hadoop/pull/4698#issuecomment-1209595644

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 9 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | -1 :x: |  javac  |   0m 39s | 
[/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4698/5/artifact/out/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 23 new + 43 
unchanged - 1 fixed = 66 total (was 44)  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  javac  |   0m 33s | 
[/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4698/5/artifact/out/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 
with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 23 new 
+ 42 unchanged - 1 fixed = 65 total (was 43)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 51s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  98m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4698/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4698 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 2fd87491154a 4.15.0-65-generic #74-Ubuntu SMP Tue 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4698: HADOOP-18382. SDK upgrade prerequisites

2022-08-09 Thread GitBox


hadoop-yetus commented on PR #4698:
URL: https://github.com/apache/hadoop/pull/4698#issuecomment-1209595644

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 9 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | -1 :x: |  javac  |   0m 39s | 
[/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4698/5/artifact/out/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 23 new + 43 
unchanged - 1 fixed = 66 total (was 44)  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  javac  |   0m 33s | 
[/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4698/5/artifact/out/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 
with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 23 new 
+ 42 unchanged - 1 fixed = 65 total (was 43)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 51s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  98m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4698/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4698 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 2fd87491154a 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eb95a0a7e2426543d10e34b2be6ec1891829a9ec |
   | Default Java | Private 

[GitHub] [hadoop] slfan1989 commented on pull request #4712: YARN-6539. Create SecureLogin inside Router.

2022-08-09 Thread GitBox


slfan1989 commented on PR #4712:
URL: https://github.com/apache/hadoop/pull/4712#issuecomment-1209581105

   @goiri Please help review the code again. Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4711: YARN-11236. Implement FederationReservationHomeSubClusterStore With MemoryStore.

2022-08-09 Thread GitBox


slfan1989 commented on PR #4711:
URL: https://github.com/apache/hadoop/pull/4711#issuecomment-1209574795

   @goiri Please help review the code again. Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lfxy commented on pull request #4567: HDFS-16663. Allow block reconstruction pending timeout refreshable to increase decommission performance

2022-08-09 Thread GitBox


lfxy commented on PR #4567:
URL: https://github.com/apache/hadoop/pull/4567#issuecomment-1209571853

   @jojochuang Could you help to review and merge this patch if you have time? 
Thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims commented on pull request #4684: HDFS-16714: Replace okhttp by apache http client

2022-08-09 Thread GitBox


iwasakims commented on PR #4684:
URL: https://github.com/apache/hadoop/pull/4684#issuecomment-1209481555

   @pan3793
   I got the following error when building trunk with the patch applied.
   
   ```
   $ mvn clean install -DskipTests
   ...
   [WARNING]
   Dependency convergence error for 
org.jetbrains.kotlin:kotlin-stdlib:jar:1.4.0:test paths to dependency are:
   +-org.apache.hadoop:hadoop-common:jar:3.4.0-SNAPSHOT
 +-com.squareup.okhttp3:mockwebserver:jar:4.9.3:test
   +-com.squareup.okhttp3:okhttp:jar:4.9.3:test
 +-com.squareup.okio:okio:jar:2.8.0:test
   +-org.jetbrains.kotlin:kotlin-stdlib:jar:1.4.0:test
   and
   +-org.apache.hadoop:hadoop-common:jar:3.4.0-SNAPSHOT
 +-com.squareup.okhttp3:mockwebserver:jar:4.9.3:test
   +-com.squareup.okhttp3:okhttp:jar:4.9.3:test
 +-org.jetbrains.kotlin:kotlin-stdlib:jar:1.4.10:test
   and
   +-org.apache.hadoop:hadoop-common:jar:3.4.0-SNAPSHOT
 +-com.squareup.okhttp3:mockwebserver:jar:4.9.3:test
   +-org.jetbrains.kotlin:kotlin-stdlib-jdk8:jar:1.4.10:test
 +-org.jetbrains.kotlin:kotlin-stdlib:jar:1.4.10:test
   and
   +-org.apache.hadoop:hadoop-common:jar:3.4.0-SNAPSHOT
 +-com.squareup.okhttp3:mockwebserver:jar:4.9.3:test
   +-org.jetbrains.kotlin:kotlin-stdlib-jdk8:jar:1.4.10:test
 +-org.jetbrains.kotlin:kotlin-stdlib-jdk7:jar:1.4.10:test
   +-org.jetbrains.kotlin:kotlin-stdlib:jar:1.4.10:test
   
   [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
failed with message:
   Failed while enforcing releasability. See above detailed error message.
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snmvaughan opened a new pull request, #4725: HDFS-16688. Unresolved Hosts during startup are not synced by JournalNodes

2022-08-09 Thread GitBox


snmvaughan opened a new pull request, #4725:
URL: https://github.com/apache/hadoop/pull/4725

   ### Description of PR
   
   During JournalNode startup, the JournalNode builds the list of servers in the 
JournalNode set, ignoring hostnames that cannot be resolved.  In environments 
with dynamic IP address allocation this means that the JournalNodeSyncer will 
never sync with hosts that aren't resolvable during startup.
   
   Allow unresolved names during startup, so that when the hosts become 
available they will be included in JournalNode synchronization.  This also 
required updating the mechanisms that return a string representation including 
the IP address, so that they can handle unresolved addresses.
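
   As a rough illustration of the approach (hypothetical names, not the literal patch): keep unresolved addresses as placeholders instead of dropping them, and retry resolution lazily at sync time.

   ```java
   import java.net.InetSocketAddress;
   import java.util.ArrayList;
   import java.util.List;

   // Hypothetical sketch of the idea described above, not the actual patch.
   public final class JournalPeerList {
     private JournalPeerList() {}

     /** Parse host:port strings, keeping hosts that DNS cannot resolve yet. */
     public static List<InetSocketAddress> parsePeers(List<String> hostPorts) {
       List<InetSocketAddress> peers = new ArrayList<>();
       for (String hp : hostPorts) {
         int colon = hp.lastIndexOf(':');
         String host = hp.substring(0, colon);
         int port = Integer.parseInt(hp.substring(colon + 1));
         // If DNS cannot resolve the host yet, the address is created
         // unresolved; previously such hosts were dropped from the list.
         peers.add(new InetSocketAddress(host, port));
       }
       return peers;
     }

     /** Retry DNS resolution lazily, e.g. before each sync attempt. */
     public static InetSocketAddress maybeResolve(InetSocketAddress addr) {
       if (!addr.isUnresolved()) {
         return addr;
       }
       InetSocketAddress retry =
           new InetSocketAddress(addr.getHostString(), addr.getPort());
       return retry.isUnresolved() ? addr : retry;
     }
   }
   ```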
   
   ### How was this patch tested?
   
   Integration tests were performed against an HA configuration running in 
Kubernetes on Java 11.  The cluster was started with fewer replicas than the 
number of JournalNodes listed in the configuration, and the logs showed 
attempts to sync against the missing replicas.  The missing JournalNode 
instances were incorporated into synchronization and NameNode shared edits once 
they were started, without any intervention.
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snmvaughan opened a new pull request, #4724: HDFS-16686. GetJournalEditServlet fails to authorize valid Kerberos request

2022-08-09 Thread GitBox


snmvaughan opened a new pull request, #4724:
URL: https://github.com/apache/hadoop/pull/4724

   ### Description of PR
   
   GetJournalEditServlet uses request.getRemoteUser() to determine the 
remoteShortName for Kerberos authorization, which fails to match when the 
JournalNode uses its own Kerberos principal (e.g. jn/@).
   
   This can be fixed by using the UserGroupInformation provided by the base 
DfsServlet class via the getUGI(request, conf) call.
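
   A minimal sketch of the fix (simplified types and illustrative method names, not the literal patch):

   ```java
   import java.io.IOException;
   import javax.servlet.http.HttpServletRequest;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.security.UserGroupInformation;

   // Sketch only: the real servlet extends DfsServlet, which provides getUGI().
   abstract class JournalEditAuthSketch {

     protected abstract UserGroupInformation getUGI(HttpServletRequest request,
         Configuration conf) throws IOException;

     boolean isValidRequestor(HttpServletRequest request, Configuration conf)
         throws IOException {
       // Before: request.getRemoteUser() could return a full principal such
       // as "jn/host@REALM", which never matched the expected short names.
       // After: let the security layer map the principal to its short name.
       UserGroupInformation ugi = getUGI(request, conf);
       String remoteShortName = ugi.getShortUserName();
       return isAuthorized(remoteShortName);
     }

     /** Stands in for the servlet's existing per-name authorization check. */
     abstract boolean isAuthorized(String shortName);
   }
   ```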
   
   ### How was this patch tested?
   
   Integration tests were performed against an HA configuration running in 
Kubernetes on Java 11.  With the patch, exceptions that had previously reported 
expected Kerberos principals containing an IP address string were eliminated.
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snmvaughan opened a new pull request, #4723: HDFS-16684. Exclude the current JournalNode

2022-08-09 Thread GitBox


snmvaughan opened a new pull request, #4723:
URL: https://github.com/apache/hadoop/pull/4723

   ### Description of PR
   
   The JournalNodeSyncer will include the local instance in syncing when using 
a bind host (e.g. 0.0.0.0).  There is a mechanism that is supposed to exclude 
the local instance, but it doesn't recognize the meta-address as a local 
address.
   
   Running with bind addresses set to 0.0.0.0, the JournalNodeSyncer will log 
attempts to sync with itself as part of the normal syncing rotation.  For an HA 
configuration running 3 JournalNodes, the "other" list used by the 
JournalNodeSyncer will include 3 proxies.
   
   The fix excludes bound local addresses, including the case where a wildcard 
address is used in the bind host configuration.
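
   A sketch of the exclusion test (illustrative only, not the literal patch): treat a configured address as local if it is a wildcard or loopback address, or if it is assigned to one of this host's interfaces.

   ```java
   import java.net.InetAddress;
   import java.net.InetSocketAddress;
   import java.net.NetworkInterface;
   import java.net.SocketException;

   // Sketch of deciding whether a configured JournalNode address refers to
   // this host, so the syncer can skip itself. Not the actual patch code.
   final class LocalAddressCheck {
     private LocalAddressCheck() {}

     static boolean isLocal(InetSocketAddress candidate) {
       if (candidate.isUnresolved()) {
         return false; // cannot tell yet; treat as remote for now
       }
       InetAddress addr = candidate.getAddress();
       // A wildcard bind (0.0.0.0 or ::) or loopback always means this host.
       if (addr.isAnyLocalAddress() || addr.isLoopbackAddress()) {
         return true;
       }
       try {
         // Otherwise it is local iff some NIC on this machine owns it.
         return NetworkInterface.getByInetAddress(addr) != null;
       } catch (SocketException e) {
         return false;
       }
     }
   }
   ```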
   
   ### How was this patch tested?
   
   An additional unit test was added to verify that the call to 
getJournalAddrList drops the current instance.
   
   Integration tests were conducted against an HA configuration in Kubernetes 
on Java 11.  After the patch, all JournalNodes stopped attempting to sync with 
themselves.
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18382) Upgrade AWS SDK to V2 - Prerequisites

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577450#comment-17577450
 ] 

ASF GitHub Bot commented on HADOOP-18382:
-

hadoop-yetus commented on PR #4698:
URL: https://github.com/apache/hadoop/pull/4698#issuecomment-1209452092

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | -1 :x: |  javac  |   0m 41s | 
[/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4698/4/artifact/out/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 23 new + 43 
unchanged - 1 fixed = 66 total (was 44)  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  javac  |   0m 33s | 
[/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4698/4/artifact/out/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 
with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 23 new 
+ 42 unchanged - 1 fixed = 65 total (was 43)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 31s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 110m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4698/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4698 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 180df90da590 4.15.0-175-generic #184-Ubuntu SMP Thu 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4698: HADOOP-18382. SDK upgrade prerequisites

2022-08-09 Thread GitBox


hadoop-yetus commented on PR #4698:
URL: https://github.com/apache/hadoop/pull/4698#issuecomment-1209452092

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | -1 :x: |  javac  |   0m 41s | 
[/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4698/4/artifact/out/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 23 new + 43 
unchanged - 1 fixed = 66 total (was 44)  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  javac  |   0m 33s | 
[/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4698/4/artifact/out/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 
with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 23 new 
+ 42 unchanged - 1 fixed = 65 total (was 43)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 31s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 110m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4698/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4698 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 180df90da590 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8b7ae999e278a08b1eab811e27794477bae5d359 |
   | Default Java | Private 

[jira] [Comment Edited] (HADOOP-18393) Hadoop 3.3.2 has CVEs coming from dependencies

2022-08-09 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577381#comment-17577381
 ] 

Steve Loughran edited comment on HADOOP-18393 at 8/9/22 12:22 PM:
--

All the Hadoop CVEs are fixed. Given that we do not announce CVEs until we have 
issued an updated set of artefacts for all the branches we keep up to date with 
security fixes (2.10.x, 3.2.x, 3.3.x), you can assume that whenever you see a 
Hadoop CVE it means "you should upgrade to the latest release on that branch, 
or, even better, to the latest branch we are shipping."

As for the other issues, as all our updates are done in public, you can look 
through the commit log and JIRA to see the status of those. 

I am 100% confident there are other transitive dependencies which have issues. 
One fundamental problem here is that upgrading some libraries produces a 
release which is incompatible at the binary level with many shipping 
applications. As a result, they won't upgrade, which would make it impossible 
to get an upgrade fixing our own CVEs into those projects. 

See https://steveloughran.blogspot.com/2022/08/transitive-issues.html for my 
thoughts on this.

One dependency which is tractable, but for which we need engineering support, 
is an upgrade of our shaded protobuf library: 
https://issues.apache.org/jira/browse/HADOOP-18197

If someone can provide a fix for this which works by the end of the month, then 
we can get it into the next 3.3.5 release. Are you able and willing to 
contribute this? Or at least get involved in testing?

Otherwise, we really need JavaScript experts to help us keep the YARN 
UI up to date.

Either way, we and all other open source projects depend on contributions 
from the broader community, including people such as yourself. Anything you 
can do here would be very welcome. 

Closing as DUPLICATE.


was (Author: ste...@apache.org):
All the Hadoop CVEs are fixed. Given that we do not announce CVEs until we have 
really issued an updated set of artefacts for all the branches we keep up to 
date with security fixes (2.10.x, 3.2.x, 3.3.x), you can assume that whenever 
you see a Hadoop CVE it means "you should upgrade to the latest release on that 
branch, or, even better, to the latest branch we are shipping."

As for the other issues, as all our updates are done in public, you can look 
through the commit log and JIRA to see the status of those. 

I am 100% confident there are other transitive dependencies which have issues. 
One fundamental problem here is that upgrading some libraries produces a 
release which is incompatible at the binary level with many shipping 
applications. As a result, they won't upgrade, which would make it impossible 
to get an upgrade fixing our own CVEs into those projects. 

See https://steveloughran.blogspot.com/2022/08/transitive-issues.html for my 
thoughts on this.

One dependency which is tractable, but for which we need engineering support, 
is an upgrade of our shaded protobuf library: 
https://issues.apache.org/jira/browse/HADOOP-18197

If someone can provide a fix for this which works by the end of the month, then 
we can get it into the next 3.3.5 release. Are you able and willing to 
contribute this? Or at least get involved in testing?

Otherwise, we really need JavaScript experts to help us keep the YARN 
UI up to date.

Either way, we and all other open source projects depend on contributions 
from the broader community, including people such as yourself. Anything you 
can do here would be very welcome. 

Closing as DUPLICATE.

> Hadoop 3.3.2 has CVEs coming from dependencies
> --
>
> Key: HADOOP-18393
> URL: https://issues.apache.org/jira/browse/HADOOP-18393
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.2
>Reporter: suman agrawal
>Priority: Major
> Fix For: 3.3.4
>
>
> Hi Team,
>  
> Hadoop version 3.3.1, which is compatible with our application, has 
> vulnerabilities:
> Is there any plan to fix these?
> CVE-2021-37404 hadoop versions < 3.3.2 Apache Hadoop potential heap buffer 
> overflow in libhdfs.
> CVE-2020-10650 jackson < 2.9.10.4
> CVE-2021-33036 hadoop < 3.3.2
> CVE-2022-31159 aws xfer manager download < 1.12.262



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-18393) Hadoop 3.3.2 has CVEs coming from dependencies

2022-08-09 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577381#comment-17577381
 ] 

Steve Loughran edited comment on HADOOP-18393 at 8/9/22 12:22 PM:
--

All the Hadoop CVEs are fixed. Given that we do not announce CVEs until we have 
issued an updated set of artefacts for all the branches we keep up to date with 
security fixes (2.10.x, 3.2.x, 3.3.x), you can assume that whenever you see a 
Hadoop CVE it means "you should upgrade to the latest release on that branch, 
or, even better, to the latest branch we are shipping."

As for the other issues, as all our updates are done in public, you can look 
through the commit log and JIRA to see the status of those. 

I am 100% confident there are other transitive dependencies which have issues. 
One fundamental problem here is that upgrading some libraries produces a 
release which is incompatible at the binary level with many shipping 
applications. As a result, they won't upgrade, which would make it impossible 
to get an upgrade fixing our own CVEs into those projects.

See https://steveloughran.blogspot.com/2022/08/transitive-issues.html for my 
thoughts on this.

One dependency which is tractable, but for which we need engineering support, 
is an upgrade of our shaded protobuf library: 
https://issues.apache.org/jira/browse/HADOOP-18197

If someone can provide a fix for this which works by the end of the month, then 
we can get it into the next 3.3.5 release. Are you able and willing to 
contribute this? Or at least get involved in testing?

Otherwise, we really need JavaScript experts to help us keep the YARN 
UI up to date.

Either way, we and all other open source projects depend on contributions 
from the broader community, including people such as yourself. Anything you 
can do here would be very welcome. 

Closing as DUPLICATE.


was (Author: ste...@apache.org):
All the Hadoop CVEs are fixed. Given that we do not announce CVEs until we have 
issued an updated set of artefacts for all the branches we keep up to date with 
security fixes (2.10.x, 3.2.x, 3.3.x), you can assume that whenever you see a 
Hadoop CVE it means "you should upgrade to the latest release on that branch, 
or, even better, to the latest branch we are shipping."

As for the other issues, as all our updates are done in public, you can look 
through the commit log and JIRA to see the status of those. 

I am 100% confident there are other transitive dependencies which have issues. 
One fundamental problem here is that upgrading some libraries produces a 
release which is incompatible at the binary level with many shipping 
applications. As a result, they won't upgrade, which would make it impossible 
to get an upgrade fixing our own CVEs into those projects. 

See https://steveloughran.blogspot.com/2022/08/transitive-issues.html for my 
thoughts on this.

One dependency which is tractable, but for which we need engineering support, 
is an upgrade of our shaded protobuf library: 
https://issues.apache.org/jira/browse/HADOOP-18197

If someone can provide a fix for this which works by the end of the month, then 
we can get it into the next 3.3.5 release. Are you able and willing to 
contribute this? Or at least get involved in testing?

Otherwise, we really need JavaScript experts to help us keep the YARN 
UI up to date.

Either way, we and all other open source projects depend on contributions 
from the broader community, including people such as yourself. Anything you 
can do here would be very welcome. 

Closing as DUPLICATE.

> Hadoop 3.3.2 has CVEs coming from dependencies
> --
>
> Key: HADOOP-18393
> URL: https://issues.apache.org/jira/browse/HADOOP-18393
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.2
>Reporter: suman agrawal
>Priority: Major
> Fix For: 3.3.4
>
>
> Hi Team,
>  
> Hadoop version 3.3.1, which is compatible with our application, has 
> vulnerabilities:
> Is there any plan to fix these?
> CVE-2021-37404 hadoop versions < 3.3.2 Apache Hadoop potential heap buffer 
> overflow in libhdfs.
> CVE-2020-10650 jackson < 2.9.10.4
> CVE-2021-33036 hadoop < 3.3.2
> CVE-2022-31159 aws xfer manager download < 1.12.262



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18393) Hadoop 3.3.2 has CVEs coming from dependencies

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18393:

Summary: Hadoop 3.3.2 has CVEs coming from dependencies  (was: Hadoop 3.3.2 
have CVE coming from dependencies)

> Hadoop 3.3.2 has CVEs coming from dependencies
> --
>
> Key: HADOOP-18393
> URL: https://issues.apache.org/jira/browse/HADOOP-18393
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.2
>Reporter: suman agrawal
>Priority: Major
> Fix For: 3.3.4
>
>
> Hi Team,
>  
> Hadoop version 3.3.1, which is compatible with our application, has 
> vulnerabilities:
> Is there any plan to fix these?
> CVE-2021-37404 hadoop versions < 3.3.2 Apache Hadoop potential heap buffer 
> overflow in libhdfs.
> CVE-2020-10650 jackson < 2.9.10.4
> CVE-2021-33036 hadoop < 3.3.2
> CVE-2022-31159 aws xfer manager download < 1.12.262



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18393) Hadoop 3.3.2 have CVE coming from dependencies

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18393.
-
Fix Version/s: 3.3.4
   Resolution: Duplicate

> Hadoop 3.3.2 have CVE coming from dependencies
> --
>
> Key: HADOOP-18393
> URL: https://issues.apache.org/jira/browse/HADOOP-18393
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.2
>Reporter: suman agrawal
>Priority: Major
> Fix For: 3.3.4
>
>
> Hi Team,
>  
> Hadoop version 3.3.1, which is compatible with our application, has 
> vulnerabilities:
> Is there any plan to fix these?
> CVE-2021-37404 hadoop versions < 3.3.2 Apache Hadoop potential heap buffer 
> overflow in libhdfs.
> CVE-2020-10650 jackson < 2.9.10.4
> CVE-2021-33036 hadoop < 3.3.2
> CVE-2022-31159 aws xfer manager download < 1.12.262



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18393) Hadoop 3.3.2 have CVE coming from dependencies

2022-08-09 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577381#comment-17577381
 ] 

Steve Loughran commented on HADOOP-18393:
-

All the Hadoop CVEs are fixed. Given that we do not announce CVEs until we have 
really issued an updated set of artefacts for all the branches we keep up to 
date with security fixes (2.10.x, 3.2.x, 3.3.x), you can assume that whenever 
you see a Hadoop CVE it means "you should upgrade to the latest release on that 
branch, or, even better, to the latest branch we are shipping."

As for the other issues, as all our updates are done in public, you can look 
through the commit log and JIRA to see the status of those. 

I am 100% confident there are other transitive dependencies which have issues. 
One fundamental problem here is that upgrading some libraries produces a 
release which is incompatible at the binary level with many shipping 
applications. As a result, they won't upgrade, which would make it impossible 
to get an upgrade fixing our own CVEs into those projects. 

See https://steveloughran.blogspot.com/2022/08/transitive-issues.html for my 
thoughts on this.

One dependency which is tractable, but for which we need engineering support, 
is an upgrade of our shaded protobuf library: 
https://issues.apache.org/jira/browse/HADOOP-18197

If someone can provide a fix for this which works by the end of the month, then 
we can get it into the next 3.3.5 release. Are you able and willing to 
contribute this? Or at least get involved in testing?

Otherwise, we really need JavaScript experts to help us keep the YARN 
UI up to date.

Either way, we and all other open source projects depend on contributions 
from the broader community, including people such as yourself. Anything you 
can do here would be very welcome. 

Closing as DUPLICATE.

> Hadoop 3.3.2 have CVE coming from dependencies
> --
>
> Key: HADOOP-18393
> URL: https://issues.apache.org/jira/browse/HADOOP-18393
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.2
>Reporter: suman agrawal
>Priority: Major
>
> Hi Team,
>  
> Hadoop version 3.3.1, which is compatible with our application, has 
> vulnerabilities:
> Is there any plan to fix these?
> CVE-2021-37404 hadoop versions < 3.3.2 Apache Hadoop potential heap buffer 
> overflow in libhdfs.
> CVE-2020-10650 jackson < 2.9.10.4
> CVE-2021-33036 hadoop < 3.3.2
> CVE-2022-31159 aws xfer manager download < 1.12.262



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18393) Hadoop 3.3.2 have CVE coming from dependencies

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18393:

Description: 
Hi Team,

 

Hadoop version 3.3.1, which is compatible with our application, has 
vulnerabilities:

Is there any plan to fix these?

CVE-2021-37404 hadoop versions < 3.3.2 Apache Hadoop potential heap buffer 
overflow in libhdfs.
CVE-2020-10650 jackson < 2.9.10.4
CVE-2021-33036 hadoop < 3.3.2
CVE-2022-31159 aws xfer manager download < 1.12.262

  was:
Hi Team,

 

Hadoop version 3.3.1, which is compatible with our application, has 
vulnerabilities:

Is there any plan to fix these?

CVE-2021-37404 hadoop versions < 3.3.2 Apache Hadoop potential heap buffer 
overflow in libhdfs.
CVE-2020-10650 jackson < 2.9.10.4
CVE-2021-33036 hadoop < 3.3.2
CVE-2022-31159 aws


> Hadoop 3.3.2 have CVE coming from dependencies
> --
>
> Key: HADOOP-18393
> URL: https://issues.apache.org/jira/browse/HADOOP-18393
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.2
>Reporter: suman agrawal
>Priority: Major
>
> Hi Team,
>  
> Hadoop version 3.3.1, which is compatible with our application, has 
> vulnerabilities:
> Is there any plan to fix these?
> CVE-2021-37404 hadoop versions < 3.3.2 Apache Hadoop potential heap buffer 
> overflow in libhdfs.
> CVE-2020-10650 jackson < 2.9.10.4
> CVE-2021-33036 hadoop < 3.3.2
> CVE-2022-31159 aws xfer manager download < 1.12.262



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18393) Hadoop 3.3.2 have CVE coming from dependencies

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18393:

Description: 
Hi Team,

 

Hadoop version 3.3.1, which is compatible with our application, has 
vulnerabilities:

Is there any plan to fix these?

CVE-2021-37404 hadoop versions < 3.3.2 Apache Hadoop potential heap buffer 
overflow in libhdfs.
CVE-2020-10650 jackson < 2.9.10.4
CVE-2021-33036 hadoop < 3.3.2
CVE-2022-31159 aws

  was:
Hi Team,

 

Hadoop version 3.3.1, which is compatible with our application, has 
vulnerabilities:

Is there any plan to fix these?

CVE-2021-37404
CVE-2020-10650
CVE-2021-33036
CVE-2022-31159


> Hadoop 3.3.2 have CVE coming from dependencies
> --
>
> Key: HADOOP-18393
> URL: https://issues.apache.org/jira/browse/HADOOP-18393
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.2
>Reporter: suman agrawal
>Priority: Major
>
> Hi Team,
>  
> Hadoop version 3.3.1, which is compatible with our application, has 
> vulnerabilities:
> Is there any plan to fix these?
> CVE-2021-37404 hadoop versions < 3.3.2 Apache Hadoop potential heap buffer 
> overflow in libhdfs.
> CVE-2020-10650 jackson < 2.9.10.4
> CVE-2021-33036 hadoop < 3.3.2
> CVE-2022-31159 aws



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18178) Upgrade jackson to 2.13.2 and jackson-databind to 2.13.2.2. CVE-2020-36518

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18178:

Component/s: build
 security

> Upgrade jackson to 2.13.2 and jackson-databind to 2.13.2.2. CVE-2020-36518
> --
>
> Key: HADOOP-18178
> URL: https://issues.apache.org/jira/browse/HADOOP-18178
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security
>Affects Versions: 3.3.2
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.3
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> CVE-2020-36518
> https://github.com/FasterXML/jackson-databind/issues/2816



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18178) Upgrade jackson to 2.13.2 and jackson-databind to 2.13.2.2. CVE-2020-36518

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18178:

Affects Version/s: 3.3.2

> Upgrade jackson to 2.13.2 and jackson-databind to 2.13.2.2. CVE-2020-36518
> --
>
> Key: HADOOP-18178
> URL: https://issues.apache.org/jira/browse/HADOOP-18178
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.2
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.3
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> CVE-2020-36518
> https://github.com/FasterXML/jackson-databind/issues/2816



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18178) Upgrade jackson to 2.13.2 and jackson-databind to 2.13.2.2. CVE-2020-36518

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18178:

Description: 
CVE-2020-36518
https://github.com/FasterXML/jackson-databind/issues/2816

  was:https://github.com/FasterXML/jackson-databind/issues/2816


> Upgrade jackson to 2.13.2 and jackson-databind to 2.13.2.2. CVE-2020-36518
> --
>
> Key: HADOOP-18178
> URL: https://issues.apache.org/jira/browse/HADOOP-18178
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.3
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> CVE-2020-36518
> https://github.com/FasterXML/jackson-databind/issues/2816



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18178) Upgrade jackson to 2.13.2 and jackson-databind to 2.13.2.2. CVE-2020-36518

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18178:

Summary: Upgrade jackson to 2.13.2 and jackson-databind to 2.13.2.2. 
CVE-2020-36518  (was: Upgrade jackson to 2.13.2 and jackson-databind to 
2.13.2.2)

> Upgrade jackson to 2.13.2 and jackson-databind to 2.13.2.2. CVE-2020-36518
> --
>
> Key: HADOOP-18178
> URL: https://issues.apache.org/jira/browse/HADOOP-18178
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.3
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> https://github.com/FasterXML/jackson-databind/issues/2816



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18393) Hadoop 3.3.2 have CVE coming from dependencies

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18393:

Component/s: build

> Hadoop 3.3.2 have CVE coming from dependencies
> --
>
> Key: HADOOP-18393
> URL: https://issues.apache.org/jira/browse/HADOOP-18393
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.2
>Reporter: suman agrawal
>Priority: Major
>
> Hi Team,
>  
> Hadoop version 3.3.1, which is compatible with our application, has 
> vulnerabilities:
> Is there any plan to fix these?
> CVE-2021-37404
> CVE-2020-10650
> CVE-2021-33036
> CVE-2022-31159



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18393) Hadoop 3.3.2 have CVE coming from dependencies

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18393:

Affects Version/s: 3.3.2

> Hadoop 3.3.2 have CVE coming from dependencies
> --
>
> Key: HADOOP-18393
> URL: https://issues.apache.org/jira/browse/HADOOP-18393
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.2
>Reporter: suman agrawal
>Priority: Major
>
> Hi Team,
>  
> Hadoop version 3.3.1, which is compatible with our application, has 
> vulnerabilities:
> Is there any plan to fix these?
> CVE-2021-37404
> CVE-2020-10650
> CVE-2021-33036
> CVE-2022-31159



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18393) Hadoop 3.3.2 have CVE coming from dependencies

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18393:

Summary: Hadoop 3.3.2 have CVE coming from dependencies  (was: Hadoop 3.3.2 
have CVE cmoing from dependecies)

> Hadoop 3.3.2 have CVE coming from dependencies
> --
>
> Key: HADOOP-18393
> URL: https://issues.apache.org/jira/browse/HADOOP-18393
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: suman agrawal
>Priority: Major
>
> Hi Team,
>  
> Hadoop version 3.3.1, which is compatible with our application, has 
> vulnerabilities:
> Is there any plan to fix these?
> CVE-2021-37404
> CVE-2020-10650
> CVE-2021-33036
> CVE-2022-31159



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18344) AWS SDK update to 1.12.262 to address jackson CVE-2018-7489 and AWS CVE-2022-31159

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18344:

Description: 
The CVE [CVE-2022-31159|https://nvd.nist.gov/vuln/detail/CVE-2022-31159] is a 
vulnerability in path resolution in the AWS SDK transfer manager during 
downloads.

*The s3a client is not exposed to this.* It uses the class for local file 
upload and for object copying, but not for download.

It may affect downstream use by other applications.

Yet another Jackson CVE in the AWS SDK:
https://github.com/apache/hadoop/pull/4491/commits/5496816b472473eb7a9c174b7d3e69b6eee1e271

Maybe we need to keep a list of all the shaded Jacksons we get on the classpath 
and have a process for upgrading them all at the same time.

  was:
Yet another Jackson CVE in the AWS SDK:
https://github.com/apache/hadoop/pull/4491/commits/5496816b472473eb7a9c174b7d3e69b6eee1e271

Maybe we need to keep a list of all the shaded Jacksons we get on the classpath 
and have a process for upgrading them all at the same time.


> AWS SDK update to 1.12.262 to address jackson  CVE-2018-7489 and AWS 
> CVE-2022-31159
> ---
>
> Key: HADOOP-18344
> URL: https://issues.apache.org/jira/browse/HADOOP-18344
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0, 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> The CVE [CVE-2022-31159|https://nvd.nist.gov/vuln/detail/CVE-2022-31159] is a 
> vulnerability in path resolution in the AWS SDK transfer manager during 
> downloads.
> *The s3a client is not exposed to this.* It uses the class for local file 
> upload and for object copying, but not for download.
> It may affect downstream use by other applications.
> Yet another Jackson CVE in the AWS SDK:
> https://github.com/apache/hadoop/pull/4491/commits/5496816b472473eb7a9c174b7d3e69b6eee1e271
> Maybe we need to keep a list of all the shaded Jacksons we get on the 
> classpath and have a process for upgrading them all at the same time



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snmvaughan commented on pull request #4415: MAPREDUCE-7386. Maven parallel builds (skipping tests) fail

2022-08-09 Thread GitBox


snmvaughan commented on PR #4415:
URL: https://github.com/apache/hadoop/pull/4415#issuecomment-1209280016

   @ayushtkn What other random behavior did you see?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18344) AWS SDK update to 1.12.262 to address jackson CVE-2018-7489 and AWS CVE-2022-31159

2022-08-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18344:

Summary: AWS SDK update to 1.12.262 to address jackson  CVE-2018-7489 and 
AWS CVE-2022-31159  (was: AWS SDK update to 1.12.262 to address jackson  
CVE-2018-7489)

> AWS SDK update to 1.12.262 to address jackson  CVE-2018-7489 and AWS 
> CVE-2022-31159
> ---
>
> Key: HADOOP-18344
> URL: https://issues.apache.org/jira/browse/HADOOP-18344
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0, 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Yet another Jackson CVE in the AWS SDK:
> https://github.com/apache/hadoop/pull/4491/commits/5496816b472473eb7a9c174b7d3e69b6eee1e271
> Maybe we need to keep a list of all the shaded Jacksons we get on the 
> classpath and have a process for upgrading them all at the same time



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18333) hadoop-client-runtime impact by CVE-2022-2047 CVE-2022-2048 due to shaded jetty

2022-08-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577367#comment-17577367
 ] 

ASF GitHub Bot commented on HADOOP-18333:
-

steveloughran commented on PR #4600:
URL: https://github.com/apache/hadoop/pull/4600#issuecomment-1209274370

   @jojochuang can you do a rebase and push this up so we can see what yetus 
says now?




> hadoop-client-runtime impact by CVE-2022-2047 CVE-2022-2048 due to shaded 
> jetty
> ---
>
> Key: HADOOP-18333
> URL: https://issues.apache.org/jira/browse/HADOOP-18333
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.3
>Reporter: phoebe chen
>Assignee: groot
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> CVE-2022-2047 and CVE-2022-2048 is recently found for Eclipse Jetty, and 
> impacts 9.4.0 thru 9.4.46.
> In latest 3.3.3 of hadoop-client-runtime, it shaded 9.4.43.v20210629 version 
> jetty which is impacted.
> In Trunk, Jetty is in version 9.4.44.v20210927, which is still impacted.
> Need to upgrade Jetty Version. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4600: HADOOP-18333. Upgrade jetty version to 9.4.48.v20220622

2022-08-09 Thread GitBox


steveloughran commented on PR #4600:
URL: https://github.com/apache/hadoop/pull/4600#issuecomment-1209274370

   @jojochuang can you do a rebase and push this up so we can see what yetus 
says now?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] KevinWikant commented on pull request #4568: HDFS-16664. Use correct GenerationStamp when invalidating corrupt block replicas

2022-08-09 Thread GitBox


KevinWikant commented on PR #4568:
URL: https://github.com/apache/hadoop/pull/4568#issuecomment-1209272762

   @XiaohuiSun1
   
   Do you have any follow-up comments on this PR? Just wanted to give you a 
chance to reply before I reach out to a Hadoop committer for review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18396) Issues running in dynamic / managed environments

2022-08-09 Thread Steve Vaughan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577364#comment-17577364
 ] 

Steve Vaughan commented on HADOOP-18396:


[~ste...@apache.org] Did you have any comments about the individual changes?  
Even in environments intended to be static, there are circumstances where 
unplanned changes are required (e.g. hardware failures).  In addition, having 
servers silently ignore configuration (dropping unresolved servers) because 
of a hiccup during startup can lead to unexpected behaviors.

> Issues running in dynamic / managed environments
> 
>
> Key: HADOOP-18396
> URL: https://issues.apache.org/jira/browse/HADOOP-18396
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0, 3.3.9, 3.3.4
> Environment: Running an HA configuration in Kubernetes, using Java 11.
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>
> Running in dynamic or managed environments is a challenge because we can't 
> assume that all services will have DNS entries, will be started in a specific 
> order, will maintain constant IP addresses, etc.  I'm using the following 
> assumptions to guide the changes necessary to operate in this kind of 
> environment:
>  # The configuration files are an expression of desired state
>  # If a referenced service instance is not resolvable or reachable at a 
> moment in time, it will be eventually and should be able to participate in 
> the future, as if it had been there originally, without requiring manual 
> intervention
>  # IP address changes should be handled in a way that not only allows 
> distributed calls to continue to function, but avoids having to re-resolve 
> the address over and over
>  # Code that requires resolved names (Kerberos and DataNode registration) 
> should fall back to DNS reverse lookups to work around temporary issues 
> caused by caching.  Example: The DataNode registration is only performed at 
> startup, and yet the extra check that allows it to succeed in registering 
> with the NameNode isn’t performed
>  # If an HA system is supposed to only require a quorum, then we shouldn’t 
> require the full set, allowing the called service to bring the remaining 
> instances into compliance
>  # Managing a service should be independent of other services.  Example: You 
> should be able to perform a rolling restart of JournalNodes without worrying 
> about causing an issue with NameNodes as long as a quorum is present.
> A proof of these concepts would be the ability to:
>  * Starting with less than the full replica count of a service, while still 
> providing the required quorum or minimal count, should still allow a cluster 
> to start and function.  Example: 2 out of 3 configured JournalNodes should 
> still allow the NameNode to format, function, rollover to the standby, etc.
>  * Instances that were missing should join the existing cluster without 
> manual intervention.  Example: A 3rd JournalNode started later should 
> automatically be formatted and brought up to date
>  * Perform rolling restarts of individual services without negatively 
> impacting other services (causing failures, restarts, etc.).  Example: 
> Rolling restarts of JournalNodes shouldn't cause problems in NameNodes; 
> Rolling restarts of NameNodes shouldn't cause problems with DataNodes
>  * Logs should only report updated IP addresses once (per dependent), 
> avoiding costly re-resolution



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



  1   2   >