[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4764: YARN-11177. Support getNewReservation, submit / update/ Reservation API's for Federation.

2022-08-24 Thread GitBox


slfan1989 commented on code in PR #4764:
URL: https://github.com/apache/hadoop/pull/4764#discussion_r954528905


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java:
##
@@ -1254,4 +1275,250 @@ public void testNodesToAttributes() throws Exception {
 NodeAttributeType.STRING, "nvida");
 Assert.assertTrue(nodeAttributeMap.get("0-host1").contains(gpu));
   }
+
+  @Test
+  public void testGetNewReservation() throws Exception {
+    LOG.info("Test FederationClientInterceptor : Get NewReservation request.");
+
+    // null request
+    LambdaTestUtils.intercept(YarnException.class,
+        "Missing getNewReservation request.", () -> interceptor.getNewReservation(null));
+
+    // normal request
+    GetNewReservationRequest request = GetNewReservationRequest.newInstance();
+    GetNewReservationResponse response = interceptor.getNewReservation(request);
+    Assert.assertNotNull(response);
+
+    ReservationId reservationId = response.getReservationId();
+    Assert.assertNotNull(reservationId);
+    Assert.assertTrue(reservationId.toString().contains("reservation"));
+    Assert.assertEquals(reservationId.getClusterTimestamp(), ResourceManager.getClusterTimeStamp());
+  }
+
+  @Test
+  public void testSubmitReservation() throws Exception {
+    LOG.info("Test FederationClientInterceptor : SubmitReservation request.");
+
+    // get new reservationId
+    GetNewReservationRequest request = GetNewReservationRequest.newInstance();
+    GetNewReservationResponse response = interceptor.getNewReservation(request);
+    Assert.assertNotNull(response);
+
+    // allow plan follower to synchronize, manually trigger an assignment
+    Map mockRMs = interceptor.getMockRMs();
+    for (MockRM mockRM : mockRMs.values()) {
+      ReservationSystem reservationSystem = mockRM.getReservationSystem();
+      reservationSystem.synchronizePlan("root.decided", true);
+    }
+
+    // Submit Reservation
+    ReservationId reservationId = response.getReservationId();
+    ReservationDefinition rDefinition = createReservationDefinition(1024, 1);
+    ReservationSubmissionRequest rSubmissionRequest = ReservationSubmissionRequest.newInstance(
+        rDefinition, "decided", reservationId);
+
+    ReservationSubmissionResponse submissionResponse =
+        interceptor.submitReservation(rSubmissionRequest);
+    Assert.assertNotNull(submissionResponse);
+
+    SubClusterId subClusterId = stateStoreUtil.queryReservationHomeSC(reservationId);
+    Assert.assertNotNull(subClusterId);
+    Assert.assertTrue(subClusters.contains(subClusterId));
+  }
+
+  @Test
+  public void testSubmitReservationEmptyRequest() throws Exception {
+    LOG.info("Test FederationClientInterceptor : SubmitReservation request empty.");
+
+    // null request1
+    LambdaTestUtils.intercept(YarnException.class,
+        "Missing submitReservation request or reservationId or reservation definition or queue.",
+        () -> interceptor.submitReservation(null));
+
+    // null request2
+    LambdaTestUtils.intercept(YarnException.class,
+        "Missing submitReservation request or reservationId or reservation definition or queue.",
+        () -> interceptor.submitReservation(
+        ReservationSubmissionRequest.newInstance(null, null, null)));
+
+    // null request3
+    ReservationSubmissionRequest request3 =
+        ReservationSubmissionRequest.newInstance(null, "q1", null);
+    LambdaTestUtils.intercept(YarnException.class,
+        "Missing submitReservation request or reservationId or reservation definition or queue.",
+        () -> interceptor.submitReservation(request3));
+
+    // null request4
+    ReservationId reservationId = ReservationId.newInstance(Time.now(), 1);
+    ReservationSubmissionRequest request4 =
+        ReservationSubmissionRequest.newInstance(null, null, reservationId);
+    LambdaTestUtils.intercept(YarnException.class,
+        "Missing submitReservation request or reservationId or reservation definition or queue.",
+        () -> interceptor.submitReservation(request4));
+
+    // null request5
+    long defaultDuration = 60;
+    long arrival = Time.now();
+    long deadline = arrival + (int)(defaultDuration * 1.1);
+
+    ReservationRequest rRequest = ReservationRequest.newInstance(
+        Resource.newInstance(1024, 1), 1, 1, defaultDuration);
+    ReservationRequest[] rRequests = new ReservationRequest[] {rRequest};
+    ReservationDefinition rDefinition = createReservationDefinition(arrival, deadline, rRequests,
+        ReservationRequestInterpreter.R_ALL, "u1");
+    ReservationSubmissionRequest request5 =
+        ReservationSubmissionRequest.newInstance(rDefinition, null, reservationId);
+    LambdaTestUtils.intercept(YarnException.class,
+        "Missing submitReservation request or reservationId or reservation definition or queue.",
+

[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4764: YARN-11177. Support getNewReservation, submit / update/ Reservation API's for Federation.

2022-08-24 Thread GitBox


slfan1989 commented on code in PR #4764:
URL: https://github.com/apache/hadoop/pull/4764#discussion_r954527995


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java:
##
@@ -1254,4 +1275,250 @@ public void testNodesToAttributes() throws Exception {
 NodeAttributeType.STRING, "nvida");
 Assert.assertTrue(nodeAttributeMap.get("0-host1").contains(gpu));
   }
+
+  @Test
+  public void testGetNewReservation() throws Exception {
+    LOG.info("Test FederationClientInterceptor : Get NewReservation request.");
+
+    // null request
+    LambdaTestUtils.intercept(YarnException.class,
+        "Missing getNewReservation request.", () -> interceptor.getNewReservation(null));
+
+    // normal request
+    GetNewReservationRequest request = GetNewReservationRequest.newInstance();
+    GetNewReservationResponse response = interceptor.getNewReservation(request);
+    Assert.assertNotNull(response);
+
+    ReservationId reservationId = response.getReservationId();
+    Assert.assertNotNull(reservationId);
+    Assert.assertTrue(reservationId.toString().contains("reservation"));
+    Assert.assertEquals(reservationId.getClusterTimestamp(), ResourceManager.getClusterTimeStamp());
+  }
+
+  @Test
+  public void testSubmitReservation() throws Exception {
+    LOG.info("Test FederationClientInterceptor : SubmitReservation request.");
+
+    // get new reservationId
+    GetNewReservationRequest request = GetNewReservationRequest.newInstance();
+    GetNewReservationResponse response = interceptor.getNewReservation(request);
+    Assert.assertNotNull(response);
+
+    // allow plan follower to synchronize, manually trigger an assignment
+    Map mockRMs = interceptor.getMockRMs();
+    for (MockRM mockRM : mockRMs.values()) {
+      ReservationSystem reservationSystem = mockRM.getReservationSystem();
+      reservationSystem.synchronizePlan("root.decided", true);
+    }
+
+    // Submit Reservation
+    ReservationId reservationId = response.getReservationId();
+    ReservationDefinition rDefinition = createReservationDefinition(1024, 1);
+    ReservationSubmissionRequest rSubmissionRequest = ReservationSubmissionRequest.newInstance(
+        rDefinition, "decided", reservationId);
+
+    ReservationSubmissionResponse submissionResponse =
+        interceptor.submitReservation(rSubmissionRequest);
+    Assert.assertNotNull(submissionResponse);
+
+    SubClusterId subClusterId = stateStoreUtil.queryReservationHomeSC(reservationId);
+    Assert.assertNotNull(subClusterId);
+    Assert.assertTrue(subClusters.contains(subClusterId));
+  }
+
+  @Test
+  public void testSubmitReservationEmptyRequest() throws Exception {
+    LOG.info("Test FederationClientInterceptor : SubmitReservation request empty.");
+
+    // null request1
+    LambdaTestUtils.intercept(YarnException.class,
+        "Missing submitReservation request or reservationId or reservation definition or queue.",
+        () -> interceptor.submitReservation(null));
+
+    // null request2
+    LambdaTestUtils.intercept(YarnException.class,
+        "Missing submitReservation request or reservationId or reservation definition or queue.",
+        () -> interceptor.submitReservation(
+        ReservationSubmissionRequest.newInstance(null, null, null)));

Review Comment:
   I will fix it.






[GitHub] [hadoop] hadoop-yetus commented on pull request #4805: HADOOP-18415. Replace Sets.newHashSet() in hadoop-tools

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4805:
URL: https://github.com/apache/hadoop/pull/4805#issuecomment-1226806122

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 43s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 34s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   2m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  4s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   3m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   2m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  24m 37s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 34s |  |  hadoop-dynamometer-infra in the 
patch passed.  |
   | +1 :green_heart: |  unit  |   2m 52s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 149m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |------------:|:---------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4805/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4805 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2a7bd08b6626 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1cdc5d3de27e5cb679bd2d818146b54b7f3f385e |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4805/1/testReport/ |
   | Max. process+thread count | 728 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra 
hadoop-tools/hadoop-aws U: hadoop-tools |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4805/1/console |
   | versions | git=2.25.1 

[jira] [Commented] (HADOOP-18415) Replace Sets#newHashSet() and newTreeSet() with constructors directly hadoop-tool

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584626#comment-17584626
 ] 

ASF GitHub Bot commented on HADOOP-18415:
-

hadoop-yetus commented on PR #4805:
URL: https://github.com/apache/hadoop/pull/4805#issuecomment-1226806122

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
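
   For readers skimming the digest, HADOOP-18415 swaps the Sets#newHashSet() and Sets#newTreeSet() helpers in hadoop-tools for direct constructor calls. A minimal illustration of that pattern (my sketch, not code taken from the patch; variable names are made up):

   ```java
   import java.util.HashSet;
   import java.util.Set;
   import java.util.TreeSet;

   // Illustration only: replace the Sets#newHashSet()/Sets#newTreeSet()
   // helpers with the JDK collection constructors.
   public class SetsReplacementExample {
     private final Set<String> hosts = new HashSet<>();        // was: Sets.newHashSet()
     private final Set<String> sortedHosts = new TreeSet<>();  // was: Sets.newTreeSet()
   }
   ```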

[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4764: YARN-11177. Support getNewReservation, submit / update/ Reservation API's for Federation.

2022-08-24 Thread GitBox


slfan1989 commented on code in PR #4764:
URL: https://github.com/apache/hadoop/pull/4764#discussion_r954525110


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -925,13 +1041,61 @@ public ReservationListResponse listReservations(
   @Override
   public ReservationUpdateResponse updateReservation(
       ReservationUpdateRequest request) throws YarnException, IOException {
-    throw new NotImplementedException("Code is not implemented");
+
+    if (request == null || request.getReservationId() == null
+        || request.getReservationDefinition() == null) {

Review Comment:
   I will fix it.






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4764: YARN-11177. Support getNewReservation, submit / update/ Reservation API's for Federation.

2022-08-24 Thread GitBox


slfan1989 commented on code in PR #4764:
URL: https://github.com/apache/hadoop/pull/4764#discussion_r954524957


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -888,13 +890,127 @@ public MoveApplicationAcrossQueuesResponse moveApplicationAcrossQueues(
   @Override
   public GetNewReservationResponse getNewReservation(
       GetNewReservationRequest request) throws YarnException, IOException {
-    throw new NotImplementedException("Code is not implemented");
+
+    if (request == null) {
+      routerMetrics.incrGetNewReservationFailedRetrieved();
+      String errMsg = "Missing getNewReservation request.";
+      RouterServerUtil.logAndThrowException(errMsg, null);
+    }
+
+    long startTime = clock.getTime();
+    Map subClustersActive =
+        federationFacade.getSubClusters(true);
+
+    for (int i = 0; i < numSubmitRetries; ++i) {
+      SubClusterId subClusterId = getRandomActiveSubCluster(subClustersActive);
+      LOG.info("getNewReservation try #{} on SubCluster {}.", i, subClusterId);
+      ApplicationClientProtocol clientRMProxy = getClientRMProxyForSubCluster(subClusterId);
+      GetNewReservationResponse response = null;
+      try {
+        response = clientRMProxy.getNewReservation(request);
+        if (response != null) {
+          long stopTime = clock.getTime();
+          routerMetrics.succeededGetNewReservationRetrieved(stopTime - startTime);
+          return response;
+        }
+      } catch (Exception e) {
+        LOG.warn("Unable to create a new Reservation in SubCluster {}.", subClusterId.getId(), e);
+        subClustersActive.remove(subClusterId);
+      }
+    }
+
+    routerMetrics.incrGetNewReservationFailedRetrieved();
+    String errMsg = "Failed to create a new reservation.";
+    throw new YarnException(errMsg);
   }
 
   @Override
   public ReservationSubmissionResponse submitReservation(
       ReservationSubmissionRequest request) throws YarnException, IOException {
-    throw new NotImplementedException("Code is not implemented");
+
+    if (request == null || request.getReservationId() == null
+        || request.getReservationDefinition() == null || request.getQueue() == null) {
+      routerMetrics.incrSubmitReservationFailedRetrieved();
+      RouterServerUtil.logAndThrowException(
+          "Missing submitReservation request or reservationId " +
+          "or reservation definition or queue.", null);
+    }
+
+    long startTime = clock.getTime();
+    ReservationId reservationId = request.getReservationId();
+
+    long retryCount = 0;
+    boolean firstRetry = true;
+
+    while (retryCount < numSubmitRetries) {
+
+      SubClusterId subClusterId = policyFacade.getReservationHomeSubCluster(request);
+      LOG.info("submitReservation reservationId {} try #{} on SubCluster {}.",
+          reservationId, retryCount, subClusterId);
+
+      ReservationHomeSubCluster reservationHomeSubCluster =
+          ReservationHomeSubCluster.newInstance(reservationId, subClusterId);
+
+      // If it is the first attempt,use StateStore to add the
+      // mapping of reservationId and subClusterId.
+      // if the number of attempts is greater than 1, use StateStore to update the mapping.
+      if (firstRetry) {
+        try {
+          // persist the mapping of reservationId and the subClusterId which has
+          // been selected as its home
+          subClusterId = federationFacade.addReservationHomeSubCluster(reservationHomeSubCluster);
+          firstRetry = false;
+        } catch (YarnException e) {
+          routerMetrics.incrSubmitReservationFailedRetrieved();
+          RouterServerUtil.logAndThrowException(e,
+              "Unable to insert the ReservationId %s into the FederationStateStore.",
+              reservationId);
+        }
+      } else {
+        try {
+          // update the mapping of reservationId and the home subClusterId to
+          // the new subClusterId we have selected
+          federationFacade.updateReservationHomeSubCluster(reservationHomeSubCluster);
+        } catch (YarnException e) {
+          SubClusterId subClusterIdInStateStore =
+              federationFacade.getReservationHomeSubCluster(reservationId);
+          if (subClusterId == subClusterIdInStateStore) {
+            LOG.info("Reservation {} already submitted on SubCluster {}.",
+                reservationId, subClusterId);
+          } else {
+            routerMetrics.incrSubmitReservationFailedRetrieved();
+            RouterServerUtil.logAndThrowException(e,
+                "Unable to update the ReservationId %s into the FederationStateStore.",
+                reservationId);
+          }
+        }
+      }
+
+      // Obtain the ApplicationClientProtocol of the corresponding RM according to the subClusterId,
+      // and call the submitReservation method, 

[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4764: YARN-11177. Support getNewReservation, submit / update/ Reservation API's for Federation.

2022-08-24 Thread GitBox


slfan1989 commented on code in PR #4764:
URL: https://github.com/apache/hadoop/pull/4764#discussion_r954524494


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -888,13 +890,127 @@ public MoveApplicationAcrossQueuesResponse moveApplicationAcrossQueues(
   @Override
   public GetNewReservationResponse getNewReservation(
       GetNewReservationRequest request) throws YarnException, IOException {
-    throw new NotImplementedException("Code is not implemented");
+
+    if (request == null) {
+      routerMetrics.incrGetNewReservationFailedRetrieved();
+      String errMsg = "Missing getNewReservation request.";
+      RouterServerUtil.logAndThrowException(errMsg, null);
+    }
+
+    long startTime = clock.getTime();
+    Map subClustersActive =
+        federationFacade.getSubClusters(true);
+
+    for (int i = 0; i < numSubmitRetries; ++i) {
+      SubClusterId subClusterId = getRandomActiveSubCluster(subClustersActive);
+      LOG.info("getNewReservation try #{} on SubCluster {}.", i, subClusterId);
+      ApplicationClientProtocol clientRMProxy = getClientRMProxyForSubCluster(subClusterId);
+      GetNewReservationResponse response = null;
+      try {
+        response = clientRMProxy.getNewReservation(request);
+        if (response != null) {
+          long stopTime = clock.getTime();
+          routerMetrics.succeededGetNewReservationRetrieved(stopTime - startTime);
+          return response;
+        }
+      } catch (Exception e) {
+        LOG.warn("Unable to create a new Reservation in SubCluster {}.", subClusterId.getId(), e);
+        subClustersActive.remove(subClusterId);
+      }
+    }
+
+    routerMetrics.incrGetNewReservationFailedRetrieved();
+    String errMsg = "Failed to create a new reservation.";
+    throw new YarnException(errMsg);
   }
 
   @Override
   public ReservationSubmissionResponse submitReservation(
       ReservationSubmissionRequest request) throws YarnException, IOException {
-    throw new NotImplementedException("Code is not implemented");
+
+    if (request == null || request.getReservationId() == null
+        || request.getReservationDefinition() == null || request.getQueue() == null) {
+      routerMetrics.incrSubmitReservationFailedRetrieved();
+      RouterServerUtil.logAndThrowException(
+          "Missing submitReservation request or reservationId " +
+          "or reservation definition or queue.", null);
+    }
+
+    long startTime = clock.getTime();
+    ReservationId reservationId = request.getReservationId();
+
+    long retryCount = 0;
+    boolean firstRetry = true;
+
+    while (retryCount < numSubmitRetries) {
+
+      SubClusterId subClusterId = policyFacade.getReservationHomeSubCluster(request);
+      LOG.info("submitReservation reservationId {} try #{} on SubCluster {}.",
+          reservationId, retryCount, subClusterId);
+
+      ReservationHomeSubCluster reservationHomeSubCluster =
+          ReservationHomeSubCluster.newInstance(reservationId, subClusterId);
+
+      // If it is the first attempt,use StateStore to add the
+      // mapping of reservationId and subClusterId.
+      // if the number of attempts is greater than 1, use StateStore to update the mapping.
+      if (firstRetry) {
+        try {
+          // persist the mapping of reservationId and the subClusterId which has
+          // been selected as its home
+          subClusterId = federationFacade.addReservationHomeSubCluster(reservationHomeSubCluster);
+          firstRetry = false;
+        } catch (YarnException e) {
+          routerMetrics.incrSubmitReservationFailedRetrieved();
+          RouterServerUtil.logAndThrowException(e,
+              "Unable to insert the ReservationId %s into the FederationStateStore.",
+              reservationId);
+        }
+      } else {
+        try {
+          // update the mapping of reservationId and the home subClusterId to
+          // the new subClusterId we have selected
+          federationFacade.updateReservationHomeSubCluster(reservationHomeSubCluster);
+        } catch (YarnException e) {
+          SubClusterId subClusterIdInStateStore =
+              federationFacade.getReservationHomeSubCluster(reservationId);
+          if (subClusterId == subClusterIdInStateStore) {
+            LOG.info("Reservation {} already submitted on SubCluster {}.",
+                reservationId, subClusterId);
+          } else {
+            routerMetrics.incrSubmitReservationFailedRetrieved();
+            RouterServerUtil.logAndThrowException(e,
+                "Unable to update the ReservationId %s into the FederationStateStore.",
+                reservationId);
+          }
+        }
+      }
+
+      // Obtain the ApplicationClientProtocol of the corresponding RM according to the subClusterId,
+      // and call the submitReservation method, 

[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4764: YARN-11177. Support getNewReservation, submit / update/ Reservation API's for Federation.

2022-08-24 Thread GitBox


slfan1989 commented on code in PR #4764:
URL: https://github.com/apache/hadoop/pull/4764#discussion_r954524092


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -888,13 +890,127 @@ public MoveApplicationAcrossQueuesResponse moveApplicationAcrossQueues(
   @Override
   public GetNewReservationResponse getNewReservation(
       GetNewReservationRequest request) throws YarnException, IOException {
-    throw new NotImplementedException("Code is not implemented");
+
+    if (request == null) {
+      routerMetrics.incrGetNewReservationFailedRetrieved();
+      String errMsg = "Missing getNewReservation request.";
+      RouterServerUtil.logAndThrowException(errMsg, null);
+    }
+
+    long startTime = clock.getTime();
+    Map subClustersActive =
+        federationFacade.getSubClusters(true);
+
+    for (int i = 0; i < numSubmitRetries; ++i) {
+      SubClusterId subClusterId = getRandomActiveSubCluster(subClustersActive);
+      LOG.info("getNewReservation try #{} on SubCluster {}.", i, subClusterId);
+      ApplicationClientProtocol clientRMProxy = getClientRMProxyForSubCluster(subClusterId);
+      GetNewReservationResponse response = null;
+      try {
+        response = clientRMProxy.getNewReservation(request);
+        if (response != null) {
+          long stopTime = clock.getTime();
+          routerMetrics.succeededGetNewReservationRetrieved(stopTime - startTime);
+          return response;
+        }
+      } catch (Exception e) {
+        LOG.warn("Unable to create a new Reservation in SubCluster {}.", subClusterId.getId(), e);
+        subClustersActive.remove(subClusterId);
+      }
+    }
+
+    routerMetrics.incrGetNewReservationFailedRetrieved();
+    String errMsg = "Failed to create a new reservation.";
+    throw new YarnException(errMsg);
   }
 
   @Override
   public ReservationSubmissionResponse submitReservation(
       ReservationSubmissionRequest request) throws YarnException, IOException {
-    throw new NotImplementedException("Code is not implemented");
+
+    if (request == null || request.getReservationId() == null
+        || request.getReservationDefinition() == null || request.getQueue() == null) {
+      routerMetrics.incrSubmitReservationFailedRetrieved();
+      RouterServerUtil.logAndThrowException(
+          "Missing submitReservation request or reservationId " +
+          "or reservation definition or queue.", null);
+    }
+
+    long startTime = clock.getTime();
+    ReservationId reservationId = request.getReservationId();
+
+    long retryCount = 0;
+    boolean firstRetry = true;
+
+    while (retryCount < numSubmitRetries) {
+
+      SubClusterId subClusterId = policyFacade.getReservationHomeSubCluster(request);
+      LOG.info("submitReservation reservationId {} try #{} on SubCluster {}.",
+          reservationId, retryCount, subClusterId);
+
+      ReservationHomeSubCluster reservationHomeSubCluster =
+          ReservationHomeSubCluster.newInstance(reservationId, subClusterId);
+
+      // If it is the first attempt,use StateStore to add the
+      // mapping of reservationId and subClusterId.
+      // if the number of attempts is greater than 1, use StateStore to update the mapping.
+      if (firstRetry) {
+        try {
+          // persist the mapping of reservationId and the subClusterId which has
+          // been selected as its home
+          subClusterId = federationFacade.addReservationHomeSubCluster(reservationHomeSubCluster);
+          firstRetry = false;
+        } catch (YarnException e) {
+          routerMetrics.incrSubmitReservationFailedRetrieved();
+          RouterServerUtil.logAndThrowException(e,
+              "Unable to insert the ReservationId %s into the FederationStateStore.",
+              reservationId);
+        }
+      } else {
+        try {
+          // update the mapping of reservationId and the home subClusterId to
+          // the new subClusterId we have selected
+          federationFacade.updateReservationHomeSubCluster(reservationHomeSubCluster);
+        } catch (YarnException e) {
+          SubClusterId subClusterIdInStateStore =
+              federationFacade.getReservationHomeSubCluster(reservationId);
+          if (subClusterId == subClusterIdInStateStore) {
+            LOG.info("Reservation {} already submitted on SubCluster {}.",
+                reservationId, subClusterId);
+          } else {
+            routerMetrics.incrSubmitReservationFailedRetrieved();
+            RouterServerUtil.logAndThrowException(e,
+                "Unable to update the ReservationId %s into the FederationStateStore.",
+                reservationId);

Review Comment:
   I will fix it.




[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4764: YARN-11177. Support getNewReservation, submit / update/ Reservation API's for Federation.

2022-08-24 Thread GitBox


slfan1989 commented on code in PR #4764:
URL: https://github.com/apache/hadoop/pull/4764#discussion_r954523877


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -888,13 +890,127 @@ public MoveApplicationAcrossQueuesResponse moveApplicationAcrossQueues(
   @Override
   public GetNewReservationResponse getNewReservation(
       GetNewReservationRequest request) throws YarnException, IOException {
-    throw new NotImplementedException("Code is not implemented");
+
+    if (request == null) {
+      routerMetrics.incrGetNewReservationFailedRetrieved();
+      String errMsg = "Missing getNewReservation request.";
+      RouterServerUtil.logAndThrowException(errMsg, null);
+    }
+
+    long startTime = clock.getTime();
+    Map subClustersActive =

Review Comment:
   I will fix it.






[GitHub] [hadoop] zhengchenyu commented on a diff in pull request #4756: HDFS-16732. [SBN READ] Avoid get location from observer when the bloc…

2022-08-24 Thread GitBox


zhengchenyu commented on code in PR #4756:
URL: https://github.com/apache/hadoop/pull/4756#discussion_r954499783


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNodeWhenReportDelay.java:
##
@@ -0,0 +1,153 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode.ha;
+
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_STATE_CONTEXT_ENABLED_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_REPLICATION_KEY;
+import static org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.getServiceState;
+import static org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.OBSERVER_PROBE_RETRY_PERIOD_KEY;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.commons.lang3.ArrayUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.protocol.DirectoryListing;
+import org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster;
+import org.apache.hadoop.hdfs.server.datanode.DataNode;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class TestObserverNodeWhenReportDelay {

Review Comment:
   Thanks for the careful review; the new test is indeed overkill. I tested it, and 
testObserverNodeBlockMissingRetry can also reproduce this bug with the code below.
   
   ```
   dfs.getClient().listPaths("/", new byte[0], true);
   assertSentTo(0);
   
   dfs.getClient().getLocatedFileInfo(testPath.toString(), false);
   assertSentTo(0);
   ```






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4764: YARN-11177. Support getNewReservation, submit / update/ Reservation API's for Federation.

2022-08-24 Thread GitBox


slfan1989 commented on code in PR #4764:
URL: https://github.com/apache/hadoop/pull/4764#discussion_r954499120


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -1624,6 +1788,35 @@ protected SubClusterId getApplicationHomeSubCluster(
 throw new YarnException(errorMsg);
   }
 
+  protected SubClusterId getReservationHomeSubCluster(ReservationId reservationId)
+      throws YarnException {
+
+    if (reservationId == null) {
+      LOG.error("ReservationId is Null, Can't find in SubCluster.");
+      return null;
+    }
+
+    SubClusterId resultSubClusterId = null;
+
+    // try looking for applicationId in Home SubCluster
+    try {
+      resultSubClusterId = federationFacade.getReservationHomeSubCluster(reservationId);
+    } catch (YarnException ex) {
+      if(LOG.isDebugEnabled()){
+        LOG.debug("Can't find reservationId = {} in home sub cluster, " +
+            " try foreach sub clusters.", reservationId);

Review Comment:
   I will fix it.



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java:
##
@@ -203,6 +218,12 @@ protected YarnConfiguration createConfiguration() {
 
     // Disable StateStoreFacade cache
     conf.setInt(YarnConfiguration.FEDERATION_CACHE_TIME_TO_LIVE_SECS, 0);
+
+    conf.setInt("yarn.scheduler.minimum-allocation-mb", 512);
+    conf.setInt("yarn.scheduler.minimum-allocation-vcores", 1);
+    conf.setInt("yarn.scheduler.maximum-allocation-mb", 102400);

Review Comment:
   I will fix it.
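
   If the point being conceded here is to use the YarnConfiguration constants instead of hard-coded property strings (my reading of the exchange, not stated in the thread), a minimal sketch of the createConfiguration() change could look like this:

   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.yarn.conf.YarnConfiguration;

   // Sketch only: same values as the quoted patch, but referenced through the
   // YarnConfiguration constants rather than raw property names.
   class SchedulerAllocationConfSketch {
     static Configuration createConf() {
       Configuration conf = new YarnConfiguration();
       conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, 512);
       conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES, 1);
       conf.setInt(YarnConfiguration.RM_SCHEDULER_MAXIMUM_ALLOCATION_MB, 102400);
       return conf;
     }
   }
   ```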






[GitHub] [hadoop] zhengchenyu commented on a diff in pull request #4756: HDFS-16732. [SBN READ] Avoid get location from observer when the bloc…

2022-08-24 Thread GitBox


zhengchenyu commented on code in PR #4756:
URL: https://github.com/apache/hadoop/pull/4756#discussion_r954478351


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNodeWhenReportDelay.java:
##
@@ -0,0 +1,153 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode.ha;
+
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_STATE_CONTEXT_ENABLED_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_REPLICATION_KEY;
+import static org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.getServiceState;
+import static org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.OBSERVER_PROBE_RETRY_PERIOD_KEY;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.commons.lang3.ArrayUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.protocol.DirectoryListing;
+import org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster;
+import org.apache.hadoop.hdfs.server.datanode.DataNode;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class TestObserverNodeWhenReportDelay {

Review Comment:
   I added some new config in TestObserverNodeWhenReportDelay because I was worried about 
affecting the other unit tests in TestObserverNode. I will try to add this new test to 
TestObserverNode.
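
   One way to address the isolation worry (purely a sketch of the idea, reusing the classes already imported in the quoted test file; not the actual patch, and the method and variable names are invented) is to give the delayed-report case its own mini cluster inside the new test method:

   ```java
   // Hypothetical sketch: the delayed-block-report scenario builds a dedicated
   // MiniQJMHACluster with the extra settings, so nothing leaks into the shared
   // TestObserverNode setup.
   @Test
   public void testObserverWhenBlockReportDelayed() throws Exception {
     Configuration testConf = new Configuration();
     testConf.setBoolean(DFS_NAMENODE_STATE_CONTEXT_ENABLED_KEY, true);
     testConf.setInt(DFS_REPLICATION_KEY, 1);

     MiniQJMHACluster qjmhaCluster =
         new MiniQJMHACluster.Builder(testConf).setNumNameNodes(3).build();
     try {
       MiniDFSCluster dfsCluster = qjmhaCluster.getDfsCluster();
       dfsCluster.waitActive();
       // ... transition one NameNode to observer, delay its block report, then
       // assert that the client read falls back to the active NameNode ...
     } finally {
       qjmhaCluster.shutdown();
     }
   }
   ```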






[GitHub] [hadoop] hadoop-yetus commented on pull request #4763: HDFS-16734. RBF: fix some bugs when handling getContentSummary RPC

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4763:
URL: https://github.com/apache/hadoop/pull/4763#issuecomment-1226764238

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  35m 38s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 133m 18s |  |  |
   
   
   | Subsystem | Report/Notes |
   |------------:|:---------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4763/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4763 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 14d59d93a830 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c08ffbe412dbcbf20ecc9e3ea6e9ef831d6ca1c4 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4763/7/testReport/ |
   | Max. process+thread count | 2770 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4763/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-18418) Upgrade bundled Tomcat to 8.5.82

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584591#comment-17584591
 ] 

ASF GitHub Bot commented on HADOOP-18418:
-

iwasakims merged PR #4799:
URL: https://github.com/apache/hadoop/pull/4799




> Upgrade bundled Tomcat to 8.5.82
> 
>
> Key: HADOOP-18418
> URL: https://issues.apache.org/jira/browse/HADOOP-18418
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 2.10.2
>Reporter: groot
>Assignee: groot
>Priority: Major
>  Labels: pull-request-available
>
> Currently we are using 8.5.81, which is affected by CVE-2022-34305.
> More details - [https://github.com/advisories/GHSA-6j88-6whg-x687]
> Let's upgrade to 8.5.82.






[jira] [Updated] (HADOOP-18418) Upgrade bundled Tomcat to 8.5.82

2022-08-24 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-18418:
--
Fix Version/s: 2.10.3
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade bundled Tomcat to 8.5.82
> 
>
> Key: HADOOP-18418
> URL: https://issues.apache.org/jira/browse/HADOOP-18418
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 2.10.2
>Reporter: groot
>Assignee: groot
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.3
>
>
> Currently we are using 8.5.81, which is affected by CVE-2022-34305.
> More details - [https://github.com/advisories/GHSA-6j88-6whg-x687]
> Let's upgrade to 8.5.82.






[GitHub] [hadoop] iwasakims merged pull request #4799: HADOOP-18418. Upgrade bundled Tomcat to 8.5.82

2022-08-24 Thread GitBox


iwasakims merged PR #4799:
URL: https://github.com/apache/hadoop/pull/4799





[GitHub] [hadoop] zhengchenyu closed pull request #4354: YARN-6539. Create SecureLogin inside Router.

2022-08-24 Thread GitBox


zhengchenyu closed pull request #4354: YARN-6539. Create SecureLogin inside 
Router.
URL: https://github.com/apache/hadoop/pull/4354





[GitHub] [hadoop] zhengchenyu commented on a diff in pull request #4756: HDFS-16732. [SBN READ] Avoid get location from observer when the bloc…

2022-08-24 Thread GitBox


zhengchenyu commented on code in PR #4756:
URL: https://github.com/apache/hadoop/pull/4756#discussion_r954478351


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNodeWhenReportDelay.java:
##
@@ -0,0 +1,153 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode.ha;
+
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_STATE_CONTEXT_ENABLED_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_REPLICATION_KEY;
+import static 
org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.getServiceState;
+import static 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.OBSERVER_PROBE_RETRY_PERIOD_KEY;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.commons.lang3.ArrayUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.protocol.DirectoryListing;
+import org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster;
+import org.apache.hadoop.hdfs.server.datanode.DataNode;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class TestObserverNodeWhenReportDelay {

Review Comment:
   I added some new configs in TestObserverNodeWhenReportDelay because I was worried they might affect the other unit tests in TestObserverNode. I will try to add this new test to TestObserverNode.






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4764: YARN-11177. Support getNewReservation, submit / update/ Reservation API's for Federation.

2022-08-24 Thread GitBox


slfan1989 commented on code in PR #4764:
URL: https://github.com/apache/hadoop/pull/4764#discussion_r954475014


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -925,13 +1041,61 @@ public ReservationListResponse listReservations(
   @Override
   public ReservationUpdateResponse updateReservation(
   ReservationUpdateRequest request) throws YarnException, IOException {
-throw new NotImplementedException("Code is not implemented");
+
+if (request == null || request.getReservationId() == null
+|| request.getReservationDefinition() == null) {
+  routerMetrics.incrUpdateReservationFailedRetrieved();
+  RouterServerUtil.logAndThrowException(
+  "Missing updateReservation request or reservationId or reservation 
definition.", null);
+}
+
+long startTime = clock.getTime();
+ReservationId reservationId = request.getReservationId();
+SubClusterId subClusterId = getReservationHomeSubCluster(reservationId);
+
+ApplicationClientProtocol client;

Review Comment:
   I will fix it.



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -925,13 +1041,61 @@ public ReservationListResponse listReservations(
   @Override
   public ReservationUpdateResponse updateReservation(
   ReservationUpdateRequest request) throws YarnException, IOException {
-throw new NotImplementedException("Code is not implemented");
+
+if (request == null || request.getReservationId() == null
+|| request.getReservationDefinition() == null) {
+  routerMetrics.incrUpdateReservationFailedRetrieved();
+  RouterServerUtil.logAndThrowException(
+  "Missing updateReservation request or reservationId or reservation 
definition.", null);
+}
+
+long startTime = clock.getTime();
+ReservationId reservationId = request.getReservationId();
+SubClusterId subClusterId = getReservationHomeSubCluster(reservationId);
+
+ApplicationClientProtocol client;
+ReservationUpdateResponse response = null;
+try {
+  client = getClientRMProxyForSubCluster(subClusterId);
+  response = client.updateReservation(request);
+} catch (Exception ex) {
+  routerMetrics.incrUpdateReservationFailedRetrieved();
+  RouterServerUtil.logAndThrowException(
+  "Unable to reservation update due to exception.", ex);
+}
+long stopTime = clock.getTime();
+routerMetrics.succeededUpdateReservationRetrieved(stopTime - startTime);
+return response;
   }
 
   @Override
   public ReservationDeleteResponse deleteReservation(
   ReservationDeleteRequest request) throws YarnException, IOException {
-throw new NotImplementedException("Code is not implemented");
+if (request == null || request.getReservationId() == null) {
+  routerMetrics.incrDeleteReservationFailedRetrieved();
+  RouterServerUtil.logAndThrowException(
+  "Missing deleteReservation request or reservationId.", null);
+}
+
+long startTime = clock.getTime();
+ReservationId reservationId = request.getReservationId();
+SubClusterId subClusterId = getReservationHomeSubCluster(reservationId);
+
+ApplicationClientProtocol client;
+ReservationDeleteResponse response = null;
+try {
+  client = getClientRMProxyForSubCluster(subClusterId);

Review Comment:
   I will fix it.






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4764: YARN-11177. Support getNewReservation, submit / update/ Reservation API's for Federation.

2022-08-24 Thread GitBox


slfan1989 commented on code in PR #4764:
URL: https://github.com/apache/hadoop/pull/4764#discussion_r954473098


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -888,13 +890,127 @@ public MoveApplicationAcrossQueuesResponse 
moveApplicationAcrossQueues(
   @Override
   public GetNewReservationResponse getNewReservation(
   GetNewReservationRequest request) throws YarnException, IOException {
-throw new NotImplementedException("Code is not implemented");
+
+if (request == null) {
+  routerMetrics.incrGetNewReservationFailedRetrieved();
+  String errMsg = "Missing getNewReservation request.";
+  RouterServerUtil.logAndThrowException(errMsg, null);
+}
+
+long startTime = clock.getTime();
+Map subClustersActive =
+federationFacade.getSubClusters(true);
+
+for (int i = 0; i < numSubmitRetries; ++i) {
+  SubClusterId subClusterId = getRandomActiveSubCluster(subClustersActive);
+  LOG.info("getNewReservation try #{} on SubCluster {}.", i, subClusterId);
+  ApplicationClientProtocol clientRMProxy = 
getClientRMProxyForSubCluster(subClusterId);
+  GetNewReservationResponse response = null;
+  try {
+response = clientRMProxy.getNewReservation(request);
+if (response != null) {
+  long stopTime = clock.getTime();
+  routerMetrics.succeededGetNewReservationRetrieved(stopTime - 
startTime);
+  return response;
+}
+  } catch (Exception e) {
+LOG.warn("Unable to create a new Reservation in SubCluster {}.", 
subClusterId.getId(), e);
+subClustersActive.remove(subClusterId);
+  }
+}
+
+routerMetrics.incrGetNewReservationFailedRetrieved();
+String errMsg = "Failed to create a new reservation.";
+throw new YarnException(errMsg);
   }
 
   @Override
   public ReservationSubmissionResponse submitReservation(
   ReservationSubmissionRequest request) throws YarnException, IOException {
-throw new NotImplementedException("Code is not implemented");
+
+if (request == null || request.getReservationId() == null
+|| request.getReservationDefinition() == null || 
request.getQueue() == null) {

Review Comment:
   I will fix it.






[jira] [Commented] (HADOOP-18415) Replace Sets#newHashSet() and newTreeSet() with constructors directly hadoop-tool

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584571#comment-17584571
 ] 

ASF GitHub Bot commented on HADOOP-18415:
-

Samrat002 opened a new pull request, #4805:
URL: https://github.com/apache/hadoop/pull/4805

   …op-tools
   
   
   
   ### Description of PR
   
   [HADOOP-18415](https://issues.apache.org/jira/browse/HADOOP-18415)
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Replace Sets#newHashSet() and newTreeSet() with constructors directly 
> hadoop-tool
> -
>
> Key: HADOOP-18415
> URL: https://issues.apache.org/jira/browse/HADOOP-18415
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Major
>







[jira] [Updated] (HADOOP-18415) Replace Sets#newHashSet() and newTreeSet() with constructors directly hadoop-tool

2022-08-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18415:

Labels: pull-request-available  (was: )

> Replace Sets#newHashSet() and newTreeSet() with constructors directly 
> hadoop-tool
> -
>
> Key: HADOOP-18415
> URL: https://issues.apache.org/jira/browse/HADOOP-18415
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Major
>  Labels: pull-request-available
>







[GitHub] [hadoop] Samrat002 opened a new pull request, #4805: HADOOP-18415. Replace Sets.newHashSet() with java constructor in hado…

2022-08-24 Thread GitBox


Samrat002 opened a new pull request, #4805:
URL: https://github.com/apache/hadoop/pull/4805

   …op-tools
   
   
   
   ### Description of PR
   
   [HADOOP-18415](https://issues.apache.org/jira/browse/HADOOP-18415)
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[jira] [Commented] (HADOOP-18391) Improve VectoredReadUtils

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584563#comment-17584563
 ] 

ASF GitHub Bot commented on HADOOP-18391:
-

hadoop-yetus commented on PR #4787:
URL: https://github.com/apache/hadoop/pull/4787#issuecomment-1226708208

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 27s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  7s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  8s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 24s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 57s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4787/2/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   3m 12s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 21s | 
[/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4787/2/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  javadoc  |   2m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 39s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 12s |  |  hadoop-aws in the patch passed. 
 |
   | -1 :x: |  asflicense  |   1m 22s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4787/2/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 235m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4787/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4787 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux e4ee97048549 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4787: HADOOP-18391. Improvements in VectoredReadUtils#readVectored() for direct buffers

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4787:
URL: https://github.com/apache/hadoop/pull/4787#issuecomment-1226708208

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 27s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  7s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  8s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 24s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 57s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4787/2/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   3m 12s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 21s | 
[/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4787/2/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  javadoc  |   2m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 39s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 12s |  |  hadoop-aws in the patch passed. 
 |
   | -1 :x: |  asflicense  |   1m 22s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4787/2/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 235m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4787/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4787 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux e4ee97048549 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e9c071920abe7a5c07bb50c661f44168488367a9 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | 

[GitHub] [hadoop] ZanderXu commented on pull request #4763: HDFS-16734. RBF: fix some bugs when handling getContentSummary RPC

2022-08-24 Thread GitBox


ZanderXu commented on PR #4763:
URL: https://github.com/apache/hadoop/pull/4763#issuecomment-1226693584

   @goiri Sir, I have rebased this patch onto the latest trunk. Please help review it again. Thanks.





[GitHub] [hadoop] slfan1989 commented on pull request #4797: YARN-11277. trigger log-dir deletion by size for NonAggregatingLogHandler

2022-08-24 Thread GitBox


slfan1989 commented on PR #4797:
URL: https://github.com/apache/hadoop/pull/4797#issuecomment-1226681435

   @leixm Please fix the CheckStyle issues.





[GitHub] [hadoop] ZanderXu commented on pull request #4560: HDFS-16659. JournalNode should throw CacheMissException when SinceTxId is bigger than HighestWrittenTxId

2022-08-24 Thread GitBox


ZanderXu commented on PR #4560:
URL: https://github.com/apache/hadoop/pull/4560#issuecomment-1226681081

   @xkrogen Master, thanks for your detailed explanation and nice suggestion. I will modify this patch with this nice idea.





[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4792: YARN-11275. [Federation] Add batchFinishApplicationMaster in UAMPoolManager.

2022-08-24 Thread GitBox


slfan1989 commented on code in PR #4792:
URL: https://github.com/apache/hadoop/pull/4792#discussion_r954433018


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:
##
@@ -450,4 +452,52 @@ public void drainUAMHeartbeats() {
   uam.drainHeartbeatThread();
 }
   }
+
+  /**
+   * Complete FinishApplicationMaster interface calls in batches.
+   *
+   * @param request FinishApplicationMasterRequest
+   * @param appId application Id
+   * @return Returns the Map map,
+   * the key is subClusterId, the value is 
FinishApplicationMasterResponse
+   */
+  public Map 
batchFinishApplicationMaster(
+  FinishApplicationMasterRequest request, String appId) {
+
+Map responseMap = new HashMap<>();
+Set subClusterIds = this.unmanagedAppMasterMap.keySet();
+
+if (subClusterIds != null && !subClusterIds.isEmpty()) {
+  ExecutorCompletionService> 
finishAppService =
+  new ExecutorCompletionService<>(this.threadpool);
+  LOG.info("Sending finish application request to {} sub-cluster RMs", 
subClusterIds.size());
+
+  for (final String subClusterId : subClusterIds) {
+finishAppService.submit(() -> {
+  LOG.info("Sending finish application request to RM {}", 
subClusterId);
+  FinishApplicationMasterResponse uamResponse = null;
+  try {
+uamResponse = finishApplicationMaster(subClusterId, request);

Review Comment:
   Thanks for your suggestion, the code looks very good, I will modify it.



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java:
##
@@ -969,4 +969,58 @@ private PreemptionMessage createDummyPreemptionMessage(
 preemptionMessage.setContract(contract);
 return preemptionMessage;
   }
+
+  @Test
+  public void testBatchFinishApplicationMaster() throws IOException, 
InterruptedException {
+
+final RegisterApplicationMasterRequest registerReq =
+Records.newRecord(RegisterApplicationMasterRequest.class);
+registerReq.setHost(Integer.toString(testAppId));
+registerReq.setRpcPort(testAppId);
+registerReq.setTrackingUrl("");
+
+UserGroupInformation ugi = 
interceptor.getUGIWithToken(interceptor.getAttemptId());
+
+ugi.doAs((PrivilegedExceptionAction) () -> {
+
+  // Register the application
+  RegisterApplicationMasterRequest registerReq1 =
+  Records.newRecord(RegisterApplicationMasterRequest.class);
+  registerReq1.setHost(Integer.toString(testAppId));
+  registerReq1.setRpcPort(0);
+  registerReq1.setTrackingUrl("");
+
+  // Register ApplicationMaster
+  RegisterApplicationMasterResponse registerResponse =
+  interceptor.registerApplicationMaster(registerReq1);
+  Assert.assertNotNull(registerResponse);
+  lastResponseId = 0;
+
+  Assert.assertEquals(0, interceptor.getUnmanagedAMPoolSize());
+
+  // Allocate the first batch of containers, with sc1 and sc2 active
+  registerSubCluster(SubClusterId.newInstance("SC-1"));
+  registerSubCluster(SubClusterId.newInstance("SC-2"));
+
+  int numberOfContainers = 3;
+  List containers =
+  getContainersAndAssert(numberOfContainers, numberOfContainers * 2);
+  Assert.assertEquals(2, interceptor.getUnmanagedAMPoolSize());
+  Assert.assertEquals(numberOfContainers * 2, containers.size());
+
+  // Finish the application
+  FinishApplicationMasterRequest finishReq =
+  Records.newRecord(FinishApplicationMasterRequest.class);
+  finishReq.setDiagnostics("");
+  finishReq.setTrackingUrl("");
+  finishReq.setFinalApplicationStatus(FinalApplicationStatus.SUCCEEDED);
+
+  FinishApplicationMasterResponse finshResponse =

Review Comment:
   I will fix it.






[GitHub] [hadoop] ZanderXu commented on pull request #4744: HDFS-16689. Standby NameNode crashes when transitioning to Active with in-progress tailer

2022-08-24 Thread GitBox


ZanderXu commented on PR #4744:
URL: https://github.com/apache/hadoop/pull/4744#issuecomment-1226679229

   ```
if (curSegment != null) { 
  LOG.warn("Client is requesting a new log segment " + txid +  
  " though we are already writing " + curSegment + ". " + 
  "Aborting the current segment in order to begin the new one." + 
  " ; journal id: " + journalId); 
  // The writer may have lost a connection to us and is now 
  // re-connecting after the connection came back. 
  // We should abort our own old segment. 
  abortCurSegment(); 
} 
   ```
   
   The `abortCurSegment()` just aborts the current segment but does not finalize the current inProgress segment, so it may result in two inProgress segment files on disk.
   
   > So are we agreed that the best way forward is to modify 
recoverUnclosedStreams() to throw exception on failure, then we can use 
inProgressOk = false to solve this problem as you originally proposed?
   
   Yes, I totally agree with this and I will modify this patch with this idea. 





[GitHub] [hadoop] goiri commented on a diff in pull request #4792: YARN-11275. [Federation] Add batchFinishApplicationMaster in UAMPoolManager.

2022-08-24 Thread GitBox


goiri commented on code in PR #4792:
URL: https://github.com/apache/hadoop/pull/4792#discussion_r954407506


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java:
##
@@ -969,4 +969,58 @@ private PreemptionMessage createDummyPreemptionMessage(
 preemptionMessage.setContract(contract);
 return preemptionMessage;
   }
+
+  @Test
+  public void testBatchFinishApplicationMaster() throws IOException, 
InterruptedException {
+
+final RegisterApplicationMasterRequest registerReq =
+Records.newRecord(RegisterApplicationMasterRequest.class);
+registerReq.setHost(Integer.toString(testAppId));
+registerReq.setRpcPort(testAppId);
+registerReq.setTrackingUrl("");
+
+UserGroupInformation ugi = 
interceptor.getUGIWithToken(interceptor.getAttemptId());
+
+ugi.doAs((PrivilegedExceptionAction) () -> {
+
+  // Register the application
+  RegisterApplicationMasterRequest registerReq1 =
+  Records.newRecord(RegisterApplicationMasterRequest.class);
+  registerReq1.setHost(Integer.toString(testAppId));
+  registerReq1.setRpcPort(0);
+  registerReq1.setTrackingUrl("");
+
+  // Register ApplicationMaster
+  RegisterApplicationMasterResponse registerResponse =
+  interceptor.registerApplicationMaster(registerReq1);
+  Assert.assertNotNull(registerResponse);
+  lastResponseId = 0;
+
+  Assert.assertEquals(0, interceptor.getUnmanagedAMPoolSize());
+
+  // Allocate the first batch of containers, with sc1 and sc2 active
+  registerSubCluster(SubClusterId.newInstance("SC-1"));
+  registerSubCluster(SubClusterId.newInstance("SC-2"));
+
+  int numberOfContainers = 3;
+  List containers =
+  getContainersAndAssert(numberOfContainers, numberOfContainers * 2);
+  Assert.assertEquals(2, interceptor.getUnmanagedAMPoolSize());
+  Assert.assertEquals(numberOfContainers * 2, containers.size());
+
+  // Finish the application
+  FinishApplicationMasterRequest finishReq =
+  Records.newRecord(FinishApplicationMasterRequest.class);
+  finishReq.setDiagnostics("");
+  finishReq.setTrackingUrl("");
+  finishReq.setFinalApplicationStatus(FinalApplicationStatus.SUCCEEDED);
+
+  FinishApplicationMasterResponse finshResponse =

Review Comment:
   Single line?



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java:
##
@@ -450,4 +452,52 @@ public void drainUAMHeartbeats() {
   uam.drainHeartbeatThread();
 }
   }
+
+  /**
+   * Complete FinishApplicationMaster interface calls in batches.
+   *
+   * @param request FinishApplicationMasterRequest
+   * @param appId application Id
+   * @return Returns the Map map,
+   * the key is subClusterId, the value is 
FinishApplicationMasterResponse
+   */
+  public Map 
batchFinishApplicationMaster(
+  FinishApplicationMasterRequest request, String appId) {
+
+Map responseMap = new HashMap<>();
+Set subClusterIds = this.unmanagedAppMasterMap.keySet();
+
+if (subClusterIds != null && !subClusterIds.isEmpty()) {
+  ExecutorCompletionService> 
finishAppService =
+  new ExecutorCompletionService<>(this.threadpool);
+  LOG.info("Sending finish application request to {} sub-cluster RMs", 
subClusterIds.size());
+
+  for (final String subClusterId : subClusterIds) {
+finishAppService.submit(() -> {
+  LOG.info("Sending finish application request to RM {}", 
subClusterId);
+  FinishApplicationMasterResponse uamResponse = null;
+  try {
+uamResponse = finishApplicationMaster(subClusterId, request);

Review Comment:
   ```
   try {
     FinishApplicationMasterResponse uamResponse =
         finishApplicationMaster(subClusterId, request);
     return Collections.singletonMap(subClusterId, uamResponse);
   } catch (Throwable e) {
     LOG.warn("Failed to finish unmanaged application master: RM address: {} ApplicationId: {}",
         subClusterId, appId, e);
     return Collections.singletonMap(subClusterId, null);
   }
   ```






[jira] [Commented] (HADOOP-18416) ITestS3AIOStatisticsContext failure

2022-08-24 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584518#comment-17584518
 ] 

Viraj Jasani commented on HADOOP-18416:
---

When we enable the prefetch feature, three tests in ITestS3AIOStatisticsContext fail consistently.

testThreadIOStatisticsForDifferentThreads also fails, but it fails for the main thread; it doesn't even reach the assertion of non-null stats for the worker thread:
{code:java}
java.lang.AssertionError: [Counter named stream_read_bytes]
Expecting actual not to be null
    at org.apache.hadoop.fs.statistics.IOStatisticAssertions.lookupStatistic(IOStatisticAssertions.java:160)
    at org.apache.hadoop.fs.statistics.IOStatisticAssertions.assertThatStatisticLong(IOStatisticAssertions.java:291)
    at org.apache.hadoop.fs.statistics.IOStatisticAssertions.assertThatStatisticCounter(IOStatisticAssertions.java:306)
    at org.apache.hadoop.fs.s3a.ITestS3AIOStatisticsContext.assertThreadStatisticsForThread(ITestS3AIOStatisticsContext.java:373)
    at org.apache.hadoop.fs.s3a.ITestS3AIOStatisticsContext.testThreadIOStatisticsForDifferentThreads(ITestS3AIOStatisticsContext.java:259)
{code}

> ITestS3AIOStatisticsContext failure
> ---
>
> Key: HADOOP-18416
> URL: https://issues.apache.org/jira/browse/HADOOP-18416
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.9
>Reporter: Steve Loughran
>Assignee: Mehakmeet Singh
>Priority: Major
> Attachments: 
> org.apache.hadoop.fs.s3a.ITestS3AIOStatisticsContext-output.txt
>
>
> test failure running the new ITestS3AIOStatisticsContext. attaching the stack 
> and log file.
> This happened on a large (12 thread) test run, but i can get it to come back 
> intermittently on repeated runs of the whole suite, but never when i just run 
> the single test case.
> {code}
> [ERROR] 
> testThreadIOStatisticsForDifferentThreads(org.apache.hadoop.fs.s3a.ITestS3AIOStatisticsContext)
>   Time elapsed: 3.616 s  <<< FAILURE!
> java.lang.AssertionError: 
> [Counter named stream_write_bytes] 
> Expecting actual not to be null
> at 
> org.apache.hadoop.fs.statistics.IOStatisticAssertions.lookupStatistic(IOStatisticAssertions.java:160)
> at 
> org.apache.hadoop.fs.statistics.IOStatisticAssertions.assertThatStatisticLong(IOStatisticAssertions.java:291)
> at 
> org.apache.hadoop.fs.statistics.IOStatisticAssertions.assertThatStatisticCounter(IOStatisticAssertions.java:306)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AIOStatisticsContext.assertThreadStatisticsForThread(ITestS3AIOStatisticsContext.java:367)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AIOStatisticsContext.testThreadIOStatisticsForDifferentThreads(ITestS3AIOStatisticsContext.java:260)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:750)
> {code}
> I'm suspecting some race condition *or* gc pressure is releasing that 
> reference in the worker thread.
> proposed test changes
> * worker thread changes its thread ID for the logs
> * stores its thread context into a field, so there's guarantee of no GC
> * logs more as it goes along.






[GitHub] [hadoop] goiri commented on a diff in pull request #4764: YARN-11177. Support getNewReservation, submit / update/ Reservation API's for Federation.

2022-08-24 Thread GitBox


goiri commented on code in PR #4764:
URL: https://github.com/apache/hadoop/pull/4764#discussion_r954396015


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -925,13 +1041,61 @@ public ReservationListResponse listReservations(
   @Override
   public ReservationUpdateResponse updateReservation(
   ReservationUpdateRequest request) throws YarnException, IOException {
-throw new NotImplementedException("Code is not implemented");
+
+if (request == null || request.getReservationId() == null
+|| request.getReservationDefinition() == null) {

Review Comment:
   Indentation



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -888,13 +890,127 @@ public MoveApplicationAcrossQueuesResponse 
moveApplicationAcrossQueues(
   @Override
   public GetNewReservationResponse getNewReservation(
   GetNewReservationRequest request) throws YarnException, IOException {
-throw new NotImplementedException("Code is not implemented");
+
+if (request == null) {
+  routerMetrics.incrGetNewReservationFailedRetrieved();
+  String errMsg = "Missing getNewReservation request.";
+  RouterServerUtil.logAndThrowException(errMsg, null);
+}
+
+long startTime = clock.getTime();
+Map subClustersActive =
+federationFacade.getSubClusters(true);
+
+for (int i = 0; i < numSubmitRetries; ++i) {
+  SubClusterId subClusterId = getRandomActiveSubCluster(subClustersActive);
+  LOG.info("getNewReservation try #{} on SubCluster {}.", i, subClusterId);
+  ApplicationClientProtocol clientRMProxy = 
getClientRMProxyForSubCluster(subClusterId);
+  GetNewReservationResponse response = null;
+  try {
+response = clientRMProxy.getNewReservation(request);
+if (response != null) {
+  long stopTime = clock.getTime();
+  routerMetrics.succeededGetNewReservationRetrieved(stopTime - 
startTime);
+  return response;
+}
+  } catch (Exception e) {
+LOG.warn("Unable to create a new Reservation in SubCluster {}.", 
subClusterId.getId(), e);
+subClustersActive.remove(subClusterId);
+  }
+}
+
+routerMetrics.incrGetNewReservationFailedRetrieved();
+String errMsg = "Failed to create a new reservation.";
+throw new YarnException(errMsg);
   }
 
   @Override
   public ReservationSubmissionResponse submitReservation(
   ReservationSubmissionRequest request) throws YarnException, IOException {
-throw new NotImplementedException("Code is not implemented");
+
+if (request == null || request.getReservationId() == null
+|| request.getReservationDefinition() == null || 
request.getQueue() == null) {
+  routerMetrics.incrSubmitReservationFailedRetrieved();
+  RouterServerUtil.logAndThrowException(
+  "Missing submitReservation request or reservationId " +
+   "or reservation definition or queue.", null);
+}
+
+long startTime = clock.getTime();
+ReservationId reservationId = request.getReservationId();
+
+long retryCount = 0;
+boolean firstRetry = true;
+
+while (retryCount < numSubmitRetries) {
+
+  SubClusterId subClusterId = 
policyFacade.getReservationHomeSubCluster(request);
+  LOG.info("submitReservation reservationId {} try #{} on SubCluster {}.",
+  reservationId, retryCount, subClusterId);
+
+  ReservationHomeSubCluster reservationHomeSubCluster =
+  ReservationHomeSubCluster.newInstance(reservationId, subClusterId);
+
+  // If it is the first attempt,use StateStore to add the
+  // mapping of reservationId and subClusterId.
+  // if the number of attempts is greater than 1, use StateStore to update 
the mapping.
+  if (firstRetry) {
+try {
+  // persist the mapping of reservationId and the subClusterId which 
has
+  // been selected as its home
+  subClusterId = 
federationFacade.addReservationHomeSubCluster(reservationHomeSubCluster);
+  firstRetry = false;
+} catch (YarnException e) {
+  routerMetrics.incrSubmitReservationFailedRetrieved();
+  RouterServerUtil.logAndThrowException(e,
+  "Unable to insert the ReservationId %s into the 
FederationStateStore.",
+   reservationId);
+}
+  } else {
+try {
+  // update the mapping of reservationId and the home subClusterId to
+  // the new subClusterId we have selected
+  
federationFacade.updateReservationHomeSubCluster(reservationHomeSubCluster);
+} catch (YarnException e) {
+  SubClusterId subClusterIdInStateStore =
+  federationFacade.getReservationHomeSubCluster(reservationId);
+   

[GitHub] [hadoop] xkrogen commented on pull request #4560: HDFS-16659. JournalNode should throw CacheMissException when SinceTxId is bigger than HighestWrittenTxId

2022-08-24 Thread GitBox


xkrogen commented on PR #4560:
URL: https://github.com/apache/hadoop/pull/4560#issuecomment-1226631416

   I am suggesting that we would also modify 
`QuorumJournalManager#selectInputStreams()` like:
   ```
  try {
    Collection<EditLogInputStream> rpcStreams = new ArrayList<>();
    selectRpcInputStreams(rpcStreams, fromTxnId, onlyDurableTxns);
    streams.addAll(rpcStreams);
    return;
  } catch (NewerTxnIdException ntie) {
    // normal situation, we requested newer IDs than any journal has. no new streams
    return;
  } catch (IOException ioe) {
    LOG.warn("Encountered exception while tailing edits >= " + fromTxnId +
        " via RPC; falling back to streaming.", ioe);
  }
   ```
   
   I say this mainly because we want to use `NewerTxnIdException` to detect 
when a JN is lagging, right? But if we special-case `sinceTxId == highestTxId + 
1`, then we might not detect the case where a JN is lagging by one txn.
   
   So let's say we have: JN0 with ID 1, JN1 with ID 2, JN2 with ID 2 (so JN0 
lags by one txn). Now we send out `getJournaledEdits()` RPCs. JN2 happens to 
respond slow, so we get response from JN0 and JN1. Now it looks like only txn 1 
is durably committed and we never load txn 2 -- the same issue you described in 
your original bug description.
   
   But by throwing `NewerTxnIdException`, `AsyncLoggerSet` will instead 
_ignore_ the response from JN0, so we wait for response from JN1 and JN2, and 
we correctly see that up to txn 2 is committed durably.
   
   Does this clarify? I agree the situation I describe should be rare, but I 
feel that we can cleanly solve it by using `NewerTxnIdException`.





[GitHub] [hadoop] xkrogen commented on pull request #4744: HDFS-16689. Standby NameNode crashes when transitioning to Active with in-progress tailer

2022-08-24 Thread GitBox


xkrogen commented on PR #4744:
URL: https://github.com/apache/hadoop/pull/4744#issuecomment-1226624906

   > I'm sorry, I just find this comment, but didn't find related code to 
finalize the previous inProgress segment. Can you share the related code? 
Thanks.
   
   I'm referring to this:
   
https://github.com/apache/hadoop/blob/62c86eaa0e539a4307ca794e0fcd502a77ebceb8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java#L574-L583
   
   But a little more digging made me realize that I don't think what I 
described will actually happen, since in `FSEditLog#openForWrite()` before 
calling `startLogSegment()` we first check that there are no active streams:
   
https://github.com/apache/hadoop/blob/63db1a85e376c2266afdc62b9590e40acc98429c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java#L338-L347
   
   So it will actually throw an exception, rather than finalizing the old 
segment as I said previously. But this is _after_ `catchupDuringFailover()`, so 
to make your original proposal (disable in-progress edits) work properly, we 
still need to modify `recoverUnclosedStreams()` to throw an error when it fails 
instead of just swallowing the exception.
   
   I briefly looked at the other usages of `recoverUnclosedStreams()` and I 
don't really see any reason why we would want to swallow the exception ... The 
TODO comment there is also from 2012, 10 years old now :)
   
   So are we agreed that the best way forward is to modify 
`recoverUnclosedStreams()` to throw exception on failure, then we can use 
`inProgressOk = false` to solve this problem as you originally proposed?
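
   To make that concrete, here is a minimal sketch of the shape such a change could take. This is not the actual Hadoop code; it assumes the current method wraps `journalSet.recoverUnfinalizedSegments()` in a try/catch that only logs, and the field names are assumptions:
   ```
   // Sketch only: let the recovery failure propagate instead of being swallowed,
   // so catchupDuringFailover() and the transition to active can fail fast.
   public synchronized void recoverUnclosedStreams() throws IOException {
     // Previously the IOException from recoverUnfinalizedSegments() was caught
     // and ignored (the 2012-era TODO mentioned above); now it propagates.
     journalSet.recoverUnfinalizedSegments();
   }
   ```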





[GitHub] [hadoop] xkrogen commented on a diff in pull request #4756: HDFS-16732. [SBN READ] Avoid get location from observer when the bloc…

2022-08-24 Thread GitBox


xkrogen commented on code in PR #4756:
URL: https://github.com/apache/hadoop/pull/4756#discussion_r954390770


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNodeWhenReportDelay.java:
##
@@ -0,0 +1,153 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode.ha;
+
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_STATE_CONTEXT_ENABLED_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_REPLICATION_KEY;
+import static 
org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.getServiceState;
+import static 
org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.OBSERVER_PROBE_RETRY_PERIOD_KEY;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.commons.lang3.ArrayUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.protocol.DirectoryListing;
+import org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster;
+import org.apache.hadoop.hdfs.server.datanode.DataNode;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class TestObserverNodeWhenReportDelay {

Review Comment:
   Can we just add new tests in `TestObserverNode` similar to 
`testObserverNodeBlockMissingRetry`? I am wondering if this new test might be 
overkill.






[jira] [Updated] (HADOOP-13144) Enhancing IPC client throughput via multiple connections per user

2022-08-24 Thread Owen O'Malley (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HADOOP-13144:
---
Fix Version/s: 3.3.9

> Enhancing IPC client throughput via multiple connections per user
> -
>
> Key: HADOOP-13144
> URL: https://issues.apache.org/jira/browse/HADOOP-13144
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Jason Kace
>Assignee: Íñigo Goiri
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
> Attachments: HADOOP-13144-performance.patch, HADOOP-13144.000.patch, 
> HADOOP-13144.001.patch, HADOOP-13144.002.patch, HADOOP-13144.003.patch, 
> HADOOP-13144_overload_enhancement.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The generic IPC client ({{org.apache.hadoop.ipc.Client}}) utilizes a single 
> connection thread for each {{ConnectionId}}.  The {{ConnectionId}} is unique 
> to the connection's remote address, ticket and protocol.  Each ConnectionId 
> is 1:1 mapped to a connection thread by the client via a map cache.
> The result is to serialize all IPC read/write activity through a single 
> thread for a each user/ticket + address.  If a single user makes repeated 
> calls (1k-100k/sec) to the same destination, the IPC client becomes a 
> bottleneck.
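
As a rough illustration of the idea (hypothetical code, not Hadoop's actual org.apache.hadoop.ipc.Client or ConnectionId), widening the connection key with a small index lets one user/ticket/protocol fan out over several sockets instead of serializing on a single connection thread; all names below are made up for the sketch:
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch: one (address, user, protocol) triple maps to up to N
// pooled connections instead of exactly one, spreading high call rates.
public class MultiConnectionPool<C> {

  public interface ConnectionFactory<T> {
    T create(String remoteAddress, String user, String protocol);
  }

  private final Map<String, C> connections = new ConcurrentHashMap<>();
  private final int connectionsPerUser;
  private final ConnectionFactory<C> factory;

  public MultiConnectionPool(int connectionsPerUser, ConnectionFactory<C> factory) {
    this.connectionsPerUser = Math.max(1, connectionsPerUser);
    this.factory = factory;
  }

  /** The index is the only addition to the classic (address, ticket, protocol) key. */
  public C getConnection(String remoteAddress, String user, String protocol) {
    int index = ThreadLocalRandom.current().nextInt(connectionsPerUser);
    String key = remoteAddress + "|" + user + "|" + protocol + "|" + index;
    return connections.computeIfAbsent(key,
        k -> factory.create(remoteAddress, user, protocol));
  }
}
{code}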






[jira] [Resolved] (HADOOP-18406) Adds alignment context to call path for creating RPC proxy with multiple connections per user.

2022-08-24 Thread Owen O'Malley (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley resolved HADOOP-18406.

Fix Version/s: 3.4.0
   3.3.9
   Resolution: Fixed

> Adds alignment context to call path for creating RPC proxy with multiple 
> connections per user.
> --
>
> Key: HADOOP-18406
> URL: https://issues.apache.org/jira/browse/HADOOP-18406
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> HDFS-13274 (RBF: Extend RouterRpcClient to use multiple sockets) gets the RPC 
> proxy using methods which do not allow using an alignment context. These 
> methods were added in HADOOP-13144 (Enhancing IPC client throughput via 
> multiple connections per user).
> This change adds an alignment context as an argument for methods in the call 
> path for creating the proxy.
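
For illustration only, this is roughly what "adding an alignment context to the proxy-creation call path" can look like; the interface and method names below are hypothetical, not Hadoop's actual API:
{code:java}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public final class RpcProxyFactory {

  /** Hypothetical stand-in for an alignment context carrying the last seen state id. */
  public interface AlignmentContext {
    long getLastSeenStateId();
  }

  /** Old-style entry point: callers that do not care about state alignment. */
  public static <T> T createProxy(Class<T> protocol, String address) {
    return createProxy(protocol, address, null);
  }

  /** New-style entry point: the context is threaded down to every invocation. */
  @SuppressWarnings("unchecked")
  public static <T> T createProxy(Class<T> protocol, String address, AlignmentContext ctx) {
    InvocationHandler handler = (proxy, method, args) -> {
      long stateId = (ctx == null) ? -1L : ctx.getLastSeenStateId();
      // A real client would stamp stateId into the RPC header sent to 'address'
      // so the server can order observer reads against the client's view.
      throw new UnsupportedOperationException(
          "sketch only: would invoke " + method.getName() + " with stateId " + stateId);
    };
    return (T) Proxy.newProxyInstance(protocol.getClassLoader(), new Class<?>[]{protocol}, handler);
  }

  private RpcProxyFactory() {
  }
}
{code}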






[jira] [Commented] (HADOOP-18215) Enhance WritableName to be able to return aliases for classes that use serializers

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584504#comment-17584504
 ] 

ASF GitHub Bot commented on HADOOP-18215:
-

bbeaudreault commented on PR #4215:
URL: https://github.com/apache/hadoop/pull/4215#issuecomment-1226605432

   @jojochuang Any chance we can merge this?




> Enhance WritableName to be able to return aliases for classes that use 
> serializers
> --
>
> Key: HADOOP-18215
> URL: https://issues.apache.org/jira/browse/HADOOP-18215
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> WritableName allows users shim in aliases for writables, in the case where a 
> SequenceFile was written with a Writable class that has since been renamed or 
> moved to another package. However, this requires that the aliased class 
> extend Writable. 
> Separately it's possible to configure jobs with keys and values which don't 
> actually extend Writable. Instead they are meant to be 
> serialized/deserialized using the serialization classes defined in 
> {{io.serializations}} config.
> Unfortunately, the current implementation does not support these key/value 
> classes. All we need to do to support this is remove the 
> {{.asSubclass(Writable.class)}} as is already the case for the default.
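
A small illustrative sketch of the resulting lookup behaviour (not the actual org.apache.hadoop.io.WritableName implementation): returning a plain Class<?> and dropping the .asSubclass(Writable.class) call lets aliases work for key/value classes handled by the io.serializations factories as well:
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative alias registry: resolves historical class names without
// forcing the result to extend Writable.
public final class SerializerFriendlyAliases {

  private static final Map<String, Class<?>> NAME_TO_CLASS = new ConcurrentHashMap<>();

  private SerializerFriendlyAliases() {
  }

  /** Register an alias for a class that has been renamed or repackaged. */
  public static void addName(Class<?> clazz, String oldName) {
    NAME_TO_CLASS.put(oldName, clazz);
  }

  /** Resolve a name recorded in an old SequenceFile header to a class. */
  public static Class<?> getClass(String name, ClassLoader loader)
      throws ClassNotFoundException {
    Class<?> known = NAME_TO_CLASS.get(name);
    // No .asSubclass(Writable.class) here: the configured io.serializations
    // factories decide how instances are read and written.
    return known != null ? known : Class.forName(name, true, loader);
  }
}
{code}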



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bbeaudreault commented on pull request #4215: HADOOP-18215. Enhance WritableName to be able to return aliases for classes that use serializers

2022-08-24 Thread GitBox


bbeaudreault commented on PR #4215:
URL: https://github.com/apache/hadoop/pull/4215#issuecomment-1226605432

   @jojochuang Any chance we can merge this?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18186) s3a prefetching to use SemaphoredDelegatingExecutor for submitting work

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584501#comment-17584501
 ] 

ASF GitHub Bot commented on HADOOP-18186:
-

virajjasani commented on PR #4796:
URL: https://github.com/apache/hadoop/pull/4796#issuecomment-1226578101

   @steveloughran I also tested the entire test suite by enabling prefetch. 
Tests in `ITestS3AContractVectoredRead` failed. One example:
   ```
   [ERROR] testMinSeekAndMaxSizeDefaultValues[Buffer type : 
direct](org.apache.hadoop.fs.contract.s3a.ITestS3AContractVectoredRead)  Time 
elapsed: 13.025 s  <<< FAILURE!
   org.junit.ComparisonFailure: [Mismatch in default s3a min seek for vectored 
reads] expected:<4[8]96> but was:<4[0]96>
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at org.apache.hadoop.test.MoreAsserts.assertEqual(MoreAsserts.java:99)
at 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractVectoredRead.testMinSeekAndMaxSizeDefaultValues(ITestS3AContractVectoredRead.java:111)
   
   ```
   
   However, the failures don't seem relevant to the change on this PR. As per 
the test logic, it seems `ITestS3AContractVectoredRead` failures are expected 
when prefetch is enabled.




> s3a prefetching to use SemaphoredDelegatingExecutor for submitting work
> ---
>
> Key: HADOOP-18186
> URL: https://issues.apache.org/jira/browse/HADOOP-18186
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Use SemaphoredDelegatingExecutor for each stream to submit work, if 
> possible, for better fairness in processes with many streams.
> This also takes a DurationTrackerFactory to count how long was spent in the 
> queue, something we would want to know.
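
To illustrate the pattern only (this is not the real org.apache.hadoop.util.SemaphoredDelegatingExecutor, just a sketch of the per-stream fairness idea it implements):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

/** Each stream submits through a wrapper that caps its in-flight work. */
final class BoundedSubmitter {
  private final ExecutorService delegate;
  private final Semaphore permits;

  BoundedSubmitter(ExecutorService delegate, int maxInFlight) {
    this.delegate = delegate;
    this.permits = new Semaphore(maxInFlight);
  }

  <T> Future<T> submit(Callable<T> task) throws InterruptedException {
    // Blocks once this stream already has maxInFlight tasks queued or running,
    // so a single busy stream cannot monopolise the shared pool.
    permits.acquire();
    return delegate.submit(() -> {
      try {
        return task.call();
      } finally {
        permits.release();  // free the permit when the task completes
      }
    });
  }
}
```

A timing hook (the DurationTrackerFactory mentioned above) could be added around the acquire() call to measure how long each submission waits for a permit.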



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on pull request #4796: HADOOP-18186. s3a prefetching to use SemaphoredDelegatingExecutor for submitting work

2022-08-24 Thread GitBox


virajjasani commented on PR #4796:
URL: https://github.com/apache/hadoop/pull/4796#issuecomment-1226578101

   @steveloughran I also tested the entire test suite by enabling prefetch. 
Tests in `ITestS3AContractVectoredRead` failed. One example:
   ```
   [ERROR] testMinSeekAndMaxSizeDefaultValues[Buffer type : 
direct](org.apache.hadoop.fs.contract.s3a.ITestS3AContractVectoredRead)  Time 
elapsed: 13.025 s  <<< FAILURE!
   org.junit.ComparisonFailure: [Mismatch in default s3a min seek for vectored 
reads] expected:<4[8]96> but was:<4[0]96>
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at org.apache.hadoop.test.MoreAsserts.assertEqual(MoreAsserts.java:99)
at 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractVectoredRead.testMinSeekAndMaxSizeDefaultValues(ITestS3AContractVectoredRead.java:111)
   
   ```
   
   However, the failures don't seem relevant to the change on this PR. As per 
the test logic, it seems `ITestS3AContractVectoredRead` failures are expected 
when prefetch is enabled.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18417) Upgrade Maven Surefire plugin to 3.0.0-M7

2022-08-24 Thread Steve Vaughan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584495#comment-17584495
 ] 

Steve Vaughan commented on HADOOP-18417:


This still leaves the issue of the launcher not being consistently included 
when using 3.0.0-M1.  Can we come up with a compromise that addresses the 
launcher issue without negatively impacting the rest of the build?

> Upgrade Maven Surefire plugin to 3.0.0-M7
> -
>
> Key: HADOOP-18417
> URL: https://issues.apache.org/jira/browse/HADOOP-18417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Blocker
>  Labels: pull-request-available
>
> The Maven Surefire plugin 3.0.0-M1 doesn't always include the launcher as 
> part of its setup, which can cause problems with Yarn tests. Some of the 
> Yarn modules use Jupiter, which may be a complicating factor.  Switching to 
> 3.0.0-M7 fixes the issue.
> This is currently blocking MAPREDUCE-7386



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-18417) Upgrade Maven Surefire plugin to 3.0.0-M7

2022-08-24 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reopened HADOOP-18417:
---

> Upgrade Maven Surefire plugin to 3.0.0-M7
> -
>
> Key: HADOOP-18417
> URL: https://issues.apache.org/jira/browse/HADOOP-18417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> The Maven Surefire plugin 3.0.0-M1 doesn't always include the launcher as 
> part of its setup, which can cause problems with Yarn tests. Some of the 
> Yarn modules use Jupiter, which may be a complicating factor.  Switching to 
> 3.0.0-M7 fixes the issue.
> This is currently blocking MAPREDUCE-7386



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18417) Upgrade Maven Surefire plugin to 3.0.0-M7

2022-08-24 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-18417:
--
Fix Version/s: (was: 3.4.0)
   (was: 3.3.9)

> Upgrade Maven Surefire plugin to 3.0.0-M7
> -
>
> Key: HADOOP-18417
> URL: https://issues.apache.org/jira/browse/HADOOP-18417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Blocker
>  Labels: pull-request-available
>
> The Maven Surefire plugin 3.0.0-M1 doesn't always include the launcher as 
> part of its setup, which can cause problems with Yarn tests. Some of the 
> Yarn modules use Jupiter, which may be a complicating factor.  Switching to 
> 3.0.0-M7 fixes the issue.
> This is currently blocking MAPREDUCE-7386



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18417) Upgrade Maven Surefire plugin to 3.0.0-M7

2022-08-24 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584492#comment-17584492
 ] 

Ayush Saxena commented on HADOOP-18417:
---

I have reverted this for now to unblock the builds.

> Upgrade Maven Surefire plugin to 3.0.0-M7
> -
>
> Key: HADOOP-18417
> URL: https://issues.apache.org/jira/browse/HADOOP-18417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> The Maven Surefire plugin 3.0.0-M1 doesn't always include the launcher as 
> part of its setup, which can cause problems with Yarn tests. Some of the 
> Yarn modules use Jupiter, which may be a complicating factor.  Switching to 
> 3.0.0-M7 fixes the issue.
> This is currently blocking MAPREDUCE-7386



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18417) Upgrade Maven Surefire plugin to 3.0.0-M7

2022-08-24 Thread Steve Vaughan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Vaughan updated HADOOP-18417:
---
Description: 
The Maven Surefire plugin 3.0.0-M1 doesn't always include the launcher as part 
of its setup, which can cause problems with Yarn tests. Some of the Yarn 
modules use Jupiter, which may be a complicating factor.  Switching to 3.0.0-M7 
fixes the issue.

This is currently blocking MAPREDUCE-7386

  was:
The Maven Surefire plugin 3.0.0-M1 doesn't always including the launcher as 
part of it's setup, which can cause problems with Yarn tests. Some of the Yarn 
modules use Jupiter, which may be a complicating factor.  Switching to 3.0.0-M7 
fixes the issue.

This is currently blocking MAPREDUCE-7386


> Upgrade Maven Surefire plugin to 3.0.0-M7
> -
>
> Key: HADOOP-18417
> URL: https://issues.apache.org/jira/browse/HADOOP-18417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> The Maven Surefire plugin 3.0.0-M1 doesn't always include the launcher as 
> part of its setup, which can cause problems with Yarn tests. Some of the 
> Yarn modules use Jupiter, which may be a complicating factor.  Switching to 
> 3.0.0-M7 fixes the issue.
> This is currently blocking MAPREDUCE-7386



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18417) Upgrade Maven Surefire plugin to 3.0.0-M7

2022-08-24 Thread Steve Vaughan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584484#comment-17584484
 ] 

Steve Vaughan commented on HADOOP-18417:


Looking at the code, the default value setting appears to have been added to the 
declaration in May 2021.  The documentation states:
{quote}Failsafe plugin deprecated the parameter {{skipTests}} and the parameter 
will be removed in _Failsafe 3.0.0_ as it is a source of conflicts between 
Failsafe and Surefire plugin.
{quote}

> Upgrade Maven Surefire plugin to 3.0.0-M7
> -
>
> Key: HADOOP-18417
> URL: https://issues.apache.org/jira/browse/HADOOP-18417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> The Maven Surefire plugin 3.0.0-M1 doesn't always include the launcher as 
> part of its setup, which can cause problems with Yarn tests. Some of the 
> Yarn modules use Jupiter, which may be a complicating factor.  Switching to 
> 3.0.0-M7 fixes the issue.
> This is currently blocking MAPREDUCE-7386



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18419) Don't fail when using -DNoUnitTests

2022-08-24 Thread Steve Vaughan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Vaughan resolved HADOOP-18419.

Resolution: Duplicate

> Don't fail when using -DNoUnitTests
> ---
>
> Key: HADOOP-18419
> URL: https://issues.apache.org/jira/browse/HADOOP-18419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Critical
>  Labels: pull-request-available
>
> There are commands in hadoop.sh that use -Dtest=NoUnitTests.  I believe this 
> is intended to replace -DskipTests, by not matching any tests.  This causes an 
> issue since the default for surefire.failIfNoSpecifiedTests is true.
> Updating those commands to override the default will address the issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18419) Don't fail when using -DNoUnitTests

2022-08-24 Thread Steve Vaughan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584481#comment-17584481
 ] 

Steve Vaughan commented on HADOOP-18419:


The documentation for Surefire states that this has defaulted to true since 
version 2.12.  I was surprised to see the use of "-Dtest=NoUnitTests" in the 
Hadoop build since I would have expected that to fail.  I actually started 
looking into the Surefire code, thinking it must be a special case I didn't 
know about.  I can't explain why 3.0.0-M1 wouldn't have complained.

If we want to maintain the ability to filter on non-existent tests, then 
overriding surefire.failIfNoSpecifiedTests to make it false is the correct 
approach.

> Don't fail when using -DNoUnitTests
> ---
>
> Key: HADOOP-18419
> URL: https://issues.apache.org/jira/browse/HADOOP-18419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Critical
>  Labels: pull-request-available
>
> There are commands in hadoop.sh that use -Dtest=NoUnitTests.  I believe this 
> is intended to replace -DskipTests, by not matching any tests.  This causes an 
> issue since the default for surefire.failIfNoSpecifiedTests is true.
> Updating those commands to override the default will address the issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16674) TestDNS.testRDNS can fail with ServiceUnavailableException

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584471#comment-17584471
 ] 

ASF GitHub Bot commented on HADOOP-16674:
-

hadoop-yetus commented on PR #4802:
URL: https://github.com/apache/hadoop/pull/4802#issuecomment-1226344029

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  7s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  3s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |   5m 31s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 22s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 54s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  6s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |   5m  4s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 56s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 179m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4802/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4802 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0dbddedad29f 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 789befc6c681bd1c367d6e6ee1c0d73b4d0c8ba3 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4802/1/testReport/ |
   | Max. process+thread count | 1649 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4802/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> TestDNS.testRDNS can fail with ServiceUnavailableException

[GitHub] [hadoop] hadoop-yetus commented on pull request #4802: HADOOP-16674. Fix when TestDNS.testRDNS can fail with ServiceUnavailableException

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4802:
URL: https://github.com/apache/hadoop/pull/4802#issuecomment-1226344029

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  7s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  3s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |   5m 31s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 22s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 54s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  6s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |   5m  4s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 56s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 179m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4802/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4802 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0dbddedad29f 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 789befc6c681bd1c367d6e6ee1c0d73b4d0c8ba3 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4802/1/testReport/ |
   | Max. process+thread count | 1649 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4802/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, 

[jira] [Commented] (HADOOP-18417) Upgrade Maven Surefire plugin to 3.0.0-M7

2022-08-24 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584467#comment-17584467
 ] 

Steve Loughran commented on HADOOP-18417:
-

Let's just revert and retry now that we understand the issues. It sounds like this is a 
regression in how we use Surefire, and I worry about how to run mvn verify 
integration tests with unit tests skipped. Does -DskipTests only affect the 
unit tests?

> Upgrade Maven Surefire plugin to 3.0.0-M7
> -
>
> Key: HADOOP-18417
> URL: https://issues.apache.org/jira/browse/HADOOP-18417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> The Maven Surefire plugin 3.0.0-M1 doesn't always include the launcher as 
> part of its setup, which can cause problems with Yarn tests. Some of the 
> Yarn modules use Jupiter, which may be a complicating factor.  Switching to 
> 3.0.0-M7 fixes the issue.
> This is currently blocking MAPREDUCE-7386



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18419) Don't fail when using -DNoUnitTests

2022-08-24 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584466#comment-17584466
 ] 

Steve Loughran commented on HADOOP-18419:
-

If surefire.failIfNoSpecifiedTests is now true and you can't use 
-Dtest=something, that's a regression.

I'm worried about the mvn verify phase; I need to be able to run the failsafe 
tests without Surefire blowing up or trying to run all the unit tests.

> Don't fail when using -DNoUnitTests
> ---
>
> Key: HADOOP-18419
> URL: https://issues.apache.org/jira/browse/HADOOP-18419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Critical
>  Labels: pull-request-available
>
> There are commands in hadoop.sh that use -Dtest=NoUnitTests.  I believe this 
> is intended to replace -DskipTests, by not matching any tests.  This causes an 
> issue since the default for surefire.failIfNoSpecifiedTests is true.
> Updating those commands to override the default will address the issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4797: YARN-11277. trigger log-dir deletion by size for NonAggregatingLogHandler

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4797:
URL: https://github.com/apache/hadoop/pull/4797#issuecomment-1226185659

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  10m 34s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   9m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   2m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 37s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 43s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |   3m 55s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  6s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   9m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   8m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 57s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4797/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 2 new + 187 unchanged 
- 34 fixed = 189 total (was 221)  |
   | +1 :green_heart: |  mvnsite  |   3m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 12s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 31s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |   3m 35s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 20s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   4m 54s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  25m  2s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 163m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4797/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4797 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 6252cc763226 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9d16fc2482b0d057d5ba635c67c9c1c081864535 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 

[jira] [Commented] (HADOOP-18417) Upgrade Maven Surefire plugin to 3.0.0-M7

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584437#comment-17584437
 ] 

ASF GitHub Bot commented on HADOOP-18417:
-

hadoop-yetus commented on PR #4800:
URL: https://github.com/apache/hadoop/pull/4800#issuecomment-1226165054

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 25s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 24s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  21m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 10s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +0 :ok: |  spotbugs  |   0m 58s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  shadedclient  |   4m 28s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 50s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 10s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +0 :ok: |  spotbugs  |   0m 43s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  26m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 42s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  19m 23s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 222m  0s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4800/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4800 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux fa0a7cb13a9c 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ccdd1052313f9b1652acec0b78b97282a2ff7503 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4800/1/testReport/ |
   | Max. process+thread count | 1884 (vs. ulimit of 5500) |
   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4800: HADOOP-18417. Addendum: Upgrade to M7 of surefire plugin.

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4800:
URL: https://github.com/apache/hadoop/pull/4800#issuecomment-1226165054

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 25s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 24s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  21m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 10s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +0 :ok: |  spotbugs  |   0m 58s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  shadedclient  |   4m 28s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 50s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 10s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +0 :ok: |  spotbugs  |   0m 43s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  26m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 42s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  19m 23s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 222m  0s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4800/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4800 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux fa0a7cb13a9c 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ccdd1052313f9b1652acec0b78b97282a2ff7503 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4800/1/testReport/ |
   | Max. process+thread count | 1884 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-common U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4800/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | 

[jira] [Commented] (HADOOP-18414) Replace Sets#newHashSet() and newTreeSet() with constructors directly hadoop-mapreduce

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584434#comment-17584434
 ] 

ASF GitHub Bot commented on HADOOP-18414:
-

Samrat002 opened a new pull request, #4804:
URL: https://github.com/apache/hadoop/pull/4804

   
   
   ### Description of PR
   [HADOOP-18414](https://issues.apache.org/jira/browse/HADOOP-18414)
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Replace Sets#newHashSet() and newTreeSet() with constructors directly 
> hadoop-mapreduce
> --
>
> Key: HADOOP-18414
> URL: https://issues.apache.org/jira/browse/HADOOP-18414
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Major
>
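
As an illustrative before/after for the kind of mechanical change this subtask covers (hypothetical variables, not actual hadoop-mapreduce code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

final class SetsMigrationExample {
  static void example() {
    // Before: helper methods such as Sets.newHashSet("x", "y") / Sets.newTreeSet().
    // After: plain JDK constructors.
    Set<String> hashSet = new HashSet<>(Arrays.asList("x", "y"));
    Set<String> treeSet = new TreeSet<>();
    System.out.println(hashSet + " " + treeSet);
  }
}
```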




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18414) Replace Sets#newHashSet() and newTreeSet() with constructors directly hadoop-mapreduce

2022-08-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18414:

Labels: pull-request-available  (was: )

> Replace Sets#newHashSet() and newTreeSet() with constructors directly 
> hadoop-mapreduce
> --
>
> Key: HADOOP-18414
> URL: https://issues.apache.org/jira/browse/HADOOP-18414
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Samrat002 opened a new pull request, #4804: HADOOP-18414. Replace Sets.newHashSet() with java constructor

2022-08-24 Thread GitBox


Samrat002 opened a new pull request, #4804:
URL: https://github.com/apache/hadoop/pull/4804

   
   
   ### Description of PR
   [HADOOP-18414](https://issues.apache.org/jira/browse/HADOOP-18414)
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4803: HDFS-16706. ViewFS doc points to wrong mount table name

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4803:
URL: https://github.com/apache/hadoop/pull/4803#issuecomment-1226145439

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |  44m 34s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |   2m 16s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  51m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4803/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4803 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint |
   | uname | Linux 4cc44fdd5f2b 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f143e8690336888bcbdb9324445ae9df5c5e73fc |
   | Max. process+thread count | 83 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4803/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4803: HDFS-16706. ViewFS doc points to wrong mount table name

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4803:
URL: https://github.com/apache/hadoop/pull/4803#issuecomment-1226145402

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |  41m 32s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |   2m 21s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  48m 18s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4803/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4803 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint |
   | uname | Linux 405912e6721d 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f143e8690336888bcbdb9324445ae9df5c5e73fc |
   | Max. process+thread count | 93 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4803/2/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18417) Upgrade Maven Surefire plugin to 3.0.0-M7

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584428#comment-17584428
 ] 

ASF GitHub Bot commented on HADOOP-18417:
-

snmvaughan commented on PR #4800:
URL: https://github.com/apache/hadoop/pull/4800#issuecomment-1226133452

   I'm generally inclined to avoid changing default global behavior, but this 
also addresses the issue.




> Upgrade Maven Surefire plugin to 3.0.0-M7
> -
>
> Key: HADOOP-18417
> URL: https://issues.apache.org/jira/browse/HADOOP-18417
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> The Maven Surefire plugin 3.0.0-M1 doesn't always include the launcher as 
> part of its setup, which can cause problems with Yarn tests. Some of the 
> Yarn modules use Jupiter, which may be a complicating factor.  Switching to 
> 3.0.0-M7 fixes the issue.
> This is currently blocking MAPREDUCE-7386



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snmvaughan commented on pull request #4800: HADOOP-18417. Addendum: Upgrade to M7 of surefire plugin.

2022-08-24 Thread GitBox


snmvaughan commented on PR #4800:
URL: https://github.com/apache/hadoop/pull/4800#issuecomment-1226133452

   I'm generally inclined to avoid changing default global behavior, but this 
also addresses the issue.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18412) Replace Sets#newHashSet() and newTreeSet() with constructors directly in hadoop-client-modules

2022-08-24 Thread Samrat Deb (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samrat Deb resolved HADOOP-18412.
-
Resolution: Invalid

> Replace Sets#newHashSet() and newTreeSet() with constructors directly in 
> hadoop-client-modules
> --
>
> Key: HADOOP-18412
> URL: https://issues.apache.org/jira/browse/HADOOP-18412
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18413) Replace Sets#newHashSet() and newTreeSet() with constructors directly in hadoop-cloud-storage-project

2022-08-24 Thread Samrat Deb (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samrat Deb resolved HADOOP-18413.
-
Resolution: Invalid

> Replace Sets#newHashSet() and newTreeSet() with constructors directly in 
> hadoop-cloud-storage-project
> -
>
> Key: HADOOP-18413
> URL: https://issues.apache.org/jira/browse/HADOOP-18413
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18419) Don't fail when using -DNoUnitTests

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584427#comment-17584427
 ] 

ASF GitHub Bot commented on HADOOP-18419:
-

snmvaughan commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-1226132393

   No problem.  I missed the other pull request.




> Don't fail when using -DNoUnitTests
> ---
>
> Key: HADOOP-18419
> URL: https://issues.apache.org/jira/browse/HADOOP-18419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Critical
>  Labels: pull-request-available
>
> There are commands in hadoop.sh that use -Dtest=NoUnitTests.  I believe this 
> is intended to replace -DskipTest, by not matching any tests.  This causes an 
> issue since the default for surefire.failIfNoSpecifiedTests is true.
> Updating those commands to override the default will address the issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snmvaughan commented on pull request #4801: HADOOP-18419. Don't fail when using -DNoUnitTests

2022-08-24 Thread GitBox


snmvaughan commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-1226132393

   No problem.  I missed the other pull request.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18419) Don't fail when using -DNoUnitTests

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584424#comment-17584424
 ] 

ASF GitHub Bot commented on HADOOP-18419:
-

ayushtkn commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-1226118985

   @snmvaughan if you look, I had a PR on the jira as well, along with that 
comment where I flagged the issue: #4800.
   And I feel that is the better approach; the hadoop.sh change will only sort 
things out for our Jenkins jobs. I would prefer to go with that approach only.




> Don't fail when using -DNoUnitTests
> ---
>
> Key: HADOOP-18419
> URL: https://issues.apache.org/jira/browse/HADOOP-18419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Critical
>  Labels: pull-request-available
>
> There are commands in hadoop.sh that use -Dtest=NoUnitTests.  I believe this 
> is intended to replace -DskipTest, by not matching any tests.  This causes an 
> issue since the default for surefire.failIfNoSpecifiedTests is true.
> Updating those commands to override the default will address the issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on pull request #4801: HADOOP-18419. Don't fail when using -DNoUnitTests

2022-08-24 Thread GitBox


ayushtkn commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-1226118985

   @snmvaughan if you look, I had a PR on the jira as well, along with that 
comment where I flagged the issue: #4800.
   And I feel that is the better approach; the hadoop.sh change will only sort 
things out for our Jenkins jobs. I would prefer to go with that approach only.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18419) Don't fail when using -DNoUnitTests

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584422#comment-17584422
 ] 

ASF GitHub Bot commented on HADOOP-18419:
-

snmvaughan commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-1226103151

   @ayushtkn Build 4 passed as expected, addressing the issue you identified.  
The multiple builds were required because of out-of-memory issues on a specific 
host. I'm not sure why the GitHub status failed (it's still showing the 
results of build 2).




> Don't fail when using -DNoUnitTests
> ---
>
> Key: HADOOP-18419
> URL: https://issues.apache.org/jira/browse/HADOOP-18419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Critical
>  Labels: pull-request-available
>
> There are commands in hadoop.sh that use -Dtest=NoUnitTests.  I believe this 
> is intended to replace -DskipTest, by not matching any tests.  This causes an 
> issue since the default for surefire.failIfNoSpecifiedTests is true.
> Updating those commands to override the default will address the issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snmvaughan commented on pull request #4801: HADOOP-18419. Don't fail when using -DNoUnitTests

2022-08-24 Thread GitBox


snmvaughan commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-1226103151

   @ayushtkn Build 4 passed as expected, addressing the issue you identified.  
The multiple builds were required because of out-of-memory issues on a specific 
host. I'm not sure why the GitHub status failed (it's still showing the 
results of build 2).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4746: YARN-9708. Yarn Federation Router Support DelegationToken.

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4746:
URL: https://github.com/apache/hadoop/pull/4746#issuecomment-1226102617

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  5s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  10m 42s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   9m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   2m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 10s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   7m 23s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |   2m 50s |  |  branch has errors when building 
and testing our client artifacts.  |
   | -0 :warning: |  patch  |   3m 15s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 56s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  cc  |   9m 56s |  |  the patch passed  |
   | -1 :x: |  javac  |   9m 56s | 
[/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4746/9/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  
hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 1 new + 
740 unchanged - 0 fixed = 741 total (was 740)  |
   | +1 :green_heart: |  compile  |   9m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  cc  |   9m  1s |  |  the patch passed  |
   | -1 :x: |  javac  |   9m  1s | 
[/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4746/9/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  
hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 3 new 
+ 649 unchanged - 2 fixed = 652 total (was 651)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 50s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4746/9/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 3 new + 26 unchanged - 
2 fixed = 29 total (was 28)  |
   | +1 :green_heart: |  mvnsite  |   4m  3s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 42s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   7m 36s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |   2m 47s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   4m 43s |  |  hadoop-yarn-common in 

[GitHub] [hadoop] Samrat002 opened a new pull request, #4803: HDFS-16706. ViewFS doc points to wrong mount table name

2022-08-24 Thread GitBox


Samrat002 opened a new pull request, #4803:
URL: https://github.com/apache/hadoop/pull/4803

   
   
   ### Description of PR
   [HDFS-16706](https://issues.apache.org/jira/browse/HDFS-16706)
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18419) Don't fail when using -DNoUnitTests

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584416#comment-17584416
 ] 

ASF GitHub Bot commented on HADOOP-18419:
-

hadoop-yetus commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-1226082977

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  1s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  19m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  83m  1s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4801 |
   | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets 
shellcheck shelldocs |
   | uname | Linux a754d5ce2559 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b19504efbecf6d79b18505598c606d81198316ec |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/4/testReport/ |
   | Max. process+thread count | 734 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/4/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Don't fail when using -DNoUnitTests
> ---
>
> Key: HADOOP-18419
> URL: https://issues.apache.org/jira/browse/HADOOP-18419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Critical
>  Labels: pull-request-available
>
> There are commands in hadoop.sh that use -Dtest=NoUnitTests.  I believe this 
> is intended to replace -DskipTest, by not matching any tests.  This causes an 
> issue since the default for surefire.failIfNoSpecifiedTests is true.
> Updating those commands to override the default will address the issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4801: HADOOP-18419. Don't fail when using -DNoUnitTests

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-1226082977

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  1s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  19m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  83m  1s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4801 |
   | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets 
shellcheck shelldocs |
   | uname | Linux a754d5ce2559 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b19504efbecf6d79b18505598c606d81198316ec |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/4/testReport/ |
   | Max. process+thread count | 734 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/4/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16637) Fix findbugs warnings in hadoop-cos

2022-08-24 Thread groot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584414#comment-17584414
 ] 

groot commented on HADOOP-16637:


[~aajisaka] - The PR seems to have been closed without any mention of a reason. Can you 
check whether the issue still exists? If yes, could you please share the latest 
findbugs warnings in hadoop-cos? I can work on fixing them. Thanks.

> Fix findbugs warnings in hadoop-cos
> ---
>
> Key: HADOOP-16637
> URL: https://issues.apache.org/jira/browse/HADOOP-16637
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/cos
>Reporter: Akira Ajisaka
>Assignee: Yi-Sheng Lien
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> qbt report: 
> https://lists.apache.org/thread.html/ab1ea4ac6590061cfb2f89183f33f97e92da0e68e67657dbfbda862f@%3Ccommon-dev.hadoop.apache.org%3E
> {noformat}
>module:hadoop-cloud-storage-project/hadoop-cos
>Redundant nullcheck of dir, which is known to be non-null in 
> org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check 
> at BufferPool.java:is known to be non-null in 
> org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check 
> at BufferPool.java:[line 66]
>org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
> expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
> At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
> CosNInputStream.java:[line 87]
>Found reliance on default encoding in 
> org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
> byte[]):in 
> org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
> byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
>Found reliance on default encoding in 
> org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
> InputStream, byte[], long):in 
> org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
> InputStream, byte[], long): new String(byte[]) At 
> CosNativeFileSystemStore.java:[line 178]
>org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
> String, String, int) may fail to clean up java.io.InputStream Obligation to 
> clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
> java.io.InputStream Obligation to clean up resource created at 
> CosNativeFileSystemStore.java:[line 252] is not discharged
> {noformat}
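
The two recurring patterns in that report (reliance on the default encoding and an
unclosed InputStream) are typically fixed as in the minimal sketch below; the class
and method names here are illustrative only, and the real changes would land in the
hadoop-cos classes named in the report.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical stand-ins for the hadoop-cos store; only the two patterns matter.
public final class CosFindbugsFixSketch {

  // "Found reliance on default encoding": name the charset explicitly instead
  // of letting new String(byte[]) pick the platform default.
  static String checksumToString(byte[] checksum) {
    return new String(checksum, StandardCharsets.UTF_8);
  }

  // "may fail to clean up java.io.InputStream": try-with-resources closes the
  // stream even when the upload call throws.
  static void uploadPart(File part) throws IOException {
    try (InputStream in = new FileInputStream(part)) {
      // hand "in" to the COS client here
    }
  }
}
```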



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16674) TestDNS.testRDNS can fail with ServiceUnavailableException

2022-08-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-16674:

Labels: pull-request-available  (was: )

> TestDNS.testRDNS can fail with ServiceUnavailableException
> --
>
> Key: HADOOP-16674
> URL: https://issues.apache.org/jira/browse/HADOOP-16674
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, net
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: groot
>Priority: Minor
>  Labels: pull-request-available
>
> TestDNS.testRDNS can fail in some network configurations; it is already set up 
> to catch and swallow these.
> However it can also fail with a ServiceUnavailableException - which is not 
> caught.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16674) TestDNS.testRDNS can fail with ServiceUnavailableException

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584409#comment-17584409
 ] 

ASF GitHub Bot commented on HADOOP-16674:
-

ashutoshcipher opened a new pull request, #4802:
URL: https://github.com/apache/hadoop/pull/4802

   ### Description of PR
   
   Fix when TestDNS.testRDNS can fail with ServiceUnavailableException
   
   JIRA - HADOOP-16674
   
   ### For code changes:
   
   - [X] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> TestDNS.testRDNS can fail with ServiceUnavailableException
> --
>
> Key: HADOOP-16674
> URL: https://issues.apache.org/jira/browse/HADOOP-16674
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, net
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: groot
>Priority: Minor
>
> TestDNS.testRDNS can fail in some network configurations; it is already set up 
> to catch and swallow these.
> However it can also fail with a ServiceUnavailableException - which is not 
> caught.
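
A minimal sketch of the kind of change the description calls for, assuming the
reverse lookup goes through JNDI as in the existing test; the helper name
reverseDns and the surrounding class are made up, and the actual change is the
one proposed in PR #4802.

```java
import javax.naming.NameNotFoundException;
import javax.naming.ServiceUnavailableException;

// Test-side pattern only; the real assertions live in
// org.apache.hadoop.net.TestDNS and use its own lookup helpers.
public class RdnsToleranceSketch {

  public void testRDNS() throws Exception {
    try {
      String host = reverseDns("8.8.8.8");   // hypothetical lookup helper
      // assert on "host" here when the reverse lookup succeeds
    } catch (NameNotFoundException e) {
      // already tolerated: some networks have no PTR record for the address
    } catch (ServiceUnavailableException e) {
      // newly tolerated: the resolver itself may be unreachable in some
      // network setups, which should not fail the test either
    }
  }

  // Placeholder so the sketch is self-contained; the real test calls
  // org.apache.hadoop.net.DNS#reverseDns.
  private String reverseDns(String ip) throws Exception {
    return ip;
  }
}
```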



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16674) TestDNS.testRDNS can fail with ServiceUnavailableException

2022-08-24 Thread groot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

groot reassigned HADOOP-16674:
--

Assignee: groot  (was: Kevin Su)

> TestDNS.testRDNS can fail with ServiceUnavailableException
> --
>
> Key: HADOOP-16674
> URL: https://issues.apache.org/jira/browse/HADOOP-16674
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, net
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: groot
>Priority: Minor
>
> TestDNS.testRDNS can fail in some network configurations; it is already set up 
> to catch and swallow these.
> However it can also fail with a ServiceUnavailableException - which is not 
> caught.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16674) TestDNS.testRDNS can fail with ServiceUnavailableException

2022-08-24 Thread groot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584408#comment-17584408
 ] 

groot commented on HADOOP-16674:


This has been pending for a while. Taking it up to fix it.

cc: [~ayushtkn] [~ste...@apache.org] 

> TestDNS.testRDNS can fail with ServiceUnavailableException
> --
>
> Key: HADOOP-16674
> URL: https://issues.apache.org/jira/browse/HADOOP-16674
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, net
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Kevin Su
>Priority: Minor
>
> TestDNS.testRDNS can fail in some network configurations; it is already set up 
> to catch and swallow these.
> However it can also fail with a ServiceUnavailableException - which is not 
> caught.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ashutoshcipher opened a new pull request, #4802: HADOOP-16674. Fix when TestDNS.testRDNS can fail with ServiceUnavailableException

2022-08-24 Thread GitBox


ashutoshcipher opened a new pull request, #4802:
URL: https://github.com/apache/hadoop/pull/4802

   ### Description of PR
   
   Fix when TestDNS.testRDNS can fail with ServiceUnavailableException
   
   JIRA - HADOOP-16674
   
   ### For code changes:
   
   - [X] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16769) LocalDirAllocator to provide diagnostics when file creation fails

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584396#comment-17584396
 ] 

ASF GitHub Bot commented on HADOOP-16769:
-

ashutoshcipher commented on PR #1892:
URL: https://github.com/apache/hadoop/pull/1892#issuecomment-1226037306

   Thanks @ramesh0201 for the PR. Wanted to check whether you are still working on 
it, or whether I can take it forward.




> LocalDirAllocator to provide diagnostics when file creation fails
> -
>
> Key: HADOOP-16769
> URL: https://issues.apache.org/jira/browse/HADOOP-16769
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Ramesh Kumar Thangarajan
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HADOOP-16769.1.patch, HADOOP-16769.3.patch, 
> HADOOP-16769.4.patch, HADOOP-16769.5.patch, HADOOP-16769.6.patch, 
> HADOOP-16769.7.patch, HADOOP-16769.8.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Log details of the requested size and available capacity when file creation is 
> not successful.
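
A minimal sketch of the kind of diagnostic being asked for; the names are made
up, and the real change would sit inside org.apache.hadoop.fs.LocalDirAllocator
at the point where it gives up finding a usable directory.

```java
import org.apache.hadoop.util.DiskChecker.DiskErrorException;

// Names are illustrative; only the shape of the error message matters here.
final class AllocatorDiagnosticsSketch {

  static void failWithDiagnostics(long requestedSize, long availableCapacity,
      int dirsSearched) throws DiskErrorException {
    // Include both the requested size and what was actually available, so a
    // failed allocation can be diagnosed from the exception alone.
    throw new DiskErrorException(
        "Could not find any valid local directory for the requested size of "
            + requestedSize + " bytes; total capacity still available across "
            + dirsSearched + " configured directories is "
            + availableCapacity + " bytes");
  }
}
```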



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ashutoshcipher commented on pull request #1892: HADOOP-16769 LocalDirAllocator to provide diagnostics when file creat…

2022-08-24 Thread GitBox


ashutoshcipher commented on PR #1892:
URL: https://github.com/apache/hadoop/pull/1892#issuecomment-1226037306

   Thanks @ramesh0201 for the PR. Wanted to check whether you are still working on 
it, or whether I can take it forward.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4793: YARN-11276. Add lru cache for RMWebServices.getApps

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4793:
URL: https://github.com/apache/hadoop/pull/4793#issuecomment-1226033589

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   0m 30s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4793/9/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 29s | 
[/branch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4793/9/artifact/out/branch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-yarn in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |   0m 29s | 
[/branch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4793/9/artifact/out/branch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-yarn in trunk failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -0 :warning: |  checkstyle  |   0m 30s | 
[/buildtool-branch-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4793/9/artifact/out/buildtool-branch-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  The patch fails to run checkstyle in hadoop-yarn  |
   | -1 :x: |  mvnsite  |   0m 29s | 
[/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4793/9/artifact/out/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt)
 |  hadoop-yarn-api in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 29s | 
[/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4793/9/artifact/out/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt)
 |  hadoop-yarn-common in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 29s | 
[/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4793/9/artifact/out/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 29s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4793/9/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-yarn-api in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javadoc  |   0m 28s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4793/9/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-yarn-common in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javadoc  |   0m 29s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4793/9/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javadoc  |   0m 29s | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4800: HADOOP-18417. Addendum: Upgrade to M7 of surefire plugin.

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4800:
URL: https://github.com/apache/hadoop/pull/4800#issuecomment-1226007049

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  shadedclient  |  43m 10s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  shadedclient  |  21m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 30s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  70m  0s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4800/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4800 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux a5083aebe3a3 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 65215fc8961d7b1c21f48fb707294df5405ae7b0 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4800/2/testReport/ |
   | Max. process+thread count | 747 (vs. ulimit of 5500) |
   | modules | C: hadoop-project U: hadoop-project |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4800/2/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, 

[jira] [Commented] (HADOOP-18417) Upgrade Maven Surefire plugin to 3.0.0-M7

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584381#comment-17584381
 ] 

ASF GitHub Bot commented on HADOOP-18417:
-

hadoop-yetus commented on PR #4800:
URL: https://github.com/apache/hadoop/pull/4800#issuecomment-1226007049

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  shadedclient  |  43m 10s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  shadedclient  |  21m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 30s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  70m  0s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4800/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4800 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux a5083aebe3a3 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 65215fc8961d7b1c21f48fb707294df5405ae7b0 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4800/2/testReport/ |
   | Max. process+thread count | 747 (vs. ulimit of 5500) |
   | modules | C: hadoop-project U: hadoop-project |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4800/2/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Upgrade Maven Surefire plugin to 3.0.0-M7
> -
>
> Key: HADOOP-18417
> URL: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4764: YARN-11177. Support getNewReservation, submit / update/ Reservation API's for Federation.

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4764:
URL: https://github.com/apache/hadoop/pull/4764#issuecomment-1226005145

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 13s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  2s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  2s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  2s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 18s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 15s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   4m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  2s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 53s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m  3s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |   2m  7s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 26s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   4m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 37s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |   1m 46s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 59s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  | 103m 21s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |   3m 32s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 208m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4764/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4764 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 5b5897d42341 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2c702f84ff006dfa5ce4765d561e02ca4bd488ad |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4764/9/testReport/ |
   | Max. process+thread count | 1770 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 

[jira] [Commented] (HADOOP-18419) Don't fail when using -DNoUnitTests

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584376#comment-17584376
 ] 

ASF GitHub Bot commented on HADOOP-18419:
-

hadoop-yetus commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-122646

   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/4/console in 
case of problems.
   




> Don't fail when using -DNoUnitTests
> ---
>
> Key: HADOOP-18419
> URL: https://issues.apache.org/jira/browse/HADOOP-18419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Critical
>  Labels: pull-request-available
>
> There are commands in hadoop.sh that use -Dtest=NoUnitTests.  I believe this 
> is intended to replace -DskipTest, by not matching any tests.  This causes an 
> issue since the default for surefire.failIfNoSpecifiedTests is true.
> Updating those commands to override the default will address the issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4801: HADOOP-18419. Don't fail when using -DNoUnitTests

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-122646

   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/4/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18420) Optimise S3A’s recursive delete to drop successful S3 keys on retry of S3 DeleteObjects

2022-08-24 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584375#comment-17584375
 ] 

Steve Loughran commented on HADOOP-18420:
-

No, there is nothing directly in the invoker for this, but there is no reason why
it couldn't be done. We could have an invoker whose retry policy didn't retry on
503 but did retry on everything else; that invoker would then be called in a loop
which handles the throttle retry itself.
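
A rough sketch of that idea, using hypothetical names (S3CallException,
invokeExceptOnThrottle, invokeWithThrottleLoop) rather than the real
org.apache.hadoop.fs.s3a.Invoker/RetryPolicy API:

```java
import java.util.concurrent.Callable;

public class ThrottleAwareDelete {

  /** Hypothetical exception carrying the HTTP status of a failed S3 call. */
  static class S3CallException extends Exception {
    final int statusCode;
    S3CallException(int statusCode, String msg) {
      super(msg);
      this.statusCode = statusCode;
    }
  }

  /** Inner "invoker": retry ordinary failures in place, rethrow 503 to the caller. */
  static <T> T invokeExceptOnThrottle(Callable<T> call, int attempts) throws Exception {
    Exception last = null;
    for (int i = 0; i < attempts; i++) {
      try {
        return call.call();
      } catch (S3CallException e) {
        if (e.statusCode == 503) {
          throw e;          // throttled: let the outer loop deal with it
        }
        last = e;           // anything else: retry here
      }
    }
    throw last;             // attempts exhausted; assumes attempts >= 1
  }

  /** Outer loop: backs off on throttling, then re-invokes the inner invoker. */
  static <T> T invokeWithThrottleLoop(Callable<T> call) throws Exception {
    long backoffMs = 100;
    while (true) {
      try {
        return invokeExceptOnThrottle(call, 3);
      } catch (S3CallException e) {
        if (e.statusCode != 503) {
          throw e;
        }
        Thread.sleep(backoffMs);                     // simple exponential backoff
        backoffMs = Math.min(backoffMs * 2, 10_000);
      }
    }
  }
}
```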

> Optimise S3A’s recursive delete to drop successful S3 keys on retry of S3 
> DeleteObjects
> ---
>
> Key: HADOOP-18420
> URL: https://issues.apache.org/jira/browse/HADOOP-18420
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Daniel Carl Jones
>Priority: Major
>
> S3A users with large filesystems performing renames or deletes can run into 
> throttling when S3A performs a bulk delete on keys. These are currently 
> batches of 250 
> ([https://github.com/apache/hadoop/blob/c1d82cd95e375410cb0dffc2931063d48687386f/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java#L319-L323]).
> When the bulk delete ([S3 
> DeleteObjects|https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html])
>  fails, it provides a list of keys that failed and why. Today, S3A recovers 
> from throttles by sending the DeleteObjects request again with no change. 
> This can result in additional deletes, which count towards throttling limits.
> Instead, S3A should retry only the keys that failed, limiting the number of
> mutations against the S3 bucket and hopefully mitigating errors when deleting
> a large number of objects.
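
A minimal sketch of that partial-retry loop, with hypothetical BulkDeleter and
BulkDeleteResult types standing in for the AWS SDK DeleteObjects request and
response classes:

```java
import java.util.ArrayList;
import java.util.List;

public class PartialBulkDeleteRetry {

  /** Hypothetical result of one DeleteObjects call. */
  static class BulkDeleteResult {
    final List<String> failedKeys;          // keys S3 reported as not deleted
    BulkDeleteResult(List<String> failedKeys) {
      this.failedKeys = failedKeys;
    }
    boolean allDeleted() {
      return failedKeys.isEmpty();
    }
  }

  /** Hypothetical single bulk-delete call against the store. */
  interface BulkDeleter {
    BulkDeleteResult delete(List<String> keys) throws Exception;
  }

  /**
   * Delete a batch of keys, resubmitting only the keys that failed on each
   * round, up to maxRounds attempts.
   */
  static void deleteWithPartialRetry(BulkDeleter deleter, List<String> keys,
      int maxRounds) throws Exception {
    List<String> pending = new ArrayList<>(keys);
    for (int round = 0; round < maxRounds && !pending.isEmpty(); round++) {
      BulkDeleteResult result = deleter.delete(pending);
      if (result.allDeleted()) {
        return;
      }
      // Only the failed keys go into the next request, so each retry sends
      // fewer mutations to the bucket than re-sending the full batch.
      pending = new ArrayList<>(result.failedKeys);
    }
    if (!pending.isEmpty()) {
      throw new Exception("Keys still undeleted after retries: " + pending);
    }
  }
}
```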



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16806) AWS AssumedRoleCredentialProvider needs ExternalId add

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584374#comment-17584374
 ] 

ASF GitHub Bot commented on HADOOP-16806:
-

hadoop-yetus commented on PR #4753:
URL: https://github.com/apache/hadoop/pull/4753#issuecomment-1225993635

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 34s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |   2m 48s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |   2m  3s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 52s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  63m  4s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4753/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4753 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
xmllint |
   | uname | Linux 44800ade0261 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 176a7f1d03134711fee9fe309180dd393168a349 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4753/4/testReport/ |
   | Max. process+thread count | 264 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4753/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4753: HADOOP-16806: AWS AssumedRoleCredentialProvider needs ExternalId add

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4753:
URL: https://github.com/apache/hadoop/pull/4753#issuecomment-1225993635

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 34s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |   2m 48s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |   2m  3s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 52s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  63m  4s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4753/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4753 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
xmllint |
   | uname | Linux 44800ade0261 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 176a7f1d03134711fee9fe309180dd393168a349 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4753/4/testReport/ |
   | Max. process+thread count | 264 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4753/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the

[jira] [Commented] (HADOOP-18419) Don't fail when using -DNoUnitTests

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584372#comment-17584372
 ] 

ASF GitHub Bot commented on HADOOP-18419:
-

hadoop-yetus commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-1225985162

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   0m 58s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/2/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |   6m  0s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |   0m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +0 :ok: |  asflicense  |   0m 30s |  |  ASF License check generated no 
output?  |
   |  |   |  12m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4801 |
   | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets 
shellcheck shelldocs |
   | uname | Linux 88e4472a55fa 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b19504efbecf6d79b18505598c606d81198316ec |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/2/testReport/ |
   | Max. process+thread count | 56 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/2/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Don't fail when using -DNoUnitTests
> ---
>
> Key: HADOOP-18419
> URL: https://issues.apache.org/jira/browse/HADOOP-18419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Critical
>  Labels: pull-request-available
>
> There are commands in hadoop.sh that use -Dtest=NoUnitTests. I believe this
> is intended to replace -DskipTests by not matching any tests. This causes an
> issue since the default for surefire.failIfNoSpecifiedTests is true.
> Updating those commands to override the default will address the issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4801: HADOOP-18419. Don't fail when using -DNoUnitTests

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-1225985162

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   0m 58s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/2/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |   6m  0s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |   0m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +0 :ok: |  asflicense  |   0m 30s |  |  ASF License check generated no 
output?  |
   |  |   |  12m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4801 |
   | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets 
shellcheck shelldocs |
   | uname | Linux 88e4472a55fa 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b19504efbecf6d79b18505598c606d81198316ec |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/2/testReport/ |
   | Max. process+thread count | 56 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/2/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16806) AWS AssumedRoleCredentialProvider needs ExternalId add

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584367#comment-17584367
 ] 

ASF GitHub Bot commented on HADOOP-16806:
-

hadoop-yetus commented on PR #4753:
URL: https://github.com/apache/hadoop/pull/4753#issuecomment-1225977792

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 21s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |   2m 32s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |   2m  5s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  1s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  60m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4753/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4753 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
xmllint |
   | uname | Linux cea56b748b9e 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a00586ceca062495da9ed740c631d8c1e1a89b3f |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4753/3/testReport/ |
   | Max. process+thread count | 272 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4753/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4753: HADOOP-16806: AWS AssumedRoleCredentialProvider needs ExternalId add

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4753:
URL: https://github.com/apache/hadoop/pull/4753#issuecomment-1225977792

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 21s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |   2m 32s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |   2m  5s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  1s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  60m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4753/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4753 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
xmllint |
   | uname | Linux cea56b748b9e 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a00586ceca062495da9ed740c631d8c1e1a89b3f |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4753/3/testReport/ |
   | Max. process+thread count | 272 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4753/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the

[jira] [Commented] (HADOOP-18419) Don't fail when using -DNoUnitTests

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584365#comment-17584365
 ] 

ASF GitHub Bot commented on HADOOP-18419:
-

hadoop-yetus commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-1225973482

   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/2/console in 
case of problems.
   




> Don't fail when using -DNoUnitTests
> ---
>
> Key: HADOOP-18419
> URL: https://issues.apache.org/jira/browse/HADOOP-18419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Critical
>  Labels: pull-request-available
>
> There are commands in hadoop.sh that use -Dtest=NoUnitTests.  I believe this 
> is intended to replace -DskipTest, by not matching any tests.  This causes an 
> issue since the default for surefire.failIfNoSpecifiedTests is true.
> Updating those commands to override the default will address the issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4801: HADOOP-18419. Don't fail when using -DNoUnitTests

2022-08-24 Thread GitBox


hadoop-yetus commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-1225973482

   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/2/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18419) Don't fail when using -DNoUnitTests

2022-08-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17584357#comment-17584357
 ] 

ASF GitHub Bot commented on HADOOP-18419:
-

hadoop-yetus commented on PR #4801:
URL: https://github.com/apache/hadoop/pull/4801#issuecomment-1225954181

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   2m 17s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/1/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |   0m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |   1m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +0 :ok: |  asflicense  |   0m 31s |  |  ASF License check generated no 
output?  |
   |  |   |   8m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4801 |
   | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets 
shellcheck shelldocs |
   | uname | Linux 86c6dd29015f 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b19504efbecf6d79b18505598c606d81198316ec |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/1/testReport/ |
   | Max. process+thread count | 51 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4801/1/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Don't fail when using -DNoUnitTests
> ---
>
> Key: HADOOP-18419
> URL: https://issues.apache.org/jira/browse/HADOOP-18419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Critical
>  Labels: pull-request-available
>
> There are commands in hadoop.sh that use -Dtest=NoUnitTests. I believe this
> is intended to replace -DskipTests by not matching any tests. This causes an
> issue since the default for surefire.failIfNoSpecifiedTests is true.
> Updating those commands to override the default will address the issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


