[GitHub] [hadoop] virajjasani opened a new pull request, #4323: HDFS-16582. Expose aggregate latency of slow node as perceived by the reporting node

2022-05-17 Thread GitBox


virajjasani opened a new pull request, #4323:
URL: https://github.com/apache/hadoop/pull/4323

   ### Description of PR
   When a datanode is reported as slow by another node, we expose the slow 
node as well as the list of nodes reporting it. However, we don't expose the 
latency of the slow node as measured by each reporting node. Having that 
latency in the metrics would help operators track how far behind a given 
slow node is performing compared to the rest of the nodes in the cluster.
   
   The operator should be able to gather the aggregated latencies of all slow 
nodes, with their reporting nodes, from NameNode metrics.
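   
   A minimal sketch of the idea (class and member names below are 
illustrative, not the actual patch):
   
   ```java
   // Illustrative only: track per-reporting-node latency for each slow node
   // and expose an aggregate snapshot that a NameNode metrics source could
   // publish for operators.
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   
   public class SlowNodeLatencyTracker {
   
     // slow node -> (reporting node -> observed outlier latency in ms)
     private final Map<String, Map<String, Double>> latencies =
         new ConcurrentHashMap<>();
   
     public void recordReport(String slowNode, String reportingNode,
         double latencyMs) {
       latencies.computeIfAbsent(slowNode, k -> new ConcurrentHashMap<>())
           .put(reportingNode, latencyMs);
     }
   
     /** Aggregate view that a metrics source could serialize for operators. */
     public Map<String, Map<String, Double>> getAggregateLatencies() {
       return latencies;
     }
   }
   ```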
   
   ### How was this patch tested?
   Dev cluster and UT.
   
   Screenshot: https://user-images.githubusercontent.com/34790606/168956923-d53e727a-c683-4d99-b075-9b3f776fd9f4.png
   
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   





[jira] [Created] (HADOOP-18242) ABFS Rename Failure when tracking metadata is in incomplete state

2022-05-17 Thread Mehakmeet Singh (Jira)
Mehakmeet Singh created HADOOP-18242:


 Summary: ABFS Rename Failure when tracking metadata is in 
incomplete state
 Key: HADOOP-18242
 URL: https://issues.apache.org/jira/browse/HADOOP-18242
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Reporter: Mehakmeet Singh
Assignee: Mehakmeet Singh


If a node in the datacenter crashes while processing an operation, it can 
occasionally leave the Storage-internal blob tracking metadata in an 
incomplete state. We expect this to happen occasionally, and so all APIs are 
designed so that if this incomplete state is observed on a blob, the 
situation is resolved before the current operation proceeds. However, this 
incident has exposed a bug specifically in the Rename API, where the 
incomplete state fails to resolve, leading to this incorrect failure. As a 
temporary mitigation, if any other operation is performed on the blob – 
GetBlobProperties, GetBlob, GetFileProperties, SetFileProperties, etc. – it 
should resolve the incomplete state, and rename will no longer hit this issue.
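
A hedged sketch of that mitigation from the client side (the paths and the 
single-retry policy are illustrative, not a prescribed fix; on ABFS, 
getFileStatus maps onto a GetFileProperties-style call):

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AbfsRenameMitigation {
  /** Probe the source blob, then retry a failed rename once. */
  public static boolean renameWithProbe(FileSystem fs, Path src, Path dst)
      throws IOException {
    if (fs.rename(src, dst)) {
      return true;
    }
    // Any other operation on the blob (here GetFileProperties via
    // getFileStatus) should resolve the incomplete tracking metadata.
    fs.getFileStatus(src);
    return fs.rename(src, dst);
  }
}
{code}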

StackTrace:
{code:java}
2022-03-22 17:52:19,789 DEBUG [regionserver/euwukwlss-hg50:16020.logRoller] 
services.AbfsClient: HttpRequest: 
404,RenameDestinationParentPathNotFound,cid=ef5cbf0f-5d4a-4630-8a59-3d559077fc24,rid=35fef164-101f-000b-1b15-3ed81800,sent=0,recv=212,PUT,https://euwqdaotdfdls03.dfs.core.windows.net/eykbssc/apps/hbase/data/oldWALs/euwukwlss-hg50.tdf.qa%252C16020%252C1647949929877.1647967939315?timeout=90
   {code}






[GitHub] [hadoop] hadoop-yetus commented on pull request #4263: HADOOP-18105 Implement buffer pooling with weak references

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4263:
URL: https://github.com/apache/hadoop/pull/4263#issuecomment-1129536677

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ feature-vectored-io Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 27s |  |  feature-vectored-io 
passed  |
   | +1 :green_heart: |  compile  |  24m 59s |  |  feature-vectored-io passed 
with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  21m 42s |  |  feature-vectored-io passed 
with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 31s |  |  feature-vectored-io 
passed  |
   | +1 :green_heart: |  mvnsite  |   1m 59s |  |  feature-vectored-io passed  |
   | -1 :x: |  javadoc  |   1m 37s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4263/3/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in feature-vectored-io failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   2m  2s |  |  feature-vectored-io passed 
with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  6s |  |  feature-vectored-io passed  
|
   | +1 :green_heart: |  shadedclient  |  26m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 16s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 57s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 25s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4263/3/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   2m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 15s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 226m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4263/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4263 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 19fc3f0597f5 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | feature-vectored-io / 
12a1925318877133859b7b8ea08754fa9a324bfd |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4263/3/testReport/ |
   | Max. process+thread 

[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4317: YARN-10465. Support getNodeToLabels, getLabelsToNodes, getClusterNodeLabels API's for Federation

2022-05-17 Thread GitBox


slfan1989 commented on code in PR #4317:
URL: https://github.com/apache/hadoop/pull/4317#discussion_r875366804


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -870,19 +870,88 @@ public ReservationDeleteResponse deleteReservation(
   @Override
   public GetNodesToLabelsResponse getNodeToLabels(
   GetNodesToLabelsRequest request) throws YarnException, IOException {
-throw new NotImplementedException("Code is not implemented");
+if (request == null) {
+  routerMetrics.incrNodeToLabelsFailedRetrieved();
+  RouterServerUtil.logAndThrowException("Missing getNodesToLabels 
request.", null);
+}
+long startTime = clock.getTime();
+Map<SubClusterId, SubClusterInfo> subClusters =
+federationFacade.getSubClusters(true);

Review Comment:
   Is it possible to design a generic function like this?
   ```
  private <R> Collection<R> invokeConcurrent(
      Boolean filterInactiveSubClusters, ClientMethod request, Class<R> clazz)
      throws YarnException, RuntimeException {
    Map<SubClusterId, SubClusterInfo> subClusters =
        federationFacade.getSubClusters(filterInactiveSubClusters);
    return subClusters.keySet().stream().map(subClusterId -> {
      try {
        ApplicationClientProtocol protocol =
            getClientRMProxyForSubCluster(subClusterId);
        Method method = ApplicationClientProtocol.class
            .getMethod(request.getMethodName(), request.getTypes());
        return clazz.cast(method.invoke(protocol, request.getParams()));
      } catch (YarnException | NoSuchMethodException | IllegalAccessException
          | InvocationTargetException ex) {
        throw new RuntimeException(ex);
      }
    }).collect(Collectors.toList());
  }
   ```
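   
   For context, a hedged sketch of how getNodeToLabels could then collapse 
onto such a helper (the ClientMethod constructor and the merge utility are 
assumptions for illustration, not the final patch):
   ```
  @Override
  public GetNodesToLabelsResponse getNodeToLabels(
      GetNodesToLabelsRequest request) throws YarnException, IOException {
    // Hypothetical: ClientMethod(name, parameter types, arguments).
    ClientMethod remoteMethod = new ClientMethod("getNodeToLabels",
        new Class[] {GetNodesToLabelsRequest.class}, new Object[] {request});
    Collection<GetNodesToLabelsResponse> responses =
        invokeConcurrent(true, remoteMethod, GetNodesToLabelsResponse.class);
    // Hypothetical helper merging the per-subcluster node->labels maps.
    return RouterYarnClientUtils.mergeNodesToLabelsResponse(responses);
  }
   ```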






[GitHub] [hadoop] hadoop-yetus commented on pull request #4322: HDFS-16574. Reduces the time it takes once to hold FSNamesystem write lock to remove blocks associated with dead datanodes

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4322:
URL: https://github.com/apache/hadoop/pull/4322#issuecomment-1129461576

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 41s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  26m 25s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  3s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4322/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 281 unchanged 
- 0 fixed = 284 total (was 281)  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 38s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 353m 54s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4322/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 471m 20s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.TestReplaceDatanodeFailureReplication |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.server.mover.TestMover |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4322/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4322 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 4023edda9606 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e1941b2ea8a4566ae2e5794c06965576caafd35a |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | 




[GitHub] [hadoop] mukund-thakur commented on a diff in pull request #4263: HADOOP-18105 Implement buffer pooling with weak references

2022-05-17 Thread GitBox


mukund-thakur commented on code in PR #4263:
URL: https://github.com/apache/hadoop/pull/4263#discussion_r875361776


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestWeakReferencedElasticByteBufferPool.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io;
+
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Random;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+/**
+ * Unit tests for {@code WeakReferencedElasticByteBufferPool}.
+ */
+@RunWith(Parameterized.class)
+public class TestWeakReferencedElasticByteBufferPool {
+
+  private final boolean isDirect;
+
+  private final String type;
+
+  @Parameterized.Parameters(name = "Buffer type : {0}")
+  public static List<String> params() {
+return Arrays.asList("direct", "array");
+  }
+
+  public TestWeakReferencedElasticByteBufferPool(String type) {
+this.type = type;
+this.isDirect = !"array".equals(type);
+  }
+
+  // Add more tests for different time and same size buffers in the pool. 
+  @Test
+  public void testGetAndPutBasic() {
+WeakReferencedElasticByteBufferPool pool = new 
WeakReferencedElasticByteBufferPool();
+int bufferSize = 5;
+ByteBuffer buffer = pool.getBuffer(isDirect, bufferSize);
+Assertions.assertThat(buffer.isDirect())
+.describedAs("Buffered returned should be of correct type {}", 
type)
+.isEqualTo(isDirect);
+Assertions.assertThat(buffer.capacity())
+.describedAs("Initial capacity of returned buffer from pool")
+.isEqualTo(bufferSize);
+Assertions.assertThat(buffer.position())
+.describedAs("Initial position of returned buffer from pool")
+.isEqualTo(0);
+
+byte[] arr = createByteArray(bufferSize);
+buffer.put(arr, 0, arr.length);
+buffer.flip();
+validateBufferContent(buffer, arr);
+Assertions.assertThat(buffer.position())
+.describedAs("Buffer's position after filling bytes in it")
+.isEqualTo(bufferSize);
+// releasing buffer to the pool.
+pool.putBuffer(buffer);
+Assertions.assertThat(buffer.position())
+.describedAs("Position should be reset to 0 after returning buffer 
to the pool")
+.isEqualTo(0);
+
+  }
+
+  @Test
+  public void testPoolingWithDifferentSizes() {
+WeakReferencedElasticByteBufferPool pool = new 
WeakReferencedElasticByteBufferPool();
+ByteBuffer buffer = pool.getBuffer(isDirect, 5);
+ByteBuffer buffer1 = pool.getBuffer(isDirect, 10);
+ByteBuffer buffer2 = pool.getBuffer(isDirect, 15);
+
+Assertions.assertThat(pool.getCurrentBuffersCount(isDirect))
+.describedAs("Number of buffers in the pool")
+.isEqualTo(0);
+
+pool.putBuffer(buffer1);
+pool.putBuffer(buffer2);
+Assertions.assertThat(pool.getCurrentBuffersCount(isDirect))
+.describedAs("Number of buffers in the pool")
+.isEqualTo(2);
+ByteBuffer buffer3 = pool.getBuffer(isDirect, 12);
+Assertions.assertThat(buffer3.capacity())
+.describedAs("Pooled buffer should have older capacity")
+.isEqualTo(15);
+Assertions.assertThat(pool.getCurrentBuffersCount(isDirect))
+.describedAs("Number of buffers in the pool")
+.isEqualTo(1);
+pool.putBuffer(buffer);
+ByteBuffer buffer4 = pool.getBuffer(isDirect, 6);
+Assertions.assertThat(buffer4.capacity())
+.describedAs("Pooled buffer should have older capacity")
+.isEqualTo(10);
+Assertions.assertThat(pool.getCurrentBuffersCount(isDirect))
+.describedAs("Number of buffers in the pool")
+.isEqualTo(1);
+
+pool.release();
+Assertions.assertThat(pool.getCurrentBuffersCount(isDirect))
+.describedAs("Number of buffers in the pool post release")
+.isEqualTo(0);
+  }
+
+  @Test
+  public void testPoolingWithDifferentInsertionTime() {
+WeakReferencedElasticByteBufferPool pool = new 
WeakReferencedElasticByteBufferPool();

[GitHub] [hadoop] hadoop-yetus commented on pull request #4321: HDFS-16581.Print DataNode node status.

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4321:
URL: https://github.com/apache/hadoop/pull/4321#issuecomment-1129422862

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 39s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | -1 :x: |  javac  |   1m 25s | 
[/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4321/1/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 4 new + 
466 unchanged - 0 fixed = 470 total (was 466)  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  javac  |   1m 17s | 
[/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4321/1/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 24 new 
+ 450 unchanged - 0 fixed = 474 total (was 450)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  1s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4321/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 121 unchanged 
- 25 fixed = 123 total (was 146)  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  spotbugs  |   3m 39s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4321/1/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html)
 |  hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  25m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 361m 12s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4321/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 478m 43s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  

[GitHub] [hadoop] mukund-thakur commented on a diff in pull request #4263: HADOOP-18105 Implement buffer pooling with weak references

2022-05-17 Thread GitBox


mukund-thakur commented on code in PR #4263:
URL: https://github.com/apache/hadoop/pull/4263#discussion_r875305357


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestWeakReferencedElasticByteBufferPool.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io;
+
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Random;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+/**
+ * Unit tests for {@code WeakReferencedElasticByteBufferPool}.
+ */
+@RunWith(Parameterized.class)
+public class TestWeakReferencedElasticByteBufferPool {
+
+  private final boolean isDirect;
+
+  private final String type;
+
+  @Parameterized.Parameters(name = "Buffer type : {0}")
+  public static List<String> params() {
+return Arrays.asList("direct", "array");
+  }
+
+  public TestWeakReferencedElasticByteBufferPool(String type) {
+this.type = type;
+this.isDirect = !"array".equals(type);
+  }
+
+  // Add more tests for different time and same size buffers in the pool. 
+  @Test
+  public void testGetAndPutBasic() {
+WeakReferencedElasticByteBufferPool pool = new 
WeakReferencedElasticByteBufferPool();
+int bufferSize = 5;
+ByteBuffer buffer = pool.getBuffer(isDirect, bufferSize);
+Assertions.assertThat(buffer.isDirect())
+.describedAs("Buffered returned should be of correct type {}", 
type)
+.isEqualTo(isDirect);
+Assertions.assertThat(buffer.capacity())
+.describedAs("Initial capacity of returned buffer from pool")
+.isEqualTo(bufferSize);
+Assertions.assertThat(buffer.position())
+.describedAs("Initial position of returned buffer from pool")
+.isEqualTo(0);
+
+byte[] arr = createByteArray(bufferSize);
+buffer.put(arr, 0, arr.length);
+buffer.flip();
+validateBufferContent(buffer, arr);
+Assertions.assertThat(buffer.position())
+.describedAs("Buffer's position after filling bytes in it")
+.isEqualTo(bufferSize);
+// releasing buffer to the pool.
+pool.putBuffer(buffer);
+Assertions.assertThat(buffer.position())
+.describedAs("Position should be reset to 0 after returning buffer 
to the pool")
+.isEqualTo(0);
+
+  }
+
+  @Test
+  public void testPoolingWithDifferentSizes() {
+WeakReferencedElasticByteBufferPool pool = new 
WeakReferencedElasticByteBufferPool();
+ByteBuffer buffer = pool.getBuffer(isDirect, 5);
+ByteBuffer buffer1 = pool.getBuffer(isDirect, 10);
+ByteBuffer buffer2 = pool.getBuffer(isDirect, 15);
+
+Assertions.assertThat(pool.getCurrentBuffersCount(isDirect))
+.describedAs("Number of buffers in the pool")
+.isEqualTo(0);
+
+pool.putBuffer(buffer1);
+pool.putBuffer(buffer2);
+Assertions.assertThat(pool.getCurrentBuffersCount(isDirect))
+.describedAs("Number of buffers in the pool")
+.isEqualTo(2);
+ByteBuffer buffer3 = pool.getBuffer(isDirect, 12);
+Assertions.assertThat(buffer3.capacity())
+.describedAs("Pooled buffer should have older capacity")
+.isEqualTo(15);
+Assertions.assertThat(pool.getCurrentBuffersCount(isDirect))
+.describedAs("Number of buffers in the pool")
+.isEqualTo(1);
+pool.putBuffer(buffer);
+ByteBuffer buffer4 = pool.getBuffer(isDirect, 6);
+Assertions.assertThat(buffer4.capacity())
+.describedAs("Pooled buffer should have older capacity")
+.isEqualTo(10);
+Assertions.assertThat(pool.getCurrentBuffersCount(isDirect))
+.describedAs("Number of buffers in the pool")
+.isEqualTo(1);
+
+pool.release();
+Assertions.assertThat(pool.getCurrentBuffersCount(isDirect))
+.describedAs("Number of buffers in the pool post release")
+.isEqualTo(0);
+  }
+
+  @Test
+  public void testPoolingWithDifferentInsertionTime() {
+WeakReferencedElasticByteBufferPool pool = new 
WeakReferencedElasticByteBufferPool();

[jira] [Commented] (HADOOP-18212) hadoop-client-runtime latest version 3.3.2 has security issues

2022-05-17 Thread phoebe chen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17538490#comment-17538490
 ] 

phoebe chen commented on HADOOP-18212:
--

[~ste...@apache.org] Sorry for the late response. It seems the links in the 
message are no longer valid. Could you please share the updated links? Thanks.

> hadoop-client-runtime latest version 3.3.2 has security issues
> --
>
> Key: HADOOP-18212
> URL: https://issues.apache.org/jira/browse/HADOOP-18212
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.2
>Reporter: phoebe chen
>Priority: Major
> Fix For: 3.3.3
>
>
> Currently in the latest version of hadoop-client-runtime, 3.3.2, the 
> following security vulnerabilities come from dependencies:
> com.fasterxml.jackson.core_jackson-databind in version 2.13.0, per 
> [CVE-2020-36518|https://nvd.nist.gov/vuln/detail/CVE-2020-36518], needs to 
> be upgraded to 2.13.2.2.
> commons-codec_commons-codec in version 1.11, per CODEC-134, needs to be 
> upgraded to 1.13 or higher.
> Thanks.






[jira] [Work logged] (HADOOP-18107) Vectored IO support for large S3 files.

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18107?focusedWorklogId=771579&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771579
 ]

ASF GitHub Bot logged work on HADOOP-18107:
---

Author: ASF GitHub Bot
Created on: 17/May/22 20:43
Start Date: 17/May/22 20:43
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on code in PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#discussion_r875246013


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java:
##
@@ -1095,6 +1102,54 @@ public static void validateFileContent(byte[] concat, 
byte[][] bytes) {
 mismatch);
   }
 
+  /**
+   * Utility to validate vectored read results.
+   * @param fileRanges input ranges.
+   * @param originalData original data.
+   * @throws IOException any ioe.
+   */
+  public static void validateVectoredReadResult(List<FileRange> fileRanges,
+byte[] originalData)
+  throws IOException, TimeoutException {
+CompletableFuture<?>[] completableFutures = new 
CompletableFuture[fileRanges.size()];
+int i = 0;
+for (FileRange res : fileRanges) {
+  completableFutures[i++] = res.getData();
+}
+CompletableFuture<Void> combinedFuture = 
CompletableFuture.allOf(completableFutures);
+FutureIO.awaitFuture(combinedFuture, 5, TimeUnit.MINUTES);
+
+for (FileRange res : fileRanges) {
+  CompletableFuture<ByteBuffer> data = res.getData();
+  ByteBuffer buffer = FutureIO.awaitFuture(data, 5, TimeUnit.MINUTES);

Review Comment:
   I don't think just using allOf will complete the combined future; we still 
have to wait on it. The intent here was to get the individual futures 
executing in parallel by calling a get on the combined future, rather than a 
for loop, which would lead to serial execution. 
   
   Also, the doc suggests calling join: 
   `Among the applications of this method is to await completion of a set of 
independent CompletableFutures before continuing a program, as in: 
CompletableFuture.allOf(c1, c2, c3).join();.`
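   
   For reference, a minimal standalone sketch of the pattern being described 
(not the Hadoop code; the class and values are invented for the example):
   ```java
   import java.util.List;
   import java.util.concurrent.CompletableFuture;

   public class AllOfSketch {
     public static void main(String[] args) {
       List<CompletableFuture<Integer>> futures = List.of(
           CompletableFuture.supplyAsync(() -> 1),
           CompletableFuture.supplyAsync(() -> 2));
       // allOf yields a future that completes once every input completes;
       // join() blocks until then, so the per-future join() below is cheap.
       CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
       futures.forEach(f -> System.out.println(f.join()));
     }
   }
   ```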





Issue Time Tracking
---

Worklog Id: (was: 771579)
Time Spent: 2h 20m  (was: 2h 10m)

> Vectored IO support for large S3 files. 
> 
>
> Key: HADOOP-18107
> URL: https://issues.apache.org/jira/browse/HADOOP-18107
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> This effort would mostly be adding more tests for large files under scale 
> tests and seeing if any new issues surface. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a diff in pull request #4273: HADOOP-18107 Adding scale test for vectored reads for large file

2022-05-17 Thread GitBox


mukund-thakur commented on code in PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#discussion_r875246013


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java:
##
@@ -1095,6 +1102,54 @@ public static void validateFileContent(byte[] concat, 
byte[][] bytes) {
 mismatch);
   }
 
+  /**
+   * Utility to validate vectored read results.
+   * @param fileRanges input ranges.
+   * @param originalData original data.
+   * @throws IOException any ioe.
+   */
+  public static void validateVectoredReadResult(List<FileRange> fileRanges,
+                                                byte[] originalData)
+          throws IOException, TimeoutException {
+    CompletableFuture<?>[] completableFutures = new CompletableFuture<?>[fileRanges.size()];
+    int i = 0;
+    for (FileRange res : fileRanges) {
+      completableFutures[i++] = res.getData();
+    }
+    CompletableFuture<Void> combinedFuture = CompletableFuture.allOf(completableFutures);
+    FutureIO.awaitFuture(combinedFuture, 5, TimeUnit.MINUTES);
+
+    for (FileRange res : fileRanges) {
+      CompletableFuture<ByteBuffer> data = res.getData();
+      ByteBuffer buffer = FutureIO.awaitFuture(data, 5, TimeUnit.MINUTES);

Review Comment:
   I don't think just using allOf will complete the combined future; we still 
have to wait on it. The intent here was to get the individual futures 
executing in parallel by calling a get on the combined future, rather than a 
for loop, which would lead to serial execution. 
   
   Also, the doc suggests calling join: 
   `Among the applications of this method is to await completion of a set of 
independent CompletableFutures before continuing a program, as in: 
CompletableFuture.allOf(c1, c2, c3).join();.`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4317: YARN-10465. Support getNodeToLabels, getLabelsToNodes, getClusterNodeLabels API's for Federation

2022-05-17 Thread GitBox


slfan1989 commented on code in PR #4317:
URL: https://github.com/apache/hadoop/pull/4317#discussion_r875242460


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -870,19 +870,88 @@ public ReservationDeleteResponse deleteReservation(
   @Override
   public GetNodesToLabelsResponse getNodeToLabels(
   GetNodesToLabelsRequest request) throws YarnException, IOException {
-throw new NotImplementedException("Code is not implemented");
+    if (request == null) {
+      routerMetrics.incrNodeToLabelsFailedRetrieved();
+      RouterServerUtil.logAndThrowException("Missing getNodesToLabels request.", null);
+    }
+    long startTime = clock.getTime();
+    Map<SubClusterId, SubClusterInfo> subClusters =
+        federationFacade.getSubClusters(true);
+    Map<SubClusterId, GetNodesToLabelsResponse> clusterNodes = Maps.newHashMap();
+    for (SubClusterId subClusterId : subClusters.keySet()) {
+      ApplicationClientProtocol client;
+      try {
+        client = getClientRMProxyForSubCluster(subClusterId);

Review Comment:
   I will fix it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4317: YARN-10465. Support getNodeToLabels, getLabelsToNodes, getClusterNodeLabels API's for Federation

2022-05-17 Thread GitBox


slfan1989 commented on code in PR #4317:
URL: https://github.com/apache/hadoop/pull/4317#discussion_r875242021


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java:
##
@@ -218,4 +225,75 @@ public static GetClusterNodesResponse 
mergeClusterNodesResponse(
 clusterNodesResponse.setNodeReports(nodeReports);
 return clusterNodesResponse;
   }
+
+  /**
+   * Merges a list of GetNodesToLabelsResponse.
+   *
+   * @param responses a list of GetNodesToLabelsResponse to merge.
+   * @return the merged GetNodesToLabelsResponse.
+   */
+  public static GetNodesToLabelsResponse mergeNodesToLabelsResponse(
+      Collection<GetNodesToLabelsResponse> responses) {
+    GetNodesToLabelsResponse nodesToLabelsResponse = Records.newRecord(
+        GetNodesToLabelsResponse.class);
+    Map<NodeId, Set<String>> nodesToLabelMap = new HashMap<>();
+    for (GetNodesToLabelsResponse response : responses) {
+      if (response != null && response.getNodeToLabels() != null) {
+        nodesToLabelMap.putAll(response.getNodeToLabels());
+      }
+    }
+    nodesToLabelsResponse.setNodeToLabels(nodesToLabelMap);
+    return nodesToLabelsResponse;
+  }
+
+  /**
+   * Merges a list of GetLabelsToNodesResponse.
+   *
+   * @param responses a list of GetLabelsToNodesResponse to merge.
+   * @return the merged GetLabelsToNodesResponse.
+   */
+  public static GetLabelsToNodesResponse mergeLabelsToNodes(
+      Collection<GetLabelsToNodesResponse> responses) {
+    GetLabelsToNodesResponse labelsToNodesResponse = Records.newRecord(
+        GetLabelsToNodesResponse.class);
+    Map<String, Set<NodeId>> labelsToNodesMap = new HashMap<>();
+    for (GetLabelsToNodesResponse response : responses) {
+      if (response != null && response.getLabelsToNodes() != null) {
+        Map<String, Set<NodeId>> clusterLabelsToNodesMap = response.getLabelsToNodes();
+        for (Map.Entry<String, Set<NodeId>> entry : clusterLabelsToNodesMap.entrySet()) {
+          String label = entry.getKey();
+          Set<NodeId> clusterNodes = entry.getValue();
+          if (labelsToNodesMap.containsKey(label)) {
+            Set<NodeId> allNodes = labelsToNodesMap.get(label);
+            allNodes.addAll(clusterNodes);
+          } else {
+            labelsToNodesMap.put(label, clusterNodes);
+          }
+        }
+      }
+    }
+    labelsToNodesResponse.setLabelsToNodes(labelsToNodesMap);
+    return labelsToNodesResponse;
+  }
+
+  /**
+   * Merges a list of GetClusterNodeLabelsResponse.
+   *
+   * @param responses a list of GetClusterNodeLabelsResponse to merge.
+   * @return the merged GetClusterNodeLabelsResponse.
+   */
+  public static GetClusterNodeLabelsResponse mergeClusterNodeLabelsResponse(

Review Comment:
I will add new Junit Test.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18107) Vectored IO support for large S3 files.

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18107?focusedWorklogId=771565=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771565
 ]

ASF GitHub Bot logged work on HADOOP-18107:
---

Author: ASF GitHub Bot
Created on: 17/May/22 20:27
Start Date: 17/May/22 20:27
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on code in PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#discussion_r875234271


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java:
##
@@ -1095,6 +1102,54 @@ public static void validateFileContent(byte[] concat, 
byte[][] bytes) {
 mismatch);
   }
 
+  /**
+   * Utility to validate vectored read results.
+   * @param fileRanges input ranges.
+   * @param originalData original data.
+   * @throws IOException any ioe.
+   */
+  public static void validateVectoredReadResult(List<FileRange> fileRanges,
+                                                byte[] originalData)
+          throws IOException, TimeoutException {
+    CompletableFuture<?>[] completableFutures = new CompletableFuture<?>[fileRanges.size()];
+    int i = 0;
+    for (FileRange res : fileRanges) {
+      completableFutures[i++] = res.getData();
+    }
+    CompletableFuture<Void> combinedFuture = CompletableFuture.allOf(completableFutures);
+    FutureIO.awaitFuture(combinedFuture, 5, TimeUnit.MINUTES);

Review Comment:
   I hope you don't want me to make this configurable, as it is only used 
here. Just add a new constant in the current class?





Issue Time Tracking
---

Worklog Id: (was: 771565)
Time Spent: 2h 10m  (was: 2h)

> Vectored IO support for large S3 files. 
> 
>
> Key: HADOOP-18107
> URL: https://issues.apache.org/jira/browse/HADOOP-18107
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This effort would mostly be adding more tests for large files under scale 
> tests and seeing if any new issues surface. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a diff in pull request #4273: HADOOP-18107 Adding scale test for vectored reads for large file

2022-05-17 Thread GitBox


mukund-thakur commented on code in PR #4273:
URL: https://github.com/apache/hadoop/pull/4273#discussion_r875234271


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java:
##
@@ -1095,6 +1102,54 @@ public static void validateFileContent(byte[] concat, 
byte[][] bytes) {
 mismatch);
   }
 
+  /**
+   * Utility to validate vectored read results.
+   * @param fileRanges input ranges.
+   * @param originalData original data.
+   * @throws IOException any ioe.
+   */
+  public static void validateVectoredReadResult(List<FileRange> fileRanges,
+                                                byte[] originalData)
+          throws IOException, TimeoutException {
+    CompletableFuture<?>[] completableFutures = new CompletableFuture<?>[fileRanges.size()];
+    int i = 0;
+    for (FileRange res : fileRanges) {
+      completableFutures[i++] = res.getData();
+    }
+    CompletableFuture<Void> combinedFuture = CompletableFuture.allOf(completableFutures);
+    FutureIO.awaitFuture(combinedFuture, 5, TimeUnit.MINUTES);

Review Comment:
   I hope you don't want me to make this configurable, as it is only used 
here. Just add a new constant in the current class?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a diff in pull request #4317: YARN-10465. Support getNodeToLabels, getLabelsToNodes, getClusterNodeLabels API's for Federation

2022-05-17 Thread GitBox


goiri commented on code in PR #4317:
URL: https://github.com/apache/hadoop/pull/4317#discussion_r875217639


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -870,19 +870,88 @@ public ReservationDeleteResponse deleteReservation(
   @Override
   public GetNodesToLabelsResponse getNodeToLabels(
   GetNodesToLabelsRequest request) throws YarnException, IOException {
-throw new NotImplementedException("Code is not implemented");
+    if (request == null) {
+      routerMetrics.incrNodeToLabelsFailedRetrieved();
+      RouterServerUtil.logAndThrowException("Missing getNodesToLabels request.", null);
+    }
+    long startTime = clock.getTime();
+    Map<SubClusterId, SubClusterInfo> subClusters =
+        federationFacade.getSubClusters(true);
+    Map<SubClusterId, GetNodesToLabelsResponse> clusterNodes = Maps.newHashMap();
+    for (SubClusterId subClusterId : subClusters.keySet()) {
+      ApplicationClientProtocol client;
+      try {
+        client = getClientRMProxyForSubCluster(subClusterId);

Review Comment:
   ApplicationClientProtocol client inside the try



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java:
##
@@ -870,19 +870,88 @@ public ReservationDeleteResponse deleteReservation(
   @Override
   public GetNodesToLabelsResponse getNodeToLabels(
   GetNodesToLabelsRequest request) throws YarnException, IOException {
-throw new NotImplementedException("Code is not implemented");
+    if (request == null) {
+      routerMetrics.incrNodeToLabelsFailedRetrieved();
+      RouterServerUtil.logAndThrowException("Missing getNodesToLabels request.", null);
+    }
+    long startTime = clock.getTime();
+    Map<SubClusterId, SubClusterInfo> subClusters =
+        federationFacade.getSubClusters(true);

Review Comment:
   It looks like we do this pattern a bunch of times.
   Maybe we can generalize it with a lambda passed as a parameter?
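   
   A rough sketch of that generalization (all names invented; the real code 
would use SubClusterId and ApplicationClientProtocol rather than the 
simplified types below):
   ```java
   import java.util.HashMap;
   import java.util.Map;
   import java.util.function.Function;

   // Hypothetical helper: apply one remote call to every subcluster client
   // and collect the per-subcluster results in a single pass.
   final class FanOut {
     static <C, R> Map<String, R> invokeAll(Map<String, C> clients,
                                            Function<C, R> call) {
       Map<String, R> results = new HashMap<>();
       clients.forEach((id, client) -> results.put(id, call.apply(client)));
       return results;
     }
   }
   ```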



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java:
##
@@ -218,4 +225,75 @@ public static GetClusterNodesResponse 
mergeClusterNodesResponse(
 clusterNodesResponse.setNodeReports(nodeReports);
 return clusterNodesResponse;
   }
+
+  /**
+   * Merges a list of GetNodesToLabelsResponse.
+   *
+   * @param responses a list of GetNodesToLabelsResponse to merge.
+   * @return the merged GetNodesToLabelsResponse.
+   */
+  public static GetNodesToLabelsResponse mergeNodesToLabelsResponse(
+  Collection responses) {
+GetNodesToLabelsResponse nodesToLabelsResponse = Records.newRecord(
+ GetNodesToLabelsResponse.class);
+Map> nodesToLabelMap = new HashMap<>();
+for (GetNodesToLabelsResponse response : responses) {
+  if (response != null && response.getNodeToLabels() != null) {
+nodesToLabelMap.putAll(response.getNodeToLabels());
+  }
+}
+nodesToLabelsResponse.setNodeToLabels(nodesToLabelMap);
+return nodesToLabelsResponse;
+  }
+
+  /**
+   * Merges a list of GetLabelsToNodesResponse.
+   *
+   * @param responses a list of GetLabelsToNodesResponse to merge.
+   * @return the merged GetLabelsToNodesResponse.
+   */
+  public static GetLabelsToNodesResponse mergeLabelsToNodes(
+  Collection responses){
+GetLabelsToNodesResponse labelsToNodesResponse = Records.newRecord(
+ GetLabelsToNodesResponse.class);
+Map> labelsToNodesMap = new HashMap<>();
+for (GetLabelsToNodesResponse response : responses) {
+  if (response != null && response.getLabelsToNodes() != null) {
+Map> clusterLabelsToNodesMap = 
response.getLabelsToNodes();
+for (Map.Entry> entry : 
clusterLabelsToNodesMap.entrySet()) {
+  String label = entry.getKey();
+  Set clusterNodes = entry.getValue();
+  if (labelsToNodesMap.containsKey(label)) {
+Set allNodes = labelsToNodesMap.get(label);
+allNodes.addAll(clusterNodes);
+  } else {
+labelsToNodesMap.put(label, clusterNodes);
+  }
+}
+  }
+}
+labelsToNodesResponse.setLabelsToNodes(labelsToNodesMap);
+return labelsToNodesResponse;
+  }
+
+  /**
+   * Merges a list of GetClusterNodeLabelsResponse.
+   *
+   * @param responses a list of GetClusterNodeLabelsResponse to merge.
+   * @return the merged GetClusterNodeLabelsResponse.
+   */
+  public static GetClusterNodeLabelsResponse mergeClusterNodeLabelsResponse(

Review Comment:
   Can we add a few independent unit tests for these merge methods?
   Covering null cases and other corner cases.
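   
   For example, something along these lines (a sketch, not code from this PR; 
it assumes GetNodesToLabelsResponse.newInstance exists and that null entries 
merge to an empty map):
   ```java
   import static org.junit.Assert.assertTrue;

   import java.util.ArrayList;
   import java.util.HashMap;
   import java.util.List;

   import org.apache.hadoop.yarn.api.protocolrecords.GetNodesToLabelsResponse;
   import org.junit.Test;

   // Assumes the test lives in the same package as RouterYarnClientUtils.
   public class TestRouterYarnClientUtilsMergeSketch {
     @Test
     public void testMergeToleratesNullResponses() {
       List<GetNodesToLabelsResponse> responses = new ArrayList<>();
       responses.add(null); // e.g. a subcluster whose call failed
       responses.add(GetNodesToLabelsResponse.newInstance(new HashMap<>()));
       GetNodesToLabelsResponse merged =
           RouterYarnClientUtils.mergeNodesToLabelsResponse(responses);
       // null entries and empty responses should merge to an empty map
       assertTrue(merged.getNodeToLabels().isEmpty());
     }
   }
   ```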



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: 

[jira] [Work logged] (HADOOP-18237) Upgrade Apache Xerces Java to 2.12.2

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18237?focusedWorklogId=771529=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771529
 ]

ASF GitHub Bot logged work on HADOOP-18237:
---

Author: ASF GitHub Bot
Created on: 17/May/22 19:34
Start Date: 17/May/22 19:34
Worklog Time Spent: 10m 
  Work Description: steveloughran merged PR #4318:
URL: https://github.com/apache/hadoop/pull/4318




Issue Time Tracking
---

Worklog Id: (was: 771529)
Time Spent: 0.5h  (was: 20m)

> Upgrade Apache Xerces Java to 2.12.2
> 
>
> Key: HADOOP-18237
> URL: https://issues.apache.org/jira/browse/HADOOP-18237
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Description
> https://github.com/advisories/GHSA-h65f-jvqw-m9fj
> There's a vulnerability within the Apache Xerces Java (XercesJ) XML parser 
> when handling specially crafted XML document payloads. This causes the 
> XercesJ XML parser to wait in an infinite loop, which may sometimes consume 
> system resources for a prolonged duration. This vulnerability is present in 
> XercesJ version 2.12.1 and earlier versions.
> References
> [https://nvd.nist.gov/vuln/detail/CVE-2022-23437]
> https://lists.apache.org/thread/6pjwm10bb69kq955fzr1n0nflnjd27dl
> http://www.openwall.com/lists/oss-security/2022/01/24/3
> https://www.oracle.com/security-alerts/cpuapr2022.html



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #4318: HADOOP-18237. Upgrade Apache Xerces Java to 2.12.2

2022-05-17 Thread GitBox


steveloughran merged PR #4318:
URL: https://github.com/apache/hadoop/pull/4318


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18238) Hadoop 3.3.1 SFTPFileSystem.close() method have problem

2022-05-17 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17538428#comment-17538428
 ] 

Steve Loughran commented on HADOOP-18238:
-

I see the problem: we should only set that reentrancy check after calling 
super.close(), as the delete-on-exit code has to finish before we shut down 
the pool of connections.

Happy to accept a PR which moves the check down. 
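
One possible shape of that fix, sketched from the snippet quoted below (the 
atomicity of the early return is glossed over here; a real patch would keep 
the getAndSet semantics):
{code:java}
@Override
public void close() throws IOException {
  if (closed.get()) {
    return; // already closed
  }
  try {
    // delete-on-exit processing runs inside super.close(),
    // while the filesystem still reports itself open
    super.close();
  } finally {
    // only now set the reentrancy flag, then release the pool
    closed.set(true);
    if (connectionPool != null) {
      connectionPool.shutdown();
    }
  }
}
{code}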

> Hadoop 3.3.1 SFTPFileSystem.close() method have problem
> ---
>
> Key: HADOOP-18238
> URL: https://issues.apache.org/jira/browse/HADOOP-18238
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.1
>Reporter: yi liu
>Priority: Major
>
> @Override
> public void close() throws IOException {
>   if (closed.getAndSet(true)) {
>     return;
>   }
>   try {
>     super.close();
>   } finally {
>     if (connectionPool != null) {
>       connectionPool.shutdown();
>     }
>   }
> }
>  
> If you execute this method, the fs cannot run its deleteOnExit handling, 
> because the fs is already marked closed.
> If close() is called manually so that the SFTP fs shuts down its connection 
> pool and the JVM can exit normally, deleteOnExit will fail because the fs 
> is already closed; if close() is not called, the connection pool is never 
> released and the JVM cannot exit.
> https://issues.apache.org/jira/browse/HADOOP-17528 is the corresponding 
> SFTPFileSystem issue in 3.2.0.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18236) Remove duplicate locks in NetworkTopology

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18236?focusedWorklogId=771500=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771500
 ]

ASF GitHub Bot logged work on HADOOP-18236:
---

Author: ASF GitHub Bot
Created on: 17/May/22 18:14
Start Date: 17/May/22 18:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4320:
URL: https://github.com/apache/hadoop/pull/4320#issuecomment-1129174359

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m  3s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  22m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 58s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 36s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4320/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 11s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 56s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 26s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4320/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 11s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 230m 14s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4320/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4320 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 14ea81897937 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bb272376e748af7c84b568526937e7bc5e348941 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4320: HADOOP-18236. Remove duplicate locks in NetworkTopology

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4320:
URL: https://github.com/apache/hadoop/pull/4320#issuecomment-1129174359

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m  3s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  22m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 58s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 36s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4320/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 11s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 56s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 26s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4320/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 11s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 230m 14s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4320/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4320 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 14ea81897937 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bb272376e748af7c84b568526937e7bc5e348941 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4320/1/testReport/ |
   | Max. process+thread count | 1246 (vs. 

[jira] [Work logged] (HADOOP-18224) Upgrade maven compiler plugin to 3.10.1

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18224?focusedWorklogId=771499=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771499
 ]

ASF GitHub Bot logged work on HADOOP-18224:
---

Author: ASF GitHub Bot
Created on: 17/May/22 18:06
Start Date: 17/May/22 18:06
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on PR #4267:
URL: https://github.com/apache/hadoop/pull/4267#issuecomment-1129165458

   > How about removing some of the files and making sure there is only one 
package-info.java for each package?
   
   This sounds better than excluding it. Thanks for the suggestion. I just 
tried this and the build looks good locally.




Issue Time Tracking
---

Worklog Id: (was: 771499)
Time Spent: 5h 20m  (was: 5h 10m)

> Upgrade maven compiler plugin to 3.10.1
> ---
>
> Key: HADOOP-18224
> URL: https://issues.apache.org/jira/browse/HADOOP-18224
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Currently we are using maven-compiler-plugin version 3.1, which is quite old 
> (2013) and also pulls in a vulnerable log4j dependency:
> {code:java}
> [INFO]
> org.apache.maven.plugins:maven-compiler-plugin:maven-plugin:3.1:runtime
> [INFO]   org.apache.maven.plugins:maven-compiler-plugin:jar:3.1
> [INFO]   org.apache.maven:maven-plugin-api:jar:2.0.9
> [INFO]   org.apache.maven:maven-artifact:jar:2.0.9
> [INFO]   org.codehaus.plexus:plexus-utils:jar:1.5.1
> [INFO]   org.apache.maven:maven-core:jar:2.0.9
> [INFO]   org.apache.maven:maven-settings:jar:2.0.9
> [INFO]   org.apache.maven:maven-plugin-parameter-documenter:jar:2.0.9
> ...
> ...
> ...
> [INFO]   log4j:log4j:jar:1.2.12
> [INFO]   commons-logging:commons-logging-api:jar:1.1
> [INFO]   com.google.collections:google-collections:jar:1.0
> [INFO]   junit:junit:jar:3.8.2
>  {code}
>  
> We should upgrade to version 3.10.1 (latest as of March 2022) of 
> maven-compiler-plugin.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on pull request #4267: HADOOP-18224. Upgrade maven compiler plugin to 3.10.1

2022-05-17 Thread GitBox


virajjasani commented on PR #4267:
URL: https://github.com/apache/hadoop/pull/4267#issuecomment-1129165458

   > How about removing some of the files and making sure there is only one 
package-info.java for each package?
   
   This sounds better than excluding it. Thanks for the suggestion. I just 
tried this and the build looks good locally.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] huaxiangsun commented on pull request #4246: HDFS-16540. Data locality is lost when DataNode pod restarts in kubernetes (#4170)

2022-05-17 Thread GitBox


huaxiangsun commented on PR #4246:
URL: https://github.com/apache/hadoop/pull/4246#issuecomment-1129159135

   Thanks a lot, @saintstack!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18241) Move to Java 11

2022-05-17 Thread Ayush Saxena (Jira)
Ayush Saxena created HADOOP-18241:
-

 Summary: Move to Java 11
 Key: HADOOP-18241
 URL: https://issues.apache.org/jira/browse/HADOOP-18241
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.4.0
Reporter: Ayush Saxena
Assignee: Ayush Saxena


https://lists.apache.org/thread/h5lmpqo2tz7tc02j44qxpwcnjzpxo0k2



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4319: Move to JAVA 11.

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4319:
URL: https://github.com/apache/hadoop/pull/4319#issuecomment-1129156096

   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4319/3/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] simbadzina commented on pull request #4127: HDFS-13522. RBF: Support observer node from Router-Based Federation

2022-05-17 Thread GitBox


simbadzina commented on PR #4127:
URL: https://github.com/apache/hadoop/pull/4127#issuecomment-1129150575

   @goiri @omalley I've now split off the IPC related parts of this change into 
another pull request (https://github.com/apache/hadoop/pull/4311). Please take 
a look.
   The javadoc issues are unrelated to my change. There is HADOOP-18229 
(https://github.com/apache/hadoop/pull/4292) to fix them.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18224) Upgrade maven compiler plugin to 3.10.1

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18224?focusedWorklogId=771486=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771486
 ]

ASF GitHub Bot logged work on HADOOP-18224:
---

Author: ASF GitHub Bot
Created on: 17/May/22 17:33
Start Date: 17/May/22 17:33
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on PR #4267:
URL: https://github.com/apache/hadoop/pull/4267#issuecomment-1129136477

   > package-info.class exclusion on this PR is required for compiler plugin 
upgrade.
   
   There are multiple different package-info.java using the same package.
   ```
   find . -name "package-info.java" | xargs grep "package 
org.apache.hadoop.yarn.server.metrics"
   
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/metrics/package-info.java:package
 org.apache.hadoop.yarn.server.metrics;
   
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/metrics/package-info.java:package
 org.apache.hadoop.yarn.server.metrics;
   ```
   ```
   find . -name "package-info.java" | xargs grep "package 
org.apache.hadoop.hdfs.protocolPB"
   
./hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/package-info.java:package
 org.apache.hadoop.hdfs.protocolPB;
   
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/package-info.java:package
 org.apache.hadoop.hdfs.protocolPB;
   
./hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/package-info.java:package
 org.apache.hadoop.hdfs.protocolPB;
   ```
   How about removing some of the files and making sure there is only one 
package-info.java for each package?
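   
   For context, each retained file would be a one-per-package declaration 
along these lines (the annotation is an illustrative guess, not taken from 
the existing files):
   ```java
   /**
    * Server metrics records shared across YARN server modules.
    */
   @InterfaceAudience.Private
   package org.apache.hadoop.yarn.server.metrics;

   import org.apache.hadoop.classification.InterfaceAudience;
   ```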




Issue Time Tracking
---

Worklog Id: (was: 771486)
Time Spent: 5h 10m  (was: 5h)

> Upgrade maven compiler plugin to 3.10.1
> ---
>
> Key: HADOOP-18224
> URL: https://issues.apache.org/jira/browse/HADOOP-18224
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Currently we are using maven-compiler-plugin version 3.1, which is quite old 
> (2013) and also pulls in a vulnerable log4j dependency:
> {code:java}
> [INFO]
> org.apache.maven.plugins:maven-compiler-plugin:maven-plugin:3.1:runtime
> [INFO]   org.apache.maven.plugins:maven-compiler-plugin:jar:3.1
> [INFO]   org.apache.maven:maven-plugin-api:jar:2.0.9
> [INFO]   org.apache.maven:maven-artifact:jar:2.0.9
> [INFO]   org.codehaus.plexus:plexus-utils:jar:1.5.1
> [INFO]   org.apache.maven:maven-core:jar:2.0.9
> [INFO]   org.apache.maven:maven-settings:jar:2.0.9
> [INFO]   org.apache.maven:maven-plugin-parameter-documenter:jar:2.0.9
> ...
> ...
> ...
> [INFO]   log4j:log4j:jar:1.2.12
> [INFO]   commons-logging:commons-logging-api:jar:1.1
> [INFO]   com.google.collections:google-collections:jar:1.0
> [INFO]   junit:junit:jar:3.8.2
>  {code}
>  
> We should upgrade to version 3.10.1 (latest as of March 2022) of 
> maven-compiler-plugin.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #4267: HADOOP-18224. Upgrade maven compiler plugin to 3.10.1

2022-05-17 Thread GitBox


aajisaka commented on PR #4267:
URL: https://github.com/apache/hadoop/pull/4267#issuecomment-1129136477

   > package-info.class exclusion on this PR is required for compiler plugin 
upgrade.
   
   There are multiple different package-info.java using the same package.
   ```
   find . -name "package-info.java" | xargs grep "package 
org.apache.hadoop.yarn.server.metrics"
   
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/metrics/package-info.java:package
 org.apache.hadoop.yarn.server.metrics;
   
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/metrics/package-info.java:package
 org.apache.hadoop.yarn.server.metrics;
   ```
   ```
   find . -name "package-info.java" | xargs grep "package 
org.apache.hadoop.hdfs.protocolPB"
   
./hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/package-info.java:package
 org.apache.hadoop.hdfs.protocolPB;
   
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/package-info.java:package
 org.apache.hadoop.hdfs.protocolPB;
   
./hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/package-info.java:package
 org.apache.hadoop.hdfs.protocolPB;
   ```
   How about removing some of the files and making sure there is only one 
package-info.java for each package?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4127: HDFS-13522. RBF: Support observer node from Router-Based Federation

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4127:
URL: https://github.com/apache/hadoop/pull/4127#issuecomment-1129134997

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 11s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m  0s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  21m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   6m 41s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 35s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/10/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   6m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  12m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 13s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  25m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 48s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/10/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 340 unchanged - 1 fixed = 342 total (was 
341)  |
   | +1 :green_heart: |  mvnsite  |   6m 41s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  javadoc  |   1m 27s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/10/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   6m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  12m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 29s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 52s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 380m 55s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  34m 35s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 709m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4127 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 27a84d19dc85 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 

[jira] [Commented] (HADOOP-18240) Upgrade Yetus to 0.14.0

2022-05-17 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17538329#comment-17538329
 ] 

Akira Ajisaka commented on HADOOP-18240:


Need to update
 * HADOOP_YETUS_VERSION in dev-support/bin/yetus-wrapper
 * YETUS_VERSION in dev-support/Jenkinsfile

 

> Upgrade Yetus to 0.14.0
> ---
>
> Key: HADOOP-18240
> URL: https://issues.apache.org/jira/browse/HADOOP-18240
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> Yetus 0.14.0 is released. Let's upgrade.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18240) Upgrade Yetus to 0.14.0

2022-05-17 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18240:
--

 Summary: Upgrade Yetus to 0.14.0
 Key: HADOOP-18240
 URL: https://issues.apache.org/jira/browse/HADOOP-18240
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


Yetus 0.14.0 is released. Let's upgrade.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Happy-shi opened a new pull request, #4322: HDFS-16574. Reduces the time it takes once to hold FSNamesystem write lock to remove blocks associated with dead datanodes

2022-05-17 Thread GitBox


Happy-shi opened a new pull request, #4322:
URL: https://github.com/apache/hadoop/pull/4322

   …ove blocks associated with dead datanodes
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18069) CVE-2021-0341 in okhttp@2.7.5 detected in hdfs-client

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18069?focusedWorklogId=771465=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771465
 ]

ASF GitHub Bot logged work on HADOOP-18069:
---

Author: ASF GitHub Bot
Created on: 17/May/22 16:53
Start Date: 17/May/22 16:53
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on PR #4229:
URL: https://github.com/apache/hadoop/pull/4229#issuecomment-1129099805

   The test failures look related to HADOOP-18222, but I want to run the 
precommit job again to validate.
   
https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-4229/14/pipeline/




Issue Time Tracking
---

Worklog Id: (was: 771465)
Time Spent: 6h  (was: 5h 50m)

> CVE-2021-0341 in okhttp@2.7.5 detected in hdfs-client  
> ---
>
> Key: HADOOP-18069
> URL: https://issues.apache.org/jira/browse/HADOOP-18069
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.3.1
>Reporter: Eugene Shinn (Truveta)
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> Our static vulnerability scanner (Fortify On Demand) detected [NVD - 
> CVE-2021-0341 
> (nist.gov)|https://nvd.nist.gov/vuln/detail/CVE-2021-0341#VulnChangeHistorySection]
>  in our application. We traced the vulnerability to a transitive dependency 
> coming from hadoop-hdfs-client, which depends on okhttp@2.7.5 
> ([hadoop/pom.xml at trunk · apache/hadoop 
> (github.com)|https://github.com/apache/hadoop/blob/trunk/hadoop-project/pom.xml#L137]).
>  To resolve this issue, okhttp should be upgraded to 4.9.2+ (ref: 
> [CVE-2021-0341 · Issue #6724 · square/okhttp 
> (github.com)|https://github.com/square/okhttp/issues/6724]).



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #4229: HADOOP-18069. okhttp@2.7.5 to 4.9.3

2022-05-17 Thread GitBox


aajisaka commented on PR #4229:
URL: https://github.com/apache/hadoop/pull/4229#issuecomment-1129099805

   The test failures look related to HADOOP-18222, but I want to run the 
precommit job again to validate.
   
https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-4229/14/pipeline/


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18222) Prevent DelegationTokenSecretManagerMetrics from registering multiple times

2022-05-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-18222:
---
Issue Type: Bug  (was: Improvement)
  Priority: Major  (was: Minor)

> Prevent DelegationTokenSecretManagerMetrics from registering multiple times 
> 
>
> Key: HADOOP-18222
> URL: https://issues.apache.org/jira/browse/HADOOP-18222
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hector Sandoval Chaverri
>Assignee: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> After committing HADOOP-18167, we received reports of the following error 
> when ResourceManager is initialized:
> {noformat}
> Caused by: java.io.IOException: Problem starting http server
> at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1389)
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:475)
> ... 4 more
> Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source 
> DelegationTokenSecretManagerMetrics already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
> at 
> org.apache.hadoop.metrics2.MetricsSystem.register(MetricsSystem.java:71)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager$DelegationTokenSecretManagerMetrics.create(AbstractDelegationTokenSecretManager.java:878)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.(AbstractDelegationTokenSecretManager.java:152)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$DelegationTokenSecretManager.(DelegationTokenManager.java:72)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.(DelegationTokenManager.java:122)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.initTokenManager(DelegationTokenAuthenticationHandler.java:161)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.init(DelegationTokenAuthenticationHandler.java:130)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:194)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.initializeAuthHandler(DelegationTokenAuthenticationFilter.java:214)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:180)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:180)
> at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53){noformat}
> This can happen if MetricsSystemImpl#init is called and multiple metrics are 
> registered with the same name. A proposed solution is to declare the metrics 
> in AbstractDelegationTokenSecretManager as a singleton, which would prevent 
> multiple instances of DelegationTokenSecretManagerMetrics from being registered.
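
A minimal sketch of the proposed singleton approach (hypothetical field and 
method names, not the committed patch):
{code:java}
// Hypothetical sketch: lazily create and register the metrics source exactly
// once, so repeated secret-manager construction cannot re-register it.
private static volatile DelegationTokenSecretManagerMetrics METRICS_INSTANCE;

static DelegationTokenSecretManagerMetrics getMetricsInstance() {
  if (METRICS_INSTANCE == null) {
    synchronized (DelegationTokenSecretManagerMetrics.class) {
      if (METRICS_INSTANCE == null) {
        // create() registers the source with DefaultMetricsSystem; doing it
        // once avoids the "Metrics source ... already exists!" failure above.
        METRICS_INSTANCE = DelegationTokenSecretManagerMetrics.create();
      }
    }
  }
  return METRICS_INSTANCE;
}
{code}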



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-05-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-18167:
--

Assignee: Hector Sandoval Chaverri

> Add metrics to track delegation token secret manager operations
> ---
>
> Key: HADOOP-18167
> URL: https://issues.apache.org/jira/browse/HADOOP-18167
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>Assignee: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
> Attachments: HADOOP-18167-branch-2.10-2.patch, 
> HADOOP-18167-branch-2.10-3.patch, HADOOP-18167-branch-2.10-4.patch, 
> HADOOP-18167-branch-2.10.patch, HADOOP-18167-branch-3.3.patch
>
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> New metrics to track operations that store, update and remove delegation 
> tokens in implementations of AbstractDelegationTokenSecretManager. This will 
> help evaluate the impact of using different secret managers and add 
> optimizations.
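
For context, a hedged sketch of how such a metrics source is typically declared 
with the metrics2 annotations (illustrative names only, not the committed change):
{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Illustrative only: a metrics2 source tracking token operations.
@Metrics(about = "Delegation token secret manager metrics", context = "token")
class DelegationTokenMetricsSketch {
  @Metric("Rate of storeToken operations")
  MutableRate storeToken;
  @Metric("Rate of updateToken operations")
  MutableRate updateToken;
  @Metric("Rate of removeToken operations")
  MutableRate removeToken;
}
{code}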



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-05-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-18167:
---
Fix Version/s: 3.4.0
   3.3.4

> Add metrics to track delegation token secret manager operations
> ---
>
> Key: HADOOP-18167
> URL: https://issues.apache.org/jira/browse/HADOOP-18167
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
> Attachments: HADOOP-18167-branch-2.10-2.patch, 
> HADOOP-18167-branch-2.10-3.patch, HADOOP-18167-branch-2.10-4.patch, 
> HADOOP-18167-branch-2.10.patch, HADOOP-18167-branch-3.3.patch
>
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> New metrics to track operations that store, update and remove delegation 
> tokens in implementations of AbstractDelegationTokenSecretManager. This will 
> help evaluate the impact of using different secret managers and add 
> optimizations.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=771453&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771453
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 17/May/22 15:58
Start Date: 17/May/22 15:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#issuecomment-1129046551

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ feature-HADOOP-18028-s3a-prefetch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m  7s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 34s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 23s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4305/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 2 new + 11 unchanged - 0 fixed 
= 13 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 14s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 110m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4305/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4305 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 2dd2aa4928de 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | feature-HADOOP-18028-s3a-prefetch / 
9026fab1766217aa8ccf78a26a79d269e14eada0 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4305: HADOOP-18231. Adds in new test for S3PrefetchingInputStream

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#issuecomment-1129046551

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ feature-HADOOP-18028-s3a-prefetch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m  7s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 34s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 23s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4305/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 2 new + 11 unchanged - 0 fixed 
= 13 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 14s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 110m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4305/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4305 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 2dd2aa4928de 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | feature-HADOOP-18028-s3a-prefetch / 
9026fab1766217aa8ccf78a26a79d269e14eada0 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4305/4/testReport/ |
   | Max. process+thread count | 594 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4305/4/console |
   | 

[GitHub] [hadoop] jianghuazhu opened a new pull request, #4321: HDFS-16581.Print DataNode node status.

2022-05-17 Thread GitBox


jianghuazhu opened a new pull request, #4321:
URL: https://github.com/apache/hadoop/pull/4321

   ### Description of PR
   Right now we can't directly see the status of some DataNodes; it would be 
helpful to see this information through the dfsadmin tool.
   Details: HDFS-16581
   
   ### How was this patch tested?
   Needs testing: when nodes are in DECOMMISSION_INPROGRESS, DECOMMISSIONED, 
or alive state, we should correctly identify them.
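   
   A hedged sketch of the kind of classification involved, using the existing 
DatanodeInfo.AdminStates enum (illustrative only, not the actual patch):
   ```java
   // Illustrative only: classify a DataNode's admin state for display.
   static String describe(DatanodeInfo dn) {
     switch (dn.getAdminState()) {
       case DECOMMISSION_INPROGRESS: return "Decommission In Progress";
       case DECOMMISSIONED:          return "Decommissioned";
       default:                      return "In Service";
     }
   }
   ```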


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on a diff in pull request #4155: HDFS-16533. COMPOSITE_CRC failed between replicated file and striped …

2022-05-17 Thread GitBox


ZanderXu commented on code in PR #4155:
URL: https://github.com/apache/hadoop/pull/4155#discussion_r874957305


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java:
##
@@ -316,18 +317,22 @@ FileChecksum makeCompositeCrcResult() throws IOException {
 "Added blockCrc 0x{} for block index {} of size {}",
 Integer.toString(blockCrc, 16), i, block.getBlockSize());
   }
-
-  // NB: In some cases the located blocks have their block size adjusted
-  // explicitly based on the requested length, but not all cases;
-  // these numbers may or may not reflect actual sizes on disk.
-  long reportedLastBlockSize =
-  blockLocations.getLastLocatedBlock().getBlockSize();
-  long consumedLastBlockLength = reportedLastBlockSize;
-  if (length - sumBlockLengths < reportedLastBlockSize) {
-LOG.warn(
-"Last block length {} is less than reportedLastBlockSize {}",
-length - sumBlockLengths, reportedLastBlockSize);
-consumedLastBlockLength = length - sumBlockLengths;
+  LocatedBlock nextBlock = locatedBlocks.get(i);
+  long consumedLastBlockLength = Math.min(length - sumBlockLengths,
+  nextBlock.getBlockSize());
+  LocatedBlock lastBlock = blockLocations.getLastLocatedBlock();
+  if (nextBlock.equals(lastBlock)) {

Review Comment:
   Whether it is a replicated file or a striped file, for each block we obtain 
a 4-byte composite CRC, and the actual data length corresponding to that CRC is 
very important, because line 336 will use it to compute the composite CRC.
   
   Suppose a file has 4 blocks, numbered block1, block2, block3 and block4, 
with sizes 10MB, 10MB, 10MB and 7MB respectively, and I call 
getFileChecksum(mockFile, 29MB). The correct consumedLastBlockLength in line 
336 should be 9MB, but the result of the current logic is 7MB, which comes 
from the size of the file's last block. So we get a wrong composite CRC.
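   
   A toy check of the arithmetic above (values from the 4-block example; 
illustrative only, not HDFS code):
   ```java
   long mb = 1024L * 1024;
   long length = 29 * mb;            // bytes requested from getFileChecksum
   long sumBlockLengths = 20 * mb;   // block1 + block2, fully consumed
   long nextBlockSize = 10 * mb;     // block3, the block the loop is on
   long lastBlockSize = 7 * mb;      // block4, the file's last block

   // Proposed logic: clamp to the block the loop is actually on.
   long fixed = Math.min(length - sumBlockLengths, nextBlockSize);  // 9MB, correct

   // Old logic: clamp against the file's last block, regardless of position.
   long remaining = length - sumBlockLengths;
   long buggy = remaining < lastBlockSize ? remaining : lastBlockSize;  // 7MB, wrong
   ```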



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on a diff in pull request #4155: HDFS-16533. COMPOSITE_CRC failed between replicated file and striped …

2022-05-17 Thread GitBox


ZanderXu commented on code in PR #4155:
URL: https://github.com/apache/hadoop/pull/4155#discussion_r874957305


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java:
##
@@ -316,18 +317,22 @@ FileChecksum makeCompositeCrcResult() throws IOException {
 "Added blockCrc 0x{} for block index {} of size {}",
 Integer.toString(blockCrc, 16), i, block.getBlockSize());
   }
-
-  // NB: In some cases the located blocks have their block size adjusted
-  // explicitly based on the requested length, but not all cases;
-  // these numbers may or may not reflect actual sizes on disk.
-  long reportedLastBlockSize =
-  blockLocations.getLastLocatedBlock().getBlockSize();
-  long consumedLastBlockLength = reportedLastBlockSize;
-  if (length - sumBlockLengths < reportedLastBlockSize) {
-LOG.warn(
-"Last block length {} is less than reportedLastBlockSize {}",
-length - sumBlockLengths, reportedLastBlockSize);
-consumedLastBlockLength = length - sumBlockLengths;
+  LocatedBlock nextBlock = locatedBlocks.get(i);
+  long consumedLastBlockLength = Math.min(length - sumBlockLengths,
+  nextBlock.getBlockSize());
+  LocatedBlock lastBlock = blockLocations.getLastLocatedBlock();
+  if (nextBlock.equals(lastBlock)) {

Review Comment:
   Whether it is a replicated file or a striped file, for each block we obtain 
a 4-byte composite CRC, and the actual data length corresponding to that CRC is 
very important, because line 341 will use it to compute the composite CRC.
   
   Suppose a file has 4 blocks, numbered block1, block2, block3 and block4, 
with sizes 10MB, 10MB, 10MB and 7MB respectively, and I call 
getFileChecksum(mockFile, 29MB). The correct consumedLastBlockLength in line 
336 should be 9MB, but the result of the current logic is 7MB, which comes 
from the size of the file's last block.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4319: Move to JAVA 11.

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4319:
URL: https://github.com/apache/hadoop/pull/4319#issuecomment-1128995395

   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4319/2/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on a diff in pull request #4155: HDFS-16533. COMPOSITE_CRC failed between replicated file and striped …

2022-05-17 Thread GitBox


ZanderXu commented on code in PR #4155:
URL: https://github.com/apache/hadoop/pull/4155#discussion_r874941317


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java:
##
@@ -316,18 +317,22 @@ FileChecksum makeCompositeCrcResult() throws IOException {
 "Added blockCrc 0x{} for block index {} of size {}",
 Integer.toString(blockCrc, 16), i, block.getBlockSize());
   }
-
-  // NB: In some cases the located blocks have their block size adjusted
-  // explicitly based on the requested length, but not all cases;
-  // these numbers may or may not reflect actual sizes on disk.
-  long reportedLastBlockSize =
-  blockLocations.getLastLocatedBlock().getBlockSize();
-  long consumedLastBlockLength = reportedLastBlockSize;
-  if (length - sumBlockLengths < reportedLastBlockSize) {
-LOG.warn(
-"Last block length {} is less than reportedLastBlockSize {}",
-length - sumBlockLengths, reportedLastBlockSize);
-consumedLastBlockLength = length - sumBlockLengths;
+  LocatedBlock nextBlock = locatedBlocks.get(i);
+  long consumedLastBlockLength = Math.min(length - sumBlockLengths,
+  nextBlock.getBlockSize());
+  LocatedBlock lastBlock = blockLocations.getLastLocatedBlock();
+  if (nextBlock.equals(lastBlock)) {

Review Comment:
   Thanks @jojochuang for your comment. 
   First, I will explain the goal of the UT:
   
   1. Use the same content to create a replicated file and a striped file.
   2. Set the conf to use COMPOSITE_CRC.
   3. Expect the same checksum result for any length from the replicated file 
and the striped file via getFileChecksum.
   
   Second, I will explain the root cause:
   
   1. blockLocations in line 104 contains a list of blocks and a lastLocatedBlock.
   2. The last block in blocks may not be the same as lastLocatedBlock when 
the input length is less than the file length.
   3. So we cannot always compare with the lastLocatedBlock to get the 
consumed length in line 336.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=771415&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771415
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 17/May/22 15:06
Start Date: 17/May/22 15:06
Worklog Time Spent: 10m 
  Work Description: ahmarsuhail commented on PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#issuecomment-1128987587

   Just FYI, `testRandomReadLargeFile` takes around 23 seconds to finish and 
`testReadLargeFileFully` takes 26 seconds. Wondering if that's too long and 
whether we should consider using a smaller file and updating the block size 
validation. 




Issue Time Tracking
---

Worklog Id: (was: 771415)
Time Spent: 1h 50m  (was: 1h 40m)

> tests in ITestS3AInputStreamPerformance are failing 
> 
>
> Key: HADOOP-18231
> URL: https://issues.apache.org/jira/browse/HADOOP-18231
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects stream to be opened 4 times (once for 
> every random read), but prefetching will only open twice. 
> testDecompressionSequential128K - expects stream to be opened once, but 
> prefetching will open once for each block the file has. landsat file used in 
> the test has size 42MB, prefetching block size = 8MB, expected open count is 
> 6.
>  testReadWithNormalPolicy - same as above. 
> testRandomIONormalPolicy - executes random IO, but with a normal policy. 
> S3AInputStream will abort the stream and change the policy, prefetching 
> handles random IO by caching blocks so doesn't do any of that. 
> testRandomReadOverBuffer - multiple assertions failing here, also depends a 
> lot on readAhead values, not very relevant for prefetching
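
The expected open count above follows from ceiling division of the file size by 
the prefetch block size; a quick sketch of the arithmetic:
{code:java}
// Quick check of the "expected open count is 6" claim above.
long fileSize = 42L * 1024 * 1024;   // landsat scene_list.gz, ~42MB
long blockSize = 8L * 1024 * 1024;   // prefetching block size
long expectedOpens = (fileSize + blockSize - 1) / blockSize;  // ceil(42/8) = 6
{code}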



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ahmarsuhail commented on pull request #4305: HADOOP-18231. Adds in new test for S3PrefetchingInputStream

2022-05-17 Thread GitBox


ahmarsuhail commented on PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#issuecomment-1128987587

   Just FYI, `testRandomReadLargeFile` takes around 23 seconds to finish and 
`testReadLargeFileFully` takes 26 seconds. Wondering if that's too long and 
whether we should consider using a smaller file and updating the block size 
validation. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4311: HDFS-13522: IPC changes to support observer reads through routers.

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4311:
URL: https://github.com/apache/hadoop/pull/4311#issuecomment-1128981618

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 46s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  6s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   7m 35s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 50s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/3/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   7m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  12m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 32s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 45s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 42s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/3/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 199 unchanged - 1 fixed = 201 total (was 
200)  |
   | +1 :green_heart: |  mvnsite  |   7m 24s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  javadoc  |   1m 30s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/3/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   7m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  13m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 32s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 15s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 252m 28s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  23m 33s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 559m 24s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4311/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4311 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux a8fa86f61127 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 

[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=771405&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771405
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 17/May/22 14:56
Start Date: 17/May/22 14:56
Worklog Time Spent: 10m 
  Work Description: monthonk commented on PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#issuecomment-1128975914

   Thanks for clarifying @ahmarsuhail; we probably have to test with this 
big file for now.




Issue Time Tracking
---

Worklog Id: (was: 771405)
Time Spent: 1h 40m  (was: 1.5h)

> tests in ITestS3AInputStreamPerformance are failing 
> 
>
> Key: HADOOP-18231
> URL: https://issues.apache.org/jira/browse/HADOOP-18231
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects stream to be opened 4 times (once for 
> every random read), but prefetching will only open twice. 
> testDecompressionSequential128K - expects stream to be opened once, but 
> prefetching will open once for each block the file has. landsat file used in 
> the test has size 42MB, prefetching block size = 8MB, expected open count is 
> 6.
>  testReadWithNormalPolicy - same as above. 
> testRandomIONormalPolicy - executes random IO, but with a normal policy. 
> S3AInputStream will abort the stream and change the policy, prefetching 
> handles random IO by caching blocks so doesn't do any of that. 
> testRandomReadOverBuffer - multiple assertions failing here, also depends a 
> lot on readAhead values, not very relevant for prefetching



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] monthonk commented on pull request #4305: HADOOP-18231. Adds in new test for S3PrefetchingInputStream

2022-05-17 Thread GitBox


monthonk commented on PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#issuecomment-1128975914

   Thanks for clarifying @ahmarsuhail; we probably have to test with this 
big file for now.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18236) Remove duplicate locks in NetworkTopology

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18236?focusedWorklogId=771383&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771383
 ]

ASF GitHub Bot logged work on HADOOP-18236:
---

Author: ASF GitHub Bot
Created on: 17/May/22 14:23
Start Date: 17/May/22 14:23
Worklog Time Spent: 10m 
  Work Description: ZanderXu opened a new pull request, #4320:
URL: https://github.com/apache/hadoop/pull/4320

Remove duplicate locks in NetworkTopology




Issue Time Tracking
---

Worklog Id: (was: 771383)
Remaining Estimate: 0h
Time Spent: 10m

> Remove duplicate locks in NetworkTopology
> -
>
> Key: HADOOP-18236
> URL: https://issues.apache.org/jira/browse/HADOOP-18236
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While reading Hadoop's NetworkTopology.java, I suspect there is a 
> duplicate lock.
> In chooseRandom (line 532), the code is:
> {code:java}
> final int availableNodes;
> if (excludedScope == null) {
>   availableNodes = countNumOfAvailableNodes(scope, excludedNodes);
> } else {
>   netlock.readLock().lock();
>   try {
> availableNodes = countNumOfAvailableNodes(scope, excludedNodes) -
> countNumOfAvailableNodes(excludedScope, excludedNodes);
>   } finally {
> netlock.readLock().unlock();
>   }
> } {code}
> All the places that call `chooseRandom` already hold the global read lock, 
> so the internal read lock is duplicated.
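
A hedged sketch of the simplification this implies, assuming every caller 
already holds netlock's read lock (a sketch, not the actual patch):
{code:java}
// Sketch only: with the caller guaranteed to hold netlock.readLock(),
// the inner lock/unlock pair can be dropped.
final int availableNodes;
if (excludedScope == null) {
  availableNodes = countNumOfAvailableNodes(scope, excludedNodes);
} else {
  availableNodes = countNumOfAvailableNodes(scope, excludedNodes) -
      countNumOfAvailableNodes(excludedScope, excludedNodes);
}
{code}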



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18236) Remove duplicate locks in NetworkTopology

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18236:

Labels: pull-request-available  (was: )

> Remove duplicate locks in NetworkTopology
> -
>
> Key: HADOOP-18236
> URL: https://issues.apache.org/jira/browse/HADOOP-18236
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While reading Hadoop's NetworkTopology.java, I suspect there is a 
> duplicate lock.
> In chooseRandom (line 532), the code is:
> {code:java}
> final int availableNodes;
> if (excludedScope == null) {
>   availableNodes = countNumOfAvailableNodes(scope, excludedNodes);
> } else {
>   netlock.readLock().lock();
>   try {
> availableNodes = countNumOfAvailableNodes(scope, excludedNodes) -
> countNumOfAvailableNodes(excludedScope, excludedNodes);
>   } finally {
> netlock.readLock().unlock();
>   }
> } {code}
> All the places that call `chooseRandom` already hold the global read lock, 
> so the internal read lock is duplicated.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=771382&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771382
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 17/May/22 14:22
Start Date: 17/May/22 14:22
Worklog Time Spent: 10m 
  Work Description: ahmarsuhail commented on code in PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#discussion_r874874814


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static 
org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {
+
+  public ITestS3PrefetchingInputStream() {
+super(true);
+  }
+
+  private static final int _1K = 1024;
+  // Path for file which should have length > block size so 
S3CachingInputStream is used
+  private Path largeFile;
+  private FileSystem fs;
+  private int numBlocks;
+  private int blockSize;
+  private long largeFileSize;
+  // Size should be < block size so S3InMemoryInputStream is used
+  private static final int smallFileSize = _1K * 16;
+
+  @Override
+  public void setup() throws Exception {
+super.setup();
+
+Configuration conf = getConfiguration();
+conf.setBoolean(PREFETCH_ENABLED_KEY, true);
+  }
+
+  private void openFS() throws IOException {
+Configuration conf = getConfiguration();
+
+largeFile = new Path(DEFAULT_CSVTEST_FILE);
+blockSize = conf.getInt(PREFETCH_BLOCK_SIZE_KEY, 
PREFETCH_BLOCK_DEFAULT_SIZE);
+fs = largeFile.getFileSystem(getConfiguration());
+FileStatus fileStatus = fs.getFileStatus(largeFile);
+largeFileSize = fileStatus.getLen();

Review Comment:
   It is currently using the landsat file `landsat-pds/scene_list.gz` which has 
a size of 42MB





Issue Time Tracking
---

Worklog Id: (was: 771382)
Time Spent: 1.5h  (was: 1h 20m)

> tests in ITestS3AInputStreamPerformance are failing 
> 
>
> Key: HADOOP-18231
> URL: https://issues.apache.org/jira/browse/HADOOP-18231
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects stream to be opened 4 times (once for 
> every random read), but prefetching will only open twice. 
> testDecompressionSequential128K - expects stream to be opened once, but 
> prefetching will open once for each block the file has. landsat file used in 
> the test has size 42MB, prefetching block size = 8MB, expected open count is 
> 6.
>  testReadWithNormalPolicy - same as above. 
> testRandomIONormalPolicy - executes random IO, but with a normal policy. 
> S3AInputStream will abort the stream and change the policy, prefetching 
> handles random IO by caching blocks so doesn't do any of that. 
> testRandomReadOverBuffer - 

[GitHub] [hadoop] ZanderXu opened a new pull request, #4320: HADOOP-18236. Remove duplicate locks in NetworkTopology

2022-05-17 Thread GitBox


ZanderXu opened a new pull request, #4320:
URL: https://github.com/apache/hadoop/pull/4320

Remove duplicate locks in NetworkTopology


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ahmarsuhail commented on a diff in pull request #4305: HADOOP-18231. Adds in new test for S3PrefetchingInputStream

2022-05-17 Thread GitBox


ahmarsuhail commented on code in PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#discussion_r874874814


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static 
org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {
+
+  public ITestS3PrefetchingInputStream() {
+super(true);
+  }
+
+  private static final int _1K = 1024;
+  // Path for file which should have length > block size so 
S3CachingInputStream is used
+  private Path largeFile;
+  private FileSystem fs;
+  private int numBlocks;
+  private int blockSize;
+  private long largeFileSize;
+  // Size should be < block size so S3InMemoryInputStream is used
+  private static final int smallFileSize = _1K * 16;
+
+  @Override
+  public void setup() throws Exception {
+super.setup();
+
+Configuration conf = getConfiguration();
+conf.setBoolean(PREFETCH_ENABLED_KEY, true);
+  }
+
+  private void openFS() throws IOException {
+Configuration conf = getConfiguration();
+
+largeFile = new Path(DEFAULT_CSVTEST_FILE);
+blockSize = conf.getInt(PREFETCH_BLOCK_SIZE_KEY, 
PREFETCH_BLOCK_DEFAULT_SIZE);
+fs = largeFile.getFileSystem(getConfiguration());
+FileStatus fileStatus = fs.getFileStatus(largeFile);
+largeFileSize = fileStatus.getLen();

Review Comment:
   It is currently using the landsat file `landsat-pds/scene_list.gz` which has 
a size of 42MB



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=771381&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771381
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 17/May/22 14:21
Start Date: 17/May/22 14:21
Worklog Time Spent: 10m 
  Work Description: ahmarsuhail commented on PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#issuecomment-1128933678

   Thanks @monthonk. As discussed, instead of using 
`landsat-pds/scene_list.gz`, I tried creating a smaller file (size 16K) and 
setting the block size to 4K, which would make the tests faster. But currently, 
if you try to set the block size below the default size (8M), validation fails 
[here](https://github.com/apache/hadoop/blob/feature-HADOOP-18028-s3a-prefetch/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L487).
The minimum allowed block size currently is `PREFETCH_BLOCK_DEFAULT_SIZE`; I'm 
not sure if this is something we want to update. 
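   
   For reference, a sketch of the test configuration being described (constants 
from the PR; the 4K value is what trips the validation today):
   ```java
   // Hypothetical test setup: a 4K prefetch block size for a 16K file,
   // currently rejected because the minimum is PREFETCH_BLOCK_DEFAULT_SIZE (8M).
   Configuration conf = getConfiguration();
   conf.setBoolean(PREFETCH_ENABLED_KEY, true);
   conf.setInt(PREFETCH_BLOCK_SIZE_KEY, 4 * 1024);  // fails validation today
   ```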




Issue Time Tracking
---

Worklog Id: (was: 771381)
Time Spent: 1h 20m  (was: 1h 10m)

> tests in ITestS3AInputStreamPerformance are failing 
> 
>
> Key: HADOOP-18231
> URL: https://issues.apache.org/jira/browse/HADOOP-18231
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects stream to be opened 4 times (once for 
> every random read), but prefetching will only open twice. 
> testDecompressionSequential128K - expects stream to be opened once, but 
> prefetching will open once for each block the file has. landsat file used in 
> the test has size 42MB, prefetching block size = 8MB, expected open count is 
> 6.
>  testReadWithNormalPolicy - same as above. 
> testRandomIONormalPolicy - executes random IO, but with a normal policy. 
> S3AInputStream will abort the stream and change the policy, prefetching 
> handles random IO by caching blocks so doesn't do any of that. 
> testRandomReadOverBuffer - multiple assertions failing here, also depends a 
> lot on readAhead values, not very relevant for prefetching



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ahmarsuhail commented on pull request #4305: HADOOP-18231. Adds in new test for S3PrefetchingInputStream

2022-05-17 Thread GitBox


ahmarsuhail commented on PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#issuecomment-1128933678

   Thanks @monthonk. As discussed, instead of using 
`landsat-pds/scene_list.gz`, I tried creating a smaller file (size 16K) and 
setting the block size to 4K, which would make the tests faster. But currently, 
if you try to set the block size below the default size (8M), validation fails 
[here](https://github.com/apache/hadoop/blob/feature-HADOOP-18028-s3a-prefetch/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L487).
The minimum allowed block size currently is `PREFETCH_BLOCK_DEFAULT_SIZE`; I'm 
not sure if this is something we want to update. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=771379&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771379
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 17/May/22 14:16
Start Date: 17/May/22 14:16
Worklog Time Spent: 10m 
  Work Description: ahmarsuhail commented on code in PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#discussion_r874874814


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static 
org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {
+
+  public ITestS3PrefetchingInputStream() {
+super(true);
+  }
+
+  private static final int _1K = 1024;
+  // Path for file which should have length > block size so 
S3CachingInputStream is used
+  private Path largeFile;
+  private FileSystem fs;
+  private int numBlocks;
+  private int blockSize;
+  private long largeFileSize;
+  // Size should be < block size so S3InMemoryInputStream is used
+  private static final int smallFileSize = _1K * 16;
+
+  @Override
+  public void setup() throws Exception {
+super.setup();
+
+Configuration conf = getConfiguration();
+conf.setBoolean(PREFETCH_ENABLED_KEY, true);
+  }
+
+  private void openFS() throws IOException {
+Configuration conf = getConfiguration();
+
+largeFile = new Path(DEFAULT_CSVTEST_FILE);
+blockSize = conf.getInt(PREFETCH_BLOCK_SIZE_KEY, 
PREFETCH_BLOCK_DEFAULT_SIZE);
+fs = largeFile.getFileSystem(getConfiguration());
+FileStatus fileStatus = fs.getFileStatus(largeFile);
+largeFileSize = fileStatus.getLen();

Review Comment:
   It is currently using the landsat file `landsat-pds/scene_list.gz`





Issue Time Tracking
---

Worklog Id: (was: 771379)
Time Spent: 1h 10m  (was: 1h)

> tests in ITestS3AInputStreamPerformance are failing 
> 
>
> Key: HADOOP-18231
> URL: https://issues.apache.org/jira/browse/HADOOP-18231
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects stream to be opened 4 times (once for 
> every random read), but prefetching will only open twice. 
> testDecompressionSequential128K - expects stream to be opened once, but 
> prefetching will open once for each block the file has. landsat file used in 
> the test has size 42MB, prefetching block size = 8MB, expected open count is 
> 6.
>  testReadWithNormalPolicy - same as above. 
> testRandomIONormalPolicy - executes random IO, but with a normal policy. 
> S3AInputStream will abort the stream and change the policy, prefetching 
> handles random IO by caching blocks so doesn't do any of that. 
> testRandomReadOverBuffer - multiple assertions failing 

[GitHub] [hadoop] ahmarsuhail commented on a diff in pull request #4305: HADOOP-18231. Adds in new test for S3PrefetchingInputStream

2022-05-17 Thread GitBox


ahmarsuhail commented on code in PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#discussion_r874874814


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {
+
+  public ITestS3PrefetchingInputStream() {
+super(true);
+  }
+
+  private static final int _1K = 1024;
+  // Path for file which should have length > block size so S3CachingInputStream is used
+  private Path largeFile;
+  private FileSystem fs;
+  private int numBlocks;
+  private int blockSize;
+  private long largeFileSize;
+  // Size should be < block size so S3InMemoryInputStream is used
+  private static final int smallFileSize = _1K * 16;
+
+  @Override
+  public void setup() throws Exception {
+super.setup();
+
+Configuration conf = getConfiguration();
+conf.setBoolean(PREFETCH_ENABLED_KEY, true);
+  }
+
+  private void openFS() throws IOException {
+Configuration conf = getConfiguration();
+
+largeFile = new Path(DEFAULT_CSVTEST_FILE);
+blockSize = conf.getInt(PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_DEFAULT_SIZE);
+fs = largeFile.getFileSystem(getConfiguration());
+FileStatus fileStatus = fs.getFileStatus(largeFile);
+largeFileSize = fileStatus.getLen();

Review Comment:
   It is currently using the landsat file `landsat-pds/scene_list.gz`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18239) Update guava to 30.1.1-jre

2022-05-17 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina resolved HADOOP-18239.
-
Resolution: Duplicate

> Update guava to 30.1.1-jre
> --
>
> Key: HADOOP-18239
> URL: https://issues.apache.org/jira/browse/HADOOP-18239
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Major
>
> Update guava to 30.1.1-jre



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18239) Update guava to 30.1.1-jre

2022-05-17 Thread Hemanth Boyina (Jira)
Hemanth Boyina created HADOOP-18239:
---

 Summary: Update guava to 30.1.1-jre
 Key: HADOOP-18239
 URL: https://issues.apache.org/jira/browse/HADOOP-18239
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Hemanth Boyina
Assignee: Hemanth Boyina


Update guava to 30.1.1-jre



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] RuinanGu commented on pull request #4252: HDFS-16566 Erasure Coding: Recovery may causes excess replicas when busy DN exsits

2022-05-17 Thread GitBox


RuinanGu commented on PR #4252:
URL: https://github.com/apache/hadoop/pull/4252#issuecomment-1128923122

   @jojochuang Could you please take a look again? All the issues you mentioned 
have been fixed, and the previously failing UT now passes on my machine.
   https://user-images.githubusercontent.com/57645247/168831626-93f17909-1af7-4465-ac5b-5820f3a31163.png
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=771373&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771373
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 17/May/22 14:10
Start Date: 17/May/22 14:10
Worklog Time Spent: 10m 
  Work Description: ahmarsuhail commented on code in PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#discussion_r874867835


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {
+
+  public ITestS3PrefetchingInputStream() {
+super(true);
+  }
+
+  private static final int _1K = 1024;
+  // Path for file which should have length > block size so S3CachingInputStream is used
+  private Path largeFile;
+  private FileSystem fs;
+  private int numBlocks;
+  private int blockSize;
+  private long largeFileSize;
+  // Size should be < block size so S3InMemoryInputStream is used
+  private static final int smallFileSize = _1K * 16;
+
+  @Override
+  public void setup() throws Exception {
+super.setup();
+
+Configuration conf = getConfiguration();
+conf.setBoolean(PREFETCH_ENABLED_KEY, true);
+  }
+
+  private void openFS() throws IOException {
+Configuration conf = getConfiguration();
+
+largeFile = new Path(DEFAULT_CSVTEST_FILE);
+blockSize = conf.getInt(PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_DEFAULT_SIZE);
+fs = largeFile.getFileSystem(getConfiguration());
+FileStatus fileStatus = fs.getFileStatus(largeFile);
+largeFileSize = fileStatus.getLen();
+numBlocks = (largeFileSize == 0) ?
+0 :
+((int) (largeFileSize / blockSize)) + (largeFileSize % blockSize > 0 ? 1 : 0);

Review Comment:
   This depends on the size of the file being used (landsat-pds/scene_list.gz), so it needs to be calculated.
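   As a side note, the block count here is just the ceiling of largeFileSize / blockSize; below is a small self-contained sketch (class and method names invented) that reproduces the arithmetic, including the 42 MB / 8 MB = 6 expected open count quoted in this issue:
   ```java
   public class BlockCountSketch {
     // Ceiling division, equivalent to the ternary expression in the diff above.
     static int numBlocks(long fileSize, long blockSize) {
       return fileSize == 0 ? 0 : (int) ((fileSize + blockSize - 1) / blockSize);
     }

     public static void main(String[] args) {
       // A 42 MB file with 8 MB prefetch blocks needs 6 blocks (5 full + 1 partial).
       System.out.println(numBlocks(42L * 1024 * 1024, 8L * 1024 * 1024)); // 6
     }
   }
   ```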





Issue Time Tracking
---

Worklog Id: (was: 771373)
Time Spent: 1h  (was: 50m)

> tests in ITestS3AInputStreamPerformance are failing 
> 
>
> Key: HADOOP-18231
> URL: https://issues.apache.org/jira/browse/HADOOP-18231
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects stream to be opened 4 times (once for 
> every random read), but prefetching will only open twice. 
> testDecompressionSequential128K - expects stream to be opened once, but 
> prefetching will open once for each block the file has. landsat file used in 
> the test has size 42MB, prefetching block size = 8MB, expected open count is 
> 6.
>  testReadWithNormalPolicy - same as above. 
> testRandomIONormalPolicy - executes random IO, but with a normal policy. 
> S3AInputStream will abort the stream and change the policy, prefetching 
> handles random IO by caching blocks so doesn't do any of that. 
> testRandomReadOverBuffer - multiple assertions failing 

[GitHub] [hadoop] ahmarsuhail commented on a diff in pull request #4305: HADOOP-18231. Adds in new test for S3PrefetchingInputStream

2022-05-17 Thread GitBox


ahmarsuhail commented on code in PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#discussion_r874867835


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {
+
+  public ITestS3PrefetchingInputStream() {
+super(true);
+  }
+
+  private static final int _1K = 1024;
+  // Path for file which should have length > block size so S3CachingInputStream is used
+  private Path largeFile;
+  private FileSystem fs;
+  private int numBlocks;
+  private int blockSize;
+  private long largeFileSize;
+  // Size should be < block size so S3InMemoryInputStream is used
+  private static final int smallFileSize = _1K * 16;
+
+  @Override
+  public void setup() throws Exception {
+super.setup();
+
+Configuration conf = getConfiguration();
+conf.setBoolean(PREFETCH_ENABLED_KEY, true);
+  }
+
+  private void openFS() throws IOException {
+Configuration conf = getConfiguration();
+
+largeFile = new Path(DEFAULT_CSVTEST_FILE);
+blockSize = conf.getInt(PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_DEFAULT_SIZE);
+fs = largeFile.getFileSystem(getConfiguration());
+FileStatus fileStatus = fs.getFileStatus(largeFile);
+largeFileSize = fileStatus.getLen();
+numBlocks = (largeFileSize == 0) ?
+0 :
+((int) (largeFileSize / blockSize)) + (largeFileSize % blockSize > 0 ? 1 : 0);

Review Comment:
   This depends on the size of the file being used (landsat-pds/scene_list.gz), so it needs to be calculated.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] RuinanGu commented on a diff in pull request #4252: HDFS-16566 Erasure Coding: Recovery may causes excess replicas when busy DN exsits

2022-05-17 Thread GitBox


RuinanGu commented on code in PR #4252:
URL: https://github.com/apache/hadoop/pull/4252#discussion_r863162303


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedReconstructionInfo.java:
##
@@ -41,26 +41,28 @@ public class StripedReconstructionInfo {
   private final DatanodeInfo[] targets;
   private final StorageType[] targetStorageTypes;
   private final String[] targetStorageIds;
+  private final byte[] excludeReconstructedIndices;
 
   public StripedReconstructionInfo(ExtendedBlock blockGroup,
   ErasureCodingPolicy ecPolicy, byte[] liveIndices, DatanodeInfo[] sources,
   byte[] targetIndices) {
 this(blockGroup, ecPolicy, liveIndices, sources, targetIndices, null,
-null, null);
+null, null, new byte[0]);
   }
 
   StripedReconstructionInfo(ExtendedBlock blockGroup,
   ErasureCodingPolicy ecPolicy, byte[] liveIndices, DatanodeInfo[] sources,
   DatanodeInfo[] targets, StorageType[] targetStorageTypes,
-  String[] targetStorageIds) {
+  String[] targetStorageIds, byte[] excludeReconstructedIndices) {

Review Comment:
   When the DN receives an EC reconstruction command, the EC task is processed 
in ErasureCodingWorker.processErasureCodingTasks(). In that function, it calls 
the constructor you mentioned.
   (Line 126, ErasureCodingWorker)
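   For anyone skimming the thread, a stand-alone sketch of the pattern this hunk applies (names simplified, not the real HDFS types): the legacy constructor delegates to the extended one with an empty exclusion array, so existing call sites keep their behavior while ErasureCodingWorker can pass real excluded indices.
   ```java
   class ReconstructionInfoSketch {
     private final byte[] excludeReconstructedIndices;

     // Legacy signature: defaults to "exclude nothing".
     ReconstructionInfoSketch() {
       this(new byte[0]);
     }

     ReconstructionInfoSketch(byte[] excludeReconstructedIndices) {
       this.excludeReconstructedIndices = excludeReconstructedIndices;
     }
   }
   ```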



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18229) Fix Hadoop Common Java Doc Error

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18229?focusedWorklogId=771364&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771364
 ]

ASF GitHub Bot logged work on HADOOP-18229:
---

Author: ASF GitHub Bot
Created on: 17/May/22 13:56
Start Date: 17/May/22 13:56
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4292:
URL: https://github.com/apache/hadoop/pull/4292#issuecomment-1128904918

   Hi @steveloughran @virajjasani @goiri, I have fixed the hadoop-common javadoc 
compilation problems on JDK 11; please help review the code.
   
   My changes are as follows:
   
   1. Modify the javadoc configuration of the pom file to skip the generated protobuf sources:
   ```
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-javadoc-plugin</artifactId>
     <configuration>
       <sourceFileExcludes>
         <sourceFileExclude>**/FSProtos.java</sourceFileExclude>
       </sourceFileExcludes>
       <excludePackageNames>*.proto:*.tracing:*.protobuf</excludePackageNames>
     </configuration>
   </plugin>
   ```
   
   2. Fix the following javadoc warnings and errors:
   ```
   warning: no @param for in
   warning: no @return
   warning: no @throws for java.io.IOException
   warning: no description for @throws
   error: exception not thrown: java.io.IOException
   error: unknown tag: username
   error: bad use of '>'
   etc...
   ```
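   For illustration (the method below is invented, not from the patch), most of these fixes have this shape: add the missing @param/@return/@throws tags, only declare @throws for exceptions actually thrown, and wrap characters such as > or regex groups like ^/user/(?<username>\w+) in {@literal ...} so javadoc stops parsing them as HTML:
   ```java
   /**
    * Resolves a mount link such as {@literal ^/user/(?<username>\w+)}.
    *
    * @param path input path to resolve.
    * @return the resolved target path.
    * @throws IOException if the mount table cannot be read.
    */
   Path resolveLink(String path) throws IOException;
   ```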
   
   I hope these fixes help make the Hadoop project more complete. Thanks!
   
   




Issue Time Tracking
---

Worklog Id: (was: 771364)
Time Spent: 12.5h  (was: 12h 20m)

> Fix Hadoop Common Java Doc Error
> 
>
> Key: HADOOP-18229
> URL: https://issues.apache.org/jira/browse/HADOOP-18229
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 12.5h
>  Remaining Estimate: 0h
>
> I found that when hadoop-multibranch compiled PR-4266, some errors would pop 
> up, I tried to solve it
> The wrong compilation information is as follows, I try to fix the Error 
> information
> {code:java}
> [ERROR] 
> /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-4266/ubuntu-focal/src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java:432:
>  error: exception not thrown: java.io.IOException
> [ERROR]* @throws IOException
> [ERROR]  ^
> [ERROR] 
> /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-4266/ubuntu-focal/src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java:885:
>  error: unknown tag: username
> [ERROR]*  E.g. link: ^/user/(?<username>\\w+) => 
> s3://$user.apache.com/_${user}
> [ERROR]   ^
> [ERROR] 
> /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-4266/ubuntu-focal/src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java:885:
>  error: bad use of '>'
> [ERROR]*  E.g. link: ^/user/(?<username>\\w+) => 
> s3://$user.apache.com/_${user}
> [ERROR]^
> [ERROR] 
> /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-4266/ubuntu-focal/src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java:910:
>  error: unknown tag: username
> [ERROR]* 
> .linkRegex.replaceresolveddstpath:_:-#.^/user/(?<username>\w+)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4292: HADOOP-18229. Fix Hadoop-Common JavaDoc Error

2022-05-17 Thread GitBox


slfan1989 commented on PR #4292:
URL: https://github.com/apache/hadoop/pull/4292#issuecomment-1128904918

   Hi @steveloughran @virajjasani @goiri, I have fixed the hadoop-common javadoc 
compilation problems on JDK 11; please help review the code.
   
   My changes are as follows:
   
   1. Modify the javadoc configuration of the pom file to skip the generated protobuf sources:
   ```
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-javadoc-plugin</artifactId>
     <configuration>
       <sourceFileExcludes>
         <sourceFileExclude>**/FSProtos.java</sourceFileExclude>
       </sourceFileExcludes>
       <excludePackageNames>*.proto:*.tracing:*.protobuf</excludePackageNames>
     </configuration>
   </plugin>
   ```
   
   2. Fix the following javadoc warnings and errors:
   ```
   warning: no @param for in
   warning: no @return
   warning: no @throws for java.io.IOException
   warning: no description for @throws
   error: exception not thrown: java.io.IOException
   error: unknown tag: username
   error: bad use of '>'
   etc...
   ```
   
   I hope these fixes help make the Hadoop project more complete. Thanks!
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4317: YARN-10465. Support getNodeToLabels, getLabelsToNodes, getClusterNodeLabels API's for Federation

2022-05-17 Thread GitBox


slfan1989 commented on PR #4317:
URL: https://github.com/apache/hadoop/pull/4317#issuecomment-1128887593

   Hi @goiri, YARN-10465 (support the getNodeToLabels, getLabelsToNodes, and 
getClusterNodeLabels APIs for Federation) has been implemented in this PR; 
please help review the code, thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4267: HADOOP-18224. Upgrade maven compiler plugin to 3.10.1

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4267:
URL: https://github.com/apache/hadoop/pull/4267#issuecomment-1128873793

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 20s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 12s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  19m 32s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 43s | 
[/branch-javadoc-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4267/13/artifact/out/branch-javadoc-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   8m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +0 :ok: |  spotbugs  |   0m 29s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 30s |  |  
branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file 
(spotbugsXml.xml)  |
   | -1 :x: |  shadedclient  |  54m 37s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 48s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  29m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 22s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | -1 :x: |  javac  |  24m 22s | 
[/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4267/13/artifact/out/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 1091 new + 1816 unchanged - 
0 fixed = 2907 total (was 1816)  |
   | +1 :green_heart: |  compile  |  21m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  javac  |  21m  6s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4267/13/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 1013 new + 1692 
unchanged - 0 fixed = 2705 total (was 1692)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 38s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  19m 42s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  4s |  |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  javadoc  |   1m 36s | 
[/patch-javadoc-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4267/13/artifact/out/patch-javadoc-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   8m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +0 :ok: |  spotbugs  |   0m 28s |  |  hadoop-project has no data from 
spotbugs  |
   | +0 :ok: |  spotbugs  |   0m 35s |  |  
hadoop-client-modules/hadoop-client-minicluster has no data from spotbugs  |
   | +1 :green_heart: |  shadedclient  |  57m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 819m 34s | 

[jira] [Work logged] (HADOOP-18224) Upgrade maven compiler plugin to 3.10.1

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18224?focusedWorklogId=771342&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771342
 ]

ASF GitHub Bot logged work on HADOOP-18224:
---

Author: ASF GitHub Bot
Created on: 17/May/22 13:30
Start Date: 17/May/22 13:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4267:
URL: https://github.com/apache/hadoop/pull/4267#issuecomment-1128873793

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 20s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 12s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  19m 32s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 43s | 
[/branch-javadoc-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4267/13/artifact/out/branch-javadoc-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   8m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +0 :ok: |  spotbugs  |   0m 29s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 30s |  |  
branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file 
(spotbugsXml.xml)  |
   | -1 :x: |  shadedclient  |  54m 37s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 48s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  29m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 22s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | -1 :x: |  javac  |  24m 22s | 
[/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4267/13/artifact/out/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 1091 new + 1816 unchanged - 
0 fixed = 2907 total (was 1816)  |
   | +1 :green_heart: |  compile  |  21m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  javac  |  21m  6s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4267/13/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 1013 new + 1692 
unchanged - 0 fixed = 2705 total (was 1692)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 38s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  19m 42s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  4s |  |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  javadoc  |   1m 36s | 
[/patch-javadoc-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4267/13/artifact/out/patch-javadoc-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   8m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +0 :ok: |  

[GitHub] [hadoop] hadoop-yetus commented on pull request #4319: Move to JAVA 11.

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4319:
URL: https://github.com/apache/hadoop/pull/4319#issuecomment-1128817581

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  41m 18s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  hadolint  |   0m  1s |  |  hadolint was not available.  |
   | +0 :ok: |  shellcheck  |   0m  1s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 35s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  19m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 110m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 51s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/patch-mvninstall-hadoop-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4319/1/artifact/out/patch-mvninstall-hadoop-project.txt)
 |  hadoop-project in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 21s | 
[/patch-mvninstall-hadoop-common-project_hadoop-annotations.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4319/1/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-annotations.txt)
 |  hadoop-annotations in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4319/1/artifact/out/patch-mvninstall-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4319/1/artifact/out/patch-compile-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  javac  |   0m 22s | 
[/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4319/1/artifact/out/patch-compile-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -1 :x: |  mvnsite  |   0m 22s | 
[/patch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4319/1/artifact/out/patch-mvnsite-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  xml  |   0m  4s |  |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  javadoc  |   0m 20s | 
[/patch-javadoc-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4319/1/artifact/out/patch-javadoc-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  shadedclient  |   1m 42s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   0m 21s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4319/1/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 159m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4319/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4319 |
   | Optional Tests | dupname asflicense codespell hadolint shellcheck 
shelldocs mvnsite unit compile javac javadoc mvninstall shadedclient xml |
   | uname | Linux 956309d80e28 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 14a2be9a8eebfa2eef67d48ebd5f327ec8bb6e89 |
   | Default Java | Red Hat, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4319/1/testReport/ |
   | Max. process+thread count | 724 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-annotations . U: 
. |
   | Console output | 

[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=771301&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771301
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 17/May/22 12:38
Start Date: 17/May/22 12:38
Worklog Time Spent: 10m 
  Work Description: monthonk commented on code in PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#discussion_r874742506


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {

Review Comment:
   Could you add a description of the purpose of this test?



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {
+
+  public ITestS3PrefetchingInputStream() {
+super(true);
+  }
+
+  private static final int _1K = 1024;
+  // Path for file which should have length > block size so S3CachingInputStream is used
+  private Path largeFile;
+  private FileSystem fs;
+  private int numBlocks;
+  private int blockSize;
+  private long largeFileSize;
+  // Size should be < block size so S3InMemoryInputStream is used
+  private static final int smallFileSize = _1K * 16;

[GitHub] [hadoop] monthonk commented on a diff in pull request #4305: HADOOP-18231. Adds in new test for S3PrefetchingInputStream

2022-05-17 Thread GitBox


monthonk commented on code in PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#discussion_r874742506


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {

Review Comment:
   Could you add a description of the purpose of this test?



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {
+
+  public ITestS3PrefetchingInputStream() {
+super(true);
+  }
+
+  private static final int _1K = 1024;
+  // Path for file which should have length > block size so S3CachingInputStream is used
+  private Path largeFile;
+  private FileSystem fs;
+  private int numBlocks;
+  private int blockSize;
+  private long largeFileSize;
+  // Size should be < block size so S3InMemoryInputStream is used
+  private static final int smallFileSize = _1K * 16;
+
+  @Override
+  public void setup() throws Exception {
+super.setup();
+
+Configuration conf = getConfiguration();
+conf.setBoolean(PREFETCH_ENABLED_KEY, true);
+  }
+
+  private void openFS() throws IOException {
+Configuration conf = getConfiguration();
+
+largeFile = new Path(DEFAULT_CSVTEST_FILE);
+blockSize = conf.getInt(PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_DEFAULT_SIZE);

[jira] [Updated] (HADOOP-18146) ABFS: Add changes for expect hundred continue header with append requests

2022-05-17 Thread Anmol Asrani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anmol Asrani updated HADOOP-18146:
--
Description: 
 Heavy load from a Hadoop cluster can lead to high resource utilization at FE 
nodes. Investigations from the server side indicate payload buffering at 
Http.Sys as the cause. Payloads of requests that eventually fail due to 
throttling limits are also getting buffered, since buffering is triggered 
before the FE can start request processing.

Approach: the client sends the Append HTTP request with an Expect header, but 
holds back on payload transmission until the server replies with HTTP 100. We 
add this header to all append requests so as to reduce this buffering.
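
A minimal sketch of this handshake using plain java.net rather than the actual 
ABFS client (the endpoint URL and payload are placeholders, and how strictly 
the HTTP stack honors the handshake depends on the client implementation):
{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ExpectContinueSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder endpoint, not a real storage account.
    URL url = new URL("https://example.dfs.core.windows.net/fs/file?action=append");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setFixedLengthStreamingMode(4096);
    // Ask the server to vet the headers first; the body is held back until the
    // server answers with HTTP 100, so throttled requests fail before the
    // payload is ever transmitted (and buffered) on the server side.
    conn.setRequestProperty("Expect", "100-continue");
    try (OutputStream out = conn.getOutputStream()) {
      out.write(new byte[4096]); // placeholder payload
    }
    System.out.println(conn.getResponseCode());
  }
}
{code}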

 

> ABFS: Add changes for expect hundred continue header with append requests
> -
>
> Key: HADOOP-18146
> URL: https://issues.apache.org/jira/browse/HADOOP-18146
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.1
>Reporter: Anmol Asrani
>Assignee: Anmol Asrani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
>  Heavy load from a Hadoop cluster can lead to high resource utilization at FE 
> nodes. Investigations from the server side indicate payload buffering at 
> Http.Sys as the cause. Payloads of requests that eventually fail due to 
> throttling limits are also getting buffered, since buffering is triggered 
> before the FE can start request processing.
> Approach: the client sends the Append HTTP request with an Expect header, but 
> holds back on payload transmission until the server replies with HTTP 100. We 
> add this header to all append requests so as to reduce this buffering.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4319: Move to JAVA 11.

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4319:
URL: https://github.com/apache/hadoop/pull/4319#issuecomment-1128670567

   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4319/1/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18229) Fix Hadoop Common Java Doc Error

2022-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18229?focusedWorklogId=771225&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771225
 ]

ASF GitHub Bot logged work on HADOOP-18229:
---

Author: ASF GitHub Bot
Created on: 17/May/22 09:38
Start Date: 17/May/22 09:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4292:
URL: https://github.com/apache/hadoop/pull/4292#issuecomment-1128647689

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  9s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 30s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  21m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   2m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 14s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 51s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4292/54/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   2m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  26m 22s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 31s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 16s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 4150 unchanged 
- 162 fixed = 4150 total (was 4312)  |
   | +1 :green_heart: |  mvnsite  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 0 
unchanged - 106 fixed = 0 total (was 106)  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 36s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 228m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4292/54/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4292 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell xml spotbugs checkstyle |
   | uname | Linux 76f54f3f8883 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4292: HADOOP-18229. Fix Hadoop-Common JavaDoc Error

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4292:
URL: https://github.com/apache/hadoop/pull/4292#issuecomment-1128647689

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  9s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 30s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  21m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   2m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 14s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 51s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4292/54/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   2m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  26m 22s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 31s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 16s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 4150 unchanged 
- 162 fixed = 4150 total (was 4312)  |
   | +1 :green_heart: |  mvnsite  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 0 
unchanged - 106 fixed = 0 total (was 106)  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 36s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 228m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4292/54/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4292 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell xml spotbugs checkstyle |
   | uname | Linux 76f54f3f8883 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0b4146b40321baa3815f928489e1b017b387c13f |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 

[jira] [Commented] (HADOOP-18193) Support nested mount points in INodeTree

2022-05-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17538035#comment-17538035
 ] 

Ayush Saxena commented on HADOOP-18193:
---

Coming in late, sorry [~virajith]; I was occupied with some internal work and 
this kept slipping.


Just had a quick look at the PR (it has too many formatting changes as well, 
which makes it very tough to follow). Some questions on the design:
 * How do you manage permissions/ownership/ACLs in this setup? If /foo has one 
owner and set of permissions and /foo/bar has another, are we checking the 
relevant permissions on the parent of bar before operating on /foo/bar? Are 
the default ACLs on /foo honoured for the child paths? (Default ACL: an ACL 
entry applied to a directory's children that do not otherwise have their own 
ACL defined.) If not, this can impact some Hive use cases.
 * How does ContentSummary operate? Does it aggregate the data from all the 
child mounts as well?
 * What about quotas? Say I have a quota set on /foo and I have /foo/bar1 and 
/foo/bar2; do we honour the combined usage?
 * Similarly, are storage policies and the like propagated to child mounts?
 * From one of the tests I see that even if the target FS is the same, but the 
paths land in different mounts, we disallow the move. Why? I think that should 
be allowed; RBF allows it as well. There are client-side logics that match the 
target FS and decide whether to rename or copy (see the sketch just after this 
list); since the target FS is the same, they would call rename rather than 
copy, and that rename is going to fail. Did I catch this wrong? Something 
similar exists in MoveTask for Hive.
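
To make the rename-vs-copy point concrete, here is a minimal, hypothetical 
sketch of that client-side pattern (MoveHelper is an illustrative name, not 
code from this PR; only the standard org.apache.hadoop.fs calls are real):
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class MoveHelper {
  /**
   * Moves src to dst, preferring a metadata-only rename when both paths
   * report the same FileSystem, similar to what Hive's MoveTask does.
   */
  static boolean moveFile(Path src, Path dst, Configuration conf)
      throws IOException {
    FileSystem srcFs = src.getFileSystem(conf);
    FileSystem dstFs = dst.getFileSystem(conf);
    if (srcFs.getUri().equals(dstFs.getUri())) {
      // Same FS URI, so the client picks rename -- and this is exactly the
      // call that fails if ViewFS rejects renames across nested mounts.
      return srcFs.rename(src, dst);
    }
    // Different filesystems: fall back to copy, then delete the source.
    return FileUtil.copy(srcFs, src, dstFs, dst, true /* deleteSource */, conf);
  }
}
{code}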

I can't remember exactly, but during the ViewFsOverloadScheme or ViewDFS work 
we also made some assumptions that nested mounts aren't supported, so use case 
X won't occur and hence we are safe. And now I forgot was that

> Support nested mount points in INodeTree
> 
>
> Key: HADOOP-18193
> URL: https://issues.apache.org/jira/browse/HADOOP-18193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: viewfs
>Affects Versions: 2.10.0
>Reporter: Lei Yang
>Assignee: Lei Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Nested Mount Point in ViewFs.pdf
>
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> Defining the following client mount-table config is not supported in 
> INodeTree and throws FileAlreadyExistsException:
>  
> {code:java}
> fs.viewfs.mounttable.link./foo/bar=hdfs://nn1/foo/bar
> fs.viewfs.mounttable.link./foo=hdfs://nn02/foo
> {code}
> INodeTree has two methods that need changes to support nested mount points:
> {code:java}
> createLink(): builds the INodeTree during fs init.
> resolve(): resolves a path in the INodeTree for the viewfs APIs.
> {code}
> ViewFileSystem and ViewFs each maintain an INodeTree instance (fsState) and 
> call fsState.resolve(..) to resolve a path to a specific mount point. 
> INodeTree.resolve encapsulates the logic of nested mount point resolution, 
> so no changes are expected in either class.
> AC:
>  # INodeTree.createLink should support creating nested mount points. 
> (INodeTree is constructed during fs init.)
>  # INodeTree.resolve should support resolving paths based on nested mount 
> points. (INodeTree.resolve is used in the viewfs APIs.)
>  # No regression in existing ViewFileSystem and ViewFs APIs.
>  # Ensure some important APIs are not broken with nested mount points 
> (rename, getContentSummary, listStatus...).
>  
> Spec:
> Please review the attached pdf for the spec of this feature.
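
A rough, self-contained sketch of the deepest-match resolution the 
description calls for (the class and method names below are illustrative 
stand-ins, not the actual INodeTree API):
{code:java}
import java.util.HashMap;
import java.util.Map;

/**
 * Toy model of resolving a path against nested mount points: the deepest
 * (longest) matching mount prefix wins, so /foo/bar shadows /foo.
 */
public class NestedMountResolver {
  private final Map<String, String> mounts = new HashMap<>();

  void addLink(String mountPoint, String target) {
    // Unlike the pre-patch createLink(), a nested prefix is not a conflict.
    mounts.put(mountPoint, target);
  }

  String resolve(String path) {
    // Walk from the full path up through its ancestors until a mount matches.
    for (String p = path; !p.isEmpty(); p = parentOf(p)) {
      String target = mounts.get(p);
      if (target != null) {
        return target + path.substring(p.length());
      }
    }
    throw new IllegalArgumentException("No mount point covers " + path);
  }

  private static String parentOf(String path) {
    int slash = path.lastIndexOf('/');
    return slash <= 0 ? "" : path.substring(0, slash);
  }

  public static void main(String[] args) {
    NestedMountResolver r = new NestedMountResolver();
    r.addLink("/foo", "hdfs://nn02/foo");
    r.addLink("/foo/bar", "hdfs://nn1/foo/bar");
    System.out.println(r.resolve("/foo/baz"));   // hdfs://nn02/foo/baz
    System.out.println(r.resolve("/foo/bar/x")); // hdfs://nn1/foo/bar/x
  }
}
{code}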






[GitHub] [hadoop] hadoop-yetus commented on pull request #4292: HADOOP-18229. Fix Hadoop-Common JavaDoc Error

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4292:
URL: https://github.com/apache/hadoop/pull/4292#issuecomment-1128578830

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  7s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 54s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  21m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   2m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 59s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 35s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4292/53/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  26m 36s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m  9s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 47s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 53s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 4147 unchanged 
- 160 fixed = 4147 total (was 4307)  |
   | +1 :green_heart: |  mvnsite  |   1m 56s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 0 
unchanged - 106 fixed = 0 total (was 106)  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 15s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 227m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4292/53/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4292 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell xml spotbugs checkstyle |
   | uname | Linux e54407acecf9 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 207ee6cd5b7fc38b362b79b4422e2a5c2a5cb431 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4314: YARN-11153. Make proxy server support yarn federation.

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4314:
URL: https://github.com/apache/hadoop/pull/4314#issuecomment-1128578603

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 24s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 26s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   3m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  4s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 15s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   4m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   3m 38s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 20s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4314/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt)
 |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 11 
new + 95 unchanged - 4 fixed = 106 total (was 99)  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 28s |  |  hadoop-yarn-server-web-proxy in 
the patch passed.  |
   | +1 :green_heart: |  unit  | 107m 11s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 239m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4314/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4314 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 761a1c6f654d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6f15363c6511df156cc7fdab7ba4ae7895043420 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4314/2/testReport/ |
   | Max. process+thread count | 934 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4317: YARN-10465. Support getNodeToLabels, getLabelsToNodes, getClusterNodeLabels API's for Federation

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4317:
URL: https://github.com/apache/hadoop/pull/4317#issuecomment-1128559490

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 47s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 101m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4317/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4317 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux b670c5e111f9 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 71a99de31bb4abd1d16bb80fb3a030bb1067f746 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4317/6/testReport/ |
   | Max. process+thread count | 742 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4317/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #4127: HDFS-13522. RBF: Support observer node from Router-Based Federation

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4127:
URL: https://github.com/apache/hadoop/pull/4127#issuecomment-1128522409

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  5s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 11s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   7m 43s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 52s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/9/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   7m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  12m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 17s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/9/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 340 unchanged - 1 fixed = 342 total (was 
341)  |
   | +1 :green_heart: |  mvnsite  |   7m 42s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  javadoc  |   1m 44s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/9/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   7m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  13m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 41s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 19s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 399m  0s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  39m 13s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 725m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4127 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 41d120fb7e5a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4292: HADOOP-18229. Fix Hadoop-Common JavaDoc Error

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4292:
URL: https://github.com/apache/hadoop/pull/4292#issuecomment-1128500821

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  8s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 14s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  23m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   2m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 16s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 49s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4292/52/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-common in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  23m 40s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 33s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 48s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m  5s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 4152 unchanged 
- 160 fixed = 4152 total (was 4312)  |
   | +1 :green_heart: |  mvnsite  |   2m  1s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 0 
unchanged - 106 fixed = 0 total (was 106)  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 40s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 229m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4292/52/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4292 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell xml spotbugs checkstyle |
   | uname | Linux d0adf6e5c638 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 32433ade07dd21ee306066bac04961685433e624 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4307: HDFS-14750. RBF: Support dynamic handler allocation in routers

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4307:
URL: https://github.com/apache/hadoop/pull/4307#issuecomment-1128467406

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4307/4/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 4 new + 1 
unchanged - 1 fixed = 5 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  33m 49s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 138m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4307/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4307 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 1b565faaae6c 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9032ddd5bc430a9938d7ee072b14a9e5acda0883 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4307/4/testReport/ |
   | Max. process+thread count | 2312 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4307/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   

[GitHub] [hadoop] hadoop-yetus commented on pull request #4317: YARN-10465. Support getNodeToLabels, getLabelsToNodes, getClusterNodeLabels API's for Federation

2022-05-17 Thread GitBox


hadoop-yetus commented on PR #4317:
URL: https://github.com/apache/hadoop/pull/4317#issuecomment-1128464611

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 45s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 102m 14s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4317/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4317 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 91393eafb44e 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c3d2467801e963b66e48f33c4dbed53c81a6075c |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4317/5/testReport/ |
   | Max. process+thread count | 779 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4317/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   

