[jira] [Comment Edited] (HADOOP-17847) S3AInstrumentation Closing output stream statistics while data is still marked as pending upload in OutputStreamStatistics

2022-11-09 Thread duhanmin (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631386#comment-17631386
 ] 

duhanmin edited comment on HADOOP-17847 at 11/10/22 7:58 AM:
-

I also encountered a similar error when uploading files using 
TextOutputFormat.getRecordWriter.

 

hadoop:hadoop-aws-3.2.1-amzn-4
{code:java}
//log

2022-11-10 01:43:44.337 [0-0-0-writer] WARN  S3AInstrumentation - Closing 
output stream statistics while data is still marked as pending upload in 
OutputStreamStatistics{blocksSubmitted=1, blocksInQueue=1, blocksActive=0, 
blockUploadsCompleted=0, blockUploadsFailed=0, bytesPendingUpload=55436789, 
bytesUploaded=0, blocksAllocated=1, blocksReleased=1, 
blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, transferDuration=0 
ms, queueDuration=0 ms, averageQueueTime=0 ms, totalUploadDuration=0 ms, 
effectiveBandwidth=0.0 bytes/s}
2022-11-10 01:43:44.343 [0-0-0-writer] ERROR S3Writer$Task - error
java.io.IOException: regular upload failed: java.lang.NullPointerException
    at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:303) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:453)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:73)
 ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:102) 
~[hadoop-common-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:99)
 ~[hadoop-mapreduce-client-core-3.2.1-amzn-4.jar:na]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_312]
Caused by: java.lang.NullPointerException: null
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1189)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1179)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementPutStartStatistics(S3AFileSystem.java:1649)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1584) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$putObject$5(WriteOperationHelper.java:430)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:123)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:428)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:438)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
 ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
 ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_312]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[na:1.8.0_312]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[na:1.8.0_312]
    ... 1 common frames omitted{code}
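For reference, here is a minimal, hedged sketch of the failing write path. It is not taken from the report; the bucket name, path, and byte count are illustrative. Per the stack trace, TextOutputFormat's LineRecordWriter closes exactly such an FSDataOutputStream wrapping an S3ABlockOutputStream:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3AWriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path out = new Path("s3a://example-bucket/tmp/part-00000"); // illustrative path
    FileSystem fs = out.getFileSystem(conf);
    FSDataOutputStream stream = fs.create(out, true);
    stream.write(new byte[55_436_789]); // buffered by S3ABlockOutputStream until close()
    // close() drives S3ABlockOutputStream.putObject(); in the trace above, the
    // NullPointerException surfaces from S3AFileSystem.incrementPutStartStatistics().
    stream.close();
  }
}
{code}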
 


was (Author: JIRAUSER292033):
I also encountered a similar error when uploading files using TextOutputFormat.getRecordWriter.

hadoop:hadoop-aws-3.2.1-amzn-4
{code:java}
//log

2022-11-10 01:43:44.337 [0-0-0-writer] WARN  S3AInstrumentation - Closing 
output stream statistics while data is still marked as pending upload in 
OutputStreamStatistics{blocksSubmitted=1, blocksInQueue=1, blocksActive=0, 
blockUploadsCompleted=0, blockUploadsFailed=0, bytesPendingUpload=55436789, 
bytesUploaded=0, blocksAllocated=1, blocksReleased=1, 
blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, transferDuration=0 
ms, queueDuration=0 ms, averageQueueTime=0 ms, totalUploadDuration=0 ms, 
effectiveBandwidth=0.0 bytes/s}
2022-11-10 01:43:44.343 [0-0-0-writer] ERROR S3Writer$Task - error
java.io.IOException: regular upload 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5111: YARN-11350. [Federation] Router Support DelegationToken With ZK.

2022-11-09 Thread GitBox


hadoop-yetus commented on PR #5111:
URL: https://github.com/apache/hadoop/pull/5111#issuecomment-1309904281

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  9s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  18m 11s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   3m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   3m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   3m 32s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5111/3/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt)
 |  hadoop-yarn-server-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   5m 53s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 133m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.federation.store.impl.TestZookeeperFederationStateStore |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5111/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5111 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6f69a3b2fb5e 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a5ea2481479699909b1de4a2c1a288db2e813419 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  

[GitHub] [hadoop] hadoop-yetus commented on pull request #5111: YARN-11350. [Federation] Router Support DelegationToken With ZK.

2022-11-09 Thread GitBox


hadoop-yetus commented on PR #5111:
URL: https://github.com/apache/hadoop/pull/5111#issuecomment-1309854775

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 33s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   3m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   4m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   3m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   3m 20s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5111/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt)
 |  hadoop-yarn-server-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   5m 44s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 130m 36s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.federation.store.impl.TestZookeeperFederationStateStore |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5111/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5111 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0f5a9f153f95 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b183d77c1ca5b662424837e4ca48ddf1ad15ecc5 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  

[GitHub] [hadoop] zhengchenyu commented on pull request #5099: HDFS-16832. [SBN READ] Fix NPE when check the block location of empty…

2022-11-09 Thread GitBox


zhengchenyu commented on PR #5099:
URL: https://github.com/apache/hadoop/pull/5099#issuecomment-1309832743

   @xkrogen @shvachko @ZanderXu Can you please review this PR? The NPE was 
introduced by HDFS-16732 and can be reproduced as follows: 
   ```
   hive -hiveconf "hive.execution.engine=tez" -e "select 1 as a1 UNION ALL 
select 2 as a2";
   ```
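   For context, a hedged sketch of the kind of client call that reaches this code path (an assumption based on the issue title, checking the block locations of an empty file; this is not code from the PR):
   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.BlockLocation;
   import org.apache.hadoop.fs.FileStatus;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;

   public class EmptyFileLocations {
     public static void main(String[] args) throws Exception {
       FileSystem fs = FileSystem.get(new Configuration());
       Path p = new Path("/tmp/empty-file"); // illustrative path
       fs.create(p, true).close();           // zero-length file, so no blocks
       FileStatus st = fs.getFileStatus(p);
       // Resolving locations for a block-less file is where the reported NPE appears.
       BlockLocation[] locs = fs.getFileBlockLocations(st, 0, st.getLen());
       System.out.println("locations: " + locs.length);
     }
   }
   ```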


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #5122: HDFS-16811. Support DecommissionBackoffMonitor parameters reconfigurable

2022-11-09 Thread GitBox


tomscut commented on PR #5122:
URL: https://github.com/apache/hadoop/pull/5122#issuecomment-1309802427

   Thanks @haiyang1987 for your contribution.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut merged pull request #5122: HDFS-16811. Support DecommissionBackoffMonitor parameters reconfigurable

2022-11-09 Thread GitBox


tomscut merged PR #5122:
URL: https://github.com/apache/hadoop/pull/5122


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] haiyang1987 commented on pull request #5122: HDFS-16811. Support DecommissionBackoffMonitor parameters reconfigurable

2022-11-09 Thread GitBox


haiyang1987 commented on PR #5122:
URL: https://github.com/apache/hadoop/pull/5122#issuecomment-1309793313

   @tomscut Could you please help review it? Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] haiyang1987 commented on pull request #5122: HDFS-16811. Support DecommissionBackoffMonitor parameters reconfigurable

2022-11-09 Thread GitBox


haiyang1987 commented on PR #5122:
URL: https://github.com/apache/hadoop/pull/5122#issuecomment-1309792557

   The failed unit test seems unrelated to the change.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18520) Backport HADOOP-18427 and HADOOP-18452 to branch-3.3

2022-11-09 Thread Melissa You (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Melissa You resolved HADOOP-18520.
--
Resolution: Fixed

> Backport HADOOP-18427 and HADOOP-18452 to branch-3.3
> 
>
> Key: HADOOP-18520
> URL: https://issues.apache.org/jira/browse/HADOOP-18520
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.5
>Reporter: Melissa You
>Assignee: Melissa You
>Priority: Major
>  Labels: pull-request-available
>
> This is a sub-task of HADOOP-18518 to upgrade zk on 3.3 branches.
> It is a clean cherry-pick from [https://github.com/apache/hadoop/pull/4812], 
> which resolved the deprecation of EnsurePath in the new Curator. Note that, 
> because this change contained a bug fixed by 
> [https://github.com/apache/hadoop/pull/4885], we need to cherry-pick that 
> bug-fix PR as well. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on pull request #4945: HDFS-16785. DataNode hold BP write lock to scan disk

2022-11-09 Thread GitBox


Hexiaoqiao commented on PR #4945:
URL: https://github.com/apache/hadoop/pull/4945#issuecomment-1309734208

   Re-triggered Jenkins; let's wait and see what it says.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18246) Remove lower limit on s3a prefetching/caching block size

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631433#comment-17631433
 ] 

ASF GitHub Bot commented on HADOOP-18246:
-

ahmarsuhail commented on code in PR #5120:
URL: https://github.com/apache/hadoop/pull/5120#discussion_r1018626908


##
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md:
##
@@ -1107,7 +1107,9 @@ options are covered in [Testing](./testing.md).
   fs.s3a.prefetch.block.size
   8MB
   
-  The size of a single prefetched block of data.
+  The size of a single prefetched block of data. 
+  Default value is 8 MB.
+  Lower limit for the block size is 1 byte.

Review Comment:
   Sorry, I should have updated my comment. We discussed offline; you can 
either revert this change completely or change it to just the following:
   
   ```suggestion
   Decreasing this will increase the number of prefetches required, and may 
negatively impact performance.  
   ```
   





> Remove lower limit on s3a prefetching/caching block size
> 
>
> Key: HADOOP-18246
> URL: https://issues.apache.org/jira/browse/HADOOP-18246
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Daniel Carl Jones
>Assignee: Ankit Saurabh
>Priority: Minor
>  Labels: pull-request-available
>
> The minimum allowed block size currently is {{PREFETCH_BLOCK_DEFAULT_SIZE}} 
> (8MB).
> {code:java}
> this.prefetchBlockSize = intOption(
> conf, PREFETCH_BLOCK_SIZE_KEY, 
> PREFETCH_BLOCK_DEFAULT_SIZE, PREFETCH_BLOCK_DEFAULT_SIZE);{code}
> [https://github.com/apache/hadoop/blob/3aa03e0eb95bbcb066144706e06509f0e0549196/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L487-L488]
> Why is this the case and should we lower or remove it?
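A note on the snippet above: the final argument to intOption appears to act as a lower bound, which is why values below the 8 MB default are currently refused. Below is a hedged sketch of tuning the option once the limit is lifted; the 1 MB value and bucket name are illustrative, and fs.s3a.prefetch.enabled is assumed to be the switch that enables the prefetching stream.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PrefetchBlockSizeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setBoolean("fs.s3a.prefetch.enabled", true);        // turn the prefetching stream on
    conf.setLong("fs.s3a.prefetch.block.size", 1024 * 1024); // 1 MB, below the current 8 MB floor
    FileSystem fs = new Path("s3a://example-bucket/").getFileSystem(conf); // illustrative bucket
    System.out.println(fs.getUri());
  }
}
{code}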



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ahmarsuhail commented on a diff in pull request #5120: HADOOP-18246. Remove lower limit on s3a prefetching/caching block size

2022-11-09 Thread GitBox


ahmarsuhail commented on code in PR #5120:
URL: https://github.com/apache/hadoop/pull/5120#discussion_r1018626908


##
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md:
##
@@ -1107,7 +1107,9 @@ options are covered in [Testing](./testing.md).
   fs.s3a.prefetch.block.size
   8MB
   
-  The size of a single prefetched block of data.
+  The size of a single prefetched block of data. 
+  Default value is 8 MB.
+  Lower limit for the block size is 1 byte.

Review Comment:
   Sorry, I should have updated my comment. We discussed offline; you can 
either revert this change completely or change it to just the following:
   
   ```suggestion
   Decreasing this will increase the number of prefetches required, and may 
negatively impact performance.  
   ```
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] andreoss closed pull request #4910: YARN-11262: [hadoop-yarn-server-resourcemanager] Upgrade to Junit 5

2022-11-09 Thread GitBox


andreoss closed pull request #4910: YARN-11262: 
[hadoop-yarn-server-resourcemanager] Upgrade to Junit 5
URL: https://github.com/apache/hadoop/pull/4910


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2022-11-09 Thread shmily (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631403#comment-17631403
 ] 

shmily commented on HADOOP-16206:
-

Have you finished this task? Could you briefly describe the process of 
upgrading via the bridge? Thanks.

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17847) S3AInstrumentation Closing output stream statistics while data is still marked as pending upload in OutputStreamStatistics

2022-11-09 Thread duhanmin (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631386#comment-17631386
 ] 

duhanmin edited comment on HADOOP-17847 at 11/10/22 2:01 AM:
-

I also encountered a similar error when uploading files using TextOutputFormat.getRecordWriter.

hadoop:hadoop-aws-3.2.1-amzn-4
{code:java}
//log

2022-11-10 01:43:44.337 [0-0-0-writer] WARN  S3AInstrumentation - Closing 
output stream statistics while data is still marked as pending upload in 
OutputStreamStatistics{blocksSubmitted=1, blocksInQueue=1, blocksActive=0, 
blockUploadsCompleted=0, blockUploadsFailed=0, bytesPendingUpload=55436789, 
bytesUploaded=0, blocksAllocated=1, blocksReleased=1, 
blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, transferDuration=0 
ms, queueDuration=0 ms, averageQueueTime=0 ms, totalUploadDuration=0 ms, 
effectiveBandwidth=0.0 bytes/s}
2022-11-10 01:43:44.343 [0-0-0-writer] ERROR S3Writer$Task - error
java.io.IOException: regular upload failed: java.lang.NullPointerException
    at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:303) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:453)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:73)
 ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:102) 
~[hadoop-common-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:99)
 ~[hadoop-mapreduce-client-core-3.2.1-amzn-4.jar:na]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_312]
Caused by: java.lang.NullPointerException: null
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1189)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1179)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementPutStartStatistics(S3AFileSystem.java:1649)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1584) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$putObject$5(WriteOperationHelper.java:430)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:123)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:428)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:438)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
 ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
 ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_312]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[na:1.8.0_312]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[na:1.8.0_312]
    ... 1 common frames omitted{code}
 


was (Author: JIRAUSER292033):
I also encountered a similar error when uploading files using TextOutputFormat.getRecordWriter.

 

 

 
{code:java}
//log

2022-11-10 01:43:44.337 [0-0-0-writer] WARN  S3AInstrumentation - Closing 
output stream statistics while data is still marked as pending upload in 
OutputStreamStatistics{blocksSubmitted=1, blocksInQueue=1, blocksActive=0, 
blockUploadsCompleted=0, blockUploadsFailed=0, bytesPendingUpload=55436789, 
bytesUploaded=0, blocksAllocated=1, blocksReleased=1, 
blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, transferDuration=0 
ms, queueDuration=0 ms, averageQueueTime=0 ms, totalUploadDuration=0 ms, 
effectiveBandwidth=0.0 bytes/s}
2022-11-10 01:43:44.343 [0-0-0-writer] ERROR S3Writer$Task - error
java.io.IOException: regular upload failed: java.lang.NullPointerException
    at 

[jira] [Commented] (HADOOP-17847) S3AInstrumentation Closing output stream statistics while data is still marked as pending upload in OutputStreamStatistics

2022-11-09 Thread duhanmin (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631386#comment-17631386
 ] 

duhanmin commented on HADOOP-17847:
---

I also encountered a similar error when uploading files using TextOutputFormat.getRecordWriter.

 

 

 
{code:java}
//log

2022-11-10 01:43:44.337 [0-0-0-writer] WARN  S3AInstrumentation - Closing 
output stream statistics while data is still marked as pending upload in 
OutputStreamStatistics{blocksSubmitted=1, blocksInQueue=1, blocksActive=0, 
blockUploadsCompleted=0, blockUploadsFailed=0, bytesPendingUpload=55436789, 
bytesUploaded=0, blocksAllocated=1, blocksReleased=1, 
blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, transferDuration=0 
ms, queueDuration=0 ms, averageQueueTime=0 ms, totalUploadDuration=0 ms, 
effectiveBandwidth=0.0 bytes/s}
2022-11-10 01:43:44.343 [0-0-0-writer] ERROR S3Writer$Task - error
java.io.IOException: regular upload failed: java.lang.NullPointerException
    at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:303) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:453)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:73)
 ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:102) 
~[hadoop-common-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:99)
 ~[hadoop-mapreduce-client-core-3.2.1-amzn-4.jar:na]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_312]
Caused by: java.lang.NullPointerException: null
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1189)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1179)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementPutStartStatistics(S3AFileSystem.java:1649)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1584) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$putObject$5(WriteOperationHelper.java:430)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:123)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:428)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:438)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
 ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
 ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_312]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[na:1.8.0_312]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[na:1.8.0_312]
    ... 1 common frames omitted{code}
 

> S3AInstrumentation Closing output stream statistics while data is still 
> marked as pending upload in OutputStreamStatistics
> --
>
> Key: HADOOP-17847
> URL: https://issues.apache.org/jira/browse/HADOOP-17847
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.1
> Environment: hadoop: 3.2.1
> spark: 3.0.2
> k8s server version: 1.18
> aws.java.sdk.bundle.version:1.11.1033
>Reporter: Li Rong
>Priority: Major
> Attachments: logs.txt
>
>
> When using hadoop s3a file upload for spark event Logs, the logs were queued 
> up and not uploaded before the process is shut down:
> {code:java}
> // 21/08/13 12:22:39 WARN 

[GitHub] [hadoop] slfan1989 commented on pull request #5100: YARN-11367. [Federation] Fix DefaultRequestInterceptorREST Client NPE.

2022-11-09 Thread GitBox


slfan1989 commented on PR #5100:
URL: https://github.com/apache/hadoop/pull/5100#issuecomment-1309650368

   @goiri Thank you very much for helping to review the code!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18521) ABFS ReadBufferManager buffer sharing across concurrent HTTP requests

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631333#comment-17631333
 ] 

ASF GitHub Bot commented on HADOOP-18521:
-

hadoop-yetus commented on PR #5117:
URL: https://github.com/apache/hadoop/pull/5117#issuecomment-1309518862

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 25s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  20m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  2s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  22m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 57s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5117/4/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 36 new + 1 unchanged - 0 fixed = 37 total (was 1) 
 |
   | +1 :green_heart: |  mvnsite  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 37s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 48s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 49s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 45s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 239m  1s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5117/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5117 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1cea54f46634 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1ee18eeb4922d18168bd1fc8ec4a5c75610447cc |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5117/4/testReport/ |
   | Max. process+thread 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5117: HADOOP-18521. ABFS ReadBufferManager must not reuse in-progress buffers

2022-11-09 Thread GitBox


hadoop-yetus commented on PR #5117:
URL: https://github.com/apache/hadoop/pull/5117#issuecomment-1309518862

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 25s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  20m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  2s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  22m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 57s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5117/4/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 36 new + 1 unchanged - 0 fixed = 37 total (was 1) 
 |
   | +1 :green_heart: |  mvnsite  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 37s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 48s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 49s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 45s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 239m  1s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5117/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5117 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1cea54f46634 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1ee18eeb4922d18168bd1fc8ec4a5c75610447cc |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5117/4/testReport/ |
   | Max. process+thread count | 1263 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5117/4/console |
   | versions | 

[GitHub] [hadoop] 9uapaw commented on pull request #4655: YARN-11216. Avoid unnecessary reconstruction of ConfigurationProperties

2022-11-09 Thread GitBox


9uapaw commented on PR #4655:
URL: https://github.com/apache/hadoop/pull/4655#issuecomment-1309275394

   Thanks for the changes @K0K0V0K, can you deal with the checkstyle issues 
please?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5121: HDFS-16834: Removes request stateID consistency constraint between clients in different connection pools.

2022-11-09 Thread GitBox


hadoop-yetus commented on PR #5121:
URL: https://github.com/apache/hadoop/pull/5121#issuecomment-1309272954

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  36m 14s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 135m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5121/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5121 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 91edbbf64c8f 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ee6881db4fbadae4e3ace262e99737a841f8 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5121/4/testReport/ |
   | Max. process+thread count | 2798 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5121/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, 

[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631248#comment-17631248
 ] 

ASF GitHub Bot commented on HADOOP-15327:
-

9uapaw commented on code in PR #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r1018349426


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java:
##
@@ -72,19 +73,21 @@
   private static final String FETCH_RETRY_AFTER_HEADER = "Retry-After";
 
   protected final Reporter reporter;
-  private enum ShuffleErrors{IO_ERROR, WRONG_LENGTH, BAD_ID, WRONG_MAP,
+  @VisibleForTesting
+  public enum ShuffleErrors{IO_ERROR, WRONG_LENGTH, BAD_ID, WRONG_MAP,
 CONNECTION, WRONG_REDUCE}
-  
-  private final static String SHUFFLE_ERR_GRP_NAME = "Shuffle Errors";
+
+  @VisibleForTesting
+  public final static String SHUFFLE_ERR_GRP_NAME = "Shuffle Errors";

Review Comment:
   I think let's keep it; it is not important.





> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> hades-results-20221108.zip, testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] 9uapaw commented on a diff in pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread GitBox


9uapaw commented on code in PR #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r1018349426


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java:
##
@@ -72,19 +73,21 @@
   private static final String FETCH_RETRY_AFTER_HEADER = "Retry-After";
 
   protected final Reporter reporter;
-  private enum ShuffleErrors{IO_ERROR, WRONG_LENGTH, BAD_ID, WRONG_MAP,
+  @VisibleForTesting
+  public enum ShuffleErrors{IO_ERROR, WRONG_LENGTH, BAD_ID, WRONG_MAP,
 CONNECTION, WRONG_REDUCE}
-  
-  private final static String SHUFFLE_ERR_GRP_NAME = "Shuffle Errors";
+
+  @VisibleForTesting
+  public final static String SHUFFLE_ERR_GRP_NAME = "Shuffle Errors";

Review Comment:
   I think let's keep it; it is not important.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631246#comment-17631246
 ] 

ASF GitHub Bot commented on HADOOP-15327:
-

K0K0V0K commented on code in PR #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r1018348504


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java:
##
@@ -668,34 +1357,61 @@ protected ChannelFuture 
sendMapOutput(ChannelHandlerContext ctx,
   conns[i].connect();
 }
 
-//Ensure first connections are okay
-conns[0].getInputStream();
-int rc = conns[0].getResponseCode();
-Assert.assertEquals(HttpURLConnection.HTTP_OK, rc);
-
-conns[1].getInputStream();
-rc = conns[1].getResponseCode();
-Assert.assertEquals(HttpURLConnection.HTTP_OK, rc);
-
-// This connection should be closed because it to above the limit
-try {
-  rc = conns[2].getResponseCode();
-  Assert.assertEquals("Expected a too-many-requests response code",
-  ShuffleHandler.TOO_MANY_REQ_STATUS.getCode(), rc);
-  long backoff = Long.valueOf(
-  conns[2].getHeaderField(ShuffleHandler.RETRY_AFTER_HEADER));
-  Assert.assertTrue("The backoff value cannot be negative.", backoff > 0);
-  conns[2].getInputStream();
-  Assert.fail("Expected an IOException");
-} catch (IOException ioe) {
-  LOG.info("Expected - connection should not be open");
-} catch (NumberFormatException ne) {
-  Assert.fail("Expected a numerical value for RETRY_AFTER header field");
-} catch (Exception e) {
-  Assert.fail("Expected a IOException");
+Map<Integer, List<HttpURLConnection>> mapOfConnections = Maps.newHashMap();
+for (HttpURLConnection conn : conns) {
+  try {
+conn.getInputStream();
+  } catch (IOException ioe) {
+LOG.info("Expected - connection should not be open");
+  } catch (NumberFormatException ne) {
+fail("Expected a numerical value for RETRY_AFTER header field");
+  } catch (Exception e) {
+fail("Expected a IOException");
+  }
+  int statusCode = conn.getResponseCode();
+  LOG.debug("Connection status code: {}", statusCode);
+  mapOfConnections.putIfAbsent(statusCode, new ArrayList<>());
+  List<HttpURLConnection> connectionList = mapOfConnections.get(statusCode);
+  connectionList.add(conn);
 }
+
+assertEquals(String.format("Expected only %s and %s response",
+OK_STATUS, ShuffleHandler.TOO_MANY_REQ_STATUS),
+Sets.newHashSet(
+HttpURLConnection.HTTP_OK,
+ShuffleHandler.TOO_MANY_REQ_STATUS.code()),
+mapOfConnections.keySet());
 
-shuffleHandler.stop(); 
+List<HttpURLConnection> successfulConnections =
+mapOfConnections.get(HttpURLConnection.HTTP_OK);
+assertEquals(String.format("Expected exactly %d requests " +
+"with %s response", maxAllowedConnections, OK_STATUS),
+maxAllowedConnections, successfulConnections.size());
+
+//Ensure exactly one connection is HTTP 429 (TOO MANY REQUESTS)
+List<HttpURLConnection> closedConnections =
+mapOfConnections.get(ShuffleHandler.TOO_MANY_REQ_STATUS.code());
+assertEquals(String.format("Expected exactly %d %s response",
+notAcceptedConnections, ShuffleHandler.TOO_MANY_REQ_STATUS),
+notAcceptedConnections, closedConnections.size());
+
+// This connection should be closed because it is above the maximum limit
+HttpURLConnection conn = closedConnections.get(0);
+assertEquals(String.format("Expected a %s response",
+ShuffleHandler.TOO_MANY_REQ_STATUS),
+ShuffleHandler.TOO_MANY_REQ_STATUS.code(), conn.getResponseCode());
+long backoff = Long.parseLong(
+conn.getHeaderField(ShuffleHandler.RETRY_AFTER_HEADER));
+assertTrue("The backoff value cannot be negative.", backoff > 0);
+
+shuffleHandler.stop();
+
+//It's okay to get a ClosedChannelException.
+//All other kinds of exceptions means something went wrong
+assertEquals("Should have no caught exceptions",
+Collections.emptyList(), failures.stream()
+.filter(f -> !(f instanceof ClosedChannelException))
+.collect(toList()));

Review Comment:
   So, on second try:
   
   Maybe here we can call close() on the elements of the HTTP connection lists,
   because some of them will still be open even after this test has finished.
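   A minimal sketch of this suggestion (the helper name and where it is called 
   from are assumptions, not part of the patch):
{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.util.List;

final class ConnectionCleanup {
  // Disconnect every tracked connection once the assertions are done,
  // so the test does not leak sockets even when some are still open.
  static void closeAll(List<HttpURLConnection> conns) {
    for (HttpURLConnection conn : conns) {
      try {
        conn.getInputStream().close(); // release the response stream, if any
      } catch (IOException ignored) {
        // already closed or never opened: nothing to release
      }
      conn.disconnect(); // drop the underlying socket
    }
  }
}
{code}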





> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> hades-results-20221108.zip, testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631247#comment-17631247
 ] 

ASF GitHub Bot commented on HADOOP-15327:
-

9uapaw commented on code in PR #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r1018349080


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java:
##
@@ -182,19 +184,29 @@ public class ShuffleHandler extends AuxiliaryService {
 
   public static final HttpResponseStatus TOO_MANY_REQ_STATUS =
   new HttpResponseStatus(429, "TOO MANY REQUESTS");
-  // This should kept in sync with Fetcher.FETCH_RETRY_DELAY_DEFAULT
+  // This should be kept in sync with Fetcher.FETCH_RETRY_DELAY_DEFAULT
   public static final long FETCH_RETRY_DELAY = 1000L;
   public static final String RETRY_AFTER_HEADER = "Retry-After";
+  static final String ENCODER_HANDLER_NAME = "encoder";
 
   private int port;
-  private ChannelFactory selector;
-  private final ChannelGroup accepted = new DefaultChannelGroup();
+  private EventLoopGroup bossGroup;
+  private EventLoopGroup workerGroup;
+  private ServerBootstrap bootstrap;
+  private Channel ch;
+  private final ChannelGroup accepted =
+  new DefaultChannelGroup(new DefaultEventExecutorGroup(5).next());

Review Comment:
   This is just the number of threads the executor will have, though it might 
   be extracted into a named constant to make the intention clear.
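   A sketch of that extraction (the constant name is an assumption, not taken 
   from the patch):
{code:java}
import io.netty.channel.group.ChannelGroup;
import io.netty.channel.group.DefaultChannelGroup;
import io.netty.util.concurrent.DefaultEventExecutorGroup;

class ShuffleHandlerSketch {
  // A named constant documents that 5 is the executor thread count.
  private static final int DEFAULT_EVENT_EXECUTOR_THREADS = 5;

  private final ChannelGroup accepted =
      new DefaultChannelGroup(
          new DefaultEventExecutorGroup(DEFAULT_EVENT_EXECUTOR_THREADS).next());
}
{code}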





> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> hades-results-20221108.zip, testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631245#comment-17631245
 ] 

ASF GitHub Bot commented on HADOOP-15327:
-

9uapaw commented on code in PR #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r1018347131


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java:
##
@@ -291,36 +302,86 @@ public void operationComplete(ChannelFuture future) 
throws Exception {
 }
   }
 
+  static class NettyChannelHelper {
+static ChannelFuture writeToChannel(Channel ch, Object obj) {
+  LOG.debug("Writing {} to channel: {}", obj.getClass().getSimpleName(), 
ch.id());
+  return ch.writeAndFlush(obj);
+}
+
+static ChannelFuture writeToChannelAndClose(Channel ch, Object obj) {
+  return writeToChannel(ch, obj).addListener(ChannelFutureListener.CLOSE);
+}
+
+static ChannelFuture writeToChannelAndAddLastHttpContent(Channel ch, 
HttpResponse obj) {
+  writeToChannel(ch, obj);
+  return writeLastHttpContentToChannel(ch);
+}
+
+static ChannelFuture writeLastHttpContentToChannel(Channel ch) {
+  LOG.debug("Writing LastHttpContent, channel id: {}", ch.id());
+  return ch.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT);
+}
+
+static ChannelFuture closeChannel(Channel ch) {
+  LOG.debug("Closing channel, channel id: {}", ch.id());
+  return ch.close();
+}
+
+static void closeChannels(ChannelGroup channelGroup) {
+  channelGroup.close().awaitUninterruptibly(10, TimeUnit.SECONDS);
+}
+
+public static ChannelFuture closeAsIdle(Channel channel, int timeout) {
+  LOG.debug("Closing channel as writer was idle for {} seconds", timeout);
+  return closeChannel(channel);
+}
+
+public static void channelActive(Channel ch) {
+  LOG.debug("Executing channelActive, channel id: {}", ch.id());
+}
+
+public static void channelInactive(Channel channel) {
+  LOG.debug("Executing channelInactive, channel id: {}", channel.id());
+}

Review Comment:
   We need another patch anyway, so we might as well fix these to be more 
   consistent.
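   For illustration, the kind of consistency cleanup being discussed: uniform 
   visibility and a single parameter name across the helper methods (a sketch 
   under those assumptions, not the committed fix):
{code:java}
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class NettyChannelHelperSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(NettyChannelHelperSketch.class);

  static ChannelFuture closeChannel(Channel ch) {
    LOG.debug("Closing channel, channel id: {}", ch.id());
    return ch.close();
  }

  static ChannelFuture closeAsIdle(Channel ch, int timeout) {
    LOG.debug("Closing channel as writer was idle for {} seconds", timeout);
    return closeChannel(ch);
  }

  static void channelActive(Channel ch) {
    LOG.debug("Executing channelActive, channel id: {}", ch.id());
  }

  static void channelInactive(Channel ch) {
    LOG.debug("Executing channelInactive, channel id: {}", ch.id());
  }
}
{code}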





> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> hades-results-20221108.zip, testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631244#comment-17631244
 ] 

ASF GitHub Bot commented on HADOOP-15327:
-

9uapaw commented on code in PR #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r1018346414


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java:
##
@@ -904,65 +990,84 @@ private List<String> splitMaps(List<String> mapq) {
 }
 
 @Override
-public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent evt) 
+public void channelActive(ChannelHandlerContext ctx)
 throws Exception {
-  super.channelOpen(ctx, evt);
-
-  if ((maxShuffleConnections > 0) && (accepted.size() >= 
maxShuffleConnections)) {
+  NettyChannelHelper.channelActive(ctx.channel());
+  int numConnections = activeConnections.incrementAndGet();
+  if ((maxShuffleConnections > 0) && (numConnections > 
maxShuffleConnections)) {
 LOG.info(String.format("Current number of shuffle connections (%d) is 
" + 
-"greater than or equal to the max allowed shuffle connections 
(%d)", 
+"greater than the max allowed shuffle connections (%d)",
 accepted.size(), maxShuffleConnections));
 
-Map<String, String> headers = new HashMap<String, String>(1);
+Map<String, String> headers = new HashMap<>(1);
 // notify fetchers to backoff for a while before closing the connection
 // if the shuffle connection limit is hit. Fetchers are expected to
 // handle this notification gracefully, that is, not treating this as a
 // fetch failure.
 headers.put(RETRY_AFTER_HEADER, String.valueOf(FETCH_RETRY_DELAY));
 sendError(ctx, "", TOO_MANY_REQ_STATUS, headers);
-return;
+  } else {
+super.channelActive(ctx);
+accepted.add(ctx.channel());
+LOG.debug("Added channel: {}, channel id: {}. Accepted number of 
connections={}",
+ctx.channel(), ctx.channel().id(), activeConnections.get());
   }
-  accepted.add(evt.getChannel());
 }
 
 @Override
-public void messageReceived(ChannelHandlerContext ctx, MessageEvent evt)
+public void channelInactive(ChannelHandlerContext ctx) throws Exception {
+  NettyChannelHelper.channelInactive(ctx.channel());
+  super.channelInactive(ctx);
+  int noOfConnections = activeConnections.decrementAndGet();
+  LOG.debug("New value of Accepted number of connections={}", 
noOfConnections);
+}
+
+@Override
+public void channelRead(ChannelHandlerContext ctx, Object msg)
 throws Exception {
-  HttpRequest request = (HttpRequest) evt.getMessage();
-  if (request.getMethod() != GET) {
-  sendError(ctx, METHOD_NOT_ALLOWED);
-  return;
+  Channel channel = ctx.channel();
+  LOG.trace("Executing channelRead, channel id: {}", channel.id());
+  HttpRequest request = (HttpRequest) msg;
+  LOG.debug("Received HTTP request: {}, channel id: {}", request, 
channel.id());
+  if (request.method() != GET) {
+sendError(ctx, METHOD_NOT_ALLOWED);
+return;
   }
   // Check whether the shuffle version is compatible
-  if (!ShuffleHeader.DEFAULT_HTTP_HEADER_NAME.equals(
-  request.headers() != null ?
-  request.headers().get(ShuffleHeader.HTTP_HEADER_NAME) : null)
-  || !ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION.equals(
-  request.headers() != null ?
-  request.headers()
-  .get(ShuffleHeader.HTTP_HEADER_VERSION) : null)) {
+  String shuffleVersion = ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION;
+  String httpHeaderName = ShuffleHeader.HTTP_HEADER_NAME;

Review Comment:
   This is a valid concern, please fix it.





> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> hades-results-20221108.zip, testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org




[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631229#comment-17631229
 ] 

ASF GitHub Bot commented on HADOOP-15327:
-

9uapaw commented on code in PR #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r1018309405


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/resources/log4j.properties:
##
@@ -17,3 +17,5 @@ log4j.threshold=ALL
 log4j.appender.stdout=org.apache.log4j.ConsoleAppender
 log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
 log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2} 
(%F:%M(%L)) - %m%n
+log4j.logger.io.netty=DEBUG
+log4j.logger.org.apache.hadoop.mapred=DEBUG

Review Comment:
   +1, this was left here accidentally.





> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> hades-results-20221108.zip, testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[GitHub] [hadoop] goiri merged pull request #5100: YARN-11367. [Federation] Fix DefaultRequestInterceptorREST Client NPE.

2022-11-09 Thread GitBox


goiri merged PR #5100:
URL: https://github.com/apache/hadoop/pull/5100


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org




[GitHub] [hadoop] hadoop-yetus commented on pull request #5120: HADOOP-18246. Remove lower limit on s3a prefetching/caching block size

2022-11-09 Thread GitBox


hadoop-yetus commented on PR #5120:
URL: https://github.com/apache/hadoop/pull/5120#issuecomment-1309187717

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5120/2/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 56s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  29m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 48s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 112m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5120/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5120 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux edc34f5eff5e 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bf73b01d524d539bdff52cc11daf6dec0c40abc3 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5120/2/testReport/ |
   | Max. process+thread count | 533 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5120/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


[GitHub] [hadoop] brumi1024 commented on a diff in pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread GitBox


brumi1024 commented on code in PR #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r1018265618


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java:
##
@@ -904,65 +990,84 @@ private List<String> splitMaps(List<String> mapq) {
 }
 
 @Override
-public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent evt) 
+public void channelActive(ChannelHandlerContext ctx)
 throws Exception {
-  super.channelOpen(ctx, evt);
-
-  if ((maxShuffleConnections > 0) && (accepted.size() >= 
maxShuffleConnections)) {
+  NettyChannelHelper.channelActive(ctx.channel());
+  int numConnections = activeConnections.incrementAndGet();
+  if ((maxShuffleConnections > 0) && (numConnections > 
maxShuffleConnections)) {
 LOG.info(String.format("Current number of shuffle connections (%d) is 
" + 
-"greater than or equal to the max allowed shuffle connections 
(%d)", 
+"greater than the max allowed shuffle connections (%d)",
 accepted.size(), maxShuffleConnections));
 
-Map<String, String> headers = new HashMap<String, String>(1);
+Map<String, String> headers = new HashMap<>(1);
 // notify fetchers to backoff for a while before closing the connection
 // if the shuffle connection limit is hit. Fetchers are expected to
 // handle this notification gracefully, that is, not treating this as a
 // fetch failure.
 headers.put(RETRY_AFTER_HEADER, String.valueOf(FETCH_RETRY_DELAY));
 sendError(ctx, "", TOO_MANY_REQ_STATUS, headers);
-return;
+  } else {
+super.channelActive(ctx);
+accepted.add(ctx.channel());
+LOG.debug("Added channel: {}, channel id: {}. Accepted number of 
connections={}",
+ctx.channel(), ctx.channel().id(), activeConnections.get());
   }
-  accepted.add(evt.getChannel());
 }
 
 @Override
-public void messageReceived(ChannelHandlerContext ctx, MessageEvent evt)
+public void channelInactive(ChannelHandlerContext ctx) throws Exception {
+  NettyChannelHelper.channelInactive(ctx.channel());
+  super.channelInactive(ctx);
+  int noOfConnections = activeConnections.decrementAndGet();
+  LOG.debug("New value of Accepted number of connections={}", 
noOfConnections);
+}
+
+@Override
+public void channelRead(ChannelHandlerContext ctx, Object msg)
 throws Exception {
-  HttpRequest request = (HttpRequest) evt.getMessage();
-  if (request.getMethod() != GET) {
-  sendError(ctx, METHOD_NOT_ALLOWED);
-  return;
+  Channel channel = ctx.channel();
+  LOG.trace("Executing channelRead, channel id: {}", channel.id());
+  HttpRequest request = (HttpRequest) msg;
+  LOG.debug("Received HTTP request: {}, channel id: {}", request, 
channel.id());
+  if (request.method() != GET) {
+sendError(ctx, METHOD_NOT_ALLOWED);
+return;
   }
   // Check whether the shuffle version is compatible
-  if (!ShuffleHeader.DEFAULT_HTTP_HEADER_NAME.equals(
-  request.headers() != null ?
-  request.headers().get(ShuffleHeader.HTTP_HEADER_NAME) : null)
-  || !ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION.equals(
-  request.headers() != null ?
-  request.headers()
-  .get(ShuffleHeader.HTTP_HEADER_VERSION) : null)) {
+  String shuffleVersion = ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION;
+  String httpHeaderName = ShuffleHeader.HTTP_HEADER_NAME;

Review Comment:
   +1 on the DEFAULT. However, the if is needed: when request.headers() 
   returns a non-null value, we still need to check the version.
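   For illustration, a null-safe version of the check under discussion (the 
   helper name and structure are assumptions; only the ShuffleHeader constants 
   come from the quoted diff):
{code:java}
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpRequest;
import org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader;

final class ShuffleVersionCheck {
  static boolean isCompatibleShuffleRequest(HttpRequest request) {
    HttpHeaders headers = request.headers();
    if (headers == null) {
      return false; // no headers at all: not a valid shuffle request
    }
    // Both the header name and the shuffle version must match the defaults.
    return ShuffleHeader.DEFAULT_HTTP_HEADER_NAME.equals(
            headers.get(ShuffleHeader.HTTP_HEADER_NAME))
        && ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION.equals(
            headers.get(ShuffleHeader.HTTP_HEADER_VERSION));
  }
}
{code}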



##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java:
##
@@ -72,19 +73,21 @@
   private static final String FETCH_RETRY_AFTER_HEADER = "Retry-After";
 
   protected final Reporter reporter;
-  private enum ShuffleErrors{IO_ERROR, WRONG_LENGTH, BAD_ID, WRONG_MAP,
+  @VisibleForTesting
+  public enum ShuffleErrors{IO_ERROR, WRONG_LENGTH, BAD_ID, WRONG_MAP,
 CONNECTION, WRONG_REDUCE}
-  
-  private final static String SHUFFLE_ERR_GRP_NAME = "Shuffle Errors";
+
+  @VisibleForTesting
+  public final static String SHUFFLE_ERR_GRP_NAME = "Shuffle Errors";

Review Comment:
   Changing this would mean that the others should be changed as well, and that 
could complicate the diff. Not sure if it's worth it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (HADOOP-18519) Backport HDFS-15383 and HADOOP-17835 to branch-3.3

2022-11-09 Thread Melissa You (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Melissa You resolved HADOOP-18519.
--
Resolution: Fixed

> Backport HDFS-15383 and HADOOP-17835 to branch-3.3
> --
>
> Key: HADOOP-18519
> URL: https://issues.apache.org/jira/browse/HADOOP-18519
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.5
>Reporter: Melissa You
>Assignee: Melissa You
>Priority: Major
>  Labels: pull-request-available
>
> This is a sub-task of HADOOP-18518 to upgrade ZooKeeper on the 3.3 branches.
> It contains a clean cherry-pick of 
> [https://github.com/apache/hadoop/pull/3266], which resolved the deprecation 
> of PathChildrenCache/TreeCache in the new ZooKeeper. 
> It also contains a clean cherry-pick of 
> https://issues.apache.org/jira/browse/HDFS-15383, because PR-3266 is based on 
> that earlier change; specifically, isTokenWatcherEnabled was introduced in 
> [ZKDelegationTokenSecretManager.java|https://github.com/apache/hadoop/pull/2047/files#diff-f65a8ac81e253e85af159ba041fbad62fbb34b5bd909c1e9fc93d58222f406b9]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5122: HDFS-16811. Support DecommissionBackoffMonitor parameters reconfigurable

2022-11-09 Thread GitBox


hadoop-yetus commented on PR #5122:
URL: https://github.com/apache/hadoop/pull/5122#issuecomment-1309107785

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  10m 26s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
   |||| _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 51s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 38s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  28m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 220m 47s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5122/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 344m  9s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5122/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5122 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c84173ebde0c 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 1a9b62f51f07262a19055f2432d196e3d3e47c01 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5122/1/testReport/ |
   | Max. process+thread count | 2511 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5122/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18246) Remove lower limit on s3a prefetching/caching block size

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631160#comment-17631160
 ] 

ASF GitHub Bot commented on HADOOP-18246:
-

sauraank commented on PR #5120:
URL: https://github.com/apache/hadoop/pull/5120#issuecomment-1309030927

   Thanks @ahmarsuhail for the feedback. I have made the recommended changes.




> Remove lower limit on s3a prefetching/caching block size
> 
>
> Key: HADOOP-18246
> URL: https://issues.apache.org/jira/browse/HADOOP-18246
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Daniel Carl Jones
>Assignee: Ankit Saurabh
>Priority: Minor
>  Labels: pull-request-available
>
> The minimum allowed block size currently is {{PREFETCH_BLOCK_DEFAULT_SIZE}} 
> (8MB).
> {code:java}
> this.prefetchBlockSize = intOption(
> conf, PREFETCH_BLOCK_SIZE_KEY, 
> PREFETCH_BLOCK_DEFAULT_SIZE, PREFETCH_BLOCK_DEFAULT_SIZE);{code}
> [https://github.com/apache/hadoop/blob/3aa03e0eb95bbcb066144706e06509f0e0549196/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L487-L488]
> Why is this the case and should we lower or remove it?
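A minimal sketch of what dropping the floor might look like, assuming {{S3AUtils.intOption}} keeps its {{(conf, key, defaultValue, minimum)}} signature and the floor is lowered to 1 byte rather than removed outright:

{code:java}
// Hypothetical sketch, not the committed change: accept any positive
// block size by lowering the minimum from PREFETCH_BLOCK_DEFAULT_SIZE to 1.
this.prefetchBlockSize = intOption(
    conf, PREFETCH_BLOCK_SIZE_KEY,
    PREFETCH_BLOCK_DEFAULT_SIZE, 1);
{code}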



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sauraank commented on pull request #5120: HADOOP-18246. Remove lower limit on s3a prefetching/caching block size

2022-11-09 Thread GitBox


sauraank commented on PR #5120:
URL: https://github.com/apache/hadoop/pull/5120#issuecomment-1309030927

   Thanks @ahmarsuhail for the feedback. I have made the recommended changes.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631149#comment-17631149
 ] 

ASF GitHub Bot commented on HADOOP-15327:
-

K0K0V0K commented on code in PR #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r1018076397


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java:
##
@@ -182,19 +184,29 @@ public class ShuffleHandler extends AuxiliaryService {
 
   public static final HttpResponseStatus TOO_MANY_REQ_STATUS =
   new HttpResponseStatus(429, "TOO MANY REQUESTS");
-  // This should kept in sync with Fetcher.FETCH_RETRY_DELAY_DEFAULT
+  // This should be kept in sync with Fetcher.FETCH_RETRY_DELAY_DEFAULT
   public static final long FETCH_RETRY_DELAY = 1000L;
   public static final String RETRY_AFTER_HEADER = "Retry-After";
+  static final String ENCODER_HANDLER_NAME = "encoder";
 
   private int port;
-  private ChannelFactory selector;
-  private final ChannelGroup accepted = new DefaultChannelGroup();
+  private EventLoopGroup bossGroup;
+  private EventLoopGroup workerGroup;
+  private ServerBootstrap bootstrap;
+  private Channel ch;
+  private final ChannelGroup accepted =
+  new DefaultChannelGroup(new DefaultEventExecutorGroup(5).next());

Review Comment:
   Maybe there could be a line comment explaining why we have to create 5 event executors.
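   For illustration, a hedged sketch of what such a comment plus a named constant might look like (the constant name is hypothetical, not part of the patch):

{code:java}
// Hypothetical sketch: name the magic number and document why the
// channel group needs its own event executor.
private static final int CHANNEL_GROUP_EXECUTOR_THREADS = 5;

// In Netty 4, DefaultChannelGroup requires an EventExecutor; a small
// dedicated executor group is created and one executor taken from it.
private final ChannelGroup accepted = new DefaultChannelGroup(
    new DefaultEventExecutorGroup(CHANNEL_GROUP_EXECUTOR_THREADS).next());
{code}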



##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java:
##
@@ -72,19 +73,21 @@
   private static final String FETCH_RETRY_AFTER_HEADER = "Retry-After";
 
   protected final Reporter reporter;
-  private enum ShuffleErrors{IO_ERROR, WRONG_LENGTH, BAD_ID, WRONG_MAP,
+  @VisibleForTesting
+  public enum ShuffleErrors{IO_ERROR, WRONG_LENGTH, BAD_ID, WRONG_MAP,
 CONNECTION, WRONG_REDUCE}
-  
-  private final static String SHUFFLE_ERR_GRP_NAME = "Shuffle Errors";
+
+  @VisibleForTesting
+  public final static String SHUFFLE_ERR_GRP_NAME = "Shuffle Errors";

Review Comment:
   Won't checkstyle complain, asking for public static final instead of public final static?
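   For reference, a hedged one-liner showing the modifier order that checkstyle's ModifierOrder rule prefers (assuming the default JLS-order rule is active):

{code:java}
// Preferred JLS modifier order: public, then static, then final.
public static final String SHUFFLE_ERR_GRP_NAME = "Shuffle Errors";
{code}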



##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java:
##
@@ -904,65 +990,84 @@ private List<String> splitMaps(List<String> mapq) {
 }
 
 @Override
-public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent evt) 
+public void channelActive(ChannelHandlerContext ctx)
 throws Exception {
-  super.channelOpen(ctx, evt);
-
-  if ((maxShuffleConnections > 0) && (accepted.size() >= 
maxShuffleConnections)) {
+  NettyChannelHelper.channelActive(ctx.channel());
+  int numConnections = activeConnections.incrementAndGet();
+  if ((maxShuffleConnections > 0) && (numConnections > 
maxShuffleConnections)) {
 LOG.info(String.format("Current number of shuffle connections (%d) is 
" + 
-"greater than or equal to the max allowed shuffle connections 
(%d)", 
+"greater than the max allowed shuffle connections (%d)",
 accepted.size(), maxShuffleConnections));
 
-Map<String, String> headers = new HashMap<String, String>(1);
+Map<String, String> headers = new HashMap<>(1);
 // notify fetchers to backoff for a while before closing the connection
 // if the shuffle connection limit is hit. Fetchers are expected to
 // handle this notification gracefully, that is, not treating this as a
 // fetch failure.
 headers.put(RETRY_AFTER_HEADER, String.valueOf(FETCH_RETRY_DELAY));
 sendError(ctx, "", TOO_MANY_REQ_STATUS, headers);
-return;
+  } else {
+super.channelActive(ctx);
+accepted.add(ctx.channel());
+LOG.debug("Added channel: {}, channel id: {}. Accepted number of 
connections={}",
+ctx.channel(), ctx.channel().id(), activeConnections.get());
   }
-  accepted.add(evt.getChannel());
 }
 
 @Override
-public void messageReceived(ChannelHandlerContext ctx, MessageEvent evt)
+public void channelInactive(ChannelHandlerContext ctx) throws Exception {
+  NettyChannelHelper.channelInactive(ctx.channel());
+  super.channelInactive(ctx);
+  int noOfConnections = activeConnections.decrementAndGet();
+  LOG.debug("New value of Accepted number of connections={}", 
noOfConnections);
+}
+
+@Override
+public void channelRead(ChannelHandlerContext ctx, Object msg)
 throws Exception {
-  HttpRequest request = (HttpRequest) evt.getMessage();
-  if (request.getMethod() != GET) {
-  sendError(ctx, METHOD_NOT_ALLOWED);
-  return;
+  Channel channel = ctx.channel();
+  LOG.trace("Executing channelRead, channel id: {}", channel.id());
+  

[GitHub] [hadoop] K0K0V0K commented on a diff in pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread GitBox


K0K0V0K commented on code in PR #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r1018076397


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java:
##
@@ -182,19 +184,29 @@ public class ShuffleHandler extends AuxiliaryService {
 
   public static final HttpResponseStatus TOO_MANY_REQ_STATUS =
   new HttpResponseStatus(429, "TOO MANY REQUESTS");
-  // This should kept in sync with Fetcher.FETCH_RETRY_DELAY_DEFAULT
+  // This should be kept in sync with Fetcher.FETCH_RETRY_DELAY_DEFAULT
   public static final long FETCH_RETRY_DELAY = 1000L;
   public static final String RETRY_AFTER_HEADER = "Retry-After";
+  static final String ENCODER_HANDLER_NAME = "encoder";
 
   private int port;
-  private ChannelFactory selector;
-  private final ChannelGroup accepted = new DefaultChannelGroup();
+  private EventLoopGroup bossGroup;
+  private EventLoopGroup workerGroup;
+  private ServerBootstrap bootstrap;
+  private Channel ch;
+  private final ChannelGroup accepted =
+  new DefaultChannelGroup(new DefaultEventExecutorGroup(5).next());

Review Comment:
   Maybe there could be a line comment explaining why we have to create 5 event executors.



##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java:
##
@@ -72,19 +73,21 @@
   private static final String FETCH_RETRY_AFTER_HEADER = "Retry-After";
 
   protected final Reporter reporter;
-  private enum ShuffleErrors{IO_ERROR, WRONG_LENGTH, BAD_ID, WRONG_MAP,
+  @VisibleForTesting
+  public enum ShuffleErrors{IO_ERROR, WRONG_LENGTH, BAD_ID, WRONG_MAP,
 CONNECTION, WRONG_REDUCE}
-  
-  private final static String SHUFFLE_ERR_GRP_NAME = "Shuffle Errors";
+
+  @VisibleForTesting
+  public final static String SHUFFLE_ERR_GRP_NAME = "Shuffle Errors";

Review Comment:
   Won't checkstyle complain, asking for public static final instead of public final static?



##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java:
##
@@ -904,65 +990,84 @@ private List<String> splitMaps(List<String> mapq) {
 }
 
 @Override
-public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent evt) 
+public void channelActive(ChannelHandlerContext ctx)
 throws Exception {
-  super.channelOpen(ctx, evt);
-
-  if ((maxShuffleConnections > 0) && (accepted.size() >= 
maxShuffleConnections)) {
+  NettyChannelHelper.channelActive(ctx.channel());
+  int numConnections = activeConnections.incrementAndGet();
+  if ((maxShuffleConnections > 0) && (numConnections > 
maxShuffleConnections)) {
 LOG.info(String.format("Current number of shuffle connections (%d) is 
" + 
-"greater than or equal to the max allowed shuffle connections 
(%d)", 
+"greater than the max allowed shuffle connections (%d)",
 accepted.size(), maxShuffleConnections));
 
-Map<String, String> headers = new HashMap<String, String>(1);
+Map<String, String> headers = new HashMap<>(1);
 // notify fetchers to backoff for a while before closing the connection
 // if the shuffle connection limit is hit. Fetchers are expected to
 // handle this notification gracefully, that is, not treating this as a
 // fetch failure.
 headers.put(RETRY_AFTER_HEADER, String.valueOf(FETCH_RETRY_DELAY));
 sendError(ctx, "", TOO_MANY_REQ_STATUS, headers);
-return;
+  } else {
+super.channelActive(ctx);
+accepted.add(ctx.channel());
+LOG.debug("Added channel: {}, channel id: {}. Accepted number of 
connections={}",
+ctx.channel(), ctx.channel().id(), activeConnections.get());
   }
-  accepted.add(evt.getChannel());
 }
 
 @Override
-public void messageReceived(ChannelHandlerContext ctx, MessageEvent evt)
+public void channelInactive(ChannelHandlerContext ctx) throws Exception {
+  NettyChannelHelper.channelInactive(ctx.channel());
+  super.channelInactive(ctx);
+  int noOfConnections = activeConnections.decrementAndGet();
+  LOG.debug("New value of Accepted number of connections={}", 
noOfConnections);
+}
+
+@Override
+public void channelRead(ChannelHandlerContext ctx, Object msg)
 throws Exception {
-  HttpRequest request = (HttpRequest) evt.getMessage();
-  if (request.getMethod() != GET) {
-  sendError(ctx, METHOD_NOT_ALLOWED);
-  return;
+  Channel channel = ctx.channel();
+  LOG.trace("Executing channelRead, channel id: {}", channel.id());
+  HttpRequest request = (HttpRequest) msg;
+  LOG.debug("Received HTTP request: {}, channel id: {}", request, 
channel.id());
+  if (request.method() != GET) {
+sendError(ctx, METHOD_NOT_ALLOWED);
+return;
   }
   // 

[jira] [Created] (HADOOP-18525) ViewFileSystem major bug can cause entire subtrees to effectively disappear

2022-11-09 Thread Garret Wilson (Jira)
Garret Wilson created HADOOP-18525:
--

 Summary: ViewFileSystem major bug can cause entire subtrees to 
effectively disappear
 Key: HADOOP-18525
 URL: https://issues.apache.org/jira/browse/HADOOP-18525
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 3.3.4
Reporter: Garret Wilson


{{ViewFileSystem}} allows a federated view of a file system, so that for 
example under the path {{foo/}} I might have {{foo/bar1}} mapped to some other 
file system, {{foo/bar2}} mapped to some different file system, etc. using the 
ViewFS mount table.

Consider a situation where I have 1,000 subdirectories {{foo/bar000}} to 
{{foo/bar999}} mapped to 1,000 different cloud providers (e.g. AWS S3 buckets 
or whatever). Let's say that for whatever reason the mapping for {{foo/bar123}} 
was incorrect (maybe there was a corrupted mount table or a race condition in 
creating the destination cloud storage), so that when we try to get the 
status of {{foo/bar123}} it returns an HTTP {{404}}, throwing an exception.

But let's say that we were instead _listing the status of {{foo/}} itself_, in 
order to return all 1,000 children. Look what would happen in the 
{{ViewFileSystem.listStatus(Path f)}} code when we call 
{{ViewFileSystem.listStatus(new Path("…/foo"))}}. We expect it to return 999 
child paths instead of 1,000 (because one of the mounted paths is 
misconfigured and returns {{404}}):

{code:java}
  for (Entry<String, INode<FileSystem>> iEntry :
  theInternalDir.getChildren().entrySet()) {
…
  try {
FileStatus status =
((ChRootedFileSystem)link.getTargetFileSystem())
.getMyFs().getFileStatus(new Path(linkedPath));
linkStatuses.add(
new FileStatus(status.getLen(), status.isDirectory(),
status.getReplication(), status.getBlockSize(),
status.getModificationTime(), status.getAccessTime(),
status.getPermission(), status.getOwner(),
status.getGroup(), null, path));
  } catch (FileNotFoundException ex) {
LOG.warn("Cannot get one of the children's(" + path
+ ")  target path(" + link.getTargetFileSystem().getUri()
+ ") file status.", ex);
throw ex;
  }
{code}

For each particular child that is mapped in the mount table, a 
{{((ChRootedFileSystem)link.getTargetFileSystem()).getMyFs().getFileStatus(new 
Path(linkedPath))}} call is performed on the underlying federated file system and 
the resulting {{FileStatus}} is added to the list. But in the case of 
{{foo/bar123}}, it throws an exception. The code above appropriately catches 
the exception and warns, "Cannot get one of the children's … file status." That 
part is perfectly fine. *But then the code rethrows the exception, which is 
incorrect.*
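A minimal sketch of the fix this description implies, keeping the warning but skipping the unreachable child instead of rethrowing (illustrative only, not a committed patch):

{code:java}
} catch (FileNotFoundException ex) {
  // Warn, but keep listing the remaining children rather than failing
  // the whole listStatus() call because one mount target is unreachable.
  LOG.warn("Cannot get one of the children's(" + path
      + ")  target path(" + link.getTargetFileSystem().getUri()
      + ") file status.", ex);
  // no rethrow: continue the loop over the remaining mount entries
}
{code}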

Rethrowing the exception with {{throw ex}} breaks the directory listing; it 
results in an exception for the entire directory listing of {{foo/}}, not just 
the one child. If the child mapping for {{foo/bar123}} has somehow disappeared 
(maybe it's just a race condition and the mount table was stale when the 
directory listing started, so the mapping was never current) and {{foo/bar123}} 
returns a {{404}}, then suddenly the entire directory listing, instead of 
returning the expected 999 entries, returns none, because the file status 
listing of {{foo/}} itself returns {{404}}!

This bug essentially causes an entire subtree to disappear merely because of a 
problem accessing one of the _children_. In a distributed environment (which is 
what ViewFs was intended for), with thousands of mappings to various HTTP-based 
cloud storage accounts, it's not unexpected that one of them might be 
temporarily unavailable. But this bug would cause the _parent_ directory to 
seem unavailable, making it appear that e.g. {{/users}} simply did 
not exist because {{/users/fulano}} happened to be missing.

And what if {{/missing-mount}} were mounted under the root, temporarily 
unavailable, and we did a {{listStatus()}} on the root directory 
{{/}} itself? Yes, it would _appear as if the root directory itself were 
missing_, i.e. the entire federated file system.

I have seen this bug in practice. In fact, I thought I had already filed a 
ticket for this, but maybe it was in some organization's internal bug tracking 
system instead of the public Apache Hadoop bug tracker.

You can verify this bug simply by adding a unit/integration test that mocks 
{{foo/bar1}}, {{foo/bar2}}, and {{foo/bar3}} as {{ChRootedFileSystem}} in a 
{{ViewFileSystem}} via {{ViewFileSystem.getMyFs()}}. Perform a 
{{ViewFileSystem.listStatus()}} on {{foo/}} and see that it returns 3 children. 
Then have {{getMyFs().getFileStatus()}} return a {{404}} 

[GitHub] [hadoop] ashutoshcipher commented on pull request #5014: MAPREDUCE-5608. Replace and deprecate mapred.tasktracker.indexcache.mb

2022-11-09 Thread GitBox


ashutoshcipher commented on PR #5014:
URL: https://github.com/apache/hadoop/pull/5014#issuecomment-1308839013

   The checkstyle warning in the latest Yetus run can be ignored, as `public static final` 
was added to keep it consistent with the other parameters.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5098: HDFS-16831. [RBF SBN] GetNamenodesForNameserviceId should shuffle Observer NameNodes every time

2022-11-09 Thread GitBox


hadoop-yetus commented on PR #5098:
URL: https://github.com/apache/hadoop/pull/5098#issuecomment-1308772686

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  42m 45s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 150m 14s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5098/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5098 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux ee7402af6c08 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4451a8438835a2598a111411570cefa033daec07 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5098/5/testReport/ |
   | Max. process+thread count | 2377 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5098/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5098: HDFS-16831. [RBF SBN] GetNamenodesForNameserviceId should shuffle Observer NameNodes every time

2022-11-09 Thread GitBox


hadoop-yetus commented on PR #5098:
URL: https://github.com/apache/hadoop/pull/5098#issuecomment-1308738120

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  23m 19s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 121m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5098/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5098 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9fb903214903 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2329cb79ca390d2275ec623ebb0b96276fc5e4ef |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5098/6/testReport/ |
   | Max. process+thread count | 2456 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5098/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5119: YARN-5607. Moved waitFor** methods from Mock** class to CommonUtil class

2022-11-09 Thread GitBox


hadoop-yetus commented on PR #5119:
URL: https://github.com/apache/hadoop/pull/5119#issuecomment-1308703528

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 60 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  5s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  21m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 16s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | -1 :x: |  javac  |  24m 28s | 
[/results-compile-javac-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5119/1/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt)
 |  root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 1 new + 2825 unchanged - 0 
fixed = 2826 total (was 2825)  |
   | +1 :green_heart: |  compile  |  21m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  javac  |  21m 29s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5119/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt)
 |  root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 1 new + 2619 
unchanged - 0 fixed = 2620 total (was 2619)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m  1s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5119/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 203 new + 774 unchanged - 3 fixed = 977 total 
(was 777)  |
   | +1 :green_heart: |  mvnsite  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 1311m 47s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5119/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed.  |
   | -1 :x: |  unit  |   0m 58s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5119/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt)
 |  hadoop-yarn-client in the patch failed.  |
   | -1 :x: |  unit  |   0m 47s | 

[jira] [Comment Edited] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631029#comment-17631029
 ] 

Szilard Nemeth edited comment on HADOOP-15327 at 11/9/22 12:07 PM:
---

Hi,
CC: [~gandras], [~shuzirra], [~weichiu]

Let me summarize what kind of testing I performed to make sure this change 
won't cause any regression.
The project that helped me very much with the testing is called 
[Hades|https://github.com/9uapaw/hades].
Kudos to [~gandras] for the initial work on the Hades project.
h1. TL;DR

*Hades was the framework I used to run my testcases.*
*All testcases passed both with the trunk version of Hadoop (not surprising 
at all) and with the deployed Hadoop version carrying my Netty upgrade patch.*
*See the attached test logs for details.*  [^hades-results-20221108.zip] 
*Also see the details below about what Hades is, how I tested, why I chose 
certain configurations for the testcases, and more.*
*Now I'm pretty confident that this patch won't break anything, so I'm waiting 
for reviewers.*

h1. HADES IN GENERAL
h2. What is Hades?

Hades is a CLI tool that shares a common interface between various Hadoop 
distributions. It is a collection of commands most frequently used by 
developers of Hadoop components.

Hades supports [Hadock|https://github.com/9uapaw/docker-hadoop-dev], [Cloudera 
Data Platform|https://www.cloudera.com/products/cloudera-data-platform.html] 
and the standard upstream distribution.
h2. Basic features of Hades
 - Discover cluster: Stores where individual YARN / HDFS daemons are running.
 - Distribute files on certain nodes
 - Get config: Prints configuration of selected roles
 - Read logs of Hadoop roles
 - Restart: Restarting of certain roles
 - Run an application on the defined cluster
 - Status: Prints the status of the cluster
 - Update config: Update properties on a config file for selected roles
 - YARN specific commands
 - Run script: Runs user-defined custom scripts against the cluster.

h1. CLUSTER + HADES SETUP
h2. Run Hades with the Netty testing script against a cluster

First of all, I created a standard cluster and deployed Hadoop to the cluster.
Side note: later on, the installation steps that deploy Hadoop on the cluster 
could become part of Hades as well.

It's worth mentioning that I have a [PR with netty-related 
changes|https://github.com/9uapaw/hades/pull/6] against the Hades repo.
The branch of this PR is 
[this|https://github.com/szilard-nemeth/hades/tree/netty4-finish].

[Here are the 
instructions|https://github.com/szilard-nemeth/hades/blob/c16e95393ecf3e787e125c58d88ec2dc6a44b9e0/README.md#set-up-hades-on-a-cluster-and-run-the-netty-script]
 for how to set up and run Hades with the Netty testing script.
h1. THE NETTY TESTING SCRIPT

The Netty testing script [lives 
here|https://github.com/szilard-nemeth/hades/blob/netty4-finish/script/netty4.py].
As you can see from the code, quite a lot of work has been done to make sure the 
Netty 4 upgrade won't break anything and won't cause any regression, as shuffle 
is a crucial part of MapReduce.
h2. CONCEPTS
h3. Test context

Class: Netty4TestContext

The test context provides a way to encapsulate a base branch and a patch file 
(if any) applied on top of the base branch.
The context can enable or disable Maven compilation.
The context can also have certain ways to ensure that the compilation and the 
deployment of new jars were successful on the cluster.
Now, it can verify that certain logs are appearing in the daemon logs, making 
sure the deployment was okay.
The main purpose of the context is to compare its results with those of other contexts.
For the Netty testing, it was evident that I needed to make sure the trunk 
version and my version with the patch applied on top of trunk work the same, 
i.e. there's no regression.
For this, I created the context.
h3. Testcase

Class: Netty4Testcase

In general, a testcase can have a name, a simple name, some config changes 
(dictionary of string keys, string values) and one MR application.
h3. Test config: Config options for running the tests

Class: Netty4TestConfig

These are the main config options for the Netty testing.
I won't go into too much detail, as I defined a ton of options along the way.
You can check all the config options 
[here|https://github.com/szilard-nemeth/hades/blob/c16e95393ecf3e787e125c58d88ec2dc6a44b9e0/script/netty4.py#L655-L687]
h3. Compiler

As mentioned above, Hades can compile Hadoop with Maven and replace the changed 
jars / Maven modules on the cluster.
This is particularly useful for the Netty testing: I was interested in 
whether the patch causes any issues, so I had to compile Hadoop with my Netty 
patch, deploy the jars on the cluster, run all the tests, and see all of them 
pass.
h2. TESTCASES

The testcases are defined with the help of the Netty4TestcasesBuilder. You can 
find all the testcases 

[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631029#comment-17631029
 ] 

Szilard Nemeth commented on HADOOP-15327:
-

Hi,
CC: [~gandras], [~shuzirra], [~weichiu]

Let me summarize what kind of testing I performed to make sure this change 
won't cause any regression.
The project that helped me very much with the testing is called 
[Hades|https://github.com/9uapaw/hades].
Kudos to [~gandras] for the initial work on the Hades project.
h1. TL;DR

*Hades was the framework I used to run my testcases.*
*All testcases passed both with the trunk version of Hadoop (not surprising 
at all) and with the deployed Hadoop version carrying my Netty upgrade patch.*
*See the attached test logs for details.*
*Also see the details below about what Hades is, how I tested, why I chose 
certain configurations for the testcases, and more.*
*Now I'm pretty confident that this patch won't break anything, so I'm waiting 
for reviewers.*

h1. HADES IN GENERAL
h2. What is Hades?

Hades is a CLI tool that shares a common interface between various Hadoop 
distributions. It is a collection of commands most frequently used by 
developers of Hadoop components.

Hades supports [Hadock|https://github.com/9uapaw/docker-hadoop-dev], [Cloudera 
Data Platform|https://www.cloudera.com/products/cloudera-data-platform.html] 
and the standard upstream distribution.
h2. Basic features of Hades
 - Discover cluster: Stores where individual YARN / HDFS daemons are running.
 - Distribute files on certain nodes
 - Get config: Prints configuration of selected roles
 - Read logs of Hadoop roles
 - Restart: Restarting of certain roles
 - Run an application on the defined cluster
 - Status: Prints the status of the cluster
 - Update config: Update properties on a config file for selected roles
 - YARN specific commands
 - Run script: Runs user-defined custom scripts against the cluster.

h1. CLUSTER + HADES SETUP
h2. Run Hades with the Netty testing script against a cluster

First of all, I created a standard cluster and deployed Hadoop to the cluster.
Side note: later on, the installation steps that deploy Hadoop on the cluster 
could become part of Hades as well.

It's worth mentioning that I have a [PR with netty-related 
changes|https://github.com/9uapaw/hades/pull/6] against the Hades repo.
The branch of this PR is 
[this|https://github.com/szilard-nemeth/hades/tree/netty4-finish].

[Here are the 
instructions|https://github.com/szilard-nemeth/hades/blob/c16e95393ecf3e787e125c58d88ec2dc6a44b9e0/README.md#set-up-hades-on-a-cluster-and-run-the-netty-script]
 for how to set up and run Hades with the Netty testing script.
h1. THE NETTY TESTING SCRIPT

The Netty testing script [lives 
here|https://github.com/szilard-nemeth/hades/blob/netty4-finish/script/netty4.py].
As you can see from the code, quite a lot of work has been done to make sure the 
Netty 4 upgrade won't break anything and won't cause any regression, as shuffle 
is a crucial part of MapReduce.
h2. CONCEPTS
h3. Test context

Class: Netty4TestContext

The test context provides a way to encapsulate a base branch and a patch file 
(if any) applied on top of the base branch.
The context can enable or disable Maven compilation.
The context can also have certain ways to ensure that the compilation and the 
deployment of new jars were successful on the cluster.
Now, it can verify that certain logs are appearing in the daemon logs, making 
sure the deployment was okay.
The main purpose of the context is to compare its results with those of other contexts.
For the Netty testing, it was evident that I needed to make sure the trunk 
version and my version with the patch applied on top of trunk work the same, 
i.e. there's no regression.
For this, I created the context.
h3. Testcase

Class: Netty4Testcase

In general, a testcase can have a name, a simple name, some config changes 
(dictionary of string keys, string values) and one MR application.
h3. Test config: Config options for running the tests

Class: Netty4TestConfig

These are the main config options for the Netty testing.
I won't go into too much detail, as I defined a ton of options along the way.
You can check all the config options 
[here|https://github.com/szilard-nemeth/hades/blob/c16e95393ecf3e787e125c58d88ec2dc6a44b9e0/script/netty4.py#L655-L687]
h3. Compiler

As mentioned above, Hades can compile Hadoop with Maven and replace the changed 
jars / Maven modules on the cluster.
This is particularly useful for the Netty testing: I was interested in 
whether the patch causes any issues, so I had to compile Hadoop with my Netty 
patch, deploy the jars on the cluster, run all the tests, and see all of them 
pass.
h2. TESTCASES

The testcases are defined with the help of the Netty4TestcasesBuilder. You can 
find all the testcases 

[jira] [Comment Edited] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631029#comment-17631029
 ] 

Szilard Nemeth edited comment on HADOOP-15327 at 11/9/22 12:05 PM:
---

Hi,
CC: [~gandras], [~shuzirra], [~weichiu]

Let me summarize what kind of testing I performed to make sure this change 
won't cause any regression.
The project that helped me very much with the testing is called 
[Hades|https://github.com/9uapaw/hades].
Kudos to [~gandras] for the initial work on the Hades project.
h1. TL;DR

*Hades was the framework I used to run my testcases.*
*All testcases passed both with the trunk version of Hadoop (not surprising 
at all) and with the deployed Hadoop version carrying my Netty upgrade patch.*
*See the attached test logs for details.*
*Also see the details below about what Hades is, how I tested, why I chose 
certain configurations for the testcases, and more.*
*Now I'm pretty confident that this patch won't break anything, so I'm waiting 
for reviewers.*

h1. HADES IN GENERAL
h2. What is Hades?

Hades is a CLI tool that shares a common interface between various Hadoop 
distributions. It is a collection of commands most frequently used by 
developers of Hadoop components.

Hades supports [Hadock|https://github.com/9uapaw/docker-hadoop-dev], [Cloudera 
Data Platform|https://www.cloudera.com/products/cloudera-data-platform.html] 
and the standard upstream distribution.
h2. Basic features of Hades
 - Discover cluster: Stores where individual YARN / HDFS daemons are running.
 - Distribute files on certain nodes
 - Get config: Prints configuration of selected roles
 - Read logs of Hadoop roles
 - Restart: Restarting of certain roles
 - Run an application on the defined cluster
 - Status: Prints the status of the cluster
 - Update config: Update properties on a config file for selected roles
 - YARN specific commands
 - Run script: Runs user-defined custom scripts against the cluster.

h1. CLUSTER + HADES SETUP
h2. Run Hades with the Netty testing script against a cluster

First of all, I created a standard cluster and deployed Hadoop to the cluster.
Side note: later on, the installation steps that deploy Hadoop on the cluster 
could become part of Hades as well.

It's worth mentioning that I have a [PR with netty-related 
changes|https://github.com/9uapaw/hades/pull/6] against the Hades repo.
The branch of this PR is 
[this|https://github.com/szilard-nemeth/hades/tree/netty4-finish].

[Here are the 
instructions|https://github.com/szilard-nemeth/hades/blob/c16e95393ecf3e787e125c58d88ec2dc6a44b9e0/README.md#set-up-hades-on-a-cluster-and-run-the-netty-script]
 for how to set up and run Hades with the Netty testing script.
h1. THE NETTY TESTING SCRIPT

The Netty testing script [lives 
here|https://github.com/szilard-nemeth/hades/blob/netty4-finish/script/netty4.py].
As you can see from the code, quite a lot of work has been done to make sure the 
Netty 4 upgrade won't break anything and won't cause any regression, as shuffle 
is a crucial part of MapReduce.
h2. CONCEPTS
h3. Test context

Class: Netty4TestContext

The test context provides a way to encapsulate a base branch and a patch file 
(if any) applied on top of the base branch.
The context can enable or disable Maven compilation.
The context can also have certain ways to ensure that the compilation and the 
deployment of new jars were successful on the cluster.
Now, it can verify that certain logs are appearing in the daemon logs, making 
sure the deployment was okay.
The main purpose of the context is to compare its results with those of other contexts.
For the Netty testing, it was evident that I needed to make sure the trunk 
version and my version with the patch applied on top of trunk work the same, 
i.e. there's no regression.
For this, I created the context.
h3. Testcase

Class: Netty4Testcase

In general, a testcase can have a name, a simple name, some config changes 
(dictionary of string keys, string values) and one MR application.
h3. Test config: Config options for running the tests

Class: Netty4TestConfig

These are the main config options for the Netty testing.
I won't go into too much detail, as I defined a ton of options along the way.
You can check all the config options 
[here|https://github.com/szilard-nemeth/hades/blob/c16e95393ecf3e787e125c58d88ec2dc6a44b9e0/script/netty4.py#L655-L687]
h3. Compiler

As mentioned above, Hades can compile Hadoop with Maven and replace the changed 
jars / Maven modules on the cluster.
This is particularly useful for the Netty testing: I was interested in 
whether the patch causes any issues, so I had to compile Hadoop with my Netty 
patch, deploy the jars on the cluster, run all the tests, and see all of them 
pass.
h2. TESTCASES

The testcases are defined with the help of the Netty4TestcasesBuilder. You can 
find all the testcases 

[jira] [Updated] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2022-11-09 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated HADOOP-15327:

Attachment: hades-results-20221108.zip

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> hades-results-20221108.zip, testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] haiyang1987 opened a new pull request, #5122: HDFS-16811. Support DecommissionBackoffMonitor parameters reconfigurable

2022-11-09 Thread GitBox


haiyang1987 opened a new pull request, #5122:
URL: https://github.com/apache/hadoop/pull/5122

   Backport HDFS-16811 (Support DecommissionBackoffMonitor parameters 
reconfigurable) into branch-3.3
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5115: YARN-10005. Code improvements in MutableCSConfigurationProvider

2022-11-09 Thread GitBox


hadoop-yetus commented on PR #5115:
URL: https://github.com/apache/hadoop/pull/5115#issuecomment-1308629101

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 48s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5115/4/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  99m  5s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 201m 19s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5115/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5115 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6547d975e2d1 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 04bc57facbeedf7e3894664ff98ea103a74e4793 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5115/4/testReport/ |
   | Max. process+thread count | 976 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Commented] (HADOOP-18523) Allow to retrieve an object from MinIO (S3 API) with a very restrictive policy

2022-11-09 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631009#comment-17631009
 ] 

Steve Loughran commented on HADOOP-18523:
-

don't blame the s3a code here; spark is calling fs.isDirectory(hdfsPath).
going to have to close this as a wontfix

in the theoretical world of open source, anything is fixable. here I'd recommend 
you comment out that bit of 
org.apache.spark.sql.execution.streaming.FileStreamSink.hasMetadata in the 
private fork of spark you will have to maintain.

that or hack around the s3a connector. it is written for AWS S3, where 
ListObjects across the entire bucket is expected. 

leaving it as your homework, I'm afraid

> Allow to retrieve an object from MinIO (S3 API) with a very restrictive policy
> --
>
> Key: HADOOP-18523
> URL: https://issues.apache.org/jira/browse/HADOOP-18523
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Sébastien Burton
>Priority: Major
>
> Hello,
> We're using Spark 
> ({{{}"org.apache.spark:spark-[catalyst|core|sql]_2.12:3.2.2"{}}}) and Hadoop 
> ({{{}"org.apache.hadoop:hadoop-common:3.3.3"{}}}) and want to retrieve an 
> object stored in a MinIO bucket (MinIO implements the S3 API). Spark relies 
> on Hadoop for this operation.
> The MinIO bucket (that we don't manage) is configured with a very restrictive 
> policy that only allows us to retrieve the object (and nothing else). 
> Something like:
> {code:java}
> {
>   "statement": [
> {
>   "effect": "Allow",
>       "action": [ "s3:GetObject" ],
>   "resource": [ "arn:aws:s3:::minio-bucket/object" ]
>     }
>   ]
> }{code}
> And using the AWS CLI, we can indeed retrieve the object.
> When we try with Spark's {{{}DataFrameReader{}}}, we receive an HTTP 403 
> response (access denied) from MinIO:
> {code:java}
> java.nio.file.AccessDeniedException: s3a://minio-bucket/object: getFileStatus 
> on s3a://minio-bucket/object: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied. (Service: 
> Amazon S3; Status Code: 403; Error Code: AccessDenied; ...
> at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:255)
> at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:175)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3858)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3688)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$isDirectory$35(S3AFileSystem.java:4724)
> at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:499)
> at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:444)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2337)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2356)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.isDirectory(S3AFileSystem.java:4722)
> at 
> org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:54)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:274)
> at 
> org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:245)
> at scala.Option.getOrElse(Option.scala:189)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:245)
> at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:571)
> at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:481)
> at 
> com.soprabanking.dxp.pure.bf.dataaccess.S3Storage.loadDataset(S3Storage.java:55)
> at 
> com.soprabanking.dxp.pure.bf.business.step.DatasetLoader.lambda$doLoad$3(DatasetLoader.java:148)
> at 
> reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:125)
> at 
> reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
> at 
> reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
> at 
> reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
> at 
> reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
> at 
> reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
> at reactor.core.publisher.MonoZip$ZipCoordinator.signal(MonoZip.java:251)
> at reactor.core.publisher.MonoZip$ZipInner.onNext(MonoZip.java:336)
> at 
> reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
> at 

[jira] [Commented] (HADOOP-18522) Remove usage of System.out.println from Hadoop Codebase

2022-11-09 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631006#comment-17631006
 ] 

Steve Loughran commented on HADOOP-18522:
-

test code is generally ok to move, though look for any suites which retarget 
System.out

production code, especially the output of any CLI, is safest to leave alone, in 
case some other tool is parsing that output

as usual, separate PRs will be needed for separate components (hdfs, yarn, mr 
etc.), each with their own module jiras linked to this.
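
as a concrete illustration of the test-code half of that, a minimal sketch 
assuming SLF4J, which Hadoop already uses; the class and method names here are 
made up:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestExample {  // hypothetical test class
  private static final Logger LOG =
      LoggerFactory.getLogger(TestExample.class);

  public void testSomething() {
    String result = "ok";
    // before: System.out.println("result = " + result);
    // after: parameterized logging, routed through the normal log config
    LOG.info("result = {}", result);
  }
}
{code}

the suites that retarget System.out (say, to capture CLI output for an 
assertion) are exactly the ones this swap would silently break, which is why 
they need checking first.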

> Remove usage of System.out.println from Hadoop Codebase 
> 
>
> Key: HADOOP-18522
> URL: https://issues.apache.org/jira/browse/HADOOP-18522
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Minor
>
> In the Hadoop codebase there are usages of System.out.println left over from 
> development.
> This PR removes them as part of paying down that technical debt.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18433) Fix main thread name.

2022-11-09 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu resolved HADOOP-18433.
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Fix main thread name.
> -
>
> Key: HADOOP-18433
> URL: https://issues.apache.org/jira/browse/HADOOP-18433
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, ipc
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> The server's main thread is named "Listener at ${hostname}/9000", which is 
> easily confusing. The main thread looks like this:
> {code:java}
> "Listener at ${hostname}/9000" #1 prio=5 os_prio=0 tid=0x7f8068016000 
> nid=0x5c086 in Object.wait() [0x7f806f1d4000]
>    java.lang.Thread.State: WAITING (on object monitor)
>     at java.lang.Object.wait(Native Method)
>     - waiting on <0x7f7552553010> (a 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server)
>     at java.lang.Object.wait(Object.java:502)
>     at org.apache.hadoop.ipc.Server.join(Server.java:3449)
>     - locked <0x7f7552553010> (a 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.join(NameNodeRpcServer.java:613)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.join(NameNode.java:1014)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1774)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18433) Fix main thread name.

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631005#comment-17631005
 ] 

ASF GitHub Bot commented on HADOOP-18433:
-

ZanderXu commented on PR #4838:
URL: https://github.com/apache/hadoop/pull/4838#issuecomment-1308600322

   Merged. @zhengchenyu Thanks for your contribution. 




> Fix main thread name.
> -
>
> Key: HADOOP-18433
> URL: https://issues.apache.org/jira/browse/HADOOP-18433
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, ipc
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
>
> The server's main thread is named "Listener at ${hostname}/9000", which is 
> easily confusing. The main thread looks like this:
> {code:java}
> "Listener at ${hostname}/9000" #1 prio=5 os_prio=0 tid=0x7f8068016000 
> nid=0x5c086 in Object.wait() [0x7f806f1d4000]
>    java.lang.Thread.State: WAITING (on object monitor)
>     at java.lang.Object.wait(Native Method)
>     - waiting on <0x7f7552553010> (a 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server)
>     at java.lang.Object.wait(Object.java:502)
>     at org.apache.hadoop.ipc.Server.join(Server.java:3449)
>     - locked <0x7f7552553010> (a 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.join(NameNodeRpcServer.java:613)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.join(NameNode.java:1014)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1774)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18433) Fix main thread name.

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631004#comment-17631004
 ] 

ASF GitHub Bot commented on HADOOP-18433:
-

ZanderXu merged PR #4838:
URL: https://github.com/apache/hadoop/pull/4838




> Fix main thread name.
> -
>
> Key: HADOOP-18433
> URL: https://issues.apache.org/jira/browse/HADOOP-18433
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, ipc
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
>
> The server's main thread is named "Listener at ${hostname}/9000", which is 
> easily confusing. The main thread looks like this:
> {code:java}
> "Listener at ${hostname}/9000" #1 prio=5 os_prio=0 tid=0x7f8068016000 
> nid=0x5c086 in Object.wait() [0x7f806f1d4000]
>    java.lang.Thread.State: WAITING (on object monitor)
>     at java.lang.Object.wait(Native Method)
>     - waiting on <0x7f7552553010> (a 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server)
>     at java.lang.Object.wait(Object.java:502)
>     at org.apache.hadoop.ipc.Server.join(Server.java:3449)
>     - locked <0x7f7552553010> (a 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.join(NameNodeRpcServer.java:613)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.join(NameNode.java:1014)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1774)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18433) Fix main thread name.

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17631002#comment-17631002
 ] 

ASF GitHub Bot commented on HADOOP-18433:
-

ZanderXu commented on PR #4838:
URL: https://github.com/apache/hadoop/pull/4838#issuecomment-1308598712

   > > How about moving line 1397 to line 1513? Correct it rather than deleting it.
   > 
   > @ZanderXu Line 1410 already sets the thread name, so I think just deleting it is ok.
   > 
   > ```
   >   this.setName("IPC Server listener on " + port);
   > ```
   
   Oh, I see. LGTM.
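   
   for context, a self-contained sketch of the naming pitfall discussed above 
(illustrative only, this is not the actual Server code): this.setName(...) in 
a Thread subclass names the new thread object, while 
Thread.currentThread().setName(...) executed inside the constructor renames 
whichever thread runs the constructor, e.g. main:
   
   ```java
   public class ThreadNameDemo {
     static class Listener extends Thread {
       Listener(int port) {
         // intended: name the listener thread object itself
         this.setName("IPC Server listener on " + port);
         // pitfall: renames whichever thread runs this constructor (main!)
         Thread.currentThread().setName("Listener at localhost/" + port);
       }
       @Override
       public void run() {
         System.out.println("listener runs as: " + getName());
       }
     }
   
     public static void main(String[] args) throws Exception {
       Thread t = new Listener(9000);
       t.start();
       t.join();
       // prints the listener-style name: main was renamed by the constructor
       System.out.println("main is now: " + Thread.currentThread().getName());
     }
   }
   ```
   
   which is how a JVM main thread can end up reported as "Listener at 
${hostname}/9000" in a thread dump.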




> Fix main thread name.
> -
>
> Key: HADOOP-18433
> URL: https://issues.apache.org/jira/browse/HADOOP-18433
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, ipc
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
>
> The server's main thread is named "Listener at ${hostname}/9000", which is 
> easily confusing. The main thread looks like this:
> {code:java}
> "Listener at ${hostname}/9000" #1 prio=5 os_prio=0 tid=0x7f8068016000 
> nid=0x5c086 in Object.wait() [0x7f806f1d4000]
>    java.lang.Thread.State: WAITING (on object monitor)
>     at java.lang.Object.wait(Native Method)
>     - waiting on <0x7f7552553010> (a 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server)
>     at java.lang.Object.wait(Object.java:502)
>     at org.apache.hadoop.ipc.Server.join(Server.java:3449)
>     - locked <0x7f7552553010> (a 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.join(NameNodeRpcServer.java:613)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.join(NameNode.java:1014)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1774)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on a diff in pull request #5098: HDFS-16831. [RBF SBN] GetNamenodesForNameserviceId should shuffle Observer NameNodes every time

2022-11-09 Thread GitBox


ZanderXu commented on code in PR #5098:
URL: https://github.com/apache/hadoop/pull/5098#discussion_r1017802434


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java:
##
@@ -189,13 +189,45 @@ private void updateNameNodeState(final String nsId,
 }
   }
 
+  private <T extends FederationNamenodeContext> List<T> shuffleObserverNN(

Review Comment:
   @goiri Sir, I have updated it; please help review it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on a diff in pull request #5098: HDFS-16831. [RBF SBN] GetNamenodesForNameserviceId should shuffle Observer NameNodes every time

2022-11-09 Thread GitBox


ZanderXu commented on code in PR #5098:
URL: https://github.com/apache/hadoop/pull/5098#discussion_r1017802115


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java:
##
@@ -189,13 +189,45 @@ private void updateNameNodeState(final String nsId,
 }
   }
 
+  private <T extends FederationNamenodeContext> List<T> shuffleObserverNN(
+      List<T> inputNameNodes, boolean listObserversFirst) {
+    if (!listObserversFirst) {
+      return inputNameNodes;
+    }
+    // Get Observers first.
+    List<T> observerList = new ArrayList<>();
+    for (T t : inputNameNodes) {
+      if (t.getState() == OBSERVER) {
+        observerList.add(t);
+      } else {
+        // The inputNameNodes are already sorted, so it can break

Review Comment:
   Yes. If `listObserversFirst` is true, all Observers will be placed at the 
front of the `inputNameNodes`, which have already been processed by the 
`getRecentRegistrationForQuery` method.
   
   If we want to make this method more general, I can drop that assumption and 
loop over all the `inputNameNodes` to find the Observers.
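   
   To make that concrete, a self-contained sketch of the prefix-shuffle idea 
(the enum and interface are stand-ins for the real 
FederationNamenodeServiceState / FederationNamenodeContext types; this is not 
the committed patch):
   
   ```java
   import java.util.ArrayList;
   import java.util.Collections;
   import java.util.List;
   
   public class ShuffleObserverSketch {
     enum State { ACTIVE, STANDBY, OBSERVER }         // stand-in state enum
     interface NamenodeContext { State getState(); }  // stand-in context type
   
     static <T extends NamenodeContext> List<T> shuffleObserverNN(
         List<T> input, boolean listObserversFirst) {
       if (!listObserversFirst) {
         return input;
       }
       // Observers form a prefix of the already-sorted input,
       // so stop at the first non-observer.
       List<T> observers = new ArrayList<>();
       for (T t : input) {
         if (t.getState() == State.OBSERVER) {
           observers.add(t);
         } else {
           break;
         }
       }
       // Shuffle only the observer prefix, then append the rest unchanged.
       Collections.shuffle(observers);
       List<T> result = new ArrayList<>(observers);
       result.addAll(input.subList(observers.size(), input.size()));
       return result;
     }
   }
   ```
   
   Looping over the whole list instead of breaking early would drop the 
ordering assumption at the cost of one full pass, which is the trade-off 
mentioned above.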



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18501) [ABFS]: Partial Read should add to throttling metric

2022-11-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17630986#comment-17630986
 ] 

ASF GitHub Bot commented on HADOOP-18501:
-

hadoop-yetus commented on PR #5109:
URL: https://github.com/apache/hadoop/pull/5109#issuecomment-1308563922

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 9 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 22s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5109/10/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 6 new + 9 unchanged - 0 
fixed = 15 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 23s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  97m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5109/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5109 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux dcb74096c9eb 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7ee87be41e7be3ed70f7e734f66482678696aaf3 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5109/10/testReport/ |
   | Max. process+thread count | 568 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5109/10/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.