[jira] [Work logged] (HADOOP-13144) Enhancing IPC client throughput via multiple connections per user

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13144?focusedWorklogId=789859&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789859
 ]

ASF GitHub Bot logged work on HADOOP-13144:
---

Author: ASF GitHub Bot
Created on: 12/Jul/22 05:26
Start Date: 12/Jul/22 05:26
Worklog Time Spent: 10m 
  Work Description: ferhui commented on PR #4542:
URL: https://github.com/apache/hadoop/pull/4542#issuecomment-1181329759

   @ZanderXu Thanks for pushing it forward. It makes sense.




Issue Time Tracking
---

Worklog Id: (was: 789859)
Time Spent: 1h 20m  (was: 1h 10m)

> Enhancing IPC client throughput via multiple connections per user
> ---------------------------------------------------------------------
>
> Key: HADOOP-13144
> URL: https://issues.apache.org/jira/browse/HADOOP-13144
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Jason Kace
>Assignee: Íñigo Goiri
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HADOOP-13144-performance.patch, HADOOP-13144.000.patch, 
> HADOOP-13144.001.patch, HADOOP-13144.002.patch, HADOOP-13144.003.patch, 
> HADOOP-13144_overload_enhancement.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The generic IPC client ({{org.apache.hadoop.ipc.Client}}) utilizes a single 
> connection thread for each {{ConnectionId}}.  The {{ConnectionId}} is unique 
> to the connection's remote address, ticket and protocol.  Each ConnectionId 
> is 1:1 mapped to a connection thread by the client via a map cache.
> The result is that all IPC read/write activity is serialized through a single 
> thread for each user/ticket + address.  If a single user makes repeated 
> calls (1k-100k/sec) to the same destination, the IPC client becomes a 
> bottleneck.
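The bottleneck described above can be pictured with a small sketch: instead of the cache's 1:1 map from ConnectionId to a single connection thread, a client could keep a small pool per key and round-robin over it. This is an illustrative toy under invented names (the class, pool size, and String-keyed map are not Hadoop's actual `org.apache.hadoop.ipc.Client` internals):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/** Toy illustration: a pool of N "connections" per key instead of a 1:1 map. */
public class ConnectionPoolSketch {
    // Hypothetical pool size; not an actual Hadoop configuration key.
    static final int POOL_SIZE = 4;

    // The String key stands in for Hadoop's ConnectionId
    // (remote address + ticket + protocol).
    private final Map<String, List<String>> pools = new ConcurrentHashMap<>();
    private final AtomicInteger counter = new AtomicInteger();

    String getConnection(String connectionId) {
        List<String> pool = pools.computeIfAbsent(connectionId, id -> {
            List<String> created = new ArrayList<>();
            for (int i = 0; i < POOL_SIZE; i++) {
                created.add(id + "#conn-" + i); // placeholder for a real socket/thread
            }
            return created;
        });
        // Round-robin across the pool so concurrent callers for the same
        // user+address no longer funnel through one connection thread.
        return pool.get(Math.floorMod(counter.getAndIncrement(), pool.size()));
    }

    public static void main(String[] args) {
        ConnectionPoolSketch client = new ConnectionPoolSketch();
        System.out.println(client.getConnection("nn:8020/userA"));
        System.out.println(client.getConnection("nn:8020/userA"));
    }
}
```

With a pool, two back-to-back calls for the same key land on different connections, which is the throughput gain the issue is after.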



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ferhui commented on pull request #4542: HADOOP-13144. Enhancing IPC client throughput via multiple connections per user

2022-07-11 Thread GitBox


ferhui commented on PR #4542:
URL: https://github.com/apache/hadoop/pull/4542#issuecomment-1181329759

   @ZanderXu Thanks for pushing it forward. It makes sense.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





[jira] [Work logged] (HADOOP-18334) Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18334?focusedWorklogId=789858&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789858
 ]

ASF GitHub Bot logged work on HADOOP-18334:
---

Author: ASF GitHub Bot
Created on: 12/Jul/22 05:09
Start Date: 12/Jul/22 05:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4554:
URL: https://github.com/apache/hadoop/pull/4554#issuecomment-1181320374

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   5m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m 58s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  18m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  46m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4554/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4554 |
   | Optional Tests | dupname asflicense codespell detsecrets shellcheck 
shelldocs |
   | uname | Linux 34bd0b55104d 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / c9a4cb10f704f35c9382e6da6dfdc1118a8bea01 |
   | Max. process+thread count | 400 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4554/1/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 789858)
Time Spent: 0.5h  (was: 20m)

> Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2
> ---------------------------------------------------------------------
>
> Key: HADOOP-18334
> URL: https://issues.apache.org/jira/browse/HADOOP-18334
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.3
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> gpg v2.1 and above does not export GPG_AGENT_INFO, so the create-release 
> script needs to export the info itself to make {{--sign}} work. This was 
> addressed as part of HADOOP-16797 in branch-3.3 and trunk. Since we cannot 
> backport aarch64 support to branch-3.2, I filed this issue for branch-3.2 only.
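The workaround the issue describes can be sketched in shell. This is an illustrative guess at the shape of the fix, not the actual create-release code: `gpgconf --list-dirs agent-socket` is how gpg 2.1+ reports the agent socket, and `socket:pid:protocol-version` is the historical GPG_AGENT_INFO layout that older tooling expects.

```shell
# Hedged sketch: reconstruct GPG_AGENT_INFO on gpg >= 2.1, which no
# longer exports it. Not the actual create-release implementation.
derive_gpg_agent_info() {
  # gpgconf (gpg 2.1+) reports the agent socket path; tolerate its absence
  sock="$(gpgconf --list-dirs agent-socket 2>/dev/null || true)"
  if [ -n "${sock}" ]; then
    # historical format was <socket-path>:<pid>:<protocol-version>
    export GPG_AGENT_INFO="${sock}:0:1"
  fi
}

derive_gpg_agent_info
echo "GPG_AGENT_INFO=${GPG_AGENT_INFO:-unset}"
```

Exporting the variable before invoking maven is what lets `--sign` find the already-warmed agent instead of failing as in the error log below.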






[GitHub] [hadoop] hadoop-yetus commented on pull request #4554: HADOOP-18334. Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2.

2022-07-11 Thread GitBox


hadoop-yetus commented on PR #4554:
URL: https://github.com/apache/hadoop/pull/4554#issuecomment-1181320374

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   5m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m 58s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  18m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  46m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4554/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4554 |
   | Optional Tests | dupname asflicense codespell detsecrets shellcheck 
shelldocs |
   | uname | Linux 34bd0b55104d 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / c9a4cb10f704f35c9382e6da6dfdc1118a8bea01 |
   | Max. process+thread count | 400 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4554/1/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Work logged] (HADOOP-13144) Enhancing IPC client throughput via multiple connections per user

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13144?focusedWorklogId=789845&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789845
 ]

ASF GitHub Bot logged work on HADOOP-13144:
---

Author: ASF GitHub Bot
Created on: 12/Jul/22 04:31
Start Date: 12/Jul/22 04:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4542:
URL: https://github.com/apache/hadoop/pull/4542#issuecomment-1181301908

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  3s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 16s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 45s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 215m 29s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4542/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4542 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux a3ed3d06b83f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 13fd2adeb338be56004c56a9e23c9406c0abe053 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4542/3/testReport/ |
   | Max. process+thread count | 2773 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4542/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

[GitHub] [hadoop] hadoop-yetus commented on pull request #4542: HADOOP-13144. Enhancing IPC client throughput via multiple connections per user

2022-07-11 Thread GitBox


hadoop-yetus commented on PR #4542:
URL: https://github.com/apache/hadoop/pull/4542#issuecomment-1181301908

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  3s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 16s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 45s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 215m 29s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4542/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4542 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux a3ed3d06b83f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 13fd2adeb338be56004c56a9e23c9406c0abe053 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4542/3/testReport/ |
   | Max. process+thread count | 2773 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4542/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Updated] (HADOOP-18334) Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2

2022-07-11 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-18334:
--
Status: Patch Available  (was: Open)

> Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2
> ---------------------------------------------------------------------
>
> Key: HADOOP-18334
> URL: https://issues.apache.org/jira/browse/HADOOP-18334
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.3
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> gpg v2.1 and above does not export GPG_AGENT_INFO, so the create-release 
> script needs to export the info itself to make {{--sign}} work. This was 
> addressed as part of HADOOP-16797 in branch-3.3 and trunk. Since we cannot 
> backport aarch64 support to branch-3.2, I filed this issue for branch-3.2 only.






[jira] [Work logged] (HADOOP-18334) Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18334?focusedWorklogId=789842&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789842
 ]

ASF GitHub Bot logged work on HADOOP-18334:
---

Author: ASF GitHub Bot
Created on: 12/Jul/22 04:24
Start Date: 12/Jul/22 04:24
Worklog Time Spent: 10m 
  Work Description: iwasakims commented on PR #4554:
URL: https://github.com/apache/hadoop/pull/4554#issuecomment-1181298439

   Looks like it's working in branch-3.2.
   ```
   $ dev-support/bin/create-release --asfrelease --docker --dockercache
   ...(snip)
   
   
 Hadoop Release Creator
   
   
   
   
   Version to create  : 3.2.5-SNAPSHOT
   Release Candidate Label:
   Source Version : 3.2.5-SNAPSHOT
   
   
   starting gpg agent
   Warming the gpg-agent cache prior to calling maven
   $ cd /build/source
   ```




Issue Time Tracking
---

Worklog Id: (was: 789842)
Time Spent: 20m  (was: 10m)

> Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2
> ---------------------------------------------------------------------
>
> Key: HADOOP-18334
> URL: https://issues.apache.org/jira/browse/HADOOP-18334
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.3
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> gpg v2.1 and above does not export GPG_AGENT_INFO, so the create-release 
> script needs to export the info itself to make {{--sign}} work. This was 
> addressed as part of HADOOP-16797 in branch-3.3 and trunk. Since we cannot 
> backport aarch64 support to branch-3.2, I filed this issue for branch-3.2 only.






[GitHub] [hadoop] iwasakims commented on pull request #4554: HADOOP-18334. Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2.

2022-07-11 Thread GitBox


iwasakims commented on PR #4554:
URL: https://github.com/apache/hadoop/pull/4554#issuecomment-1181298439

   Looks like it's working in branch-3.2.
   ```
   $ dev-support/bin/create-release --asfrelease --docker --dockercache
   ...(snip)
   
   
 Hadoop Release Creator
   
   
   
   
   Version to create  : 3.2.5-SNAPSHOT
   Release Candidate Label:
   Source Version : 3.2.5-SNAPSHOT
   
   
   starting gpg agent
   Warming the gpg-agent cache prior to calling maven
   $ cd /build/source
   ```





[jira] [Updated] (HADOOP-18334) Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18334:

Labels: pull-request-available  (was: )

> Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2
> ---------------------------------------------------------------------
>
> Key: HADOOP-18334
> URL: https://issues.apache.org/jira/browse/HADOOP-18334
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.3
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> gpg v2.1 and above does not export GPG_AGENT_INFO, so the create-release 
> script needs to export the info itself to make {{--sign}} work. This was 
> addressed as part of HADOOP-16797 in branch-3.3 and trunk. Since we cannot 
> backport aarch64 support to branch-3.2, I filed this issue for branch-3.2 only.






[jira] [Work logged] (HADOOP-18334) Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18334?focusedWorklogId=789841&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789841
 ]

ASF GitHub Bot logged work on HADOOP-18334:
---

Author: ASF GitHub Bot
Created on: 12/Jul/22 04:21
Start Date: 12/Jul/22 04:21
Worklog Time Spent: 10m 
  Work Description: iwasakims opened a new pull request, #4554:
URL: https://github.com/apache/hadoop/pull/4554

   https://issues.apache.org/jira/browse/HADOOP-18334
   
   gpg v2.1 and above does not export GPG_AGENT_INFO, so the create-release 
script needs to export the info itself to make `--sign` work. This was addressed 
as part of [HADOOP-16797](https://issues.apache.org/jira/browse/HADOOP-16797) in 
branch-3.3 and trunk. Since we cannot backport aarch64 support to branch-3.2, 
I filed this issue for branch-3.2 only.




Issue Time Tracking
---

Worklog Id: (was: 789841)
Remaining Estimate: 0h
Time Spent: 10m

> Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2
> ---------------------------------------------------------------------
>
> Key: HADOOP-18334
> URL: https://issues.apache.org/jira/browse/HADOOP-18334
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.3
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> gpg v2.1 and above does not export GPG_AGENT_INFO, so the create-release 
> script needs to export the info itself to make {{--sign}} work. This was 
> addressed as part of HADOOP-16797 in branch-3.3 and trunk. Since we cannot 
> backport aarch64 support to branch-3.2, I filed this issue for branch-3.2 only.






[GitHub] [hadoop] iwasakims opened a new pull request, #4554: HADOOP-18334. Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2.

2022-07-11 Thread GitBox


iwasakims opened a new pull request, #4554:
URL: https://github.com/apache/hadoop/pull/4554

   https://issues.apache.org/jira/browse/HADOOP-18334
   
   gpg v2.1 and above does not export GPG_AGENT_INFO, so the create-release 
script needs to export the info itself to make `--sign` work. This was addressed 
as part of [HADOOP-16797](https://issues.apache.org/jira/browse/HADOOP-16797) in 
branch-3.3 and trunk. Since we cannot backport aarch64 support to branch-3.2, 
I filed this issue for branch-3.2 only.





[jira] [Commented] (HADOOP-18334) Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2

2022-07-11 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17565259#comment-17565259
 ] 

Masatake Iwasaki commented on HADOOP-18334:
---

{noformat}


  Hadoop Release Creator




Version to create  : 3.2.4
Release Candidate Label:
Source Version : 3.2.4


starting gpg agent
ERROR: Unable to launch or acquire gpg-agent. Disable signing.
{noformat}

> Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2
> ---------------------------------------------------------------------
>
> Key: HADOOP-18334
> URL: https://issues.apache.org/jira/browse/HADOOP-18334
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.3
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> gpg v2.1 and above does not export GPG_AGENT_INFO, so the create-release 
> script needs to export the info itself to make {{--sign}} work. This was 
> addressed as part of HADOOP-16797 in branch-3.3 and trunk. Since we cannot 
> backport aarch64 support to branch-3.2, I filed this issue for branch-3.2 only.






[jira] [Created] (HADOOP-18334) Fix create-release to address removal of GPG_AGENT_INFO in branch-3.2

2022-07-11 Thread Masatake Iwasaki (Jira)
Masatake Iwasaki created HADOOP-18334:
-

 Summary: Fix create-release to address removal of GPG_AGENT_INFO 
in branch-3.2
 Key: HADOOP-18334
 URL: https://issues.apache.org/jira/browse/HADOOP-18334
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.2.3
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki


gpg v2.1 and above does not export GPG_AGENT_INFO, so the create-release script 
needs to export the info itself to make {{--sign}} work. This was addressed as 
part of HADOOP-16797 in branch-3.3 and trunk. Since we cannot backport aarch64 
support to branch-3.2, I filed this issue for branch-3.2 only.






[jira] [Work logged] (HADOOP-18324) Interrupting RPC Client calls can lead to thread exhaustion

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18324?focusedWorklogId=789822=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789822
 ]

ASF GitHub Bot logged work on HADOOP-18324:
---

Author: ASF GitHub Bot
Created on: 12/Jul/22 02:53
Start Date: 12/Jul/22 02:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4527:
URL: https://github.com/apache/hadoop/pull/4527#issuecomment-1181255165

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 32s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  23m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 52s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 10s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | -1 :x: |  javac  |  22m 10s | 
[/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4527/2/artifact/out/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 1 new + 2879 unchanged - 0 
fixed = 2880 total (was 2879)  |
   | +1 :green_heart: |  compile  |  20m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  javac  |  20m 27s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4527/2/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 1 new + 2676 
unchanged - 0 fixed = 2677 total (was 2676)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 43s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4527/2/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 1 new + 147 
unchanged - 3 fixed = 148 total (was 150)  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 49s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 219m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4527/2/artifact/out/Dockerfile
 |


[GitHub] [hadoop] iwasakims commented on a diff in pull request #4538: HDFS-16654. Link OpenSSL lib for CMake deps check

2022-07-11 Thread GitBox


iwasakims commented on code in PR #4538:
URL: https://github.com/apache/hadoop/pull/4538#discussion_r918499535


##
hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt:
##
@@ -127,7 +127,8 @@ if(OPENSSL_LIBRARY AND OPENSSL_INCLUDE_DIR)
 include(CheckCSourceCompiles)
 set(OLD_CMAKE_REQUIRED_INCLUDES ${CMAKE_REQUIRED_INCLUDES})
 set(CMAKE_REQUIRED_INCLUDES ${OPENSSL_INCLUDE_DIR})
-check_c_source_compiles("#include 
\"${OPENSSL_INCLUDE_DIR}/openssl/evp.h\"\nint main(int argc, char **argv) { 
return !EVP_aes_256_ctr; }" HAS_NEW_ENOUGH_OPENSSL)

Review Comment:
   Thanks for the explanation. Let me try the patch on a Linux env with a 
custom OpenSSL location.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





[jira] [Work logged] (HADOOP-13144) Enhancing IPC client throughput via multiple connections per user

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13144?focusedWorklogId=789805&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789805
 ]

ASF GitHub Bot logged work on HADOOP-13144:
---

Author: ASF GitHub Bot
Created on: 12/Jul/22 01:00
Start Date: 12/Jul/22 01:00
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on PR #4542:
URL: https://github.com/apache/hadoop/pull/4542#issuecomment-1181197972

   Thanks @goiri for your review. I have updated this patch, please help me 
review it. Thanks
   
   And @Hexiaoqiao @ayushtkn Can you help me to review this patch?




Issue Time Tracking
---

Worklog Id: (was: 789805)
Time Spent: 1h  (was: 50m)

> Enhancing IPC client throughput via multiple connections per user
> -
>
> Key: HADOOP-13144
> URL: https://issues.apache.org/jira/browse/HADOOP-13144
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Jason Kace
>Assignee: Íñigo Goiri
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HADOOP-13144-performance.patch, HADOOP-13144.000.patch, 
> HADOOP-13144.001.patch, HADOOP-13144.002.patch, HADOOP-13144.003.patch, 
> HADOOP-13144_overload_enhancement.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The generic IPC client ({{org.apache.hadoop.ipc.Client}}) utilizes a single 
> connection thread for each {{ConnectionId}}.  The {{ConnectionId}} is unique 
> to the connection's remote address, ticket and protocol.  Each ConnectionId 
> is 1:1 mapped to a connection thread by the client via a map cache.
> The result is to serialize all IPC read/write activity through a single 
> thread for each user/ticket + address.  If a single user makes repeated 
> calls (1k-100k/sec) to the same destination, the IPC client becomes a 
> bottleneck.
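The 1:1 mapping described above, and the multi-connection idea this issue proposes, can be sketched as a small pool keyed by the logical ConnectionId plus a rotating index. This is an illustrative model only, not the actual `org.apache.hadoop.ipc.Client` code — the class and member names below are invented for the example:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative model only (not org.apache.hadoop.ipc.Client): round-robin
// several connections per logical ConnectionId instead of a strict 1:1 map.
public class MultiConnectionSketch {
    private final ConcurrentHashMap<String, Object> connections = new ConcurrentHashMap<>();
    private final AtomicInteger counter = new AtomicInteger();
    private final int connectionsPerUser;

    public MultiConnectionSketch(int connectionsPerUser) {
        this.connectionsPerUser = connectionsPerUser;
    }

    // The cache key is the ConnectionId (remote address + ticket + protocol)
    // extended with a small rotating index, so one hot caller fans out over
    // several sockets instead of serializing on a single connection thread.
    public Object getConnection(String connectionId) {
        int index = Math.floorMod(counter.getAndIncrement(), connectionsPerUser);
        return connections.computeIfAbsent(connectionId + "#" + index,
                key -> new Object()); // stand-in for a real Connection object
    }

    public int poolSize() {
        return connections.size();
    }

    public static void main(String[] args) {
        MultiConnectionSketch pool = new MultiConnectionSketch(4);
        for (int i = 0; i < 100; i++) {
            pool.getConnection("testuser@nn-host:8020/ClientProtocol");
        }
        // 100 calls to one destination now share at most 4 connections.
        System.out.println(pool.poolSize()); // prints 4
    }
}
```

With `connectionsPerUser = 1` this degenerates to the current behavior; larger values trade extra sockets and threads for parallelism on a hot user/address pair.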









[jira] [Work logged] (HADOOP-13144) Enhancing IPC client throughput via multiple connections per user

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13144?focusedWorklogId=789802&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789802
 ]

ASF GitHub Bot logged work on HADOOP-13144:
---

Author: ASF GitHub Bot
Created on: 12/Jul/22 00:52
Start Date: 12/Jul/22 00:52
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on code in PR #4542:
URL: https://github.com/apache/hadoop/pull/4542#discussion_r918455307


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRpcBase.java:
##
@@ -154,11 +155,53 @@ protected static TestRpcService 
getClient(InetSocketAddress serverAddr,
 }
   }
 
-  protected static void stop(Server server, TestRpcService proxy) {
-if (proxy != null) {
-  try {
-RPC.stopProxy(proxy);
-  } catch (Exception ignored) {}
+  /**
+   * Try to obtain a proxy of TestRpcService with an index.
+   * @param serverAddr input server address
+   * @param clientConf input client configuration
+   * @param retryPolicy input retryPolicy
+   * @param index input index
+   * @return one proxy of TestRpcService
+   */
+  protected static TestRpcService getMultipleClientWithIndex(InetSocketAddress 
serverAddr,
+  Configuration clientConf, RetryPolicy retryPolicy, int index)
+  throws ServiceException, IOException {
+MockConnectionId connectionId = new MockConnectionId(serverAddr,
+TestRpcService.class, UserGroupInformation.getCurrentUser(),
+RPC.getRpcTimeout(clientConf), retryPolicy, clientConf, index);
+return getClient(connectionId, clientConf);
+  }
+
+  /**
+   * Obtain a TestRpcService Proxy by a connectionId.
+   * @param connId input connectionId
+   * @param clientConf  input configuration
+   * @return a TestRpcService Proxy
+   * @throws ServiceException a ServiceException
+   */
+  protected static TestRpcService getClient(ConnectionId connId,
+  Configuration clientConf) throws ServiceException {
+try {
+  return RPC.getProtocolProxy(
+  TestRpcService.class,
+  0,
+  connId,
+  clientConf,
+  NetUtils.getDefaultSocketFactory(clientConf)).getProxy();
+} catch (IOException e) {
+  throw new ServiceException(e);
+}
+  }
+
+  protected static void stop(Server server, TestRpcService... proxies) {
+if (proxies != null) {
+  for (TestRpcService proxy : proxies) {

Review Comment:
   It will throw NPE if `proxies` is null. 
   ```
   java.lang.NullPointerException
at org.apache.hadoop.ipc.TestRpcBase.stop(TestRpcBase.java:199)
at org.apache.hadoop.ipc.TestRPC.testServerAddress(TestRPC.java:682)
   ```
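The pitfall flagged in this review is general to Java varargs: the parameter can arrive as a null array (via an explicit cast) or carry null elements, and each case needs its own guard. A standalone sketch of a null-safe variant — the names are invented, and `stopProxy` merely stands in for `RPC.stopProxy`:

```java
import java.util.Objects;

// Hypothetical sketch of the varargs pitfall raised in the review: the
// varargs array itself can be null, and so can individual elements.
public class VarargsStopSketch {
    static int stopped;

    static void stopProxy(Object proxy) {
        // Stand-in for RPC.stopProxy: rejects null like many real APIs do.
        Objects.requireNonNull(proxy);
        stopped++;
    }

    // Null-safe variant: guard both the array and each element.
    static void stop(Object... proxies) {
        if (proxies == null) {
            return; // stop(server, (Object[]) null) passes a null array
        }
        for (Object proxy : proxies) {
            if (proxy != null) { // stop(server, proxy) with proxy == null
                try {
                    stopProxy(proxy);
                } catch (Exception ignored) {
                }
            }
        }
    }

    public static void main(String[] args) {
        stop((Object[]) null);    // whole array null: no NPE
        stop(new Object(), null); // null element is skipped
        System.out.println(stopped); // prints 1
    }
}
```

Note that `stop(server, null)` without a cast passes an array containing one null element, not a null array, which is why both guards matter.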





Issue Time Tracking
---

Worklog Id: (was: 789802)
Time Spent: 50m  (was: 40m)

> Enhancing IPC client throughput via multiple connections per user
> -
>
> Key: HADOOP-13144
> URL: https://issues.apache.org/jira/browse/HADOOP-13144
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Jason Kace
>Assignee: Íñigo Goiri
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HADOOP-13144-performance.patch, HADOOP-13144.000.patch, 
> HADOOP-13144.001.patch, HADOOP-13144.002.patch, HADOOP-13144.003.patch, 
> HADOOP-13144_overload_enhancement.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The generic IPC client ({{org.apache.hadoop.ipc.Client}}) utilizes a single 
> connection thread for each {{ConnectionId}}.  The {{ConnectionId}} is unique 
> to the connection's remote address, ticket and protocol.  Each ConnectionId 
> is 1:1 mapped to a connection thread by the client via a map cache.
> The result is to serialize all IPC read/write activity through a single 
> thread for each user/ticket + address.  If a single user makes repeated 
> calls (1k-100k/sec) to the same destination, the IPC client becomes a 
> bottleneck.









[jira] [Work logged] (HADOOP-18330) S3AFileSystem removes Path when calling createS3Client

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18330?focusedWorklogId=789796&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789796
 ]

ASF GitHub Bot logged work on HADOOP-18330:
---

Author: ASF GitHub Bot
Created on: 12/Jul/22 00:39
Start Date: 12/Jul/22 00:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4551:
URL: https://github.com/apache/hadoop/pull/4551#issuecomment-1181186399

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  6s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 40s | 
[/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4551/1/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | -1 :x: |  compile  |   0m 31s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4551/1/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javac  |   0m 31s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4551/1/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |   0m 27s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4551/1/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  javac  |   0m 27s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4551/1/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 25s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4551/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 2 new + 5 unchanged - 0 fixed 
= 7 total (was 5)  |
   | -1 :x: |  mvnsite  |   0m 27s | 
[/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4551/1/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 


[jira] [Work logged] (HADOOP-18333) hadoop-client-runtime impact by CVE-2022-2047 due to shaded jetty

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18333?focusedWorklogId=789783&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789783
 ]

ASF GitHub Bot logged work on HADOOP-18333:
---

Author: ASF GitHub Bot
Created on: 12/Jul/22 00:02
Start Date: 12/Jul/22 00:02
Worklog Time Spent: 10m 
  Work Description: ashutoshcipher opened a new pull request, #4553:
URL: https://github.com/apache/hadoop/pull/4553

   ### Description of PR
   
   Upgrade jetty version to 9.4.48.v20220622 to mitigate CVE-2022-2047
   
   JIRA: HADOOP-18333
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
---

Worklog Id: (was: 789783)
Remaining Estimate: 0h
Time Spent: 10m

> hadoop-client-runtime impact by CVE-2022-2047 due to shaded jetty
> -
>
> Key: HADOOP-18333
> URL: https://issues.apache.org/jira/browse/HADOOP-18333
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.3
>Reporter: phoebe chen
>Assignee: Ashutosh Gupta
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CVE-2022-2047 was recently reported for Eclipse Jetty and impacts versions 
> 9.4.0 through 9.4.46.
> The latest hadoop-client-runtime 3.3.3 shades Jetty 9.4.43.v20210629, which 
> is impacted.
> On trunk, Jetty is at version 9.4.44.v20210927, which is still impacted.
> The Jetty version needs to be upgraded.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18333) hadoop-client-runtime impact by CVE-2022-2047 due to shaded jetty

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18333:

Labels: pull-request-available  (was: )

> hadoop-client-runtime impact by CVE-2022-2047 due to shaded jetty
> -
>
> Key: HADOOP-18333
> URL: https://issues.apache.org/jira/browse/HADOOP-18333
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.3
>Reporter: phoebe chen
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CVE-2022-2047 was recently reported for Eclipse Jetty and impacts versions 
> 9.4.0 through 9.4.46.
> The latest hadoop-client-runtime 3.3.3 shades Jetty 9.4.43.v20210629, which 
> is impacted.
> On trunk, Jetty is at version 9.4.44.v20210927, which is still impacted.
> The Jetty version needs to be upgraded.






[GitHub] [hadoop] ashutoshcipher opened a new pull request, #4553: HADOOP-18333.Upgrade jetty version to 9.4.48.v20220622

2022-07-11 Thread GitBox


ashutoshcipher opened a new pull request, #4553:
URL: https://github.com/apache/hadoop/pull/4553

   ### Description of PR
   
   Upgrade jetty version to 9.4.48.v20220622 to mitigate CVE-2022-2047
   
   JIRA: HADOOP-18333
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4540: YARN-11160. Support getResourceProfiles, getResourceProfile API's for Federation

2022-07-11 Thread GitBox


slfan1989 commented on code in PR #4540:
URL: https://github.com/apache/hadoop/pull/4540#discussion_r918436961


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestRouterYarnClientUtils.java:
##
@@ -27,14 +27,7 @@
 
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableSet;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetClusterMetricsResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetNodesToLabelsResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodeLabelsResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetLabelsToNodesResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetQueueUserAclsInfoResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.ReservationListResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetAllResourceTypeInfoResponse;
+import org.apache.hadoop.yarn.api.protocolrecords.*;

Review Comment:
   I will fix it.






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4540: YARN-11160. Support getResourceProfiles, getResourceProfile API's for Federation

2022-07-11 Thread GitBox


slfan1989 commented on code in PR #4540:
URL: https://github.com/apache/hadoop/pull/4540#discussion_r918436889


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java:
##
@@ -1169,4 +1172,57 @@ public void testGetQueueInfo() throws Exception {
 Assert.assertEquals(queueInfo.getChildQueues().size(), 12, 0);
 Assert.assertEquals(queueInfo.getAccessibleNodeLabels().size(), 1);
   }
+
+  @Test
+  public void testGetResourceProfiles() throws Exception {
+    LOG.info("Test FederationClientInterceptor : Get Resource Profiles request.");
+
+    // null request
+    LambdaTestUtils.intercept(YarnException.class, "Missing getResourceProfiles request.",
+        () -> interceptor.getResourceProfiles(null));
+
+    // normal request
+    GetAllResourceProfilesRequest request = GetAllResourceProfilesRequest.newInstance();
+    GetAllResourceProfilesResponse response = interceptor.getResourceProfiles(request);
+
+    Assert.assertNotNull(response);
+    Assert.assertEquals(response.getResourceProfiles().get("maximum").getMemorySize(), 32768);
+    Assert.assertEquals(response.getResourceProfiles().get("maximum").getVirtualCores(), 16);
+    Assert.assertEquals(response.getResourceProfiles().get("default").getMemorySize(), 8192);
+    Assert.assertEquals(response.getResourceProfiles().get("default").getVirtualCores(), 8);
+    Assert.assertEquals(response.getResourceProfiles().get("minimum").getMemorySize(), 4096);
+    Assert.assertEquals(response.getResourceProfiles().get("minimum").getVirtualCores(), 4);
+  }
+
+  @Test
+  public void testGetResourceProfile() throws Exception {
+    LOG.info("Test FederationClientInterceptor : Get Resource Profile request.");
+
+    // null request
+    LambdaTestUtils.intercept(YarnException.class,
+        "Missing getResourceProfile request or profileName.",
+        () -> interceptor.getResourceProfile(null));
+
+    // normal request
+    GetResourceProfileRequest request = GetResourceProfileRequest.newInstance("maximum");
+    GetResourceProfileResponse response = interceptor.getResourceProfile(request);
+
+    Assert.assertNotNull(response);
+    Assert.assertEquals(response.getResource().getMemorySize(), 32768);
+    Assert.assertEquals(response.getResource().getVirtualCores(), 16);
+
+    request = GetResourceProfileRequest.newInstance("default");

Review Comment:
   I will fix it.






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4540: YARN-11160. Support getResourceProfiles, getResourceProfile API's for Federation

2022-07-11 Thread GitBox


slfan1989 commented on code in PR #4540:
URL: https://github.com/apache/hadoop/pull/4540#discussion_r918436652


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java:
##
@@ -1169,4 +1172,57 @@ public void testGetQueueInfo() throws Exception {
 Assert.assertEquals(queueInfo.getChildQueues().size(), 12, 0);
 Assert.assertEquals(queueInfo.getAccessibleNodeLabels().size(), 1);
   }
+
+  @Test
+  public void testGetResourceProfiles() throws Exception {
+    LOG.info("Test FederationClientInterceptor : Get Resource Profiles request.");
+
+    // null request
+    LambdaTestUtils.intercept(YarnException.class, "Missing getResourceProfiles request.",
+        () -> interceptor.getResourceProfiles(null));
+
+    // normal request
+    GetAllResourceProfilesRequest request = GetAllResourceProfilesRequest.newInstance();
+    GetAllResourceProfilesResponse response = interceptor.getResourceProfiles(request);
+
+    Assert.assertNotNull(response);
+    Assert.assertEquals(response.getResourceProfiles().get("maximum").getMemorySize(), 32768);

Review Comment:
   Thanks for your suggestion, I will fix it.






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4543: YARN-8900. [Router] Federation: routing getContainers REST invocations transparently to multiple RMs

2022-07-11 Thread GitBox


slfan1989 commented on code in PR #4543:
URL: https://github.com/apache/hadoop/pull/4543#discussion_r918436233


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java:
##
@@ -1336,7 +1336,51 @@ public AppAttemptInfo getAppAttempt(HttpServletRequest 
req,
   @Override
   public ContainersInfo getContainers(HttpServletRequest req,
   HttpServletResponse res, String appId, String appAttemptId) {
-throw new NotImplementedException("Code is not implemented");
+    ContainersInfo containersInfo = new ContainersInfo();
+
+    Map<SubClusterId, SubClusterInfo> subClustersActive = null;
+    try {
+      subClustersActive = federationFacade.getSubClusters(true);
+    } catch (YarnException e) {
+      LOG.error(e.getLocalizedMessage());

Review Comment:
   Thanks for your help reviewing the code, I will fix it asap.
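For context on the pattern flagged in this hunk: logging only `e.getLocalizedMessage()` discards the stack trace. A minimal self-contained sketch of the difference, using `java.util.logging` for illustration (the SLF4J equivalent used in Hadoop would be `LOG.error("message", e)`):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogDemo {
    private static final Logger LOG = Logger.getLogger(LogDemo.class.getName());

    public static void main(String[] args) {
        try {
            throw new IllegalStateException("subcluster lookup failed");
        } catch (IllegalStateException e) {
            // Logging only the message drops the stack trace entirely:
            LOG.severe(e.getLocalizedMessage());
            // Passing the Throwable as the last argument preserves it:
            LOG.log(Level.SEVERE, "Failed to get active subclusters", e);
        }
    }
}
```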






[jira] [Work logged] (HADOOP-18302) Remove WhiteBox in hadoop-common module.

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18302?focusedWorklogId=789777&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789777
 ]

ASF GitHub Bot logged work on HADOOP-18302:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 23:57
Start Date: 11/Jul/22 23:57
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4457:
URL: https://github.com/apache/hadoop/pull/4457#issuecomment-1181130866

   @aajisaka Thank you for helping to review the code, I will fix it as soon as 
possible.




Issue Time Tracking
---

Worklog Id: (was: 789777)
Time Spent: 2.5h  (was: 2h 20m)

> Remove WhiteBox in hadoop-common module.
> 
>
> Key: HADOOP-18302
> URL: https://issues.apache.org/jira/browse/HADOOP-18302
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0, 3.3.9
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> WhiteBox is deprecated, try to remove this method in hadoop-common.






[GitHub] [hadoop] slfan1989 commented on pull request #4457: HADOOP-18302. Remove WhiteBox in hadoop-common module.

2022-07-11 Thread GitBox


slfan1989 commented on PR #4457:
URL: https://github.com/apache/hadoop/pull/4457#issuecomment-1181130866

   @aajisaka Thank you for helping to review the code, I will fix it as soon as 
possible.





[jira] [Work logged] (HADOOP-18324) Interrupting RPC Client calls can lead to thread exhaustion

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18324?focusedWorklogId=789775&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789775
 ]

ASF GitHub Bot logged work on HADOOP-18324:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 23:47
Start Date: 11/Jul/22 23:47
Worklog Time Spent: 10m 
  Work Description: omalley commented on PR #4527:
URL: https://github.com/apache/hadoop/pull/4527#issuecomment-1181110693

   @steveloughran Can you recommend someone to review this RPC patch?




Issue Time Tracking
---

Worklog Id: (was: 789775)
Time Spent: 1h 20m  (was: 1h 10m)

> Interrupting RPC Client calls can lead to thread exhaustion
> ---
>
> Key: HADOOP-18324
> URL: https://issues.apache.org/jira/browse/HADOOP-18324
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 3.4.0, 2.10.2, 3.3.3
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently the IPC client creates an unbounded number of threads to write the 
> RPC request to the socket. The NameNode uses timeouts on its RPC calls to the 
> JournalNode, and a stuck JN will cause the NN to create an ever-growing set 
> of threads.
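
The fix direction can be sketched as follows. This is a hypothetical illustration, not the actual patch in the PR: bounding the writer threads with a pool caps thread creation even when a remote peer is stuck, because further sends queue up instead of spawning new threads.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedRpcWriters {
    public static void main(String[] args) throws Exception {
        // Hypothetical sketch: instead of one new thread per in-flight call
        // (which grows without bound when the remote end is stuck), cap the
        // writer threads and queue the pending sends.
        ThreadPoolExecutor writers = new ThreadPoolExecutor(
            1, 4,                           // at most 4 concurrent writer threads
            60L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<>(128)  // bounded backlog of pending sends
        );
        for (int i = 0; i < 10; i++) {
            writers.submit(() -> { /* serialize and write one RPC request */ });
        }
        writers.shutdown();
        writers.awaitTermination(5, TimeUnit.SECONDS);
        // The pool never exceeds the configured maximum, regardless of load.
        System.out.println("max writer threads: " + writers.getMaximumPoolSize());
    }
}
```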






[GitHub] [hadoop] omalley commented on pull request #4527: HADOOP-18324. Interrupting RPC Client calls can lead to thread exhaustion.

2022-07-11 Thread GitBox


omalley commented on PR #4527:
URL: https://github.com/apache/hadoop/pull/4527#issuecomment-1181110693

   @steveloughran Can you recommend someone to review this RPC patch?





[jira] [Assigned] (HADOOP-18333) hadoop-client-runtime impact by CVE-2022-2047 due to shaded jetty

2022-07-11 Thread Ashutosh Gupta (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Gupta reassigned HADOOP-18333:
---

Assignee: Ashutosh Gupta

> hadoop-client-runtime impact by CVE-2022-2047 due to shaded jetty
> -
>
> Key: HADOOP-18333
> URL: https://issues.apache.org/jira/browse/HADOOP-18333
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.3
>Reporter: phoebe chen
>Assignee: Ashutosh Gupta
>Priority: Major
>
> CVE-2022-2047 was recently reported for Eclipse Jetty and impacts versions 
> 9.4.0 through 9.4.46.
> The latest hadoop-client-runtime 3.3.3 shades Jetty 9.4.43.v20210629, which 
> is impacted.
> On trunk, Jetty is at version 9.4.44.v20210927, which is still impacted.
> The Jetty version needs to be upgraded.






[jira] [Work logged] (HADOOP-18332) remove rs-api dependency (needs jackson downgrade to 2.12.7)

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18332?focusedWorklogId=789772&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789772
 ]

ASF GitHub Bot logged work on HADOOP-18332:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 23:31
Start Date: 11/Jul/22 23:31
Worklog Time Spent: 10m 
  Work Description: pjfanning commented on PR #4547:
URL: https://github.com/apache/hadoop/pull/4547#issuecomment-1181079394

   @virajjasani I created https://github.com/apache/hadoop/pull/4552 for the 
3.3 branch




Issue Time Tracking
---

Worklog Id: (was: 789772)
Time Spent: 1h 10m  (was: 1h)

> remove rs-api dependency (needs jackson downgrade to 2.12.7)
> 
>
> Key: HADOOP-18332
> URL: https://issues.apache.org/jira/browse/HADOOP-18332
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This jsr311-api jar seems to conflict with newly added rs-api jar dependency 
> - they have many of the same classes (but conflicting copies) - jersey-core 
> 1.19 needs jsr311-api to work properly (and fails if rs-api used instead)
> * https://mvnrepository.com/artifact/javax.ws.rs/jsr311-api
> * https://mvnrepository.com/artifact/javax.ws.rs/javax.ws.rs-api
> Seems we will need to downgrade jackson to 2.12.7 because of jax-rs 
> compatibility issues in jackson 2.13 (see 
> https://github.com/FasterXML/jackson-jaxrs-providers/issues/134)






[GitHub] [hadoop] pjfanning commented on pull request #4547: HADOOP-18332: remove rs-api dependency as it conflicts with jsr311-api

2022-07-11 Thread GitBox


pjfanning commented on PR #4547:
URL: https://github.com/apache/hadoop/pull/4547#issuecomment-1181079394

   @virajjasani I created https://github.com/apache/hadoop/pull/4552 for the 
3.3 branch





[GitHub] [hadoop] pjfanning opened a new pull request, #4552: HADOOP-18332 remove rs-api dependency (3.3 branch)

2022-07-11 Thread GitBox


pjfanning opened a new pull request, #4552:
URL: https://github.com/apache/hadoop/pull/4552

   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[jira] [Created] (HADOOP-18333) hadoop-client-runtime impact by CVE-2022-2047 due to shaded jetty

2022-07-11 Thread phoebe chen (Jira)
phoebe chen created HADOOP-18333:


 Summary: hadoop-client-runtime impact by CVE-2022-2047 due to 
shaded jetty
 Key: HADOOP-18333
 URL: https://issues.apache.org/jira/browse/HADOOP-18333
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.3.3
Reporter: phoebe chen


CVE-2022-2047 was recently reported for Eclipse Jetty and impacts versions 
9.4.0 through 9.4.46.

The latest hadoop-client-runtime 3.3.3 shades Jetty 9.4.43.v20210629, which is 
impacted.

On trunk, Jetty is at version 9.4.44.v20210927, which is still impacted.

The Jetty version needs to be upgraded.
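
The version arithmetic behind the report can be illustrated as follows. The comparison helper below is hypothetical, not Hadoop code: 9.4.43 falls inside the affected range 9.4.0 through 9.4.46, while the upgrade target 9.4.48 does not.

```java
public class JettyCveRange {
    // Compares dotted version strings numerically, e.g. "9.4.43" vs "9.4.46"
    // (illustrative helper; ignores suffixes like ".v20210629").
    static int compare(String a, String b) {
        String[] x = a.split("\\."), y = b.split("\\.");
        for (int i = 0; i < Math.max(x.length, y.length); i++) {
            int xi = i < x.length ? Integer.parseInt(x[i]) : 0;
            int yi = i < y.length ? Integer.parseInt(y[i]) : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }

    public static void main(String[] args) {
        String shaded = "9.4.43";  // shaded into hadoop-client-runtime 3.3.3
        String fixed = "9.4.48";   // upgrade target in the PR
        // Affected range per the report: 9.4.0 through 9.4.46.
        System.out.println(compare(shaded, "9.4.0") >= 0 && compare(shaded, "9.4.46") <= 0);  // true
        System.out.println(compare(fixed, "9.4.46") <= 0);  // false
    }
}
```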






[jira] [Work logged] (HADOOP-18332) remove rs-api dependency (needs jackson downgrade to 2.12.7)

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18332?focusedWorklogId=789771&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789771
 ]

ASF GitHub Bot logged work on HADOOP-18332:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 23:13
Start Date: 11/Jul/22 23:13
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on PR #4547:
URL: https://github.com/apache/hadoop/pull/4547#issuecomment-1181041026

   @pjfanning could you please create PR against branch-3.3 in parallel? I 
think this would be more beneficial to branch-3.3 than trunk. On HADOOP-18033, 
it's still being discussed if we should only consider branch-3.3 for this PR 
because HADOOP-15984 anyways requires javax.ws.rs-api (and hence, Jackson 2.13).




Issue Time Tracking
---

Worklog Id: (was: 789771)
Time Spent: 1h  (was: 50m)

> remove rs-api dependency (needs jackson downgrade to 2.12.7)
> 
>
> Key: HADOOP-18332
> URL: https://issues.apache.org/jira/browse/HADOOP-18332
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This jsr311-api jar seems to conflict with newly added rs-api jar dependency 
> - they have many of the same classes (but conflicting copies) - jersey-core 
> 1.19 needs jsr311-api to work properly (and fails if rs-api used instead)
> * https://mvnrepository.com/artifact/javax.ws.rs/jsr311-api
> * https://mvnrepository.com/artifact/javax.ws.rs/javax.ws.rs-api
> Seems we will need to downgrade jackson to 2.12.7 because of jax-rs 
> compatibility issues in jackson 2.13 (see 
> https://github.com/FasterXML/jackson-jaxrs-providers/issues/134)






[GitHub] [hadoop] virajjasani commented on pull request #4547: HADOOP-18332: remove rs-api dependency as it conflicts with jsr311-api

2022-07-11 Thread GitBox


virajjasani commented on PR #4547:
URL: https://github.com/apache/hadoop/pull/4547#issuecomment-1181041026

   @pjfanning could you please create PR against branch-3.3 in parallel? I 
think this would be more beneficial to branch-3.3 than trunk. On HADOOP-18033, 
it's still being discussed if we should only consider branch-3.3 for this PR 
because HADOOP-15984 anyways requires javax.ws.rs-api (and hence, Jackson 2.13).





[jira] [Work logged] (HADOOP-18033) Upgrade fasterxml Jackson to 2.13.0

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18033?focusedWorklogId=789770&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789770
 ]

ASF GitHub Bot logged work on HADOOP-18033:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 23:10
Start Date: 11/Jul/22 23:10
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on PR #4460:
URL: https://github.com/apache/hadoop/pull/4460#issuecomment-1181035333

   > @virajjasani I think 
[FasterXML/jackson-jaxrs-providers#134](https://github.com/FasterXML/jackson-jaxrs-providers/issues/134)
 which appeared in jackson-jaxrs for v2.13.0 to be the reason rs-api was added 
to hadoop - so in #4547, I am looking to downgrade to jackson 2.12.7
   
   Exactly, without downgrading Jackson, it's not possible to remove 
javax.ws.rs-api. 




Issue Time Tracking
---

Worklog Id: (was: 789770)
Time Spent: 6.5h  (was: 6h 20m)

> Upgrade fasterxml Jackson to 2.13.0
> ---
>
> Key: HADOOP-18033
> URL: https://issues.apache.org/jira/browse/HADOOP-18033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Spark 3.2.0 depends on Jackson 2.12.3. Let's upgrade to 2.12.5 (2.12.x latest 
> as of now) or upper.






[GitHub] [hadoop] virajjasani commented on pull request #4460: HADOOP-18033. [WIP] Remove jsr311-api dependency

2022-07-11 Thread GitBox


virajjasani commented on PR #4460:
URL: https://github.com/apache/hadoop/pull/4460#issuecomment-1181035333

   > @virajjasani I think 
[FasterXML/jackson-jaxrs-providers#134](https://github.com/FasterXML/jackson-jaxrs-providers/issues/134)
 which appeared in jackson-jaxrs for v2.13.0 to be the reason rs-api was added 
to hadoop - so in #4547, I am looking to downgrade to jackson 2.12.7
   
   Exactly, without downgrading Jackson, it's not possible to remove 
javax.ws.rs-api. 





[jira] [Updated] (HADOOP-18330) S3AFileSystem removes Path when calling createS3Client

2022-07-11 Thread Ashutosh Pant (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Pant updated HADOOP-18330:
---
Status: Patch Available  (was: Open)

https://github.com/apache/hadoop/pull/4551

> S3AFileSystem removes Path when calling createS3Client
> --
>
> Key: HADOOP-18330
> URL: https://issues.apache.org/jira/browse/HADOOP-18330
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.3, 3.3.2, 3.3.1, 3.3.0
>Reporter: Ashutosh Pant
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When using Hadoop and Spark to read/write data from an S3 bucket such as 
> s3a://bucket/path with a custom credentials provider, the path is removed 
> from the s3a URI and the credentials provider fails because the full path 
> is gone.
> In Spark 3.2 the client was created as
> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, conf)
> .createS3Client(name, bucket, credentials);
> but in Spark 3.3.3 it is created as
> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, conf)
> .createS3Client(getUri(), parameters);
> and getUri() removes the path from the s3a URI.
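
The path-stripping behaviour described above is easy to reproduce with plain `java.net.URI`. In this sketch the names are illustrative, not the actual S3A code: rebuilding a URI from scheme and authority only, which is what a filesystem-level URI effectively contains, drops the path component that a path-aware credentials provider would need.

```java
import java.net.URI;

public class S3aUriDemo {
    public static void main(String[] args) {
        // Full URI as supplied by the user, including the object path.
        URI full = URI.create("s3a://bucket/some/path");

        // A filesystem-level URI keeps only scheme + authority, so anything
        // that needs the path no longer sees it.
        URI fsLevel = URI.create(full.getScheme() + "://" + full.getAuthority());

        System.out.println(full.getPath());    // prints "/some/path"
        System.out.println(fsLevel.getPath()); // prints an empty string
    }
}
```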






[jira] [Work logged] (HADOOP-18330) S3AFileSystem removes Path when calling createS3Client

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18330?focusedWorklogId=789769&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789769
 ]

ASF GitHub Bot logged work on HADOOP-18330:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 23:04
Start Date: 11/Jul/22 23:04
Worklog Time Spent: 10m 
  Work Description: ashutoshpant opened a new pull request, #4551:
URL: https://github.com/apache/hadoop/pull/4551

   
   ### Description of PR
   Added path to client creation parameters
   
   ### How was this patch tested?
   I just have an Enterprise restricted device with me so could not clone repo 
for testing purposes! Used git dev for PR
   
   ### For code changes:
   
   - [ X] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
---

Worklog Id: (was: 789769)
Remaining Estimate: 0h
Time Spent: 10m

> S3AFileSystem removes Path when calling createS3Client
> --
>
> Key: HADOOP-18330
> URL: https://issues.apache.org/jira/browse/HADOOP-18330
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.3.1, 3.3.2, 3.3.3
>Reporter: Ashutosh Pant
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When using Hadoop and Spark to read/write data from an S3 bucket such as
> s3a://bucket/path with a custom credentials provider, the path is
> removed from the s3a URI, and the credentials provider fails because the
> full path is gone.
> In Spark 3.2 the client was created as:
> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, conf)
> .createS3Client(name, bucket, credentials);
> but in Spark 3.3.3 it is created as:
> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, conf)
> .createS3Client(getUri(), parameters);
> and getUri() strips the path from the s3a URI.
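The behavior described above can be illustrated with a minimal, hypothetical sketch (the class and method names below are invented for illustration; this is not Hadoop's actual implementation). A filesystem-style `getUri()` effectively keeps only scheme and authority, so a URI rebuilt that way no longer carries the object path a path-aware credentials provider depends on:

```java
import java.net.URI;

// Hypothetical sketch (invented names; not Hadoop's actual code).
// It mimics how a FileSystem-style getUri() keeps only scheme + authority,
// which is why createS3Client(getUri(), parameters) no longer sees the path.
public class UriPathLoss {

    // Rebuild the URI the way a filesystem getUri() effectively does:
    // scheme and authority survive, the path component does not.
    static URI canonicalFsUri(URI full) {
        return URI.create(full.getScheme() + "://" + full.getAuthority());
    }

    public static void main(String[] args) {
        URI full = URI.create("s3a://bucket/path/to/data");
        URI fsUri = canonicalFsUri(full);

        System.out.println(full.getPath());  // /path/to/data
        System.out.println(fsUri);           // s3a://bucket
        // A credentials provider that derives its scope from the full path
        // has nothing to work with once only fsUri is passed along.
        System.out.println(fsUri.getPath().isEmpty());  // true
    }
}
```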



--
This message was sent by Atlassian Jira
(v8.20.10#820010)




[GitHub] [hadoop] ashutoshpant closed pull request #4548: Hadoop-18330

2022-07-11 Thread GitBox


ashutoshpant closed pull request #4548: Hadoop-18330
URL: https://github.com/apache/hadoop/pull/4548


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





[GitHub] [hadoop] ashutoshpant opened a new pull request, #4551: HADOOP-18330

2022-07-11 Thread GitBox


ashutoshpant opened a new pull request, #4551:
URL: https://github.com/apache/hadoop/pull/4551

   
   ### Description of PR
   Added path to client creation parameters
   
   ### How was this patch tested?
   I only have an enterprise-restricted device, so I could not clone the repo 
for local testing; I used git dev (the GitHub web editor) for this PR.
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[jira] [Updated] (HADOOP-18330) S3AFileSystem removes Path when calling createS3Client

2022-07-11 Thread Ashutosh Pant (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Pant updated HADOOP-18330:
---
Status: Open  (was: Patch Available)

> S3AFileSystem removes Path when calling createS3Client
> --
>
> Key: HADOOP-18330
> URL: https://issues.apache.org/jira/browse/HADOOP-18330
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.3, 3.3.2, 3.3.1, 3.3.0
>Reporter: Ashutosh Pant
>Priority: Minor
>  Labels: pull-request-available
>
> When using Hadoop and Spark to read/write data from an S3 bucket such as
> s3a://bucket/path with a custom credentials provider, the path is
> removed from the s3a URI, and the credentials provider fails because the
> full path is gone.
> In Spark 3.2 the client was created as:
> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, conf)
> .createS3Client(name, bucket, credentials);
> but in Spark 3.3.3 it is created as:
> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, conf)
> .createS3Client(getUri(), parameters);
> and getUri() strips the path from the s3a URI.






[jira] (HADOOP-18330) S3AFileSystem removes Path when calling createS3Client

2022-07-11 Thread Ashutosh Pant (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18330 ]


Ashutosh Pant deleted comment on HADOOP-18330:


was (Author: JIRAUSER292511):
[https://github.com/apache/hadoop/pull/4548] 

> S3AFileSystem removes Path when calling createS3Client
> --
>
> Key: HADOOP-18330
> URL: https://issues.apache.org/jira/browse/HADOOP-18330
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.3.1, 3.3.2, 3.3.3
>Reporter: Ashutosh Pant
>Priority: Minor
>  Labels: pull-request-available
>
> When using Hadoop and Spark to read/write data from an S3 bucket such as
> s3a://bucket/path with a custom credentials provider, the path is
> removed from the s3a URI, and the credentials provider fails because the
> full path is gone.
> In Spark 3.2 the client was created as:
> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, conf)
> .createS3Client(name, bucket, credentials);
> but in Spark 3.3.3 it is created as:
> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, conf)
> .createS3Client(getUri(), parameters);
> and getUri() strips the path from the s3a URI.






[jira] [Work logged] (HADOOP-18217) shutdownhookmanager should not be multithreaded (deadlock possible)

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18217?focusedWorklogId=789761&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789761
 ]

ASF GitHub Bot logged work on HADOOP-18217:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 22:36
Start Date: 11/Jul/22 22:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4255:
URL: https://github.com/apache/hadoop/pull/4255#issuecomment-1180958336

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 18s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 56s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 15s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 14s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 40s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 41s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 215m 21s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4255/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4255 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 496b381aeb09 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9e9dc762380d4f9c2a55c24412d3cbbdb19394bf |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4255/4/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4255/4/console |
  

[GitHub] [hadoop] hadoop-yetus commented on pull request #4255: HADOOP-18217. ExitUtil synchronized blocks reduced to avoid exit bloc…

2022-07-11 Thread GitBox


hadoop-yetus commented on PR #4255:
URL: https://github.com/apache/hadoop/pull/4255#issuecomment-1180958336

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 18s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 56s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 15s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 14s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 40s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 41s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 215m 21s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4255/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4255 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 496b381aeb09 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9e9dc762380d4f9c2a55c24412d3cbbdb19394bf |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4255/4/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4255/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Work logged] (HADOOP-18074) Partial/Incomplete groups list can be returned in LDAP groups lookup

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18074?focusedWorklogId=789758&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789758
 ]

ASF GitHub Bot logged work on HADOOP-18074:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 22:19
Start Date: 11/Jul/22 22:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4550:
URL: https://github.com/apache/hadoop/pull/4550#issuecomment-1180923939

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  10m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 55s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  18m  9s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   2m  3s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  27m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 41s | 
[/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   1m  4s | 
[/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/patch-compile-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  javac  |   1m  4s | 
[/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/patch-compile-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   0m 46s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed  |
   | -1 :x: |  spotbugs  |   0m 41s | 
[/patch-spotbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/patch-spotbugs-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  shadedclient  |   6m 41s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   0m 43s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 108m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4550 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5371fdc3066d 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 5b62ea4cbca6ef02d5f16b0de81a2a6b9950e0a2 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/testReport/ |
   | Max. process+thread count | 723 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4550: HADOOP-18074 - Partial/Incomplete groups list can be returned in LDAP…

2022-07-11 Thread GitBox


hadoop-yetus commented on PR #4550:
URL: https://github.com/apache/hadoop/pull/4550#issuecomment-1180923939

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  10m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 55s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  18m  9s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   2m  3s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  27m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 41s | 
[/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   1m  4s | 
[/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/patch-compile-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  javac  |   1m  4s | 
[/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/patch-compile-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   0m 46s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed  |
   | -1 :x: |  spotbugs  |   0m 41s | 
[/patch-spotbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/patch-spotbugs-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  shadedclient  |   6m 41s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   0m 43s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 108m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4550 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5371fdc3066d 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 5b62ea4cbca6ef02d5f16b0de81a2a6b9950e0a2 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/testReport/ |
   | Max. process+thread count | 723 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4550/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] goiri commented on pull request #4531: HDFS-13274. RBF: Extend RouterRpcClient to use multiple sockets

2022-07-11 Thread GitBox


goiri commented on PR #4531:
URL: https://github.com/apache/hadoop/pull/4531#issuecomment-1180857408

   > @goiri I have raised a new 
[PR-4542](https://github.com/apache/hadoop/pull/4542) to push 
[HADOOP-13144](https://issues.apache.org/jira/browse/HADOOP-13144) forward. 
Once it is merged, I will continue pushing this issue forward.
   
   I left a few minor comments; it would be good for others to review this too.





[jira] [Work logged] (HADOOP-18074) Partial/Incomplete groups list can be returned in LDAP groups lookup

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18074?focusedWorklogId=789733&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789733
 ]

ASF GitHub Bot logged work on HADOOP-18074:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 20:34
Start Date: 11/Jul/22 20:34
Worklog Time Spent: 10m 
  Work Description: lmccay commented on PR #4550:
URL: https://github.com/apache/hadoop/pull/4550#issuecomment-1180842595

   +1 provided by @steveloughran via https://github.com/apache/hadoop/pull/4503 
- waiting on yetus greenlight here...




Issue Time Tracking
---

Worklog Id: (was: 789733)
Time Spent: 1h 40m  (was: 1.5h)

> Partial/Incomplete groups list can be returned in LDAP groups lookup
> 
>
> Key: HADOOP-18074
> URL: https://issues.apache.org/jira/browse/HADOOP-18074
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Philippe Lanoe
>Assignee: Larry McCay
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Hello,
> The
> {code:java}
> Set<String> doGetGroups(String user, int goUpHierarchy) {code}
> method in
> [https://github.com/apache/hadoop/blob/b27732c69b114f24358992a5a4d170bc94e2ceaf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java#L476]
> appears to have an issue when a *NamingException*
> is caught in the middle of the loop:
> the groups variable is not reset in the catch clause, so the
> fallback lookup cannot be executed (at least when goUpHierarchy == 0):
> {code:java}
> if (groups.isEmpty() || goUpHierarchy > 0) {
> groups = lookupGroup(result, c, goUpHierarchy);
> }
> {code}
>
> The consequence is that only a partial list of groups is returned, which is
> incorrect.
> Either of the following options could serve as a solution:
>  * Reset the groups variable to an empty list in the catch clause, to
> trigger the fallback query.
>  * Add an option flag to ignore groups that raise a NamingException (since
> they are most probably not groups).
> Independently, if such an issue occurred in both the first lookup and the
> fallback query (so the full list could not be returned), the method
> should/could (with an option flag) throw an exception, because in some
> scenarios accuracy is important.
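The first proposed option (clearing the partial result so the fallback query fires) can be sketched as follows; all names here are illustrative stand-ins, not the actual LdapGroupsMapping code, and a RuntimeException stands in for javax.naming.NamingException:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative sketch of the bug and fix (invented names, simplified flow).
public class GroupLookupSketch {

    // Primary lookup: resolves some groups into the caller's set, then fails
    // mid-loop, leaving a partial result behind.
    static void primaryLookup(String user, Set<String> groups) {
        groups.add("partial-group");
        throw new IllegalStateException("simulated NamingException");
    }

    // Fallback lookup: returns the complete group list.
    static Set<String> fallbackLookup(String user) {
        return new LinkedHashSet<>(Arrays.asList("g1", "g2", "g3"));
    }

    static Set<String> doGetGroups(String user) {
        Set<String> groups = new LinkedHashSet<>();
        try {
            primaryLookup(user, groups);
        } catch (RuntimeException e) {
            // The fix: clear the partial result so the isEmpty() check below
            // actually triggers the fallback query. Without this, the caller
            // would silently receive only ["partial-group"].
            groups.clear();
        }
        if (groups.isEmpty()) {
            groups = fallbackLookup(user);
        }
        return groups;
    }

    public static void main(String[] args) {
        System.out.println(doGetGroups("alice"));  // [g1, g2, g3]
    }
}
```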






[GitHub] [hadoop] lmccay commented on pull request #4550: HADOOP-18074 - Partial/Incomplete groups list can be returned in LDAP…

2022-07-11 Thread GitBox


lmccay commented on PR #4550:
URL: https://github.com/apache/hadoop/pull/4550#issuecomment-1180842595

   +1 provided by @steveloughran via https://github.com/apache/hadoop/pull/4503 
- waiting on yetus greenlight here...





[jira] [Work logged] (HADOOP-18074) Partial/Incomplete groups list can be returned in LDAP groups lookup

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18074?focusedWorklogId=789732&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789732
 ]

ASF GitHub Bot logged work on HADOOP-18074:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 20:28
Start Date: 11/Jul/22 20:28
Worklog Time Spent: 10m 
  Work Description: lmccay opened a new pull request, #4550:
URL: https://github.com/apache/hadoop/pull/4550

   …… (#4503)
   
   
   
   ### Description of PR
   
   LdapGroupsMapping could return a partial list of group names after 
encountering a NamingException while acquiring
   the RDN for a DN: the partially built list was not cleared, so the secondary 
query was never
   attempted. This PR clears the partially built list and forces the secondary 
query to be called.
   
   How was this patch tested?
   Existing unit tests were run, and a new unit test was added to ensure that 
the secondary query is indeed being called.
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
---

Worklog Id: (was: 789732)
Time Spent: 1.5h  (was: 1h 20m)

> Partial/Incomplete groups list can be returned in LDAP groups lookup
> 
>
> Key: HADOOP-18074
> URL: https://issues.apache.org/jira/browse/HADOOP-18074
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Philippe Lanoe
>Assignee: Larry McCay
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Hello,
> The
> {code:java}
> Set<String> doGetGroups(String user, int goUpHierarchy) {code}
> method in
> [https://github.com/apache/hadoop/blob/b27732c69b114f24358992a5a4d170bc94e2ceaf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java#L476]
> appears to have an issue if a *NamingException* is caught in the middle of 
> the loop: the groups variable is not reset in the catch clause, so the 
> fallback lookup cannot be executed (when goUpHierarchy == 0, at least):
> {code:java}
> if (groups.isEmpty() || goUpHierarchy > 0) {
> groups = lookupGroup(result, c, goUpHierarchy);
> }
> {code}
>  
> The consequence is that only a partial list of groups is returned, which is 
> not correct.
> The following options could be used as a solution:
>  * Reset the group list to an empty list in the catch clause, to trigger the 
> fallback query.
>  * Add an option flag to enable ignoring groups that raise a NamingException 
> (since they are most probably not actual groups).
> Independently, should such an issue also occur in both the first lookup and 
> the fallback query (so that the full list cannot be returned), the method 
> should/could (with an option flag) throw an Exception, because in some 
> scenarios accuracy is important.
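The fix described above (clear the partially built list in the catch clause so the fallback query still runs) can be sketched as follows. This is a minimal illustration, not the real LdapGroupsMapping code: the helper names `primaryLookup`/`fallbackLookup` and the local `NamingException` stand-in are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class GroupsLookupSketch {

  // Stand-in for javax.naming.NamingException, to keep the sketch self-contained.
  static class NamingException extends Exception {}

  // Hypothetical primary lookup: adds one group, then fails partway through.
  static void primaryLookup(List<String> groups) throws NamingException {
    groups.add("partial-group");
    throw new NamingException();
  }

  // Hypothetical secondary (fallback) query.
  static List<String> fallbackLookup() {
    return List.of("group-a", "group-b");
  }

  public static List<String> doGetGroups() {
    List<String> groups = new ArrayList<>();
    try {
      primaryLookup(groups);
    } catch (NamingException e) {
      // The fix: discard the partial result so the fallback below is
      // attempted instead of returning an incomplete group list.
      groups.clear();
    }
    if (groups.isEmpty()) {
      groups = fallbackLookup();
    }
    return groups;
  }

  public static void main(String[] args) {
    System.out.println(doGetGroups());
  }
}
```

Without the `groups.clear()` call, the partial list would be non-empty and the fallback branch would be skipped, which is exactly the bug reported.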



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lmccay opened a new pull request, #4550: HADOOP-18074 - Partial/Incomplete groups list can be returned in LDAP…

2022-07-11 Thread GitBox


lmccay opened a new pull request, #4550:
URL: https://github.com/apache/hadoop/pull/4550

   …… (#4503)
   
   
   
   ### Description of PR
   
   LdapGroupsMapping could return a partial list of group names after 
encountering a NamingException while acquiring
   the RDN for a DN. This was due to not clearing the partially built list, 
which resulted in the secondary query not being
   attempted. This PR clears the partially built list and forces the secondary 
query to be called.
   
   How was this patch tested?
   Existing unit tests were run, and a new unit test was added to ensure that the 
secondary query is indeed called.
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18217) shutdownhookmanager should not be multithreaded (deadlock possible)

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18217?focusedWorklogId=789730&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789730
 ]

ASF GitHub Bot logged work on HADOOP-18217:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 20:17
Start Date: 11/Jul/22 20:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4255:
URL: https://github.com/apache/hadoop/pull/4255#issuecomment-1180826528

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  4s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 22s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 39s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 215m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4255/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4255 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fdf031eb0575 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7d6468bf35af9f4b56bf2c8d0102852d0de73bc7 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4255/3/testReport/ |
   | Max. process+thread count | 2938 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4255/3/console |
   

[GitHub] [hadoop] hadoop-yetus commented on pull request #4255: HADOOP-18217. ExitUtil synchronized blocks reduced to avoid exit bloc…

2022-07-11 Thread GitBox


hadoop-yetus commented on PR #4255:
URL: https://github.com/apache/hadoop/pull/4255#issuecomment-1180826528

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  4s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 22s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 39s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 215m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4255/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4255 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fdf031eb0575 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7d6468bf35af9f4b56bf2c8d0102852d0de73bc7 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4255/3/testReport/ |
   | Max. process+thread count | 2938 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4255/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries 

[jira] [Comment Edited] (HADOOP-18033) Upgrade fasterxml Jackson to 2.13.0

2022-07-11 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17565145#comment-17565145
 ] 

Viraj Jasani edited comment on HADOOP-18033 at 7/11/22 7:11 PM:


{quote}Currently I recommend downgrading to 2.12.7 in both trunk and 
branch-3.3. That way we don't need to treat HADOOP-15984 as a blocker for 3.4.0.
{quote}
I understand that if we are doing the revert with a new Jira, the new Jira 
should ideally land on trunk before making its way to the active release 
branches. However, the Jackson downgrade to 2.12.7 and the removal of 
javax.ws.rs-api would also likely need to be reverted as part of HADOOP-15984, 
so HADOOP-15984 would face too much work staying up to date with trunk (it is 
already struggling to do so with whatever progress has been made), and it would 
then have to reintroduce javax.ws.rs-api and remove jsr311-api. So far I have 
jsr311-api removed in the current local patch, but if trunk removes 
javax.ws.rs-api as part of the revert of HADOOP-18033, there will be rework 
(essentially a revert of the revert of HADOOP-18033 for HADOOP-15984 to make 
progress) that would make the overall progress of HADOOP-15984 even more 
complicated.

Hence, I am requesting that we restrict the revert of HADOOP-18033 to 
branch-3.3 to unblock the 3.3.4 release. IIUC, we are not ready for a 3.4.0 
release anytime soon anyway?


was (Author: vjasani):
{quote}Currently I recommend downgrading to 2.12.7 in both trunk and 
branch-3.3. That way we don't need to treat HADOOP-15984 as a blocker for 3.4.0.
{quote}
I understand that if we are doing the revert with a new Jira, the new Jira 
should ideally land on trunk before making its way to the active release 
branches. However, the Jackson downgrade to 2.12.7 and the removal of 
javax.ws.rs-api would also likely need to be reverted as part of HADOOP-15984, 
so HADOOP-15984 would face too much work staying up to date with trunk (it is 
already struggling to do so with whatever progress has been made), and it would 
then have to reintroduce javax.ws.rs-api and remove jsr311-api. So far I have 
jsr311-api removed in the current local patch, but if trunk removes 
javax.ws.rs-api as part of the revert of HADOOP-18033, there will be rework 
(essentially a revert of the revert of HADOOP-18033 for HADOOP-15984 to make 
progress) that would make the overall progress of HADOOP-15984 even more 
complicated.

Hence, I am requesting that we restrict the revert of HADOOP-18033 to 
branch-3.3 to unblock the 3.3.4 release.

> Upgrade fasterxml Jackson to 2.13.0
> ---
>
> Key: HADOOP-18033
> URL: https://issues.apache.org/jira/browse/HADOOP-18033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> Spark 3.2.0 depends on Jackson 2.12.3. Let's upgrade to 2.12.5 (the latest 
> 2.12.x as of now) or higher.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-18033) Upgrade fasterxml Jackson to 2.13.0

2022-07-11 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17565145#comment-17565145
 ] 

Viraj Jasani edited comment on HADOOP-18033 at 7/11/22 7:05 PM:


{quote}Currently I recommend downgrading to 2.12.7 in both trunk and 
branch-3.3. That way we don't need to treat HADOOP-15984 as a blocker for 3.4.0.
{quote}
I understand that if we are doing the revert with a new Jira, the new Jira 
should ideally land on trunk before making its way to the active release 
branches. However, the Jackson downgrade to 2.12.7 and the removal of 
javax.ws.rs-api would also likely need to be reverted as part of HADOOP-15984, 
so HADOOP-15984 would face too much work staying up to date with trunk (it is 
already struggling to do so with whatever progress has been made), and it would 
then have to reintroduce javax.ws.rs-api and remove jsr311-api. So far I have 
jsr311-api removed in the current local patch, but if trunk removes 
javax.ws.rs-api as part of the revert of HADOOP-18033, there will be rework 
(essentially a revert of the revert of HADOOP-18033 for HADOOP-15984 to make 
progress) that would make the overall progress of HADOOP-15984 even more 
complicated.

Hence, I am requesting that we restrict the revert of HADOOP-18033 to 
branch-3.3 to unblock the 3.3.4 release.


was (Author: vjasani):
{quote}Currently I recommend downgrading to 2.12.7 in both trunk and 
branch-3.3. That way we don't need to treat HADOOP-15984 as a blocker for 3.4.0.
{quote}
I understand that if we are doing the revert with a new Jira, the new Jira 
should ideally land on trunk before making its way to the active release 
branches. However, the Jackson downgrade to 2.12.7 and the removal of 
javax.ws.rs-api would also likely need to be reverted as part of HADOOP-15984, 
so HADOOP-15984 would face too much work staying up to date with trunk (it is 
already struggling to do so with whatever progress has been made), and it would 
then have to reintroduce javax.ws.rs-api and remove jsr311-api. So far I have 
jsr311-api removed in the current local patch, but if trunk removes 
javax.ws.rs-api as part of the revert of HADOOP-18033, there will be rework 
(essentially a revert of the revert of HADOOP-18033 for HADOOP-15984 to make 
progress) that would make the overall progress of HADOOP-15984 even more 
complicated.

Hence, I am requesting that we restrict the revert of HADOOP-18033 to 
branch-3.3 to unblock the 3.3.4 release. Overall, it would be as if this 
Jira's fix version was only meant for 3.4.0.

> Upgrade fasterxml Jackson to 2.13.0
> ---
>
> Key: HADOOP-18033
> URL: https://issues.apache.org/jira/browse/HADOOP-18033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> Spark 3.2.0 depends on Jackson 2.12.3. Let's upgrade to 2.12.5 (the latest 
> 2.12.x as of now) or higher.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-18033) Upgrade fasterxml Jackson to 2.13.0

2022-07-11 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17565145#comment-17565145
 ] 

Viraj Jasani edited comment on HADOOP-18033 at 7/11/22 7:04 PM:


{quote}Currently I recommend downgrading to 2.12.7 in both trunk and 
branch-3.3. That way we don't need to treat HADOOP-15984 as a blocker for 3.4.0.
{quote}
I understand that if we are doing the revert with a new Jira, the new Jira 
should ideally land on trunk before making its way to the active release 
branches. However, the Jackson downgrade to 2.12.7 and the removal of 
javax.ws.rs-api would also likely need to be reverted as part of HADOOP-15984, 
so HADOOP-15984 would face too much work staying up to date with trunk (it is 
already struggling to do so with whatever progress has been made), and it would 
then have to reintroduce javax.ws.rs-api and remove jsr311-api. So far I have 
jsr311-api removed in the current local patch, but if trunk removes 
javax.ws.rs-api as part of the revert of HADOOP-18033, there will be rework 
(essentially a revert of the revert of HADOOP-18033 for HADOOP-15984 to make 
progress) that would make the overall progress of HADOOP-15984 even more 
complicated.

Hence, I am requesting that we restrict the revert of HADOOP-18033 to 
branch-3.3 to unblock the 3.3.4 release. Overall, it would be as if this 
Jira's fix version was only meant for 3.4.0.


was (Author: vjasani):
{quote}Currently I recommend downgrading to 2.12.7 in both trunk and 
branch-3.3. That way we don't need to treat HADOOP-15984 as a blocker for 3.4.0.
{quote}
I understand that if we are doing the revert with a new Jira, the new Jira 
should ideally land on trunk before making its way to the active release 
branches. However, the Jackson downgrade to 2.12.7 and the removal of 
javax.ws.rs-api would also likely need to be reverted as part of HADOOP-15984, 
so HADOOP-15984 would face too much work staying up to date with trunk (it is 
already struggling to do so with whatever progress has been made), and it would 
then have to reintroduce javax.ws.rs-api and remove jsr311-api. So far I have 
jsr311-api removed in the current local patch, but if trunk removes 
javax.ws.rs-api as part of the revert of HADOOP-18033, there will be rework 
(essentially a revert of the revert of HADOOP-18033 for HADOOP-15984 to make 
progress) that would make the overall progress of HADOOP-15984 even more 
complicated.

Hence, I am requesting that we restrict the revert of HADOOP-18033 to 
branch-3.3 to unblock the 3.3.4 release.

> Upgrade fasterxml Jackson to 2.13.0
> ---
>
> Key: HADOOP-18033
> URL: https://issues.apache.org/jira/browse/HADOOP-18033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> Spark 3.2.0 depends on Jackson 2.12.3. Let's upgrade to 2.12.5 (the latest 
> 2.12.x as of now) or higher.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18217) shutdownhookmanager should not be multithreaded (deadlock possible)

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18217?focusedWorklogId=789710&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789710
 ]

ASF GitHub Bot logged work on HADOOP-18217:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 19:04
Start Date: 11/Jul/22 19:04
Worklog Time Spent: 10m 
  Work Description: HerCath commented on code in PR #4255:
URL: https://github.com/apache/hadoop/pull/4255#discussion_r918261848


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestExitUtil.java:
##
@@ -0,0 +1,122 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.util;
+
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+import static org.junit.Assert.assertTrue;
+
+import org.junit.Test;
+
+import org.apache.hadoop.util.ExitUtil.ExitException;
+import org.apache.hadoop.util.ExitUtil.HaltException;
+
+
+public class TestExitUtil {

Review Comment:
   Done. Weirdly, only one other class in the package also extends it.





Issue Time Tracking
---

Worklog Id: (was: 789710)
Time Spent: 3h 20m  (was: 3h 10m)

> shutdownhookmanager should not be multithreaded (deadlock possible)
> ---
>
> Key: HADOOP-18217
> URL: https://issues.apache.org/jira/browse/HADOOP-18217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.10.1
> Environment: linux, windows, any version
>Reporter: Catherinot Remi
>Priority: Minor
>  Labels: pull-request-available
> Attachments: wtf.java
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> The ShutdownHookManager class uses an executor to run hooks so that it can 
> apply a "timeout" notion to them. It does this using a single-threaded 
> executor, which can lead to a deadlock that leaves a never-shutting-down JVM 
> via this execution flow:
>  * The JVM needs to exit (only daemon threads remain, or someone called 
> System.exit)
>  * ShutdownHookManager kicks in
>  * The SHMngr executor starts running some hooks
>  * The executor thread kicks in and, as a side effect, runs code from one of 
> the hooks that calls System.exit (as a side effect of an external lib, for 
> example)
>  * The executor thread waits for a lock because another thread has already 
> entered System.exit and holds its internal lock, so the executor never 
> returns
>  * SHMngr never returns
>  * The 1st call to System.exit never returns
>  * The JVM is stuck
>  
> Using an executor with a single thread also produces "fake" timeouts: the 
> task keeps running, and although you can interrupt it, it keeps running until 
> it stumbles upon some interruptible piece of code (like an IO call). This is 
> made worse by the executor being single-threaded, so it has this bug, for 
> example:
>  * The caller submits the 1st hook (a bad one that needs 1 hour of runtime 
> and cannot be interrupted)
>  * The executor starts the 1st hook
>  * The caller's wait on the 1st hook's future times out
>  * The caller submits the 2nd hook
>  * Bug: with 1 hook still running, the 2nd hook triggers a timeout without 
> ever getting the chance to run, so the faulty 1st hook makes it impossible 
> for any other hook to run; running hooks in a single separate thread does not 
> allow other hooks to run in parallel with long ones.
>  
> If we really, really want to time out the JVM shutdown, even accepting a 
> possibly dirty shutdown, it should instead handle the hooks inside the 
> initial thread (not spawning new one(s), hence not triggering the deadlock 
> described in the 1st place) and, if a timeout was configured, only spawn a 
> single parallel daemon thread that sleeps for the timeout delay and then 
> uses Runtime.halt (which bypasses the hook system, so it should not trigger 
> the deadlock). If the normal System.exit ends before the timeout delay, 
> everything is fine. If the System.exit took to 
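The alternative sketched in the description — run the hooks in the initial thread and arm a single daemon "watchdog" thread that halts the JVM on timeout via Runtime.halt, which bypasses the hook machinery — could look roughly like this. The class and method names are illustrative, not Hadoop's actual ShutdownHookManager API.

```java
public final class ShutdownWatchdogSketch {

  private ShutdownWatchdogSketch() {}

  // Runs the shutdown hooks in the *calling* thread (so no executor thread
  // can deadlock against System.exit's internal lock). If a timeout is
  // configured, a daemon watchdog sleeps for that delay and then calls
  // Runtime.halt(1), forcing a possibly dirty but guaranteed shutdown.
  public static void runHooksWithTimeout(Runnable hooks, long timeoutMillis) {
    if (timeoutMillis <= 0) {
      hooks.run();
      return;
    }
    Thread watchdog = new Thread(() -> {
      try {
        Thread.sleep(timeoutMillis);
      } catch (InterruptedException ie) {
        return; // hooks finished in time; watchdog cancelled
      }
      // Timeout expired: halt bypasses shutdown hooks, so no re-entry.
      Runtime.getRuntime().halt(1);
    }, "shutdown-watchdog");
    watchdog.setDaemon(true);
    watchdog.start();
    try {
      hooks.run(); // executed in the initial thread, not an executor
    } finally {
      watchdog.interrupt(); // cancel the watchdog if hooks completed in time
    }
  }

  public static void main(String[] args) {
    StringBuilder log = new StringBuilder();
    runHooksWithTimeout(() -> log.append("hook-ran"), 5_000);
    System.out.println(log);
  }
}
```

Because the watchdog is a daemon thread that never touches System.exit, it cannot participate in the deadlock described above; at worst it kills a JVM whose hooks overran their budget.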

[GitHub] [hadoop] lmccay closed pull request #4549: Hadoop 18074 branch 3.3

2022-07-11 Thread GitBox


lmccay closed pull request #4549: Hadoop 18074 branch 3.3
URL: https://github.com/apache/hadoop/pull/4549


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] HerCath commented on a diff in pull request #4255: HADOOP-18217. ExitUtil synchronized blocks reduced to avoid exit bloc…

2022-07-11 Thread GitBox


HerCath commented on code in PR #4255:
URL: https://github.com/apache/hadoop/pull/4255#discussion_r918261848


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestExitUtil.java:
##
@@ -0,0 +1,122 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.util;
+
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+import static org.junit.Assert.assertTrue;
+
+import org.junit.Test;
+
+import org.apache.hadoop.util.ExitUtil.ExitException;
+import org.apache.hadoop.util.ExitUtil.HaltException;
+
+
+public class TestExitUtil {

Review Comment:
   Done. Weirdly, only one other class in the package also extends it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18217) shutdownhookmanager should not be multithreaded (deadlock possible)

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18217?focusedWorklogId=789709&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789709
 ]

ASF GitHub Bot logged work on HADOOP-18217:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 19:03
Start Date: 11/Jul/22 19:03
Worklog Time Spent: 10m 
  Work Description: HerCath commented on code in PR #4255:
URL: https://github.com/apache/hadoop/pull/4255#discussion_r918261380


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ExitUtil.java:
##
@@ -159,92 +163,165 @@ public static void disableSystemHalt() {
*/
   public static boolean terminateCalled() {
 // Either we set this member or we actually called System#exit
-return firstExitException != null;
+return FIRST_EXIT_EXCEPTION.get() != null;
   }
 
   /**
* @return true if halt has been called.
*/
   public static boolean haltCalled() {
-return firstHaltException != null;
+// Either we set this member or we actually called Runtime#halt
+return FIRST_HALT_EXCEPTION.get() != null;
   }
 
   /**
-   * @return the first ExitException thrown, null if none thrown yet.
+   * @return the first {@code ExitException} thrown, null if none thrown yet.
*/
   public static ExitException getFirstExitException() {
-return firstExitException;
+return FIRST_EXIT_EXCEPTION.get();
   }
 
   /**
* @return the first {@code HaltException} thrown, null if none thrown yet.
*/
   public static HaltException getFirstHaltException() {
-return firstHaltException;
+return FIRST_HALT_EXCEPTION.get();
   }
 
   /**
* Reset the tracking of process termination. This is for use in unit tests
* where one test in the suite expects an exit but others do not.
*/
   public static void resetFirstExitException() {
-firstExitException = null;
+FIRST_EXIT_EXCEPTION.set(null);
   }
 
+  /**
+   * Reset the tracking of process termination. This is for use in unit tests
+   * where one test in the suite expects a halt but others do not.
+   */
   public static void resetFirstHaltException() {
-firstHaltException = null;
+FIRST_HALT_EXCEPTION.set(null);
   }
 
   /**
+   * Exits the JVM if exit is enabled; otherwise rethrows the provided
+   * exception or any raised error.
* Inner termination: either exit with the exception's exit code,
* or, if system exits are disabled, rethrow the exception.
* @param ee exit exception
+   * @throws ExitException if {@link System#exit(int)} is disabled and not
+   * suppressed by an Error
+   * @throws Error if {@link System#exit(int)} is disabled and an Error
+   * arises, suppressing anything else, even ee
*/
-  public static synchronized void terminate(ExitException ee)
+  public static void terminate(ExitException ee)
   throws ExitException {
-int status = ee.getExitCode();
-String msg = ee.getMessage();
+final int status = ee.getExitCode();
+Error caught = null;
 if (status != 0) {
-  //exit indicates a problem, log it
-  LOG.debug("Exiting with status {}: {}",  status, msg, ee);
-  LOG.info("Exiting with status {}: {}", status, msg);
+  try {
+// exit indicates a problem, log it
+String msg = ee.getMessage();
+LOG.debug("Exiting with status {}: {}",  status, msg, ee);
+LOG.info("Exiting with status {}: {}", status, msg);
+  } catch (Error e) {
+// errors have higher priority than HaltException, it may be re-thrown.
+// OOM and ThreadDeath are 2 examples of Errors to re-throw
+caught = e;
+  } catch (Throwable t) {
+// all other kind of throwables are suppressed
+if (ee != t) {

Review Comment:
   Done. I've made it handle the suppressor == suppressed scenario; it can also 
be applied to the Error variable "caught", which may be set or suppressed in 
the 2nd catch block in both the exit and halt cases.
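
   The lock-free "first exception wins" pattern the patch applies to ExitUtil 
can be sketched in isolation as follows (class and member names here are 
illustrative, not the actual ExitUtil fields):

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch of lock-free "first exception wins" tracking,
// the pattern used to replace the synchronized accessors.
public class FirstExceptionDemo {
  private static final AtomicReference<Exception> FIRST = new AtomicReference<>();

  // Record only the first exception; later calls leave the original in place.
  static boolean record(Exception e) {
    return FIRST.compareAndSet(null, e);
  }

  static Exception first() {
    return FIRST.get();
  }

  static void reset() {
    FIRST.set(null); // test-only reset, mirroring resetFirstExitException()
  }

  public static void main(String[] args) {
    Exception e1 = new Exception("first");
    Exception e2 = new Exception("second");
    boolean r1 = record(e1); // true: slot was empty
    boolean r2 = record(e2); // false: e1 already stored
    System.out.println(r1 + " " + r2 + " " + (first() == e1));
  }
}
```

   compareAndSet makes the race between two concurrent terminate() calls 
benign: whichever call wins the CAS becomes the recorded first exception, 
without any lock.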





Issue Time Tracking
---

Worklog Id: (was: 789709)
Time Spent: 3h 10m  (was: 3h)

> shutdownhookmanager should not be multithreaded (deadlock possible)
> ---
>
> Key: HADOOP-18217
> URL: https://issues.apache.org/jira/browse/HADOOP-18217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.10.1
> Environment: linux, windows, any version
>Reporter: Catherinot Remi
>Priority: Minor
>  Labels: pull-request-available
> Attachments: wtf.java
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> the ShutdownHookManager class uses an executor to run hooks to have a 
> "timeout" notion around them. It does this using a single-threaded executor. 
> It can lead to deadlock, leaving a never-shutting-down JVM with this 
> 
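
The stall the report describes can be reproduced outside Hadoop. This 
standalone sketch (not ShutdownHookManager code) shows a task on a 
single-threaded executor re-entering the same executor and waiting, so the 
inner task can never be scheduled:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Standalone illustration: the "hook" re-enters the same single-threaded
// executor and blocks on the inner Future, which can never run because the
// only worker thread is busy running the outer task.
public class SingleThreadStallDemo {
  static boolean timesOut() throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    Future<?> outer = pool.submit(() -> {
      Future<?> inner = pool.submit(() -> { });
      try {
        inner.get(); // blocks forever: the single worker thread is busy here
      } catch (Exception e) {
        Thread.currentThread().interrupt();
      }
    });
    try {
      outer.get(500, TimeUnit.MILLISECONDS);
      return false;
    } catch (TimeoutException expected) {
      return true; // the stall the report describes
    } finally {
      pool.shutdownNow();
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println("stalled: " + timesOut());
  }
}
```

In a real shutdown-hook scenario there is no timeout-and-shutdownNow rescue, 
which is why the JVM never exits.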


[GitHub] [hadoop] lmccay opened a new pull request, #4549: Hadoop 18074 branch 3.3

2022-07-11 Thread GitBox


lmccay opened a new pull request, #4549:
URL: https://github.com/apache/hadoop/pull/4549

   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[jira] [Comment Edited] (HADOOP-18033) Upgrade fasterxml Jackson to 2.13.0

2022-07-11 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17565145#comment-17565145
 ] 

Viraj Jasani edited comment on HADOOP-18033 at 7/11/22 7:01 PM:


{quote}Currently I recommend downgrading to 2.12.7 in both trunk and 
branch-3.3. That way we don't need to treat HADOOP-15984 as a blocker for 3.4.0.
{quote}
I understand that if we are doing the revert with a new Jira, the new Jira 
should ideally land on trunk before making its way to active release branches, 
but the Jackson downgrade to 2.12.7 and the removal of javax.ws.rs-api would 
also likely need to be reverted as part of HADOOP-15984. For HADOOP-15984 it 
will be too much work staying up to date with trunk (it is already struggling 
to do so with whatever progress is made), and it would then have to 
reintroduce javax.ws.rs-api and remove jsr311-api. So far I have jsr311-api 
removed from the current local patch, but if trunk removes javax.ws.rs-api as 
part of the revert of HADOOP-18033, there will be rework (basically, a revert 
of the revert of HADOOP-18033 for HADOOP-15984 to make progress) that would 
make the overall progress on HADOOP-15984 even more complicated.

Hence, I am requesting that we restrict the revert of HADOOP-18033 to 
branch-3.3 to unblock the 3.3.4 release.

was (Author: vjasani):
{quote}Currently I recommend downgrading to 2.12.7 in both trunk and 
branch-3.3. That way we don't need to treat HADOOP-15984 as a blocker for 3.4.0.
{quote}
I understand that if we are doing the revert with a new Jira, the new Jira 
should ideally land on trunk before making it's way to active release branches, 
but Jackson downgrade to 2.12.7 and removal of javax.ws.rs-api would also 
likely need to be reverted as part of HADOOP-15984, so for HADOOP-15984 it will 
be too much work staying upto date with trunk (it's already struggling to do so 
btw with whatever progress is made), and now it will have to reintroduce 
javax.ws.rs-api and remove jsr311-api. So far I have jsr311-api removed from 
the current local patch, but if trunk removes javax.ws.rs-api as part of revert 
of HADOOP-18033 on trunk, there will be rework (basically, revert of revert of 
HADOOP-18033 for HADOOP-15984 to make progress).

> Upgrade fasterxml Jackson to 2.13.0
> ---
>
> Key: HADOOP-18033
> URL: https://issues.apache.org/jira/browse/HADOOP-18033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> Spark 3.2.0 depends on Jackson 2.12.3. Let's upgrade to 2.12.5 (2.12.x latest 
> as of now) or upper.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)





[jira] [Work logged] (HADOOP-18033) Upgrade fasterxml Jackson to 2.13.0

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18033?focusedWorklogId=789705&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789705
 ]

ASF GitHub Bot logged work on HADOOP-18033:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 18:38
Start Date: 11/Jul/22 18:38
Worklog Time Spent: 10m 
  Work Description: pjfanning commented on PR #4460:
URL: https://github.com/apache/hadoop/pull/4460#issuecomment-1180739304

   @virajjasani I think 
https://github.com/FasterXML/jackson-jaxrs-providers/issues/134, which appeared 
in jackson-jaxrs in v2.13.0, is the reason rs-api was added to Hadoop - so in 
#4547 I am looking to downgrade to Jackson 2.12.7




Issue Time Tracking
---

Worklog Id: (was: 789705)
Time Spent: 6h 20m  (was: 6h 10m)

> Upgrade fasterxml Jackson to 2.13.0
> ---
>
> Key: HADOOP-18033
> URL: https://issues.apache.org/jira/browse/HADOOP-18033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> Spark 3.2.0 depends on Jackson 2.12.3. Let's upgrade to 2.12.5 (2.12.x latest 
> as of now) or upper.






[jira] [Work logged] (HADOOP-18217) shutdownhookmanager should not be multithreaded (deadlock possible)

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18217?focusedWorklogId=789706&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789706
 ]

ASF GitHub Bot logged work on HADOOP-18217:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 18:38
Start Date: 11/Jul/22 18:38
Worklog Time Spent: 10m 
  Work Description: HerCath commented on code in PR #4255:
URL: https://github.com/apache/hadoop/pull/4255#discussion_r918241764


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestExitUtil.java:
##
@@ -0,0 +1,122 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.util;
+
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+import static org.junit.Assert.assertTrue;
+
+import org.junit.Test;
+
+import org.apache.hadoop.util.ExitUtil.ExitException;
+import org.apache.hadoop.util.ExitUtil.HaltException;
+
+
+public class TestExitUtil {
+
+  @Test
+  public void testGetSetExitExceptions() throws Throwable {
+    // prepare states and exceptions
+    ExitUtil.disableSystemExit();
+    ExitUtil.resetFirstExitException();
+    ExitException ee1 = new ExitException(1, "TestExitUtil forged 1st ExitException");
+    ExitException ee2 = new ExitException(2, "TestExitUtil forged 2nd ExitException");
+    try {
+      // check proper initial settings
+      assertFalse("ExitUtil.terminateCalled initial value should be false",
+          ExitUtil.terminateCalled());
+      assertNull("ExitUtil.getFirstExitException initial value should be null",
+          ExitUtil.getFirstExitException());
+
+      // simulate/check 1st call
+      ExitException ee = intercept(ExitException.class, () -> ExitUtil.terminate(ee1));
+      assertSame("ExitUtil.terminate should have rethrown its ExitException argument but it "
+          + "had thrown something else", ee1, ee);
+      assertTrue("ExitUtil.terminateCalled should be true after 1st ExitUtil.terminate call",
+          ExitUtil.terminateCalled());
+      assertSame("ExitUtil.terminate should store its 1st call's ExitException",
+          ee1, ExitUtil.getFirstExitException());
+
+      // simulate/check 2nd call not overwriting 1st one
+      ee = intercept(ExitException.class, () -> ExitUtil.terminate(ee2));
+      assertSame("ExitUtil.terminate should have rethrown its ExitException argument but it "
+          + "had thrown something else", ee2, ee);
+      assertTrue("ExitUtil.terminateCalled should still be true after 2nd ExitUtil.terminate call",
+          ExitUtil.terminateCalled());
+      // 2nd call rethrew the 2nd ExitException, yet only the 1st should have been stored
+      assertSame("ExitUtil.terminate when called twice should only remember 1st call's "
+          + "ExitException", ee1, ExitUtil.getFirstExitException());
+
+      // simulate cleanup; also tries to make sure state is ok for all junit still has to do
+      ExitUtil.resetFirstExitException();
+      assertFalse("ExitUtil.terminateCalled should be false after "
+          + "ExitUtil.resetFirstExitException call", ExitUtil.terminateCalled());
+      assertNull("ExitUtil.getFirstExitException should be null after "
+          + "ExitUtil.resetFirstExitException call", ExitUtil.getFirstExitException());
+    } finally {
+      // cleanup
+      ExitUtil.resetFirstExitException();

Review Comment:
   I see that other tests in the util package use Before and After rather than 
BeforeClass and AfterClass. TestExitUtil does not rely on resources that would 
benefit from being set up only once for all the tests, so I prefer to stick 
with the package's convention: I'll use Before and After, not BeforeClass and 
AfterClass.
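
   The intercept helper used throughout the test above is 
org.apache.hadoop.test.LambdaTestUtils.intercept; a simplified generic 
stand-in for illustration (not the Hadoop implementation) could look like:

```java
// Simplified stand-in for LambdaTestUtils.intercept: run a lambda, assert it
// throws the expected type, and return the throwable so further assertions
// (like the assertSame calls above) can be made on it.
public class InterceptDemo {
  @FunctionalInterface
  interface ThrowingRunnable {
    void run() throws Throwable;
  }

  static <T extends Throwable> T intercept(Class<T> expected, ThrowingRunnable body)
      throws Throwable {
    try {
      body.run();
    } catch (Throwable t) {
      if (expected.isInstance(t)) {
        return expected.cast(t); // caller can inspect the caught instance
      }
      throw t; // wrong type: propagate
    }
    throw new AssertionError("expected " + expected.getName() + " but nothing was thrown");
  }

  public static void main(String[] args) throws Throwable {
    IllegalStateException ise = new IllegalStateException("boom");
    IllegalStateException caught =
        intercept(IllegalStateException.class, () -> { throw ise; });
    System.out.println(caught == ise); // same instance is returned
  }
}
```

   Returning the caught throwable, rather than just asserting its type, is 
what lets the test verify the exact ExitException instance is rethrown.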





Issue Time Tracking
---

Worklog Id: (was: 789706)
Time Spent: 3h  (was: 2h 50m)

> shutdownhookmanager should not be multithreaded (deadlock possible)
> ---
>
> Key: 





[jira] [Work logged] (HADOOP-18033) Upgrade fasterxml Jackson to 2.13.0

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18033?focusedWorklogId=789703&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789703
 ]

ASF GitHub Bot logged work on HADOOP-18033:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 18:33
Start Date: 11/Jul/22 18:33
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on PR #4460:
URL: https://github.com/apache/hadoop/pull/4460#issuecomment-1180734620

   > So it looks like the new dependency on rs-api needs to be removed. This 
may not require jackson to be downgraded too.
   
   javax.ws.rs-api was introduced only to support the Jackson 2.13 upgrade. 
Without rs-api (on Jackson 2.13), the majority of YARN and ATS tests fail, as 
Jackson 2.13 has a dependency on the JAX-RS 2 based rs-api (PR #3749)




Issue Time Tracking
---

Worklog Id: (was: 789703)
Time Spent: 6h 10m  (was: 6h)

> Upgrade fasterxml Jackson to 2.13.0
> ---
>
> Key: HADOOP-18033
> URL: https://issues.apache.org/jira/browse/HADOOP-18033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> Spark 3.2.0 depends on Jackson 2.12.3. Let's upgrade to 2.12.5 (2.12.x latest 
> as of now) or upper.









[jira] [Commented] (HADOOP-18074) Partial/Incomplete groups list can be returned in LDAP groups lookup

2022-07-11 Thread Larry McCay (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17565139#comment-17565139
 ] 

Larry McCay commented on HADOOP-18074:
--

[~ste...@apache.org] - I haven't done the CP yet; I wanted to make sure I 
understood the process here.
I plan to do it via GitHub with a new PR from a HADOOP-18074-branch-3.3 branch 
name - in order to get the full set of precommits green.


> Partial/Incomplete groups list can be returned in LDAP groups lookup
> 
>
> Key: HADOOP-18074
> URL: https://issues.apache.org/jira/browse/HADOOP-18074
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Philippe Lanoe
>Assignee: Larry McCay
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Hello,
> The  
> {code:java}
> Set<String> doGetGroups(String user, int goUpHierarchy) {code}
> method in
> [https://github.com/apache/hadoop/blob/b27732c69b114f24358992a5a4d170bc94e2ceaf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java#L476]
> It appears to have an issue if a *NamingException* is caught in the middle 
> of the loop:
> The groups variable is not reset in the catch clause, and therefore the 
> fallback lookup cannot be executed (when goUpHierarchy==0 at least):
> {code:java}
> if (groups.isEmpty() || goUpHierarchy > 0) {
> groups = lookupGroup(result, c, goUpHierarchy);
> }
> {code}
>  
> The consequence is that only a partial list of groups is returned, which is 
> not correct.
> The following options could be used as solutions:
>  * Reset the groups to an empty list in the catch clause, to trigger the 
> fallback query.
>  * Add an option flag to enable ignoring groups with a NamingException 
> (since they are most probably not groups).
> Independently, should an issue also occur in both the first lookup and the 
> fallback query (so the full list cannot be returned), the method 
> should/could (with an option flag) throw an Exception, because in some 
> scenarios accuracy is important.
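> 
> The first option (resetting the accumulated groups in the catch clause so 
> the fallback query fires) can be sketched in isolation; all names below are 
> hypothetical and only mimic the control flow of LdapGroupsMapping:
> 
> {code:java}
> {code}

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the proposed fix: if the primary lookup fails
// partway, discard the partial result so the fallback path is taken.
// (Names are illustrative; the real code lives in LdapGroupsMapping.)
public class GroupsLookupDemo {
  static class LookupFailure extends Exception { }

  // Simulated primary lookup that fails after some groups were collected.
  static Set<String> primaryLookup() throws LookupFailure {
    throw new LookupFailure();
  }

  static Set<String> fallbackLookup() {
    return new HashSet<>(Arrays.asList("staff", "admins"));
  }

  static Set<String> getGroups() {
    Set<String> groups = new HashSet<>();
    try {
      groups.addAll(Collections.singleton("partial-group"));
      groups.addAll(primaryLookup());
    } catch (LookupFailure e) {
      groups = new HashSet<>(); // reset, so the fallback below triggers
    }
    if (groups.isEmpty()) {
      groups = fallbackLookup();
    }
    return groups;
  }

  public static void main(String[] args) {
    // the fallback's complete list is returned, not the partial one
    System.out.println(getGroups());
  }
}
```

Without the reset in the catch clause, the non-empty partial set would 
suppress the fallback and the incomplete list would be returned.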






[jira] [Work logged] (HADOOP-18106) Handle memory fragmentation in S3 Vectored IO implementation.

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18106?focusedWorklogId=789700&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789700
 ]

ASF GitHub Bot logged work on HADOOP-18106:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 18:23
Start Date: 11/Jul/22 18:23
Worklog Time Spent: 10m 
  Work Description: mukund-thakur closed pull request #4427: HADOOP-18106: 
Handle memory fragmentation in S3A Vectored IO
URL: https://github.com/apache/hadoop/pull/4427




Issue Time Tracking
---

Worklog Id: (was: 789700)
Time Spent: 4.5h  (was: 4h 20m)

> Handle memory fragmentation in S3 Vectored IO implementation.
> -
>
> Key: HADOOP-18106
> URL: https://issues.apache.org/jira/browse/HADOOP-18106
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Since we have implemented merging of ranges in the S3AInputStream
> implementation of the vectored IO API, it can lead to memory fragmentation.
> Let me explain with an example.
>  
> Suppose a client requests 3 ranges:
> 0-500, 700-1000 and 1200-1500.
> Because of merging, all the above ranges will get merged into one, and we
> will allocate a big byte buffer covering 0-1500 but return sliced byte
> buffers for the requested ranges.
> Once the client is done reading all the ranges, it will only be able to free
> the memory for the requested ranges; the memory of the gaps (here 500-700 and
> 1000-1200) will never be released.
>  
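
The gap-retention problem can be illustrated with plain NIO buffers. The sizes, class name, and the `sliceRange` helper below are illustrative stand-ins, not the actual S3AInputStream code:

```java
import java.nio.ByteBuffer;

public class MergedRangeSketch {

  // Returns a view of [offset, offset + len) that shares the backing memory.
  static ByteBuffer sliceRange(ByteBuffer merged, int offset, int len) {
    ByteBuffer dup = merged.duplicate();
    dup.position(offset);
    dup.limit(offset + len);
    return dup.slice();
  }

  public static void main(String[] args) {
    // One allocation backs the whole merged range 0-1500.
    ByteBuffer merged = ByteBuffer.allocate(1500);

    // Sliced views for the three requested ranges.
    ByteBuffer r1 = sliceRange(merged, 0, 500);
    ByteBuffer r2 = sliceRange(merged, 700, 300);
    ByteBuffer r3 = sliceRange(merged, 1200, 300);

    // The client sees 1100 usable bytes, but 1500 are allocated: the gap
    // bytes (500-700 and 1000-1200) can never be freed on their own, because
    // the single backing buffer lives until every slice is unreachable.
    int usable = r1.capacity() + r2.capacity() + r3.capacity();
    System.out.println(usable + " of " + merged.capacity()); // prints "1100 of 1500"
  }
}
```

Each slice shares the backing array of `merged`, so releasing one slice never returns its gap neighbours to the allocator.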






[jira] [Work logged] (HADOOP-18106) Handle memory fragmentation in S3 Vectored IO implementation.

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18106?focusedWorklogId=789699&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789699
 ]

ASF GitHub Bot logged work on HADOOP-18106:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 18:23
Start Date: 11/Jul/22 18:23
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on PR #4427:
URL: https://github.com/apache/hadoop/pull/4427#issuecomment-1180725218

   Rebased patch, which got merged: https://github.com/apache/hadoop/pull/4445 




Issue Time Tracking
---

Worklog Id: (was: 789699)
Time Spent: 4h 20m  (was: 4h 10m)

> Handle memory fragmentation in S3 Vectored IO implementation.
> -
>
> Key: HADOOP-18106
> URL: https://issues.apache.org/jira/browse/HADOOP-18106
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Since we have implemented merging of ranges in the S3AInputStream
> implementation of the vectored IO API, it can lead to memory fragmentation.
> Let me explain with an example.
>  
> Suppose a client requests 3 ranges:
> 0-500, 700-1000 and 1200-1500.
> Because of merging, all the above ranges will get merged into one, and we
> will allocate a big byte buffer covering 0-1500 but return sliced byte
> buffers for the requested ranges.
> Once the client is done reading all the ranges, it will only be able to free
> the memory for the requested ranges; the memory of the gaps (here 500-700 and
> 1000-1200) will never be released.
>  






[GitHub] [hadoop] mukund-thakur closed pull request #4427: HADOOP-18106: Handle memory fragmentation in S3A Vectored IO

2022-07-11 Thread GitBox


mukund-thakur closed pull request #4427: HADOOP-18106: Handle memory 
fragmentation in S3A Vectored IO
URL: https://github.com/apache/hadoop/pull/4427


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on pull request #4427: HADOOP-18106: Handle memory fragmentation in S3A Vectored IO

2022-07-11 Thread GitBox


mukund-thakur commented on PR #4427:
URL: https://github.com/apache/hadoop/pull/4427#issuecomment-1180725218

   Rebased patch, which got merged: https://github.com/apache/hadoop/pull/4445 





[jira] [Work logged] (HADOOP-18106) Handle memory fragmentation in S3 Vectored IO implementation.

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18106?focusedWorklogId=789696&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789696
 ]

ASF GitHub Bot logged work on HADOOP-18106:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 18:12
Start Date: 11/Jul/22 18:12
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on PR #4427:
URL: https://github.com/apache/hadoop/pull/4427#issuecomment-1180714616

   Can you close this now?




Issue Time Tracking
---

Worklog Id: (was: 789696)
Time Spent: 4h 10m  (was: 4h)

> Handle memory fragmentation in S3 Vectored IO implementation.
> -
>
> Key: HADOOP-18106
> URL: https://issues.apache.org/jira/browse/HADOOP-18106
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Since we have implemented merging of ranges in the S3AInputStream
> implementation of the vectored IO API, it can lead to memory fragmentation.
> Let me explain with an example.
>  
> Suppose a client requests 3 ranges:
> 0-500, 700-1000 and 1200-1500.
> Because of merging, all the above ranges will get merged into one, and we
> will allocate a big byte buffer covering 0-1500 but return sliced byte
> buffers for the requested ranges.
> Once the client is done reading all the ranges, it will only be able to free
> the memory for the requested ranges; the memory of the gaps (here 500-700 and
> 1000-1200) will never be released.
>  






[GitHub] [hadoop] steveloughran commented on pull request #4427: HADOOP-18106: Handle memory fragmentation in S3A Vectored IO

2022-07-11 Thread GitBox


steveloughran commented on PR #4427:
URL: https://github.com/apache/hadoop/pull/4427#issuecomment-1180714616

   Can you close this now?





[jira] [Commented] (HADOOP-18074) Partial/Incomplete groups list can be returned in LDAP groups lookup

2022-07-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17565131#comment-17565131
 ] 

Steve Loughran commented on HADOOP-18074:
-

You got a +1 in the PR from me, which is all you needed. Have you cherry-picked to branch-3? You 
can do that without any need for review/approval again. Sometimes I do it 
locally, sometimes I go via GitHub again just for the Yetus test runs.

> Partial/Incomplete groups list can be returned in LDAP groups lookup
> 
>
> Key: HADOOP-18074
> URL: https://issues.apache.org/jira/browse/HADOOP-18074
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Philippe Lanoe
>Assignee: Larry McCay
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Hello,
> The
> {code:java}
> Set<String> doGetGroups(String user, int goUpHierarchy) {code}
> method in
> [https://github.com/apache/hadoop/blob/b27732c69b114f24358992a5a4d170bc94e2ceaf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java#L476]
> appears to have an issue if a *NamingException* is caught in the middle of
> the loop:
> the groups variable is not reset in the catch clause, and therefore the
> fallback lookup cannot be executed (at least when goUpHierarchy==0):
> {code:java}
> if (groups.isEmpty() || goUpHierarchy > 0) {
>   groups = lookupGroup(result, c, goUpHierarchy);
> }
> {code}
>  
> The consequence is that only a partial list of groups is returned, which is
> not correct.
> The following options could be used as a solution:
>  * Reset the groups variable to an empty list in the catch clause, to trigger
> the fallback query.
>  * Add an option flag to enable ignoring groups that raise a NamingException
> (since they are most probably not groups).
> Independently, should an issue occur (so that the full list cannot be
> returned) in both the first lookup and the fallback query, the method
> should/could (with an option flag) throw an Exception, because in some
> scenarios accuracy is important.






[jira] [Commented] (HADOOP-18074) Partial/Incomplete groups list can be returned in LDAP groups lookup

2022-07-11 Thread Larry McCay (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17565128#comment-17565128
 ] 

Larry McCay commented on HADOOP-18074:
--

Oh, okay, I can do it then. The contribution page seemed to indicate it
should be done by the reviewer.

Thanks!




> Partial/Incomplete groups list can be returned in LDAP groups lookup
> 
>
> Key: HADOOP-18074
> URL: https://issues.apache.org/jira/browse/HADOOP-18074
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Philippe Lanoe
>Assignee: Larry McCay
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Hello,
> The
> {code:java}
> Set<String> doGetGroups(String user, int goUpHierarchy) {code}
> method in
> [https://github.com/apache/hadoop/blob/b27732c69b114f24358992a5a4d170bc94e2ceaf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java#L476]
> appears to have an issue if a *NamingException* is caught in the middle of
> the loop:
> the groups variable is not reset in the catch clause, and therefore the
> fallback lookup cannot be executed (at least when goUpHierarchy==0):
> {code:java}
> if (groups.isEmpty() || goUpHierarchy > 0) {
>   groups = lookupGroup(result, c, goUpHierarchy);
> }
> {code}
>  
> The consequence is that only a partial list of groups is returned, which is
> not correct.
> The following options could be used as a solution:
>  * Reset the groups variable to an empty list in the catch clause, to trigger
> the fallback query.
>  * Add an option flag to enable ignoring groups that raise a NamingException
> (since they are most probably not groups).
> Independently, should an issue occur (so that the full list cannot be
> returned) in both the first lookup and the fallback query, the method
> should/could (with an option flag) throw an Exception, because in some
> scenarios accuracy is important.






[jira] [Work logged] (HADOOP-18315) Fix 3.3 build problems caused by backport of HADOOP-11867.

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18315?focusedWorklogId=789695&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789695
 ]

ASF GitHub Bot logged work on HADOOP-18315:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 18:06
Start Date: 11/Jul/22 18:06
Worklog Time Spent: 10m 
  Work Description: omalley commented on code in PR #4511:
URL: https://github.com/apache/hadoop/pull/4511#discussion_r918217672


##
hadoop-project/pom.xml:
##
@@ -1723,17 +1723,6 @@
   
 
   
-      <dependency>
-        <groupId>org.apache.hbase</groupId>
-        <artifactId>hbase-server</artifactId>
-        <version>${hbase.version}</version>
-        <exclusions>
-          <exclusion>
-            <groupId>log4j</groupId>
-            <artifactId>log4j</artifactId>
-          </exclusion>
-        </exclusions>
-      </dependency>

Review Comment:
   Ok, that was caused by HADOOP-18088 in 160b6d106.





Issue Time Tracking
---

Worklog Id: (was: 789695)
Time Spent: 40m  (was: 0.5h)

> Fix 3.3 build problems caused by backport of HADOOP-11867.
> --
>
> Key: HADOOP-18315
> URL: https://issues.apache.org/jira/browse/HADOOP-18315
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.5
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>







[GitHub] [hadoop] omalley commented on a diff in pull request #4511: HADOOP-18315. Fix 3.3 build problems caused by backport of HADOOP-11867.

2022-07-11 Thread GitBox


omalley commented on code in PR #4511:
URL: https://github.com/apache/hadoop/pull/4511#discussion_r918217672


##
hadoop-project/pom.xml:
##
@@ -1723,17 +1723,6 @@
   
 
   
-      <dependency>
-        <groupId>org.apache.hbase</groupId>
-        <artifactId>hbase-server</artifactId>
-        <version>${hbase.version}</version>
-        <exclusions>
-          <exclusion>
-            <groupId>log4j</groupId>
-            <artifactId>log4j</artifactId>
-          </exclusion>
-        </exclusions>
-      </dependency>

Review Comment:
   Ok, that was caused by HADOOP-18088 in 160b6d106.






[jira] [Work logged] (HADOOP-18315) Fix 3.3 build problems caused by backport of HADOOP-11867.

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18315?focusedWorklogId=789693&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789693
 ]

ASF GitHub Bot logged work on HADOOP-18315:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 18:05
Start Date: 11/Jul/22 18:05
Worklog Time Spent: 10m 
  Work Description: omalley commented on code in PR #4511:
URL: https://github.com/apache/hadoop/pull/4511#discussion_r918216569


##
hadoop-tools/hadoop-benchmark/pom.xml:
##
@@ -22,11 +22,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.4.0-SNAPSHOT</version>
+    <version>3.3.9-SNAPSHOT</version>
     <relativePath>../../hadoop-project/pom.xml</relativePath>
   </parent>
   <artifactId>hadoop-benchmark</artifactId>
-  <version>3.4.0-SNAPSHOT</version>
+  <version>3.3.9-SNAPSHOT</version>

Review Comment:
   It should track the current branch, so 3.3.9 is correct. This was fixed by 
HADOOP-18322 in 7eb1c908a0.





Issue Time Tracking
---

Worklog Id: (was: 789693)
Time Spent: 0.5h  (was: 20m)

> Fix 3.3 build problems caused by backport of HADOOP-11867.
> --
>
> Key: HADOOP-18315
> URL: https://issues.apache.org/jira/browse/HADOOP-18315
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.5
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>







[GitHub] [hadoop] omalley commented on a diff in pull request #4511: HADOOP-18315. Fix 3.3 build problems caused by backport of HADOOP-11867.

2022-07-11 Thread GitBox


omalley commented on code in PR #4511:
URL: https://github.com/apache/hadoop/pull/4511#discussion_r918216569


##
hadoop-tools/hadoop-benchmark/pom.xml:
##
@@ -22,11 +22,11 @@
   <parent>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-project</artifactId>
-    <version>3.4.0-SNAPSHOT</version>
+    <version>3.3.9-SNAPSHOT</version>
     <relativePath>../../hadoop-project/pom.xml</relativePath>
   </parent>
   <artifactId>hadoop-benchmark</artifactId>
-  <version>3.4.0-SNAPSHOT</version>
+  <version>3.3.9-SNAPSHOT</version>

Review Comment:
   It should track the current branch, so 3.3.9 is correct. This was fixed by 
HADOOP-18322 in 7eb1c908a0.






[GitHub] [hadoop] hadoop-yetus commented on pull request #4548: Hadoop-18330

2022-07-11 Thread GitBox


hadoop-yetus commented on PR #4548:
URL: https://github.com/apache/hadoop/pull/4548#issuecomment-1180697627

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | 
[/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4548/1/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | -1 :x: |  compile  |   0m 26s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4548/1/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javac  |   0m 26s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4548/1/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |   0m 22s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4548/1/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  javac  |   0m 22s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4548/1/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4548/1/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/buildtool-patch-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4548/1/artifact/out/buildtool-patch-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  The patch fails to run checkstyle in hadoop-aws  |
   | -1 :x: |  mvnsite  |   0m 25s | 
[/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4548/1/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  spotbugs  |   0m 23s | 
[/patch-spotbugs-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4548/1/artifact/out/patch-spotbugs-hadoop-tools_hadoop-aws.txt)

[jira] [Commented] (HADOOP-18074) Partial/Incomplete groups list can be returned in LDAP groups lookup

2022-07-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17565121#comment-17565121
 ] 

Ayush Saxena commented on HADOOP-18074:
---

{quote}I then realized that we should have gotten the Reviewed flag set on this 
JIRA first.
{quote}
[~lmccay] You mean the *Hadoop Flags: Reviewed* label in the Jira?
You are a committer: once you commit it to trunk after getting a binding +1 
and cherry-pick to the relevant branches, set that flag. Has something changed with 
the flag, or are you talking about something different?

> Partial/Incomplete groups list can be returned in LDAP groups lookup
> 
>
> Key: HADOOP-18074
> URL: https://issues.apache.org/jira/browse/HADOOP-18074
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Philippe Lanoe
>Assignee: Larry McCay
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Hello,
> The
> {code:java}
> Set<String> doGetGroups(String user, int goUpHierarchy) {code}
> method in
> [https://github.com/apache/hadoop/blob/b27732c69b114f24358992a5a4d170bc94e2ceaf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java#L476]
> appears to have an issue if a *NamingException* is caught in the middle of
> the loop:
> the groups variable is not reset in the catch clause, and therefore the
> fallback lookup cannot be executed (at least when goUpHierarchy==0):
> {code:java}
> if (groups.isEmpty() || goUpHierarchy > 0) {
>   groups = lookupGroup(result, c, goUpHierarchy);
> }
> {code}
>  
> The consequence is that only a partial list of groups is returned, which is
> not correct.
> The following options could be used as a solution:
>  * Reset the groups variable to an empty list in the catch clause, to trigger
> the fallback query.
>  * Add an option flag to enable ignoring groups that raise a NamingException
> (since they are most probably not groups).
> Independently, should an issue occur (so that the full list cannot be
> returned) in both the first lookup and the fallback query, the method
> should/could (with an option flag) throw an Exception, because in some
> scenarios accuracy is important.






[jira] [Work logged] (HADOOP-18217) shutdownhookmanager should not be multithreaded (deadlock possible)

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18217?focusedWorklogId=789684&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789684
 ]

ASF GitHub Bot logged work on HADOOP-18217:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 17:46
Start Date: 11/Jul/22 17:46
Worklog Time Spent: 10m 
  Work Description: HerCath commented on code in PR #4255:
URL: https://github.com/apache/hadoop/pull/4255#discussion_r918200036


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestExitUtil.java:
##
@@ -0,0 +1,122 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.util;
+
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+import static org.junit.Assert.assertTrue;
+
+import org.junit.Test;
+
+import org.apache.hadoop.util.ExitUtil.ExitException;
+import org.apache.hadoop.util.ExitUtil.HaltException;
+
+
+public class TestExitUtil {
+
+  @Test
+  public void testGetSetExitExceptions() throws Throwable {
+    // prepare states and exceptions
+    ExitUtil.disableSystemExit();
+    ExitUtil.resetFirstExitException();
+    ExitException ee1 = new ExitException(1, "TestExitUtil forged 1st ExitException");
+    ExitException ee2 = new ExitException(2, "TestExitUtil forged 2nd ExitException");
+    try {
+      // check proper initial settings
+      assertFalse("ExitUtil.terminateCalled initial value should be false",
+          ExitUtil.terminateCalled());
+      assertNull("ExitUtil.getFirstExitException initial value should be null",
+          ExitUtil.getFirstExitException());
+
+      // simulate/check 1st call
+      ExitException ee = intercept(ExitException.class, () -> ExitUtil.terminate(ee1));
+      assertSame("ExitUtil.terminate should have rethrown its ExitException argument but it "
+          + "had thrown something else", ee1, ee);
+      assertTrue("ExitUtil.terminateCalled should be true after 1st ExitUtil.terminate call",
+          ExitUtil.terminateCalled());
+      assertSame("ExitUtil.terminate should store its 1st call's ExitException",
+          ee1, ExitUtil.getFirstExitException());
+
+      // simulate/check 2nd call not overwriting the 1st one
+      ee = intercept(ExitException.class, () -> ExitUtil.terminate(ee2));
+      assertSame("ExitUtil.terminate should have rethrown its ExitException argument but it "
+          + "had thrown something else", ee2, ee);
+      assertTrue("ExitUtil.terminateCalled should still be true after 2nd ExitUtil.terminate call",
+          ExitUtil.terminateCalled());
+      // the 2nd call rethrew the 2nd ExitException, yet only the 1st one should have been stored
+      assertSame("ExitUtil.terminate when called twice should only remember 1st call's "
+          + "ExitException", ee1, ExitUtil.getFirstExitException());
+
+      // simulate cleanup, also tries to make sure state is ok for all junit still has to do
+      ExitUtil.resetFirstExitException();
+      assertFalse("ExitUtil.terminateCalled should be false after "
+          + "ExitUtil.resetFirstExitException call", ExitUtil.terminateCalled());
+      assertNull("ExitUtil.getFirstExitException should be null after "
+          + "ExitUtil.resetFirstExitException call", ExitUtil.getFirstExitException());
+    } finally {
+      // cleanup
+      ExitUtil.resetFirstExitException();

Review Comment:
   Ok, I'll then also add a beforeClass as its counterpart. Yes, there is no 
API to re-enable. Maybe, to limit access, I can add a package-protected 
enableSystemExit and enableSystemHalt so the test class can use them but not 
everyone else.
   
   What do you think?
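
The package-protected re-enable idea discussed above could look like the following sketch. The class, field, and method names here are assumptions for illustration only, not the real `ExitUtil` internals:

```java
// Minimal model of the disable/enable toggles under discussion; the
// package-private enable methods keep the "re-arm System.exit" capability
// out of the public API while letting same-package test hooks
// (e.g. a JUnit @BeforeClass/@AfterClass) restore the default behaviour.
public final class ExitToggleSketch {

  private static volatile boolean systemExitDisabled = false;
  private static volatile boolean systemHaltDisabled = false;

  private ExitToggleSketch() {
  }

  public static void disableSystemExit() { systemExitDisabled = true; }
  public static void disableSystemHalt() { systemHaltDisabled = true; }

  // package-private on purpose: only classes in the same package can re-enable
  static void enableSystemExit() { systemExitDisabled = false; }
  static void enableSystemHalt() { systemHaltDisabled = false; }

  static boolean isSystemExitDisabled() { return systemExitDisabled; }
  static boolean isSystemHaltDisabled() { return systemHaltDisabled; }

  public static void main(String[] args) {
    disableSystemExit();
    System.out.println(isSystemExitDisabled()); // prints "true"
    enableSystemExit();
    System.out.println(isSystemExitDisabled()); // prints "false"
  }
}
```

Package-private visibility limits the blast radius: production code cannot accidentally re-enable `System.exit` after a test framework disabled it.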





Issue Time Tracking
---

Worklog Id: (was: 789684)
Time Spent: 2h 50m  (was: 2h 40m)

> shutdownhookmanager should not be multithreaded (deadlock possible)
> ---
>
> Key: HADOOP-18217
> URL: 

[GitHub] [hadoop] goiri commented on a diff in pull request #4540: YARN-11160. Support getResourceProfiles, getResourceProfile API's for Federation

2022-07-11 Thread GitBox


goiri commented on code in PR #4540:
URL: https://github.com/apache/hadoop/pull/4540#discussion_r918200085


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestRouterYarnClientUtils.java:
##
@@ -27,14 +27,7 @@
 
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableSet;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetClusterMetricsResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetNodesToLabelsResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodeLabelsResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetLabelsToNodesResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetQueueUserAclsInfoResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.ReservationListResponse;
-import 
org.apache.hadoop.yarn.api.protocolrecords.GetAllResourceTypeInfoResponse;
+import org.apache.hadoop.yarn.api.protocolrecords.*;

Review Comment:
   Avoid



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java:
##
@@ -1169,4 +1172,57 @@ public void testGetQueueInfo() throws Exception {
 Assert.assertEquals(queueInfo.getChildQueues().size(), 12, 0);
 Assert.assertEquals(queueInfo.getAccessibleNodeLabels().size(), 1);
   }
+
+  @Test
+  public void testGetResourceProfiles() throws Exception {
+    LOG.info("Test FederationClientInterceptor : Get Resource Profiles request.");
+
+    // null request
+    LambdaTestUtils.intercept(YarnException.class, "Missing getResourceProfiles request.",
+        () -> interceptor.getResourceProfiles(null));
+
+    // normal request
+    GetAllResourceProfilesRequest request = GetAllResourceProfilesRequest.newInstance();
+    GetAllResourceProfilesResponse response = interceptor.getResourceProfiles(request);
+
+    Assert.assertNotNull(response);
+    Assert.assertEquals(response.getResourceProfiles().get("maximum").getMemorySize(), 32768);

Review Comment:
   Can we extract some of these?
   resProfiles = response.getResourceProfiles()
   maxResProfiles = resProfiles.get("maximum")
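The suggested extraction could look like the following. This is a self-contained sketch with a stand-in `Resource` class and plain `assert` statements, since the real test uses YARN's `Resource` records and JUnit; only the `getMemorySize`/`getVirtualCores` accessors and the profile names/values are taken from the diff above.

```java
import java.util.HashMap;
import java.util.Map;

public class ExtractLocalsDemo {
    // Stand-in for org.apache.hadoop.yarn.api.records.Resource (illustrative only).
    static final class Resource {
        private final long memorySize;
        private final int virtualCores;
        Resource(long memorySize, int virtualCores) {
            this.memorySize = memorySize;
            this.virtualCores = virtualCores;
        }
        long getMemorySize() { return memorySize; }
        int getVirtualCores() { return virtualCores; }
    }

    // Stand-in for response.getResourceProfiles(), using the values asserted in the test.
    static Map<String, Resource> profiles() {
        Map<String, Resource> m = new HashMap<>();
        m.put("maximum", new Resource(32768, 16));
        m.put("default", new Resource(8192, 8));
        m.put("minimum", new Resource(4096, 4));
        return m;
    }

    public static void main(String[] args) {
        // Instead of repeating the full getResourceProfiles().get(...) chain in
        // every assertion, bind each profile once and assert against the local.
        Map<String, Resource> resProfiles = profiles();
        Resource maxProfile = resProfiles.get("maximum");
        Resource defProfile = resProfiles.get("default");
        Resource minProfile = resProfiles.get("minimum");

        if (maxProfile.getMemorySize() != 32768 || maxProfile.getVirtualCores() != 16
            || defProfile.getMemorySize() != 8192 || defProfile.getVirtualCores() != 8
            || minProfile.getMemorySize() != 4096 || minProfile.getVirtualCores() != 4) {
            throw new AssertionError("unexpected profile values");
        }
        System.out.println("ok");
    }
}
```

The same shape drops straight into the JUnit test: three locals, six one-line asserts.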



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java:
##
@@ -1169,4 +1172,57 @@ public void testGetQueueInfo() throws Exception {
     Assert.assertEquals(queueInfo.getChildQueues().size(), 12, 0);
     Assert.assertEquals(queueInfo.getAccessibleNodeLabels().size(), 1);
   }
+
+  @Test
+  public void testGetResourceProfiles() throws Exception {
+    LOG.info("Test FederationClientInterceptor : Get Resource Profiles request.");
+
+    // null request
+    LambdaTestUtils.intercept(YarnException.class, "Missing getResourceProfiles request.",
+        () -> interceptor.getResourceProfiles(null));
+
+    // normal request
+    GetAllResourceProfilesRequest request = GetAllResourceProfilesRequest.newInstance();
+    GetAllResourceProfilesResponse response = interceptor.getResourceProfiles(request);
+
+    Assert.assertNotNull(response);
+    Assert.assertEquals(response.getResourceProfiles().get("maximum").getMemorySize(), 32768);
+    Assert.assertEquals(response.getResourceProfiles().get("maximum").getVirtualCores(), 16);
+    Assert.assertEquals(response.getResourceProfiles().get("default").getMemorySize(), 8192);
+    Assert.assertEquals(response.getResourceProfiles().get("default").getVirtualCores(), 8);
+    Assert.assertEquals(response.getResourceProfiles().get("minimum").getMemorySize(), 4096);
+    Assert.assertEquals(response.getResourceProfiles().get("minimum").getVirtualCores(), 4);
+  }
+
+  @Test
+  public void testGetResourceProfile() throws Exception {
+    LOG.info("Test FederationClientInterceptor : Get Resource Profile request.");
+
+    // null request
+    LambdaTestUtils.intercept(YarnException.class,
+        "Missing getResourceProfile request or profileName.",
+        () -> interceptor.getResourceProfile(null));
+
+    // normal request
+    GetResourceProfileRequest request = GetResourceProfileRequest.newInstance("maximum");
+    GetResourceProfileResponse response = interceptor.getResourceProfile(request);
+
+    Assert.assertNotNull(response);
+    Assert.assertEquals(response.getResource().getMemorySize(), 32768);
+    Assert.assertEquals(response.getResource().getVirtualCores(), 16);
+
+    request = GetResourceProfileRequest.newInstance("default");

Review Comment:
   request2 and response2?



##

[GitHub] [hadoop] HerCath commented on a diff in pull request #4255: HADOOP-18217. ExitUtil synchronized blocks reduced to avoid exit bloc…

2022-07-11 Thread GitBox


HerCath commented on code in PR #4255:
URL: https://github.com/apache/hadoop/pull/4255#discussion_r918200036


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestExitUtil.java:
##
@@ -0,0 +1,122 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.util;
+
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+import static org.junit.Assert.assertTrue;
+
+import org.junit.Test;
+
+import org.apache.hadoop.util.ExitUtil.ExitException;
+import org.apache.hadoop.util.ExitUtil.HaltException;
+
+
+public class TestExitUtil {
+
+  @Test
+  public void testGetSetExitExceptions() throws Throwable {
+    // prepare states and exceptions
+    ExitUtil.disableSystemExit();
+    ExitUtil.resetFirstExitException();
+    ExitException ee1 = new ExitException(1, "TestExitUtil forged 1st ExitException");
+    ExitException ee2 = new ExitException(2, "TestExitUtil forged 2nd ExitException");
+    try {
+      // check proper initial settings
+      assertFalse("ExitUtil.terminateCalled initial value should be false",
+          ExitUtil.terminateCalled());
+      assertNull("ExitUtil.getFirstExitException initial value should be null",
+          ExitUtil.getFirstExitException());
+
+      // simulate/check 1st call
+      ExitException ee = intercept(ExitException.class, () -> ExitUtil.terminate(ee1));
+      assertSame("ExitUtil.terminate should have rethrown its ExitException argument but it "
+          + "had thrown something else", ee1, ee);
+      assertTrue("ExitUtil.terminateCalled should be true after 1st ExitUtil.terminate call",
+          ExitUtil.terminateCalled());
+      assertSame("ExitUtil.terminate should store its 1st call's ExitException",
+          ee1, ExitUtil.getFirstExitException());
+
+      // simulate/check 2nd call not overwriting the 1st one
+      ee = intercept(ExitException.class, () -> ExitUtil.terminate(ee2));
+      assertSame("ExitUtil.terminate should have rethrown its ExitException argument but it "
+          + "had thrown something else", ee2, ee);
+      assertTrue("ExitUtil.terminateCalled should still be true after 2nd ExitUtil.terminate call",
+          ExitUtil.terminateCalled());
+      // the 2nd call rethrew the 2nd ExitException, yet only the 1st one should have been stored
+      assertSame("ExitUtil.terminate when called twice should only remember 1st call's "
+          + "ExitException", ee1, ExitUtil.getFirstExitException());
+
+      // simulate cleanup; also makes sure state is OK for whatever JUnit still has to do
+      ExitUtil.resetFirstExitException();
+      assertFalse("ExitUtil.terminateCalled should be false after "
+          + "ExitUtil.resetFirstExitException call", ExitUtil.terminateCalled());
+      assertNull("ExitUtil.getFirstExitException should be null after "
+          + "ExitUtil.resetFirstExitException call", ExitUtil.getFirstExitException());
+    } finally {
+      // cleanup
+      ExitUtil.resetFirstExitException();

Review Comment:
   OK, I'll then also add a @BeforeClass as its counterpart. Yes, there is no API to re-enable. Maybe, to limit access, I can add package-protected enableSystemExit and enableSystemHalt methods so the test class can use them, but not really everyone.
   
   What do you think?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a diff in pull request #4543: YARN-8900. [Router] Federation: routing getContainers REST invocations transparently to multiple RMs

2022-07-11 Thread GitBox


goiri commented on code in PR #4543:
URL: https://github.com/apache/hadoop/pull/4543#discussion_r918197490


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java:
##
@@ -1336,7 +1336,51 @@ public AppAttemptInfo getAppAttempt(HttpServletRequest req,
   @Override
   public ContainersInfo getContainers(HttpServletRequest req,
       HttpServletResponse res, String appId, String appAttemptId) {
-    throw new NotImplementedException("Code is not implemented");
+    ContainersInfo containersInfo = new ContainersInfo();
+
+    Map<SubClusterId, SubClusterInfo> subClustersActive = null;
+    try {
+      subClustersActive = federationFacade.getSubClusters(true);
+    } catch (YarnException e) {
+      LOG.error(e.getLocalizedMessage());

Review Comment:
   Are we OK just swallowing this?
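On the swallowing question, one common option (a sketch of the general pattern, not necessarily the resolution adopted in this PR; the class and method names below are stand-ins) is to keep the log line but rethrow as an unchecked exception, so the REST layer can surface an error response instead of silently returning an empty ContainersInfo:

```java
public class SurfaceExceptionDemo {
    // Stand-in for org.apache.hadoop.yarn.exceptions.YarnException (illustrative).
    static class YarnException extends Exception {
        YarnException(String msg) { super(msg); }
    }

    // Instead of LOG.error(e.getLocalizedMessage()) and falling through with an
    // empty result, log and rethrow as an unchecked exception; a JAX-RS/servlet
    // layer can then map it to an HTTP error.
    static Object getSubClustersOrFail(boolean fail) {
        try {
            if (fail) {
                throw new YarnException("subcluster lookup failed");
            }
            return new Object(); // stand-in for the active-subclusters map
        } catch (YarnException e) {
            System.err.println("getContainers error: " + e.getMessage()); // keep the log
            throw new RuntimeException("Unable to fetch active subclusters", e);
        }
    }

    public static void main(String[] args) {
        getSubClustersOrFail(false); // success path returns normally
        try {
            getSubClustersOrFail(true);
        } catch (RuntimeException e) {
            // the original cause is preserved for the caller
            System.out.println("surfaced: " + e.getCause().getMessage());
        }
    }
}
```

The trade-off is availability vs. correctness: swallowing keeps partial results flowing when one subcluster is down, while rethrowing makes failures visible to the client.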






[jira] [Work logged] (HADOOP-13144) Enhancing IPC client throughput via multiple connections per user

2022-07-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13144?focusedWorklogId=789678&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-789678
 ]

ASF GitHub Bot logged work on HADOOP-13144:
---

Author: ASF GitHub Bot
Created on: 11/Jul/22 17:39
Start Date: 11/Jul/22 17:39
Worklog Time Spent: 10m 
  Work Description: goiri commented on code in PR #4542:
URL: https://github.com/apache/hadoop/pull/4542#discussion_r918190241


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java:
##
@@ -390,6 +399,56 @@ public void testProxyAddress() throws Exception {
 }
   }
 
+  @Test
+  public void testConnectionWithSocketFactory() throws IOException {
+    Server server;
+    TestRpcService firstProxy = null;
+    TestRpcService secondProxy = null;
+
+    Configuration newConf = new Configuration(conf);
+    newConf.set(CommonConfigurationKeysPublic.
+        HADOOP_RPC_SOCKET_FACTORY_CLASS_DEFAULT_KEY, "");
+
+    RetryPolicy retryPolicy = RetryUtils.getDefaultRetryPolicy(
+        newConf, "Test.No.Such.Key",
+        true, // defaultRetryPolicyEnabled = true
+        "Test.No.Such.Key", "1,6",
+        null);
+
+    // create a server with two handlers
+    server = setupTestServer(newConf, 2);
+    try {
+      // create the first client
+      firstProxy = getClient(addr, newConf);
+      // create the second client
+      secondProxy = getClient(addr, newConf);
+
+      firstProxy.ping(null, newEmptyRequest());
+      secondProxy.ping(null, newEmptyRequest());
+
+      Client client = ProtobufRpcEngine2.getClient(newConf);
+      assertEquals(1, client.getConnectionIds().size());
+
+      stop(null, firstProxy, secondProxy);
+      ProtobufRpcEngine2.clearClientCache();
+
+      // create the first client with index 1
+      firstProxy = getMultipleClientWithIndex(addr, newConf, retryPolicy, 1);
+      // create the second client with index 2
+      secondProxy = getMultipleClientWithIndex(addr, newConf, retryPolicy, 2);
+      firstProxy.ping(null, newEmptyRequest());
+      secondProxy.ping(null, newEmptyRequest());
+
+      client = ProtobufRpcEngine2.getClient(newConf);
+      assertEquals(2, client.getConnectionIds().size());
+    } catch (ServiceException e) {
+      e.printStackTrace();

Review Comment:
   We probably want to surface this.



##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRpcBase.java:
##
@@ -154,11 +155,53 @@ protected static TestRpcService getClient(InetSocketAddress serverAddr,
     }
   }
 
-  protected static void stop(Server server, TestRpcService proxy) {
-    if (proxy != null) {
-      try {
-        RPC.stopProxy(proxy);
-      } catch (Exception ignored) {}
+  /**
+   * Try to obtain a proxy of TestRpcService with an index.
+   * @param serverAddr input server address
+   * @param clientConf input client configuration
+   * @param retryPolicy input retryPolicy
+   * @param index input index
+   * @return one proxy of TestRpcService
+   */
+  protected static TestRpcService getMultipleClientWithIndex(InetSocketAddress serverAddr,
+      Configuration clientConf, RetryPolicy retryPolicy, int index)
+      throws ServiceException, IOException {
+    MockConnectionId connectionId = new MockConnectionId(serverAddr,
+        TestRpcService.class, UserGroupInformation.getCurrentUser(),
+        RPC.getRpcTimeout(clientConf), retryPolicy, clientConf, index);
+    return getClient(connectionId, clientConf);
+  }
+
+  /**
+   * Obtain a TestRpcService Proxy by a connectionId.
+   * @param connId input connectionId
+   * @param clientConf input configuration
+   * @return a TestRpcService Proxy
+   * @throws ServiceException a ServiceException
+   */
+  protected static TestRpcService getClient(ConnectionId connId,
+      Configuration clientConf) throws ServiceException {
+    try {
+      return RPC.getProtocolProxy(
+          TestRpcService.class,
+          0,
+          connId,
+          clientConf,
+          NetUtils.getDefaultSocketFactory(clientConf)).getProxy();
+    } catch (IOException e) {
+      throw new ServiceException(e);
+    }
+  }
+
+  protected static void stop(Server server, TestRpcService... proxies) {
+    if (proxies != null) {
+      for (TestRpcService proxy : proxies) {

Review Comment:
   I believe that if proxies is null, `for (TestRpcService proxy : proxies)` 
already works fine so no need to check if it's null.
   Please double check.
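For what it's worth, a quick standalone check of the varargs semantics (the class and names below are illustrative, not TestRpcBase's): a call site that passes no extra arguments produces an empty array rather than null, so the guard never fires in these tests, but an enhanced for loop over an explicitly-null array does throw NullPointerException, because the loop reads the array's length first.

```java
import java.util.ArrayList;
import java.util.List;

public class VarargsNullDemo {
    // Records which "proxies" were stopped; stands in for RPC.stopProxy calls.
    static final List<String> stopped = new ArrayList<>();

    // Mimics the patched stop(Server, TestRpcService...) helper, guard included.
    static void stop(Object server, String... proxies) {
        if (proxies != null) { // without this, an explicitly-null array would NPE below
            for (String proxy : proxies) {
                if (proxy != null) {
                    stopped.add(proxy);
                }
            }
        }
    }

    public static void main(String[] args) {
        stop(null);                      // no varargs -> empty array, not null; no NPE either way
        System.out.println(stopped.size());

        stop(null, "p1", "p2");          // normal use
        System.out.println(stopped);

        stop(null, (String[]) null);     // only this shape passes a null array;
        System.out.println(stopped.size()); // the guard makes it a no-op instead of an NPE
    }
}
```

So the guard is redundant only in the sense that no existing call site passes a null array, not because iterating null is safe.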



##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java:
##
@@ -390,6 +399,56 @@ public void testProxyAddress() throws Exception {
 }
   }
 
+  @Test
+  public void testConnectionWithSocketFactory() throws IOException {
+Server server;
+TestRpcService firstProxy = null;
+TestRpcService secondProxy = null;
+
+

[jira] [Commented] (HADOOP-18074) Partial/Incomplete groups list can be returned in LDAP groups lookup

2022-07-11 Thread Larry McCay (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17565107#comment-17565107
 ] 

Larry McCay commented on HADOOP-18074:
--

[~ste...@apache.org] - I have addressed your review comments and committed this to trunk.
I then realized that we should have gotten the Reviewed flag set on this JIRA first.
I'd like to cherry-pick this to branch-3.3, as you indicated a +1 for trunk/branch-3.3, but I'm not sure whether I should revert this and await the Reviewed flag or proceed with the cherry-pick. Thoughts?

> Partial/Incomplete groups list can be returned in LDAP groups lookup
> 
>
> Key: HADOOP-18074
> URL: https://issues.apache.org/jira/browse/HADOOP-18074
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Philippe Lanoe
>Assignee: Larry McCay
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Hello,
> The  
> {code:java}
> Set<String> doGetGroups(String user, int goUpHierarchy) {code}
> method in
> [https://github.com/apache/hadoop/blob/b27732c69b114f24358992a5a4d170bc94e2ceaf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java#L476]
> looks like it has an issue if a *NamingException* is caught in the middle of 
> the loop: the groups variable is not reset in the catch clause, and therefore 
> the fallback lookup cannot be executed (at least when goUpHierarchy==0):
> {code:java}
> if (groups.isEmpty() || goUpHierarchy > 0) {
>   groups = lookupGroup(result, c, goUpHierarchy);
> }
> {code}
>  
> The consequence is that only a partial list of groups is returned, which is 
> not correct.
> The following options could be used as a solution:
>  * Reset the groups list to an empty list in the catch clause, to trigger the 
> fallback query.
>  * Add an option flag to enable ignoring groups with a NamingException (since 
> they are most probably not groups).
> Independently, should an issue also occur in both the first lookup and the 
> fallback query (so the full list cannot be returned), the method should/could 
> (with an option flag) throw an exception, because in some scenarios accuracy 
> is important.
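The first proposed option (resetting the list in the catch clause so the goUpHierarchy==0 fallback actually fires) can be sketched in isolation. All names below are illustrative stand-ins, not LdapGroupsMapping's actual members; the lookup is simulated as failing midway after collecting a partial result.

```java
import java.util.ArrayList;
import java.util.List;

public class LdapFallbackDemo {
    // Stand-in for javax.naming.NamingException.
    static class NamingException extends Exception {}

    // Primary lookup: fails midway, having already collected a partial result.
    static void primaryLookup(List<String> groups) throws NamingException {
        groups.add("partial-group");
        throw new NamingException();
    }

    // Fallback lookup that returns the complete list.
    static List<String> fallbackLookup() {
        List<String> groups = new ArrayList<>();
        groups.add("full-group-1");
        groups.add("full-group-2");
        return groups;
    }

    static List<String> doGetGroups() {
        List<String> groups = new ArrayList<>();
        try {
            primaryLookup(groups);
        } catch (NamingException e) {
            // Option 1 from the report: discard the partial result so the
            // fallback below actually runs instead of a partial list escaping.
            groups = new ArrayList<>();
        }
        if (groups.isEmpty()) {
            groups = fallbackLookup();
        }
        return groups;
    }

    public static void main(String[] args) {
        System.out.println(doGetGroups()); // [full-group-1, full-group-2]
    }
}
```

Without the reset in the catch clause, groups would still hold "partial-group", the isEmpty() check would skip the fallback, and the caller would get the partial list, exactly the reported bug.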



--
This message was sent by Atlassian Jira
(v8.20.10#820010)




[GitHub] [hadoop] ashutoshcipher commented on a diff in pull request #4521: HADOOP-18321.Fix when to read an additional record from a BZip2 text file split

2022-07-11 Thread GitBox


ashutoshcipher commented on code in PR #4521:
URL: https://github.com/apache/hadoop/pull/4521#discussion_r918164884


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/bzip2/TestBZip2TextFileWriter.java:
##
@@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.compress.bzip2;
+
+import static 
org.apache.hadoop.io.compress.bzip2.BZip2TextFileWriter.BLOCK_SIZE;
+import static org.junit.Assert.assertEquals;
+
+import java.io.ByteArrayInputStream;

Review Comment:
   @steveloughran - I will file a JIRA and fix the imports. 





