[jira] [Work logged] (HADOOP-18241) Move to Java 11

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18241?focusedWorklogId=780210&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780210
 ]

ASF GitHub Bot logged work on HADOOP-18241:
---

Author: ASF GitHub Bot
Created on: 10/Jun/22 05:37
Start Date: 10/Jun/22 05:37
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on PR #4319:
URL: https://github.com/apache/hadoop/pull/4319#issuecomment-1151964919

   @slfan1989 Most of the Javadoc errors I have already sorted out; you can try 
to get hold of this one if you have time:
   [HADOOP-15984](https://issues.apache.org/jira/browse/HADOOP-15984)
   




Issue Time Tracking
---

Worklog Id: (was: 780210)
Time Spent: 1h 20m  (was: 1h 10m)

> Move to Java 11
> ---
>
> Key: HADOOP-18241
> URL: https://issues.apache.org/jira/browse/HADOOP-18241
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> https://lists.apache.org/thread/h5lmpqo2tz7tc02j44qxpwcnjzpxo0k2



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on pull request #4319: HADOOP-18241. Move to JAVA 11.

2022-06-09 Thread GitBox


ayushtkn commented on PR #4319:
URL: https://github.com/apache/hadoop/pull/4319#issuecomment-1151964919

   @slfan1989 Most of the Javadoc errors I have already sorted out; you can try 
to get hold of this one if you have time:
   [HADOOP-15984](https://issues.apache.org/jira/browse/HADOOP-15984)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4424: HDFS-16628 RBF: kerbose user remove Non-default namespace data failed.

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4424:
URL: https://github.com/apache/hadoop/pull/4424#issuecomment-1151935757

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m 41s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 118m 57s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4424/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4424 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux a003555ec08e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 61dbbdce1617dedabd4e5d9e8b35009ed13c0152 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4424/1/testReport/ |
   | Max. process+thread count | 2752 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4424/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For 

[GitHub] [hadoop] GauthamBanasandra merged pull request #4370: HDFS-16463. Make dirent cross platform compatible

2022-06-09 Thread GitBox


GauthamBanasandra merged PR #4370:
URL: https://github.com/apache/hadoop/pull/4370


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on pull request #4219: HDFS-16557. BootstrapStandby failed because of checking gap for inprogress EditLogInputStream

2022-06-09 Thread GitBox


ZanderXu commented on PR #4219:
URL: https://github.com/apache/hadoop/pull/4219#issuecomment-1151901548

   > When we set dfs.ha.tail-edits.in-progress=true, the edits can be read by 
getJournaledEdits (there is no gap actually). But a GAP exception is still 
thrown.
   
   I think there is a gap here because bootstrap expects to get txid 1050196644 
but cannot find it in the result, so throwing the GAP exception is OK.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4269: HDFS-16570 RBF: The router using MultipleDestinationMountTableResolve…

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4269:
URL: https://github.com/apache/hadoop/pull/4269#issuecomment-1151891187

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m 56s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 119m  4s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4269/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4269 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5dc8306caacd 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 94273aed12c914e3cd5bec3f27390aa9876e519c |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4269/3/testReport/ |
   | Max. process+thread count | 2662 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4269/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For 

[GitHub] [hadoop] ZanderXu commented on pull request #4219: HDFS-16557. BootstrapStandby failed because of checking gap for inprogress EditLogInputStream

2022-06-09 Thread GitBox


ZanderXu commented on PR #4219:
URL: https://github.com/apache/hadoop/pull/4219#issuecomment-1151890746

   As I explained above, changing the condition to `if (next == 
HdfsServerConstants.INVALID_TXID || elis.isInProgress())` may change the 
original semantics of the `checkForGaps` method.
   
   Do you have any questions about my explanation? Let's discuss it together 
and become more familiar with the relevant logic.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on pull request #4219: HDFS-16557. BootstrapStandby failed because of checking gap for inprogress EditLogInputStream

2022-06-09 Thread GitBox


ZanderXu commented on PR #4219:
URL: https://github.com/apache/hadoop/pull/4219#issuecomment-1151887486

   So in this case, we should change the bootstrap logic.
   Solution one: set DFS_HA_TAILEDITS_INPROGRESS_KEY to false (a configuration 
sketch follows below).
   Solution two: call getJournaledEdits multiple times until we get the latest 
txid, and only then go to checkForGaps.
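   
   A minimal sketch of solution one, assuming it is applied through the standard 
HDFS Configuration API (the property name comes from this thread; the class and 
everything else here are illustrative only):
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   
   public class BootstrapTailEditsWorkaround {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       // Solution one from the comment above: disable in-progress edit tailing
       // so bootstrap only reads finalized segments and the gap check is not
       // confused by partial getJournaledEdits results.
       conf.setBoolean("dfs.ha.tail-edits.in-progress", false);
       System.out.println("dfs.ha.tail-edits.in-progress = "
           + conf.getBoolean("dfs.ha.tail-edits.in-progress", true));
     }
   }
   ```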


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on pull request #4219: HDFS-16557. BootstrapStandby failed because of checking gap for inprogress EditLogInputStream

2022-06-09 Thread GitBox


ZanderXu commented on PR #4219:
URL: https://github.com/apache/hadoop/pull/4219#issuecomment-1151885068

   Oh, I see: the root cause is that getJournaledEdits returns up to 5000 
txids by default, and 1049842441 - 1049837441 = 5000.
   
   It cannot reach 1050196644, so checkForGaps failed.
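   
   For context, a small sketch of where that per-call cap would come from. The 
property name `dfs.ha.tail-edits.qjm.rpc.max-txns` and its default of 5000 are 
assumptions on my part and should be verified against the source; only the 5000 
figure itself is taken from the comment above.
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   
   public class JournaledEditsCapSketch {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       // Assumed key and default: each getJournaledEdits RPC returns at most
       // this many transactions, matching 1049842441 - 1049837441 = 5000.
       int maxTxnsPerCall =
           conf.getInt("dfs.ha.tail-edits.qjm.rpc.max-txns", 5000);
       System.out.println("Max txns per getJournaledEdits call: "
           + maxTxnsPerCall);
     }
   }
   ```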


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #4219: HDFS-16557. BootstrapStandby failed because of checking gap for inprogress EditLogInputStream

2022-06-09 Thread GitBox


tomscut commented on PR #4219:
URL: https://github.com/apache/hadoop/pull/4219#issuecomment-1151882369

   > OK, back to the BootstrapStandby GAP. From this stack information, I see 
that it tries to get streams from 1049842441 to 1050196644 but cannot get txid 
1049842441 from the resulting streams. So I think we should trace the root 
cause: why can't we find txid 1049842441 in the result of 
`selectInputStreams(streams, 1049842441, true, true)`?
   > 
   > Please correct me if anything is wrong.
   
   Please refer to the discussion with @xkrogen above.
   
   The root cause is that the `if` condition (`if (next == 
HdfsServerConstants.INVALID_TXID)`) is not entered when it should be.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on pull request #4219: HDFS-16557. BootstrapStandby failed because of checking gap for inprogress EditLogInputStream

2022-06-09 Thread GitBox


ZanderXu commented on PR #4219:
URL: https://github.com/apache/hadoop/pull/4219#issuecomment-1151877997

   OK, back to the BootstrapStandby GAP.
   From this stack information, I see that it tries to get streams from 
1049842441 to 1050196644 but cannot get txid 1049842441 from the resulting 
streams.
   So I think we should trace the root cause: why can't we find txid 
1049842441 in the result of `selectInputStreams(streams, 1049842441, 
true, true)`?
   
   Please correct me if anything is wrong.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhangxiping1 opened a new pull request, #4424: HDFS-16628 RBF: kerbose user remove Non-default namespace data failed.

2022-06-09 Thread GitBox


zhangxiping1 opened a new pull request, #4424:
URL: https://github.com/apache/hadoop/pull/4424

   HDFS-16628 RBF: kerbose user remove Non-default namespace data failed
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15852) Refactor QuotaUsage

2022-06-09 Thread Daniel Ma (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17552503#comment-17552503
 ] 

Daniel Ma edited comment on HADOOP-15852 at 6/10/22 2:37 AM:
-

{code:java}
/** Return storage type quota. */
private long[] getTypesQuota() {
  return typeQuota;
}
{code}

[~belugabehr] Hello, could you please share the reason why this function was 
removed?


was (Author: daniel ma):
/** Return storage type quota. */   
165   private long[] getTypesQuota() {  
166 return typeQuota;   
167   }

[~belugabehr]   hello, Could you pls share the reason why this function is 
remove?

> Refactor QuotaUsage
> ---
>
> Key: HADOOP-15852
> URL: https://issues.apache.org/jira/browse/HADOOP-15852
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15852.1.patch, HADOOP-15852.2.patch, 
> HADOOP-15852.3.patch
>
>
> My new mission is to remove instances of {{StringBuffer}} in favor of 
> {{StringBuilder}}.
> * Simplify Code
> * Use Eclipse to generate hashcode/equals
> * Use StringBuilder instead of StringBuffer



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15852) Refactor QuotaUsage

2022-06-09 Thread Daniel Ma (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17552503#comment-17552503
 ] 

Daniel Ma commented on HADOOP-15852:


{code:java}
/** Return storage type quota. */
private long[] getTypesQuota() {
  return typeQuota;
}
{code}

[~belugabehr] Hello, could you please share the reason why this function was 
removed?

> Refactor QuotaUsage
> ---
>
> Key: HADOOP-15852
> URL: https://issues.apache.org/jira/browse/HADOOP-15852
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15852.1.patch, HADOOP-15852.2.patch, 
> HADOOP-15852.3.patch
>
>
> My new mission is to remove instances of {{StringBuffer}} in favor of 
> {{StringBuilder}}.
> * Simplify Code
> * Use Eclipse to generate hashcode/equals
> * Use StringBuilder instead of StringBuffer



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #4219: HDFS-16557. BootstrapStandby failed because of checking gap for inprogress EditLogInputStream

2022-06-09 Thread GitBox


tomscut commented on PR #4219:
URL: https://github.com/apache/hadoop/pull/4219#issuecomment-1151852402

   > Thanks @tomscut. After tracing the code, I think we cannot add 
`elis.isInProgress()`.
   > 
   > I will explain my ideas through questions and answers. **Question one: 
why was INVALID_TXID considered in the original code?**
   > 
   > * The checkForGaps method is used to check whether the streams contain 
continuous txids from fromTxId to toAtLeastTxid.
   > * lastTxId equal to INVALID_TXID means the stream is in progress.
   > * toAtLeastTxid may be an abnormal value, like Long.MAX_VALUE, so the 
checkForGaps method only needs to cover the latest in-progress segment.
   > 
   > **Question two: what is the difference between INVALID_TXID and 
isInProgress()?**
   > 
   > * Before [SBN READ] was introduced, lastTxId equal to INVALID_TXID meant 
the stream was in progress, and a stream being in progress meant its lastTxId 
was INVALID_TXID.
   > * After [SBN READ] was introduced, lastTxId equal to INVALID_TXID still 
means the stream is in progress, but a stream being in progress no longer 
implies its lastTxId is INVALID_TXID, because of getJournaledEdits.
   > * So if we add `elis.isInProgress()` to checkForGaps, it cannot cover the 
last segment being written, which actually contains the latest edits.
   > 
   > Please correct me if anything is wrong.
   
   Thanks @ZanderXu for your comment. Please refer to the stack.
   
![image](https://user-images.githubusercontent.com/55134131/172977547-16c0bf94-8586-4f41-be8e-ce1e4dd41eae.png)
   
   When we set `dfs.ha.tail-edits.in-progress=true`, the txid can be read by 
getJournaledEdits (there is no gap actually), but a GAP exception is still 
thrown.
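   
   For readers following the thread, below is a minimal sketch of the kind of 
gap check being debated, using the condition quoted above. The class name, 
method signature, and messages are assumptions made purely for illustration and 
are not the actual BootstrapStandby code; only the idea of testing 
`INVALID_TXID || elis.isInProgress()` is taken from the discussion itself.
   
   ```java
   import java.io.IOException;
   import java.util.List;
   
   import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
   import org.apache.hadoop.hdfs.server.namenode.EditLogInputStream;
   
   // Hypothetical sketch only; not the actual BootstrapStandby#checkForGaps.
   public final class GapCheckSketch {
   
     // Verify that the selected streams cover every txid from fromTxId up to
     // toAtLeastTxId without a hole between consecutive segments.
     static void checkForGaps(List<EditLogInputStream> streams,
         long fromTxId, long toAtLeastTxId) throws IOException {
       long next = fromTxId;
       for (EditLogInputStream elis : streams) {
         if (elis.getFirstTxId() != next) {
           throw new IOException("Gap in transaction IDs: expected txid " + next
               + " but the next stream starts at txid " + elis.getFirstTxId());
         }
         long last = elis.getLastTxId();
         // The condition debated in this thread: with SBN READ enabled, an
         // in-progress stream returned via getJournaledEdits can carry a
         // concrete lastTxId, so testing only for INVALID_TXID misses it;
         // adding isInProgress() treats any in-progress stream as open ended.
         if (last == HdfsServerConstants.INVALID_TXID || elis.isInProgress()) {
           return; // the segment still being written covers the rest
         }
         next = last + 1;
       }
       if (next <= toAtLeastTxId) {
         throw new IOException("Gap in transaction IDs: streams end at txid "
             + (next - 1) + " and do not reach txid " + toAtLeastTxId);
       }
     }
   }
   ```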
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4421: YARN-10122. Support signalToContainer API for Federation.

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4421:
URL: https://github.com/apache/hadoop/pull/4421#issuecomment-1151813201

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 16s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/6/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 53s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  91m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4421 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2ef1db663bfb 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9b5229597df40a06407e4325f5bf024771410629 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/6/testReport/ |
   | Max. process+thread count | 1227 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
   | Console output | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4421: YARN-10122. Support signalToContainer API for Federation.

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4421:
URL: https://github.com/apache/hadoop/pull/4421#issuecomment-1151767771

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 23s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/5/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 12s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  95m 21s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4421 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b4ca09636b18 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 82fb24709a7f6b2fac63bbc1d49f9eebf51ba272 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/5/testReport/ |
   | Max. process+thread count | 1295 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
   | Console output | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4421: YARN-10122. Support signalToContainer API for Federation.

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4421:
URL: https://github.com/apache/hadoop/pull/4421#issuecomment-1151765548

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/4/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  0s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  95m  1s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4421 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux ebc3ee358ffd 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 82fb24709a7f6b2fac63bbc1d49f9eebf51ba272 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/4/testReport/ |
   | Max. process+thread count | 1266 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
   | Console output | 

[GitHub] [hadoop] zhengchenyu closed pull request #4408: YARN-11172. Fix TestClientRMTokens#testDelegationToken introduced by HDFS-16563.

2022-06-09 Thread GitBox


zhengchenyu closed pull request #4408: YARN-11172. Fix 
TestClientRMTokens#testDelegationToken introduced by HDFS-16563.
URL: https://github.com/apache/hadoop/pull/4408


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 opened a new pull request, #4423: HDFS-16629. [JDK 11] Fix javadoc warnings in hadoop-hdfs module.

2022-06-09 Thread GitBox


slfan1989 opened a new pull request, #4423:
URL: https://github.com/apache/hadoop/pull/4423

   JIRA: HDFS-16629. [JDK 11] Fix javadoc warnings in hadoop-hdfs module.
   
   During compilation of the most recently committed code, a javadoc warning 
appeared, and I will fix it.
   
   ```
   1 error
   100 warnings
   [INFO] 

   [INFO] BUILD FAILURE
   [INFO] 

   [INFO] Total time:  37.132 s
   [INFO] Finished at: 2022-06-09T17:07:12Z
   [INFO] 

   [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc-no-fork 
(default-cli) on project hadoop-hdfs: An error has occurred in Javadoc report 
generation: 
   [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML 
version as HTML 4.01 by using the -html4 option.
   [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
removed
   [ERROR] in a future release. To suppress this warning, please ensure that 
any HTML constructs 
   ```
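   
   As a hedged illustration (the actual edits are in the PR itself and are not 
shown here), fixes for this class of warning usually amount to replacing HTML 4 
constructs in Javadoc comments with HTML5-valid markup, for example:
   
   ```java
   public class JavadocHtml5Example {
     /**
      * Hypothetical "after" form: the HTML4-era constructs that trigger the
      * JDK 11 javadoc warnings (self-closing paragraph tags such as
      * {@literal <p/>} and the {@literal <tt>} element) are replaced with an
      * HTML5-valid paragraph tag and {@code ...} markup.
      * <p>
      * Returns the size in {@code bytes}.
      *
      * @return the size in bytes
      */
     public long getSize() {
       return 0L;
     }
   }
   ```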


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-18284) Fix Repeated Semicolons

2022-06-09 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-18284 started by fanshilun.
--
> Fix Repeated Semicolons
> ---
>
> Key: HADOOP-18284
> URL: https://issues.apache.org/jira/browse/HADOOP-18284
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While reading the code, I found a very tiny optimization point: part of the 
> code ends with two semicolons, and I will fix it. Because the change is 
> simple, I fixed it all in one JIRA.
> {code:java}
> private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();;
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4421: YARN-10122. Support signalToContainer API for Federation.

2022-06-09 Thread GitBox


slfan1989 commented on code in PR #4421:
URL: https://github.com/apache/hadoop/pull/4421#discussion_r894049538


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestableFederationClientInterceptor.java:
##
@@ -75,7 +75,7 @@ protected ApplicationClientProtocol 
getClientRMProxyForSubCluster(
 mockRM.init(super.getConf());
 mockRM.start();
 try {
-  MockNM nm = mockRM.registerNode("127.0.0.1:1234", 8092,4);
+  MockNM nm = mockRM.registerNode("127.0.0.1:1234", 8 * 1024,4);

Review Comment:
   I'll fix it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a diff in pull request #4421: YARN-10122. Support signalToContainer API for Federation.

2022-06-09 Thread GitBox


goiri commented on code in PR #4421:
URL: https://github.com/apache/hadoop/pull/4421#discussion_r894045330


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestableFederationClientInterceptor.java:
##
@@ -75,7 +75,7 @@ protected ApplicationClientProtocol 
getClientRMProxyForSubCluster(
 mockRM.init(super.getConf());
 mockRM.start();
 try {
-  MockNM nm = mockRM.registerNode("127.0.0.1:1234", 8092,4);
+  MockNM nm = mockRM.registerNode("127.0.0.1:1234", 8 * 1024,4);

Review Comment:
   I was also pointing to the space after the comma.
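   
   With both the memory value and the spacing addressed, the line would 
presumably read as follows (a sketch of the review suggestion, not the 
committed change):
   
   ```java
   // 8 * 1024 MB of memory and 4 vcores, with a space after each comma.
   MockNM nm = mockRM.registerNode("127.0.0.1:1234", 8 * 1024, 4);
   ```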



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18241) Move to Java 11

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18241?focusedWorklogId=780141&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780141
 ]

ASF GitHub Bot logged work on HADOOP-18241:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 23:53
Start Date: 09/Jun/22 23:53
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4319:
URL: https://github.com/apache/hadoop/pull/4319#issuecomment-1151724366

   Hi @ayushtkn, I hope to provide some help. I see that during compilation of 
the code I submitted there is a javadoc warning on JDK 11, and I will fix it.




Issue Time Tracking
---

Worklog Id: (was: 780141)
Time Spent: 1h 10m  (was: 1h)

> Move to Java 11
> ---
>
> Key: HADOOP-18241
> URL: https://issues.apache.org/jira/browse/HADOOP-18241
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> https://lists.apache.org/thread/h5lmpqo2tz7tc02j44qxpwcnjzpxo0k2



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4319: HADOOP-18241. Move to JAVA 11.

2022-06-09 Thread GitBox


slfan1989 commented on PR #4319:
URL: https://github.com/apache/hadoop/pull/4319#issuecomment-1151724366

   Hi @ayushtkn, I hope to provide some help. I see that during compilation of 
the code I submitted there is a javadoc warning on JDK 11, and I will fix it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4419: HDFS-16627. improve BPServiceActor#register Log Add NN Addr.

2022-06-09 Thread GitBox


slfan1989 commented on PR #4419:
URL: https://github.com/apache/hadoop/pull/4419#issuecomment-1151721248

   @Hexiaoqiao Please help me review the code again! 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4421: YARN-10122. Support signalToContainer API for Federation.

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4421:
URL: https://github.com/apache/hadoop/pull/4421#issuecomment-1151720401

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 14s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/3/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 59s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  90m 45s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4421 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b786df697e64 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / de4434e15f4c3d22494aba74e1a199c32750eac9 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/3/testReport/ |
   | Max. process+thread count | 1266 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
   | Console output | 

[GitHub] [hadoop] slfan1989 commented on pull request #4406: HDFS-16619. improve HttpHeaders.Values And HttpHeaders.Names With recommended Class

2022-06-09 Thread GitBox


slfan1989 commented on PR #4406:
URL: https://github.com/apache/hadoop/pull/4406#issuecomment-1151719908

   @tomscut Please help me review the code. This PR only replaces a deprecated 
import, so there should be no code risk and it will not cause JUnit failures.
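   
   The PR diff is not quoted here, so as a hedged sketch of the kind of 
replacement the title describes, assuming the deprecated Netty constants 
`HttpHeaders.Names` / `HttpHeaders.Values` are being swapped for 
`HttpHeaderNames` / `HttpHeaderValues`:
   
   ```java
   import io.netty.handler.codec.http.DefaultFullHttpResponse;
   import io.netty.handler.codec.http.HttpHeaderNames;
   import io.netty.handler.codec.http.HttpHeaderValues;
   
   public class HttpHeadersReplacementSketch {
     public static void markClose(DefaultFullHttpResponse response) {
       // Deprecated style:
       //   response.headers().set(HttpHeaders.Names.CONNECTION,
       //       HttpHeaders.Values.CLOSE);
       // Recommended replacement classes:
       response.headers().set(HttpHeaderNames.CONNECTION,
           HttpHeaderValues.CLOSE);
     }
   }
   ```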


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18284) Fix Repeated Semicolons

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18284?focusedWorklogId=780137=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780137
 ]

ASF GitHub Bot logged work on HADOOP-18284:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 23:47
Start Date: 09/Jun/22 23:47
Worklog Time Spent: 10m 
  Work Description: slfan1989 opened a new pull request, #4422:
URL: https://github.com/apache/hadoop/pull/4422

   JIRA: HADOOP-18284. Fix Repeated Semicolons.
   
   While reading the code, I found a very small cleanup opportunity: some 
statements end with two semicolons. Because the change is simple, I am fixing 
all occurrences in this one JIRA.
   
   The code looks like this:
   ```
   private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();;
   ```
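   
   The fix is simply to drop the duplicate semicolon, e.g. (illustrative):
   ```
   import java.util.concurrent.locks.ReentrantReadWriteLock;
   
   class Example {
     private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
   }
   ```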




Issue Time Tracking
---

Worklog Id: (was: 780137)
Remaining Estimate: 0h
Time Spent: 10m

> Fix Repeated Semicolons
> ---
>
> Key: HADOOP-18284
> URL: https://issues.apache.org/jira/browse/HADOOP-18284
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While reading the code, I found a very small cleanup opportunity: some 
> statements end with two semicolons. Because the change is simple, I am 
> fixing all occurrences in this one JIRA.
> {code:java}
> private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();;
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18284) Fix Repeated Semicolons

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18284:

Labels: pull-request-available  (was: )

> Fix Repeated Semicolons
> ---
>
> Key: HADOOP-18284
> URL: https://issues.apache.org/jira/browse/HADOOP-18284
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While reading the code, I found a very small cleanup opportunity: some 
> statements end with two semicolons. Because the change is simple, I am 
> fixing all occurrences in this one JIRA.
> {code:java}
> private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();;
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 opened a new pull request, #4422: HADOOP-18284. Fix Repeated Semicolons.

2022-06-09 Thread GitBox


slfan1989 opened a new pull request, #4422:
URL: https://github.com/apache/hadoop/pull/4422

   JIRA: HADOOP-18284. Fix Repeated Semicolons.
   
   While reading the code, I found a very small cleanup opportunity: some 
statements end with two semicolons. Because the change is simple, I am fixing 
all occurrences in this one JIRA.
   
   The code looks like this:
   ```
   private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();;
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4382: HDFS-16609. Fix Flakes Junit Tests that often report timeouts.

2022-06-09 Thread GitBox


slfan1989 commented on PR #4382:
URL: https://github.com/apache/hadoop/pull/4382#issuecomment-1151717102

   @tomscut @Hexiaoqiao Thanks for helping to review the code. The JUnit tests 
have passed, and the javadoc problem can be ignored here; I will submit a 
separate PR to fix it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18284) Fix Repeated Semicolons

2022-06-09 Thread fanshilun (Jira)
fanshilun created HADOOP-18284:
--

 Summary: Fix Repeated Semicolons
 Key: HADOOP-18284
 URL: https://issues.apache.org/jira/browse/HADOOP-18284
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.4.0
Reporter: fanshilun
Assignee: fanshilun
 Fix For: 3.4.0


While reading the code, I found a very small cleanup opportunity: some 
statements end with two semicolons. Because the change is simple, I am fixing 
all occurrences in this one JIRA.

{code:java}
private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();;
{code}




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4382: HDFS-16609. Fix Flakes Junit Tests that often report timeouts.

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4382:
URL: https://github.com/apache/hadoop/pull/4382#issuecomment-1151713943

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 47s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 24s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4382/4/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m  3s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4382/4/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 346m  9s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 467m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4382/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4382 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d4cd432f2daf 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 20fe8be4eada6d583a1a3c29272fdf535d7b5365 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4382/4/testReport/ |
   | Max. process+thread count | 2359 (vs. ulimit of 5500) |
   | modules | C: 

[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4421: YARN-10122. Support signalToContainer API for Federation.

2022-06-09 Thread GitBox


slfan1989 commented on code in PR #4421:
URL: https://github.com/apache/hadoop/pull/4421#discussion_r894032927


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestableFederationClientInterceptor.java:
##
@@ -71,7 +75,8 @@ protected ApplicationClientProtocol 
getClientRMProxyForSubCluster(
 mockRM.init(super.getConf());
 mockRM.start();
 try {
-  mockRM.registerNode("h1:1234", 1024);
+  MockNM nm = mockRM.registerNode("127.0.0.1:1234", 8092,4);

Review Comment:
   Thanks for your help reviewing the code; I will fix it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a diff in pull request #4421: YARN-10122. Support signalToContainer API for Federation.

2022-06-09 Thread GitBox


goiri commented on code in PR #4421:
URL: https://github.com/apache/hadoop/pull/4421#discussion_r894029331


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestableFederationClientInterceptor.java:
##
@@ -71,7 +75,8 @@ protected ApplicationClientProtocol 
getClientRMProxyForSubCluster(
 mockRM.init(super.getConf());
 mockRM.start();
 try {
-  mockRM.registerNode("h1:1234", 1024);
+  MockNM nm = mockRM.registerNode("127.0.0.1:1234", 8092,4);

Review Comment:
   8*1024, 4
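   
   Reading this as a suggestion to pass the memory as 8*1024 MB and 4 vCores 
rather than the literal 8092, i.e. something along these lines in the test 
(illustrative, using the surrounding mockRM from the quoted diff):
   ```java
   MockNM nm = mockRM.registerNode("127.0.0.1:1234", 8 * 1024, 4);
   ```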



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4406: HDFS-16619. improve HttpHeaders.Values And HttpHeaders.Names With recommended Class

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4406:
URL: https://github.com/apache/hadoop/pull/4406#issuecomment-1151657257

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 47s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 25s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4406/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 28s |  |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 
911 unchanged - 26 fixed = 911 total (was 937)  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 0 new 
+ 890 unchanged - 26 fixed = 890 total (was 916)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 55s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4406/3/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 247m 16s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 14s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 356m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4406/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4406 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fe39bdb378e4 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 44d000361245a4e2cf86089c251598f2d0daacbb |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4419: HDFS-16627. improve BPServiceActor#register Log Add NN Addr.

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4419:
URL: https://github.com/apache/hadoop/pull/4419#issuecomment-1151655696

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 41s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 43s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 24s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4419/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 57s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4419/3/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 251m  3s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 14s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 359m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4419/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4419 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d837a0f53be9 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1e3aa224e5aa4fb3f19dcb580eafd9d9b4f7af11 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4419/3/testReport/ |
   | Max. process+thread count | 3655 (vs. ulimit of 5500) |
   | modules | C: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4419: HDFS-16627. improve BPServiceActor#register Log Add NN Addr.

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4419:
URL: https://github.com/apache/hadoop/pull/4419#issuecomment-1151655335

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 27s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4419/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m  0s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4419/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 251m 44s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 13s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 361m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4419/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4419 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 61d5321e2f89 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1e3aa224e5aa4fb3f19dcb580eafd9d9b4f7af11 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4419/2/testReport/ |
   | Max. process+thread count | 2995 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Work logged] (HADOOP-17461) Add thread-level IOStatistics Context

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17461?focusedWorklogId=780090=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780090
 ]

ASF GitHub Bot logged work on HADOOP-17461:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 18:37
Start Date: 09/Jun/22 18:37
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on code in PR #4352:
URL: https://github.com/apache/hadoop/pull/4352#discussion_r893822412


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/IOStatisticsContextImpl.java:
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics.impl;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.impl.WeakReferenceThreadMap;
+import org.apache.hadoop.fs.statistics.IOStatisticsContext;
+import org.apache.hadoop.fs.statistics.IOStatisticsSnapshot;
+
+/**
+ * Implementing the IOStatisticsContext interface.
+ */
+public class IOStatisticsContextImpl implements IOStatisticsContext {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(IOStatisticsContextImpl.class);
+
+  /**
+   * Collecting IOStatistics per thread.
+   */
+  private final WeakReferenceThreadMap
+  threadIOStatsContext = new WeakReferenceThreadMap<>(
+  this::getIOStatisticsSnapshotFactory,

Review Comment:
   needs indentation



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/IOStatisticsContextImpl.java:
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics.impl;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.impl.WeakReferenceThreadMap;
+import org.apache.hadoop.fs.statistics.IOStatisticsContext;
+import org.apache.hadoop.fs.statistics.IOStatisticsSnapshot;
+
+/**
+ * Implementing the IOStatisticsContext interface.
+ */
+public class IOStatisticsContextImpl implements IOStatisticsContext {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(IOStatisticsContextImpl.class);
+
+  /**
+   * Collecting IOStatistics per thread.
+   */
+  private final WeakReferenceThreadMap
+  threadIOStatsContext = new WeakReferenceThreadMap<>(
+  this::getIOStatisticsSnapshotFactory,
+  this::referenceLost);
+
+  /**
+   * A Method to act as an IOStatisticsSnapshot factory, in a
+   * WeakReferenceThreadMap.
+   *
+   * @param key ThreadID.
+   * @return an Instance of IOStatisticsSnapshot.
+   */
+  private IOStatisticsSnapshot getIOStatisticsSnapshotFactory(Long key) {
+return new IOStatisticsSnapshot();
+  }
+
+  /**
+   * In case of reference loss.
+   *
+   * @param key ThreadID.
+   */
+  private void referenceLost(Long key) {
+LOG.info("Reference lost for threadID: {}", key);

Review Comment:
   maybe debug
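   
   i.e. presumably downgrading the log level, along the lines of:
   ```java
   LOG.debug("Reference lost for threadID: {}", key);
   ```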



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/IOStatisticsContextImpl.java:
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this 

[GitHub] [hadoop] steveloughran commented on a diff in pull request #4352: HADOOP-17461. Thread-level IOStatistics in S3A

2022-06-09 Thread GitBox


steveloughran commented on code in PR #4352:
URL: https://github.com/apache/hadoop/pull/4352#discussion_r893822412


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/IOStatisticsContextImpl.java:
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics.impl;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.impl.WeakReferenceThreadMap;
+import org.apache.hadoop.fs.statistics.IOStatisticsContext;
+import org.apache.hadoop.fs.statistics.IOStatisticsSnapshot;
+
+/**
+ * Implementing the IOStatisticsContext interface.
+ */
+public class IOStatisticsContextImpl implements IOStatisticsContext {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(IOStatisticsContextImpl.class);
+
+  /**
+   * Collecting IOStatistics per thread.
+   */
+  private final WeakReferenceThreadMap
+  threadIOStatsContext = new WeakReferenceThreadMap<>(
+  this::getIOStatisticsSnapshotFactory,

Review Comment:
   needs indentation



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/IOStatisticsContextImpl.java:
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics.impl;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.impl.WeakReferenceThreadMap;
+import org.apache.hadoop.fs.statistics.IOStatisticsContext;
+import org.apache.hadoop.fs.statistics.IOStatisticsSnapshot;
+
+/**
+ * Implementing the IOStatisticsContext interface.
+ */
+public class IOStatisticsContextImpl implements IOStatisticsContext {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(IOStatisticsContextImpl.class);
+
+  /**
+   * Collecting IOStatistics per thread.
+   */
+  private final WeakReferenceThreadMap
+  threadIOStatsContext = new WeakReferenceThreadMap<>(
+  this::getIOStatisticsSnapshotFactory,
+  this::referenceLost);
+
+  /**
+   * A Method to act as an IOStatisticsSnapshot factory, in a
+   * WeakReferenceThreadMap.
+   *
+   * @param key ThreadID.
+   * @return an Instance of IOStatisticsSnapshot.
+   */
+  private IOStatisticsSnapshot getIOStatisticsSnapshotFactory(Long key) {
+return new IOStatisticsSnapshot();
+  }
+
+  /**
+   * In case of reference loss.
+   *
+   * @param key ThreadID.
+   */
+  private void referenceLost(Long key) {
+LOG.info("Reference lost for threadID: {}", key);

Review Comment:
   maybe debug



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/IOStatisticsContextImpl.java:
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR 

[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=780086=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780086
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 18:20
Start Date: 09/Jun/22 18:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#issuecomment-1151452105

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  19m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ feature-HADOOP-18028-s3a-prefetch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 21s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 41s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 23s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/8/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 3 new + 14 unchanged - 1 fixed 
= 17 total (was 15)  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 16s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 127m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4386 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ccfb94facc8c 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | feature-HADOOP-18028-s3a-prefetch / 
a5f32e747dc2c8315704180941d5567cce4ef0cf |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4386: HADOOP-18231. Fixes failing tests & drain stream async.

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#issuecomment-1151452105

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  19m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ feature-HADOOP-18028-s3a-prefetch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 21s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 41s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 23s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/8/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 3 new + 14 unchanged - 1 fixed 
= 17 total (was 15)  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 16s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 127m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4386 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ccfb94facc8c 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | feature-HADOOP-18028-s3a-prefetch / 
a5f32e747dc2c8315704180941d5567cce4ef0cf |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/8/testReport/ |
   | Max. process+thread count | 571 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/8/console |
   | 

[jira] [Work logged] (HADOOP-18242) ABFS Rename Failure when tracking metadata is in incomplete state

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18242?focusedWorklogId=780083=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780083
 ]

ASF GitHub Bot logged work on HADOOP-18242:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 18:11
Start Date: 09/Jun/22 18:11
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on code in PR #4331:
URL: https://github.com/apache/hadoop/pull/4331#discussion_r893636106


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -104,9 +104,14 @@ public class AbfsClient implements Closeable {
   private final ListeningScheduledExecutorService executorService;
 
   /**
-   * Has the Rename operation been retried once or not?
+   * Is Abfs metadata been in an incomplete State resulting in a rename
+   * failure?
*/
-  private boolean hasRetriedRenameOnce;
+  private boolean isMetadataIncompleteState;
+
+  /** logging the rename failure if metadata is in an incomplete state*/

Review Comment:
   nit, add a . to keep javadoc quiet



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsRenameRetryRecovery.java:
##
@@ -0,0 +1,159 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.lang.reflect.Field;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
+import org.apache.hadoop.fs.azurebfs.services.AbfsClient;
+import org.apache.hadoop.fs.azurebfs.services.AbfsClientResult;
+import org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation;
+import org.apache.hadoop.fs.azurebfs.services.TestAbfsClient;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+
+import static 
org.apache.hadoop.fs.azurebfs.contracts.services.AzureServiceErrorCode.RENAME_DESTINATION_PARENT_PATH_NOT_FOUND;
+import static org.apache.hadoop.fs.statistics.StoreStatisticNames.OP_RENAME;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class TestAbfsRenameRetryRecovery extends AbstractAbfsIntegrationTest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestAbfsRenameRetryRecovery.class);
+
+  public TestAbfsRenameRetryRecovery() throws Exception {
+  }
+
+  /**
+   * Mock the AbfsClient to run a metadata incomplete scenario with recovery
+   * rename.
+   */
+  @Test
+  public void testRenameFailuresDueToIncompleteMetadata() throws Exception {
+String sourcePath = getMethodName() + "Source";
+String destNoParentPath = "/NoParent/Dest";
+boolean doesDestParentDirExist = false;
+AzureBlobFileSystem fs = getFileSystem();
+
+AbfsClient mockClient = TestAbfsClient.getMockAbfsClient(
+fs.getAbfsStore().getClient(),
+fs.getAbfsStore().getAbfsConfiguration());
+
+AzureBlobFileSystemStore abfsStore = fs.getAbfsStore();
+abfsStore = setAzureBlobSystemStoreField(abfsStore, "client", mockClient);
+
+// SuccessFul Result.
+AbfsRestOperation successOp = mock(AbfsRestOperation.class);
+AbfsClientResult successResult = mock(AbfsClientResult.class);

Review Comment:
   easier just to construct one with `new`



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientResult.java:
##
@@ -0,0 +1,53 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * 

[GitHub] [hadoop] steveloughran commented on a diff in pull request #4331: HADOOP-18242. ABFS Rename Failure when tracking metadata is in an incomplete state

2022-06-09 Thread GitBox


steveloughran commented on code in PR #4331:
URL: https://github.com/apache/hadoop/pull/4331#discussion_r893636106


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -104,9 +104,14 @@ public class AbfsClient implements Closeable {
   private final ListeningScheduledExecutorService executorService;
 
   /**
-   * Has the Rename operation been retried once or not?
+   * Is Abfs metadata been in an incomplete State resulting in a rename
+   * failure?
*/
-  private boolean hasRetriedRenameOnce;
+  private boolean isMetadataIncompleteState;
+
+  /** logging the rename failure if metadata is in an incomplete state*/

Review Comment:
   nit, add a . to keep javadoc quiet



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsRenameRetryRecovery.java:
##
@@ -0,0 +1,159 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.lang.reflect.Field;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
+import org.apache.hadoop.fs.azurebfs.services.AbfsClient;
+import org.apache.hadoop.fs.azurebfs.services.AbfsClientResult;
+import org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation;
+import org.apache.hadoop.fs.azurebfs.services.TestAbfsClient;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+
+import static 
org.apache.hadoop.fs.azurebfs.contracts.services.AzureServiceErrorCode.RENAME_DESTINATION_PARENT_PATH_NOT_FOUND;
+import static org.apache.hadoop.fs.statistics.StoreStatisticNames.OP_RENAME;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class TestAbfsRenameRetryRecovery extends AbstractAbfsIntegrationTest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestAbfsRenameRetryRecovery.class);
+
+  public TestAbfsRenameRetryRecovery() throws Exception {
+  }
+
+  /**
+   * Mock the AbfsClient to run a metadata incomplete scenario with recovery
+   * rename.
+   */
+  @Test
+  public void testRenameFailuresDueToIncompleteMetadata() throws Exception {
+String sourcePath = getMethodName() + "Source";
+String destNoParentPath = "/NoParent/Dest";
+boolean doesDestParentDirExist = false;
+AzureBlobFileSystem fs = getFileSystem();
+
+AbfsClient mockClient = TestAbfsClient.getMockAbfsClient(
+fs.getAbfsStore().getClient(),
+fs.getAbfsStore().getAbfsConfiguration());
+
+AzureBlobFileSystemStore abfsStore = fs.getAbfsStore();
+abfsStore = setAzureBlobSystemStoreField(abfsStore, "client", mockClient);
+
+// SuccessFul Result.
+AbfsRestOperation successOp = mock(AbfsRestOperation.class);
+AbfsClientResult successResult = mock(AbfsClientResult.class);

Review Comment:
   easier just to construct one with `new`



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientResult.java:
##
@@ -0,0 +1,53 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4421: YARN-10122. Support signalToContainer API for Federation.

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4421:
URL: https://github.com/apache/hadoop/pull/4421#issuecomment-1151422875

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  13m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 50s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 23s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   5m 14s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  hadoop-yarn-server-router in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 110m 26s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.router.clientrm.TestFederationClientInterceptor |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4421 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d5e0d831a383 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 19f27faea65a30cf85a788e528af8f9609f101e7 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=780068&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780068
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 17:31
Start Date: 09/Jun/22 17:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#issuecomment-1151405929

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ feature-HADOOP-18028-s3a-prefetch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 29s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 26s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/6/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 3 new + 14 unchanged - 1 fixed 
= 17 total (was 15)  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  5s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 125m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4386 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 452c5c56edcf 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | feature-HADOOP-18028-s3a-prefetch / 
ebb1b1dcc3dc90612057f3e16e9895cd603d4041 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4386: HADOOP-18231. Fixes failing tests & drain stream async.

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#issuecomment-1151405929

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ feature-HADOOP-18028-s3a-prefetch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 29s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 26s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/6/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 3 new + 14 unchanged - 1 fixed 
= 17 total (was 15)  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  5s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 125m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4386 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 452c5c56edcf 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | feature-HADOOP-18028-s3a-prefetch / 
ebb1b1dcc3dc90612057f3e16e9895cd603d4041 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/6/testReport/ |
   | Max. process+thread count | 523 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/6/console |
   | 

[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=780067&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780067
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 17:29
Start Date: 09/Jun/22 17:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#issuecomment-1151404926

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ feature-HADOOP-18028-s3a-prefetch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m  9s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 50s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 38s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 29s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/7/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 3 new + 14 unchanged - 1 fixed 
= 17 total (was 15)  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 59s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 113m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4386 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 0b4784e0f25d 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | feature-HADOOP-18028-s3a-prefetch / 
ebb1b1dcc3dc90612057f3e16e9895cd603d4041 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4386: HADOOP-18231. Fixes failing tests & drain stream async.

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#issuecomment-1151404926

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ feature-HADOOP-18028-s3a-prefetch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m  9s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 50s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  
feature-HADOOP-18028-s3a-prefetch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 38s |  |  
feature-HADOOP-18028-s3a-prefetch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 29s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/7/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 3 new + 14 unchanged - 1 fixed 
= 17 total (was 15)  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 59s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 113m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4386 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 0b4784e0f25d 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | feature-HADOOP-18028-s3a-prefetch / 
ebb1b1dcc3dc90612057f3e16e9895cd603d4041 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/7/testReport/ |
   | Max. process+thread count | 612 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4386/7/console |
   | 

[jira] [Work logged] (HADOOP-18197) Update protobuf 3.7.1 to a version without CVE-2021-22569

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18197?focusedWorklogId=780062&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780062
 ]

ASF GitHub Bot logged work on HADOOP-18197:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 17:14
Start Date: 09/Jun/22 17:14
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on PR #19:
URL: https://github.com/apache/hadoop-thirdparty/pull/19#issuecomment-1151391473

   HBase does shade protobuf and doesn't suffix the version, I suppose:
   
https://github.com/apache/hbase-thirdparty/blob/master/hbase-shaded-protobuf/pom.xml#L25
   
   We are using this internally only, I guess; the version will be tied to the 
hadoop-thirdparty release version. So, if I have to choose one option, I would 
choose to keep the name the same.
   
   If we choose to keep changing the name for some reason, we should start doing 
that with guava as well, at least for future releases.




Issue Time Tracking
---

Worklog Id: (was: 780062)
Time Spent: 40m  (was: 0.5h)

> Update protobuf 3.7.1 to a version without CVE-2021-22569
> -
>
> Key: HADOOP-18197
> URL: https://issues.apache.org/jira/browse/HADOOP-18197
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ivan Viaznikov
>Priority: Major
>  Labels: pull-request-available, security
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The artifact `org.apache.hadoop:hadoop-common` brings in a dependency 
> `com.google.protobuf:protobuf-java:2.5.0`, which is an outdated version 
> released in 2013 and it contains a vulnerability 
> [CVE-2021-22569|https://nvd.nist.gov/vuln/detail/CVE-2021-22569].
> Therefore, requesting you to clarify if this library version is going to be 
> updated in the following releases



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop-thirdparty] ayushtkn commented on pull request #19: HADOOP-18197. Upgrade protobuf to 3.21.1

2022-06-09 Thread GitBox


ayushtkn commented on PR #19:
URL: https://github.com/apache/hadoop-thirdparty/pull/19#issuecomment-1151391473

   HBase does shade protobuf and doesn't suffix the version, I suppose:
   
https://github.com/apache/hbase-thirdparty/blob/master/hbase-shaded-protobuf/pom.xml#L25
   
   We are using this internally only, I guess; the version will be tied to the 
hadoop-thirdparty release version. So, if I have to choose one option, I would 
choose to keep the name the same.
   
   If we choose to keep changing the name for some reason, we should start doing 
that with guava as well, at least for future releases.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4421: YARN-10122. Support signalToContainer API for Federation.

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4421:
URL: https://github.com/apache/hadoop/pull/4421#issuecomment-1151389304

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 14s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   4m 59s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  hadoop-yarn-server-router in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  92m 46s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.router.clientrm.TestFederationClientInterceptor |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4421/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4421 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux ab1773bf98a2 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1288249b9f9b796f131e7a9b11c6bf3eb798752a |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=780037&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780037
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 16:21
Start Date: 09/Jun/22 16:21
Worklog Time Spent: 10m 
  Work Description: ahmarsuhail closed pull request #4305: HADOOP-18231. 
Adds in new test for S3PrefetchingInputStream
URL: https://github.com/apache/hadoop/pull/4305




Issue Time Tracking
---

Worklog Id: (was: 780037)
Time Spent: 4h  (was: 3h 50m)

> tests in ITestS3AInputStreamPerformance are failing 
> 
>
> Key: HADOOP-18231
> URL: https://issues.apache.org/jira/browse/HADOOP-18231
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects stream to be opened 4 times (once for 
> every random read), but prefetching will only open twice. 
> testDecompressionSequential128K - expects stream to be opened once, but 
> prefetching will open once for each block the file has. landsat file used in 
> the test has size 42MB, prefetching block size = 8MB, expected open count is 
> 6.
>  testReadWithNormalPolicy - same as above. 
> testRandomIONormalPolicy - executes random IO, but with a normal policy. 
> S3AInputStream will abort the stream and change the policy, prefetching 
> handles random IO by caching blocks so doesn't do any of that. 
> testRandomReadOverBuffer - multiple assertions failing here, also depends a 
> lot on readAhead values, not very relevant for prefetching
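
A minimal sketch (plain Java, not from the patch) of the arithmetic behind the
expected open count described above:

    long fileSize = 42L * 1024 * 1024;          // landsat test file, ~42 MB
    long prefetchBlockSize = 8L * 1024 * 1024;  // prefetching block size, 8 MB
    // one GET per block, so ceil(42 / 8) = 6 expected stream opens
    long expectedOpens = (fileSize + prefetchBlockSize - 1) / prefetchBlockSize;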



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18221) stream warns Not all bytes were read from the S3ObjectInputStream when closed

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18221?focusedWorklogId=780035&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780035
 ]

ASF GitHub Bot logged work on HADOOP-18221:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 16:21
Start Date: 09/Jun/22 16:21
Worklog Time Spent: 10m 
  Work Description: ahmarsuhail closed pull request #4294: HADOOP-18221. 
Drains stream async before closing
URL: https://github.com/apache/hadoop/pull/4294




Issue Time Tracking
---

Worklog Id: (was: 780035)
Time Spent: 1h 40m  (was: 1.5h)

> stream warns Not all bytes were read from the S3ObjectInputStream when closed
> -
>
> Key: HADOOP-18221
> URL: https://issues.apache.org/jira/browse/HADOOP-18221
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Issue: [https://github.com/aws/aws-sdk-java/issues/1211] has resurfaced in 
> the prefetching stream when it is closed before reading for blocks is 
> complete. This can be fixed by draining the stream before closing 
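
A minimal sketch (generic Java, not from the patch) of what "draining the
stream before closing" means here, i.e. reading the remaining body to EOF so
the SDK does not warn and the HTTP connection can be reused:

    static void drainBeforeClose(java.io.InputStream in) throws java.io.IOException {
      byte[] buffer = new byte[8192];
      while (in.read(buffer) >= 0) {
        // discard the remaining bytes
      }
      in.close();
    }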



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ahmarsuhail closed pull request #4305: HADOOP-18231. Adds in new test for S3PrefetchingInputStream

2022-06-09 Thread GitBox


ahmarsuhail closed pull request #4305: HADOOP-18231. Adds in new test for 
S3PrefetchingInputStream
URL: https://github.com/apache/hadoop/pull/4305


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ahmarsuhail closed pull request #4294: HADOOP-18221. Drains stream async before closing

2022-06-09 Thread GitBox


ahmarsuhail closed pull request #4294: HADOOP-18221. Drains stream async before 
closing
URL: https://github.com/apache/hadoop/pull/4294


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4375: HDFS-16605. Improve Code With Lambda in hadoop-hdfs-rbf moudle.

2022-06-09 Thread GitBox


slfan1989 commented on PR #4375:
URL: https://github.com/apache/hadoop/pull/4375#issuecomment-1151312523

   @goiri Please help me review the code again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=780019&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780019
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 15:48
Start Date: 09/Jun/22 15:48
Worklog Time Spent: 10m 
  Work Description: ahmarsuhail commented on PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#issuecomment-1151306815

   @steveloughran thanks for reviewing, I've updated as per your comments 




Issue Time Tracking
---

Worklog Id: (was: 780019)
Time Spent: 3h 50m  (was: 3h 40m)

> tests in ITestS3AInputStreamPerformance are failing 
> 
>
> Key: HADOOP-18231
> URL: https://issues.apache.org/jira/browse/HADOOP-18231
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects stream to be opened 4 times (once for 
> every random read), but prefetching will only open twice. 
> testDecompressionSequential128K - expects stream to be opened once, but 
> prefetching will open once for each block the file has. landsat file used in 
> the test has size 42MB, prefetching block size = 8MB, expected open count is 
> 6.
>  testReadWithNormalPolicy - same as above. 
> testRandomIONormalPolicy - executes random IO, but with a normal policy. 
> S3AInputStream will abort the stream and change the policy, prefetching 
> handles random IO by caching blocks so doesn't do any of that. 
> testRandomReadOverBuffer - multiple assertions failing here, also depends a 
> lot on readAhead values, not very relevant for prefetching



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ahmarsuhail commented on pull request #4386: HADOOP-18231. Fixes failing tests & drain stream async.

2022-06-09 Thread GitBox


ahmarsuhail commented on PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#issuecomment-1151306815

   @steveloughran thanks for reviewing, I've updated as per your comments 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=780017&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780017
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 15:47
Start Date: 09/Jun/22 15:47
Worklog Time Spent: 10m 
  Work Description: ahmarsuhail commented on code in PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#discussion_r893678217


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static 
org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+/**
+ * Test the prefetching input stream, validates that the underlying 
S3CachingInputStream and
+ * S3InMemoryInputStream are working as expected.
+ */
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {

Review Comment:
   guessing you meant something else as it's already called 
ITestS3PrefetchingInputStream?





Issue Time Tracking
---

Worklog Id: (was: 780017)
Time Spent: 3.5h  (was: 3h 20m)

> tests in ITestS3AInputStreamPerformance are failing 
> 
>
> Key: HADOOP-18231
> URL: https://issues.apache.org/jira/browse/HADOOP-18231
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects stream to be opened 4 times (once for 
> every random read), but prefetching will only open twice. 
> testDecompressionSequential128K - expects stream to be opened once, but 
> prefetching will open once for each block the file has. landsat file used in 
> the test has size 42MB, prefetching block size = 8MB, expected open count is 
> 6.
>  testReadWithNormalPolicy - same as above. 
> testRandomIONormalPolicy - executes random IO, but with a normal policy. 
> S3AInputStream will abort the stream and change the policy, prefetching 
> handles random IO by caching blocks so doesn't do any of that. 
> testRandomReadOverBuffer - multiple assertions failing here, also depends a 
> lot on readAhead values, not very relevant for prefetching



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=780018&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780018
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 15:47
Start Date: 09/Jun/22 15:47
Worklog Time Spent: 10m 
  Work Description: ahmarsuhail commented on code in PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#discussion_r893679145


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/read/S3File.java:
##
@@ -193,18 +203,7 @@ public InputStream openForRead(long offset, int size) 
throws IOException {
 return stream;
   }
 
-  /**
-   * Closes this stream and releases all acquired resources.
-   */
-  @Override
-  public synchronized void close() {
-List streams = new 
ArrayList(this.s3Objects.keySet());
-for (InputStream stream : streams) {
-  this.close(stream);
-}
-  }
-
-  void close(InputStream inputStream) {
+  void close(InputStream inputStream, int numRemainingBytes) {

Review Comment:
   nope, it's used by S3Reader 





Issue Time Tracking
---

Worklog Id: (was: 780018)
Time Spent: 3h 40m  (was: 3.5h)

> tests in ITestS3AInputStreamPerformance are failing 
> 
>
> Key: HADOOP-18231
> URL: https://issues.apache.org/jira/browse/HADOOP-18231
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects stream to be opened 4 times (once for 
> every random read), but prefetching will only open twice. 
> testDecompressionSequential128K - expects stream to be opened once, but 
> prefetching will open once for each block the file has. landsat file used in 
> the test has size 42MB, prefetching block size = 8MB, expected open count is 
> 6.
>  testReadWithNormalPolicy - same as above. 
> testRandomIONormalPolicy - executes random IO, but with a normal policy. 
> S3AInputStream will abort the stream and change the policy, prefetching 
> handles random IO by caching blocks so doesn't do any of that. 
> testRandomReadOverBuffer - multiple assertions failing here, also depends a 
> lot on readAhead values, not very relevant for prefetching



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ahmarsuhail commented on a diff in pull request #4386: HADOOP-18231. Fixes failing tests & drain stream async.

2022-06-09 Thread GitBox


ahmarsuhail commented on code in PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#discussion_r893679145


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/read/S3File.java:
##
@@ -193,18 +203,7 @@ public InputStream openForRead(long offset, int size) 
throws IOException {
 return stream;
   }
 
-  /**
-   * Closes this stream and releases all acquired resources.
-   */
-  @Override
-  public synchronized void close() {
-List streams = new 
ArrayList(this.s3Objects.keySet());
-for (InputStream stream : streams) {
-  this.close(stream);
-}
-  }
-
-  void close(InputStream inputStream) {
+  void close(InputStream inputStream, int numRemainingBytes) {

Review Comment:
   nope, it's used by S3Reader 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ahmarsuhail commented on a diff in pull request #4386: HADOOP-18231. Fixes failing tests & drain stream async.

2022-06-09 Thread GitBox


ahmarsuhail commented on code in PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#discussion_r893678217


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static 
org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+/**
+ * Test the prefetching input stream, validates that the underlying 
S3CachingInputStream and
+ * S3InMemoryInputStream are working as expected.
+ */
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {

Review Comment:
   guessing you meant something else as it's already called 
ITestS3PrefetchingInputStream?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=780016&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780016
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 15:46
Start Date: 09/Jun/22 15:46
Worklog Time Spent: 10m 
  Work Description: ahmarsuhail commented on code in PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#discussion_r893675594


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/read/S3File.java:
##
@@ -214,7 +213,83 @@ void close(InputStream inputStream) {
   this.s3Objects.remove(inputStream);
 }
 
+if (numRemainingBytes <= this.context.getAsyncDrainThreshold()) {
+  // don't bother with async io.
+  drain(false, "close() operation", numRemainingBytes, obj, inputStream);
+} else {
+  LOG.debug("initiating asynchronous drain of {} bytes", 
numRemainingBytes);
+  // schedule an async drain/abort with references to the fields so they
+  // can be reused
+  client.submit(() -> drain(false, "close() operation", numRemainingBytes, 
obj, inputStream));
+}
+  }
+
+  /**
+   * drain the stream. This method is intended to be
+   * used directly or asynchronously, and measures the
+   * duration of the operation in the stream statistics.
+   *
+   * @param shouldAbort   force an abort; used if explicitly requested.
+   * @param reasonreason for stream being closed; used in messages
+   * @param remaining remaining bytes
+   * @param requestObject http request object;
+   * @param inputStream   stream to close.
+   * @return was the stream aborted?
+   */
+  private boolean drain(final boolean shouldAbort, final String reason, final 
long remaining,
+  final S3Object requestObject, final InputStream inputStream) {
+
+try {
+  return 
invokeTrackingDuration(streamStatistics.initiateInnerStreamClose(shouldAbort),
+  () -> drainOrAbortHttpStream(shouldAbort, reason, remaining, 
requestObject, inputStream));
+} catch (IOException e) {
+  // this is only here because invokeTrackingDuration() has it in its
+  // signature
+  return shouldAbort;
+}
+  }
+
+  /**
+   * Drain or abort the inner stream.
+   * Exceptions are swallowed.
+   * If a close() is attempted and fails, the operation escalates to
+   * an abort.
+   *
+   * @param shouldAbort   force an abort; used if explicitly requested.
+   * @param reasonreason for stream being closed; used in messages
+   * @param remaining remaining bytes
+   * @param requestObject http request object
+   * @param inputStream   stream to close.
+   * @return was the stream aborted?
+   */
+  private boolean drainOrAbortHttpStream(boolean shouldAbort, final String 
reason,
+  final long remaining, final S3Object requestObject, final InputStream 
inputStream) {
+
+if (!shouldAbort && remaining > 0) {
+  try {
+long drained = 0;
+byte[] buffer = new byte[DRAIN_BUFFER_SIZE];
+while (true) {
+  final int count = inputStream.read(buffer);
+  if (count < 0) {
+// no more data is left
+break;
+  }
+  drained += count;
+}
+LOG.debug("Drained stream of {} bytes", drained);
+  } catch (Exception e) {
+// exception escalates to an abort
+LOG.debug("When closing {} stream for {}, will abort the stream", uri, 
reason, e);
+shouldAbort = true;
+  }
+}
 Io.closeIgnoringIoException(inputStream);
-Io.closeIgnoringIoException(obj);
+Io.closeIgnoringIoException(requestObject);

Review Comment:
   This was a new `Io` class added as part of the initial prefetching PR. I've 
removed this class, and updated usages with `cleanupWithLogger`





Issue Time Tracking
---

Worklog Id: (was: 780016)
Time Spent: 3h 20m  (was: 3h 10m)

> tests in ITestS3AInputStreamPerformance are failing 
> 
>
> Key: HADOOP-18231
> URL: https://issues.apache.org/jira/browse/HADOOP-18231
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects stream to be opened 4 times (once for 
> every random read), but prefetching will only open twice. 
> testDecompressionSequential128K - expects stream to be opened once, but 
> prefetching will open once for each block the file has. landsat file used in 
> the test has size 42MB, prefetching block size = 8MB, expected open count is 
> 6.
>  testReadWithNormalPolicy - 

[GitHub] [hadoop] ahmarsuhail commented on a diff in pull request #4386: HADOOP-18231. Fixes failing tests & drain stream async.

2022-06-09 Thread GitBox


ahmarsuhail commented on code in PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#discussion_r893675594


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/read/S3File.java:
##
@@ -214,7 +213,83 @@ void close(InputStream inputStream) {
   this.s3Objects.remove(inputStream);
 }
 
+if (numRemainingBytes <= this.context.getAsyncDrainThreshold()) {
+  // don't bother with async io.
+  drain(false, "close() operation", numRemainingBytes, obj, inputStream);
+} else {
+  LOG.debug("initiating asynchronous drain of {} bytes", 
numRemainingBytes);
+  // schedule an async drain/abort with references to the fields so they
+  // can be reused
+  client.submit(() -> drain(false, "close() operation", numRemainingBytes, 
obj, inputStream));
+}
+  }
+
+  /**
+   * drain the stream. This method is intended to be
+   * used directly or asynchronously, and measures the
+   * duration of the operation in the stream statistics.
+   *
+   * @param shouldAbort   force an abort; used if explicitly requested.
+   * @param reasonreason for stream being closed; used in messages
+   * @param remaining remaining bytes
+   * @param requestObject http request object;
+   * @param inputStream   stream to close.
+   * @return was the stream aborted?
+   */
+  private boolean drain(final boolean shouldAbort, final String reason, final 
long remaining,
+  final S3Object requestObject, final InputStream inputStream) {
+
+try {
+  return 
invokeTrackingDuration(streamStatistics.initiateInnerStreamClose(shouldAbort),
+  () -> drainOrAbortHttpStream(shouldAbort, reason, remaining, 
requestObject, inputStream));
+} catch (IOException e) {
+  // this is only here because invokeTrackingDuration() has it in its
+  // signature
+  return shouldAbort;
+}
+  }
+
+  /**
+   * Drain or abort the inner stream.
+   * Exceptions are swallowed.
+   * If a close() is attempted and fails, the operation escalates to
+   * an abort.
+   *
+   * @param shouldAbort   force an abort; used if explicitly requested.
+   * @param reasonreason for stream being closed; used in messages
+   * @param remaining remaining bytes
+   * @param requestObject http request object
+   * @param inputStream   stream to close.
+   * @return was the stream aborted?
+   */
+  private boolean drainOrAbortHttpStream(boolean shouldAbort, final String 
reason,
+  final long remaining, final S3Object requestObject, final InputStream 
inputStream) {
+
+if (!shouldAbort && remaining > 0) {
+  try {
+long drained = 0;
+byte[] buffer = new byte[DRAIN_BUFFER_SIZE];
+while (true) {
+  final int count = inputStream.read(buffer);
+  if (count < 0) {
+// no more data is left
+break;
+  }
+  drained += count;
+}
+LOG.debug("Drained stream of {} bytes", drained);
+  } catch (Exception e) {
+// exception escalates to an abort
+LOG.debug("When closing {} stream for {}, will abort the stream", uri, 
reason, e);
+shouldAbort = true;
+  }
+}
 Io.closeIgnoringIoException(inputStream);
-Io.closeIgnoringIoException(obj);
+Io.closeIgnoringIoException(requestObject);

Review Comment:
   This was a new `Io` class added as part of the initial prefetching PR. I've 
removed this class, and updated usages with `cleanupWithLogger`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 opened a new pull request, #4421: YARN-10122. Support signalToContainer API for Federation.

2022-06-09 Thread GitBox


slfan1989 opened a new pull request, #4421:
URL: https://github.com/apache/hadoop/pull/4421

   JIRA: YARN-10122. Support signalToContainer API for Federation.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4408: YARN-11172. Fix TestClientRMTokens#testDelegationToken introduced by HDFS-16563.

2022-06-09 Thread GitBox


steveloughran commented on PR #4408:
URL: https://github.com/apache/hadoop/pull/4408#issuecomment-1151249997

   just do the one which is broken. yes, the rest is out of date, but it would 
make for a larger patch, harder to cherrypick into branch-3.3 etc.
   
   with intercept(), if the wrong exception type or message is found, the 
exception is rethrown, so we can see what is actually there.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on pull request #4366: HDFS-16598. All datanodes XXX are bad. Aborting...

2022-06-09 Thread GitBox


ZanderXu commented on PR #4366:
URL: https://github.com/apache/hadoop/pull/4366#issuecomment-1151236331

   > Just suggest to improve them together in one PR.
   @Hexiaoqiao do you mean to change all getReplicaInfo(ExtendedBlock b) to 
getReplicaInfo(String bpid, long blkid) in the fine-grained lock in this PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on pull request #4369: HDFS-16601. Failed to replace a bad datanode on the existing pipeline…

2022-06-09 Thread GitBox


ZanderXu commented on PR #4369:
URL: https://github.com/apache/hadoop/pull/4369#issuecomment-1151228050

   Thanks @Hexiaoqiao for your suggestion. Yeah, you are right, the client needs more 
failure information, such as whether the transfer source or the transfer target 
failed. If the client had more information about the failed transfer, it could 
accurately and efficiently remove the abnormal node. But that would be a big 
feature.
   
   Fortunately, at present, as long as the failure exception is thrown to the client, the 
client assumes the new DN is abnormal, excludes it, and retries the transfer. During 
the retried transfer, the client chooses a new source DN and a new target DN, so the 
source and target DNs from the previous failed transfer round are replaced. 
   If the target DN caused the failure, excluding the target DN is enough.
   If the source DN caused the failure, it will be removed when building the new 
pipeline.
   
   So I think the simple approach is to just throw the failure exception to the client, and 
the client can find and remove the real abnormal datanode. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4419: HDFS-16627. improve BPServiceActor#register Log Add NN Addr.

2022-06-09 Thread GitBox


slfan1989 commented on PR #4419:
URL: https://github.com/apache/hadoop/pull/4419#issuecomment-1151214937

   > LGTM. Thanks @slfan1989
   
   Thank you for helping to review the code!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4419: HDFS-16627. improve BPServiceActor#register Log Add NN Addr.

2022-06-09 Thread GitBox


slfan1989 commented on code in PR #4419:
URL: https://github.com/apache/hadoop/pull/4419#discussion_r893598028


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
##
@@ -816,7 +816,7 @@ void register(NamespaceInfo nsInfo) throws IOException {
 // off disk - so update the bpRegistration object from that info
 DatanodeRegistration newBpRegistration = bpos.createRegistration();
 
-LOG.info(this + " beginning handshake with NN");
+LOG.info("{} beginning handshake with NN:{}", this, nnAddr);

Review Comment:
   @Hexiaoqiao Thanks for your help reviewing the code, I will fix it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ZanderXu commented on pull request #4367: HDFS-16600. Fix deadlock on DataNode side.

2022-06-09 Thread GitBox


ZanderXu commented on PR #4367:
URL: https://github.com/apache/hadoop/pull/4367#issuecomment-1151202018

   Oh, I'm sorry, the failed UT is 
`org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #4419: HDFS-16627. improve BPServiceActor#register Log Add NN Addr.

2022-06-09 Thread GitBox


Hexiaoqiao commented on code in PR #4419:
URL: https://github.com/apache/hadoop/pull/4419#discussion_r893570312


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
##
@@ -816,7 +816,7 @@ void register(NamespaceInfo nsInfo) throws IOException {
 // off disk - so update the bpRegistration object from that info
 DatanodeRegistration newBpRegistration = bpos.createRegistration();
 
-LOG.info(this + " beginning handshake with NN");
+LOG.info("{} beginning handshake with NN:{}", this, nnAddr);

Review Comment:
   Please keep the log format consistent, e.g. delete the trailing blank at the end of a 
sentence and leave one blank between two words. For example,
   L819: `LOG.info("{} beginning handshake with NN: {}", this, nnAddr);`
   L831: `LOG.info("Problem connecting to server: {}", nnAddr);`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhengchenyu commented on pull request #4408: YARN-11172. Fix TestClientRMTokens#testDelegationToken introduced by HDFS-16563.

2022-06-09 Thread GitBox


zhengchenyu commented on PR #4408:
URL: https://github.com/apache/hadoop/pull/4408#issuecomment-1151183063

   > LambdaTestUtils.intercept is the way to test this; its our 
reimplementation of ScalaTest intercept. if the exception class or message is 
wrong, the exception is rethrown.
   > 
   > which would you prefer in jenkins log. a message saying "an assert true 
failed" or "here is the exception which was raised with all its stack"?
   
   I will fix the code like this:
   ```
 final ApplicationClientProtocol finalClientRMWithDT = clientRMWithDT;
 final GetNewApplicationRequest finalRequest = request;
 LambdaTestUtils.intercept(InvalidToken.class, "Token  has expired",
 () -> finalClientRMWithDT.getNewApplication(finalRequest));
   ```
   
   Should I fix all the code in this style? I found many places in 
TestClientRMTokens that need to be fixed in this way.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on pull request #4367: HDFS-16600. Fix deadlock on DataNode side.

2022-06-09 Thread GitBox


Hexiaoqiao commented on PR #4367:
URL: https://github.com/apache/hadoop/pull/4367#issuecomment-1151166917

   Thanks all for the further discussion. About the unit test, I could not find this one, 
`org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.testSynchronousEviction`.
 Anything I missed? @ZanderXu Would you mind checking whether this test is really located 
in the hadoop-hdfs module now? Please correct me if I am wrong. Thanks again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on pull request #4369: HDFS-16601. Failed to replace a bad datanode on the existing pipeline…

2022-06-09 Thread GitBox


Hexiaoqiao commented on PR #4369:
URL: https://github.com/apache/hadoop/pull/4369#issuecomment-1151160510

   Thanks for starting this proposal. From my practice there are still many issues with 
data transfer for pipeline recovery, covering both basic function and performance. 
IIRC, there are only a timeout exception and one exception with no explicit meaning, 
so the client has no helpful information (such as whether the src node or the target 
node hit an issue, or other exceptions) to make a decision. 
   Back to this PR, I totally agree with throwing the exception from the datanode to the 
client first (but I am not sure whether it is enough in this PR; maybe we need more 
information) and then adding more fault-tolerant logic on the client side.
   IMO, we should file a new JIRA to design/refactor the fault tolerance of data 
transfer of pipeline recovery. Just my own suggestion, not a blocker.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4420: HDFS-16626. Under replicated blocks in dfsadmin report should contain pendingReconstruction‘s blocks

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4420:
URL: https://github.com/apache/hadoop/pull/4420#issuecomment-1151153833

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 42s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 372m  0s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4420/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 489m 39s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
   |   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4420/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4420 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6d1b43af65c1 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8c07b5d97819c4532795f5643ec380953e597d0a |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4420/1/testReport/ |
   | Max. process+thread count | 2193 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 

[GitHub] [hadoop] zhengchenyu commented on pull request #4408: YARN-11172. Fix TestClientRMTokens#testDelegationToken introduced by HDFS-16563.

2022-06-09 Thread GitBox


zhengchenyu commented on PR #4408:
URL: https://github.com/apache/hadoop/pull/4408#issuecomment-1151144964

   > LambdaTestUtils.intercept is the way to test this; its our 
reimplementation of ScalaTest intercept. if the exception class or message is 
wrong, the exception is rethrown.
   > 
   > which would you prefer in jenkins log. a message saying "an assert true 
failed" or "here is the exception which was raised with all its stack"?
   
   other PR link:
   
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4314/5/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
   
   Fail log are:
   
   ```
   INFO] 
   [ERROR] Errors: 
   [ERROR] 
org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.testDelegationToken(org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens)
   [ERROR]   Run 1: TestClientRMTokens.testDelegationToken:207
   [ERROR]   Run 2: TestClientRMTokens.testDelegationToken:138 » YarnRuntime 
java.net.BindExceptio...
   [ERROR]   Run 3: TestClientRMTokens.testDelegationToken:138 » YarnRuntime 
java.net.BindExceptio...
   [INFO] 
   [INFO] 
   [ERROR] Tests run: 3322, Failures: 0, Errors: 1, Skipped: 9
   [INFO] 
   [ERROR] There are test failures.
   
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ashutoshcipher commented on pull request #4389: HDFS-16576.Remove unused Imports in Hadoop HDFS project

2022-06-09 Thread GitBox


ashutoshcipher commented on PR #4389:
URL: https://github.com/apache/hadoop/pull/4389#issuecomment-1151136011

   Thanks @aajisaka and @tomscut.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #4389: HDFS-16576.Remove unused Imports in Hadoop HDFS project

2022-06-09 Thread GitBox


aajisaka commented on PR #4389:
URL: https://github.com/apache/hadoop/pull/4389#issuecomment-1151133768

   Merged. Thank you @ashutoshcipher and @tomscut 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka merged pull request #4389: HDFS-16576.Remove unused Imports in Hadoop HDFS project

2022-06-09 Thread GitBox


aajisaka merged PR #4389:
URL: https://github.com/apache/hadoop/pull/4389


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #4407: HDFS-16622. addRDBI in IncrementalBlockReportManager may remove the b…

2022-06-09 Thread GitBox


Hexiaoqiao commented on code in PR #4407:
URL: https://github.com/apache/hadoop/pull/4407#discussion_r893514408


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/IncrementalBlockReportManager.java:
##
@@ -251,12 +251,20 @@ synchronized void addRDBI(ReceivedDeletedBlockInfo rdbi,
   DatanodeStorage storage) {
 // Make sure another entry for the same block is first removed.
 // There may only be one such entry.
+ReceivedDeletedBlockInfo removedInfo = null;
 for (PerStorageIBR perStorage : pendingIBRs.values()) {
-  if (perStorage.remove(rdbi.getBlock()) != null) {
+  removedInfo = perStorage.remove(rdbi.getBlock());
+  if (removedInfo != null) {
 break;
   }
 }
-getPerStorageIBR(storage).put(rdbi);
+if (removedInfo != null &&

Review Comment:
   @ZanderXu Thanks for the detailed information. It is an interesting case. 
This improvement makes sense to me. Would you mind adding a unit test to 
cover this case?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4416: HDFS-16626. Under replicated blocks in dfsadmin report should contain pendingReconstruction‘s blocks

2022-06-09 Thread GitBox


hadoop-yetus commented on PR #4416:
URL: https://github.com/apache/hadoop/pull/4416#issuecomment-115223

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  39m  5s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4416/2/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   1m 40s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 374m 30s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4416/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 489m 58s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
   |   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4416/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4416 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fd19872945ee 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 00c7619add32505329284ed9b6128dfe922f3add |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 

[GitHub] [hadoop] steveloughran commented on pull request #4408: YARN-11172. Fix TestClientRMTokens#testDelegationToken introduced by HDFS-16563.

2022-06-09 Thread GitBox


steveloughran commented on PR #4408:
URL: https://github.com/apache/hadoop/pull/4408#issuecomment-1151052628

   LambdaTestUtils.intercept is the way to test this; its our reimplementation 
of ScalaTest intercept. if the exception class or message is wrong, the 
exception is rethrown.
   
   which would you prefer in jenkins log. a message saying "an assert true 
failed" or "here is the exception which was raised with all its stack"?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18275) update os-maven-plugin to 1.7.0

2022-06-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18275:

Description: 
the os-maven-plugin we build with is 1.15; the release is up to 1.7.0 - update it.

when backporting this patch, YARN-11173 must be applied to keep yarn-csi in sync

  was:
the os-maven-plugin we build with is 1.15; the release is up to 1.17.0

when 


> update os-maven-plugin to 1.7.0
> ---
>
> Key: HADOOP-18275
> URL: https://issues.apache.org/jira/browse/HADOOP-18275
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> the os-maven-plugin we build with is 1.15; the release is up to 1.7.0 - update it.
> when backporting this patch, YARN-11173 must be applied to keep yarn-csi in 
> sync



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18275) update os-maven-plugin to 1.7.0

2022-06-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18275:

Description: 
the os-maven-plugin we build with is 1.15; the release is up to 1.17.0

when 

  was:
the os-maven-plugin we build with is 1.15; the release is up to 1.17.0

update this


> update os-maven-plugin to 1.7.0
> ---
>
> Key: HADOOP-18275
> URL: https://issues.apache.org/jira/browse/HADOOP-18275
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> the os-maven-plugin we build with is 1.15; the release is up to 1.17.0
> when 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4417: YARN-11173. remove redeclaration of os-maven-plugin.version from yarn-csi

2022-06-09 Thread GitBox


steveloughran commented on PR #4417:
URL: https://github.com/apache/hadoop/pull/4417#issuecomment-1151043823

   thanks, merging with a comment linking to the other patch


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #4417: YARN-11173. remove redeclaration of os-maven-plugin.version from yarn-csi

2022-06-09 Thread GitBox


steveloughran merged PR #4417:
URL: https://github.com/apache/hadoop/pull/4417


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18197) Update protobuf 3.7.1 to a version without CVE-2021-22569

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18197?focusedWorklogId=779906=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-779906
 ]

ASF GitHub Bot logged work on HADOOP-18197:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 12:09
Start Date: 09/Jun/22 12:09
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on PR #19:
URL: https://github.com/apache/hadoop-thirdparty/pull/19#issuecomment-1151040619

   aah, i see the discussion. ok. that complicates life even more. really not 
sure what to do here. 
   
   if we were exporting a module for others to use, having the version in module names 
makes sense. if this is for internal use *only*, then not putting the version in the 
name works better.
   
   what to do here? 
   1. rename the module and pom artifacts and then have hadoop versions import 
the protobuf_3_21 module
   2. keep both side by side
   
   if the repackaging retains the names of the paths then after adding a new 
module with the new version, new compilations will link with the new lib, but 
old stuff will still work




Issue Time Tracking
---

Worklog Id: (was: 779906)
Time Spent: 0.5h  (was: 20m)

> Update protobuf 3.7.1 to a version without CVE-2021-22569
> -
>
> Key: HADOOP-18197
> URL: https://issues.apache.org/jira/browse/HADOOP-18197
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ivan Viaznikov
>Priority: Major
>  Labels: pull-request-available, security
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The artifact `org.apache.hadoop:hadoop-common` brings in a dependency 
> `com.google.protobuf:protobuf-java:2.5.0`, which is an outdated version 
> released in 2013 and it contains a vulnerability 
> [CVE-2021-22569|https://nvd.nist.gov/vuln/detail/CVE-2021-22569].
> Therefore, requesting you to clarify if this library version is going to be 
> updated in the following releases



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop-thirdparty] steveloughran commented on pull request #19: HADOOP-18197. Upgrade protobuf to 3.21.1

2022-06-09 Thread GitBox


steveloughran commented on PR #19:
URL: https://github.com/apache/hadoop-thirdparty/pull/19#issuecomment-1151040619

   aah, i see the discussion. ok. that complicates life even more. really not 
sure what to do here. 
   
   if we were exporting a module for others to use, having the version in module names 
makes sense. if this is for internal use *only*, then not putting the version in the 
name works better.
   
   what to do here? 
   1. rename the module and pom artifacts and then have hadoop versions import 
the protobuf_3_21 module
   2. keep both side by side
   
   if the repackaging retains the names of the paths then after adding a new 
module with the new version, new compilations will link with the new lib, but 
old stuff will still work


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a diff in pull request #4386: HADOOP-18231. Fixes failing tests & drain stream async.

2022-06-09 Thread GitBox


steveloughran commented on code in PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#discussion_r893241456


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/read/S3File.java:
##
@@ -98,6 +107,7 @@ public S3File(
 this.streamStatistics = streamStatistics;
 this.changeTracker = changeTracker;
 this.s3Objects = new IdentityHashMap();

Review Comment:
   while you are there, can you change to a simple <> here 
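   
   i.e. something like this (a sketch, assuming the field keeps its declared generic parameters):
   
   ```java
   this.s3Objects = new IdentityHashMap<>();
   ```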



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static 
org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+/**
+ * Test the prefetching input stream, validates that the underlying 
S3CachingInputStream and
+ * S3InMemoryInputStream are working as expected.
+ */
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {
+
+  public ITestS3PrefetchingInputStream() {
+super(true);
+  }
+
+  private static final int _1K = 1024;
+  // Path for file which should have length > block size so 
S3CachingInputStream is used
+  private Path largeFile;
+  private FileSystem fs;
+  private int numBlocks;
+  private int blockSize;
+  private long largeFileSize;
+  // Size should be < block size so S3InMemoryInputStream is used
+  private static final int SMALL_FILE_SIZE = _1K * 16;
+
+
+  @Override
+  public Configuration createConfiguration() {
+Configuration conf = super.createConfiguration();
+S3ATestUtils.removeBaseAndBucketOverrides(conf, PREFETCH_ENABLED_KEY);
+conf.setBoolean(PREFETCH_ENABLED_KEY, true);
+return conf;
+  }
+
+
+  private void openFS() throws IOException {

Review Comment:
for better isolation between tests, we need that FS to *not* be cached. 
otherwise, if a test in the same JVM has already accessed this path, you may 
get its FS with its settings.
   
   use `FileSystem.createFileSystem()`, override `teardown()` and invoke 
`cleanupWithLogger(LOG, largeFileFS)` to close it if it is non null
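   
   A rough sketch of that pattern (assuming a `largeFileFS` field and a `LOG` logger in 
the test class; `FileSystem.newInstance()` stands in here for the uncached-open call, 
and `cleanupWithLogger` is `org.apache.hadoop.io.IOUtils.cleanupWithLogger`):
   
   ```java
   // sketch: keep an uncached FS so another test's cached settings can't leak in
   private FileSystem largeFileFS;

   private void openFS() throws IOException {
     Configuration conf = getFileSystem().getConf();
     // FileSystem.newInstance() bypasses the FileSystem cache
     largeFileFS = FileSystem.newInstance(largeFile.toUri(), conf);
   }

   @Override
   public void teardown() throws Exception {
     super.teardown();
     // close the uncached instance if it was opened; safe when null
     IOUtils.cleanupWithLogger(LOG, largeFileFS);
     largeFileFS = null;
   }
   ```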



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/read/S3File.java:
##
@@ -214,7 +213,83 @@ void close(InputStream inputStream) {
   this.s3Objects.remove(inputStream);
 }
 
+if (numRemainingBytes <= this.context.getAsyncDrainThreshold()) {
+  // don't bother with async io.
+  drain(false, "close() operation", numRemainingBytes, obj, inputStream);
+} else {
+  LOG.debug("initiating asynchronous drain of {} bytes", 
numRemainingBytes);
+  // schedule an async drain/abort with references to the fields so they
+  // can be reused
+  client.submit(() -> drain(false, "close() operation", numRemainingBytes, 
obj, inputStream));
+}
+  }
+
+  /**
+   * drain the stream. This method is intended to be
+   * used directly or asynchronously, and measures the
+   * duration of the operation in the stream statistics.
+   *
+   * @param shouldAbort   force an abort; used if explicitly requested.
+   * @param reasonreason for stream being closed; used in messages
+   * @param remaining remaining bytes
+   * @param requestObject http request object;
+   * @param inputStream   stream to close.
+   * @return was the stream aborted?
+   */
+  private boolean drain(final boolean 

[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=779903=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-779903
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 12:02
Start Date: 09/Jun/22 12:02
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on code in PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#discussion_r893241456


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/read/S3File.java:
##
@@ -98,6 +107,7 @@ public S3File(
 this.streamStatistics = streamStatistics;
 this.changeTracker = changeTracker;
 this.s3Objects = new IdentityHashMap();

Review Comment:
   while you are there, can you change to a simple <> here 



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static 
org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+/**
+ * Test the prefetching input stream, validates that the underlying 
S3CachingInputStream and
+ * S3InMemoryInputStream are working as expected.
+ */
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {
+
+  public ITestS3PrefetchingInputStream() {
+super(true);
+  }
+
+  private static final int _1K = 1024;
+  // Path for file which should have length > block size so 
S3CachingInputStream is used
+  private Path largeFile;
+  private FileSystem fs;
+  private int numBlocks;
+  private int blockSize;
+  private long largeFileSize;
+  // Size should be < block size so S3InMemoryInputStream is used
+  private static final int SMALL_FILE_SIZE = _1K * 16;
+
+
+  @Override
+  public Configuration createConfiguration() {
+Configuration conf = super.createConfiguration();
+S3ATestUtils.removeBaseAndBucketOverrides(conf, PREFETCH_ENABLED_KEY);
+conf.setBoolean(PREFETCH_ENABLED_KEY, true);
+return conf;
+  }
+
+
+  private void openFS() throws IOException {

Review Comment:
for better isolation between tests, we need that FS to *not* be cached. 
otherwise, if a test in the same JVM has already accessed this path, you may 
get its FS with its settings.
   
   use `FileSystem.createFileSystem()`, override `teardown()` and invoke 
`cleanupWithLogger(LOG, largeFileFS)` to close it if it is non null



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/read/S3File.java:
##
@@ -214,7 +213,83 @@ void close(InputStream inputStream) {
   this.s3Objects.remove(inputStream);
 }
 
+if (numRemainingBytes <= this.context.getAsyncDrainThreshold()) {
+  // don't bother with async io.
+  drain(false, "close() operation", numRemainingBytes, obj, inputStream);
+} else {
+  LOG.debug("initiating asynchronous drain of {} bytes", 
numRemainingBytes);
+  // schedule an async drain/abort with references to the fields so they
+  // can be reused
+  client.submit(() -> drain(false, "close() operation", numRemainingBytes, 
obj, inputStream));
+}
+  }
+
+  /**
+   * drain the stream. This method is intended to be
+   * used directly or asynchronously, and measures the
+   * 

[jira] [Created] (HADOOP-18283) Review s3a prefetching input stream retry code

2022-06-09 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18283:
---

 Summary: Review s3a prefetching input stream retry code
 Key: HADOOP-18283
 URL: https://issues.apache.org/jira/browse/HADOOP-18283
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Steve Loughran


Need to review S3A prefetching stream retry logic

* no attempt to retry on unrecoverable errors
* do try on recoverable ones
* no wrap of retry by retry.
* annotate classes with Retries annotations to aid the review.

a key concern has to be that transient failure of prefetch is recovered from; 
things like a deleted/shortened file must fail properly on the next read call
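
For illustration, a minimal sketch of the annotation pass intended here (the method 
names and the invoker field are hypothetical; @Retries.RetryTranslated / 
@Retries.OnceRaw are the existing org.apache.hadoop.fs.s3a.Retries markers):

```java
// hypothetical prefetch read path, annotated so the retry policy is visible to reviewers
@Retries.RetryTranslated
void fetchBlock(long offset, int size) throws IOException {
  // the Invoker supplies the single retry loop; the inner call must not retry again,
  // otherwise we end up with retry wrapped by retry
  invoker.retry("read block", uri.toString(), true,
      () -> readBlockOnce(offset, size));
}

@Retries.OnceRaw
private void readBlockOnce(long offset, int size) throws IOException {
  // one attempt against S3; unrecoverable failures (e.g. the file was deleted or
  // shortened under the reader) must surface to the caller rather than be retried here
}
```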



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18231) tests in ITestS3AInputStreamPerformance are failing

2022-06-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=779901=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-779901
 ]

ASF GitHub Bot logged work on HADOOP-18231:
---

Author: ASF GitHub Bot
Created on: 09/Jun/22 11:57
Start Date: 09/Jun/22 11:57
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#issuecomment-1151029766

   made some comments. looked at the EOF swallowing code...i think we need to 
be happy there that all is good in terms of handling recoverable failures but 
not retrying on nonrecoverable ones. 




Issue Time Tracking
---

Worklog Id: (was: 779901)
Time Spent: 3h  (was: 2h 50m)

> tests in ITestS3AInputStreamPerformance are failing 
> 
>
> Key: HADOOP-18231
> URL: https://issues.apache.org/jira/browse/HADOOP-18231
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects stream to be opened 4 times (once for 
> every random read), but prefetching will only open twice. 
> testDecompressionSequential128K - expects stream to be opened once, but 
> prefetching will open once for each block the file has. landsat file used in 
> the test has size 42MB, prefetching block size = 8MB, expected open count is 
> 6.
>  testReadWithNormalPolicy - same as above. 
> testRandomIONormalPolicy - executes random IO, but with a normal policy. 
> S3AInputStream will abort the stream and change the policy, prefetching 
> handles random IO by caching blocks so doesn't do any of that. 
> testRandomReadOverBuffer - multiple assertions failing here, also depends a 
> lot on readAhead values, not very relevant for prefetching



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4386: HADOOP-18231. Fixes failing tests & drain stream async.

2022-06-09 Thread GitBox


steveloughran commented on PR #4386:
URL: https://github.com/apache/hadoop/pull/4386#issuecomment-1151029766

   Made some comments. Looked at the EOF-swallowing code... I think we need to be 
confident there that recoverable failures are handled but non-recoverable ones 
are not retried. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18028) High performance S3A input stream with prefetching & caching

2022-06-09 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552157#comment-17552157
 ] 

Steve Loughran commented on HADOOP-18028:
-

I think we might want to review the error handling code, especially what happens 
if a file is shortened while it is open.

> High performance S3A input stream with prefetching & caching
> 
>
> Key: HADOOP-18028
> URL: https://issues.apache.org/jira/browse/HADOOP-18028
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Bhalchandra Pandit
>Assignee: Bhalchandra Pandit
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 13h 50m
>  Remaining Estimate: 0h
>
> I work for Pinterest. I developed a technique for vastly improving read 
> throughput when reading from the S3 file system. It not only helps the 
> sequential read case (like reading a SequenceFile) but also significantly 
> improves read throughput of a random access case (like reading Parquet). This 
> technique has been very useful in significantly improving efficiency of the 
> data processing jobs at Pinterest. 
>  
> I would like to contribute that feature to Apache Hadoop. More details on 
> this technique are available in this blog I wrote recently:
> [https://medium.com/pinterest-engineering/improving-efficiency-and-reducing-runtime-using-s3-read-optimization-b31da4b60fa0]
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] apete commented on pull request #4343: Update ojAlgo to latest version (v51.3.0)

2022-06-09 Thread GitBox


apete commented on PR #4343:
URL: https://github.com/apache/hadoop/pull/4343#issuecomment-1150973185

   > You need to create a jira as well, add that in the title of the PR & most 
   > probably run the tests in all the affected modules. Check the doc: 
   > https://cwiki.apache.org/confluence/display/hadoop/how+to+contribute
   > 
   > let me know if you face any issue, will try to help
   
   I'm not interested in doing all those things. I've pointed out that hadoop 
depends on a very old version of ojAlgo, and I've shown that upgrading to the 
latest version works without any code changes (code compiles and no tests 
fail). Also I know that v51.3.0 is much improved in all aspects – speed, 
resource consumption, numerical stability... (and there are no transitive 
dependencies).
   
   It's up to you, or any other hadoop contributor, if you want this update or 
not.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18282) Add .asf.yaml to hadoop-thirdparty

2022-06-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HADOOP-18282.
---
Fix Version/s: thirdparty-1.2.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Add .asf.yaml to hadoop-thirdparty
> --
>
> Key: HADOOP-18282
> URL: https://issues.apache.org/jira/browse/HADOOP-18282
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: thirdparty-1.2.0
>
>
> there is no .asf.yaml file in hadoop-thirdparty, so mails for everything are going to common-dev.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


