[GitHub] [hadoop] virajjasani opened a new pull request #3081: YARN-10809. Missing dependency causing NoClassDefFoundError in TestHBaseTimelineStorageUtils

2021-06-07 Thread GitBox


virajjasani opened a new pull request #3081:
URL: https://github.com/apache/hadoop/pull/3081


   





[GitHub] [hadoop] virajjasani commented on a change in pull request #3073: HDFS-16054. Replace Guava Lists usage by Hadoop's own Lists in hadoop-hdfs-project

2021-06-07 Thread GitBox


virajjasani commented on a change in pull request #3073:
URL: https://github.com/apache/hadoop/pull/3073#discussion_r647129153



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
##
@@ -32,7 +32,7 @@
 import org.apache.hadoop.thirdparty.com.google.common.cache.CacheBuilder;
 import org.apache.hadoop.thirdparty.com.google.common.cache.CacheLoader;
 import org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache;
-import org.apache.hadoop.thirdparty.com.google.common.collect.Lists;
+import org.apache.hadoop.util.Lists;

Review comment:
   I see, sure let me make the changes. Thanks







[jira] [Work logged] (HADOOP-17742) DistCp: distcp fail when renaming within ftp filesystem

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17742?focusedWorklogId=608266&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608266
 ]

ASF GitHub Bot logged work on HADOOP-17742:
---

Author: ASF GitHub Bot
Created on: 08/Jun/21 05:30
Start Date: 08/Jun/21 05:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3071:
URL: https://github.com/apache/hadoop/pull/3071#issuecomment-856454682


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 28s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  17m  1s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m  0s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3071/2/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 83 unchanged - 3 fixed = 85 total (was 
86)  |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 52s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  40m  3s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 241m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3071/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3071 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux f1246607cf7d 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / dbce5a20beb5c9160c44570b59ed6cfbf40772ec |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3071: HADOOP-17742. fix distcp fail when copying to ftp filesystem

2021-06-07 Thread GitBox


hadoop-yetus commented on pull request #3071:
URL: https://github.com/apache/hadoop/pull/3071#issuecomment-856454682


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 28s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  17m  1s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m  0s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3071/2/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 83 unchanged - 3 fixed = 85 total (was 
86)  |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 52s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  40m  3s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 241m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3071/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3071 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux f1246607cf7d 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / dbce5a20beb5c9160c44570b59ed6cfbf40772ec |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3071/2/testReport/ |
   | Max. process+thread count | 1817 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-tools/hadoop-distcp U: . |
   | Console output | 

[GitHub] [hadoop] tasanuma commented on a change in pull request #3073: HDFS-16054. Replace Guava Lists usage by Hadoop's own Lists in hadoop-hdfs-project

2021-06-07 Thread GitBox


tasanuma commented on a change in pull request #3073:
URL: https://github.com/apache/hadoop/pull/3073#discussion_r647114167



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
##
@@ -32,7 +32,7 @@
 import org.apache.hadoop.thirdparty.com.google.common.cache.CacheBuilder;
 import org.apache.hadoop.thirdparty.com.google.common.cache.CacheLoader;
 import org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache;
-import org.apache.hadoop.thirdparty.com.google.common.collect.Lists;
+import org.apache.hadoop.util.Lists;

Review comment:
   @virajjasani Could you please arrange the import statements a little 
more alphabetically? Although I don't think it needs to be perfect, it doesn't 
look good to have `import org.apache.hadoop.util` between `import 
org.apache.hadoop.thirdparty`s.
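
   For illustration only, the kind of grouping being asked for might look like the sketch below; the neighbouring imports are hypothetical placeholders, not the actual contents of `PBHelperClient.java`, and the final hunk in the PR may differ. The idea is that the new `org.apache.hadoop.util.Lists` import moves out of the shaded `org.apache.hadoop.thirdparty` block and sits with the other non-shaded `org.apache.hadoop` imports.

   ```java
   // Hypothetical import layout -- placeholder neighbours, not the real file contents.
   import org.apache.hadoop.thirdparty.com.google.common.cache.CacheBuilder;
   import org.apache.hadoop.thirdparty.com.google.common.cache.CacheLoader;
   import org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache;
   // ...remaining org.apache.hadoop.thirdparty.* imports stay grouped here...

   import org.apache.hadoop.util.Lists;  // moved out of the thirdparty block
   // ...other non-shaded org.apache.hadoop.* imports...
   ```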







[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=608231&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608231
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 08/Jun/21 03:50
Start Date: 08/Jun/21 03:50
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856416652


   Thanks a lot for your reviews and suggestions @goiri , @jojochuang 




Issue Time Tracking
---

Worklog Id: (was: 608231)
Time Spent: 12h 50m  (was: 12h 40m)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: build-log-centos-7.zip, build-log-centos-8.zip, 
> build-log-ubuntu-x64.zip
>
>  Time Spent: 12h 50m
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a clean 
> way to manage the packages, since maintaining them for all the different 
> environments becomes cumbersome.






[GitHub] [hadoop] GauthamBanasandra commented on pull request #3043: HADOOP-17727. Modularize docker images

2021-06-07 Thread GitBox


GauthamBanasandra commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856416652


   Thanks a lot for your reviews and suggestions @goiri , @jojochuang 





[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=608226&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608226
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 08/Jun/21 03:11
Start Date: 08/Jun/21 03:11
Worklog Time Spent: 10m 
  Work Description: jojochuang merged pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043


   




Issue Time Tracking
---

Worklog Id: (was: 608226)
Time Spent: 12h 40m  (was: 12.5h)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-log-centos-7.zip, build-log-centos-8.zip, 
> build-log-ubuntu-x64.zip
>
>  Time Spent: 12h 40m
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a clean 
> way to manage the packages, since maintaining them for all the different 
> environments becomes cumbersome.






[jira] [Resolved] (HADOOP-17727) Modularize docker images

2021-06-07 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-17727.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Thanks [~gautham]

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: build-log-centos-7.zip, build-log-centos-8.zip, 
> build-log-ubuntu-x64.zip
>
>  Time Spent: 12h 40m
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a clean 
> way to manage the packages, since maintaining them for all the different 
> environments becomes cumbersome.






[GitHub] [hadoop] jojochuang merged pull request #3043: HADOOP-17727. Modularize docker images

2021-06-07 Thread GitBox


jojochuang merged pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043


   





[jira] [Work logged] (HADOOP-17028) ViewFS should initialize target filesystems lazily

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17028?focusedWorklogId=608219&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608219
 ]

ASF GitHub Bot logged work on HADOOP-17028:
---

Author: ASF GitHub Bot
Created on: 08/Jun/21 02:11
Start Date: 08/Jun/21 02:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-856383553


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 52s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  2s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 58s |  |  root: The patch generated 
0 new + 154 unchanged - 5 fixed = 154 total (was 159)  |
   | +1 :green_heart: |  mvnsite  |   2m 56s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 51s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 322m 10s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 534m 13s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus 
|
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2260 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux d4f6e49ceb75 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2260: HADOOP-17028. ViewFS should initialize mounted target filesystems lazily

2021-06-07 Thread GitBox


hadoop-yetus commented on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-856383553


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 52s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  2s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 58s |  |  root: The patch generated 
0 new + 154 unchanged - 5 fixed = 154 total (was 159)  |
   | +1 :green_heart: |  mvnsite  |   2m 56s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 51s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 322m 10s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 534m 13s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus 
|
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2260 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux d4f6e49ceb75 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2f8e9a6b0df557e63ec30f65c171bb0f765f5bc4 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=608193&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608193
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 08/Jun/21 01:17
Start Date: 08/Jun/21 01:17
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856366490


   @jojochuang could you please approve this PR if it looks alright? I've fixed 
the issue with create-release as well.




Issue Time Tracking
---

Worklog Id: (was: 608193)
Time Spent: 12.5h  (was: 12h 20m)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-log-centos-7.zip, build-log-centos-8.zip, 
> build-log-ubuntu-x64.zip
>
>  Time Spent: 12.5h
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a clean 
> way to manage the packages, since maintaining them for all the different 
> environments becomes cumbersome.






[GitHub] [hadoop] GauthamBanasandra commented on pull request #3043: HADOOP-17727. Modularize docker images

2021-06-07 Thread GitBox


GauthamBanasandra commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856366490


   @jojochuang could you please approve this PR if it looks alright? I've fixed 
the issue with create-release as well.





[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-06-07 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17358861#comment-17358861
 ] 

Szilard Nemeth commented on HADOOP-15327:
-

Let me list the differences introduced because of the migration from Netty 3.x 
to 4.x.
 There is a migration guide that mentions most (but not all) of the changes: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html]
 Please note that the below code changes are based on Wei-Chiu's branch: 
[https://github.com/jojochuang/hadoop/commits/shuffle_handler_netty4]
h2. CHANGES IN ShuffleHandler
h3. *I will list the changes mostly from ShuffleHandler, as it covers almost all 
types of changes in the other classes as well.*

*In TestShuffleHandler, the test code was changed according to the same justifications 
listed below.*
h3. Change category #1: General API changes / non-configuration getters:

Details: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#general-api-changes]
{quote}Non-configuration getters have no get- prefix anymore. (e.g. 
Channel.getRemoteAddress() → Channel.remoteAddress())
 Boolean properties are still prefixed with is- to avoid confusion (e.g. 
'empty' is both an adjective and a verb, so empty() can have two meanings.)
{quote}
I'm just listing all the changes without additional context (i.e. which method 
they occur in), separated by three dots, as they are simply method renamings:
{code:java}
-future.getChannel().close();
+future.channel().closeFuture().awaitUninterruptibly();
...
...
-  ChannelPipeline pipeline = future.getChannel().getPipeline();
+  ChannelPipeline pipeline = future.channel().pipeline();
...
...
-port = ((InetSocketAddress)ch.getLocalAddress()).getPort();
+port = ((InetSocketAddress)ch.localAddress()).getPort();
...
...
-  if (e.getState() == IdleState.WRITER_IDLE && enabledTimeout) {
-e.getChannel().close();
+  if (e.state() == IdleState.WRITER_IDLE && enabledTimeout) {
+ctx.channel().close();
...
...
-  accepted.add(evt.getChannel());
+  accepted.add(ctx.channel());
...
...
-new QueryStringDecoder(request.getUri()).getParameters();
+new QueryStringDecoder(request.getUri()).parameters(); //getUri was 
not changed, see this later
...
...
-  Channel ch = evt.getChannel();
-  ChannelPipeline pipeline = ch.getPipeline();
+  Channel ch = ctx.channel();
+  ChannelPipeline pipeline = ch.pipeline();
...
...
-  reduceContext.getCtx().getChannel(),
+  reduceContext.getCtx().channel(),
...
...
-  if (ch.getPipeline().get(SslHandler.class) == null) {
+  if (ch.pipeline().get(SslHandler.class) == null) {
...
...
-  Channel ch = evt.getChannel();
-  ChannelPipeline pipeline = ch.getPipeline();
+  Channel ch = ctx.channel();
+  ChannelPipeline pipeline = ch.pipeline();
...
...
-  
ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE);
+  ctx.channel().write(response).addListener(ChannelFutureListener.CLOSE);
...
...
-  Channel ch = e.getChannel();
-  Throwable cause = e.getCause();
+  Channel ch = ctx.channel();
{code}
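As a side note on the {{parameters()}} call above, here is a minimal, self-contained sketch of the Netty 4 API shape (the URI and parameter names below are illustrative placeholders, not necessarily what ShuffleHandler actually parses):
{code:java}
import java.util.List;
import java.util.Map;

import io.netty.handler.codec.http.QueryStringDecoder;

public class QueryStringDecoderSketch {
  public static void main(String[] args) {
    // In Netty 4, parameters() (rather than getParameters()) returns the decoded
    // query string as a Map from parameter name to the list of values for that name.
    QueryStringDecoder decoder =
        new QueryStringDecoder("/mapOutput?job=job_1&reduce=0&map=attempt_1&map=attempt_2");
    Map<String, List<String>> params = decoder.parameters();
    List<String> maps = params.get("map");        // [attempt_1, attempt_2]
    String reduce = params.get("reduce").get(0);  // "0"
    System.out.println("maps=" + maps + " reduce=" + reduce);
  }
}
{code}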
h3. Change category #2: General API changes / Method signature changes.

*2.1: SimpleChannelUpstreamHandler was renamed to ChannelInboundHandlerAdapter.*
 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#upstream--inbound-downstream--outbound]
{quote}The terms 'upstream' and 'downstream' were pretty confusing to 
beginners. 4.0 uses 'inbound' and 'outbound' wherever possible.
{quote}
{code:java}
-  class Shuffle extends SimpleChannelUpstreamHandler {
+  @ChannelHandler.Sharable
+  class Shuffle extends ChannelInboundHandlerAdapter {
{code}
*2.2: Simplified channel state model: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#simplified-channel-state-model]*
{quote}channelOpen, channelBound, and channelConnected have been merged to 
channelActive. channelDisconnected, channelUnbound, and channelClosed have been 
merged to channelInactive. Likewise, Channel.isBound() and isConnected() have 
been merged to isActive().
{quote}
*2.2.1 Changes in class: Shuffle*
{code:java}
 @Override
-public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent evt) 
+public void channelActive(ChannelHandlerContext ctx)
 throws Exception {
-  super.channelOpen(ctx, evt);
+  super.channelActive(ctx);
{code}
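To make 2.1 and 2.2 concrete, here is a minimal, self-contained Netty 4 handler sketch (the class name and log lines are illustrative, not code from the ShuffleHandler patch):
{code:java}
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Netty 4 style: extend ChannelInboundHandlerAdapter (instead of the old
// SimpleChannelUpstreamHandler) and use the merged channelActive/channelInactive
// callbacks in place of channelOpen/channelConnected/channelClosed.
@ChannelHandler.Sharable
public class SketchInboundHandler extends ChannelInboundHandlerAdapter {

  @Override
  public void channelActive(ChannelHandlerContext ctx) throws Exception {
    // The channel now comes from the handler context, not from an event object.
    System.out.println("channel active: " + ctx.channel().remoteAddress());
    super.channelActive(ctx);
  }

  @Override
  public void channelInactive(ChannelHandlerContext ctx) throws Exception {
    System.out.println("channel inactive: " + ctx.channel());
    super.channelInactive(ctx);
  }
}
{code}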
*2.2.2 Changes in 
org.apache.hadoop.mapred.ShuffleHandler.Shuffle#exceptionCaught:* 
 Quoting the change again:
{quote}channelOpen, channelBound, and channelConnected have been merged to 
channelActive. channelDisconnected, channelUnbound, and channelClosed have been 
merged to channelInactive. Likewise, Channel.isBound() and isConnected() have 
been merged to isActive().
{quote}
{code:java}
   LOG.error("Shuffle error: ", cause);
-  if (ch.isConnected()) {
-LOG.error("Shuffle error " + e);
+   

[jira] [Issue Comment Deleted] (HADOOP-11219) [Umbrella] Upgrade to netty 4

2021-06-07 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated HADOOP-11219:

Comment: was deleted


[jira] [Comment Edited] (HADOOP-11219) [Umbrella] Upgrade to netty 4

2021-06-07 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17358850#comment-17358850
 ] 

Szilard Nemeth edited comment on HADOOP-11219 at 6/7/21, 8:40 PM:
--


[jira] [Comment Edited] (HADOOP-11219) [Umbrella] Upgrade to netty 4

2021-06-07 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17358850#comment-17358850
 ] 

Szilard Nemeth edited comment on HADOOP-11219 at 6/7/21, 8:38 PM:
--


[jira] [Comment Edited] (HADOOP-11219) [Umbrella] Upgrade to netty 4

2021-06-07 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17358850#comment-17358850
 ] 

Szilard Nemeth edited comment on HADOOP-11219 at 6/7/21, 8:36 PM:
--


[jira] [Comment Edited] (HADOOP-11219) [Umbrella] Upgrade to netty 4

2021-06-07 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17358850#comment-17358850
 ] 

Szilard Nemeth edited comment on HADOOP-11219 at 6/7/21, 8:29 PM:
--

Let me list the differences introduced because of the migration from Netty 3.x 
to 4.x.
 There is a migration guide that mentions most (but not all) of the changes: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html]
 Please note that the below code changes are based on Wei-Chiu's branch: 
[https://github.com/jojochuang/hadoop/commits/shuffle_handler_netty4]
h2. CHANGES IN ShuffleHandler
h3. *I will list the changes mostly from ShuffleHandler as it covers almost all 
types of changes in other classes as well.*

*In TestShuffleHandler, the test code was changed for one or more of the reasons listed below.*
h3. Change category #1: General API changes / non-configuration getters:

Details: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#general-api-changes]
{quote}Non-configuration getters have no get- prefix anymore. (e.g. 
Channel.getRemoteAddress() → Channel.remoteAddress())
 Boolean properties are still prefixed with is- to avoid confusion (e.g. 
'empty' is both an adjective and a verb, so empty() can have two meanings.)
{quote}
I'm just listing all the changes without additional context (in which method 
they were changed) separated by three dots, as they are simply method renamings:
{code:java}
-future.getChannel().close();
+future.channel().closeFuture().awaitUninterruptibly();
...
...
-  ChannelPipeline pipeline = future.getChannel().getPipeline();
+  ChannelPipeline pipeline = future.channel().pipeline();
...
...
-port = ((InetSocketAddress)ch.getLocalAddress()).getPort();
+port = ((InetSocketAddress)ch.localAddress()).getPort();
...
...
-  if (e.getState() == IdleState.WRITER_IDLE && enabledTimeout) {
-e.getChannel().close();
+  if (e.state() == IdleState.WRITER_IDLE && enabledTimeout) {
+ctx.channel().close();
...
...
-  accepted.add(evt.getChannel());
+  accepted.add(ctx.channel());
...
...
-new QueryStringDecoder(request.getUri()).getParameters();
+new QueryStringDecoder(request.getUri()).parameters(); //getUri was 
not changed, see this later
...
...
-  Channel ch = evt.getChannel();
-  ChannelPipeline pipeline = ch.getPipeline();
+  Channel ch = ctx.channel();
+  ChannelPipeline pipeline = ch.pipeline();
...
...
-  reduceContext.getCtx().getChannel(),
+  reduceContext.getCtx().channel(),
...
...
-  if (ch.getPipeline().get(SslHandler.class) == null) {
+  if (ch.pipeline().get(SslHandler.class) == null) {
...
...
-  Channel ch = evt.getChannel();
-  ChannelPipeline pipeline = ch.getPipeline();
+  Channel ch = ctx.channel();
+  ChannelPipeline pipeline = ch.pipeline();
...
...
-  ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE);
+  ctx.channel().write(response).addListener(ChannelFutureListener.CLOSE);
...
...
-  Channel ch = e.getChannel();
-  Throwable cause = e.getCause();
+  Channel ch = ctx.channel();
{code}
h3. Change category #2: General API changes / Method signature changes.

*2.1: SimpleChannelUpstreamHandler was renamed to ChannelInboundHandlerAdapter.*
 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#upstream--inbound-downstream--outbound]
{quote}The terms 'upstream' and 'downstream' were pretty confusing to 
beginners. 4.0 uses 'inbound' and 'outbound' wherever possible.
{quote}
{code:java}
-  class Shuffle extends SimpleChannelUpstreamHandler {
+  @ChannelHandler.Sharable
+  class Shuffle extends ChannelInboundHandlerAdapter {
{code}
*2.2: Simplified channel state model: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#simplified-channel-state-model]*
{quote}channelOpen, channelBound, and channelConnected have been merged to 
channelActive. channelDisconnected, channelUnbound, and channelClosed have been 
merged to channelInactive. Likewise, Channel.isBound() and isConnected() have 
been merged to isActive().
{quote}
*2.2.1 Changes in class: Shuffle*
{code:java}
 @Override
-public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent evt) 
+public void channelActive(ChannelHandlerContext ctx)
 throws Exception {
-  super.channelOpen(ctx, evt);
+  super.channelActive(ctx);
{code}
*2.2.2 Changes in 
org.apache.hadoop.mapred.ShuffleHandler.Shuffle#exceptionCaught:* 
 Quoting the change again:
{quote}channelOpen, channelBound, and channelConnected have been merged to 
channelActive. channelDisconnected, channelUnbound, and channelClosed have been 
merged to channelInactive. Likewise, Channel.isBound() and isConnected() have 
been merged to isActive().
{quote}
{code:java}
   LOG.error("Shuffle error: ", cause);
-  if 

[jira] [Comment Edited] (HADOOP-11219) [Umbrella] Upgrade to netty 4

2021-06-07 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358850#comment-17358850
 ] 

Szilard Nemeth edited comment on HADOOP-11219 at 6/7/21, 8:28 PM:
--

Let me list the differences introduced because of the migration from Netty 3.x 
to 4.x.
 There is a migration guide that mentions most (but not all) of the changes: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html]
 Please note that the below code changes are based on Wei-Chiu's branch: 
[https://github.com/jojochuang/hadoop/commits/shuffle_handler_netty4]
h2. CHANGES IN ShuffleHandler
h3. *I will list the changes mostly from ShuffleHandler as it covers almost all 
types of changes in other classes as well.*

*In TestShuffleHandler, the test code was changed for one or more of the reasons listed below.*
h3. Change category #1: General API changes / non-configuration getters:

Details: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#general-api-changes]
{quote}Non-configuration getters have no get- prefix anymore. (e.g. 
Channel.getRemoteAddress() → Channel.remoteAddress())
 Boolean properties are still prefixed with is- to avoid confusion (e.g. 
'empty' is both an adjective and a verb, so empty() can have two meanings.)
{quote}
I'm just listing all the changes without additional context (in which method 
they were changed) separated by three dots, as they are simply method renamings:
{code:java}
-future.getChannel().close();
+future.channel().closeFuture().awaitUninterruptibly();
...
...
-  ChannelPipeline pipeline = future.getChannel().getPipeline();
+  ChannelPipeline pipeline = future.channel().pipeline();
...
...
-port = ((InetSocketAddress)ch.getLocalAddress()).getPort();
+port = ((InetSocketAddress)ch.localAddress()).getPort();
...
...
-  if (e.getState() == IdleState.WRITER_IDLE && enabledTimeout) {
-e.getChannel().close();
+  if (e.state() == IdleState.WRITER_IDLE && enabledTimeout) {
+ctx.channel().close();
...
...
-  accepted.add(evt.getChannel());
+  accepted.add(ctx.channel());
...
...
-new QueryStringDecoder(request.getUri()).getParameters();
+new QueryStringDecoder(request.getUri()).parameters(); //getUri was 
not changed, see this later
...
...
-  Channel ch = evt.getChannel();
-  ChannelPipeline pipeline = ch.getPipeline();
+  Channel ch = ctx.channel();
+  ChannelPipeline pipeline = ch.pipeline();
...
...
-  reduceContext.getCtx().getChannel(),
+  reduceContext.getCtx().channel(),
...
...
-  if (ch.getPipeline().get(SslHandler.class) == null) {
+  if (ch.pipeline().get(SslHandler.class) == null) {
...
...
-  Channel ch = evt.getChannel();
-  ChannelPipeline pipeline = ch.getPipeline();
+  Channel ch = ctx.channel();
+  ChannelPipeline pipeline = ch.pipeline();
...
...
-  ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE);
+  ctx.channel().write(response).addListener(ChannelFutureListener.CLOSE);
...
...
-  Channel ch = e.getChannel();
-  Throwable cause = e.getCause();
+  Channel ch = ctx.channel();
{code}
h3. Change category #2: General API changes / Method signature changes.

*2.1: SimpleChannelUpstreamHandler was renamed to ChannelInboundHandlerAdapter.*
 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#upstream--inbound-downstream--outbound]
{quote}The terms 'upstream' and 'downstream' were pretty confusing to 
beginners. 4.0 uses 'inbound' and 'outbound' wherever possible.
{quote}
{code:java}
-  class Shuffle extends SimpleChannelUpstreamHandler {
+  @ChannelHandler.Sharable
+  class Shuffle extends ChannelInboundHandlerAdapter {
{code}
*2.2: Simplified channel state model: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#simplified-channel-state-model]*
{quote}channelOpen, channelBound, and channelConnected have been merged to 
channelActive. channelDisconnected, channelUnbound, and channelClosed have been 
merged to channelInactive. Likewise, Channel.isBound() and isConnected() have 
been merged to isActive().
{quote}
*2.2.1 Changes in class: Shuffle*
{code:java}
 @Override
-public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent evt) 
+public void channelActive(ChannelHandlerContext ctx)
 throws Exception {
-  super.channelOpen(ctx, evt);
+  super.channelActive(ctx);
{code}
*2.2.2 Changes in 
org.apache.hadoop.mapred.ShuffleHandler.Shuffle#exceptionCaught:* 
 Quoting the change again:
{quote}channelOpen, channelBound, and channelConnected have been merged to 
channelActive. channelDisconnected, channelUnbound, and channelClosed have been 
merged to channelInactive. Likewise, Channel.isBound() and isConnected() have 
been merged to isActive().
{quote}
{code:java}
   LOG.error("Shuffle error: ", cause);
-  if 

[jira] [Comment Edited] (HADOOP-11219) [Umbrella] Upgrade to netty 4

2021-06-07 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358850#comment-17358850
 ] 

Szilard Nemeth edited comment on HADOOP-11219 at 6/7/21, 8:27 PM:
--

Let me list the differences introduced because of the migration from Netty 3.x 
to 4.x.
 There is a migration guide that mentions most (but not all) of the changes: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html]
 Please note that the below code changes are based on Wei-Chiu's branch: 
[https://github.com/jojochuang/hadoop/commits/shuffle_handler_netty4]
h2. CHANGES IN ShuffleHandler
h3. *I will list the changes mostly from ShuffleHandler as it covers almost all 
types of changes in other classes as well.*

*In TestShuffleHandler, the test code was changed for one or more of the reasons listed below.*

Change category #1: General API changes / non-configuration getters:
 Details: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#general-api-changes]
{quote}Non-configuration getters have no get- prefix anymore. (e.g. 
Channel.getRemoteAddress() → Channel.remoteAddress())
 Boolean properties are still prefixed with is- to avoid confusion (e.g. 
'empty' is both an adjective and a verb, so empty() can have two meanings.)
{quote}
I'm just listing all the changes without additional context (in which method 
they were changed) separated by three dots, as they are simply method renamings:
{code:java}
-future.getChannel().close();
+future.channel().closeFuture().awaitUninterruptibly();
...
...
-  ChannelPipeline pipeline = future.getChannel().getPipeline();
+  ChannelPipeline pipeline = future.channel().pipeline();
...
...
-port = ((InetSocketAddress)ch.getLocalAddress()).getPort();
+port = ((InetSocketAddress)ch.localAddress()).getPort();
...
...
-  if (e.getState() == IdleState.WRITER_IDLE && enabledTimeout) {
-e.getChannel().close();
+  if (e.state() == IdleState.WRITER_IDLE && enabledTimeout) {
+ctx.channel().close();
...
...
-  accepted.add(evt.getChannel());
+  accepted.add(ctx.channel());
...
...
-new QueryStringDecoder(request.getUri()).getParameters();
+new QueryStringDecoder(request.getUri()).parameters(); //getUri was 
not changed, see this later
...
...
-  Channel ch = evt.getChannel();
-  ChannelPipeline pipeline = ch.getPipeline();
+  Channel ch = ctx.channel();
+  ChannelPipeline pipeline = ch.pipeline();
...
...
-  reduceContext.getCtx().getChannel(),
+  reduceContext.getCtx().channel(),
...
...
-  if (ch.getPipeline().get(SslHandler.class) == null) {
+  if (ch.pipeline().get(SslHandler.class) == null) {
...
...
-  Channel ch = evt.getChannel();
-  ChannelPipeline pipeline = ch.getPipeline();
+  Channel ch = ctx.channel();
+  ChannelPipeline pipeline = ch.pipeline();
...
...
-  ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE);
+  ctx.channel().write(response).addListener(ChannelFutureListener.CLOSE);
...
...
-  Channel ch = e.getChannel();
-  Throwable cause = e.getCause();
+  Channel ch = ctx.channel();
{code}
h3. Change category #2: General API changes / Method signature changes.

*2.1: SimpleChannelUpstreamHandler was renamed to ChannelInboundHandlerAdapter.*
 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#upstream--inbound-downstream--outbound]
{quote}The terms 'upstream' and 'downstream' were pretty confusing to 
beginners. 4.0 uses 'inbound' and 'outbound' wherever possible.
{quote}
{code:java}
-  class Shuffle extends SimpleChannelUpstreamHandler {
+  @ChannelHandler.Sharable
+  class Shuffle extends ChannelInboundHandlerAdapter {
{code}
*2.2: Simplified channel state model: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#simplified-channel-state-model]*
{quote}channelOpen, channelBound, and channelConnected have been merged to 
channelActive. channelDisconnected, channelUnbound, and channelClosed have been 
merged to channelInactive. Likewise, Channel.isBound() and isConnected() have 
been merged to isActive().
{quote}
*2.2.1 Changes in class: Shuffle*
{code:java}
 @Override
-public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent evt) 
+public void channelActive(ChannelHandlerContext ctx)
 throws Exception {
-  super.channelOpen(ctx, evt);
+  super.channelActive(ctx);
{code}
*2.2.2 Changes in 
org.apache.hadoop.mapred.ShuffleHandler.Shuffle#exceptionCaught:* 
 Quoting the change again:
{quote}channelOpen, channelBound, and channelConnected have been merged to 
channelActive. channelDisconnected, channelUnbound, and channelClosed have been 
merged to channelInactive. Likewise, Channel.isBound() and isConnected() have 
been merged to isActive().
{quote}
{code:java}
   LOG.error("Shuffle error: ", cause);
-  if (ch.isConnected()) {

[jira] [Commented] (HADOOP-11219) [Umbrella] Upgrade to netty 4

2021-06-07 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358850#comment-17358850
 ] 

Szilard Nemeth commented on HADOOP-11219:
-

Let me list the differences introduced because of the migration from Netty 3.x 
to 4.x.
 There is a migration guide that mentions most (but not all) of the changes: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html]
 Please note that the below code changes are based on Wei-Chiu's branch: 
[https://github.com/jojochuang/hadoop/commits/shuffle_handler_netty4]
h2. CHANGES IN ShuffleHandler
h3. *I will list the changes mostly from ShuffleHandler as it covers almost all 
types of changes in other classes as well.*
 *In TestShuffleHandler, the test code was changed for one or more of the reasons listed below.*

Change category #1: General API changes / non-configuration getters:
Details: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#general-api-changes]
{quote}Non-configuration getters have no get- prefix anymore. (e.g. 
Channel.getRemoteAddress() → Channel.remoteAddress())
 Boolean properties are still prefixed with is- to avoid confusion (e.g. 
'empty' is both an adjective and a verb, so empty() can have two meanings.)
{quote}
I'm just listing all the changes without additional context (in which method 
they were changed) separated by three dots, as they are simply method renamings:
{code:java}
-future.getChannel().close();
+future.channel().closeFuture().awaitUninterruptibly();
...
...
-  ChannelPipeline pipeline = future.getChannel().getPipeline();
+  ChannelPipeline pipeline = future.channel().pipeline();
...
...
-port = ((InetSocketAddress)ch.getLocalAddress()).getPort();
+port = ((InetSocketAddress)ch.localAddress()).getPort();
...
...
-  if (e.getState() == IdleState.WRITER_IDLE && enabledTimeout) {
-e.getChannel().close();
+  if (e.state() == IdleState.WRITER_IDLE && enabledTimeout) {
+ctx.channel().close();
...
...
-  accepted.add(evt.getChannel());
+  accepted.add(ctx.channel());
...
...
-new QueryStringDecoder(request.getUri()).getParameters();
+new QueryStringDecoder(request.getUri()).parameters(); //getUri was 
not changed, see this later
...
...
-  Channel ch = evt.getChannel();
-  ChannelPipeline pipeline = ch.getPipeline();
+  Channel ch = ctx.channel();
+  ChannelPipeline pipeline = ch.pipeline();
...
...
-  reduceContext.getCtx().getChannel(),
+  reduceContext.getCtx().channel(),
...
...
-  if (ch.getPipeline().get(SslHandler.class) == null) {
+  if (ch.pipeline().get(SslHandler.class) == null) {
...
...
-  Channel ch = evt.getChannel();
-  ChannelPipeline pipeline = ch.getPipeline();
+  Channel ch = ctx.channel();
+  ChannelPipeline pipeline = ch.pipeline();
...
...
-  ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE);
+  ctx.channel().write(response).addListener(ChannelFutureListener.CLOSE);
...
...
-  Channel ch = e.getChannel();
-  Throwable cause = e.getCause();
+  Channel ch = ctx.channel();
{code}
h3. Change category #2: General API changes / Method signature changes.

*2.1: SimpleChannelUpstreamHandler was renamed to ChannelInboundHandlerAdapter.*
 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#upstream--inbound-downstream--outbound]
{quote}The terms 'upstream' and 'downstream' were pretty confusing to 
beginners. 4.0 uses 'inbound' and 'outbound' wherever possible.
{quote}
{code:java}
-  class Shuffle extends SimpleChannelUpstreamHandler {
+  @ChannelHandler.Sharable
+  class Shuffle extends ChannelInboundHandlerAdapter {
{code}
*2.2: Simplified channel state model: 
[https://netty.io/wiki/new-and-noteworthy-in-4.0.html#simplified-channel-state-model]*
{quote}channelOpen, channelBound, and channelConnected have been merged to 
channelActive. channelDisconnected, channelUnbound, and channelClosed have been 
merged to channelInactive. Likewise, Channel.isBound() and isConnected() have 
been merged to isActive().
{quote}
*2.2.1 Changes in class: Shuffle*
{code:java}
 @Override
-public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent evt) 
+public void channelActive(ChannelHandlerContext ctx)
 throws Exception {
-  super.channelOpen(ctx, evt);
+  super.channelActive(ctx);
{code}
*2.2.2 Changes in 
org.apache.hadoop.mapred.ShuffleHandler.Shuffle#exceptionCaught:* 
 Quoting the change again:
{quote}channelOpen, channelBound, and channelConnected have been merged to 
channelActive. channelDisconnected, channelUnbound, and channelClosed have been 
merged to channelInactive. Likewise, Channel.isBound() and isConnected() have 
been merged to isActive().
{quote}
{code:java}
   LOG.error("Shuffle error: ", cause);
-  if (ch.isConnected()) {
-LOG.error("Shuffle error " + e);
+  

[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=608064=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608064
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 07/Jun/21 19:10
Start Date: 07/Jun/21 19:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856188550


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 54s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  hadolint  |   0m  8s |  |  No new issues.  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  pylint  |   0m  3s |  |  No new issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  3s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  asflicense  |   0m 29s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/34/artifact/out/results-asflicense.txt)
 |  The patch generated 3 ASF License warnings.  |
   |  |   |  71m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/34/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3043 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs 
hadolint pylint mvnsite unit markdownlint |
   | uname | Linux dc7f1d868fe0 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 263666dd7c100f3451f2e6d74b0ded80fb7ff2b7 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/34/testReport/ |
   | Max. process+thread count | 571 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/34/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 
hadolint=1.11.1-0-g0e692dd pylint=2.6.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 608064)
Time Spent: 12h 20m  (was: 12h 10m)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-log-centos-7.zip, build-log-centos-8.zip, 
> build-log-ubuntu-x64.zip
>
>  Time Spent: 12h 20m
>  Remaining Estimate: 0h
>
> 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3043: HADOOP-17727. Modularize docker images

2021-06-07 Thread GitBox


hadoop-yetus commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856188550


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 54s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  hadolint  |   0m  8s |  |  No new issues.  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  pylint  |   0m  3s |  |  No new issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  3s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  asflicense  |   0m 29s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/34/artifact/out/results-asflicense.txt)
 |  The patch generated 3 ASF License warnings.  |
   |  |   |  71m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/34/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3043 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs 
hadolint pylint mvnsite unit markdownlint |
   | uname | Linux dc7f1d868fe0 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 263666dd7c100f3451f2e6d74b0ded80fb7ff2b7 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/34/testReport/ |
   | Max. process+thread count | 571 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/34/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 
hadolint=1.11.1-0-g0e692dd pylint=2.6.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=608057=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608057
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 07/Jun/21 18:57
Start Date: 07/Jun/21 18:57
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856181150


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 47s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  hadolint  |   0m  8s |  |  No new issues.  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  pylint  |   0m  2s |  |  No new issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  2s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  asflicense  |   0m 29s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/33/artifact/out/results-asflicense.txt)
 |  The patch generated 3 ASF License warnings.  |
   |  |   |  70m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/33/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3043 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs 
hadolint pylint mvnsite unit markdownlint |
   | uname | Linux da25bdfc3805 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c65107cae104addce89532acae77bcc0c92e0e39 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/33/testReport/ |
   | Max. process+thread count | 594 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/33/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 
hadolint=1.11.1-0-g0e692dd pylint=2.6.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 608057)
Time Spent: 12h 10m  (was: 12h)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-log-centos-7.zip, build-log-centos-8.zip, 
> build-log-ubuntu-x64.zip
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3043: HADOOP-17727. Modularize docker images

2021-06-07 Thread GitBox


hadoop-yetus commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856181150


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 47s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  hadolint  |   0m  8s |  |  No new issues.  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  pylint  |   0m  2s |  |  No new issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  2s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  asflicense  |   0m 29s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/33/artifact/out/results-asflicense.txt)
 |  The patch generated 3 ASF License warnings.  |
   |  |   |  70m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/33/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3043 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs 
hadolint pylint mvnsite unit markdownlint |
   | uname | Linux da25bdfc3805 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c65107cae104addce89532acae77bcc0c92e0e39 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/33/testReport/ |
   | Max. process+thread count | 594 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/33/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 
hadolint=1.11.1-0-g0e692dd pylint=2.6.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl commented on pull request #3078: HDFS-16055. Quota is not preserved in snapshot INode

2021-06-07 Thread GitBox


smengcl commented on pull request #3078:
URL: https://github.com/apache/hadoop/pull/3078#issuecomment-856160999


   Looks like some UT failures are related to the change. Will look into the 
intended behavior in a bit.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=608024=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608024
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 07/Jun/21 18:00
Start Date: 07/Jun/21 18:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856146349


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/34/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 608024)
Time Spent: 12h  (was: 11h 50m)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-log-centos-7.zip, build-log-centos-8.zip, 
> build-log-ubuntu-x64.zip
>
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a way 
> to manage the packages in a clean way as maintaining the packages for all the 
> different environments becomes cumbersome.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3043: HADOOP-17727. Modularize docker images

2021-06-07 Thread GitBox


hadoop-yetus commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856146349


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/34/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=608016=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608016
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 07/Jun/21 17:50
Start Date: 07/Jun/21 17:50
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856140435


   @jojochuang thanks for the review.
   >Guess the script needs to add an additional mount volume?
   
   We can't use a mount volume here since the `pkg-resolver` directory is 
needed at the time of `docker build`. A mount volume is useful only when we do a 
`docker run`.
   
   >I don't mind to fix that in a separate jira.
   
   I've fixed it in this PR by providing the build context path during docker 
build. Please see my changes in the `dev-support/bin/create-release` file.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 608016)
Time Spent: 11h 50m  (was: 11h 40m)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-log-centos-7.zip, build-log-centos-8.zip, 
> build-log-ubuntu-x64.zip
>
>  Time Spent: 11h 50m
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a way 
> to manage the packages in a clean way as maintaining the packages for all the 
> different environments becomes cumbersome.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] GauthamBanasandra commented on pull request #3043: HADOOP-17727. Modularize docker images

2021-06-07 Thread GitBox


GauthamBanasandra commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856140435


   @jojochuang thanks for the review.
   >Guess the script needs to add an additional mount volume?
   
   We can't use a mount volume here since the `pkg-resolver` directory is 
needed at the time of `docker build`. A mount volume is useful only when we do a 
`docker run`.
   
   >I don't mind to fix that in a separate jira.
   
   I've fixed it in this PR by providing the build context path during docker 
build. Please see my changes in the `dev-support/bin/create-release` file.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=608013=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608013
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 07/Jun/21 17:48
Start Date: 07/Jun/21 17:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856138132


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/33/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 608013)
Time Spent: 11h 40m  (was: 11.5h)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-log-centos-7.zip, build-log-centos-8.zip, 
> build-log-ubuntu-x64.zip
>
>  Time Spent: 11h 40m
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a way 
> to manage the packages in a clean way as maintaining the packages for all the 
> different environments becomes cumbersome.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3043: HADOOP-17727. Modularize docker images

2021-06-07 Thread GitBox


hadoop-yetus commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-856138132


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/33/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17028) ViewFS should initialize target filesystems lazily

2021-06-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358592#comment-17358592
 ] 

Steve Loughran commented on HADOOP-17028:
-

bq. It looks like Steve Loughran deprecated this class later after his comment. 
Don't know what prompted the refactoring, but
{{org.apache.hadoop.fs.impl.FunctionsRaisingIOE.FunctionRaisingIOE}} should be 
now {{org.apache.hadoop.util.functional.FunctionRaisingIOE}}

I moved it because it turns out that wrapping/unwrapping IOEs is critical to 
using this in applications built on the FS API, and since the relevant 
methods/interfaces were not public, the only way to do that was to make them 
public. 

Accordingly I
# replicated the functional interfaces in the public/unstable package 
{{org.apache.hadoop.util.functional}} where I'm trying to make it possible to 
use IOE-raising stuff (including RemoteIterator) in apps.
# tagged the old ones as Deprecated, so new code will use the new ones.


bq. Moving interfaces, which are public by default, from one package to another 
is considered an incompatible change, especially since the previous variant had 
been released

Two points to note
# I tagged the original package "fs.impl" as not public back then, didn't I?
# left the old interfaces alone, on the basis that if filesystems outside the 
hadoop codebase (gcs?) were using them, all would still be good.

{code}
@InterfaceAudience.LimitedPrivate("Filesystems")
@InterfaceStability.Unstable
{code}

therefore, I do not consider this to be an incompatible change since
# it wasn't public, outside filesystems 
# it hasn't been removed, just deprecated.

bq.  I prefer to avoid using FunctionRaisingIOE in this patch if possible

Use the o.a.h.fs.impl ones for compatibility across Hadoop 3.3+

For older releases, no, it's not there. Not sure what to do there.
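
As a minimal sketch of the pattern (illustrative only, not the exact Hadoop source): an IOE-raising function type lets lambdas over FS calls keep their checked IOExceptions instead of wrapping them in RuntimeExceptions, which is why the interfaces had to become public for applications.
{code:java}
import java.io.IOException;

// Illustrative sketch only; the real interface is FunctionRaisingIOE in
// org.apache.hadoop.util.functional, with the fs.impl variant now deprecated.
@FunctionalInterface
interface FunctionRaisingIOE<T, R> {
  R apply(T t) throws IOException;
}

final class Example {
  static <T, R> R applyOnce(FunctionRaisingIOE<T, R> fn, T arg) throws IOException {
    return fn.apply(arg); // the IOException propagates unchanged, no wrapping needed
  }
}
{code}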


> ViewFS should initialize target filesystems lazily
> --
>
> Key: HADOOP-17028
> URL: https://issues.apache.org/jira/browse/HADOOP-17028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: client-mounts, fs, viewfs
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Abhishek Das
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Currently viewFS initialize all configured target filesystems when 
> viewfs#init itself.
> Some target file system initialization involve creating heavy objects and 
> proxy connections. Ex: DistributedFileSystem#initialize will create DFSClient 
> object which will create proxy connections to NN etc.
> For example: if ViewFS configured with 10 target fs with hdfs uri and 2 
> targets with s3a.
> If one of the client only work with s3a target, But ViewFS will initialize 
> all targets irrespective of what clients interested to work with. That means, 
> here client will create 10 DFS initializations and 2 s3a initializations. Its 
> unnecessary to have DFS initialization here. So, it will be a good idea to 
> initialize the target fs only when first time usage call come to particular 
> target fs scheme. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15901) IPC Client and Server should use Time.monotonicNow() for elapsed times.

2021-06-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358580#comment-17358580
 ] 

Hadoop QA commented on HADOOP-15901:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red}{color} | {color:red} HADOOP-15901 does not apply to trunk. Rebase 
required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15901 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946889/HADOOP-15901-01.patch 
|
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/195/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> IPC Client and Server should use Time.monotonicNow() for elapsed times.
> ---
>
> Key: HADOOP-15901
> URL: https://issues.apache.org/jira/browse/HADOOP-15901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, metrics
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HADOOP-15901-01.patch
>
>
> Client.java and Server.java use {{Time.now()}} to calculate the elapsed 
> times/timeouts. This could produce incorrect results when the system clock's 
> time changes.
> {{Time.monotonicNow()}} should be used for elapsed time calculations within 
> same JVM.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17463) Replace currentTimeMillis with monotonicNow in elapsed time

2021-06-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358581#comment-17358581
 ] 

Steve Loughran commented on HADOOP-17463:
-

CPU cores have changed since the previous discussion, and on recent Intel x86 
parts the TSC clock will be consistent across all cores in the same socket, so the 
risk of going backwards on rescheduling is zero for a single-socket device.
But: this absolutely doesn't hold for multisocket servers, and I don't know 
about AMD or ARM parts.

I do think we need some form of monotonic clock for our wait loops, or at least 
those where an NTP update could trigger a false failure. VM suspend/resume, 
well, that's where currentTimeMillis() is better than CPU counters: the virtual 
CPU clock may not have increased, but relative to the outside world, time has 
passed.
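
As an illustrative sketch of the fix being argued for (not an actual patch; the class and method names here are invented), the wait loop shown in the issue description below would read elapsed time from {{Time.monotonicNow()}} so that an NTP step or manual clock change cannot stretch or shrink the timeout:
{code:java}
import org.apache.hadoop.util.Time;

// Hypothetical example; only Time.monotonicNow() is an existing Hadoop API here.
class WaitLoopExample {
  static void waitFor(long timeoutMs) throws InterruptedException {
    final long start = Time.monotonicNow(); // monotonic milliseconds, not wall-clock time
    while (Time.monotonicNow() - start < timeoutMs) {
      // do the periodic work; the elapsed-time check ignores wall-clock adjustments
      Thread.sleep(10);
    }
  }
}
{code}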

> Replace currentTimeMillis with monotonicNow in elapsed time
> ---
>
> Key: HADOOP-17463
> URL: https://issues.apache.org/jira/browse/HADOOP-17463
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> I noticed that there is a widespread incorrect usage of 
> {{System.currentTimeMillis()}}  throughout the hadoop code.
> For example:
> {code:java}
> // Some comments here
> long start = System.currentTimeMillis();
> while (System.currentTimeMillis() - start < timeout) {
>   // Do something
> }
> {code}
> Elapsed time should be measured using `monotonicNow()`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17749) Remove lock contention in SelectorPool of SocketIOWithTimeout

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17749?focusedWorklogId=607804=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-607804
 ]

ASF GitHub Bot logged work on HADOOP-17749:
---

Author: ASF GitHub Bot
Created on: 07/Jun/21 11:53
Start Date: 07/Jun/21 11:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3080:
URL: https://github.com/apache/hadoop/pull/3080#issuecomment-855860956


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  20m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  18m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  6s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/2/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 6 new + 1 
unchanged - 7 fixed = 7 total (was 8)  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  3s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 178m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3080 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux a918fffcfe9a 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e02750e1347afcf553f6e4e4c4b25c2940bda5cc |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3080: HADOOP-17749. Remove lock contention in SelectorPool of SocketIOWithTimeout

2021-06-07 Thread GitBox


hadoop-yetus commented on pull request #3080:
URL: https://github.com/apache/hadoop/pull/3080#issuecomment-855860956


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  20m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  18m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  6s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/2/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 6 new + 1 
unchanged - 7 fixed = 7 total (was 8)  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  3s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 178m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3080 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux a918fffcfe9a 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e02750e1347afcf553f6e4e4c4b25c2940bda5cc |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/2/testReport/ |
   | Max. process+thread count | 1942 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This 

[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=607752=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-607752
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 07/Jun/21 10:15
Start Date: 07/Jun/21 10:15
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-855800902


   That being said, given that Inigo gave a +1 and we went through several 
iterations, I don't mind fixing that in a separate jira.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 607752)
Time Spent: 11.5h  (was: 11h 20m)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-log-centos-7.zip, build-log-centos-8.zip, 
> build-log-ubuntu-x64.zip
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a way 
> to manage the packages in a clean way as maintaining the packages for all the 
> different environments becomes cumbersome.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on a change in pull request #3062: HDFS-16048. RBF: Print network topology on the router web

2021-06-07 Thread GitBox


tomscut commented on a change in pull request #3062:
URL: https://github.com/apache/hadoop/pull/3062#discussion_r646369832



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterNetworkTopologyServlet.java
##
@@ -0,0 +1,210 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.server.federation.StateStoreDFSCluster;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver;
+import org.apache.hadoop.io.IOUtils;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.io.ByteArrayOutputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.Iterator;
+import java.util.Map;
+
+import static 
org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HTTP_ENABLE;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+public class TestRouterNetworkTopologyServlet {
+
+  private static StateStoreDFSCluster clusterWithDatanodes;
+  private static StateStoreDFSCluster clusterNoDatanodes;
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+// Builder configuration
+Configuration routerConf =
+new RouterConfigBuilder().stateStore().admin().quota().rpc().build();
+routerConf.set(DFS_ROUTER_HTTP_ENABLE, "true");
+Configuration hdfsConf = new Configuration(false);
+
+// Build and start a federated cluster
+clusterWithDatanodes = new StateStoreDFSCluster(false, 2,
+MultipleDestinationMountTableResolver.class);
+clusterWithDatanodes.addNamenodeOverrides(hdfsConf);
+clusterWithDatanodes.addRouterOverrides(routerConf);
+clusterWithDatanodes.setNumDatanodesPerNameservice(9);
+clusterWithDatanodes.setIndependentDNs();
+clusterWithDatanodes.setRacks(
+new String[] {"/rack1", "/rack1", "/rack1", "/rack2", "/rack2",
+"/rack2", "/rack3", "/rack3", "/rack3", "/rack4", "/rack4",
+"/rack4", "/rack5", "/rack5", "/rack5", "/rack6", "/rack6",
+"/rack6"});
+clusterWithDatanodes.startCluster();
+clusterWithDatanodes.startRouters();
+clusterWithDatanodes.waitClusterUp();
+clusterWithDatanodes.waitActiveNamespaces();
+
+// Build and start a federated cluster
+clusterNoDatanodes = new StateStoreDFSCluster(false, 2,
+MultipleDestinationMountTableResolver.class);
+clusterNoDatanodes.addNamenodeOverrides(hdfsConf);
+clusterNoDatanodes.addRouterOverrides(routerConf);
+clusterNoDatanodes.setNumDatanodesPerNameservice(0);
+clusterNoDatanodes.setIndependentDNs();
+clusterNoDatanodes.startCluster();
+clusterNoDatanodes.startRouters();
+clusterNoDatanodes.waitClusterUp();
+clusterNoDatanodes.waitActiveNamespaces();
+  }
+
+  @Test
+  public void testPrintTopologyTextFormat() throws Exception {
+// get http Address
+String httpAddress = clusterWithDatanodes.getRandomRouter().getRouter()
+.getHttpServerAddress().toString();
+
+// send http request
+URL url = new URL("http:/" + httpAddress + "/topology");
+HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+conn.setReadTimeout(2);
+conn.setConnectTimeout(2);
+conn.connect();
+
+ByteArrayOutputStream out = new ByteArrayOutputStream();
+IOUtils.copyBytes(conn.getInputStream(), out, 4096, true);
+StringBuilder sb =
+new StringBuilder("-- Network Topology -- \n");
+sb.append(out);
+sb.append("\n-- Network Topology -- ");
+String topology = sb.toString();
+
+// assert rack info
+assertTrue(topology.contains("/ns0/rack1"));
+assertTrue(topology.contains("/ns0/rack2"));
+assertTrue(topology.contains("/ns0/rack3"));
+assertTrue(topology.contains("/ns1/rack4"));
+assertTrue(topology.contains("/ns1/rack5"));
+

[jira] [Work logged] (HADOOP-17749) Remove lock contention in SelectorPool of SocketIOWithTimeout

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17749?focusedWorklogId=607702=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-607702
 ]

ASF GitHub Bot logged work on HADOOP-17749:
---

Author: ASF GitHub Bot
Created on: 07/Jun/21 08:04
Start Date: 07/Jun/21 08:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3080:
URL: https://github.com/apache/hadoop/pull/3080#issuecomment-855699739


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  20m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  18m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  7s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 6 new + 1 
unchanged - 7 fixed = 7 total (was 8)  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 36s | 
[/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html)
 |  hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | -1 :x: |  shadedclient  |  15m 28s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  18m 18s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 180m  2s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-common-project/hadoop-common |
   |  |  Redundant nullcheck of selInfo, which is known to be non-null in 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SelectableChannel)  
Redundant null check at SocketIOWithTimeout.java:is known to be non-null in 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SelectableChannel)  
Redundant null check at SocketIOWithTimeout.java:[line 394] |
   | Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
   |   | hadoop.ipc.TestMiniRPCBenchmark |
   |   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3080: HADOOP-17749. Remove lock contention in SelectorPool of SocketIOWithTimeout

2021-06-07 Thread GitBox


hadoop-yetus commented on pull request #3080:
URL: https://github.com/apache/hadoop/pull/3080#issuecomment-855699739


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  20m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  18m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  7s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 6 new + 1 
unchanged - 7 fixed = 7 total (was 8)  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 36s | 
[/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html)
 |  hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | -1 :x: |  shadedclient  |  15m 28s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  18m 18s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 180m  2s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-common-project/hadoop-common |
   |  |  Redundant nullcheck of selInfo, which is known to be non-null in 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SelectableChannel)  
Redundant null check at SocketIOWithTimeout.java:is known to be non-null in 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SelectableChannel)  
Redundant null check at SocketIOWithTimeout.java:[line 394] |
   | Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
   |   | hadoop.ipc.TestMiniRPCBenchmark |
   |   | hadoop.ipc.TestProtoBufRpc |
   |   | hadoop.security.TestDoAsEffectiveUser |
   |   | hadoop.ipc.TestRPCCallBenchmark |
   |   | hadoop.ipc.metrics.TestRpcMetrics |
   |   | hadoop.ipc.TestIPC |
   |   | hadoop.ipc.TestIPCServerResponder |
   |   | hadoop.ipc.TestRPC |
   |   | hadoop.ipc.TestMultipleProtocolServer |
   |   | hadoop.ipc.TestSaslRPC |
   |   | hadoop.ipc.TestServer |
   |   | 

[GitHub] [hadoop] GauthamBanasandra closed pull request #3077: [Do not commit] Modularize docker images - debug

2021-06-07 Thread GitBox


GauthamBanasandra closed pull request #3077:
URL: https://github.com/apache/hadoop/pull/3077


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=607666=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-607666
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 07/Jun/21 07:14
Start Date: 07/Jun/21 07:14
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-855659981


   @jojochuang  the unit tests get stuck when I add the json files to the RAT-check 
exclusions in pom.xml (I don't know why, but I've confirmed this multiple times). 
Hence, I've removed those exclusions, and the CI run completed as posted in the 
YETUS comment above.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 607666)
Time Spent: 11h 20m  (was: 11h 10m)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-log-centos-7.zip, build-log-centos-8.zip, 
> build-log-ubuntu-x64.zip
>
>  Time Spent: 11h 20m
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a way 
> to manage the packages in a clean way as maintaining the packages for all the 
> different environments becomes cumbersome.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-06-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=607650=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-607650
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 07/Jun/21 06:47
Start Date: 07/Jun/21 06:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#issuecomment-855639841


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  3s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 56s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  hadolint  |   0m  8s |  |  No new issues.  |
   | +1 :green_heart: |  mvnsite  |   0m  0s |  |  the patch passed  |
   | +1 :green_heart: |  pylint  |   0m  2s |  |  No new issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  asflicense  |   0m 36s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/32/artifact/out/results-asflicense.txt)
 |  The patch generated 3 ASF License warnings.  |
   |  |   |  74m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/32/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3043 |
   | Optional Tests | dupname asflicense codespell hadolint shellcheck 
shelldocs pylint mvnsite unit markdownlint |
   | uname | Linux 34ba9d850594 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f6433cc0a41761903241760fbde5798f0a6dcf5c |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/32/testReport/ |
   | Max. process+thread count | 633 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/32/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 
hadolint=1.11.1-0-g0e692dd pylint=2.6.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 607650)
Time Spent: 11h 10m  (was: 11h)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-log-centos-7.zip, build-log-centos-8.zip, 
> build-log-ubuntu-x64.zip
>
>  Time Spent: 11h 10m
>  Remaining Estimate: 0h
>
> 

[GitHub] [hadoop] tasanuma commented on pull request #3079: HDFS-16050. Some dynamometer tests fail.

2021-06-07 Thread GitBox


tasanuma commented on pull request #3079:
URL: https://github.com/apache/hadoop/pull/3079#issuecomment-855620544


   Dynamometer has been included since Hadoop 3.3.0, which already uses Mockito 2.x, 
so I only backported this to branch-3.3.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #3079: HDFS-16050. Some dynamometer tests fail.

2021-06-07 Thread GitBox


aajisaka commented on pull request #3079:
URL: https://github.com/apache/hadoop/pull/3079#issuecomment-855611258


   @tasanuma 
   Thank you for your review!
   I suppose some lower branches still use Mockito 1.x. Please check before 
backporting.
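
   For context, one source-level difference that often makes such a backport fail to 
compile against Mockito 1.x (an illustrative sketch, not code from this PR):

   ```java
   // Mockito 2.x (as used on trunk / branch-3.3):
   import static org.mockito.ArgumentMatchers.any;
   import org.mockito.junit.MockitoJUnitRunner;

   // Mockito 1.x branches only provide the older locations:
   // import static org.mockito.Matchers.any;
   // import org.mockito.runners.MockitoJUnitRunner;
   ```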


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org