[GitHub] [hadoop] hfutatzhanghb commented on pull request #5735: HDFS-17044. Process reported block toInvalidate logic should set the block size to NO_ACK

2023-06-12 Thread via GitHub


hfutatzhanghb commented on PR #5735:
URL: https://github.com/apache/hadoop/pull/5735#issuecomment-1588580362

   > @hfutatzhanghb Hi bro, thanks for your comment. Yeah, the modification is similar to what BlockManager#removeBlock does. If the block is not included in the BlocksMap, the DataNode should not need to notify the NameNode after it processes the invalidated blocks.
   > 
   > I will add a UT to illustrate this case.
   
   @haiyang1987 Thanks bro for replying. Got it~





[GitHub] [hadoop] haiyang1987 commented on pull request #5735: HDFS-17044. Process reported block toInvalidate logic should set the block size to NO_ACK

2023-06-12 Thread via GitHub


haiyang1987 commented on PR #5735:
URL: https://github.com/apache/hadoop/pull/5735#issuecomment-1588553157

   @hfutatzhanghb Hi bro, thanks for your comment.
   Yeah, the modification is similar to what BlockManager#removeBlock does.
   If the block is not included in the BlocksMap, the DataNode should not need to notify the NameNode after it processes the invalidated blocks.
   
   I will add a UT to illustrate this case.
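
   For reference, a minimal sketch of the NO_ACK pattern this refers to (the same pattern BlockManager#removeBlock uses); the helper class below is illustrative only and is not part of this patch:
   
   ```java
   import org.apache.hadoop.hdfs.protocol.Block;
   import org.apache.hadoop.hdfs.server.protocol.BlockCommand;
   
   /**
    * Illustrative sketch only: mark a block with NO_ACK before it is queued for
    * invalidation. BlockManager#removeBlock does this before addToInvalidates();
    * the surrounding BlockManager plumbing is elided here.
    */
   public final class NoAckSketch {
     public static void markNoAck(Block block) {
       // NO_ACK is a sentinel "size": when the DataNode later deletes this
       // replica it does not report the deletion back to the NameNode, which is
       // safe because the block is no longer tracked in the BlocksMap.
       block.setNumBytes(BlockCommand.NO_ACK);
     }
   }
   ```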





[GitHub] [hadoop] slfan1989 commented on pull request #5722: YARN-11504. [Federation] YARN Federation Supports Non-HA mode.

2023-06-12 Thread via GitHub


slfan1989 commented on PR #5722:
URL: https://github.com/apache/hadoop/pull/5722#issuecomment-1588525689

   @goiri Thank you very much for your help in reviewing the code!





[GitHub] [hadoop] slfan1989 commented on pull request #5676: YARN-6648. BackPort [GPG] Add SubClusterCleaner in Global Policy Generator.

2023-06-12 Thread via GitHub


slfan1989 commented on PR #5676:
URL: https://github.com/apache/hadoop/pull/5676#issuecomment-1588525875

   @goiri Thank you very much for your help in reviewing the code!





[GitHub] [hadoop] slfan1989 commented on pull request #5732: YARN-8898. [Addendum] Improve NodeManager#TestFederationInterceptor Setup Code

2023-06-12 Thread via GitHub


slfan1989 commented on PR #5732:
URL: https://github.com/apache/hadoop/pull/5732#issuecomment-1588525514

   @goiri Thank you very much for your help in reviewing the code!





[GitHub] [hadoop] ayushtkn merged pull request #5696: HDFS-16946. Fix getTopTokenRealOwners to return String

2023-06-12 Thread via GitHub


ayushtkn merged PR #5696:
URL: https://github.com/apache/hadoop/pull/5696





[GitHub] [hadoop] zhtttylz commented on a diff in pull request #5734: HDFS-17043. HttpFS implementation for getAllErasureCodingPolicies

2023-06-12 Thread via GitHub


zhtttylz commented on code in PR #5734:
URL: https://github.com/apache/hadoop/pull/5734#discussion_r1227487775


##
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java:
##
@@ -1773,6 +1775,17 @@ public FsStatus getStatus(final Path path) throws IOException {
     return JsonUtilClient.toFsStatus(json);
   }
 
+  public Collection<ErasureCodingPolicyInfo> getAllErasureCodingPolicies()
+      throws IOException {
+    Map<String, String> params = new HashMap<>();
+    params.put(OP_PARAM, Operation.GETECPOLICIES.toString());
+    HttpURLConnection conn =
+        getConnection(Operation.GETECPOLICIES.getMethod(), params, new Path(getUri()

Review Comment:
   Thanks for your suggestion, I'll make the required code changes promptly
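
   For context, a hedged usage sketch of the HDFS-native call whose behaviour the new HttpFS GETECPOLICIES operation is meant to mirror; the hdfs:// URI is a placeholder and nothing below is taken from this PR:
   
   ```java
   import java.net.URI;
   import java.util.Collection;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.hdfs.DistributedFileSystem;
   import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo;
   
   // Sketch only: list all erasure coding policies through the HDFS client API.
   public class ListEcPoliciesSketch {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf)) {
         Collection<ErasureCodingPolicyInfo> policies =
             ((DistributedFileSystem) fs).getAllErasureCodingPolicies();
         for (ErasureCodingPolicyInfo info : policies) {
           System.out.println(info.getPolicy().getName() + " : " + info.getState());
         }
       }
     }
   }
   ```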






[GitHub] [hadoop] zhtttylz commented on a diff in pull request #4868: HDFS-16763. MoverTool: Make valid for the number of mover threads per DN.

2023-06-12 Thread via GitHub


zhtttylz commented on code in PR #4868:
URL: https://github.com/apache/hadoop/pull/4868#discussion_r1227444307


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java:
##
@@ -804,6 +804,9 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final long    DFS_MOVER_MOVEDWINWIDTH_DEFAULT = 5400*1000L;
   public static final String  DFS_MOVER_MOVERTHREADS_KEY = "dfs.mover.moverThreads";
   public static final int     DFS_MOVER_MOVERTHREADS_DEFAULT = 1000;
+  public static final String  DFS_DATANODE_MOVER_MAX_NUM_CONCURRENT_MOVES_KEY =
+      "dfs.datanode.mover.max.concurrent.moves";
+  public static final int     DFS_DATANODE_MOVER_MAX_NUM_CONCURRENT_MOVES_DEFAULT = 10;

Review Comment:
   Should the default value of "**dfs.datanode.mover.max.concurrent.moves**" be 
consistent with "**dfs.datanode.balance.max.concurrent.moves**" at 100?
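
   For illustration only, this is how a deployment could override the proposed key; the key name is taken from the hunk above, and the value 100 simply mirrors the suggestion to align with dfs.datanode.balance.max.concurrent.moves (the final default is exactly the open question here):
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   
   // Sketch only, not part of the patch: overriding the proposed key.
   public class MoverMovesConfigSketch {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       conf.setInt("dfs.datanode.mover.max.concurrent.moves", 100);
       // Falls back to the patch's proposed default (10) when the key is unset.
       System.out.println(conf.getInt("dfs.datanode.mover.max.concurrent.moves", 10));
     }
   }
   ```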






[GitHub] [hadoop] hadoop-yetus commented on pull request #5723: HDFS-17041. RBF: Fix putAll impl for mysql and file based state stores

2023-06-12 Thread via GitHub


hadoop-yetus commented on PR #5723:
URL: https://github.com/apache/hadoop/pull/5723#issuecomment-1588284790

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m 46s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 120m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5723/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5723 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 27ded291480b 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bce9d5c647c17c3950050834b4ee4fdad25fe671 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5723/3/testReport/ |
   | Max. process+thread count | 2754 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5723/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on pull request #5723: HDFS-17041. RBF: Fix putAll impl for mysql and file based state stores

2023-06-12 Thread via GitHub


hadoop-yetus commented on PR #5723:
URL: https://github.com/apache/hadoop/pull/5723#issuecomment-1588284604

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m 51s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 120m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5723/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5723 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 49c67b1e7a78 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bce9d5c647c17c3950050834b4ee4fdad25fe671 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5723/2/testReport/ |
   | Max. process+thread count | 2779 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5723/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-18763) Upgrade aws-java-sdk to 1.12.367+

2023-06-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731811#comment-17731811
 ] 

ASF GitHub Bot commented on HADOOP-18763:
-

virajjasani commented on PR #5741:
URL: https://github.com/apache/hadoop/pull/5741#issuecomment-1588254555

   netty version: `4.1.86.Final`
   https://github.com/aws/aws-sdk-java/blob/1.12.367/pom.xml#L409-L410




> Upgrade aws-java-sdk to 1.12.367+
> -
>
> Key: HADOOP-18763
> URL: https://issues.apache.org/jira/browse/HADOOP-18763
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> aws sdk bundle < 1.12.367 uses a vulnerable version of netty which is
> pulling in a high-severity CVE and creating unhappiness in security scans, even
> if s3a doesn't use that lib.
> The safe version for netty is netty:4.1.86.Final and this is used by
> aws-java-sdk:1.12.367+






[GitHub] [hadoop] virajjasani commented on pull request #5741: HADOOP-18763. Upgrade aws-java-sdk to 1.12.367

2023-06-12 Thread via GitHub


virajjasani commented on PR #5741:
URL: https://github.com/apache/hadoop/pull/5741#issuecomment-1588254555

   netty version: `4.1.86.Final`
   https://github.com/aws/aws-sdk-java/blob/1.12.367/pom.xml#L409-L410





[GitHub] [hadoop] hadoop-yetus commented on pull request #5738: HDFS-17045. File renamed from a snapshottable dir to a non-snapshottable dir cannot be deleted.

2023-06-12 Thread via GitHub


hadoop-yetus commented on PR #5738:
URL: https://github.com/apache/hadoop/pull/5738#issuecomment-1588254122

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   4m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  29m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 257m 12s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5738/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 379m 26s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   |   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
   |   | hadoop.hdfs.server.blockmanagement.TestErasureCodingCorruption |
   |   | hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId |
   |   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
   |   | hadoop.hdfs.TestDistributedFileSystemWithECFile |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5738/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5738 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c1f30dd264c9 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 271233aa328dd6194f8e79a25068d2913fc5ffe4 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | 

[jira] [Commented] (HADOOP-18763) Upgrade aws-java-sdk to 1.12.367+

2023-06-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731809#comment-17731809
 ] 

ASF GitHub Bot commented on HADOOP-18763:
-

virajjasani commented on PR #5741:
URL: https://github.com/apache/hadoop/pull/5741#issuecomment-1588247976

   `us-west-2`:
   
   two rounds of testing, results look good (details on Jira)
   
   ```
   $ mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch
   ```
   
   ```
   $ mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale
   ```
   




> Upgrade aws-java-sdk to 1.12.367+
> -
>
> Key: HADOOP-18763
> URL: https://issues.apache.org/jira/browse/HADOOP-18763
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> aws sdk bundle < 1.12.367 uses a vulnerable version of netty which is
> pulling in a high-severity CVE and creating unhappiness in security scans, even
> if s3a doesn't use that lib.
> The safe version for netty is netty:4.1.86.Final and this is used by
> aws-java-sdk:1.12.367+






[GitHub] [hadoop] virajjasani commented on pull request #5741: HADOOP-18763. Upgrade aws-java-sdk to 1.12.367

2023-06-12 Thread via GitHub


virajjasani commented on PR #5741:
URL: https://github.com/apache/hadoop/pull/5741#issuecomment-1588247976

   `us-west-2`:
   
   two rounds of testing, results look good (details on Jira)
   
   ```
   $ mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch
   ```
   
   ```
   $ mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale
   ```
   





[jira] [Commented] (HADOOP-18763) Upgrade aws-java-sdk to 1.12.367+

2023-06-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731808#comment-17731808
 ] 

ASF GitHub Bot commented on HADOOP-18763:
-

virajjasani opened a new pull request, #5741:
URL: https://github.com/apache/hadoop/pull/5741

   Jira: HADOOP-18763




> Upgrade aws-java-sdk to 1.12.367+
> -
>
> Key: HADOOP-18763
> URL: https://issues.apache.org/jira/browse/HADOOP-18763
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>
> aws sdk bundle < 1.12.367 uses a vulnerable version of netty which is
> pulling in a high-severity CVE and creating unhappiness in security scans, even
> if s3a doesn't use that lib.
> The safe version for netty is netty:4.1.86.Final and this is used by
> aws-java-sdk:1.12.367+






[jira] [Updated] (HADOOP-18763) Upgrade aws-java-sdk to 1.12.367+

2023-06-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18763:

Labels: pull-request-available  (was: )

> Upgrade aws-java-sdk to 1.12.367+
> -
>
> Key: HADOOP-18763
> URL: https://issues.apache.org/jira/browse/HADOOP-18763
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> aws sdk bundle < 1.12.367 uses a vulnerable version of netty which is
> pulling in a high-severity CVE and creating unhappiness in security scans, even
> if s3a doesn't use that lib.
> The safe version for netty is netty:4.1.86.Final and this is used by
> aws-java-sdk:1.12.367+






[GitHub] [hadoop] virajjasani opened a new pull request, #5741: HADOOP-18763. Upgrade aws-java-sdk to 1.12.367

2023-06-12 Thread via GitHub


virajjasani opened a new pull request, #5741:
URL: https://github.com/apache/hadoop/pull/5741

   Jira: HADOOP-18763





[jira] [Resolved] (HADOOP-18761) Remove mysql-connector-java

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-18761.
--
Fix Version/s: 3.4.0
   3.3.6
   Resolution: Fixed

> Remove mysql-connector-java
> ---
>
> Key: HADOOP-18761
> URL: https://issues.apache.org/jira/browse/HADOOP-18761
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.6
>
>
> While preparing for 3.3.6 RC, I realized the mysql-connector-java dependency 
> added by HADOOP-18535 is GPL licensed.
> Source: https://github.com/mysql/mysql-connector-j/blob/release/8.0/LICENSE 
> See legal discussion at LEGAL-423.
> I looked at the original jira and github PR and I don't think the license 
> issue was noticed. 
> Is it possible to get rid of the mysql connector dependency? As far as I can 
> tell the dependency is very limited.
> If not, I guess I'll have to revert the commits for now.






[jira] [Commented] (HADOOP-18761) Remove mysql-connector-java

2023-06-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731806#comment-17731806
 ] 

ASF GitHub Bot commented on HADOOP-18761:
-

jojochuang commented on PR #5731:
URL: https://github.com/apache/hadoop/pull/5731#issuecomment-1588196073

   Thanks @goiri I'll also cherry pick the commit into branch-3.3.




> Remove mysql-connector-java
> ---
>
> Key: HADOOP-18761
> URL: https://issues.apache.org/jira/browse/HADOOP-18761
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
>
> While preparing for 3.3.6 RC, I realized the mysql-connector-java dependency 
> added by HADOOP-18535 is GPL licensed.
> Source: https://github.com/mysql/mysql-connector-j/blob/release/8.0/LICENSE 
> See legal discussion at LEGAL-423.
> I looked at the original jira and github PR and I don't think the license 
> issue was noticed. 
> Is it possible to get rid of the mysql connector dependency? As far as I can 
> tell the dependency is very limited.
> If not, I guess I'll have to revert the commits for now.






[jira] [Commented] (HADOOP-18761) Remove mysql-connector-java

2023-06-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731805#comment-17731805
 ] 

ASF GitHub Bot commented on HADOOP-18761:
-

jojochuang merged PR #5731:
URL: https://github.com/apache/hadoop/pull/5731




> Remove mysql-connector-java
> ---
>
> Key: HADOOP-18761
> URL: https://issues.apache.org/jira/browse/HADOOP-18761
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
>
> While preparing for 3.3.6 RC, I realized the mysql-connector-java dependency 
> added by HADOOP-18535 is GPL licensed.
> Source: https://github.com/mysql/mysql-connector-j/blob/release/8.0/LICENSE 
> See legal discussion at LEGAL-423.
> I looked at the original jira and github PR and I don't think the license 
> issue was noticed. 
> Is it possible to get rid of the mysql connector dependency? As far as I can 
> tell the dependency is very limited.
> If not, I guess I'll have to revert the commits for now.






[GitHub] [hadoop] jojochuang commented on pull request #5731: HADOOP-18761. Remove mysql-connector-java

2023-06-12 Thread via GitHub


jojochuang commented on PR #5731:
URL: https://github.com/apache/hadoop/pull/5731#issuecomment-1588196073

   Thanks @goiri I'll also cherry pick the commit into branch-3.3.





[GitHub] [hadoop] jojochuang merged pull request #5731: HADOOP-18761. Remove mysql-connector-java

2023-06-12 Thread via GitHub


jojochuang merged PR #5731:
URL: https://github.com/apache/hadoop/pull/5731





[GitHub] [hadoop] goiri merged pull request #5722: YARN-11504. [Federation] YARN Federation Supports Non-HA mode.

2023-06-12 Thread via GitHub


goiri merged PR #5722:
URL: https://github.com/apache/hadoop/pull/5722





[GitHub] [hadoop] goiri commented on a diff in pull request #5733: YARN-11510. [Federation] Fix NodeManager#TestFederationInterceptor Flaky Unit Test.

2023-06-12 Thread via GitHub


goiri commented on code in PR #5733:
URL: https://github.com/apache/hadoop/pull/5733#discussion_r1227301509


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java:
##
@@ -882,6 +889,10 @@ public Object run() throws Exception {
 int numberOfContainers = 3;
 // Should re-attach secondaries and get the three running containers
 Assert.assertEquals(1, interceptor.getUnmanagedAMPoolSize());
+
+// Waiting for SC-1 to time out.
+Thread.sleep(800);

Review Comment:
   Do a GenericTestUtils#waitFor()



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java:
##
@@ -590,6 +593,10 @@ public Object run() throws Exception {
 interceptor.recover(recoveredDataMap);
 
 Assert.assertEquals(1, interceptor.getUnmanagedAMPoolSize());
+
+// Waiting for SC-1 to time out.
+Thread.sleep(800);

Review Comment:
   Can we just wait for it to be 1?
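
   A hedged sketch of what that suggestion could look like; `interceptor` stands in for the test's FederationInterceptor field, and the poll interval and timeout are illustrative rather than taken from the PR:
   
   ```java
   import java.util.concurrent.TimeoutException;
   import org.apache.hadoop.test.GenericTestUtils;
   
   // Sketch only: poll for the expected unmanaged AM pool size instead of
   // sleeping a fixed 800 ms.
   private void waitForUnmanagedAMPoolSize(final int expected)
       throws TimeoutException, InterruptedException {
     GenericTestUtils.waitFor(
         () -> interceptor.getUnmanagedAMPoolSize() == expected,
         100 /* check every 100 ms */,
         5000 /* give up after 5 seconds */);
   }
   ```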






[GitHub] [hadoop] goiri merged pull request #5732: YARN-8898. [Addendum] Improve NodeManager#TestFederationInterceptor Setup Code

2023-06-12 Thread via GitHub


goiri merged PR #5732:
URL: https://github.com/apache/hadoop/pull/5732





[GitHub] [hadoop] goiri merged pull request #5676: YARN-6648. BackPort [GPG] Add SubClusterCleaner in Global Policy Generator.

2023-06-12 Thread via GitHub


goiri merged PR #5676:
URL: https://github.com/apache/hadoop/pull/5676





[GitHub] [hadoop] goiri commented on pull request #5723: HDFS-17041. RBF: Fix putAll impl for mysql and file based state stores

2023-06-12 Thread via GitHub


goiri commented on PR #5723:
URL: https://github.com/apache/hadoop/pull/5723#issuecomment-1588181909

   Yes, it might be unrelated... let's rerun the PR just in case.





[GitHub] [hadoop] virajjasani commented on pull request #5723: HDFS-17041. RBF: Fix putAll impl for mysql and file based state stores

2023-06-12 Thread via GitHub


virajjasani commented on PR #5723:
URL: https://github.com/apache/hadoop/pull/5723#issuecomment-1588179705

   shall we re-run the build for this PR? but otherwise i have seen NPE for 
TestRouterRPCMultipleDestinationMountTableResolver quite a few times for sure.





[GitHub] [hadoop] virajjasani commented on pull request #5723: HDFS-17041. RBF: Fix putAll impl for mysql and file based state stores

2023-06-12 Thread via GitHub


virajjasani commented on PR #5723:
URL: https://github.com/apache/hadoop/pull/5723#issuecomment-1588178698

   same failures are present on trunk test results also, e.g. 
https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1253/testReport/junit/org.apache.hadoop.hdfs.server.federation.router/TestRouterRPCMultipleDestinationMountTableResolver/





[GitHub] [hadoop] goiri commented on pull request #5723: HDFS-17041. RBF: Fix putAll impl for mysql and file based state stores

2023-06-12 Thread via GitHub


goiri commented on PR #5723:
URL: https://github.com/apache/hadoop/pull/5723#issuecomment-1588173554

   The failed unit test looks related.





[jira] [Commented] (HADOOP-18716) [JDK-17] Failed unit tests , with Java 17 runtime and compiled Java 8

2023-06-12 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731795#comment-17731795
 ] 

Wei-Chiu Chuang commented on HADOOP-18716:
--

Please add a target version.

> [JDK-17] Failed unit tests , with Java 17 runtime and compiled Java 8
> -
>
> Key: HADOOP-18716
> URL: https://issues.apache.org/jira/browse/HADOOP-18716
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinay Devadiga
>Priority: Critical
>
> Compiled Hadoop - Hadoop branch 3.3.3
> mvn clean install -DskipTests=true
> JAVA_HOME -> points to Java 8
> Maven version - 3.8.8 (quite recent)
>  
> Ran the whole test suite on my private cloud environment -
> Changed JAVA_HOME to Java 17
>  
> mvn surefire:test
>  
> Out of 22k tests, 2.5k tests failed.






[jira] [Updated] (HADOOP-18716) [JDK-17] Failed unit tests , with Java 17 runtime and compiled Java 8

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-18716:
-
Fix Version/s: (was: 3.3.6)

> [JDK-17] Failed unit tests , with Java 17 runtime and compiled Java 8
> -
>
> Key: HADOOP-18716
> URL: https://issues.apache.org/jira/browse/HADOOP-18716
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinay Devadiga
>Priority: Critical
>
> Compiled Hadoop - Hadoop branch 3.3.3
> mvn clean install -DskipTests=true
> JAVA_HOME -> points to Java 8
> Maven version - 3.8.8 (quite recent)
>  
> Ran the whole test suite on my private cloud environment -
> Changed JAVA_HOME to Java 17
>  
> mvn surefire:test
>  
> Out of 22k tests, 2.5k tests failed.






[jira] [Updated] (HADOOP-18616) Java 11 JavaDoc fails due to missing package comments

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-18616:
-
Fix Version/s: (was: 3.4.0)
   (was: 3.3.6)

> Java 11 JavaDoc fails due to missing package comments
> -
>
> Key: HADOOP-18616
> URL: https://issues.apache.org/jira/browse/HADOOP-18616
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0, 3.3.5, 3.3.9
> Environment: Yetus Java 11 OpenJDK JavaDoc
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
>
> Submissions to `hadoop-common` fail in Yetus due to Java 11 JavaDoc errors:
> ```
> [ERROR] 
> /home/builder/src/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/package-info.java:21:
>  error: unknown tag: InterfaceAudience.Private
> [ERROR] @InterfaceAudience.Private
> [ERROR] ^
> [ERROR] 
> /home/builder/src/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/package-info.java:22:
>  error: unknown tag: InterfaceStability.Unstable
> [ERROR] @InterfaceStability.Unstable
> [ERROR] ^
> ```
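
The Jira title attributes these failures to missing package comments; as a hedged illustration only (not the committed fix), this is the usual Hadoop shape of a package-info.java that carries both a package comment and the audience annotations, with the package name taken from the error output above:

```java
/**
 * Package comment for the concurrent utilities (illustrative text only; the
 * real description belongs to the Hadoop developers).
 */
@InterfaceAudience.Private
@InterfaceStability.Unstable
package org.apache.hadoop.util.concurrent;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
```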






[jira] [Updated] (HADOOP-18578) Bump netty to the latest 4.1.86

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-18578:
-
Fix Version/s: (was: 3.4.0)
   (was: 3.3.6)

> Bump netty to the latest 4.1.86
> ---
>
> Key: HADOOP-18578
> URL: https://issues.apache.org/jira/browse/HADOOP-18578
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.4.0, 3.2.4, 3.3.4
>Reporter: Donghyun Kim
>Priority: Major
>  Labels: pull-request-available, transitive-cve
>
> Netty 4.1.86 fixes the following vulnerabilities.
>  * HAProxyMessageDecoder Stack Exhaustion DoS (CVE-2022-41881)
>  * HTTP Response splitting from assigning header value iterator 
> (CVE-2022-41915)
> For more details: https://netty.io/news/2022/12/12/4-1-86-Final.html






[jira] [Updated] (HADOOP-18311) Upgrade dependencies to address several CVEs

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-18311:
-
Fix Version/s: (was: 3.3.6)

> Upgrade dependencies to address several CVEs
> 
>
> Key: HADOOP-18311
> URL: https://issues.apache.org/jira/browse/HADOOP-18311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.3, 3.3.4
>Reporter: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> The following CVEs can be addressed by upgrading dependencies within the 
> build.  This includes a replacement of HTrace with a noop implementation.
>  * CVE-2018-7489
>  * CVE-2020-10663
>  * CVE-2020-28491
>  * CVE-2020-35490
>  * CVE-2020-35491
>  * CVE-2020-36518
>  * PRISMA-2021-0182
> This addresses all of the CVEs from 3.3.3 except for ones that would require 
> upgrading Netty to 4.x.  I'll be submitting a pull request for 3.3.4.






[jira] [Resolved] (HADOOP-18459) scan detected CVE-2021-37136 and CVE-2021-37137 in netty.io_netty_codec

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-18459.
--
Resolution: Won't Fix

snakeyaml is updated in 3.3.x.
netty3 is removed in trunk. I'll resolve this as a duplicate.

> scan detected CVE-2021-37136 and CVE-2021-37137 in netty.io_netty_codec
> ---
>
> Key: HADOOP-18459
> URL: https://issues.apache.org/jira/browse/HADOOP-18459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: deepagkanaka
>Assignee: Ashutosh Gupta
>Priority: Minor
>
> Our security scan detected CVE-2021-37136 and CVE-2021-37137 in io.netty_netty
>  
> |Component|Version|CVE|Fixed in|
> |io.netty_netty|3.10.6|CVE-2021-37136|4.1.68|
> |io.netty_netty|3.10.6|CVE-2021-37137|4.1.68|
> |org.yaml_snakeyaml|1.26|CVE-2022-25857|1.31|






[jira] [Updated] (HADOOP-18459) scan detected CVE-2021-37136 and CVE-2021-37137 in netty.io_netty_codec

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-18459:
-
Fix Version/s: (was: 3.3.6)

> scan detected CVE-2021-37136 and CVE-2021-37137 in netty.io_netty_codec
> ---
>
> Key: HADOOP-18459
> URL: https://issues.apache.org/jira/browse/HADOOP-18459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: deepagkanaka
>Assignee: Ashutosh Gupta
>Priority: Minor
>
> Our security scan detected CVE-2021-37136 and CVE-2021-37137 in io.netty_netty
>  
> |Component|Version|CVE|Fixed in|
> |io.netty_netty|3.10.6|CVE-2021-37136|4.1.68|
> |io.netty_netty|3.10.6|CVE-2021-37137|4.1.68|
> |org.yaml_snakeyaml|1.26|CVE-2022-25857|1.31|






[jira] [Resolved] (HADOOP-18123) Netty version 3.10.6 to be upgraded to handle CVE-2021-43797

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-18123.
--
Resolution: Duplicate

Netty3 is removed in Hadoop 3.4 in trunk. Resolve as a duplicate.

> Netty version 3.10.6 to be upgraded to handle CVE-2021-43797
> 
>
> Key: HADOOP-18123
> URL: https://issues.apache.org/jira/browse/HADOOP-18123
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Sushanta Sen
>Priority: Major
>
> Netty version 3.10.6 needs to be upgraded to handle CVE-2021-43797, even though Netty 4
> has already been upgraded to handle this issue.
> Please provide feedback.






[jira] [Updated] (HADOOP-18123) Netty version 3.10.6 to be upgraded to handle CVE-2021-43797

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-18123:
-
Fix Version/s: (was: 3.3.6)

> Netty version 3.10.6 to be upgraded to handle CVE-2021-43797
> 
>
> Key: HADOOP-18123
> URL: https://issues.apache.org/jira/browse/HADOOP-18123
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Sushanta Sen
>Priority: Major
>
> Netty version 3.10.6 needs to be upgraded to handle CVE-2021-43797, even though Netty 4
> has already been upgraded to handle this issue.
> Please provide feedback.






[jira] [Updated] (HADOOP-18054) Unable to load AWS credentials from any provider in the chain

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-18054:
-
Fix Version/s: (was: 3.3.6)

> Unable to load AWS credentials from any provider in the chain
> -
>
> Key: HADOOP-18054
> URL: https://issues.apache.org/jira/browse/HADOOP-18054
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, fs, fs/s3, security
>Affects Versions: 3.3.1
> Environment: From top to down.
> Kubernetes version 1.18.20
> Spark Version: 2.4.4
> Kubernetes Setup: Pod with serviceAccountName that binds with IAM Role using 
> IRSA (EKS Feature).
> {code:java}
> apiVersion: v1
> automountServiceAccountToken: true
> kind: ServiceAccount
> metadata:
>   annotations:
>     eks.amazonaws.com/role-arn: 
> arn:aws:iam:::role/EKSDefaultPolicyFor-Spark
>   name: spark
>   namespace: spark {code}
> AWS Setup:
> IAM Role with permissions over the S3 Bucket
> Bucket with permissions granted over the IAM Role.
> Code:
> {code:java}
> def run_etl():
> sc = 
> SparkSession.builder.appName("TXD-PYSPARK-ORACLE-SIEBEL-CASOS").getOrCreate()
> sqlContext = SQLContext(sc)
> args = sys.argv
> load_date = args[1]  # Ej: "2019-05-21"
> output_path = args[2]  # Ej: s3://mybucket/myfolder
> print(args, "load_date", load_date, "output_path", output_path)
> sc._jsc.hadoopConfiguration().set(
> "fs.s3a.aws.credentials.provider",
> "com.amazonaws.auth.DefaultAWSCredentialsProviderChain"
> )
> sc._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", 
> "true")
> sc._jsc.hadoopConfiguration().set("fs.s3a.impl", 
> "org.apache.hadoop.fs.s3a.S3AFileSystem")
> # sc._jsc.hadoopConfiguration().set("fs.s3.impl", 
> "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
> sc._jsc.hadoopConfiguration().set("fs.AbstractFileSystem.s3a.impl", 
> "org.apache.hadoop.fs.s3a.S3A")
> session = boto3.session.Session()
> client = session.client(service_name='secretsmanager', 
> region_name="us-east-1")
> get_secret_value_response = client.get_secret_value(
> SecretId="Siebel_Connection_Info"
> )
> secret = get_secret_value_response["SecretString"]
> secret = json.loads(secret)
> db_username = secret.get("db_username")
> db_password = secret.get("db_password")
> db_host = secret.get("db_host")
> db_port = secret.get("db_port")
> db_name = secret.get("db_name")
> db_url = "jdbc:oracle:thin:@{}:{}/{}".format(db_host, db_port, db_name)
> jdbc_driver_name = "oracle.jdbc.OracleDriver"
> dbtable = """(SELECT * FROM SIEBEL.REPORTE_DE_CASOS WHERE JOB_ID IN 
> (SELECT JOB_ID FROM SIEBEL.SERVICE_CONSUMED_STATUS WHERE 
> PUBLISH_INFORMATION_DT BETWEEN TO_DATE('{} 00:00:00', 'YYYY-MM-DD 
> HH24:MI:SS') AND TO_DATE('{} 23:59:59', 'YYYY-MM-DD 
> HH24:MI:SS')))""".format(load_date, load_date)
> df = sqlContext.read\
>   .format("jdbc")\
>   .option("charset", "utf8")\
>   .option("driver", jdbc_driver_name)\
>   .option("url",db_url)\
>   .option("dbtable", dbtable)\
>   .option("user", db_username)\
>   .option("password", db_password)\
>   .option("oracle.jdbc.timezoneAsRegion", "false")\
>   .load()
> # Partitioning
> a_load_date = load_date.split('-')
> df = df.withColumn("year", lit(a_load_date[0]))
> df = df.withColumn("month", lit(a_load_date[1]))
> df = df.withColumn("day", lit(a_load_date[2]))
> df.write.mode("append").partitionBy(["year", "month", 
> "day"]).csv(output_path, header=True)
> # It is important to close the connection to avoid problems like the one
> # reported in
> # 
> https://stackoverflow.com/questions/40830638/cannot-load-main-class-from-jar-file
> sc.stop()
> if __name__ == '__main__':
> run_etl() {code}
> Logs:
> {code:java}
> + '[' -z s3://mybucket.spark.jobs/siebel-casos-actividades ']'
> + aws s3 cp s3://mybucket.spark.jobs/siebel-casos-actividades /opt/ 
> --recursive --include '*'
> download: 
> s3://mybucket.spark.jobs/siebel-casos-actividades/txd-pyspark-siebel-casos.py 
> to ../../txd-pyspark-siebel-casos.py
> download: 
> s3://mybucket.spark.jobs/siebel-casos-actividades/txd-pyspark-siebel-actividades.py
>  to ../../txd-pyspark-siebel-actividades.py
> download: s3://mybucket.jobs/siebel-casos-actividades/hadoop-aws-3.3.1.jar to 
> ../../hadoop-aws-3.3.1.jar
> download: s3://mybucket.spark.jobs/siebel-casos-actividades/ojdbc8.jar to 
> ../../ojdbc8.jar
> download: 
> s3://mybucket.spark.jobs/siebel-casos-actividades/aws-java-sdk-bundle-1.11.901.jar
>  to ../../aws-java-sdk-bundle-1.11.901.jar
> ++ id -u
> + myuid=0
> ++ id -g
> + mygid=0
> + set +e
> ++ getent passwd 0
> + uidentry=root:x:0:0:root:/root:/bin/ash
> + set -e
> + '[' -z 
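
For context on the credential chain used in the job above: on EKS with IRSA, the
service account injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE into the pod,
and a sufficiently recent aws-java-sdk-bundle can pick these up through the
web-identity provider. A minimal sketch of pointing S3A at that provider directly
is below; the bucket name is a placeholder, and whether this resolves the failure
reported here depends on the bundled SDK version, which is an assumption rather
than something confirmed in this report.

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3AIrsaSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Use the web-identity (IRSA) provider directly rather than the whole
    // default chain; it reads AWS_WEB_IDENTITY_TOKEN_FILE and AWS_ROLE_ARN
    // injected by the EKS service account.
    conf.set("fs.s3a.aws.credentials.provider",
        "com.amazonaws.auth.WebIdentityTokenCredentialsProvider");
    conf.set("fs.s3a.endpoint", "s3.us-east-1.amazonaws.com");

    // "mybucket" is a placeholder, not a bucket from this report.
    FileSystem fs = FileSystem.get(new URI("s3a://mybucket/"), conf);
    for (FileStatus st : fs.listStatus(new Path("/"))) {
      System.out.println(st.getPath());
    }
  }
}
{code}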

[jira] [Updated] (HADOOP-18079) Upgrade Netty to 4.1.77.Final

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-18079:
-
Fix Version/s: (was: 3.3.6)

> Upgrade Netty to 4.1.77.Final
> -
>
> Key: HADOOP-18079
> URL: https://issues.apache.org/jira/browse/HADOOP-18079
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.3
>Reporter: Renukaprasad C
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4, 3.2.5
>
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> h4. Netty version 4.1.71 has fixes for some CVEs:
> CVE-2019-20444,
> CVE-2019-20445
> CVE-2022-24823
> Upgrade to the latest version.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17980) Spark application stuck at ACCEPTED state (unset port issue)

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-17980:
-
Fix Version/s: (was: 3.3.6)

> Spark application stuck at ACCEPTED state (unset port issue)
> 
>
> Key: HADOOP-17980
> URL: https://issues.apache.org/jira/browse/HADOOP-17980
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.2
>Reporter: unical1988
>Priority: Major
>
> Hello guys! 
>  
> I am using Hadoop 3.3.2 to set up a cluster of 2 nodes. I was able to start 
> both Hadoop (via hdfs namenode -regular and hdfs datanode -regular, one command 
> on each machine) and YARN (yarn resourcemanager on the master, yarn nodemanager 
> on the slave) manually. But when I issue a spark-submit command to run my 
> application, it gets stuck in the ACCEPTED state and the log of the slave 
> machine shows the following error: 
>  
>  
>  
> {noformat}
> 2021-10-26 19:51:40,359 INFO handler.ContextHandler: Started 
> o.s.j.s.ServletContextHandler@1914cad9{/executors/json,null,AVAILABLE,@Spark}
> 2021-10-26 19:51:40,359 INFO ui.ServerInfo: Adding filter to 
> /executors/threadDump: 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 2021-10-26 19:51:40,360 INFO handler.ContextHandler: Started 
> o.s.j.s.ServletContextHandler@1778f2da{/executors/threadDump,null,AVAILABLE,@Spark}
> 2021-10-26 19:51:40,361 INFO ui.ServerInfo: Adding filter to 
> /executors/threadDump/json: 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 2021-10-26 19:51:40,362 INFO handler.ContextHandler: Started 
> o.s.j.s.ServletContextHandler@22a2a185{/executors/threadDump/json,null,AVAILABLE,@Spark}
> 2021-10-26 19:51:40,362 INFO ui.ServerInfo: Adding filter to /static: 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 2021-10-26 19:51:40,383 INFO handler.ContextHandler: Started 
> o.s.j.s.ServletContextHandler@74a801ad{/static,null,AVAILABLE,@Spark}
> 2021-10-26 19:51:40,384 INFO ui.ServerInfo: Adding filter to /: 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 2021-10-26 19:51:40,385 INFO handler.ContextHandler: Started 
> o.s.j.s.ServletContextHandler@27bcbe54{/,null,AVAILABLE,@Spark}
> 2021-10-26 19:51:40,386 INFO ui.ServerInfo: Adding filter to /api: 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 2021-10-26 19:51:40,390 INFO handler.ContextHandler: Started 
> o.s.j.s.ServletContextHandler@19646f00{/api,null,AVAILABLE,@Spark}
> 2021-10-26 19:51:40,390 INFO ui.ServerInfo: Adding filter to /jobs/job/kill: 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 2021-10-26 19:51:40,391 INFO handler.ContextHandler: Started 
> o.s.j.s.ServletContextHandler@4f7ec9ca{/jobs/job/kill,null,AVAILABLE,@Spark}
> 2021-10-26 19:51:40,391 INFO ui.ServerInfo: Adding filter to 
> /stages/stage/kill: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 2021-10-26 19:51:40,394 INFO handler.ContextHandler: Started 
> o.s.j.s.ServletContextHandler@33a1fb05{/stages/stage/kill,null,AVAILABLE,@Spark}
> 2021-10-26 19:51:40,396 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and 
> started at http://slaveVM1:64888
> 2021-10-26 19:51:40,486 INFO cluster.YarnClusterScheduler: Created 
> YarnClusterScheduler
> 2021-10-26 19:51:40,664 INFO util.Utils: Successfully started service 
> 'org.apache.spark.network.netty.NettyBlockTransferService' on port 64902.
> 2021-10-26 19:51:40,664 INFO netty.NettyBlockTransferService: Server created 
> on slaveVM1:64902
> 2021-10-26 19:51:40,666 INFO storage.BlockManager: Using 
> org.apache.spark.storage.RandomBlockReplicationPolicy for block replication 
> policy
> 2021-10-26 19:51:40,679 INFO storage.BlockManagerMaster: Registering 
> BlockManager BlockManagerId(driver, slaveVM1, 64902, None)
> 2021-10-26 19:51:40,685 INFO storage.BlockManagerMasterEndpoint: Registering 
> block manager slaveVM1:64902 with 366.3 MiB RAM, BlockManagerId(driver, 
> slaveVM1, 64902, None)
> 2021-10-26 19:51:40,688 INFO storage.BlockManagerMaster: Registered 
> BlockManager BlockManagerId(driver, slaveVM1, 64902, None)
> 2021-10-26 19:51:40,689 INFO storage.BlockManager: Initialized BlockManager: 
> BlockManagerId(driver, slaveVM1, 64902, None)
> 2021-10-26 19:51:40,925 INFO ui.ServerInfo: Adding filter to /metrics/json: 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 2021-10-26 19:51:40,926 INFO handler.ContextHandler: Started 
> o.s.j.s.ServletContextHandler@97b0a9c{/metrics/json,null,AVAILABLE,@Spark}
> 2021-10-26 19:51:41,029 INFO client.RMProxy: Connecting to ResourceManager at 
> /0.0.0.0:8030
> 2021-10-26 19:51:41,096 INFO yarn.YarnRMClient: Registering the 
> ApplicationMaster
> 2021-10-26 19:51:43,156 INFO ipc.Client: Retrying connect to server: 
> 
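
The "Connecting to ResourceManager at /0.0.0.0:8030" line above is the usual
symptom of the scheduler address never being set on the node running the
ApplicationMaster, so it falls back to the 0.0.0.0 default. A minimal sketch of
the relevant properties follows (they would normally go into yarn-site.xml on
every node; the host name "masterVM" is a placeholder, not taken from this report).

{code:java}
import org.apache.hadoop.conf.Configuration;

public class YarnRmAddressSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // With yarn.resourcemanager.hostname set, the scheduler, admin and webapp
    // addresses all default to that host instead of 0.0.0.0.
    conf.set("yarn.resourcemanager.hostname", "masterVM");
    // Or set the scheduler address (the port the AM failed to reach) explicitly.
    conf.set("yarn.resourcemanager.scheduler.address", "masterVM:8030");
    System.out.println(conf.get("yarn.resourcemanager.scheduler.address"));
  }
}
{code}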

[jira] [Updated] (HADOOP-17842) S3a parquet reads slow with Spark on Kubernetes (EKS)

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-17842:
-
Fix Version/s: (was: 3.3.6)

> S3a parquet reads slow with Spark on Kubernetes (EKS)
> -
>
> Key: HADOOP-17842
> URL: https://issues.apache.org/jira/browse/HADOOP-17842
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Abhinav Kumar
>Priority: Minor
>
> I am trying to read parquet saved in S3 via Spark on EKS using hadoop-AWS 
> 3.2.0. There are 112 partitions (each around 130MB) for a particular month.
>  
> The data is being read, but very slowly. I just keep seeing the lines below, and 
> only a very small amount of data is actually being fetched.
>  
> 21/08/09 05:07:05 DEBUG Executor task launch worker for task 60.0 in stage 
> 3.0 (TID 63) Invoker: Values passed - text: read on 
> s3a://uat1-prp-rftu-25-045552507264-us-east-1////table_fact_mtd_c/ptn_val_txt=20200229/part-00012-32dbfb10-b43c-4066-a70e-d3575ea530d5-c000.snappy.parquet,
>  idempotent: true, Retried: 
> org.apache.hadoop.fs.s3a.S3AFileSystem$$Lambda$1199/2130521693@5259f9d0, 
> Operation:org.apache.hadoop.fs.s3a.Invoker$$Lambda$1239/37396157@454de3d3
> 21/08/09 05:07:05 DEBUG Executor task launch worker for task 60.0 in stage 
> 3.0 (TID 63) Invoker: retryUntranslated begin
> 21/08/09 05:07:05 DEBUG Executor task launch worker for task 60.0 in stage 
> 3.0 (TID 63) Invoker: Values passed - text: lazySeek on 
> s3a://uat1-prp-rftu-25-045552507264-us-east-1////table_fact_mtd_c/ptn_val_txt=20200229/part-00012-32dbfb10-b43c-4066-a70e-d3575ea530d5-c000.snappy.parquet,
>  idempotent: true, Retried: 
> org.apache.hadoop.fs.s3a.S3AFileSystem$$Lambda$1199/2130521693@5259f9d0, 
> Operation:org.apache.hadoop.fs.s3a.Invoker$$Lambda$1239/37396157@3776ef6c
> 21/08/09 05:07:05 DEBUG Executor task launch worker for task 60.0 in stage 
> 3.0 (TID 63) Invoker: retryUntranslated begin
> 21/08/09 05:07:05 DEBUG Executor task launch worker for task 60.0 in stage 
> 3.0 (TID 63) Invoker: Values passed - text: read on 
> s3a://uat1-prp-rftu-25-045552507264-us-east-1////table_fact_mtd_c/ptn_val_txt=20200229/part-00012-32dbfb10-b43c-4066-a70e-d3575ea530d5-c000.snappy.parquet,
>  idempotent: true, Retried: 
> org.apache.hadoop.fs.s3a.S3AFileSystem$$Lambda$1199/2130521693@5259f9d0, 
> Operation:org.apache.hadoop.fs.s3a.Invoker$$Lambda$1239/37396157@3602676a
> 21/08/09 05:07:05 DEBUG Executor task launch worker for task 60.0 in stage 
> 3.0 (TID 63) Invoker: retryUntranslated begin
>  
> Here is the spark config for hadoop-aws.
> |spark.hadoop.fs.s3a.assumed.role.sts.endpoint: https://sts.amazonaws.com|
> |spark.hadoop.fs.s3a.assumed.role.sts.endpoint.region: us-east-1|
> |spark.hadoop.fs.s3a.attempts.maximum: 20|
> |spark.hadoop.fs.s3a.aws.credentials.provider: 
> org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider|
> |spark.hadoop.fs.s3a.block.size: 128M|
> |spark.hadoop.fs.s3a.connection.establish.timeout: 5|
> |spark.hadoop.fs.s3a.connection.maximum: 50|
> |spark.hadoop.fs.s3a.connection.ssl.enabled: true|
> |spark.hadoop.fs.s3a.connection.timeout: 200|
> |spark.hadoop.fs.s3a.endpoint: s3.us-east-1.amazonaws.com|
> |spark.hadoop.fs.s3a.etag.checksum.enabled: false|
> |spark.hadoop.fs.s3a.experimental.input.fadvise: normal|
> |spark.hadoop.fs.s3a.fast.buffer.size: 1048576|
> |spark.hadoop.fs.s3a.fast.upload: true|
> |spark.hadoop.fs.s3a.fast.upload.active.blocks: 8|
> |spark.hadoop.fs.s3a.fast.upload.buffer: bytebuffer|
> |spark.hadoop.fs.s3a.impl: org.apache.hadoop.fs.s3a.S3AFileSystem|
> |spark.hadoop.fs.s3a.list.version: 2|
> |spark.hadoop.fs.s3a.max.total.tasks: 30|
> |spark.hadoop.fs.s3a.metadatastore.authoritative: false|
> |spark.hadoop.fs.s3a.metadatastore.impl: 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore|
> |spark.hadoop.fs.s3a.multiobjectdelete.enable: true|
> |spark.hadoop.fs.s3a.multipart.purge: true|
> |spark.hadoop.fs.s3a.multipart.purge.age: 86400|
> |spark.hadoop.fs.s3a.multipart.size: 32M|
> |spark.hadoop.fs.s3a.multipart.threshold: 64M|
> |spark.hadoop.fs.s3a.paging.maximum: 5000|
> |spark.hadoop.fs.s3a.readahead.range: 65536|
> |spark.hadoop.fs.s3a.retry.interval: 500ms|
> |spark.hadoop.fs.s3a.retry.limit: 20|
> |spark.hadoop.fs.s3a.retry.throttle.interval: 500ms|
> |spark.hadoop.fs.s3a.retry.throttle.limit: 20|
> |spark.hadoop.fs.s3a.s3.client.factory.impl: 
> org.apache.hadoop.fs.s3a.DefaultS3ClientFactory|
> |spark.hadoop.fs.s3a.s3guard.ddb.background.sleep: 25|
> |spark.hadoop.fs.s3a.s3guard.ddb.max.retries: 20|
> |spark.hadoop.fs.s3a.s3guard.ddb.region: us-east-1|
> |spark.hadoop.fs.s3a.s3guard.ddb.table: s3-data-guard-master|
> |spark.hadoop.fs.s3a.s3guard.ddb.table.capacity.read: 500|
> 
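
One detail worth noting in the configuration above: fs.s3a.experimental.input.fadvise
is set to "normal", while column-oriented formats such as Parquet usually benefit from
the "random" policy, which avoids aborting and reopening large sequential GETs on
every seek. A minimal sketch of that change is below; the actual effect on this
particular workload is an assumption, not something verified in this report.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class S3AFadviseSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Random I/O policy suits Parquet/ORC footer and column-chunk reads.
    conf.set("fs.s3a.experimental.input.fadvise", "random");
    // Modest readahead; each seek only needs the next column chunk.
    conf.set("fs.s3a.readahead.range", "1M");
    System.out.println(conf.get("fs.s3a.experimental.input.fadvise"));
  }
}
{code}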

[jira] [Updated] (HADOOP-17556) Understanding Netty versions and upgrading them (three findings in Hadoop we could upgrade?)

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-17556:
-
Fix Version/s: (was: 3.3.6)

> Understanding Netty versions and upgrading them (three findings in Hadoop we 
> could upgrade?)
> 
>
> Key: HADOOP-17556
> URL: https://issues.apache.org/jira/browse/HADOOP-17556
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Roberts
>Priority: Major
>
> Hi everyone, I have been raising a few JIRAs recently related to dependencies 
> in Flink and Hadoop, and for Hadoop I have noticed the following versions of 
> Netty in use. I'm wondering if we can work to upgrade these (potentially all 
> to the same version) to remediate any CVEs we have. 
>  
> Here's what the Twistlock container scan picked up (this is Flink with a 
> Hadoop 3.3.1 snapshot, which I've scanned), so any thoughts or upgrade ideas 
> would be most welcome.
>  
> "version": "3.10.6.Final"
>  "name": "io.netty_netty"
> "path": "/opt/flink/lib/flink-shaded-hadoop-3-uber-3.3.1-SNAPSHOT-10.0.jar"
>  
> "version": "4.1.50.Final"
> "name": "io.netty_netty-all"
> "path": "/opt/flink/lib/flink-shaded-hadoop-3-uber-3.3.1-SNAPSHOT-10.0.jar"
>  
> "version": "4.1.42.Final"
> "name": "io.netty_netty-codec"
> "path": "/opt/flink/lib/flink-shaded-hadoop-3-uber-3.3.1-SNAPSHOT-10.0.jar"
>  
> The latest 4.1 Netty I see is
>  {{[https://mvnrepository.com/artifact/io.netty/netty-all/4.1.59.Final]}}
>  
> which may help with the above findings (assume things are all compatible!), 
> thanks
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17448) Netty library causing issue while running spark on yarn in PSuedodistributed mode

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-17448:
-
Fix Version/s: (was: 3.3.6)

> Netty library causing issue while running spark on yarn in PSuedodistributed 
> mode
> -
>
> Key: HADOOP-17448
> URL: https://issues.apache.org/jira/browse/HADOOP-17448
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.1
> Environment: windows 10 
> hadoop 3.2.1
> Spark 3.0.1
>Reporter: Aldrin Machado
>Priority: Minor
> Attachments: netty_issue
>
>
> While running in pseudo-distributed mode, there is a different version of the 
> netty-all library in HDFS (4.0.52); this is fine until you try to run Spark on 
> it. Spark 3.0.1 comes with netty-all 4.1.47.
> Now both libraries get loaded on the classpath, and spark-submit fails 
> with a method error. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16918) Dependency update for Hadoop 2.10

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16918:
-
Fix Version/s: (was: 3.3.6)

> Dependency update for Hadoop 2.10
> -
>
> Key: HADOOP-16918
> URL: https://issues.apache.org/jira/browse/HADOOP-16918
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
> Fix For: 2.10.1
>
> Attachments: dependency-check-report.html, 
> dependency-check-report.html
>
>
> A number of dependencies can be updated.
> nimbus-jose-jwt
> jetty
> netty
> zookeeper
> hbase-common
> jackson-databind
> and many more. They should be updated in the 2.10.1 release.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16982) Update Netty to 4.1.48.Final

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16982:
-
Fix Version/s: (was: 3.3.6)

> Update Netty to 4.1.48.Final
> 
>
> Key: HADOOP-16982
> URL: https://issues.apache.org/jira/browse/HADOOP-16982
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Blocker
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HADOOP-16982-branch-3.1.001.patch, 
> HADOOP-16982-branch-3.2.001.patch, HADOOP-16982.001.patch, 
> HADOOP-16982.002.patch, HADOOP-16982.003.patch
>
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16990) Update Mockserver

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16990:
-
Fix Version/s: (was: 3.3.6)

> Update Mockserver
> -
>
> Key: HADOOP-16990
> URL: https://issues.apache.org/jira/browse/HADOOP-16990
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Attila Doroszlai
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
> Attachments: HADOOP-16990-branch-3.1.004.patch, 
> HADOOP-16990-branch-3.3.002.patch, HADOOP-16990.001.patch, 
> HDFS-15620-branch-3.3-addendum.patch
>
>
> We are on Mockserver 3.9.2 which is more than 5 years old. Time to update.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15980) Enable TLS in RPC client/server

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15980:
-
Fix Version/s: (was: 3.3.6)

> Enable TLS in RPC client/server
> ---
>
> Key: HADOOP-15980
> URL: https://issues.apache.org/jira/browse/HADOOP-15980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Once the RPC client and server can be configured to use Netty, the TLS engine 
> can be added to the channel pipeline.  The server should allow QoS-like 
> functionality to determine if TLS is mandatory or optional for a client.
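
As a rough illustration of what "adding the TLS engine to the channel pipeline"
looks like on the Netty side, a minimal server-side sketch is below. The certificate
and key file paths and the handler name are placeholders, and the per-client
mandatory/optional TLS negotiation described above is not shown.

{code:java}
import java.io.File;

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;

public class TlsPipelineSketch extends ChannelInitializer<SocketChannel> {
  private final SslContext sslCtx;

  public TlsPipelineSketch() throws Exception {
    // Placeholder certificate/key files, not part of the actual proposal.
    this.sslCtx = SslContextBuilder
        .forServer(new File("server.crt"), new File("server.key"))
        .build();
  }

  @Override
  protected void initChannel(SocketChannel ch) {
    // TLS has to sit at the head of the pipeline so that the later handlers
    // (framing, RPC decoding) only ever see plaintext.
    ch.pipeline().addFirst("ssl", sslCtx.newHandler(ch.alloc()));
    // ... RPC codec and server handlers would be added after the SSL handler ...
  }
}
{code}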



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15893:
-
Fix Version/s: (was: 3.3.6)

> fs.TrashPolicyDefault: can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Assignee: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch, HADOOP-15893.002.patch
>
>
> There is a race condition in the method moveToTrash of class TrashPolicyDefault:
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
>     LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
>     return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
>     existsFilePath = existsFilePath.getParent();
>   }
>   // Problem case: another thread deletes existsFilePath here, and the result
>   // does not meet expectations. For example, given
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b, when deleting
>   // /user/u_sunlisheng/b/a, if existsFilePath is deleted the result becomes
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timestamp/b/a.
>   // So when existsFilePath is deleted, don't modify baseTrashPath.
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>       existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.6 in Hadoop

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16579:
-
Fix Version/s: (was: 3.3.6)

> Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.6 in Hadoop
> -
>
> Key: HADOOP-16579
> URL: https://issues.apache.org/jira/browse/HADOOP-16579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Mate Szalay-Beko
>Assignee: Norbert Kalmár
>Priority: Major
> Fix For: 3.3.0
>
>
> *Update:* the original idea was to only update Curator but keep the old 
> ZooKeeper version in Hadoop. However, we encountered some run-time 
> backward-incompatibility during unit tests with Curator 4.2.0 and ZooKeeper 
> 3.5.5. We haven't really investigated deeply these issues, but upgraded to 
> ZooKeeper 3.5.5 (and later to 3.5.6). We had to do some minor fixes in the 
> unit tests (and also had to change some deprecated Curator API calls), but 
> [the latest PR|https://github.com/apache/hadoop/pull/1656] seems to be stable.
> ZooKeeper 3.5.6 just got released during our work. (I think the official 
> announcement will get out maybe tomorrow, but it is already available in 
> maven central or on the [Apache ZooKeeper ftp 
> site|https://www-eu.apache.org/dist/zookeeper/]). It is considered to be a 
> stable version, contains some minor fixes and improvements, plus some CVE 
> fixes. See the [release 
> notes|https://github.com/apache/zookeeper/blob/branch-3.5.6/zookeeper-docs/src/main/resources/markdown/releasenotes.md].
>  
> 
> Currently in Hadoop we are using [ZooKeeper version 
> 3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
>  ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
> many new features (including SSL related improvements which can be very 
> important for production use; see [the release 
> notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).
> Apache Curator is a high level ZooKeeper client library, that makes it easier 
> to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
> 2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
>  and [in Ozone we use Curator 
> 2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].
> Curator 2.x is supporting only the ZooKeeper 3.4.x releases, while Curator 
> 3.x is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, 
> the latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 
> 3.5.x. (see [the relevant Curator 
> page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
> have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.), other 
> components are doing it right now (e.g. Hive).
> *The aims of this task are* to:
>  - change Curator version in Hadoop to the latest stable 4.x version 
> (currently 4.2.0)
>  - also make sure we don't have multiple ZooKeeper versions in the classpath 
> to avoid runtime problems (it is 
> [recommended|https://curator.apache.org/zk-compatibility.html] to exclude the 
> ZooKeeper which come with Curator, so that there will be only a single 
> ZooKeeper version used runtime in Hadoop)
> In this ticket we still don't want to change the default ZooKeeper version in 
> Hadoop, we only want to make it possible for the community to be able to 
> build / use Hadoop with the new ZooKeeper (e.g. if they need to secure the 
> ZooKeeper communication with SSL, what is only supported in the new ZooKeeper 
> version). Upgrading to Curator 4.x should keep Hadoop to be compatible with 
> both ZooKeeper 3.4 and 3.5.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16381) The JSON License is included in binary tarball via azure-documentdb:1.16.2

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16381:
-
Fix Version/s: (was: 3.3.6)

> The JSON License is included in binary tarball via azure-documentdb:1.16.2
> --
>
> Key: HADOOP-16381
> URL: https://issues.apache.org/jira/browse/HADOOP-16381
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Sushil Ks
>Priority: Blocker
> Attachments: HADOOP-16381.001.patch, HADOOP-16381.002.patch
>
>
> {noformat}
> $ mvn dependency:tree
> (snip)
> [INFO] +- com.microsoft.azure:azure-documentdb:jar:1.16.2:compile
> [INFO] |  +- com.fasterxml.uuid:java-uuid-generator:jar:3.1.4:compile
> [INFO] |  +- org.json:json:jar:20140107:compile
> [INFO] |  +- org.apache.httpcomponents:httpcore:jar:4.4.10:compile
> [INFO] |  \- joda-time:joda-time:jar:2.9.9:compile
> {noformat}
> org.json:json is JSON Licensed and it must be removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16544) update io.netty in branch-2

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16544:
-
Fix Version/s: (was: 3.3.6)

> update io.netty in branch-2
> ---
>
> Key: HADOOP-16544
> URL: https://issues.apache.org/jira/browse/HADOOP-16544
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: release-blocker
> Fix For: 2.10.0
>
> Attachments: HADOOP-16544-branch-2.001.patch, 
> HADOOP-16544-branch-2.002.patch, HADOOP-16544-branch-2.003.patch, 
> HADOOP-16544-branch-2.004.patch
>
>
> branch-2 pulls in io.netty 3.6.2.Final which is more than 5 years old.
> The latest is 3.10.6.Final. I know updating netty is sensitive, but it deserves 
> some attention.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15849) Upgrade netty version to 3.10.6

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15849:
-
Fix Version/s: (was: 3.3.6)

> Upgrade netty version to 3.10.6 
> 
>
> Key: HADOOP-15849
> URL: https://issues.apache.org/jira/browse/HADOOP-15849
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HADOOP-15849.01.patch
>
>
> We're currently at 3.10.5. It'd be good to upgrade to the latest 3.10.6 
> release.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14813) Windows build fails "command line too long"

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14813:
-
Fix Version/s: (was: 3.3.6)

> Windows build fails "command line too long"
> ---
>
> Key: HADOOP-14813
> URL: https://issues.apache.org/jira/browse/HADOOP-14813
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
> Environment: Windows. username "Administrator"
>Reporter: Steve Loughran
>Priority: Minor
>
> Trying to build trunk as user "administrator" fails in 
> native-maven-plugin/hadoop-common with "command line too long". By the look of 
> things, it is the number of artifacts from the Maven repository that fills 
> up the line; the classpath really needs to go into a file instead, assuming 
> the Maven plugin will let us.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13866:
-
Fix Version/s: (was: 3.3.6)

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch, 
> HADOOP-13866.v7.patch, HADOOP-13866.v8.patch, HADOOP-13866.v8.patch, 
> HADOOP-13866.v8.patch, HADOOP-13866.v9.patch
>
>
> netty-all 4.1.1.Final is a stable release that we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}
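
When chasing this kind of NoSuchMethodError, a quick way to see which jar actually
supplied the conflicting class at runtime is to ask the class for its code source.
A small, generic sketch of that check is below (it is a general JVM technique, not
tied to any Hadoop or HBase API).

{code:java}
public class WhichJarSketch {
  public static void main(String[] args) throws Exception {
    // Prints the jar the running JVM loaded ByteBuf from, which shows whether
    // the 4.0.x or the 4.1.x netty-all won on the classpath.
    Class<?> c = Class.forName("io.netty.buffer.ByteBuf");
    System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
  }
}
{code}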



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14685) Test jars to exclude from hadoop-client-minicluster jar

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14685:
-
Fix Version/s: (was: 3.3.6)

> Test jars to exclude from hadoop-client-minicluster jar
> ---
>
> Key: HADOOP-14685
> URL: https://issues.apache.org/jira/browse/HADOOP-14685
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14685.01.patch, HADOOP-14685.patch
>
>
> This jira is to discuss, what test jars to be included/excluded from 
> hadoop-client-minicluster
> Jars included/excluded when building hadoop-client-minicluster
> [INFO] --- maven-shade-plugin:2.4.3:shade (default) @ 
> hadoop-client-minicluster ---
> [INFO] Excluding org.apache.hadoop:hadoop-client-api:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-client-runtime:jar:3.0.0-beta1-SNAPSHOT from the 
> shaded jar.
> [INFO] Excluding org.apache.htrace:htrace-core4:jar:4.1.0-incubating from the 
> shaded jar.
> [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.25 from the shaded jar.
> [INFO] Excluding commons-logging:commons-logging:jar:1.1.3 from the shaded 
> jar.
> [INFO] Excluding junit:junit:jar:4.11 from the shaded jar.
> [INFO] Including org.hamcrest:hamcrest-core:jar:1.3 in the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-annotations:jar:3.0.0-beta1-SNAPSHOT from the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-minicluster:jar:3.0.0-beta1-SNAPSHOT in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-tests:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:3.0.0-beta1-SNAPSHOT 
> in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including de.ruedigermoeller:fst:jar:2.50 in the shaded jar.
> [INFO] Including com.cedarsoftware:java-util:jar:1.9.0 in the shaded jar.
> [INFO] Including com.cedarsoftware:json-io:jar:2.5.1 in the shaded jar.
> [INFO] Including org.apache.curator:curator-test:jar:2.12.0 in the shaded jar.
> [INFO] Including org.javassist:javassist:jar:3.18.1-GA in the shaded jar.
> [INFO] Including org.apache.hadoop:hadoop-hdfs:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including org.eclipse.jetty:jetty-util-ajax:jar:9.3.11.v20160721 in 
> the shaded jar.
> [INFO] Including commons-daemon:commons-daemon:jar:1.0.13 in the shaded jar.
> [INFO] Including io.netty:netty-all:jar:4.0.23.Final in the shaded jar.
> [INFO] Including xerces:xercesImpl:jar:2.9.1 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-hs:jar:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-yarn-server-timelineservice:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-common:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-hdfs:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-core:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-client:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-json:jar:1.19 in the shaded jar.
> [INFO] Including org.codehaus.jettison:jettison:jar:1.1 in the shaded jar.
> [INFO] Including com.sun.xml.bind:jaxb-impl:jar:2.2.3-1 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-server:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-servlet:jar:1.19 in the shaded jar.
> [INFO] Including org.eclipse.jdt:core:jar:3.1.1 in the shaded jar.
> [INFO] Including net.sf.kosmosfs:kfs:jar:0.3 in the shaded jar.
> [INFO] Including net.java.dev.jets3t:jets3t:jar:0.9.0 in the shaded jar.
> [INFO] Including com.jamesmurty.utils:java-xmlbuilder:jar:0.4 in the shaded 
> jar.
> [INFO] Including com.jcraft:jsch:jar:0.1.54 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including com.codahale.metrics:metrics-core:jar:3.0.1 in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including org.eclipse.jetty:jetty-server:jar:9.3.11.v20160721 in the 
> shaded jar.
> [INFO] Including 

[jira] [Updated] (HADOOP-15560) ABFS: removed dependency injection and unnecessary dependencies

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15560:
-
Fix Version/s: (was: 3.3.6)

> ABFS: removed dependency injection and unnecessary dependencies
> ---
>
> Key: HADOOP-15560
> URL: https://issues.apache.org/jira/browse/HADOOP-15560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: HADOOP-15407
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: HADOOP-15047
>
> Attachments: HADOOP-15407-HADOOP-15407-009.patch
>
>
> # Removed dependency injection and unnecessary dependencies.
>  # Added tool to clean up test containers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15327:
-
Fix Version/s: (was: 3.3.6)

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> hades-results-20221108.zip, testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> This way, we can remove the dependency on Netty 3 (jboss.netty).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15124:
-
Fix Version/s: (was: 3.3.6)

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, pull-request-available, 
> statistics
> Attachments: HADOOP-15124-branch-3.2.001.patch, 
> HADOOP-15124-branch-3.2.002.patch, HADOOP-15124.001.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 2 
> workers, GCS connector) I saw that the FileSystem.Statistics code paths' wall time 
> is 5.58% and CPU time is 26.5% of total execution time.
> After switching FileSystem.Statistics implementation to LongAdder, consumed 
> Wall time decreased to 0.006% and CPU time to 0.104% of total execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't benchmark multiple times 
> to average results, but regardless of performance gains switching to 
> LongAdder simplifies code and reduces its complexity.
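
For readers unfamiliar with the change being proposed, the core of it is replacing
contended long counters with java.util.concurrent.atomic.LongAdder, which stripes
updates across internal cells and only folds them together on read. A minimal,
generic sketch of the pattern is below; it is illustrative only, not the actual
FileSystem.Statistics code.

{code:java}
import java.util.concurrent.atomic.LongAdder;

public class BytesReadCounterSketch {
  private final LongAdder bytesRead = new LongAdder();

  // Hot path: called from many reader threads; add() rarely contends because
  // each thread tends to update its own internal cell.
  public void recordBytesRead(long n) {
    bytesRead.add(n);
  }

  // Cold path: sum() walks the cells, so it is cheap enough for periodic
  // metrics reporting but not meant to be called on every read.
  public long getBytesRead() {
    return bytesRead.sum();
  }
}
{code}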



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14647) Update third-party libraries for Hadoop 3

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14647:
-
Fix Version/s: (was: 3.3.6)

> Update third-party libraries for Hadoop 3
> -
>
> Key: HADOOP-14647
> URL: https://issues.apache.org/jira/browse/HADOOP-14647
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.0.0-beta1
>
>
> There are a bunch of old third-party dependencies in trunk.  Before we get to 
> the final release candidate, it would be good to move some of these 
> dependencies to the latest (or at least the latest compatible) versions, 
> since it could be a while before the next update.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14539) Move commons logging APIs over to slf4j in hadoop-common

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14539:
-
Fix Version/s: (was: 3.3.6)

> Move commons logging APIs over to slf4j in hadoop-common
> 
>
> Key: HADOOP-14539
> URL: https://issues.apache.org/jira/browse/HADOOP-14539
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Wenxin He
>Priority: Major
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14539-branch-2.001.patch, 
> HADOOP-14539-branch-2.002.patch, HADOOP-14539.001.patch, 
> HADOOP-14539.002.patch, HADOOP-14539.003.patch, 
> diff-checkstyle-hadoop-common-project.txt
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13436) RPC connections are leaking due to not overriding hashCode and equals

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13436:
-
Fix Version/s: (was: 3.3.6)

> RPC connections are leaking due to not overriding hashCode and equals
> -
>
> Key: HADOOP-13436
> URL: https://issues.apache.org/jira/browse/HADOOP-13436
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Major
> Attachments: Proposal-of-Fixing-Connection-Leakage.pdf, repro.sh
>
>
> We've noticed RPC connections are increasing dramatically in a Kerberized 
> HDFS cluster with {noformat}dfs.client.retry.policy.enabled{noformat} 
> enabled. Internally, Client#getConnection does a lookup that relies on 
> ConnectionId#hashCode and then #equals, which in turn check the RetryPolicy 
> subclass's #hashCode and #equals. If subclasses of RetryPolicy neglect to 
> override #hashCode or #equals, every instance of RetryPolicy with equivalent 
> field values (e.g. MultipleLinearRandomRetry[6x1ms, 10x6ms]) leads to a 
> brand new connection, because the check falls back to Object#hashCode and 
> Object#equals, which are distinct and false for distinct instances.
> This is the stack trace where the anonymous RetryPolicy implementation 
> (which neglects to override hashCode and equals) in 
> RetryUtils#getDefaultRetryPolicy is called:
> {noformat}
> at 
> org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy(RetryUtils.java:82)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:409)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:315)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:678)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:619)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:609)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.newDfsClient(WebHdfsHandler.java:272)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.onOpen(WebHdfsHandler.java:215)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.handle(WebHdfsHandler.java:135)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:117)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:114)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.channelRead0(WebHdfsHandler.java:114)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:52)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:32)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
> at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at 
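
The fix direction is the usual value-object contract: policies that are configured
identically should compare equal and hash equal, so the ConnectionId lookup can
reuse the existing connection. A generic sketch of that contract is below; the
class and field names are illustrative only, not the real RetryPolicy implementation.

{code:java}
import java.util.Objects;

public final class FixedRetryPolicySketch {
  private final int maxRetries;
  private final long sleepMillis;

  public FixedRetryPolicySketch(int maxRetries, long sleepMillis) {
    this.maxRetries = maxRetries;
    this.sleepMillis = sleepMillis;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof FixedRetryPolicySketch)) {
      return false;
    }
    FixedRetryPolicySketch other = (FixedRetryPolicySketch) o;
    // Two policies with the same settings are interchangeable, so any
    // ConnectionId they participate in compares equal as well.
    return maxRetries == other.maxRetries && sleepMillis == other.sleepMillis;
  }

  @Override
  public int hashCode() {
    return Objects.hash(maxRetries, sleepMillis);
  }
}
{code}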

[jira] [Updated] (HADOOP-13786) Add S3A committers for zero-rename commits to S3 endpoints

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13786:
-
Fix Version/s: (was: 3.3.6)

> Add S3A committers for zero-rename commits to S3 endpoints
> --
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-13786-036.patch, HADOOP-13786-037.patch, 
> HADOOP-13786-038.patch, HADOOP-13786-039.patch, 
> HADOOP-13786-HADOOP-13345-001.patch, HADOOP-13786-HADOOP-13345-002.patch, 
> HADOOP-13786-HADOOP-13345-003.patch, HADOOP-13786-HADOOP-13345-004.patch, 
> HADOOP-13786-HADOOP-13345-005.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-007.patch, 
> HADOOP-13786-HADOOP-13345-009.patch, HADOOP-13786-HADOOP-13345-010.patch, 
> HADOOP-13786-HADOOP-13345-011.patch, HADOOP-13786-HADOOP-13345-012.patch, 
> HADOOP-13786-HADOOP-13345-013.patch, HADOOP-13786-HADOOP-13345-015.patch, 
> HADOOP-13786-HADOOP-13345-016.patch, HADOOP-13786-HADOOP-13345-017.patch, 
> HADOOP-13786-HADOOP-13345-018.patch, HADOOP-13786-HADOOP-13345-019.patch, 
> HADOOP-13786-HADOOP-13345-020.patch, HADOOP-13786-HADOOP-13345-021.patch, 
> HADOOP-13786-HADOOP-13345-022.patch, HADOOP-13786-HADOOP-13345-023.patch, 
> HADOOP-13786-HADOOP-13345-024.patch, HADOOP-13786-HADOOP-13345-025.patch, 
> HADOOP-13786-HADOOP-13345-026.patch, HADOOP-13786-HADOOP-13345-027.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-029.patch, HADOOP-13786-HADOOP-13345-030.patch, 
> HADOOP-13786-HADOOP-13345-031.patch, HADOOP-13786-HADOOP-13345-032.patch, 
> HADOOP-13786-HADOOP-13345-033.patch, HADOOP-13786-HADOOP-13345-035.patch, 
> MAPREDUCE-6823-003.patch, cloud-intergration-test-failure.log, 
> objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13692) hadoop-aws should declare explicit dependency on Jackson 2 jars to prevent classpath conflicts.

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13692:
-
Fix Version/s: (was: 3.3.6)

> hadoop-aws should declare explicit dependency on Jackson 2 jars to prevent 
> classpath conflicts.
> ---
>
> Key: HADOOP-13692
> URL: https://issues.apache.org/jira/browse/HADOOP-13692
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13692-branch-2.001.patch
>
>
> If an end user's application has a dependency on hadoop-aws and no other 
> Hadoop artifacts, then it picks up a transitive dependency on Jackson 2.5.3 
> jars through the AWS SDK.  This can cause conflicts at deployment time, 
> because Hadoop has a dependency on version 2.2.3, and the 2 versions are not 
> compatible with one another.  We can prevent this problem by changing 
> hadoop-aws to declare explicit dependencies on the Jackson artifacts, at the 
> version Hadoop wants.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18763) Upgrade aws-java-sdk to 1.12.367+

2023-06-12 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731791#comment-17731791
 ] 

Viraj Jasani commented on HADOOP-18763:
---

This time, without VPN, all tests passed for the prefetch profile as well (the previous 
failures, testParallelRename and testThreadPoolCoolDown, are no longer showing up 
with a full test run).

 
{code:java}
mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch {code}
 

 

> Upgrade aws-java-sdk to 1.12.367+
> -
>
> Key: HADOOP-18763
> URL: https://issues.apache.org/jira/browse/HADOOP-18763
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>
> aws sdk bundle < 1.12.367 uses a vulnerable versions of netty which is 
> pulling in high severity CVE and creating unhappiness in security scans, even 
> if s3a doesn't use that lib. 
> The safe version for netty is netty:4.1.86.Final and this is used by 
> aws-java-adk:1.12.367+



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13413) Update ZooKeeper version to 3.4.9

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-13413.
--
Resolution: Won't Fix

Stale jira; clean up.

> Update ZooKeeper version to 3.4.9
> -
>
> Key: HADOOP-13413
> URL: https://issues.apache.org/jira/browse/HADOOP-13413
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Priority: Major
>
> Just a reminder not to update ZooKeeper's version to 3.4.9 for syncing up 
> with netty's version.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13413) Update ZooKeeper version to 3.4.9

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13413:
-
Fix Version/s: (was: 3.3.6)

> Update ZooKeeper version to 3.4.9
> -
>
> Key: HADOOP-13413
> URL: https://issues.apache.org/jira/browse/HADOOP-13413
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Priority: Major
>
> Just a reminder not to update ZooKeeper's version to 3.4.9 for syncing up 
> with netty's version.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12928) Update netty to 3.10.5.Final to sync with zookeeper

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12928:
-
Fix Version/s: (was: 3.3.6)

> Update netty to 3.10.5.Final to sync with zookeeper
> ---
>
> Key: HADOOP-12928
> URL: https://issues.apache.org/jira/browse/HADOOP-12928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Hendy Irawan
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12928-branch-2.00.patch, 
> HADOOP-12928-branch-2.01.patch, HADOOP-12928-branch-2.02.patch, 
> HADOOP-12928.01.patch, HADOOP-12928.02.patch, HADOOP-12928.03.patch, 
> HDFS-12928.00.patch
>
>
> Update netty to 3.7.1.Final because hadoop-client 2.7.2 depends on zookeeper 
> 3.4.6 which depends on netty 3.7.x. Related to HADOOP-12927
> Pull request: https://github.com/apache/hadoop/pull/85



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12660) TestZKDelegationTokenSecretManager.testMultiNodeOperations failing

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12660:
-
Fix Version/s: (was: 3.3.6)

> TestZKDelegationTokenSecretManager.testMultiNodeOperations failing
> --
>
> Key: HADOOP-12660
> URL: https://issues.apache.org/jira/browse/HADOOP-12660
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha, test
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins Java8
>Reporter: Steve Loughran
>Priority: Major
>
> Test failure
> {code}
> java.lang.AssertionError: Expected InvalidToken
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.security.token.delegation.TestZKDelegationTokenSecretManager.testMultiNodeOperations(TestZKDelegationTokenSecretManager.java:127)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12415) hdfs and nfs builds broken on -missing compile-time dependency on netty

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12415:
-
Fix Version/s: (was: 3.3.6)

> hdfs and nfs builds broken on -missing compile-time dependency on netty
> ---
>
> Key: HADOOP-12415
> URL: https://issues.apache.org/jira/browse/HADOOP-12415
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
> Environment: Bigtop, plain Linux distro of any kind
>Reporter: Konstantin I Boudnik
>Assignee: Tom Zeng
>Priority: Major
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HADOOP-12415.patch
>
>
> As discovered in BIGTOP-2049, {{hadoop-nfs}} module compilation is broken. 
> It looks like HADOOP-11489 is the root cause of it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11086) Upgrade jets3t to 0.9.4

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-11086:
-
Fix Version/s: (was: 3.3.6)

> Upgrade jets3t to 0.9.4
> ---
>
> Key: HADOOP-11086
> URL: https://issues.apache.org/jira/browse/HADOOP-11086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Matteo Bertozzi
>Priority: Minor
> Attachments: HADOOP-11086-branch-2-003.patch, HADOOP-11086-v0.patch, 
> HADOOP-11086.2.patch
>
>
> jets3t 0.9.2 contains a fix for a bug that caused failure of multi-part uploads 
> with server-side encryption.
> http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html
> (it also removes an exception thrown from the RestS3Service constructor which 
> requires removing the try/catch around that code)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11804) Shaded Hadoop client artifacts and minicluster

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-11804:
-
Fix Version/s: (was: 3.3.6)

> Shaded Hadoop client artifacts and minicluster
> --
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, 
> HADOOP-11804.11.patch, HADOOP-11804.12.patch, HADOOP-11804.13.patch, 
> HADOOP-11804.14.patch, HADOOP-11804.2.patch, HADOOP-11804.3.patch, 
> HADOOP-11804.4.patch, HADOOP-11804.5.patch, HADOOP-11804.6.patch, 
> HADOOP-11804.7.patch, HADOOP-11804.8.patch, HADOOP-11804.9.patch, 
> hadoop-11804-client-test.tar.gz
>
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11219) [Umbrella] Upgrade to netty 4

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-11219:
-
Fix Version/s: (was: 3.3.6)

> [Umbrella] Upgrade to netty 4
> -
>
> Key: HADOOP-11219
> URL: https://issues.apache.org/jira/browse/HADOOP-11219
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Major
> Fix For: 3.4.0
>
>
> This is an umbrella jira to track the effort of upgrading to Netty 4.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9991) Fix up Hadoop POMs, roll up JARs to latest versions

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-9991:

Fix Version/s: (was: 3.3.6)

> Fix up Hadoop POMs, roll up JARs to latest versions
> ---
>
> Key: HADOOP-9991
> URL: https://issues.apache.org/jira/browse/HADOOP-9991
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.1-beta, 2.3.0
>Reporter: Steve Loughran
>Priority: Major
> Attachments: hadoop-9991-v1.txt
>
>
> If you try using Hadoop downstream with a classpath shared with HBase and 
> Accumulo, you soon discover how messy the dependencies are.
> Hadoop's side of this problem is
> # not being up to date with some of the external releases of common JARs
> # not locking down/excluding inconsistent versions of artifacts provided down 
> the dependency graph



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9334) Update netty version

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-9334:

Fix Version/s: (was: 3.3.6)

> Update netty version
> 
>
> Key: HADOOP-9334
> URL: https://issues.apache.org/jira/browse/HADOOP-9334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta, 3.0.0-alpha1
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 2.1.0-beta
>
> Attachments: 9334.branch2.v1.patch, 9334.trunk.v1.patch
>
>
> There are newer version available. HBase for example depends on the 3.5.9.
> Latest 3.5 is 3.5.11, there is the 3.6.3 as well.
> While there is no point in trying to have exactly the same version, things 
> are more comfortable if the gap in version is minimal, as the dependency is 
> client side as well (i.e. HBase has to choose a version anyway).
> Attached a patch for the branch 2.
> I haven't executed the unit tests, but HBase works ok with Hadoop on Netty 
> 3.5.9.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9961) versions of a few transitive dependencies diverged between hadoop subprojects

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-9961:

Fix Version/s: (was: 3.3.6)

> versions of a few transitive dependencies diverged between hadoop subprojects
> -
>
> Key: HADOOP-9961
> URL: https://issues.apache.org/jira/browse/HADOOP-9961
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
>Priority: Minor
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9961.patch.txt
>
>
> I've noticed a few divergences between secondary dependencies of the various 
> hadoop subprojects. For example:
> {noformat}
> [ERROR]
> Dependency convergence error for org.apache.commons:commons-compress:1.4.1 
> paths to dependency are:
> +-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
> +-org.apache.avro:avro:1.7.4
>   +-org.apache.commons:commons-compress:1.4.1
> and
> +-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
> +-org.apache.commons:commons-compress:1.4
> {noformat}
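
A minimal sketch of listing the conflicting paths for one artifact; it assumes a checked-out Hadoop source tree and that the command is run from the module showing the divergence (hadoop-client in the report above):

{code:bash}
# -Dverbose keeps the "omitted for conflict" entries in the output, so both the
# 1.4.1 and 1.4 paths from the convergence error above stay visible.
mvn dependency:tree -Dverbose -Dincludes=org.apache.commons:commons-compress
{code}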



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9973) wrong dependencies

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-9973:

Fix Version/s: (was: 3.3.6)

> wrong dependencies
> --
>
> Key: HADOOP-9973
> URL: https://issues.apache.org/jira/browse/HADOOP-9973
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta, 2.1.1-beta
>Reporter: Nicolas Liochon
>Priority: Minor
>
> See HBASE-9557 for the impact: for some of them, it seems they are pushed to 
> the client applications even if they are not used.
> mvn dependency:analyze -pl hadoop-common
> [WARNING] Used undeclared dependencies found:
> [WARNING]com.google.code.findbugs:jsr305:jar:1.3.9:compile
> [WARNING]commons-collections:commons-collections:jar:3.2.1:compile
> [WARNING] Unused declared dependencies found:
> [WARNING]com.sun.jersey:jersey-json:jar:1.9:compile
> [WARNING]tomcat:jasper-compiler:jar:5.5.23:runtime
> [WARNING]tomcat:jasper-runtime:jar:5.5.23:runtime
> [WARNING]javax.servlet.jsp:jsp-api:jar:2.1:runtime
> [WARNING]commons-el:commons-el:jar:1.0:runtime
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.5:runtime
> mvn dependency:analyze -pl hadoop-yarn-client
> [WARNING] Used undeclared dependencies found:
> [WARNING]org.mortbay.jetty:jetty-util:jar:6.1.26:provided
> [WARNING]log4j:log4j:jar:1.2.17:compile
> [WARNING]com.google.guava:guava:jar:11.0.2:provided
> [WARNING]commons-lang:commons-lang:jar:2.5:provided
> [WARNING]commons-logging:commons-logging:jar:1.1.1:provided
> [WARNING]commons-cli:commons-cli:jar:1.2:provided
> [WARNING]
> org.apache.hadoop:hadoop-yarn-server-common:jar:2.1.2-SNAPSHOT:test
> [WARNING] Unused declared dependencies found:
> [WARNING]org.slf4j:slf4j-api:jar:1.7.5:compile
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.5:compile
> [WARNING]com.google.inject.extensions:guice-servlet:jar:3.0:compile
> [WARNING]io.netty:netty:jar:3.6.2.Final:compile
> [WARNING]com.google.protobuf:protobuf-java:jar:2.5.0:compile
> [WARNING]commons-io:commons-io:jar:2.1:compile
> [WARNING]org.apache.hadoop:hadoop-hdfs:jar:2.1.2-SNAPSHOT:test
> [WARNING]com.google.inject:guice:jar:3.0:compile
> [WARNING]
> com.sun.jersey.jersey-test-framework:jersey-test-framework-core:jar:1.9:test
> [WARNING]
> com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:jar:1.9:compile
> [WARNING]com.sun.jersey:jersey-server:jar:1.9:compile
> [WARNING]com.sun.jersey:jersey-json:jar:1.9:compile
> [WARNING]com.sun.jersey.contribs:jersey-guice:jar:1.9:compile



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18760) 3.3.6 Release NOTICE and LICENSE file update

2023-06-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731790#comment-17731790
 ] 

ASF GitHub Bot commented on HADOOP-18760:
-

hadoop-yetus commented on PR #5740:
URL: https://github.com/apache/hadoop/pull/5740#issuecomment-1588144960

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  shadedclient  |  35m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  24m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  65m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5740/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5740 |
   | Optional Tests | dupname asflicense codespell detsecrets shellcheck 
shelldocs |
   | uname | Linux 238c186c0144 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 94eb3913aa9b067af256ee2035c1d7c9b7d25351 |
   | Max. process+thread count | 687 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5740/1/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> 3.3.6 Release NOTICE and LICENSE file update
> 
>
> Key: HADOOP-18760
> URL: https://issues.apache.org/jira/browse/HADOOP-18760
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.6
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
>
> As far as I can tell looking at hadoop-project/pom.xml, the only difference 
> between 3.3.5 and 3.3.6 from a dependency point of view is mysql connector 
> (HADOOP-18535) and derby (HADOOP-18535, HADOOP-18693).
> Json-smart, snakeyaml, jetty and jettison are updated in LICENSE-binary 
> already. grizzly was used in test scope only, so its removal doesn't matter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-6528) Jetty returns -1 resulting in Hadoop masters / slaves to fail during startup.

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-6528:

Fix Version/s: (was: 3.3.6)

> Jetty returns -1 resulting in Hadoop masters / slaves to fail during startup.
> -
>
> Key: HADOOP-6528
> URL: https://issues.apache.org/jira/browse/HADOOP-6528
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hemanth Yamijala
>Priority: Major
> Attachments: jetty-server-failure.log
>
>
> A recent test failure on Hudson seems to indicate that Jetty's 
> Server.getConnectors()[0].getLocalPort() is returning -1 in the 
> HttpServer.getPort() method. When this happens, Hadoop masters / slaves that 
> use Jetty fail to startup correctly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9905) remove dependency of zookeeper for hadoop-client

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-9905:

Fix Version/s: (was: 3.3.6)

> remove dependency of zookeeper for hadoop-client
> 
>
> Key: HADOOP-9905
> URL: https://issues.apache.org/jira/browse/HADOOP-9905
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta, 2.0.6-alpha, 3.0.0-alpha1
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-9905-02.patch, HADOOP-9905.patch
>
>
> The zookeeper dependency was added for ZKFC, which will not be used by clients.
> Better to remove the zookeeper jar dependency from hadoop-client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7560) Make hadoop-common a POM module with sub-modules (common & alfredo)

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-7560:

Fix Version/s: (was: 3.3.6)

> Make hadoop-common a POM module with sub-modules (common & alfredo)
> ---
>
> Key: HADOOP-7560
> URL: https://issues.apache.org/jira/browse/HADOOP-7560
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Major
> Fix For: 0.23.0
>
> Attachments: HADOOP-7560v1.patch, HADOOP-7560v1.sh, 
> HADOOP-7560v2.patch, HADOOP-7560v2.sh
>
>
> Currently hadoop-common is a JAR module, thus it cannot aggregate sub-modules.
> Changing it to a POM module makes it an aggregator module; all the code 
> under hadoop-common must be moved to a sub-module.
> I.e.:
> mkdir hadoop-common-project
> mv hadoop-common hadoop-common-project
> mv hadoop-alfredo hadoop-common-project
> hadoop-common-project/pom.xml is a POM module that aggregates common & alfredo



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7538) Dependencies should be revisited

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-7538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-7538:

Fix Version/s: (was: 3.3.6)

> Dependencies should be revisited
> 
>
> Key: HADOOP-7538
> URL: https://issues.apache.org/jira/browse/HADOOP-7538
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Major
>
> Some transitive dependencies seem not to be used.
> As a follow-up to HADOOP-7934 and HADOOP-7935 we should purge the unused 
> dependencies.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5740: HADOOP-18760. 3.3.6 Release NOTICE and LICENSE file update.

2023-06-12 Thread via GitHub


hadoop-yetus commented on PR #5740:
URL: https://github.com/apache/hadoop/pull/5740#issuecomment-1588144960

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  shadedclient  |  35m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  24m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  65m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5740/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5740 |
   | Optional Tests | dupname asflicense codespell detsecrets shellcheck 
shelldocs |
   | uname | Linux 238c186c0144 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 94eb3913aa9b067af256ee2035c1d7c9b7d25351 |
   | Max. process+thread count | 687 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5740/1/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14043) Shade netty 4 dependency in hadoop-hdfs

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14043:
-
Fix Version/s: (was: 3.3.6)

> Shade netty 4 dependency in hadoop-hdfs
> ---
>
> Key: HADOOP-14043
> URL: https://issues.apache.org/jira/browse/HADOOP-14043
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ted Yu
>Priority: Critical
>
> During review of HADOOP-13866, [~andrew.wang] mentioned considering shading 
> netty before putting the fix into branch-2.
> This would give users a better experience when upgrading Hadoop.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-8495) Update Netty to avoid leaking file descriptors during shuffle

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-8495:

Fix Version/s: (was: 3.3.6)

> Update Netty to avoid leaking file descriptors during shuffle
> -
>
> Key: HADOOP-8495
> URL: https://issues.apache.org/jira/browse/HADOOP-8495
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Jason Darrell Lowe
>Assignee: Jason Darrell Lowe
>Priority: Critical
> Fix For: 0.23.3, 2.0.2-alpha
>
> Attachments: HADOOP-8495.patch
>
>
> Netty 3.2.3.Final has a known bug where writes to a closed channel do not 
> have their futures invoked.  See 
> [Netty-374|https://issues.jboss.org/browse/NETTY-374].  This can lead to file 
> descriptor leaks during shuffle as noted in MAPREDUCE-4298.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10995) HBase cannot run correctly with Hadoop trunk

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-10995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-10995:
-
Fix Version/s: (was: 3.3.6)

> HBase cannot run correctly with Hadoop trunk
> 
>
> Key: HADOOP-10995
> URL: https://issues.apache.org/jira/browse/HADOOP-10995
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
>Priority: Critical
> Attachments: HADOOP-10995.1.patch, YARN-2032.dependency.patch
>
>
> Several incompatible changes that happened on trunk but not on branch-2 have 
> broken compatibility for HBase:
> HADOOP-10348
> HADOOP-8124
> HADOOP-10255
> In general, HttpServer and Syncable.sync have been missed.
> It blocks YARN-2032, which makes the timeline server support the HBase store.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16381) The JSON License is included in binary tarball via azure-documentdb:1.16.2

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16381:
-
Fix Version/s: (was: 3.3.0)

> The JSON License is included in binary tarball via azure-documentdb:1.16.2
> --
>
> Key: HADOOP-16381
> URL: https://issues.apache.org/jira/browse/HADOOP-16381
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Sushil Ks
>Priority: Blocker
> Fix For: 3.3.6
>
> Attachments: HADOOP-16381.001.patch, HADOOP-16381.002.patch
>
>
> {noformat}
> $ mvn dependency:tree
> (snip)
> [INFO] +- com.microsoft.azure:azure-documentdb:jar:1.16.2:compile
> [INFO] |  +- com.fasterxml.uuid:java-uuid-generator:jar:3.1.4:compile
> [INFO] |  +- org.json:json:jar:20140107:compile
> [INFO] |  +- org.apache.httpcomponents:httpcore:jar:4.4.10:compile
> [INFO] |  \- joda-time:joda-time:jar:2.9.9:compile
> {noformat}
> org.json:json is JSON Licensed and it must be removed.
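
A minimal sketch of checking whether the JSON-licensed jar is still on the classpath; it assumes the command is run from whichever module declares the azure-documentdb dependency:

{code:bash}
# Any org.json:json entry in the output means the JSON-licensed artifact is
# still being pulled in transitively and the exclusion has not taken effect.
mvn dependency:tree -Dincludes=org.json:json
{code}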



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13050) Upgrade to AWS SDK 1.11.45

2023-06-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13050:
-
Fix Version/s: (was: 3.3.6)

> Upgrade to AWS SDK 1.11.45
> --
>
> Key: HADOOP-13050
> URL: https://issues.apache.org/jira/browse/HADOOP-13050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13050-001.patch, HADOOP-13050-002.patch, 
> HADOOP-13050-branch-2-003.patch, HADOOP-13050-branch-2-004.patch, 
> HADOOP-13050-branch-2.002.patch, HADOOP-13050-branch-2.003.patch
>
>
> HADOOP-13044 highlights that AWS SDK 10.6, shipping in Hadoop 2.7+, doesn't 
> work on OpenJDK >= 8u60, because a change in the JDK broke the version of 
> Joda Time that AWS uses.
> Fix: update the SDK. Though that implies updating http components: 
> HADOOP-12767.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18742) AWS v2 SDK: stabilise dependencies with rest of hadoop libraries

2023-06-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731787#comment-17731787
 ] 

ASF GitHub Bot commented on HADOOP-18742:
-

hadoop-yetus commented on PR #5739:
URL: https://github.com/apache/hadoop/pull/5739#issuecomment-1588133461

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ feature-HADOOP-18073-s3a-sdk-upgrade Compile Tests _ |
   | +0 :ok: |  mvndep  |  20m 35s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m  0s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  compile  |  18m 36s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  17m  2s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  shadedclient  | 108m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  17m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |  16m 57s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  shadedclient  |  30m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 30s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 50s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 183m 49s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5739/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5739 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 4e404fda821f 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | feature-HADOOP-18073-s3a-sdk-upgrade / 
7978e7056500794402e94e42e36e64ff0e138c5c |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5739/1/testReport/ |
   | Max. process+thread count | 540 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5739: HADOOP-18742. AWS v2 SDK: stabilise dependencies with rest of hadoop libraries

2023-06-12 Thread via GitHub


hadoop-yetus commented on PR #5739:
URL: https://github.com/apache/hadoop/pull/5739#issuecomment-1588133461

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ feature-HADOOP-18073-s3a-sdk-upgrade Compile Tests _ |
   | +0 :ok: |  mvndep  |  20m 35s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m  0s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  compile  |  18m 36s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  17m  2s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  shadedclient  | 108m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  17m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |  16m 57s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  shadedclient  |  30m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 30s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 50s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 183m 49s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5739/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5739 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 4e404fda821f 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | feature-HADOOP-18073-s3a-sdk-upgrade / 
7978e7056500794402e94e42e36e64ff0e138c5c |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5739/1/testReport/ |
   | Max. process+thread count | 540 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-tools/hadoop-aws U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5739/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | 

[jira] [Commented] (HADOOP-18763) Upgrade aws-java-sdk to 1.12.367+

2023-06-12 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731782#comment-17731782
 ] 

Viraj Jasani commented on HADOOP-18763:
---

mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale

 

Results are quite good: no test failures (except for the known failure of 
testRecursiveRootListing, which passes when run individually).
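
A minimal sketch of re-running just that flaky case on its own; it assumes testRecursiveRootListing runs via ITestS3AContractRootDir (an assumption, not stated above) and that the S3A test bucket and credentials are already configured:

{code:bash}
# Runs the single integration test class through failsafe; unit tests still run
# unless the usual surefire filters are added on top.
mvn verify -Dscale -Dit.test=ITestS3AContractRootDir
{code}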

> Upgrade aws-java-sdk to 1.12.367+
> -
>
> Key: HADOOP-18763
> URL: https://issues.apache.org/jira/browse/HADOOP-18763
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>
> aws sdk bundle < 1.12.367 uses a vulnerable version of netty which pulls in a 
> high severity CVE and creates unhappiness in security scans, even 
> if s3a doesn't use that lib. 
> The safe version for netty is netty:4.1.86.Final and this is used by 
> aws-java-sdk:1.12.367+



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18769) Upgrade hadoop3 docker scripts to use 3.3.6

2023-06-12 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-18769:


 Summary: Upgrade hadoop3 docker scripts to use 3.3.6
 Key: HADOOP-18769
 URL: https://issues.apache.org/jira/browse/HADOOP-18769
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Wei-Chiu Chuang


Similar to what was done in HADOOP-18681



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18760) 3.3.6 Release NOTICE and LICENSE file update

2023-06-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731769#comment-17731769
 ] 

ASF GitHub Bot commented on HADOOP-18760:
-

jojochuang opened a new pull request, #5740:
URL: https://github.com/apache/hadoop/pull/5740

   ### Description of PR
   Update Netty version in LICENSE-binary.
   AWS SDK: if we end up updating it, we will include it in the next iteration.
   MySQL connector is not included; this dependency will be removed by 
[HADOOP-18761](https://issues.apache.org/jira/browse/HADOOP-18761)
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> 3.3.6 Release NOTICE and LICENSE file update
> 
>
> Key: HADOOP-18760
> URL: https://issues.apache.org/jira/browse/HADOOP-18760
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.6
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>
> As far as I can tell looking at hadoop-project/pom.xml, the only difference 
> between 3.3.5 and 3.3.6 from a dependency point of view is mysql connector 
> (HADOOP-18535) and derby (HADOOP-18535, HADOOP-18693).
> Json-smart, snakeyaml, jetty and jettison are updated in LICENSE-binary 
> already. grizzly was used in test scope only, so its removal doesn't matter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18760) 3.3.6 Release NOTICE and LICENSE file update

2023-06-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18760:

Labels: pull-request-available  (was: )

> 3.3.6 Release NOTICE and LICENSE file update
> 
>
> Key: HADOOP-18760
> URL: https://issues.apache.org/jira/browse/HADOOP-18760
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.6
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
>
> As far as I can tell looking at hadoop-project/pom.xml, the only difference 
> between 3.3.5 and 3.3.6 from a dependency point of view is mysql connector 
> (HADOOP-18535) and derby (HADOOP-18535, HADOOP-18693).
> Json-smart, snakeyaml, jetty and jettison are updated in LICENSE-binary 
> already. grizzly was used in test scope only, so its removal doesn't matter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang opened a new pull request, #5740: HADOOP-18760. 3.3.6 Release NOTICE and LICENSE file update.

2023-06-12 Thread via GitHub


jojochuang opened a new pull request, #5740:
URL: https://github.com/apache/hadoop/pull/5740

   ### Description of PR
   Update Netty version in LICENSE-binary.
   AWS SDK: if we end up updating it, we will include it in the next iteration.
   MySQL connector is not included; this dependency will be removed by 
[HADOOP-18761](https://issues.apache.org/jira/browse/HADOOP-18761)
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18763) Upgrade aws-java-sdk to 1.12.367+

2023-06-12 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731766#comment-17731766
 ] 

Viraj Jasani commented on HADOOP-18763:
---

us-west-2:

 

mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch

 

errors so far:
{code:java}
[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 
1,920.089 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps
[ERROR] 
testParallelRename(org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps)  Time 
elapsed: 960.003 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 960000 
milliseconds
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:537)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:88)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:628)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:428)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:77)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps.parallelRenames(ITestS3AConcurrentOps.java:112)
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps.testParallelRename(ITestS3AConcurrentOps.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)


[ERROR] 
testThreadPoolCoolDown(org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps)  
Time elapsed: 960.005 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 960000 
milliseconds
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:537)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:88)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:628)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:428)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:77)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps.parallelRenames(ITestS3AConcurrentOps.java:112)
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps.testThreadPoolCoolDown(ITestS3AConcurrentOps.java:189)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)

[jira] [Commented] (HADOOP-18765) Release 3.3.6

2023-06-12 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731754#comment-17731754
 ] 

Wei-Chiu Chuang commented on HADOOP-18765:
--

I can confirm the release RC artifacts directory has the CycloneDX SBOM files.
For example, 
https://repository.apache.org/content/repositories/orgapachehadoop-1377/org/apache/hadoop/hadoop-auth/3.3.6/

If this is all we need, we are good. Otherwise it would be a lot of hassle 
trying to aggregate all the SBOM files (we have many jars, and each pair of SBOM 
files covers only one jar).
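
A minimal sketch of pulling the per-module SBOMs down for an aggregate view; it assumes the files keep the <artifact>-<version>-cyclonedx.xml/.json naming visible in the hadoop-auth example above and that the staging directory listing is browsable:

{code:bash}
# Mirror only the CycloneDX files from the staging repo into the current
# directory: -np stops wget from climbing above the hadoop group, -nd flattens
# the directory layout, -A keeps only the SBOM files.
wget -r -np -nd -A '*-cyclonedx.xml,*-cyclonedx.json' \
  https://repository.apache.org/content/repositories/orgapachehadoop-1377/org/apache/hadoop/
{code}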

> Release 3.3.6
> -
>
> Key: HADOOP-18765
> URL: https://issues.apache.org/jira/browse/HADOOP-18765
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.5
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> * Move out all incomplete jiras
> * Branching
> * Unit test and verifications
> * License check
> * Produce signed artifacts and source tarball.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18756) CachingBlockManager to use AtomicBoolean for closed flag

2023-06-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17731736#comment-17731736
 ] 

ASF GitHub Bot commented on HADOOP-18756:
-

hadoop-yetus commented on PR #5718:
URL: https://github.com/apache/hadoop/pull/5718#issuecomment-1587884155

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  16m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  24m 41s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  16m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |  16m  4s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 47s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m  2s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 13s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 192m 38s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5718/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5718 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 37a1466a8384 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 325a03094f89826ece2420b921693f626640419f |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5718/4/testReport/ |
   | Max. process+thread count | 1302 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
