[GitHub] [hadoop] sravanigadey commented on a diff in pull request #4383: HADOOP-18258. Merging of S3A Audit Logs

2022-06-22 Thread GitBox


sravanigadey commented on code in PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#discussion_r904609878


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/S3AAuditLogMerger.java:
##
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.io.PrintWriter;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Merge all the audit log files present in a directory
+ * into a single audit log file.
+ */
+public class S3AAuditLogMerger {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(S3AAuditLogMerger.class);
+
+  /**
+   * Merge all the audit log files from a directory into a single audit log file.
+   * @param auditLogsDirectoryPath path where audit log files are present.
+   * @throws IOException on any failure.
+   */
+  public void mergeFiles(String auditLogsDirectoryPath) throws IOException {
+    File auditLogFilesDirectory = new File(auditLogsDirectoryPath);
+    String[] auditLogFileNames = auditLogFilesDirectory.list();
+
+    // Merge the audit log files present in the directory into a single audit log file
+    if (auditLogFileNames != null && auditLogFileNames.length != 0) {
+      File auditLogFile = new File("AuditLogFile");

Review Comment:
   Do you mean listing the files in the S3 path and iterating over them?
   Can you please elaborate?
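
   For reference, here is a minimal local-filesystem sketch of the merge step
under discussion. It assumes plain java.nio and a local directory; the class
and method names below are illustrative stand-ins, not code from the PR, and
the actual patch may instead list and iterate over objects under an S3 path.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class AuditLogMergeSketch {

  /**
   * Concatenate every regular file in the given directory into one merged
   * file, in sorted filename order. A hypothetical stand-in for
   * S3AAuditLogMerger#mergeFiles.
   */
  public static void merge(Path logDirectory, Path mergedFile) throws IOException {
    try (Stream<Path> entries = Files.list(logDirectory)) {
      List<Path> logFiles = entries
          .filter(Files::isRegularFile)
          .sorted()
          .collect(Collectors.toList());
      for (Path logFile : logFiles) {
        // Append each source file's lines to the merged output file.
        Files.write(mergedFile, Files.readAllLines(logFile),
            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
      }
    }
  }

  public static void main(String[] args) throws IOException {
    merge(Paths.get("audit-logs"), Paths.get("AuditLogFile"));
  }
}
```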
   






[jira] [Work logged] (HADOOP-18310) Add option and make 400 bad request retryable

2022-06-22 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18310?focusedWorklogId=784031&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784031 ]

ASF GitHub Bot logged work on HADOOP-18310:
---

Author: ASF GitHub Bot
Created on: 23/Jun/22 04:17
Start Date: 23/Jun/22 04:17
Worklog Time Spent: 10m 
  Work Description: taklwu commented on PR #4483:
URL: https://github.com/apache/hadoop/pull/4483#issuecomment-1163909148

   @steveloughran I should have provided the results of the integration-test
run in the description. They're not perfect, but we can discuss how to move
forward.
   
   




Issue Time Tracking
---

Worklog Id: (was: 784031)
Time Spent: 1h 20m  (was: 1h 10m)

> Add option and make 400 bad request retryable
> -
>
> Key: HADOOP-18310
> URL: https://issues.apache.org/jira/browse/HADOOP-18310
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.4
>Reporter: Tak-Lon (Stephen) Wu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When one is using a customized credential provider via
> fs.s3a.aws.credentials.provider, e.g.
> org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider, and the credential
> supplied by this pluggable provider has expired, requests fail with an error
> code of 400 as a bad request exception.
> Here, the current S3ARetryPolicy fails immediately and does not retry at the
> S3A level.
> A recent use case in HBase found that this exception could cause a Region
> Server to be abandoned immediately, without retry, when the file system is
> opening a file or S3AInputStream is reopening one. Especially in the
> S3AInputStream cases, we cannot find a good way to retry outside of the file
> system semantics (because an ongoing stream that fails is currently
> considered to be in an irreparable state), so we came up with this optional
> flag for retrying in S3A.
> {code}
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The provided
> token has expired. (Service: Amazon S3; Status Code: 400; Error Code:
> ExpiredToken; Request ID: XYZ; S3 Extended Request ID: ABC; Proxy: null), S3
> Extended Request ID: 123
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1862)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1415)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1384)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1154)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:811)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:695)
>   at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:559)
>   at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:539)
>   at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5453)
>   at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5400)
>   at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1524)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem$InputStreamCallbacksImpl.getObject(S3AFileSystem.java:1506)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$reopen$0(S3AInputStream.java:217)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:117)
>   ... 35 more
> {code}






[GitHub] [hadoop] taklwu commented on pull request #4483: HADOOP-18310 Add option and make 400 bad request retryable

2022-06-22 Thread GitBox


taklwu commented on PR #4483:
URL: https://github.com/apache/hadoop/pull/4483#issuecomment-1163909148

   @steveloughran I should have provided the results of the integration-test
run in the description. They're not perfect, but we can discuss how to move
forward.
   
   





[GitHub] [hadoop] hadoop-yetus commented on pull request #4492: YARN-9822.TimelineCollectorWebService#putEntities blocked when ATSV2 HBase is down

2022-06-22 Thread GitBox


hadoop-yetus commented on PR #4492:
URL: https://github.com/apache/hadoop/pull/4492#issuecomment-1163908973

   :confetti_ball: **+1 overall**
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 18s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 59s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   8m 53s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   2m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   5m 15s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 54s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   7m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 35s |  |  branch has no errors when building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  7s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   9m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 49s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   8m 49s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   2m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   4m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m 23s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 15s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 44s |  |  patch has no errors when building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 42s |  |  hadoop-yarn-api in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 19s |  |  hadoop-yarn-server-timelineservice in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 17s |  |  hadoop-yarn-server-timelineservice-hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 32s |  |  hadoop-yarn-server-timelineservice-documentstore in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 20s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 185m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4492/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4492 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 78e2ce2fad4d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 62f7498391c145d082be7f8bf4510a3338c51190 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4492/1/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 

[jira] [Work logged] (HADOOP-18310) Add option and make 400 bad request retryable

2022-06-22 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18310?focusedWorklogId=784027&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784027 ]

ASF GitHub Bot logged work on HADOOP-18310:
---

Author: ASF GitHub Bot
Created on: 23/Jun/22 03:48
Start Date: 23/Jun/22 03:48
Worklog Time Spent: 10m 
  Work Description: taklwu commented on code in PR #4483:
URL: https://github.com/apache/hadoop/pull/4483#discussion_r904491406


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1177,4 +1177,8 @@ private Constants() {
    */
   public static final String FS_S3A_CREATE_HEADER = "fs.s3a.create.header";
 
+  public static final String FAIL_ON_AWS_BAD_REQUEST =
+      "fs.s3a.retry.failOnAwsBadRequest";

Review Comment:
   ack and thanks, I will update it soon.
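
   For context, a hedged sketch of how the new option might be declared and
read; the default-value constant and the helper method below are illustrative
assumptions, not code from the PR.

```java
import org.apache.hadoop.conf.Configuration;

final class BadRequestOptionSketch {

  // Key taken from the PR diff; the default value below is an assumption.
  static final String FAIL_ON_AWS_BAD_REQUEST =
      "fs.s3a.retry.failOnAwsBadRequest";
  static final boolean DEFAULT_FAIL_ON_AWS_BAD_REQUEST = true;

  /** Read the flag the way S3A boolean options are typically read. */
  static boolean failOnBadRequest(Configuration conf) {
    return conf.getBoolean(FAIL_ON_AWS_BAD_REQUEST,
        DEFAULT_FAIL_ON_AWS_BAD_REQUEST);
  }
}
```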





Issue Time Tracking
---

Worklog Id: (was: 784027)
Time Spent: 1h 10m  (was: 1h)

> Add option and make 400 bad request retryable
> -
>
> Key: HADOOP-18310
> URL: https://issues.apache.org/jira/browse/HADOOP-18310
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.4
>Reporter: Tak-Lon (Stephen) Wu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When one is using a customized credential provider via
> fs.s3a.aws.credentials.provider, e.g.
> org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider, and the credential
> supplied by this pluggable provider has expired, requests fail with an error
> code of 400 as a bad request exception.
> Here, the current S3ARetryPolicy fails immediately and does not retry at the
> S3A level.
> A recent use case in HBase found that this exception could cause a Region
> Server to be abandoned immediately, without retry, when the file system is
> opening a file or S3AInputStream is reopening one. Especially in the
> S3AInputStream cases, we cannot find a good way to retry outside of the file
> system semantics (because an ongoing stream that fails is currently
> considered to be in an irreparable state), so we came up with this optional
> flag for retrying in S3A.
> {code}
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The provided
> token has expired. (Service: Amazon S3; Status Code: 400; Error Code:
> ExpiredToken; Request ID: XYZ; S3 Extended Request ID: ABC; Proxy: null), S3
> Extended Request ID: 123
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1862)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1415)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1384)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1154)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:811)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:695)
>   at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:559)
>   at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:539)
>   at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5453)
>   at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5400)
>   at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1524)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem$InputStreamCallbacksImpl.getObject(S3AFileSystem.java:1506)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$reopen$0(S3AInputStream.java:217)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:117)
>   ... 35 more
> {code}






[jira] [Work logged] (HADOOP-18310) Add option and make 400 bad request retryable

2022-06-22 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18310?focusedWorklogId=784026&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784026 ]

ASF GitHub Bot logged work on HADOOP-18310:
---

Author: ASF GitHub Bot
Created on: 23/Jun/22 03:48
Start Date: 23/Jun/22 03:48
Worklog Time Spent: 10m 
  Work Description: taklwu commented on code in PR #4483:
URL: https://github.com/apache/hadoop/pull/4483#discussion_r904491346


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java:
##
@@ -214,7 +214,10 @@ protected Map<Class<? extends Exception>, RetryPolicy> createExceptionMap() {
 
     // policy on a 400/bad request still ambiguous.
     // Treated as an immediate failure
-    policyMap.put(AWSBadRequestException.class, fail);
+    RetryPolicy awsBadRequestExceptionRetryPolicy =

Review Comment:
   Correct me if I'm wrong, but before our change, a response surfaced as
`AWSBadRequestException` in fact comes back with an HTTP 400 error code. That
is different from the other network failures to which the
`fail`/`RetryPolicies.TRY_ONCE_THEN_FAIL` policy has been applied.
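
   To make the proposal concrete, here is a hedged sketch of how such a flag
could select the policy for `AWSBadRequestException`; the retry count and the
sleep interval are illustrative assumptions, not values from the PR.

```java
import java.util.Map;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.s3a.AWSBadRequestException;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

final class BadRequestRetrySketch {

  /** Choose fail-fast or a bounded retry, based on the optional flag. */
  static RetryPolicy awsBadRequestPolicy(Configuration conf) {
    boolean failFast = conf.getBoolean("fs.s3a.retry.failOnAwsBadRequest", true);
    return failFast
        // Current behaviour: treat the 400 as an immediate failure.
        ? RetryPolicies.TRY_ONCE_THEN_FAIL
        // Opt-in behaviour: retry a few times with a fixed sleep.
        : RetryPolicies.retryUpToMaximumCountWithFixedSleep(
            3, 500, TimeUnit.MILLISECONDS);
  }

  static void install(Map<Class<? extends Exception>, RetryPolicy> policyMap,
      Configuration conf) {
    policyMap.put(AWSBadRequestException.class, awsBadRequestPolicy(conf));
  }
}
```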





Issue Time Tracking
---

Worklog Id: (was: 784026)
Time Spent: 1h  (was: 50m)

> Add option and make 400 bad request retryable
> -
>
> Key: HADOOP-18310
> URL: https://issues.apache.org/jira/browse/HADOOP-18310
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.4
>Reporter: Tak-Lon (Stephen) Wu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When one is using a customized credential provider via
> fs.s3a.aws.credentials.provider, e.g.
> org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider, and the credential
> supplied by this pluggable provider has expired, requests fail with an error
> code of 400 as a bad request exception.
> Here, the current S3ARetryPolicy fails immediately and does not retry at the
> S3A level.
> A recent use case in HBase found that this exception could cause a Region
> Server to be abandoned immediately, without retry, when the file system is
> opening a file or S3AInputStream is reopening one. Especially in the
> S3AInputStream cases, we cannot find a good way to retry outside of the file
> system semantics (because an ongoing stream that fails is currently
> considered to be in an irreparable state), so we came up with this optional
> flag for retrying in S3A.
> {code}
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The provided
> token has expired. (Service: Amazon S3; Status Code: 400; Error Code:
> ExpiredToken; Request ID: XYZ; S3 Extended Request ID: ABC; Proxy: null), S3
> Extended Request ID: 123
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1862)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1415)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1384)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1154)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:811)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:695)
>   at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:559)
>   at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:539)
>   at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5453)
>   at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5400)
>   at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1524)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem$InputStreamCallbacksImpl.getObject(S3AFileSystem.java:1506)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$reopen$0(S3AInputStream.java:217)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:117)
>   ... 35 more
> {code}






[GitHub] [hadoop] taklwu commented on a diff in pull request #4483: HADOOP-18310 Add option and make 400 bad request retryable

2022-06-22 Thread GitBox


taklwu commented on code in PR #4483:
URL: https://github.com/apache/hadoop/pull/4483#discussion_r904491406


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1177,4 +1177,8 @@ private Constants() {
    */
   public static final String FS_S3A_CREATE_HEADER = "fs.s3a.create.header";
 
+  public static final String FAIL_ON_AWS_BAD_REQUEST =
+      "fs.s3a.retry.failOnAwsBadRequest";

Review Comment:
   ack and thanks, I will update it soon.






[GitHub] [hadoop] taklwu commented on a diff in pull request #4483: HADOOP-18310 Add option and make 400 bad request retryable

2022-06-22 Thread GitBox


taklwu commented on code in PR #4483:
URL: https://github.com/apache/hadoop/pull/4483#discussion_r904491346


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java:
##
@@ -214,7 +214,10 @@ protected Map<Class<? extends Exception>, RetryPolicy> createExceptionMap() {
 
     // policy on a 400/bad request still ambiguous.
     // Treated as an immediate failure
-    policyMap.put(AWSBadRequestException.class, fail);
+    RetryPolicy awsBadRequestExceptionRetryPolicy =

Review Comment:
   Correct me if I'm wrong, but before our change, a response surfaced as
`AWSBadRequestException` in fact comes back with an HTTP 400 error code. That
is different from the other network failures to which the
`fail`/`RetryPolicies.TRY_ONCE_THEN_FAIL` policy has been applied.






[jira] [Work logged] (HADOOP-18297) Upgrade dependency-check-maven to 7.1.1

2022-06-22 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18297?focusedWorklogId=784021&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784021 ]

ASF GitHub Bot logged work on HADOOP-18297:
---

Author: ASF GitHub Bot
Created on: 23/Jun/22 02:54
Start Date: 23/Jun/22 02:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4449:
URL: https://github.com/apache/hadoop/pull/4449#issuecomment-1163866633

   :broken_heart: **-1 overall**
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  40m  7s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/3/artifact/out/branch-mvninstall-root.txt) |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |  24m 51s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  21m 30s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  mvnsite  |  19m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 34s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 31s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  shadedclient  | 145m  5s |  |  branch has no errors when building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  24m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 34s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m  1s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  23m  1s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  mvnsite  |  19m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   8m 32s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 22s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  shadedclient  |  60m 24s |  |  patch has no errors when building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 1074m 15s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/3/artifact/out/patch-unit-root.txt) |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   2m 16s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 1357m  2s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.mapred.TestLocalDistributedCacheManager |
   |   | hadoop.yarn.server.router.webapp.TestRouterWebServicesREST |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4449 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 1e109e1e281f 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1e12e7bdd13f9d1a5ecdc89a88edd1156089fa0a |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4449: HADOOP-18297. Upgrade dependency-check-maven to 7.1.1

2022-06-22 Thread GitBox


hadoop-yetus commented on PR #4449:
URL: https://github.com/apache/hadoop/pull/4449#issuecomment-1163866633

   :broken_heart: **-1 overall**
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  40m  7s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/3/artifact/out/branch-mvninstall-root.txt) |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |  24m 51s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  21m 30s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  mvnsite  |  19m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 34s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 31s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  shadedclient  | 145m  5s |  |  branch has no errors when building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  24m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 34s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m  1s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  23m  1s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  mvnsite  |  19m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   8m 32s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 22s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  shadedclient  |  60m 24s |  |  patch has no errors when building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 1074m 15s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/3/artifact/out/patch-unit-root.txt) |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   2m 16s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 1357m  2s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.mapred.TestLocalDistributedCacheManager |
   |   | hadoop.yarn.server.router.webapp.TestRouterWebServicesREST |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4449 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 1e109e1e281f 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1e12e7bdd13f9d1a5ecdc89a88edd1156089fa0a |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/3/testReport/ |
   | Max. process+thread count | 2172 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/3/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 

[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2022-06-22 Thread Duo Zhang (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17557765#comment-17557765 ]

Duo Zhang commented on HADOOP-16206:


The work is not easy. I haven't found enough time to finish it yet.

As for the security issues, Hadoop 3.3.2 and 3.3.3 have already replaced all
the log4j dependencies with reload4j, which should be enough to fix them. You
could try upgrading to one of these versions.

Thanks.



> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove the log4j1 dependency and add the log4j2 dependency.






[GitHub] [hadoop] ashutoshcipher opened a new pull request, #4492: YARN-9822.TimelineCollectorWebService#putEntities blocked when ATSV2 HBase is down

2022-06-22 Thread GitBox


ashutoshcipher opened a new pull request, #4492:
URL: https://github.com/apache/hadoop/pull/4492

   ### Description of PR
   TimelineCollectorWebService#putEntities blocked when ATSV2 HBase is down
   
   JIRA: YARN-9822
   
   ### How was this patch tested?
   Unit test is added
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [X] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[jira] [Work logged] (HADOOP-18311) Upgrade dependencies to address several CVEs

2022-06-22 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18311?focusedWorklogId=784013&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784013 ]

ASF GitHub Bot logged work on HADOOP-18311:
---

Author: ASF GitHub Bot
Created on: 23/Jun/22 01:08
Start Date: 23/Jun/22 01:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4491:
URL: https://github.com/apache/hadoop/pull/4491#issuecomment-1163815041

   :broken_heart: **-1 overall**
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ branch-3.3.4 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m 58s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m  3s |  |  branch-3.3.4 passed  |
   | +1 :green_heart: |  compile  |  17m 55s |  |  branch-3.3.4 passed  |
   | +1 :green_heart: |  mvnsite  |   8m 23s |  |  branch-3.3.4 passed  |
   | +1 :green_heart: |  javadoc  |   7m  4s |  |  branch-3.3.4 passed  |
   | +1 :green_heart: |  shadedclient  |  94m  5s |  |  branch has no errors when building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 18s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 14s |  |  the patch passed  |
   | -1 :x: |  javac  |  17m 14s | [/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4491/1/artifact/out/results-compile-javac-root.txt) |  root generated 3 new + 1969 unchanged - 0 fixed = 1972 total (was 1969)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  mvnsite  |   7m 20s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  8s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   6m 56s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 35s |  |  patch has no errors when building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  6s |  |  hadoop-project in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 17s |  |  hadoop-yarn-server-timelineservice-hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 21s |  |  hadoop-yarn-server-timelineservice-hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 10s |  |  hadoop-yarn-server-timelineservice-hbase-server-1 in the patch passed.  |
   | +1 :green_heart: |  unit  |  14m  1s |  |  hadoop-yarn-server-timelineservice-hbase-tests in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m  9s |  |  hadoop-yarn-server-timelineservice-hbase-server-2 in the patch passed.  |
   | -1 :x: |  asflicense  |   1m 31s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4491/1/artifact/out/results-asflicense.txt) |  The patch generated 1 ASF License warnings.  |
   |  |   | 181m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4491/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4491 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml |
   | uname | Linux 780c0cfe5717 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3.4 / 7100300b0a5932521221c9abc1c917eabf4df37e |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4491/1/testReport/ |
   | Max. process+thread count | 809 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common 

[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2022-06-22 Thread Alex Liu (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17557755#comment-17557755 ]

Alex Liu commented on HADOOP-16206:
---

Hi Team,

Great work. We are waiting for this important feature to fix security issues.

Will this feature be released with version 3.4.0, and may I ask roughly when
3.4.0 will be released?

I ask because I can't find a release date in the Hadoop roadmap:
[https://cwiki.apache.org/confluence/display/hadoop/Roadmap]

Thanks,

Alex

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove the log4j1 dependency and add the log4j2 dependency.






[GitHub] [hadoop] hadoop-yetus commented on pull request #4491: HADOOP-18311. Upgrade dependencies to address several CVEs

2022-06-22 Thread GitBox


hadoop-yetus commented on PR #4491:
URL: https://github.com/apache/hadoop/pull/4491#issuecomment-1163815041

   :broken_heart: **-1 overall**
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ branch-3.3.4 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m 58s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m  3s |  |  branch-3.3.4 passed  |
   | +1 :green_heart: |  compile  |  17m 55s |  |  branch-3.3.4 passed  |
   | +1 :green_heart: |  mvnsite  |   8m 23s |  |  branch-3.3.4 passed  |
   | +1 :green_heart: |  javadoc  |   7m  4s |  |  branch-3.3.4 passed  |
   | +1 :green_heart: |  shadedclient  |  94m  5s |  |  branch has no errors when building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 18s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 14s |  |  the patch passed  |
   | -1 :x: |  javac  |  17m 14s | [/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4491/1/artifact/out/results-compile-javac-root.txt) |  root generated 3 new + 1969 unchanged - 0 fixed = 1972 total (was 1969)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  mvnsite  |   7m 20s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  8s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   6m 56s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 35s |  |  patch has no errors when building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  6s |  |  hadoop-project in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 17s |  |  hadoop-yarn-server-timelineservice-hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 21s |  |  hadoop-yarn-server-timelineservice-hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 10s |  |  hadoop-yarn-server-timelineservice-hbase-server-1 in the patch passed.  |
   | +1 :green_heart: |  unit  |  14m  1s |  |  hadoop-yarn-server-timelineservice-hbase-tests in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m  9s |  |  hadoop-yarn-server-timelineservice-hbase-server-2 in the patch passed.  |
   | -1 :x: |  asflicense  |   1m 31s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4491/1/artifact/out/results-asflicense.txt) |  The patch generated 1 ASF License warnings.  |
   |  |   | 181m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4491/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4491 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml |
   | uname | Linux 780c0cfe5717 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3.4 / 7100300b0a5932521221c9abc1c917eabf4df37e |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4491/1/testReport/ |
   | Max. process+thread count | 809 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-1 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests 

[jira] [Work logged] (HADOOP-18300) Update Gson to 2.9.0

2022-06-22 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18300?focusedWorklogId=784011&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784011 ]

ASF GitHub Bot logged work on HADOOP-18300:
---

Author: ASF GitHub Bot
Created on: 23/Jun/22 00:45
Start Date: 23/Jun/22 00:45
Worklog Time Spent: 10m 
  Work Description: medb commented on PR #4454:
URL: https://github.com/apache/hadoop/pull/4454#issuecomment-1163804725

   Thank you for the reviews and for merging the PR!




Issue Time Tracking
---

Worklog Id: (was: 784011)
Time Spent: 2h  (was: 1h 50m)

> Update Gson to 2.9.0
> 
>
> Key: HADOOP-18300
> URL: https://issues.apache.org/jira/browse/HADOOP-18300
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.9
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Update to Gson 2.9.0, which has many
> [fixes|https://github.com/google/gson/releases/tag/gson-parent-2.9.0] and is
> backward-compatible as long as Java 7+ is used.






[GitHub] [hadoop] medb commented on pull request #4454: HADOOP-18300. Upgrade Gson dependency to version 2.9.0

2022-06-22 Thread GitBox


medb commented on PR #4454:
URL: https://github.com/apache/hadoop/pull/4454#issuecomment-1163804725

   Thank you for the reviews and for merging the PR!





[jira] [Resolved] (HADOOP-18300) Update Gson to 2.9.0

2022-06-22 Thread Chris Nauroth (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth resolved HADOOP-18300.

Fix Version/s: 3.4.0
   3.2.4
   3.3.9
 Hadoop Flags: Reviewed
   Resolution: Fixed

I have committed this to trunk, branch-3.3, and branch-3.2. [~medb], thank you
for the contribution. [~ayushtkn], thank you for the code review.

> Update Gson to 2.9.0
> 
>
> Key: HADOOP-18300
> URL: https://issues.apache.org/jira/browse/HADOOP-18300
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.9
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Update to Gson 2.9.0, which has many
> [fixes|https://github.com/google/gson/releases/tag/gson-parent-2.9.0] and is
> backward-compatible as long as Java 7+ is used.






[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2022-06-22 Thread Viraj Jasani (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17557738#comment-17557738 ]

Viraj Jasani commented on HADOOP-15984:
---

{quote}So there is no grizzly-http-servlet version that has Jersey 2
dependencies.
{quote}
Now that I have done some more digging, I realize this could be a big hurdle.
I still can't get over the fact that grizzly NIO, being a popular framework,
has not released any artifacts that are directly compatible with Jersey 2.x.

From Jersey version 1.1.5
(https://github.com/eclipse-ee4j/grizzly/blob/2.4.4/modules/http-servlet/pom.xml#L87),
grizzly-http-servlet went on to support Jersey version 3.0 directly
(https://github.com/eclipse-ee4j/grizzly/blob/3.0.0-RELEASE/modules/http-servlet/pom.xml#L36).

Moreover, we use HttpServletResponseImpl from grizzly-http-servlet, and since
it doesn't support Jersey 2, until we really upgrade to Jersey 3 we have no
choice but to stay with the 2.4.4 version of grizzly-http-servlet, because
that version supports Jersey 1: its HttpServletResponseImpl implements the
javax servlet HttpServletResponse
(https://github.com/eclipse-ee4j/grizzly/blob/2.4.4/modules/http-servlet/src/main/java/org/glassfish/grizzly/servlet/HttpServletResponseImpl.java#L34),
whereas the grizzly-http-servlet 3.0 HttpServletResponseImpl implements the
jakarta HttpServletResponse
(https://github.com/eclipse-ee4j/grizzly/blob/3.0.0-RELEASE/modules/http-servlet/src/main/java/org/glassfish/grizzly/servlet/HttpServletResponseImpl.java#L38),
something we cannot inherently use until we move to Jersey 3. Jersey 3 is all
about migrating to the Jakarta EE APIs only.

The only option I see so far is to use grizzly-http-servlet 2.4.4 (Jersey 1
compatible), but only at test scope. Let me see how it goes.

> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.






[GitHub] [hadoop] goiri commented on pull request #4488: HDFS-16640. RBF: Show datanode IP list when click DN histogram in Router

2022-06-22 Thread GitBox


goiri commented on PR #4488:
URL: https://github.com/apache/hadoop/pull/4488#issuecomment-1163774751

   > Code seems an exact copy from dfshealth.js; that is what we are doing for
the router UI, so it seems correct to me. It would be good if you can attach a
screenshot as well.
   
   Not an expert on javascript, but it would be good to extract some of this
common code.





[jira] [Work logged] (HADOOP-18300) Update Gson to 2.9.0

2022-06-22 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18300?focusedWorklogId=783998&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783998 ]

ASF GitHub Bot logged work on HADOOP-18300:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 23:37
Start Date: 22/Jun/22 23:37
Worklog Time Spent: 10m 
  Work Description: cnauroth merged PR #4454:
URL: https://github.com/apache/hadoop/pull/4454




Issue Time Tracking
---

Worklog Id: (was: 783998)
Time Spent: 1h 50m  (was: 1h 40m)

> Update Gson to 2.9.0
> 
>
> Key: HADOOP-18300
> URL: https://issues.apache.org/jira/browse/HADOOP-18300
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Update to Gson 2.9.0, which has many
> [fixes|https://github.com/google/gson/releases/tag/gson-parent-2.9.0] and is
> backward-compatible as long as Java 7+ is used.






[GitHub] [hadoop] cnauroth merged pull request #4454: HADOOP-18300. Upgrade Gson dependency to version 2.9.0

2022-06-22 Thread GitBox


cnauroth merged PR #4454:
URL: https://github.com/apache/hadoop/pull/4454





[jira] [Work logged] (HADOOP-18300) Update Gson to 2.9.0

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18300?focusedWorklogId=783996=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783996
 ]

ASF GitHub Bot logged work on HADOOP-18300:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 23:14
Start Date: 22/Jun/22 23:14
Worklog Time Spent: 10m 
  Work Description: cnauroth commented on PR #4454:
URL: https://github.com/apache/hadoop/pull/4454#issuecomment-1163749418

   I've confirmed that the failure in `TestRouterWebServicesREST` is unrelated. 
(See [YARN-11192](https://issues.apache.org/jira/browse/YARN-11192).) I'll 
begin committing this shortly.




Issue Time Tracking
---

Worklog Id: (was: 783996)
Time Spent: 1h 40m  (was: 1.5h)

> Update Gson to 2.9.0
> 
>
> Key: HADOOP-18300
> URL: https://issues.apache.org/jira/browse/HADOOP-18300
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Update to the Gson 2.9.0 that has many 
> [fixes|https://github.com/google/gson/releases/tag/gson-parent-2.9.0], and 
> backward-compatible as long as Java 7+ is used.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] cnauroth commented on pull request #4454: HADOOP-18300. Upgrade Gson dependency to version 2.9.0

2022-06-22 Thread GitBox


cnauroth commented on PR #4454:
URL: https://github.com/apache/hadoop/pull/4454#issuecomment-1163749418

   I've confirmed that the failure in `TestRouterWebServicesREST` is unrelated. 
(See [YARN-11192](https://issues.apache.org/jira/browse/YARN-11192).) I'll 
begin committing this shortly.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] cnauroth commented on a diff in pull request #4248: MAPREDUCE-7370. Parallelize MultipleOutputs#close call

2022-06-22 Thread GitBox


cnauroth commented on code in PR #4248:
URL: https://github.com/apache/hadoop/pull/4248#discussion_r904365716


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java:
##
@@ -570,8 +570,14 @@ public void setStatus(String status) {
*/
   @SuppressWarnings("unchecked")
   public void close() throws IOException, InterruptedException {
-for (RecordWriter writer : recordWriters.values()) {
-  writer.close(context);
-}
+recordWriters.values().parallelStream().forEach(writer -> {

Review Comment:
   Additionally, if we agree with my assertion that it's important to preserve 
the error contract, then it would be good to have a unit test that fakes an 
`IOException` during `close()` and asserts that it propagates.
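
   A minimal sketch of such a test, assuming JUnit 4 and Mockito are on the
classpath. It exercises a stand-in for the sequential close loop rather than
`MultipleOutputs` itself, and the class and method names here are hypothetical:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.junit.Assert;
import org.junit.Test;
import org.mockito.Mockito;

public class TestCloseErrorContract {

  /** A stand-in for the sequential close loop whose contract we want to keep. */
  private static void closeAll(List<RecordWriter> writers,
      TaskAttemptContext context) throws IOException, InterruptedException {
    for (RecordWriter writer : writers) {
      writer.close(context);
    }
  }

  @Test
  public void testClosePropagatesIOException() throws Exception {
    // Fake a writer whose close() fails with a checked IOException.
    RecordWriter writer = Mockito.mock(RecordWriter.class);
    Mockito.doThrow(new IOException("simulated failure"))
        .when(writer).close(Mockito.any(TaskAttemptContext.class));
    List<RecordWriter> writers = new ArrayList<>();
    writers.add(writer);
    try {
      closeAll(writers, Mockito.mock(TaskAttemptContext.class));
      Assert.fail("close() should have rethrown the IOException");
    } catch (IOException expected) {
      // The checked exception must surface unchanged to the caller.
      Assert.assertEquals("simulated failure", expected.getMessage());
    }
  }
}
```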



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] cnauroth commented on a diff in pull request #4248: MAPREDUCE-7370. Parallelize MultipleOutputs#close call

2022-06-22 Thread GitBox


cnauroth commented on code in PR #4248:
URL: https://github.com/apache/hadoop/pull/4248#discussion_r904363794


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java:
##
@@ -570,8 +570,14 @@ public void setStatus(String status) {
*/
   @SuppressWarnings("unchecked")
   public void close() throws IOException, InterruptedException {
-for (RecordWriter writer : recordWriters.values()) {
-  writer.close(context);
-}
+recordWriters.values().parallelStream().forEach(writer -> {

Review Comment:
   I'm concerned that this could have unintended side effects for callers, 
because it changes the error contract. Errors during `close()` that were 
formerly visible as a checked `IOException` or `InterruptedException` now 
become an unchecked `RuntimeException`. In the case of thread interruption, the 
interrupt now occurs on the background thread with no propagation of 
interrupted status back up to the coordinating thread.
   
   Unfortunately, `parallelStream()` with a lambda puts us down this path. It 
would be more code, but directly managing a `ThreadPoolExecutor` would give you 
the chance to preserve the contract by unwrapping checked exceptions from the 
`Future` and propagating.
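
   A rough sketch of that direction, assuming the surrounding `MultipleOutputs`
fields (`recordWriters`, `context`) and `java.util.concurrent` imports; the pool
sizing and structure are illustrative choices, not the actual patch:

```java
public void close() throws IOException, InterruptedException {
  // Sketch only: pool size capped at an arbitrary 8 threads.
  ExecutorService pool = Executors.newFixedThreadPool(
      Math.max(1, Math.min(recordWriters.size(), 8)));
  try {
    List<Future<Void>> futures = new ArrayList<>();
    for (RecordWriter writer : recordWriters.values()) {
      futures.add(pool.submit(() -> {
        writer.close(context);          // runs on a pool thread
        return null;
      }));
    }
    for (Future<Void> future : futures) {
      try {
        future.get();                   // surfaces worker-thread failures
      } catch (ExecutionException e) {
        Throwable cause = e.getCause();
        if (cause instanceof IOException) {
          throw (IOException) cause;            // keep the checked contract
        }
        if (cause instanceof InterruptedException) {
          throw (InterruptedException) cause;   // keep interruption visible
        }
        throw new IOException(cause);
      }
    }
  } finally {
    pool.shutdown();
  }
}
```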



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18258) Merging of S3A Audit Logs

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18258?focusedWorklogId=783992=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783992
 ]

ASF GitHub Bot logged work on HADOOP-18258:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 22:59
Start Date: 22/Jun/22 22:59
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on code in PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#discussion_r904350867


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/AuditTool.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.Closeable;
+import java.io.EOFException;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FilterFileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_COMMAND_ARGUMENT_ERROR;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SERVICE_UNAVAILABLE;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SUCCESS;
+
+/**
+ * AuditTool is a command-line interface
+ * whose function is to parse the merged audit log file
+ * and generate an Avro file.
+ */
+public class AuditTool extends Configured implements Tool, Closeable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(AuditTool.class);
+
+  private final String entryPoint = "s3audit";
+
+  private PrintWriter out;
+
+  // Exit codes
+  private static final int SUCCESS = EXIT_SUCCESS;
+  private static final int INVALID_ARGUMENT = EXIT_COMMAND_ARGUMENT_ERROR;
+
+  /**
+   * Error String when the wrong FS is used for binding: {@value}.
+   **/
+  @VisibleForTesting
+  public static final String WRONG_FILESYSTEM = "Wrong filesystem for ";
+
+  private final String usage = entryPoint + "  s3a://BUCKET\n";
+
+  public AuditTool() {
+  }
+
+  /**
+   * Returns the usage string of the AuditTool.
+   *
+   * @return the string USAGE
+   */
+  public String getUsage() {
+return usage;
+  }
+
+  /**
+   * This run method in AuditTool takes the S3 bucket path
+   * (containing audit log files) from the command line arguments
+   * and merges the audit log files present in that path into a single file
+   * on the local system.
+   *
+   * @param args command specific arguments.
+   * @return SUCCESS i.e, '0', which is an exit code.
+   * @throws Exception on any failure.
+   */
+  @Override
+  public int run(String[] args) throws Exception {
+List<String> argv = new ArrayList<>(Arrays.asList(args));
+if (argv.isEmpty()) {
+  errorln(getUsage());
+  throw invalidArgs("No bucket specified");
+}
+//Path of audit log files in s3 bucket
+Path s3LogsPath = new Path(argv.get(0));
+
+//Setting the file system
+URI fsURI = toUri(String.valueOf(s3LogsPath));
+S3AFileSystem s3AFileSystem =
+bindFilesystem(FileSystem.newInstance(fsURI, getConf()));
+RemoteIterator<LocatedFileStatus> listOfS3LogFiles =
+s3AFileSystem.listFiles(s3LogsPath, true);
+
+//Merging local audit files into a single file
+File s3aLogsDirectory = new File(s3LogsPath.getName());
+boolean s3aLogsDirectoryCreation = false;
+if (!s3aLogsDirectory.exists()) {
+  

[GitHub] [hadoop] mukund-thakur commented on a diff in pull request #4383: HADOOP-18258. Merging of S3A Audit Logs

2022-06-22 Thread GitBox


mukund-thakur commented on code in PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#discussion_r904350867


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/AuditTool.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.Closeable;
+import java.io.EOFException;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FilterFileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_COMMAND_ARGUMENT_ERROR;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SERVICE_UNAVAILABLE;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SUCCESS;
+
+/**
+ * AuditTool is a command-line interface
+ * whose function is to parse the merged audit log file
+ * and generate an Avro file.
+ */
+public class AuditTool extends Configured implements Tool, Closeable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(AuditTool.class);
+
+  private final String entryPoint = "s3audit";
+
+  private PrintWriter out;
+
+  // Exit codes
+  private static final int SUCCESS = EXIT_SUCCESS;
+  private static final int INVALID_ARGUMENT = EXIT_COMMAND_ARGUMENT_ERROR;
+
+  /**
+   * Error String when the wrong FS is used for binding: {@value}.
+   **/
+  @VisibleForTesting
+  public static final String WRONG_FILESYSTEM = "Wrong filesystem for ";
+
+  private final String usage = entryPoint + "  s3a://BUCKET\n";
+
+  public AuditTool() {
+  }
+
+  /**
+   * Returns the usage string of the AuditTool.
+   *
+   * @return the string USAGE
+   */
+  public String getUsage() {
+return usage;
+  }
+
+  /**
+   * This run method in AuditTool takes the S3 bucket path
+   * (containing audit log files) from the command line arguments
+   * and merges the audit log files present in that path into a single file
+   * on the local system.
+   *
+   * @param args command specific arguments.
+   * @return SUCCESS i.e, '0', which is an exit code.
+   * @throws Exception on any failure.
+   */
+  @Override
+  public int run(String[] args) throws Exception {
+List<String> argv = new ArrayList<>(Arrays.asList(args));
+if (argv.isEmpty()) {
+  errorln(getUsage());
+  throw invalidArgs("No bucket specified");
+}
+//Path of audit log files in s3 bucket
+Path s3LogsPath = new Path(argv.get(0));
+
+//Setting the file system
+URI fsURI = toUri(String.valueOf(s3LogsPath));
+S3AFileSystem s3AFileSystem =
+bindFilesystem(FileSystem.newInstance(fsURI, getConf()));
+RemoteIterator<LocatedFileStatus> listOfS3LogFiles =
+s3AFileSystem.listFiles(s3LogsPath, true);
+
+//Merging local audit files into a single file
+File s3aLogsDirectory = new File(s3LogsPath.getName());
+boolean s3aLogsDirectoryCreation = false;
+if (!s3aLogsDirectory.exists()) {
+  s3aLogsDirectoryCreation = s3aLogsDirectory.mkdir();
+}
+if (s3aLogsDirectoryCreation) {
+  while (listOfS3LogFiles.hasNext()) {
+Path s3LogFilePath = listOfS3LogFiles.next().getPath();
+File s3LogLocalFilePath =
+new File(s3aLogsDirectory, s3LogFilePath.getName());
+boolean localFileCreation = s3LogLocalFilePath.createNewFile();
+if (localFileCreation) {
+  

[jira] [Work logged] (HADOOP-18311) Upgrade dependencies to address several CVEs

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18311?focusedWorklogId=783991=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783991
 ]

ASF GitHub Bot logged work on HADOOP-18311:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 22:45
Start Date: 22/Jun/22 22:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4490:
URL: https://github.com/apache/hadoop/pull/4490#issuecomment-1163723631

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 49s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 33s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  18m 32s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   6m 37s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   5m 19s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  97m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 13s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 51s |  |  the patch passed  |
   | -1 :x: |  javac  |  17m 51s | 
[/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4490/1/artifact/out/results-compile-javac-root.txt)
 |  root generated 3 new + 1855 unchanged - 0 fixed = 1858 total (was 1855)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   5m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   5m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 48s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m  0s |  |  
hadoop-yarn-server-timelineservice-hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m  2s |  |  
hadoop-yarn-server-timelineservice-hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 51s |  |  
hadoop-yarn-server-timelineservice-hbase-server-1 in the patch passed.  |
   | +1 :green_heart: |  unit  |  15m 40s |  |  
hadoop-yarn-server-timelineservice-hbase-tests in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 52s |  |  
hadoop-yarn-server-timelineservice-hbase-server-2 in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 11s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 181m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4490/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4490 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux f8808a183c0e 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / bab9001c25a5a385deb4be8afbd7512da513867c |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4490/1/testReport/ |
   | Max. process+thread count | 774 (vs. ulimit of 5500) |
   | modules | C: hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4490: HADOOP-18311. Upgrade dependencies to address several CVEs

2022-06-22 Thread GitBox


hadoop-yetus commented on PR #4490:
URL: https://github.com/apache/hadoop/pull/4490#issuecomment-1163723631

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 49s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 33s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  18m 32s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   6m 37s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   5m 19s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  97m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 13s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 51s |  |  the patch passed  |
   | -1 :x: |  javac  |  17m 51s | 
[/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4490/1/artifact/out/results-compile-javac-root.txt)
 |  root generated 3 new + 1855 unchanged - 0 fixed = 1858 total (was 1855)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   5m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   5m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 48s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m  0s |  |  
hadoop-yarn-server-timelineservice-hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m  2s |  |  
hadoop-yarn-server-timelineservice-hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 51s |  |  
hadoop-yarn-server-timelineservice-hbase-server-1 in the patch passed.  |
   | +1 :green_heart: |  unit  |  15m 40s |  |  
hadoop-yarn-server-timelineservice-hbase-tests in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 52s |  |  
hadoop-yarn-server-timelineservice-hbase-server-2 in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 11s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 181m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4490/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4490 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux f8808a183c0e 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / bab9001c25a5a385deb4be8afbd7512da513867c |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4490/1/testReport/ |
   | Max. process+thread count | 774 (vs. ulimit of 5500) |
   | modules | C: hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-1
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 

[jira] [Commented] (HADOOP-18305) Release Hadoop 3.3.4: minor update of hadoop-3.3.3

2022-06-22 Thread Steve Vaughan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557725#comment-17557725
 ] 

Steve Vaughan commented on HADOOP-18305:


We might want to consider adding 
[HADOOP-18311|https://issues.apache.org/jira/browse/HADOOP-18311] which 
addresses several other CVEs.

> Release Hadoop 3.3.4: minor update of hadoop-3.3.3
> --
>
> Key: HADOOP-18305
> URL: https://issues.apache.org/jira/browse/HADOOP-18305
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.3.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Create a Hadoop 3.3.4 release with
> * critical fixes
> * ARM artifacts as well as the intel ones



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4462: MAPREDUCE-7390 Remove WhiteBox in mapreduce module.

2022-06-22 Thread GitBox


slfan1989 commented on PR #4462:
URL: https://github.com/apache/hadoop/pull/4462#issuecomment-1163660746

   @jojochuang please help review the code, thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4464: YARN-11169. Support moveApplicationAcrossQueues, getQueueInfo API's for Federation.

2022-06-22 Thread GitBox


slfan1989 commented on PR #4464:
URL: https://github.com/apache/hadoop/pull/4464#issuecomment-1163659525

   @goiri Please help review the code. Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4484: YARN-11192. TestRouterWebServicesREST failing after YARN-9827.

2022-06-22 Thread GitBox


slfan1989 commented on PR #4484:
URL: https://github.com/apache/hadoop/pull/4484#issuecomment-1163658389

   @ayushtkn Thanks for your help reviewing the code!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18311) Upgrade dependencies to address several CVEs

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18311?focusedWorklogId=783986=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783986
 ]

ASF GitHub Bot logged work on HADOOP-18311:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 22:05
Start Date: 22/Jun/22 22:05
Worklog Time Spent: 10m 
  Work Description: snmvaughan opened a new pull request, #4491:
URL: https://github.com/apache/hadoop/pull/4491

   
   
   ### Description of PR
   
   The following CVEs can be addressed by upgrading dependencies within the 
build.  This includes a replacement of HTrace with a noop implementation.
  - CVE-2018-7489
  - CVE-2020-10663
  - CVE-2020-28491
  - CVE-2020-35490
  - CVE-2020-35491
  - CVE-2020-36518
  - PRISMA-2021-0182
 
   This addresses all of the CVEs from `branch-3.3.4` except for the kotlin 
library associated with okhttp and the ones that would require upgrading Netty 
to 4.x.
   
   This is a backport specifically targeted at 3.3.4
   
   ### How was this patch tested?
   
   Tested using a local build of `branch-3.3.4` along with this patch.
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [X] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
---

Worklog Id: (was: 783986)
Time Spent: 20m  (was: 10m)

> Upgrade dependencies to address several CVEs
> 
>
> Key: HADOOP-18311
> URL: https://issues.apache.org/jira/browse/HADOOP-18311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.3, 3.3.4
>Reporter: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The following CVEs can be addressed by upgrading dependencies within the 
> build.  This includes a replacement of HTrace with a noop implementation.
>  * CVE-2018-7489
>  * CVE-2020-10663
>  * CVE-2020-28491
>  * CVE-2020-35490
>  * CVE-2020-35491
>  * CVE-2020-36518
>  * PRISMA-2021-0182
> This addresses all of the CVEs from 3.3.3 except for ones that would require 
> upgrading Netty to 4.x.  I'll be submitting a pull request for 3.3.4.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snmvaughan opened a new pull request, #4491: HADOOP-18311. Upgrade dependencies to address several CVEs

2022-06-22 Thread GitBox


snmvaughan opened a new pull request, #4491:
URL: https://github.com/apache/hadoop/pull/4491

   
   
   ### Description of PR
   
   The following CVEs can be addressed by upgrading dependencies within the 
build.  This includes a replacement of HTrace with a noop implementation.
  - CVE-2018-7489
  - CVE-2020-10663
  - CVE-2020-28491
  - CVE-2020-35490
  - CVE-2020-35491
  - CVE-2020-36518
  - PRISMA-2021-0182
 
   This addresses all of the CVEs from `branch-3.3.4` except for the kotlin 
library associated with okhttp and the ones that would require upgrading Netty 
to 4.x.
   
   This is a backport specifically targeted at 3.3.4
   
   ### How was this patch tested?
   
   Tested using a local build of `branch-3.3.4` along with this patch.
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [X] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2022-06-22 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557715#comment-17557715
 ] 

Viraj Jasani commented on HADOOP-15984:
---

Sounds good, thanks [~ayushtkn] 

I was also wondering if it might be worthwhile exploring Jersey 3? It's a
far-fetched idea for now, and I don't know how backward compatibility would work
between Jersey 1 and 3 at this point in time, but if other dependencies keep
screwing things up for us (like the one mentioned above), perhaps we could give
Jersey 3 a shot as well(?)

The release notes say it was released on Dec 3, 2020, so it's already been around
for more than a year, which is good:
[https://github.com/eclipse-ee4j/jersey/releases/tag/3.0.0] 
(https://eclipse-ee4j.github.io/jersey.github.io/release-notes/3.0.0.html)

> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15984) Update jersey from 1.19 to 2.x

2022-06-22 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557703#comment-17557703
 ] 

Ayush Saxena edited comment on HADOOP-15984 at 6/22/22 9:02 PM:


{quote}So there is no grizzly-http-servlet version that has Jersey 2 
dependencies.
{quote}
Awesome :( 

 

Needless to say, we are already aware of what problems a Jersey upgrade creates
for downstream projects; I had an offline chit-chat with a couple of folks and it
is indeed some pain.

The good news is we aren't blocking the release on this, so we have time and
no strict deadline pressure, and we can spend it figuring out a good & safe
solution.

If we don't find anything after brainstorming and have concrete answers, then
we will figure out how to get rid of the Jackson upgrade.

But we also need Jersey for Java 11, and I haven't spent time exploring any
workaround for that.


was (Author: ayushtkn):
{quote}So there is no grizzly-http-servlet version that has Jersey 2 
dependencies.
{quote}
Awesome :( 

 

Needless to say, we are already aware of what problems a Jersey upgrade creates
for downstream projects; I had an offline chit-chat with a couple of folks and it
is indeed some pain.

The good news is we aren't blocking the release on this, so we have time and
no strict deadline pressure, and we can spend it figuring out a good & safe
solution.

If we don't find anything after brainstorming and have concrete answers, then
we will figure out how to get rid of the Jackson upgrade.

> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2022-06-22 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557703#comment-17557703
 ] 

Ayush Saxena commented on HADOOP-15984:
---

{quote}So there is no grizzly-http-servlet version that has Jersey 2 
dependencies.
{quote}
Awesome :( 

 

Needless to say, we are already aware of what problems a Jersey upgrade creates
for downstream projects; I had an offline chit-chat with a couple of folks and it
is indeed some pain.

The good news is we aren't blocking the release on this, so we have time and
no strict deadline pressure, and we can spend it figuring out a good & safe
solution.

If we don't find anything after brainstorming and have concrete answers, then
we will figure out how to get rid of the Jackson upgrade.

> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15984) Update jersey from 1.19 to 2.x

2022-06-22 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557697#comment-17557697
 ] 

Viraj Jasani edited comment on HADOOP-15984 at 6/22/22 8:53 PM:


Regarding grizzly-http-servlet, version 2.4.4 contains Jersey 1 artifacts: 
[https://repo1.maven.org/maven2/org/glassfish/grizzly/grizzly-http-servlet/2.4.4/grizzly-http-servlet-2.4.4.pom]

The next higher version available is 3.0.0-M1 and it contains Jersey 3 
artifacts: 
[https://repo1.maven.org/maven2/org/glassfish/grizzly/grizzly-http-servlet/3.0.0-M1/grizzly-http-servlet-3.0.0-M1.pom]

So there is no grizzly-http-servlet version that has Jersey 2 dependencies.

We might have to handle this as well (perhaps get rid of it if we can)


was (Author: vjasani):
Regarding grizzly-http-servlet, version 2.4.4 contains Jersey 1 artifacts: 
[https://repo1.maven.org/maven2/org/glassfish/grizzly/grizzly-http-servlet/2.4.4/grizzly-http-servlet-2.4.4.pom]

The next higher version available is 3.0.0-M1 and it contains Jersey 3 
artifacts: 
[https://repo1.maven.org/maven2/org/glassfish/grizzly/grizzly-http-servlet/3.0.0-M1/grizzly-http-servlet-3.0.0-M1.pom]

We might have to handle this as well (perhaps get rid of it if we can)

> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2022-06-22 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557697#comment-17557697
 ] 

Viraj Jasani commented on HADOOP-15984:
---

Regarding grizzly-http-servlet, version 2.4.4 contains Jersey 1 artifacts: 
[https://repo1.maven.org/maven2/org/glassfish/grizzly/grizzly-http-servlet/2.4.4/grizzly-http-servlet-2.4.4.pom]

The next higher version available is 3.0.0-M1 and it contains Jersey 3 
artifacts: 
[https://repo1.maven.org/maven2/org/glassfish/grizzly/grizzly-http-servlet/3.0.0-M1/grizzly-http-servlet-3.0.0-M1.pom]

We might have to handle this as well (perhaps get rid of it if we can)

> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18103) High performance vectored read API in Hadoop

2022-06-22 Thread Mukund Thakur (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557679#comment-17557679
 ] 

Mukund Thakur commented on HADOOP-18103:


Merged in trunk: 
https://github.com/apache/hadoop/commit/e1842b2a749d79cbdc15c524515b9eda64c339d5
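
For context, a sketch of how the new API described in the quoted issue below is
meant to be used from application code. This is inferred from the feature
description; the exact signatures should be checked against the merged commit:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class VectoredReadExample {
  static void readTwoRanges(FileSystem fs, Path path) throws Exception {
    List<FileRange> ranges = new ArrayList<>();
    ranges.add(FileRange.createFileRange(0, 1024));    // (offset, length)
    ranges.add(FileRange.createFileRange(4096, 1024));
    try (FSDataInputStream in = fs.open(path)) {
      // Object store streams may coalesce nearby ranges and fetch in parallel;
      // the default implementation reads each range synchronously in turn.
      in.readVectored(ranges, ByteBuffer::allocate);
      for (FileRange range : ranges) {
        ByteBuffer data = range.getData().get();       // wait for that range
        // ... consume the buffer ...
      }
    }
  }
}
```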

> High performance vectored read API in Hadoop
> 
>
> Key: HADOOP-18103
> URL: https://issues.apache.org/jira/browse/HADOOP-18103
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, fs, fs/adl, fs/s3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: perfomance, pull-request-available
> Attachments: Vectored Read API for Hadoop FS_INCOMPLETE.pdf
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Add support for a multiple-range vectored read API in PositionedReadable. The 
> default iterates through the ranges to read each synchronously, but the 
> intent is that FSDataInputStream subclasses can provide more efficient readers, 
> especially in object store implementations.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18311) Upgrade dependencies to address several CVEs

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18311:

Labels: pull-request-available  (was: )

> Upgrade dependencies to address several CVEs
> 
>
> Key: HADOOP-18311
> URL: https://issues.apache.org/jira/browse/HADOOP-18311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.3, 3.3.4
>Reporter: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.4
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following CVEs can be addressed by upgrading dependencies within the 
> build.  This includes a replacement of HTrace with a noop implementation.
>  * CVE-2018-7489
>  * CVE-2020-10663
>  * CVE-2020-28491
>  * CVE-2020-35490
>  * CVE-2020-35491
>  * CVE-2020-36518
>  * PRISMA-2021-0182
> This addresses all of the CVEs from 3.3.3 except for ones that would require 
> upgrading Netty to 4.x.  I'll be submitting a pull request for 3.3.4.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18311) Upgrade dependencies to address several CVEs

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18311?focusedWorklogId=783958=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783958
 ]

ASF GitHub Bot logged work on HADOOP-18311:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 19:42
Start Date: 22/Jun/22 19:42
Worklog Time Spent: 10m 
  Work Description: snmvaughan opened a new pull request, #4490:
URL: https://github.com/apache/hadoop/pull/4490

   
   
   ### Description of PR
   
   The following CVEs can be addressed by upgrading dependencies within the 
build.  This includes a replacement of HTrace with a noop implementation.
   - CVE-2018-7489
   - CVE-2020-10663
   - CVE-2020-28491
   - CVE-2020-35490
   - CVE-2020-35491
   - CVE-2020-36518
   - PRISMA-2021-0182
   
   This addresses all of the CVEs from `branch-3.3` except for the kotlin 
library associated with okhttp and the ones that would require upgrading Netty 
to 4.x.
   
   ### How was this patch tested?
   
   Tested using a local build of `branch-3.3` along with this patch.
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [X] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
---

Worklog Id: (was: 783958)
Remaining Estimate: 0h
Time Spent: 10m

> Upgrade dependencies to address several CVEs
> 
>
> Key: HADOOP-18311
> URL: https://issues.apache.org/jira/browse/HADOOP-18311
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.3, 3.3.4
>Reporter: Steve Vaughan
>Priority: Major
> Fix For: 3.3.4
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following CVEs can be addressed by upgrading dependencies within the 
> build.  This includes a replacement of HTrace with a noop implementation.
>  * CVE-2018-7489
>  * CVE-2020-10663
>  * CVE-2020-28491
>  * CVE-2020-35490
>  * CVE-2020-35491
>  * CVE-2020-36518
>  * PRISMA-2021-0182
> This addresses all of the CVEs from 3.3.3 except for ones that would require 
> upgrading Netty to 4.x.  I'll be submitting a pull request for 3.3.4.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snmvaughan opened a new pull request, #4490: HADOOP-18311. Upgrade dependencies to address several CVEs

2022-06-22 Thread GitBox


snmvaughan opened a new pull request, #4490:
URL: https://github.com/apache/hadoop/pull/4490

   
   
   ### Description of PR
   
   The following CVEs can be addressed by upgrading dependencies within the 
build.  This includes a replacement of HTrace with a noop implementation.
   - CVE-2018-7489
   - CVE-2020-10663
   - CVE-2020-28491
   - CVE-2020-35490
   - CVE-2020-35491
   - CVE-2020-36518
   - PRISMA-2021-0182
   
   This addresses all of the CVEs from `branch-3.3` except for the kotlin 
library associated with okhttp and the ones that would require upgrading Netty 
to 4.x.
   
   ### How was this patch tested?
   
   Tested using a local build of `branch-3.3` along with this patch.
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [X] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a diff in pull request #4488: HDFS-16640. RBF: Show datanode IP list when click DN histogram in Router

2022-06-22 Thread GitBox


ayushtkn commented on code in PR #4488:
URL: https://github.com/apache/hadoop/pull/4488#discussion_r904115566


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js:
##
@@ -552,3 +552,45 @@
 load_page();
   });
 })();
+
+function open_hostip_list(x0, x1) {

Review Comment:
   guess here:
   
https://github.com/wzhallright/hadoop/blob/73c08effebd6823ddbb21dbb722f6059c4e1a36e/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js#L376



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18297) Upgrade dependency-check-maven to 7.1.1

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18297?focusedWorklogId=783924=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783924
 ]

ASF GitHub Bot logged work on HADOOP-18297:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 18:01
Start Date: 22/Jun/22 18:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4449:
URL: https://github.com/apache/hadoop/pull/4449#issuecomment-1163444236

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 31s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  19m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  mvnsite  |  18m 57s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 38s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  shadedclient  | 135m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 13s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  19m  0s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   8m 30s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  shadedclient  |  55m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 808m  6s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/2/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1071m 32s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.mapred.TestLocalDistributedCacheManager |
   |   | hadoop.yarn.server.router.webapp.TestRouterWebServicesREST |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4449 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux d304b82ef3cf 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1e12e7bdd13f9d1a5ecdc89a88edd1156089fa0a |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/2/testReport/ |
   | Max. process+thread count | 3056 (vs. ulimit of 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4449: HADOOP-18297. Upgrade dependency-check-maven to 7.1.1

2022-06-22 Thread GitBox


hadoop-yetus commented on PR #4449:
URL: https://github.com/apache/hadoop/pull/4449#issuecomment-1163444236

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 31s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  19m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  mvnsite  |  18m 57s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 38s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  shadedclient  | 135m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 13s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  19m  0s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   8m 30s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  shadedclient  |  55m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 808m  6s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/2/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1071m 32s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.mapred.TestLocalDistributedCacheManager |
   |   | hadoop.yarn.server.router.webapp.TestRouterWebServicesREST |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4449 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux d304b82ef3cf 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1e12e7bdd13f9d1a5ecdc89a88edd1156089fa0a |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/2/testReport/ |
   | Max. process+thread count | 3056 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4449/2/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the

[jira] [Updated] (HADOOP-18237) Upgrade Apache Xerces Java to 2.12.2

2022-06-22 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18237:

Component/s: build

> Upgrade Apache Xerces Java to 2.12.2
> 
>
> Key: HADOOP-18237
> URL: https://issues.apache.org/jira/browse/HADOOP-18237
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.3
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Description
> https://github.com/advisories/GHSA-h65f-jvqw-m9fj
> There's a vulnerability within the Apache Xerces Java (XercesJ) XML parser 
> when handling specially crafted XML document payloads. This causes the 
> XercesJ XML parser to wait in an infinite loop, which may sometimes consume 
> system resources for a prolonged duration. This vulnerability is present in 
> XercesJ version 2.12.1 and previous versions.
> References
> [https://nvd.nist.gov/vuln/detail/CVE-2022-23437]
> https://lists.apache.org/thread/6pjwm10bb69kq955fzr1n0nflnjd27dl
> http://www.openwall.com/lists/oss-security/2022/01/24/3
> https://www.oracle.com/security-alerts/cpuapr2022.html



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18237) Upgrade Apache Xerces Java to 2.12.2

2022-06-22 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18237:

Affects Version/s: 3.3.3

> Upgrade Apache Xerces Java to 2.12.2
> 
>
> Key: HADOOP-18237
> URL: https://issues.apache.org/jira/browse/HADOOP-18237
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.3
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Description
> https://github.com/advisories/GHSA-h65f-jvqw-m9fj
> There's a vulnerability within the Apache Xerces Java (XercesJ) XML parser 
> when handling specially crafted XML document payloads. This causes the 
> XercesJ XML parser to wait in an infinite loop, which may sometimes consume 
> system resources for a prolonged duration. This vulnerability is present in 
> XercesJ version 2.12.1 and previous versions.
> References
> [https://nvd.nist.gov/vuln/detail/CVE-2022-23437]
> https://lists.apache.org/thread/6pjwm10bb69kq955fzr1n0nflnjd27dl
> http://www.openwall.com/lists/oss-security/2022/01/24/3
> https://www.oracle.com/security-alerts/cpuapr2022.html



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18237) Upgrade Apache Xerces Java to 2.12.2

2022-06-22 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18237.
-
Fix Version/s: 3.4.0
   3.3.4
   Resolution: Fixed

> Upgrade Apache Xerces Java to 2.12.2
> 
>
> Key: HADOOP-18237
> URL: https://issues.apache.org/jira/browse/HADOOP-18237
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Description
> https://github.com/advisories/GHSA-h65f-jvqw-m9fj
> There's a vulnerability within the Apache Xerces Java (XercesJ) XML parser 
> when handling specially crafted XML document payloads. This causes the 
> XercesJ XML parser to wait in an infinite loop, which may sometimes consume 
> system resources for a prolonged duration. This vulnerability is present in 
> XercesJ version 2.12.1 and previous versions.
> References
> [https://nvd.nist.gov/vuln/detail/CVE-2022-23437]
> https://lists.apache.org/thread/6pjwm10bb69kq955fzr1n0nflnjd27dl
> http://www.openwall.com/lists/oss-security/2022/01/24/3
> https://www.oracle.com/security-alerts/cpuapr2022.html



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18103) High performance vectored read API in Hadoop

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18103?focusedWorklogId=783921=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783921
 ]

ASF GitHub Bot logged work on HADOOP-18103:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 17:50
Start Date: 22/Jun/22 17:50
Worklog Time Spent: 10m 
  Work Description: steveloughran closed pull request #4476: HADOOP-18103. 
High performance vectored read API in Hadoop
URL: https://github.com/apache/hadoop/pull/4476




Issue Time Tracking
---

Worklog Id: (was: 783921)
Time Spent: 1.5h  (was: 1h 20m)

> High performance vectored read API in Hadoop
> 
>
> Key: HADOOP-18103
> URL: https://issues.apache.org/jira/browse/HADOOP-18103
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, fs, fs/adl, fs/s3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: perfomance, pull-request-available
> Attachments: Vectored Read API for Hadoop FS_INCOMPLETE.pdf
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Add support for a multi-range vectored read API in PositionedReadable. The 
> default implementation iterates through the ranges, reading each 
> synchronously, but the intent is that FSDataInputStream subclasses can 
> provide more efficient readers, especially object store implementations.
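
For illustration, a minimal sketch of how a caller might use the new API; the FileRange/readVectored names follow the attached design doc and may differ in the final public API:

{code}
// Usage sketch of the vectored read API (names assumed from the design
// doc attached to this issue; illustrative, not the final API).
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class VectoredReadSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Two disjoint ranges of one file, requested in a single call.
    List<FileRange> ranges = Arrays.asList(
        FileRange.createFileRange(0, 4096),
        FileRange.createFileRange(1_048_576, 8192));
    try (FSDataInputStream in = fs.open(new Path("/data/example.orc"))) {
      // The default implementation reads each range synchronously; object
      // store streams may coalesce ranges and fetch them in parallel.
      in.readVectored(ranges, ByteBuffer::allocate);
      for (FileRange range : ranges) {
        ByteBuffer data = range.getData().get(); // blocks until range is read
        // ... process data ...
      }
    }
  }
}
{code}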



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18103) High performance vectored read API in Hadoop

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18103?focusedWorklogId=783920=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783920
 ]

ASF GitHub Bot logged work on HADOOP-18103:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 17:50
Start Date: 22/Jun/22 17:50
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on PR #4476:
URL: https://github.com/apache/hadoop/pull/4476#issuecomment-1163433801

   merged manually, closing work. great work mukund & owen, once we get this 
picked up it is going to deliver significant speedups. get those apachecon 
demos ready!




Issue Time Tracking
---

Worklog Id: (was: 783920)
Time Spent: 1h 20m  (was: 1h 10m)

> High performance vectored read API in Hadoop
> 
>
> Key: HADOOP-18103
> URL: https://issues.apache.org/jira/browse/HADOOP-18103
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, fs, fs/adl, fs/s3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: perfomance, pull-request-available
> Attachments: Vectored Read API for Hadoop FS_INCOMPLETE.pdf
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Add support for a multi-range vectored read API in PositionedReadable. The 
> default implementation iterates through the ranges, reading each 
> synchronously, but the intent is that FSDataInputStream subclasses can 
> provide more efficient readers, especially object store implementations.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a diff in pull request #4488: HDFS-16640. RBF: Show datanode IP list when click DN histogram in Router

2022-06-22 Thread GitBox


goiri commented on code in PR #4488:
URL: https://github.com/apache/hadoop/pull/4488#discussion_r904072989


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js:
##
@@ -552,3 +552,45 @@
 load_page();
   });
 })();
+
+function open_hostip_list(x0, x1) {

Review Comment:
   Where is this called?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #4476: HADOOP-18103. High performance vectored read API in Hadoop

2022-06-22 Thread GitBox


steveloughran closed pull request #4476: HADOOP-18103. High performance 
vectored read API in Hadoop
URL: https://github.com/apache/hadoop/pull/4476


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4476: HADOOP-18103. High performance vectored read API in Hadoop

2022-06-22 Thread GitBox


steveloughran commented on PR #4476:
URL: https://github.com/apache/hadoop/pull/4476#issuecomment-1163433801

   merged manually, closing work. great work mukund & owen, once we get this 
picked up it is going to deliver significant speedups. get those apachecon 
demos ready!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4462: MAPREDUCE-7390 Remove WhiteBox in mapreduce module.

2022-06-22 Thread GitBox


hadoop-yetus commented on PR #4462:
URL: https://github.com/apache/hadoop/pull/4462#issuecomment-1163408486

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  
hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 
107 unchanged - 1 fixed = 107 total (was 108)  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  
hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 0 new 
+ 101 unchanged - 1 fixed = 101 total (was 102)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core: 
The patch generated 0 new + 16 unchanged - 4 fixed = 16 total (was 20)  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   6m 23s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 108m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4462/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4462 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux bb344580 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f7971e6e99ebf68e91ce9a16c0430f33ba25d5d8 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4462/5/testReport/ |
   | Max. process+thread count | 1085 (vs. ulimit of 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4462: MAPREDUCE-7390 Remove WhiteBox in mapreduce module.

2022-06-22 Thread GitBox


hadoop-yetus commented on PR #4462:
URL: https://github.com/apache/hadoop/pull/4462#issuecomment-1163408047

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 47s |  |  
hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 
107 unchanged - 1 fixed = 107 total (was 108)  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  
hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 0 new 
+ 101 unchanged - 1 fixed = 101 total (was 102)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core: 
The patch generated 0 new + 16 unchanged - 4 fixed = 16 total (was 20)  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   6m 38s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 107m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4462/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4462 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 75d82209bcaa 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f7971e6e99ebf68e91ce9a16c0430f33ba25d5d8 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4462/6/testReport/ |
   | Max. process+thread count | 1487 (vs. ulimit of 

[jira] [Work logged] (HADOOP-18310) Add option and make 400 bad request retryable

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18310?focusedWorklogId=783893=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783893
 ]

ASF GitHub Bot logged work on HADOOP-18310:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 15:36
Start Date: 22/Jun/22 15:36
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on code in PR #4483:
URL: https://github.com/apache/hadoop/pull/4483#discussion_r903916520


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java:
##
@@ -214,7 +214,10 @@ protected Map<Class<? extends Exception>, RetryPolicy> createExceptionMap() {
 
 // policy on a 400/bad request still ambiguous.
 // Treated as an immediate failure
-policyMap.put(AWSBadRequestException.class, fail);
+RetryPolicy awsBadRequestExceptionRetryPolicy =

Review Comment:
   should the normal retry policy -which is expected to handle network errors- 
be applied here, or something else



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1177,4 +1177,8 @@ private Constants() {
*/
   public static final String FS_S3A_CREATE_HEADER = "fs.s3a.create.header";
 
+  public static final String FAIL_ON_AWS_BAD_REQUEST = 
"fs.s3a.retry.failOnAwsBadRequest";

Review Comment:
   1. needs to be all lower case with "." between words
   2. and javadocs with {@value}
   3. and something in the documentation





Issue Time Tracking
---

Worklog Id: (was: 783893)
Time Spent: 50m  (was: 40m)

> Add option and make 400 bad request retryable
> -
>
> Key: HADOOP-18310
> URL: https://issues.apache.org/jira/browse/HADOOP-18310
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.4
>Reporter: Tak-Lon (Stephen) Wu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When one is using a customized credential provider via 
> fs.s3a.aws.credentials.provider, e.g. 
> org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider, the credential 
> supplied by this pluggable provider can expire, returning an error code of 
> 400 as a bad request exception.
> Here, the current S3ARetryPolicy fails immediately and does not retry at 
> the S3A level.
> Our recent use case in HBase found that this could lead to a Region Server 
> being abandoned immediately on this exception, without retry, when the file 
> system is trying to open or S3AInputStream is trying to reopen the file. 
> Especially in the S3AInputStream use cases, we cannot find a good way to 
> retry outside of the file system semantics (because an ongoing stream that 
> is failing is currently considered to be in an irreparable state), and thus 
> we come up with this optional flag for retrying in S3A.
> {code}
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The provided 
> token has expired. (Service: Amazon S3; Status Code: 400; Error Code: 
> ExpiredToken; Request ID: XYZ; S3 Extended Request ID: ABC; Proxy: null), S3 
> Extended Request ID: 123
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1862)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1415)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1384)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1154)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:811)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:695)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:559)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:539)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5453)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5400)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1524)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem$InputStreamCallbacksImpl.getObject(S3AFileSystem.java:1506)
>   at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$reopen$0(S3AInputStream.java:217)
>   at 

[GitHub] [hadoop] steveloughran commented on a diff in pull request #4483: HADOOP-18310 Add option and make 400 bad request retryable

2022-06-22 Thread GitBox


steveloughran commented on code in PR #4483:
URL: https://github.com/apache/hadoop/pull/4483#discussion_r903916520


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java:
##
@@ -214,7 +214,10 @@ protected Map<Class<? extends Exception>, RetryPolicy> createExceptionMap() {
 
 // policy on a 400/bad request still ambiguous.
 // Treated as an immediate failure
-policyMap.put(AWSBadRequestException.class, fail);
+RetryPolicy awsBadRequestExceptionRetryPolicy =

Review Comment:
   should the normal retry policy -which is expected to handle network errors- 
be applied here, or something else



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1177,4 +1177,8 @@ private Constants() {
*/
   public static final String FS_S3A_CREATE_HEADER = "fs.s3a.create.header";
 
+  public static final String FAIL_ON_AWS_BAD_REQUEST = 
"fs.s3a.retry.failOnAwsBadRequest";

Review Comment:
   1. needs to be all lower case with "." between words
   2. and javadocs with {@value}
   3. and something in the documentation
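
   Putting those points together, one possible shape for the change; the property name, default, and the reuse of an existing `retryIdempotentCalls` policy are illustrative assumptions, not the final patch:

   ```java
   // Sketch only -- names and default are suggestions, not the final API.
   // Constants.java
   /**
    * Should a 400 "bad request" response fail fast rather than retry?
    * Value: {@value}.
    */
   public static final String FAIL_ON_AWS_BAD_REQUEST =
       "fs.s3a.retry.fail.on.aws.bad.request";

   /** Default of {@link #FAIL_ON_AWS_BAD_REQUEST}: {@value}. */
   public static final boolean DEFAULT_FAIL_ON_AWS_BAD_REQUEST = true;

   // S3ARetryPolicy#createExceptionMap(), assuming the policy keeps a
   // reference to the Configuration it was constructed with.
   RetryPolicy badRequestPolicy =
       configuration.getBoolean(FAIL_ON_AWS_BAD_REQUEST,
           DEFAULT_FAIL_ON_AWS_BAD_REQUEST)
           ? fail                   // current behaviour: treat 400 as terminal
           : retryIdempotentCalls;  // e.g. let an expired token be refreshed
   policyMap.put(AWSBadRequestException.class, badRequestPolicy);
   ```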



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18310) Add option and make 400 bad request retryable

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18310?focusedWorklogId=783891=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783891
 ]

ASF GitHub Bot logged work on HADOOP-18310:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 15:32
Start Date: 22/Jun/22 15:32
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on PR #4483:
URL: https://github.com/apache/hadoop/pull/4483#issuecomment-1163267093

   which s3 endpoint did you run the hadoop-aws integration tests against, and 
what was the full mvn command line used? thanks




Issue Time Tracking
---

Worklog Id: (was: 783891)
Time Spent: 40m  (was: 0.5h)

> Add option and make 400 bad request retryable
> -
>
> Key: HADOOP-18310
> URL: https://issues.apache.org/jira/browse/HADOOP-18310
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.4
>Reporter: Tak-Lon (Stephen) Wu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When one is using a customized credential provider via 
> fs.s3a.aws.credentials.provider, e.g. 
> org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider, the credential 
> supplied by this pluggable provider can expire, returning an error code of 
> 400 as a bad request exception.
> Here, the current S3ARetryPolicy fails immediately and does not retry at 
> the S3A level.
> Our recent use case in HBase found that this could lead to a Region Server 
> being abandoned immediately on this exception, without retry, when the file 
> system is trying to open or S3AInputStream is trying to reopen the file. 
> Especially in the S3AInputStream use cases, we cannot find a good way to 
> retry outside of the file system semantics (because an ongoing stream that 
> is failing is currently considered to be in an irreparable state), and thus 
> we come up with this optional flag for retrying in S3A.
> {code}
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The provided 
> token has expired. (Service: Amazon S3; Status Code: 400; Error Code: 
> ExpiredToken; Request ID: XYZ; S3 Extended Request ID: ABC; Proxy: null), S3 
> Extended Request ID: 123
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1862)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1415)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1384)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1154)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:811)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:695)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:559)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:539)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5453)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5400)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1524)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem$InputStreamCallbacksImpl.getObject(S3AFileSystem.java:1506)
>   at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$reopen$0(S3AInputStream.java:217)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:117)
>   ... 35 more
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4483: HADOOP-18310 Add option and make 400 bad request retryable

2022-06-22 Thread GitBox


steveloughran commented on PR #4483:
URL: https://github.com/apache/hadoop/pull/4483#issuecomment-1163267093

   which s3 endpoint did you run the hadoop-aws integration tests against, and 
what was the full mvn command line used? thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4463: YARN-11187. Remove WhiteBox in yarn module.

2022-06-22 Thread GitBox


slfan1989 commented on PR #4463:
URL: https://github.com/apache/hadoop/pull/4463#issuecomment-1163254696

   @jojochuang please help to review the code.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4484: YARN-11192. TestRouterWebServicesREST failing after YARN-9827.

2022-06-22 Thread GitBox


slfan1989 commented on PR #4484:
URL: https://github.com/apache/hadoop/pull/4484#issuecomment-1163252221

   @goiri @ayushtkn  Please help me to review the code, Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4489: HADOOP-18292. Fix s3 select tests when running against unsupported storage class

2022-06-22 Thread GitBox


hadoop-yetus commented on PR #4489:
URL: https://github.com/apache/hadoop/pull/4489#issuecomment-1163240056

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4489/1/artifact/out/blanks-eol.txt)
 |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 43s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 106m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4489/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4489 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux 1cdceefc6753 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ea2ea4b2d75ef01cd9c23f5212f059ec5e266ccf |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4489/1/testReport/ |
   | Max. process+thread count | 529 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4489/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically 

[jira] [Work logged] (HADOOP-18292) s3a storage class reduced redundancy breaks s3 select tests

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18292?focusedWorklogId=783885=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783885
 ]

ASF GitHub Bot logged work on HADOOP-18292:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 15:13
Start Date: 22/Jun/22 15:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4489:
URL: https://github.com/apache/hadoop/pull/4489#issuecomment-1163240056

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4489/1/artifact/out/blanks-eol.txt)
 |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 43s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 106m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4489/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4489 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux 1cdceefc6753 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ea2ea4b2d75ef01cd9c23f5212f059ec5e266ccf |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4450: YARN-11183. Federation: Remove outdated ApplicationHomeSubCluster in …

2022-06-22 Thread GitBox


hadoop-yetus commented on PR #4450:
URL: https://github.com/apache/hadoop/pull/4450#issuecomment-1163174392

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 40s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  10m 25s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   8m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   2m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   9m 10s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   6m 11s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   5m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  19m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   7m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  10m  5s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  cc  |  10m  5s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  10m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  cc  |   9m 52s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   9m 52s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   2m 13s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4450/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 87 unchanged - 
0 fixed = 88 total (was 87)  |
   | +1 :green_heart: |  mvnsite  |   8m 51s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   5m 40s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   5m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  19m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 235m 31s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4450/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn in the patch passed.  |
   | +1 :green_heart: |  unit  |   5m 46s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 41s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  |  98m 30s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 565m 26s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.router.webapp.TestRouterWebServicesREST |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4450/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4450 |
   | Optional Tests | dupname asflicense codespell detsecrets compile javac 
javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle cc buflint 
bufcompat |
   | uname | 

[jira] [Created] (HADOOP-18311) Upgrade dependencies to address several CVEs

2022-06-22 Thread Steve Vaughan (Jira)
Steve Vaughan created HADOOP-18311:
--

 Summary: Upgrade dependencies to address several CVEs
 Key: HADOOP-18311
 URL: https://issues.apache.org/jira/browse/HADOOP-18311
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.3.3, 3.3.4
Reporter: Steve Vaughan
 Fix For: 3.3.4


The following CVEs can be addressed by upgrading dependencies within the build. 
 This includes a replacement of HTrace with a noop implementation.
 * CVE-2018-7489
 * CVE-2020-10663
 * CVE-2020-28491
 * CVE-2020-35490
 * CVE-2020-35491
 * CVE-2020-36518
 * PRISMA-2021-0182

This addresses all of the CVEs from 3.3.3 except for ones that would require 
upgrading Netty to 4.x.  I'll be submitting a pull request for 3.3.4.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18290) Fix some compatibility issues with 3.3.3 release notes

2022-06-22 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557456#comment-17557456
 ] 

Steve Loughran commented on HADOOP-18290:
-

if there's a way to fix up the automation, i'm happy. but currently I'm trying 
to automate even more than we do
https://github.com/steveloughran/validate-hadoop-client-artifacts

for every release we create multiple release candidates; i did about 10+ runs 
last time hitting problems with docker timeouts, laptop connectivity, getting 
the maven artifacts up etc. we shouldn't have any manual stages at all.

> Fix some compatibility issues with 3.3.3 release notes
> --
>
> Key: HADOOP-18290
> URL: https://issues.apache.org/jira/browse/HADOOP-18290
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 3.3.3
>Reporter: JiangHua Zhu
>Priority: Major
> Attachments: image-2022-06-14-10-27-23-027.png, 
> image-2022-06-14-10-28-53-822.png
>
>
> 3.3.3 Release Notes:
> https://hadoop.apache.org/docs/r3.3.3/hadoop-project-dist/hadoop-common/release/3.3.3/RELEASENOTES.3.3.3.html
> There are some compatibility issues here. E.g:
>  !image-2022-06-14-10-27-23-027.png! 
> I think this is happening due to a syntax issue.
> It would be more appropriate to change it to this:
>  !image-2022-06-14-10-28-53-822.png! 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18293) Release Hadoop 3.3.4 critical fix update

2022-06-22 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18293.
-
Resolution: Duplicate

forgot about this when i created HADOOP-18305; closing

> Release Hadoop 3.3.4 critical fix update
> 
>
> Key: HADOOP-18293
> URL: https://issues.apache.org/jira/browse/HADOOP-18293
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Create a new release off the branch-3.3.3 line with a few more changes
> * wrap up of security changes
> * cut hadoop-cos out of hadoop-cloud-storage as its dependencies break s3a 
> client...reinstate once the updated jar is tested
> * try to get an arm build out tool



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18292) s3a storage class reduced redundancy breaks s3 select tests

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18292:

Labels: pull-request-available  (was: )

> s3a storage class reduced redundancy breaks s3 select tests
> ---
>
> Key: HADOOP-18292
> URL: https://issues.apache.org/jira/browse/HADOOP-18292
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Monthon Klongklaew
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> when you set your fs client to work with reduced redundancy, the s3 select 
> tests fail
> probably need to clear the storage class option on the bucket before running 
> those suites
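
A minimal sketch of the kind of fix being described, assuming the hadoop-aws
test helpers S3ATestUtils.removeBaseAndBucketOverrides() and the
Constants.STORAGE_CLASS key ("fs.s3a.create.storage.class") are available;
the subclass name here is hypothetical and the actual PR #4489 diff may differ:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.s3a.Constants;
import org.apache.hadoop.fs.s3a.S3ATestUtils;
import org.apache.hadoop.fs.s3a.select.AbstractS3SelectTest;

public class ITestS3SelectStorageClassExample extends AbstractS3SelectTest {

  @Override
  protected Configuration createConfiguration() {
    Configuration conf = super.createConfiguration();
    // strip any base or per-bucket storage class override so that test
    // objects are written as STANDARD, which S3 Select supports
    S3ATestUtils.removeBaseAndBucketOverrides(conf, Constants.STORAGE_CLASS);
    return conf;
  }
}
```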



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18292) s3a storage class reduced redundancy breaks s3 select tests

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18292?focusedWorklogId=783855=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783855
 ]

ASF GitHub Bot logged work on HADOOP-18292:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 13:25
Start Date: 22/Jun/22 13:25
Worklog Time Spent: 10m 
  Work Description: monthonk opened a new pull request, #4489:
URL: https://github.com/apache/hadoop/pull/4489

   ### Description of PR
   HADOOP-18292. when you set your fs client to work with unsupported storage 
classes, the s3 select tests fail.
   
   Fix by removing storage class option before running S3 Select tests.
   
   ### How was this patch tested?
   Tested with a bucket in `eu-west-1` with `mvn -Dparallel-tests 
-DtestsThreadCount=16` clean verify
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [x] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
---

Worklog Id: (was: 783855)
Remaining Estimate: 0h
Time Spent: 10m

> s3a storage class reduced redundancy breaks s3 select tests
> ---
>
> Key: HADOOP-18292
> URL: https://issues.apache.org/jira/browse/HADOOP-18292
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Monthon Klongklaew
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> when you set your fs client to work with reduced redundancy, the s3 select 
> tests fail
> probably need to clear the storage class option on the bucket before running 
> those suites



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] monthonk opened a new pull request, #4489: HADOOP-18292. Fix s3 select tests when running against unsupported storage class

2022-06-22 Thread GitBox


monthonk opened a new pull request, #4489:
URL: https://github.com/apache/hadoop/pull/4489

   ### Description of PR
   HADOOP-18292. when you set your fs client to work with unsupported storage 
classes, the s3 select tests fail.
   
   Fix by removing storage class option before running S3 Select tests.
   
   ### How was this patch tested?
   Tested with a bucket in `eu-west-1` with `mvn -Dparallel-tests 
-DtestsThreadCount=16` clean verify
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [x] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] 9uapaw closed pull request #4468: YARN-11188. Only files belong to the first file controller are removed even if multiple log aggregation file controllers are configured

2022-06-22 Thread GitBox


9uapaw closed pull request #4468: YARN-11188. Only files belong to the first 
file controller are removed even if multiple log aggregation file controllers 
are configured
URL: https://github.com/apache/hadoop/pull/4468


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] 9uapaw commented on pull request #4468: YARN-11188. Only files belong to the first file controller are removed even if multiple log aggregation file controllers are configured

2022-06-22 Thread GitBox


9uapaw commented on PR #4468:
URL: https://github.com/apache/hadoop/pull/4468#issuecomment-1163048167

   Thanks for the fix @szilard-nemeth. Committed to trunk.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18305) Release Hadoop 3.3.4: minor update of hadoop-3.3.3

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18305?focusedWorklogId=783838=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783838
 ]

ASF GitHub Bot logged work on HADOOP-18305:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 12:32
Start Date: 22/Jun/22 12:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4482:
URL: https://github.com/apache/hadoop/pull/4482#issuecomment-1163039653

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  10m 26s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 25s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 48s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  18m 45s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |  20m 57s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   7m  5s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  | 114m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  24m 22s |  |  branch-3.3 passed  |
   | +0 :ok: |  mvndep  |  25m 16s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  | 144m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  20m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   7m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  54m 14s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 719m 10s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4482/1/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   2m  6s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1090m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.mapred.TestLocalDistributedCacheManager |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4482/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4482 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 7e991685e284 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 98dbdb640d2b17a2f2319644b13bb5a4c998f4fe |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4482/1/testReport/ |
   | Max. process+thread count | 1977 (vs. ulimit of 5500) |
   | modules | C: hadoop-build-tools hadoop-project 
hadoop-common-project/hadoop-annotations hadoop-project-dist hadoop-assemblies 
hadoop-maven-plugins hadoop-common-project/hadoop-minikdc 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-auth-examples 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-nfs 
hadoop-common-project/hadoop-kms hadoop-common-project/hadoop-registry 
hadoop-common-project hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4482: HADOOP-18305. Preparing for 3.3.4 release

2022-06-22 Thread GitBox


hadoop-yetus commented on PR #4482:
URL: https://github.com/apache/hadoop/pull/4482#issuecomment-1163039653

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  10m 26s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 25s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 48s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  18m 45s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |  20m 57s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   7m  5s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  | 114m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  24m 22s |  |  branch-3.3 passed  |
   | +0 :ok: |  mvndep  |  25m 16s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  | 144m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  20m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   7m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  54m 14s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 719m 10s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4482/1/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   2m  6s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1090m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.mapred.TestLocalDistributedCacheManager |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4482/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4482 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 7e991685e284 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 98dbdb640d2b17a2f2319644b13bb5a4c998f4fe |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4482/1/testReport/ |
   | Max. process+thread count | 1977 (vs. ulimit of 5500) |
   | modules | C: hadoop-build-tools hadoop-project 
hadoop-common-project/hadoop-annotations hadoop-project-dist hadoop-assemblies 
hadoop-maven-plugins hadoop-common-project/hadoop-minikdc 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-auth-examples 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-nfs 
hadoop-common-project/hadoop-kms hadoop-common-project/hadoop-registry 
hadoop-common-project hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-native-client 
hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-hdfs-project/hadoop-hdfs-nfs 
hadoop-hdfs-project/hadoop-hdfs-rbf hadoop-hdfs-project 
hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 

[jira] [Commented] (HADOOP-18305) Release Hadoop 3.3.4: minor update of hadoop-3.3.3

2022-06-22 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557413#comment-17557413
 ] 

Steve Loughran commented on HADOOP-18305:
-

going to leave out HADOOP-15984 as it is too big for a minor release; 
retargeting it to branch-3.3, and we will have it in the next main release.

> Release Hadoop 3.3.4: minor update of hadoop-3.3.3
> --
>
> Key: HADOOP-18305
> URL: https://issues.apache.org/jira/browse/HADOOP-18305
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.3.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Create a Hadoop 3.3.4 release with
> * critical fixes
> * ARM artifacts as well as the intel ones



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18305) Release Hadoop 3.3.4: minor update of hadoop-3.3.3

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18305?focusedWorklogId=783829=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783829
 ]

ASF GitHub Bot logged work on HADOOP-18305:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 12:09
Start Date: 22/Jun/22 12:09
Worklog Time Spent: 10m 
  Work Description: steveloughran merged PR #4482:
URL: https://github.com/apache/hadoop/pull/4482




Issue Time Tracking
---

Worklog Id: (was: 783829)
Time Spent: 1h  (was: 50m)

> Release Hadoop 3.3.4: minor update of hadoop-3.3.3
> --
>
> Key: HADOOP-18305
> URL: https://issues.apache.org/jira/browse/HADOOP-18305
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.3.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Create a Hadoop 3.3.4 release with
> * critical fixes
> * ARM artifacts as well as the intel ones



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #4482: HADOOP-18305. Preparing for 3.3.4 release

2022-06-22 Thread GitBox


steveloughran merged PR #4482:
URL: https://github.com/apache/hadoop/pull/4482


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18305) Release Hadoop 3.3.4: minor update of hadoop-3.3.3

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18305?focusedWorklogId=783828=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783828
 ]

ASF GitHub Bot logged work on HADOOP-18305:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 12:04
Start Date: 22/Jun/22 12:04
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on PR #4482:
URL: https://github.com/apache/hadoop/pull/4482#issuecomment-1163013956

   This is still in yetus after 17h; having things span midnight is possibly 
creating problems.
   
   I am going to merge anyway as the build part is happy.




Issue Time Tracking
---

Worklog Id: (was: 783828)
Time Spent: 50m  (was: 40m)

> Release Hadoop 3.3.4: minor update of hadoop-3.3.3
> --
>
> Key: HADOOP-18305
> URL: https://issues.apache.org/jira/browse/HADOOP-18305
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.3.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Create a Hadoop 3.3.4 release with
> * critical fixes
> * ARM artifacts as well as the intel ones



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4482: HADOOP-18305. Preparing for 3.3.4 release

2022-06-22 Thread GitBox


steveloughran commented on PR #4482:
URL: https://github.com/apache/hadoop/pull/4482#issuecomment-1163013956

   This is still in yetus after 17h; having things span midnight is possibly 
creating problems.
   
   I am going to merge anyway as the build part is happy.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18303) Remove shading exclusion of javax.ws.rs-api from hadoop-client-runtime

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18303?focusedWorklogId=783827=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783827
 ]

ASF GitHub Bot logged work on HADOOP-18303:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 12:01
Start Date: 22/Jun/22 12:01
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on PR #4461:
URL: https://github.com/apache/hadoop/pull/4461#issuecomment-1163011238

   bq. Looks like upgrading jersey to get rid of jsr311-api is the only 
solution then.
   
   OK, but then no aspects of jersey 2 can be left unshaded, or every project 
which still uses 1.19 is going to break. We don't want to do that.




Issue Time Tracking
---

Worklog Id: (was: 783827)
Time Spent: 2h 10m  (was: 2h)

> Remove shading exclusion of javax.ws.rs-api from hadoop-client-runtime
> --
>
> Key: HADOOP-18303
> URL: https://issues.apache.org/jira/browse/HADOOP-18303
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As part of HADOOP-18033, we have excluded shading of javax.ws.rs-api from 
> both hadoop-client-runtime and hadoop-client-minicluster. This has caused 
> issues for downstreamers e.g. 
> [https://github.com/apache/incubator-kyuubi/issues/2904], more discussions.
> We should put the shading back in hadoop-client-runtime to fix CNFE issues 
> for downstreamers.
> cc [~ayushsaxena] [~pan3793] 
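
The failure mode under discussion can be checked with a classpath probe; this
sketch is illustrative (the probed class and program name are not taken from
the issue), but any javax.ws.rs class that downstream code was compiled
against would behave the same way:

```java
/**
 * Probe whether the JAX-RS 1.x API is reachable at runtime. With the
 * shading exclusion in place, hadoop-client-runtime neither bundles nor
 * relocates these classes, so downstream code sees ClassNotFoundException.
 */
public final class JaxRsClasspathProbe {
  public static void main(String[] args) {
    try {
      Class.forName("javax.ws.rs.core.MediaType");
      System.out.println("javax.ws.rs API present on the classpath");
    } catch (ClassNotFoundException e) {
      System.out.println("javax.ws.rs API missing: the CNFE downstreamers hit");
    }
  }
}
```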



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4461: HADOOP-18303. Remove shading exclusion of javax.ws.rs-api from hadoop-client-runtime

2022-06-22 Thread GitBox


steveloughran commented on PR #4461:
URL: https://github.com/apache/hadoop/pull/4461#issuecomment-1163011238

   bq. Looks like upgrading jersey to get rid of jsr311-api is the only 
solution then.
   
   OK, but then no aspects of jersey 2 can be left unshaded, or every project 
which still uses 1.19 is going to break. We don't want to do that.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3855: MAPREDUCE-7371. DistributedCache alternative APIs should not use DistributedCache APIs internally

2022-06-22 Thread GitBox


steveloughran commented on PR #3855:
URL: https://github.com/apache/hadoop/pull/3855#issuecomment-1163008267

   pulling this in to branch-3.3


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #4472: MAPREDUCE-7391. TestLocalDistributedCacheManager failing after HADOOP-16202

2022-06-22 Thread GitBox


steveloughran merged PR #4472:
URL: https://github.com/apache/hadoop/pull/4472


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18258) Merging of S3A Audit Logs

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18258?focusedWorklogId=783822=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783822
 ]

ASF GitHub Bot logged work on HADOOP-18258:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 11:40
Start Date: 22/Jun/22 11:40
Worklog Time Spent: 10m 
  Work Description: sravanigadey commented on code in PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#discussion_r903633874


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestS3AAuditLogMerger.java:
##
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+
+import org.junit.After;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * MergerTest will implement different tests on Merger class methods.
+ */
+public class TestS3AAuditLogMerger {

Review Comment:
   extended AbstractHadoopTestBase.
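
For reference, a minimal sketch of the suggested structure, assuming
org.apache.hadoop.test.AbstractHadoopTestBase (which supplies test timeouts
and thread naming) and a merger entry point mergeFiles(String directory) as
in the PR; the test body itself is illustrative, not the PR's tests:

```java
import java.io.File;
import java.nio.file.Files;

import org.junit.Test;

import org.apache.hadoop.test.AbstractHadoopTestBase;

import static org.junit.Assert.assertFalse;

public class TestS3AAuditLogMergerExample extends AbstractHadoopTestBase {

  private final S3AAuditLogMerger merger = new S3AAuditLogMerger();

  @Test
  public void testMergeOfEmptyDirectory() throws Exception {
    File emptyDir = Files.createTempDirectory("audit-logs").toFile();
    merger.mergeFiles(emptyDir.getPath());
    // no source logs, so no merged file should be produced
    assertFalse(new File("AuditLogFile").exists());
  }
}
```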





Issue Time Tracking
---

Worklog Id: (was: 783822)
Time Spent: 6h 20m  (was: 6h 10m)

> Merging of S3A Audit Logs
> -
>
> Key: HADOOP-18258
> URL: https://issues.apache.org/jira/browse/HADOOP-18258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sravani Gadey
>Assignee: Sravani Gadey
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> Merging audit log files containing huge number of audit logs collected from a 
> job like Hive or Spark job containing various S3 requests like list, head, 
> get and put requests.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sravanigadey commented on a diff in pull request #4383: HADOOP-18258. Merging of S3A Audit Logs

2022-06-22 Thread GitBox


sravanigadey commented on code in PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#discussion_r903633874


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestS3AAuditLogMerger.java:
##
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+
+import org.junit.After;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * MergerTest will implement different tests on Merger class methods.
+ */
+public class TestS3AAuditLogMerger {

Review Comment:
   extended AbstractHadoopTestBase.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18258) Merging of S3A Audit Logs

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18258?focusedWorklogId=783821=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783821
 ]

ASF GitHub Bot logged work on HADOOP-18258:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 11:38
Start Date: 22/Jun/22 11:38
Worklog Time Spent: 10m 
  Work Description: sravanigadey commented on code in PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#discussion_r903630927


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/AuditTool.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.Closeable;
+import java.io.EOFException;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FilterFileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_COMMAND_ARGUMENT_ERROR;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SERVICE_UNAVAILABLE;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SUCCESS;
+
+/**
+ * AuditTool is a Command Line Interface.
+ * i.e, it's functionality is to parse the merged audit log file.
+ * and generate avro file.
+ */
+public class AuditTool extends Configured implements Tool, Closeable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(AuditTool.class);
+
+  private final String entryPoint = "s3audit";
+
+  private PrintWriter out;
+
+  // Exit codes
+  private static final int SUCCESS = EXIT_SUCCESS;
+  private static final int INVALID_ARGUMENT = EXIT_COMMAND_ARGUMENT_ERROR;
+
+  /**
+   * Error String when the wrong FS is used for binding: {@value}.
+   **/
+  @VisibleForTesting
+  public static final String WRONG_FILESYSTEM = "Wrong filesystem for ";
+
+  private final String usage = entryPoint + "  s3a://BUCKET\n";
+
+  public AuditTool() {
+  }
+
+  /**
+   * Tells us the usage of the AuditTool by commands.
+   *
+   * @return the string USAGE
+   */
+  public String getUsage() {
+return usage;
+  }
+
+  /**
+   * This run method in AuditTool takes S3 bucket path.
+   * which contains audit log files from command line arguments.
+   * and merge the audit log files present in that path into single file in.
+   * local system.
+   *
+   * @param args command specific arguments.
+   * @return SUCCESS i.e, '0', which is an exit code.
+   * @throws Exception on any failure.
+   */
+  @Override
+  public int run(String[] args) throws Exception {
+List<String> argv = new ArrayList<>(Arrays.asList(args));
+if (argv.isEmpty()) {
+  errorln(getUsage());
+  throw invalidArgs("No bucket specified");
+}
+//Path of audit log files in s3 bucket
+Path s3LogsPath = new Path(argv.get(0));
+
+//Setting the file system
+URI fsURI = toUri(String.valueOf(s3LogsPath));
+S3AFileSystem s3AFileSystem =
+bindFilesystem(FileSystem.newInstance(fsURI, getConf()));
+RemoteIterator<LocatedFileStatus> listOfS3LogFiles =
+s3AFileSystem.listFiles(s3LogsPath, true);
+
+//Merging local audit files into a single file
+File s3aLogsDirectory = new File(s3LogsPath.getName());
+boolean s3aLogsDirectoryCreation = false;
+if (!s3aLogsDirectory.exists()) {
+  

[GitHub] [hadoop] sravanigadey commented on a diff in pull request #4383: HADOOP-18258. Merging of S3A Audit Logs

2022-06-22 Thread GitBox


sravanigadey commented on code in PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#discussion_r903630927


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/AuditTool.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.Closeable;
+import java.io.EOFException;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FilterFileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_COMMAND_ARGUMENT_ERROR;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SERVICE_UNAVAILABLE;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SUCCESS;
+
+/**
+ * AuditTool is a Command Line Interface.
+ * i.e, it's functionality is to parse the merged audit log file.
+ * and generate avro file.
+ */
+public class AuditTool extends Configured implements Tool, Closeable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(AuditTool.class);
+
+  private final String entryPoint = "s3audit";
+
+  private PrintWriter out;
+
+  // Exit codes
+  private static final int SUCCESS = EXIT_SUCCESS;
+  private static final int INVALID_ARGUMENT = EXIT_COMMAND_ARGUMENT_ERROR;
+
+  /**
+   * Error String when the wrong FS is used for binding: {@value}.
+   **/
+  @VisibleForTesting
+  public static final String WRONG_FILESYSTEM = "Wrong filesystem for ";
+
+  private final String usage = entryPoint + "  s3a://BUCKET\n";
+
+  public AuditTool() {
+  }
+
+  /**
+   * Tells us the usage of the AuditTool by commands.
+   *
+   * @return the string USAGE
+   */
+  public String getUsage() {
+return usage;
+  }
+
+  /**
+   * This run method in AuditTool takes S3 bucket path.
+   * which contains audit log files from command line arguments.
+   * and merge the audit log files present in that path into single file in.
+   * local system.
+   *
+   * @param args command specific arguments.
+   * @return SUCCESS i.e, '0', which is an exit code.
+   * @throws Exception on any failure.
+   */
+  @Override
+  public int run(String[] args) throws Exception {
+List<String> argv = new ArrayList<>(Arrays.asList(args));
+if (argv.isEmpty()) {
+  errorln(getUsage());
+  throw invalidArgs("No bucket specified");
+}
+//Path of audit log files in s3 bucket
+Path s3LogsPath = new Path(argv.get(0));
+
+//Setting the file system
+URI fsURI = toUri(String.valueOf(s3LogsPath));
+S3AFileSystem s3AFileSystem =
+bindFilesystem(FileSystem.newInstance(fsURI, getConf()));
+RemoteIterator<LocatedFileStatus> listOfS3LogFiles =
+s3AFileSystem.listFiles(s3LogsPath, true);
+
+//Merging local audit files into a single file
+File s3aLogsDirectory = new File(s3LogsPath.getName());
+boolean s3aLogsDirectoryCreation = false;
+if (!s3aLogsDirectory.exists()) {
+  s3aLogsDirectoryCreation = s3aLogsDirectory.mkdir();
+}
+if(s3aLogsDirectoryCreation) {
+  while (listOfS3LogFiles.hasNext()) {
+Path s3LogFilePath = listOfS3LogFiles.next().getPath();
+File s3LogLocalFilePath =

Review Comment:
   modified file name to s3LogLocalFile
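
As an aside on wiring: a Configured/Tool implementation like the quoted
AuditTool is conventionally launched through ToolRunner. A minimal sketch
(the PR may set this up differently):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ExitUtil;
import org.apache.hadoop.util.ToolRunner;

public final class AuditToolLauncher {
  public static void main(String[] args) throws Exception {
    // ToolRunner strips the generic options (-D key=value, -conf file, ...)
    // and hands the remaining arguments to AuditTool.run(String[])
    int exitCode = ToolRunner.run(new Configuration(), new AuditTool(), args);
    ExitUtil.terminate(exitCode);
  }
}
```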



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Work logged] (HADOOP-18258) Merging of S3A Audit Logs

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18258?focusedWorklogId=783820=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783820
 ]

ASF GitHub Bot logged work on HADOOP-18258:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 11:37
Start Date: 22/Jun/22 11:37
Worklog Time Spent: 10m 
  Work Description: sravanigadey commented on code in PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#discussion_r903629695


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/AuditTool.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.Closeable;
+import java.io.EOFException;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FilterFileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_COMMAND_ARGUMENT_ERROR;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SERVICE_UNAVAILABLE;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SUCCESS;
+
+/**
+ * AuditTool is a Command Line Interface.
+ * i.e, it's functionality is to parse the merged audit log file.
+ * and generate avro file.
+ */
+public class AuditTool extends Configured implements Tool, Closeable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(AuditTool.class);
+
+  private final String entryPoint = "s3audit";
+
+  private PrintWriter out;
+
+  // Exit codes
+  private static final int SUCCESS = EXIT_SUCCESS;
+  private static final int INVALID_ARGUMENT = EXIT_COMMAND_ARGUMENT_ERROR;
+
+  /**
+   * Error String when the wrong FS is used for binding: {@value}.
+   **/
+  @VisibleForTesting
+  public static final String WRONG_FILESYSTEM = "Wrong filesystem for ";
+
+  private final String usage = entryPoint + "  s3a://BUCKET\n";
+
+  public AuditTool() {
+  }
+
+  /**
+   * Tells us the usage of the AuditTool by commands.
+   *
+   * @return the string USAGE
+   */
+  public String getUsage() {
+return usage;
+  }
+
+  /**
+   * This run method in AuditTool takes S3 bucket path.
+   * which contains audit log files from command line arguments.
+   * and merge the audit log files present in that path into single file in.
+   * local system.
+   *
+   * @param args command specific arguments.
+   * @return SUCCESS i.e, '0', which is an exit code.
+   * @throws Exception on any failure.
+   */
+  @Override
+  public int run(String[] args) throws Exception {
+List<String> argv = new ArrayList<>(Arrays.asList(args));
+if (argv.isEmpty()) {
+  errorln(getUsage());
+  throw invalidArgs("No bucket specified");
+}
+//Path of audit log files in s3 bucket
+Path s3LogsPath = new Path(argv.get(0));
+
+//Setting the file system
+URI fsURI = toUri(String.valueOf(s3LogsPath));
+S3AFileSystem s3AFileSystem =
+bindFilesystem(FileSystem.newInstance(fsURI, getConf()));
+RemoteIterator<LocatedFileStatus> listOfS3LogFiles =
+s3AFileSystem.listFiles(s3LogsPath, true);
+
+//Merging local audit files into a single file
+File s3aLogsDirectory = new File(s3LogsPath.getName());
+boolean s3aLogsDirectoryCreation = false;
+if (!s3aLogsDirectory.exists()) {
+  

[GitHub] [hadoop] sravanigadey commented on a diff in pull request #4383: HADOOP-18258. Merging of S3A Audit Logs

2022-06-22 Thread GitBox


sravanigadey commented on code in PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#discussion_r903629695


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/AuditTool.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.Closeable;
+import java.io.EOFException;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FilterFileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_COMMAND_ARGUMENT_ERROR;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SERVICE_UNAVAILABLE;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SUCCESS;
+
+/**
+ * AuditTool is a Command Line Interface.
+ * i.e, it's functionality is to parse the merged audit log file.
+ * and generate avro file.
+ */
+public class AuditTool extends Configured implements Tool, Closeable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(AuditTool.class);
+
+  private final String entryPoint = "s3audit";
+
+  private PrintWriter out;
+
+  // Exit codes
+  private static final int SUCCESS = EXIT_SUCCESS;
+  private static final int INVALID_ARGUMENT = EXIT_COMMAND_ARGUMENT_ERROR;
+
+  /**
+   * Error String when the wrong FS is used for binding: {@value}.
+   **/
+  @VisibleForTesting
+  public static final String WRONG_FILESYSTEM = "Wrong filesystem for ";
+
+  private final String usage = entryPoint + "  s3a://BUCKET\n";
+
+  public AuditTool() {
+  }
+
+  /**
+   * Tells us the usage of the AuditTool by commands.
+   *
+   * @return the string USAGE
+   */
+  public String getUsage() {
+return usage;
+  }
+
+  /**
+   * This run method in AuditTool takes S3 bucket path.
+   * which contains audit log files from command line arguments.
+   * and merge the audit log files present in that path into single file in.
+   * local system.
+   *
+   * @param args command specific arguments.
+   * @return SUCCESS i.e, '0', which is an exit code.
+   * @throws Exception on any failure.
+   */
+  @Override
+  public int run(String[] args) throws Exception {
+List<String> argv = new ArrayList<>(Arrays.asList(args));
+if (argv.isEmpty()) {
+  errorln(getUsage());
+  throw invalidArgs("No bucket specified");
+}
+//Path of audit log files in s3 bucket
+Path s3LogsPath = new Path(argv.get(0));
+
+//Setting the file system
+URI fsURI = toUri(String.valueOf(s3LogsPath));
+S3AFileSystem s3AFileSystem =
+bindFilesystem(FileSystem.newInstance(fsURI, getConf()));
+RemoteIterator<LocatedFileStatus> listOfS3LogFiles =
+s3AFileSystem.listFiles(s3LogsPath, true);
+
+//Merging local audit files into a single file
+File s3aLogsDirectory = new File(s3LogsPath.getName());
+boolean s3aLogsDirectoryCreation = false;
+if (!s3aLogsDirectory.exists()) {
+  s3aLogsDirectoryCreation = s3aLogsDirectory.mkdir();
+}
+if(s3aLogsDirectoryCreation) {

Review Comment:
   added space
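
A condensed sketch of the listing-and-copy pattern the quoted run() method is
built around, assuming a bound filesystem and an existing local target
directory; the PR's own copy mechanics (commons-io FileUtils et al.) may
differ:

```java
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public final class AuditLogFetcher {

  /** Recursively list every log under s3LogsPath and copy it into localDir. */
  static void fetchLogs(FileSystem fs, Path s3LogsPath, File localDir)
      throws IOException {
    RemoteIterator<LocatedFileStatus> logs = fs.listFiles(s3LogsPath, true);
    while (logs.hasNext()) {
      Path remoteLog = logs.next().getPath();
      Path localLog = new Path(new File(localDir, remoteLog.getName()).toURI());
      fs.copyToLocalFile(remoteLog, localLog);
    }
  }
}
```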



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Work logged] (HADOOP-18258) Merging of S3A Audit Logs

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18258?focusedWorklogId=783819=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783819
 ]

ASF GitHub Bot logged work on HADOOP-18258:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 11:35
Start Date: 22/Jun/22 11:35
Worklog Time Spent: 10m 
  Work Description: sravanigadey commented on code in PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#discussion_r903626816


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/AuditTool.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.Closeable;
+import java.io.EOFException;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FilterFileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_COMMAND_ARGUMENT_ERROR;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SERVICE_UNAVAILABLE;
+import static 
org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SUCCESS;
+
+/**
+ * AuditTool is a Command Line Interface.
+ * i.e, it's functionality is to parse the merged audit log file.
+ * and generate avro file.
+ */
+public class AuditTool extends Configured implements Tool, Closeable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(AuditTool.class);
+
+  private final String entryPoint = "s3audit";
+
+  private PrintWriter out;
+
+  // Exit codes
+  private static final int SUCCESS = EXIT_SUCCESS;
+  private static final int INVALID_ARGUMENT = EXIT_COMMAND_ARGUMENT_ERROR;
+
+  /**
+   * Error String when the wrong FS is used for binding: {@value}.
+   **/
+  @VisibleForTesting
+  public static final String WRONG_FILESYSTEM = "Wrong filesystem for ";
+
+  private final String usage = entryPoint + "  s3a://BUCKET\n";
+
+  public AuditTool() {
+  }
+
+  /**
+   * Tells us the usage of the AuditTool by commands.
+   *
+   * @return the string USAGE
+   */
+  public String getUsage() {
+return usage;
+  }
+
+  /**
+   * This run method in AuditTool takes S3 bucket path.
+   * which contains audit log files from command line arguments.
+   * and merge the audit log files present in that path into single file in.
+   * local system.
+   *
+   * @param args command specific arguments.
+   * @return SUCCESS i.e, '0', which is an exit code.
+   * @throws Exception on any failure.
+   */
+  @Override
+  public int run(String[] args) throws Exception {
+List<String> argv = new ArrayList<>(Arrays.asList(args));
+if (argv.isEmpty()) {
+  errorln(getUsage());
+  throw invalidArgs("No bucket specified");
+}
+//Path of audit log files in s3 bucket
+Path s3LogsPath = new Path(argv.get(0));
+
+//Setting the file system
+URI fsURI = toUri(String.valueOf(s3LogsPath));
+S3AFileSystem s3AFileSystem =
+bindFilesystem(FileSystem.newInstance(fsURI, getConf()));

Review Comment:
   modified to ```FileSystem.get(fsURI, getConf())```
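
One caveat with that change: FileSystem.get() and FileSystem.newInstance()
differ in instance ownership, which decides who may call close(). A hedged
sketch of the contrast:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class FsOwnershipExample {
  public static void main(String[] args) throws Exception {
    URI fsURI = URI.create(args[0]);
    Configuration conf = new Configuration();

    // get(): returns the shared, cached instance for this (URI, conf, user);
    // closing it would close it for every other holder of the same reference
    FileSystem shared = FileSystem.get(fsURI, conf);
    shared.getFileStatus(new Path("/"));

    // newInstance(): returns a private instance the caller owns and must close
    try (FileSystem owned = FileSystem.newInstance(fsURI, conf)) {
      owned.getFileStatus(new Path("/"));
    }
  }
}
```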





Issue Time Tracking
---

Worklog Id: (was: 783819)
Time Spent: 5h 50m  (was: 5h 40m)

> Merging of S3A Audit Logs
> -
>
> Key: HADOOP-18258
> 


[jira] [Work logged] (HADOOP-18258) Merging of S3A Audit Logs

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-18258?focusedWorklogId=783818&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783818 ]

ASF GitHub Bot logged work on HADOOP-18258:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 11:32
Start Date: 22/Jun/22 11:32
Worklog Time Spent: 10m 
  Work Description: sravanigadey commented on code in PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#discussion_r903623994


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/AuditTool.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.Closeable;
+import java.io.EOFException;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FilterFileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import static org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_COMMAND_ARGUMENT_ERROR;
+import static org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SERVICE_UNAVAILABLE;
+import static org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SUCCESS;
+
+/**
+ * AuditTool is a command-line interface whose function is to parse
+ * the merged audit log file and generate an Avro file.
+ */
+public class AuditTool extends Configured implements Tool, Closeable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(AuditTool.class);
+
+  private final String entryPoint = "s3audit";
+
+  private PrintWriter out;
+
+  // Exit codes
+  private static final int SUCCESS = EXIT_SUCCESS;
+  private static final int INVALID_ARGUMENT = EXIT_COMMAND_ARGUMENT_ERROR;
+
+  /**
+   * Error String when the wrong FS is used for binding: {@value}.
+   **/
+  @VisibleForTesting
+  public static final String WRONG_FILESYSTEM = "Wrong filesystem for ";
+
+  private final String usage = entryPoint + "  s3a://BUCKET\n";
+
+  public AuditTool() {
+  }
+
+  /**
+   * Returns the usage string for the AuditTool command.
+   *
+   * @return the usage string.
+   */
+  public String getUsage() {
+return usage;
+  }
+
+  /**
+   * Takes the S3 bucket path containing audit log files from the
+   * command line arguments and merges the audit log files present
+   * in that path into a single file on the local system.
+   *
+   * @param args command specific arguments.
+   * @return SUCCESS, i.e. '0', the exit code.
+   * @throws Exception on any failure.
+   */
+  @Override
+  public int run(String[] args) throws Exception {
+List<String> argv = new ArrayList<>(Arrays.asList(args));
+if (argv.isEmpty()) {
+  errorln(getUsage());
+  throw invalidArgs("No bucket specified");
+}
+//Path of audit log files in s3 bucket

Review Comment:
   added space
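
   For context on how a `Tool` implementation like this is launched: `ToolRunner` strips the generic Hadoop options into the `Configuration` before delegating the remaining arguments to `run()`. A minimal launch sketch, with a placeholder bucket path:

   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.s3a.audit.AuditTool;
   import org.apache.hadoop.util.ToolRunner;

   // Sketch only; "s3a://example-bucket/audit-logs" is a placeholder path.
   public class AuditToolLauncher {
     public static void main(String[] args) throws Exception {
       // ToolRunner parses generic options (-D key=value, -conf, -fs, ...)
       // into the Configuration, then passes the rest to AuditTool.run().
       int exitCode = ToolRunner.run(new Configuration(), new AuditTool(),
           new String[] {"s3a://example-bucket/audit-logs"});
       System.exit(exitCode);
     }
   }
   ```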
   





Issue Time Tracking
---

Worklog Id: (was: 783818)
Time Spent: 5h 40m  (was: 5.5h)

> Merging of S3A Audit Logs
> -
>
> Key: HADOOP-18258
> URL: https://issues.apache.org/jira/browse/HADOOP-18258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sravani Gadey
>Assignee: Sravani Gadey
>Priority: Major
>  


[GitHub] [hadoop] hadoop-yetus commented on pull request #4472: MAPREDUCE-7391. TestLocalDistributedCacheManager failing after HADOOP-16202

2022-06-22 Thread GitBox


hadoop-yetus commented on PR #4472:
URL: https://github.com/apache/hadoop/pull/4472#issuecomment-1162962030

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 48s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common: The patch generated 0 new + 3 unchanged - 1 fixed = 3 total (was 4)  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 23s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 17s |  |  hadoop-mapreduce-client-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not generate ASF License warnings.  |
   |  |   |  93m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4472/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4472 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9fdbb143cbb2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b30ebb7e6b0476ecc18680bb0bfda85d13308318 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4472/3/testReport/ |
   | Max. process+thread count | 555 (vs. ulimit of 5500) |
   | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common U: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4472/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 

[jira] [Work logged] (HADOOP-18258) Merging of S3A Audit Logs

2022-06-22 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-18258?focusedWorklogId=783806&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783806 ]

ASF GitHub Bot logged work on HADOOP-18258:
---

Author: ASF GitHub Bot
Created on: 22/Jun/22 10:15
Start Date: 22/Jun/22 10:15
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on code in PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#discussion_r903522555


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/AuditTool.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.Closeable;
+import java.io.EOFException;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FilterFileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import static org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_COMMAND_ARGUMENT_ERROR;
+import static org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SERVICE_UNAVAILABLE;
+import static org.apache.hadoop.service.launcher.LauncherExitCodes.EXIT_SUCCESS;
+
+/**
+ * AuditTool is a command-line interface whose function is to parse
+ * the merged audit log file and generate an Avro file.
+ */
+public class AuditTool extends Configured implements Tool, Closeable {

Review Comment:
   extend S3GuardTool and make a subcommand of it.
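
   A rough sketch of that suggestion; the `S3GuardTool` hooks used here (a `Configuration` constructor, `getName()`/`getUsage()`, and `run(String[], PrintStream)`) are assumed from existing subcommands such as `BucketInfo` and may not match the class exactly:

   ```java
   import java.io.PrintStream;

   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.s3a.s3guard.S3GuardTool;

   // Assumption-laden sketch: hook signatures are inferred from existing
   // S3GuardTool subcommands and may differ from the current class.
   public class Audit extends S3GuardTool {

     public static final String NAME = "audit";
     public static final String USAGE = NAME + " s3a://BUCKET/path/to/logs";

     public Audit(Configuration conf) {
       super(conf);
     }

     @Override
     public String getName() {
       return NAME;
     }

     @Override
     public String getUsage() {
       return USAGE;
     }

     @Override
     public int run(String[] args, PrintStream out) throws Exception {
       // The merge/parse logic from AuditTool would move here, making the
       // command reachable as: hadoop s3guard audit s3a://BUCKET/path/to/logs
       out.println("audit: not yet implemented");
       return 0;
     }
   }
   ```

   Wiring it into the `s3guard` entry point would make the command discoverable alongside the existing subcommands instead of needing a separate launcher.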



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/AuditTool.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.Closeable;
+import java.io.EOFException;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FilterFileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import 


[GitHub] [hadoop] hadoop-yetus commented on pull request #4488: HDFS-16640. RBF: router datanode UI show ip list when click dn histogram

2022-06-22 Thread GitBox


hadoop-yetus commented on PR #4488:
URL: https://github.com/apache/hadoop/pull/4488#issuecomment-1162904806

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +0 :ok: |  jshint  |   0m  1s |  |  jshint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  63m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  83m  3s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  shadedclient  |  19m  7s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 106m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4488/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4488 |
   | Optional Tests | dupname asflicense shadedclient codespell detsecrets jshint |
   | uname | Linux f5fd1e8ef206 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 73c08effebd6823ddbb21dbb722f6059c4e1a36e |
   | Max. process+thread count | 757 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4488/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


