[jira] [Commented] (HADOOP-18146) ABFS: Add changes for expect hundred continue header with append requests

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17644176#comment-17644176
 ] 

ASF GitHub Bot commented on HADOOP-18146:
-

hadoop-yetus commented on PR #4039:
URL: https://github.com/apache/hadoop/pull/4039#issuecomment-1340508430

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  20m 44s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 11s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  93m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4039/30/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4039 |
   | JIRA Issue | HADOOP-18146 |
   | Optional Tests | dupname asflicense codespell detsecrets xmllint compile 
javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle 
markdownlint |
   | uname | Linux d2a2217c4b6e 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f2e6f522fb3d170cbf33abf0d0cdb348f925c43a |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4039/30/testReport/ |
   | Max. process+thread count | 555 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4039/30/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus |


[GitHub] [hadoop] Neilxzn commented on pull request #5184: HDFS-16861. RBF. Truncate API always fails when dirs use AllResolver oder on Router

2022-12-06 Thread GitBox


Neilxzn commented on PR #5184:
URL: https://github.com/apache/hadoop/pull/5184#issuecomment-1340323320

   The Jenkins script failed with code 125; it seems to have nothing to do 
with the patch. Please help me re-run the tests, @tomscut.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dingshun3016 commented on pull request #5180: HDFS-16858. Dynamically adjust max slow disks to exclude.

2022-12-06 Thread GitBox


dingshun3016 commented on PR #5180:
URL: https://github.com/apache/hadoop/pull/5180#issuecomment-1340309901

   > Please fix checkstyle warning and failed unit test.
   
   @tomscut Thanks for your review, I have fixed the checkstyle warning. The 
failed unit test looks unrelated to this change.
   





[jira] [Commented] (HADOOP-18183) s3a audit logs to publish range start/end of GET requests in audit header

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17644086#comment-17644086
 ] 

ASF GitHub Bot commented on HADOOP-18183:
-

dannycjones commented on PR #5110:
URL: https://github.com/apache/hadoop/pull/5110#issuecomment-1340246393

   hey @steveloughran - do you have time to give this another pass over the 
next week?




> s3a audit logs to publish range start/end of GET requests in audit header
> -
>
> Key: HADOOP-18183
> URL: https://issues.apache.org/jira/browse/HADOOP-18183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.2
>Reporter: Steve Loughran
>Assignee: Ankit Saurabh
>Priority: Minor
>  Labels: pull-request-available
>
> we don't get the range of ranged get requests in s3 server logs, because the 
> AWS s3 log doesn't record that information. we can see it's a partial get 
> from the 206 response, but the length of data retrieved is lost.
> LoggingAuditor.beforeExecution() would need to recognise a ranged GET and 
> determine the extra key-val pairs for range start and end (rs & re?)
> we might need to modify {{HttpReferrerAuditHeader.buildHttpReferrer()}} to 
> take a map of key-value pairs so it can dynamically create a header for each 
> request; currently that is not in there.
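
An illustrative sketch of the idea described above: recognize a ranged GET from its `Range` header and emit range start/end as extra key-value pairs for the audit referrer header. All names here (`RangeAuditSketch`, `toAuditPairs`, `buildReferrer`) and the query-string encoding are hypothetical, not the actual S3A auditor API.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

/** Hypothetical sketch of deriving audit key-value pairs from a ranged GET. */
public class RangeAuditSketch {
  // Matches a simple single-range header such as "bytes=1024-2047".
  private static final Pattern RANGE = Pattern.compile("bytes=(\\d+)-(\\d+)");

  /** Returns "rs"/"re" pairs for a ranged GET, or an empty map otherwise. */
  static Map<String, String> toAuditPairs(String rangeHeader) {
    Map<String, String> pairs = new LinkedHashMap<>();
    if (rangeHeader != null) {
      Matcher m = RANGE.matcher(rangeHeader);
      if (m.matches()) {
        pairs.put("rs", m.group(1));  // range start
        pairs.put("re", m.group(2));  // range end
      }
    }
    return pairs;
  }

  /** Joins the pairs into a referrer-style query fragment. */
  static String buildReferrer(Map<String, String> pairs) {
    return pairs.entrySet().stream()
        .map(e -> e.getKey() + "=" + e.getValue())
        .collect(Collectors.joining("&"));
  }

  public static void main(String[] args) {
    System.out.println(buildReferrer(toAuditPairs("bytes=1024-2047")));
    // prints rs=1024&re=2047
  }
}
```

A non-ranged request (no `Range` header, or a multi-range one this simple pattern skips) yields an empty map, so no extra pairs appear in the header.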



--
This message was sent by Atlassian Jira
(v8.20.10#820010)






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5169: YARN-11349. [Federation] Router Support DelegationToken With SQL.

2022-12-06 Thread GitBox


slfan1989 commented on code in PR #5169:
URL: https://github.com/apache/hadoop/pull/5169#discussion_r1041594729


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/utils/RowCountHandler.java:
##
@@ -0,0 +1,58 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.federation.store.utils;
+
+import org.apache.hadoop.util.StringUtils;
+
+import java.sql.SQLException;
+
+/**
+ * RowCount Handler.
+ * Used to parse out the rowCount information of the output parameter.
+ */
+public class RowCountHandler implements ResultSetHandler {

Review Comment:
   I will create an `sql` folder and move the related classes into it.






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5169: YARN-11349. [Federation] Router Support DelegationToken With SQL.

2022-12-06 Thread GitBox


slfan1989 commented on code in PR #5169:
URL: https://github.com/apache/hadoop/pull/5169#discussion_r1041594322


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/SQLFederationStateStore.java:
##
@@ -1353,45 +1384,454 @@ public Connection getConn() {
 return conn;
   }
 
+  /**
+   * SQLFederationStateStore supports storing a new MasterKey.
+   *
+   * @param request The request contains RouterMasterKey, which is an abstraction for DelegationKey.
+   * @return routerMasterKeyResponse, the response contains the RouterMasterKey.
+   * @throws YarnException if the call to the state store is unsuccessful.
+   * @throws IOException if an IO error occurred.
+   */
   @Override
   public RouterMasterKeyResponse storeNewMasterKey(RouterMasterKeyRequest request)
       throws YarnException, IOException {
-    throw new NotImplementedException("Code is not implemented");
+
+    // Step1: Verify parameters to ensure that key fields are not empty.
+    FederationRouterRMTokenInputValidator.validate(request);
+
+    // Step2: Parse the parameters and serialize the DelegationKey as a string.
+    DelegationKey delegationKey = convertMasterKeyToDelegationKey(request);
+    int keyId = delegationKey.getKeyId();
+    String delegationKeyStr = FederationStateStoreUtils.encodeWritable(delegationKey);
+
+    // Step3. Store the data in the database.
+    try {
+
+      FederationSQLOutParameter<Integer> rowCountOUT =
+          new FederationSQLOutParameter<>("rowCount_OUT", java.sql.Types.INTEGER, Integer.class);
+
+      // Execute the query
+      long startTime = clock.getTime();
+      Integer rowCount = getRowCountByProcedureSQL(CALL_SP_ADD_MASTERKEY, keyId,
+          delegationKeyStr, rowCountOUT);
+      long stopTime = clock.getTime();
+
+      // We expect exactly 1 record to be written to the database.
+      // If the number of records is not 1, the data was written incorrectly.
+      if (rowCount != 1) {
+        FederationStateStoreUtils.logAndThrowStoreException(LOG,
+            "Wrong behavior during the insertion of masterKey, keyId = %s. " +
+            "Please check the records of the database.", String.valueOf(keyId));
+      }
+      FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - startTime);
+    } catch (SQLException e) {
+      FederationStateStoreClientMetrics.failedStateStoreCall();
+      FederationStateStoreUtils.logAndThrowRetriableException(e, LOG,
+          "Unable to insert the new masterKey, keyId = %s.", String.valueOf(keyId));
+    }
+
+    // Step4. Query the data from the database and return the result.
+    return getMasterKeyByDelegationKey(request);
   }
 
+  /**
+   * SQLFederationStateStore supports removing a MasterKey.
+   *
+   * Defines the sp_deleteMasterKey procedure.
+   * This procedure requires 1 input parameter and 1 output parameter.
+   * Input parameters:
+   * 1. IN keyId_IN int
+   * Output parameters:
+   * 2. OUT rowCount_OUT int
+   *
+   * @param request The request contains RouterMasterKey, which is an abstraction for DelegationKey.
+   * @return routerMasterKeyResponse, the response contains the RouterMasterKey.
+   * @throws YarnException if the call to the state store is unsuccessful.
+   * @throws IOException if an IO error occurred.
+   */
   @Override
   public RouterMasterKeyResponse removeStoredMasterKey(RouterMasterKeyRequest request)
       throws YarnException, IOException {
-    throw new NotImplementedException("Code is not implemented");
+
+    // Step1: Verify parameters to ensure that key fields are not empty.
+    FederationRouterRMTokenInputValidator.validate(request);
+
+    // Step2: Parse the parameters and get the KeyId.
+    RouterMasterKey paramMasterKey = request.getRouterMasterKey();
+    int paramKeyId = paramMasterKey.getKeyId();
+
+    // Step3. Clear the data from the database.
+    try {
+
+      // Execute the query
+      long startTime = clock.getTime();
+      FederationSQLOutParameter<Integer> rowCountOUT =
+          new FederationSQLOutParameter<>("rowCount_OUT", java.sql.Types.INTEGER, Integer.class);
+      Integer rowCount = getRowCountByProcedureSQL(CALL_SP_DELETE_MASTERKEY,
+          paramKeyId, rowCountOUT);
+      long stopTime = clock.getTime();
+
+      // If it is equal to 0, the call did not delete the masterKey
+      // from FederationStateStore.
+      if (rowCount == 0) {
+        FederationStateStoreUtils.logAndThrowStoreException(LOG,
+            "masterKeyId = %s does not exist.", String.valueOf(paramKeyId));
+      } else if (rowCount != 1) {
+        // If it is different from 1, the call had a wrong behavior.
+        // Maybe the database is not set up correctly.
+        FederationStateStoreUtils.logAndThrowStoreException(LOG,
+            "Wrong behavior during deleting the keyId %s. " +
+            "The database is expected to delete 1 record, " +
+            "but the number of 

[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5131: YARN-11350. [Federation] Router Support DelegationToken With ZK.

2022-12-06 Thread GitBox


slfan1989 commented on code in PR #5131:
URL: https://github.com/apache/hadoop/pull/5131#discussion_r1041589814


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestZookeeperFederationStateStore.java:
##
@@ -171,38 +203,117 @@ public void testMetricsInited() throws Exception {
 MetricsRecords.assertMetric(record, 
"UpdateReservationHomeSubClusterNumOps",  expectOps);
   }
 
-  @Test(expected = NotImplementedException.class)
+  @Test
   public void testStoreNewMasterKey() throws Exception {
 super.testStoreNewMasterKey();
   }
 
-  @Test(expected = NotImplementedException.class)
+  @Test

Review Comment:
   I agree with you, I will remove this part of the code.






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5131: YARN-11350. [Federation] Router Support DelegationToken With ZK.

2022-12-06 Thread GitBox


slfan1989 commented on code in PR #5131:
URL: https://github.com/apache/hadoop/pull/5131#discussion_r1041589119


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestSQLFederationStateStore.java:
##
@@ -592,4 +588,14 @@ public void testRemoveStoredToken() throws IOException, 
YarnException {
   public void testGetTokenByRouterStoreToken() throws IOException, 
YarnException {
 super.testGetTokenByRouterStoreToken();
   }
+
+  @Override
+  protected void checkRouterMasterKey(DelegationKey delegationKey,
+  RouterMasterKey routerMasterKey) throws YarnException, IOException {

Review Comment:
   Thank you for your suggestion! I'll add a comment explaining what the method 
does.






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5131: YARN-11350. [Federation] Router Support DelegationToken With ZK.

2022-12-06 Thread GitBox


slfan1989 commented on code in PR #5131:
URL: https://github.com/apache/hadoop/pull/5131#discussion_r1041588318


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestSQLFederationStateStore.java:
##
@@ -18,21 +18,17 @@
 package org.apache.hadoop.yarn.server.federation.store.impl;
 
 import org.apache.commons.lang3.NotImplementedException;
+import org.apache.hadoop.security.token.delegation.DelegationKey;
 import org.apache.hadoop.test.LambdaTestUtils;
 import org.apache.hadoop.util.Time;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ReservationId;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier;
 import org.apache.hadoop.yarn.server.federation.store.FederationStateStore;
 import org.apache.hadoop.yarn.server.federation.store.metrics.FederationStateStoreClientMetrics;
-import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
-import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
-import org.apache.hadoop.yarn.server.federation.store.records.SubClusterRegisterRequest;
-import org.apache.hadoop.yarn.server.federation.store.records.ReservationHomeSubCluster;
-import org.apache.hadoop.yarn.server.federation.store.records.AddReservationHomeSubClusterRequest;
-import org.apache.hadoop.yarn.server.federation.store.records.UpdateReservationHomeSubClusterRequest;
-import org.apache.hadoop.yarn.server.federation.store.records.DeleteReservationHomeSubClusterRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.*;

Review Comment:
   Thank you very much for helping to review the code; I will fix it.






[jira] [Commented] (HADOOP-18329) Add support for IBM Semeru OE JRE 11.0.15.0 and greater

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17644048#comment-17644048
 ] 

ASF GitHub Bot commented on HADOOP-18329:
-

hadoop-yetus commented on PR #4537:
URL: https://github.com/apache/hadoop/pull/4537#issuecomment-1340106496

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  2s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  2s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  2s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  20m 19s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  21m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  24m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 41s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  spotbugs  |   1m  0s | 
[/new-spotbugs-hadoop-common-project_hadoop-auth.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4537/6/artifact/out/new-spotbugs-hadoop-common-project_hadoop-auth.html)
 |  hadoop-common-project/hadoop-auth generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  24m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 38s |  |  hadoop-minikdc in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 18s |  |  hadoop-auth in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  18m 13s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 17s |  |  hadoop-registry in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 250m 48s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-common-project/hadoop-auth |
   |  |  org.apache.hadoop.util.PlatformName.() creates a 
org.apache.hadoop.util.PlatformName$SystemClassAccessor classloader, which 
should be performed within a doPrivileged block  At PlatformName.java:a 
org.apache.hadoop.util.PlatformName$SystemClassAccessor classloader, which 
should be performed within a doPrivileged block  At PlatformName.java:[line 61] 
|
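
For context on the SpotBugs finding above: creating a `ClassLoader` is a privileged operation, and SpotBugs flags creations that are not wrapped in `AccessController.doPrivileged`, since they fail under a `SecurityManager` when a caller on the stack lacks the permission. A minimal sketch of the wrapped form follows; the class and method names are hypothetical, not the actual `PlatformName` fix.

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

/** Hypothetical sketch: create a classloader inside a doPrivileged block. */
public class PrivilegedLoaderSketch {
  static ClassLoader newLoader() {
    // Wrapping the creation in doPrivileged is the pattern SpotBugs
    // expects for classloader construction.
    return AccessController.doPrivileged(
        (PrivilegedAction<ClassLoader>) () ->
            new ClassLoader(PrivilegedLoaderSketch.class.getClassLoader()) { });
  }

  public static void main(String[] args) {
    System.out.println(newLoader() != null); // prints true
  }
}
```

Note that `AccessController` is deprecated for removal on recent JDKs; on the JDK 8/11 toolchains used by this build it compiles and runs as shown.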
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4537/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4537 |
   | Optional Tests | dupname 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4537: HADOOP-18329 - Support for IBM Semeru JVM v>11.0.15.0 Vendor Name Changes

2022-12-06 Thread GitBox


hadoop-yetus commented on PR #4537:
URL: https://github.com/apache/hadoop/pull/4537#issuecomment-1340106496

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  2s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  2s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  2s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  20m 19s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  21m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  24m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 41s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  spotbugs  |   1m  0s | 
[/new-spotbugs-hadoop-common-project_hadoop-auth.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4537/6/artifact/out/new-spotbugs-hadoop-common-project_hadoop-auth.html)
 |  hadoop-common-project/hadoop-auth generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  24m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 38s |  |  hadoop-minikdc in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 18s |  |  hadoop-auth in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  18m 13s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 17s |  |  hadoop-registry in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 250m 48s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-common-project/hadoop-auth |
   |  |  org.apache.hadoop.util.PlatformName.() creates a org.apache.hadoop.util.PlatformName$SystemClassAccessor classloader, which should be performed within a doPrivileged block  At PlatformName.java:[line 61] |
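The SpotBugs finding above flags classloader creation that is not wrapped in a doPrivileged block. A minimal sketch of the pattern SpotBugs expects is below; the class and method names are illustrative, not Hadoop's actual `PlatformName` code.

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

public class SafeLoaderExample {
    // Illustrative sketch only: wrap the classloader lookup in
    // AccessController.doPrivileged so it succeeds even when a caller
    // further up the stack lacks the required runtime permission.
    static ClassLoader systemLoaderSafely() {
        return AccessController.doPrivileged(
            (PrivilegedAction<ClassLoader>) ClassLoader::getSystemClassLoader);
    }

    public static void main(String[] args) {
        // prints true: the system classloader is non-null
        System.out.println(systemLoaderSafely() != null);
    }
}
```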
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4537/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4537 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 42f9668f9d01 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 

[jira] [Commented] (HADOOP-18561) CVE-2021-37533 on commons-net is included in hadoop common and hadoop-client-runtime

2022-12-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17644043#comment-17644043
 ] 

Steve Loughran commented on HADOOP-18561:
-

looks legit, though it's only of interest to the ftp filesystem, and we don't 
recommend that (maybe it's time to cut it)

HADOOP-18361 moved hadoop branch-3.3 to commons-net 3.8.0, but that is still exposed.

why don't you submit a PR updating the pom?
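For illustration, a pom change such a PR might make could look like the fragment below. The property name and its location (a managed `commons-net.version` in Hadoop's parent pom) are assumptions for the sketch, not a confirmed description of the build files.

```xml
<!-- Hypothetical sketch: bump the managed commons-net version to the
     CVE-2021-37533 fix release (property name is an assumption). -->
<properties>
  <commons-net.version>3.9.0</commons-net.version>
</properties>
```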

> CVE-2021-37533 on commons-net is included in hadoop common and 
> hadoop-client-runtime
> 
>
> Key: HADOOP-18561
> URL: https://issues.apache.org/jira/browse/HADOOP-18561
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: phoebe chen
>Priority: Major
>
> Latest 3.3.4 version of hadoop-common and hadoop-client-runtime includes 
> commons-net in version 3.6, which has vulnerability CVE-2021-37533. Need to 
> upgrade it to 3.9 to fix. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18561) CVE-2021-37533 on commons-net is included in hadoop common and hadoop-client-runtime

2022-12-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18561:

Component/s: build

> CVE-2021-37533 on commons-net is included in hadoop common and 
> hadoop-client-runtime
> 
>
> Key: HADOOP-18561
> URL: https://issues.apache.org/jira/browse/HADOOP-18561
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.5, 3.3.4
>Reporter: phoebe chen
>Priority: Major
>  Labels: transitive-cve
>
> Latest 3.3.4 version of hadoop-common and hadoop-client-runtime includes 
> commons-net in version 3.6, which has vulnerability CVE-2021-37533. Need to 
> upgrade it to 3.9 to fix. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18561) CVE-2021-37533 on commons-net is included in hadoop common and hadoop-client-runtime

2022-12-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18561:

Labels: transitive-cve  (was: )

> CVE-2021-37533 on commons-net is included in hadoop common and 
> hadoop-client-runtime
> 
>
> Key: HADOOP-18561
> URL: https://issues.apache.org/jira/browse/HADOOP-18561
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.5, 3.3.4
>Reporter: phoebe chen
>Priority: Major
>  Labels: transitive-cve
>
> Latest 3.3.4 version of hadoop-common and hadoop-client-runtime includes 
> commons-net in version 3.6, which has vulnerability CVE-2021-37533. Need to 
> upgrade it to 3.9 to fix. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18561) CVE-2021-37533 on commons-net is included in hadoop common and hadoop-client-runtime

2022-12-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18561:

Affects Version/s: 3.3.4
   3.3.5

> CVE-2021-37533 on commons-net is included in hadoop common and 
> hadoop-client-runtime
> 
>
> Key: HADOOP-18561
> URL: https://issues.apache.org/jira/browse/HADOOP-18561
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.5, 3.3.4
>Reporter: phoebe chen
>Priority: Major
>
> Latest 3.3.4 version of hadoop-common and hadoop-client-runtime includes 
> commons-net in version 3.6, which has vulnerability CVE-2021-37533. Need to 
> upgrade it to 3.9 to fix. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18526) Leak of S3AInstrumentation instances via hadoop Metrics references

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17644028#comment-17644028
 ] 

ASF GitHub Bot commented on HADOOP-18526:
-

hadoop-yetus commented on PR #5144:
URL: https://github.com/apache/hadoop/pull/5144#issuecomment-1339996002

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 21s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 14s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5144/6/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 32s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m  5s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5144/6/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 18s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 53s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 225m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5144/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5144 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5c3da2b5ff42 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 48974549308b1eb63cb832e3a04beef3f593018d |
   | Default Java | Private 


[jira] [Updated] (HADOOP-18561) CVE-2021-37533 on commons-net is included in hadoop common and hadoop-client-runtime

2022-12-06 Thread phoebe chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

phoebe chen updated HADOOP-18561:
-
Description: Latest 3.3.4 version of hadoop-common and 
hadoop-client-runtime includes commons-net in version 3.6, which has 
vulnerability CVE-2021-37533. Need to upgrade it to 3.9 to fix.   (was: Latest 
3.3.4 version of hadoop-common and hadoop-client-runtime includescommons-net in 
version 3.6, which has vulnerability CVE-2021-37533. Need to upgrade it to 3.9 
to fix. )

> CVE-2021-37533 on commons-net is included in hadoop common and 
> hadoop-client-runtime
> 
>
> Key: HADOOP-18561
> URL: https://issues.apache.org/jira/browse/HADOOP-18561
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: phoebe chen
>Priority: Major
>
> Latest 3.3.4 version of hadoop-common and hadoop-client-runtime includes 
> commons-net in version 3.6, which has vulnerability CVE-2021-37533. Need to 
> upgrade it to 3.9 to fix. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18329) Add support for IBM Semeru OE JRE 11.0.15.0 and greater

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17644002#comment-17644002
 ] 

ASF GitHub Bot commented on HADOOP-18329:
-

JackBuggins commented on PR #4537:
URL: https://github.com/apache/hadoop/pull/4537#issuecomment-1339816599

   This should be a bit more robust to extension, as well as handling the concerns 
I had about the class loader from before. Is there any consensus/ruling around 
adding a test against the IBM JREs? I appreciate it would take a bit of time on 
CI and this is a once-in-a-blue-moon activity, but it could be a single suite 
of integration tests against auth that execute to verify that the latest Semeru 
is not detected as IBM, and vice versa.




> Add support for IBM Semeru OE JRE 11.0.15.0 and greater
> ---
>
> Key: HADOOP-18329
> URL: https://issues.apache.org/jira/browse/HADOOP-18329
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, common
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.2.0, 3.0.2, 3.1.1, 3.0.3, 3.3.0, 
> 3.1.2, 3.2.1, 3.1.3, 3.1.4, 3.2.2, 3.3.1, 3.2.3, 3.3.2, 3.3.3
> Environment: Running Hadoop (or Apache Spark 3.2.1 or above) on IBM 
> Semeru runtimes open edition 11.0.15.0 or greater.
>Reporter: Jack
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> There are checks within the PlatformName class that use the Vendor property 
> of the provided runtime JVM specifically looking for `IBM` within the name. 
> Whilst this check worked for IBM's [java technology 
> edition|https://www.ibm.com/docs/en/sdk-java-technology] it fails to work on 
> [Semeru|https://developer.ibm.com/languages/java/semeru-runtimes/] since 
> 11.0.15.0 due to the following change:
> h4. java.vendor system property
> In this release, the {{java.vendor}} system property has been changed from 
> "International Business Machines Corporation" to "IBM Corporation".
> Modules such as the below are not provided in these runtimes.
> com.ibm.security.auth.module.JAASLoginModule
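Since Semeru 11.0.15.0 and later also report `java.vendor` as "IBM Corporation" while not shipping the IBM-only JAAS module, one way to distinguish the runtimes is to probe for the capability itself rather than trust the vendor string. This is a hedged sketch of that idea, not the actual `PlatformName` implementation.

```java
public class IbmJaasDetectExample {
    // Hedged sketch of capability-based detection: instead of trusting
    // java.vendor (which Semeru >= 11.0.15.0 also reports as
    // "IBM Corporation"), probe for the IBM-specific JAAS module class.
    static boolean hasIbmJaasModule() {
        try {
            Class.forName("com.ibm.security.auth.module.JAASLoginModule");
            return true;
        } catch (ClassNotFoundException e) {
            return false;  // Semeru and OpenJDK builds lack this module
        }
    }

    public static void main(String[] args) {
        // false on OpenJDK/Semeru; true only on IBM Java technology edition
        System.out.println(hasIbmJaasModule());
    }
}
```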



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org





[GitHub] [hadoop] hadoop-yetus commented on pull request #5190: YARN-11390. TestResourceTrackerService.testNodeRemovalNormally ...

2022-12-06 Thread GitBox


hadoop-yetus commented on PR #5190:
URL: https://github.com/apache/hadoop/pull/5190#issuecomment-1339803193

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 102m 59s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 213m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5190/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5190 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1cd7b2ef8c62 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5fc63f99d382e36adb7d6622c5c1f6b21e24ea14 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5190/1/testReport/ |
   | Max. process+thread count | 912 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5190/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the 

[jira] [Created] (HADOOP-18561) CVE-2021-37533 on commons-net is included in hadoop common and hadoop-client-runtime

2022-12-06 Thread phoebe chen (Jira)
phoebe chen created HADOOP-18561:


 Summary: CVE-2021-37533 on commons-net is included in hadoop 
common and hadoop-client-runtime
 Key: HADOOP-18561
 URL: https://issues.apache.org/jira/browse/HADOOP-18561
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: phoebe chen


Latest 3.3.4 version of hadoop-common and hadoop-client-runtime 
includes commons-net in version 3.6, which has vulnerability CVE-2021-37533. 
Need to upgrade it to 3.9 to fix. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5191: Bump express from 4.17.1 to 4.18.2 in /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp

2022-12-06 Thread GitBox


hadoop-yetus commented on PR #5191:
URL: https://github.com/apache/hadoop/pull/5191#issuecomment-1339792843

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 56s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  59m 21s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  shadedclient  |  19m 20s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not generate ASF License warnings.  |
   |  |   |  82m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5191/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5191 |
   | Optional Tests | dupname asflicense shadedclient codespell detsecrets |
   | uname | Linux b1b30ed3feac 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8563ac7a008bb3ad1939692e84bed5d8ef065b84 |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5191/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





[GitHub] [hadoop] cnauroth commented on a diff in pull request #5190: YARN-11390. TestResourceTrackerService.testNodeRemovalNormally ...

2022-12-06 Thread GitBox


cnauroth commented on code in PR #5190:
URL: https://github.com/apache/hadoop/pull/5190#discussion_r1041276776


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java:
##
@@ -2959,6 +2960,20 @@ protected ResourceTrackerService 
createResourceTrackerService() {
 mockRM.stop();
   }
 
+  private void pollingAssert(Supplier supplier, String message)

Review Comment:
   In hadoop-common, there is a similar helper method: 
`org.apache.hadoop.test.GenericTestUtils#waitFor`. This also has some other 
nice features, like providing a thread dump for troubleshooting if it times 
out. Can you please look at reusing that method?
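
   For context, the suggestion above can be sketched as the usual polling-wait pattern. This is only a minimal, self-contained approximation of what `org.apache.hadoop.test.GenericTestUtils#waitFor` provides in hadoop-common; the real helper has extra features (such as the thread dump on timeout mentioned above), and the parameter names here are assumptions, not the exact Hadoop signature.

   ```java
   import java.util.concurrent.TimeoutException;
   import java.util.function.Supplier;

   // Minimal sketch of the polling-wait pattern approximating
   // GenericTestUtils#waitFor: poll a boolean condition every
   // checkEveryMillis until it holds or waitForMillis elapses.
   public final class PollingWait {

       public static void waitFor(Supplier<Boolean> check,
                                  long checkEveryMillis,
                                  long waitForMillis)
               throws TimeoutException, InterruptedException {
           long deadline = System.currentTimeMillis() + waitForMillis;
           // Poll until the condition holds or the deadline passes.
           while (!Boolean.TRUE.equals(check.get())) {
               if (System.currentTimeMillis() > deadline) {
                   throw new TimeoutException(
                       "Condition not met within " + waitForMillis + " ms");
               }
               Thread.sleep(checkEveryMillis);
           }
       }

       public static void main(String[] args) throws Exception {
           long start = System.currentTimeMillis();
           // Condition becomes true after roughly 50 ms.
           waitFor(() -> System.currentTimeMillis() - start >= 50, 10, 5000);
           System.out.println("condition met");
       }
   }
   ```

   A test would call it with a lambda, e.g. `waitFor(() -> rm.getRMContext().getInactiveRMNodes().size() == 1, 100, 10000)`, instead of hand-rolling a loop in each test.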






[jira] [Commented] (HADOOP-18538) Upgrade kafka to 2.8.2

2022-12-06 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643962#comment-17643962
 ] 

Brahma Reddy Battula commented on HADOOP-18538:
---

[~dmmkr], thanks for reporting. Committed to trunk.

 

> Upgrade kafka to 2.8.2
> --
>
> Key: HADOOP-18538
> URL: https://issues.apache.org/jira/browse/HADOOP-18538
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
>  Labels: pull-request-available
>
> Upgrade kafka to 2.8.2 to resolve 
> [CVE-2022-34917|https://nvd.nist.gov/vuln/detail/CVE-2022-34917] 






[GitHub] [hadoop] dependabot[bot] opened a new pull request, #5191: Bump express from 4.17.1 to 4.18.2 in /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp

2022-12-06 Thread GitBox


dependabot[bot] opened a new pull request, #5191:
URL: https://github.com/apache/hadoop/pull/5191

   Bumps [express](https://github.com/expressjs/express) from 4.17.1 to 4.18.2.
   
   Release notes
   Sourced from express's releases (https://github.com/expressjs/express/releases).

   4.18.2
   - Fix regression routing a large stack in a single route
   - deps: body-parser@1.20.1
     - deps: qs@6.11.0
     - perf: remove unnecessary object clone
   - deps: qs@6.11.0

   4.18.1
   - Fix hanging on large stack of sync routes

   4.18.0
   - Add root option to res.download
   - Allow options without filename in res.download
   - Deprecate string and non-integer arguments to res.status
   - Fix behavior of null/undefined as maxAge in res.cookie
   - Fix handling very large stacks of sync middleware
   - Ignore Object.prototype values in settings through app.set/app.get
   - Invoke default with same arguments as types in res.format
   - Support proper 205 responses using res.send
   - Use http-errors for res.format error
   - deps: body-parser@1.20.0
     - Fix error message for json parse whitespace in strict
     - Fix internal error when inflated body exceeds limit
     - Prevent loss of async hooks context
     - Prevent hanging when request already read
     - deps: depd@2.0.0
     - deps: http-errors@2.0.0
     - deps: on-finished@2.4.1
     - deps: qs@6.10.3
     - deps: raw-body@2.5.1
   - deps: cookie@0.5.0
     - Add priority option
     - Fix expires option to reject invalid dates
   - deps: depd@2.0.0
     - Replace internal eval usage with Function constructor
     - Use instance methods on process to check for listeners
   - deps: finalhandler@1.2.0
     - Remove set content headers that break response
     - deps: on-finished@2.4.1
     - deps: statuses@2.0.1
   - deps: on-finished@2.4.1
     - Prevent loss of async hooks context
   - deps: qs@6.10.3
   - deps: send@0.18.0
     - Fix emitted 416 error missing headers property
     - Limit the headers removed for 304 response
     - deps: depd@2.0.0
     - deps: destroy@1.2.0
     - deps: http-errors@2.0.0
     - deps: on-finished@2.4.1

   ... (truncated)
   
   Commits
   - 8368dc1 4.18.2 (https://github.com/expressjs/express/commit/8368dc178af16b91b576c4c1d135f701a0007e5d)
   - 61f4049 docs: replace Freenode with Libera Chat (https://github.com/expressjs/express/commit/61f40491222dbede653b9938e6a4676f187aab44)
   - bb7907b build: Node.js@18.10 (https://github.com/expressjs/express/commit/bb7907b932afe3a19236a642f6054b6c8f7349a0)
   - f56ce73 build: supertest@6.3.0 (https://github.com/expressjs/express/commit/f56ce73186e885a938bfdb3d3d1005a58e6ae12b)
   - 24b3dc5 deps: qs@6.11.0 (https://github.com/expressjs/express/commit/24b3dc551670ac4fb0cd5a2bd5ef643c9525e60f)
   - 689d175 deps: body-parser@1.20.1 (https://github.com/expressjs/express/commit/689d175b8b39d8860b81d723233fb83d15201827)
   - 340be0f build: eslint@8.24.0 (https://github.com/expressjs/express/commit/340be0f79afb9b3176afb76235aa7f92acbd5050)
   - 33e8dc3 docs: use Node.js name style (https://github.com/expressjs/express/commit/33e8dc303af9277f8a7e4f46abfdcb5e72f6797b)

[jira] [Commented] (HADOOP-18538) Upgrade kafka to 2.8.2

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643961#comment-17643961
 ] 

ASF GitHub Bot commented on HADOOP-18538:
-

brahmareddybattula merged PR #5164:
URL: https://github.com/apache/hadoop/pull/5164




> Upgrade kafka to 2.8.2
> --
>
> Key: HADOOP-18538
> URL: https://issues.apache.org/jira/browse/HADOOP-18538
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
>  Labels: pull-request-available
>
> Upgrade kafka to 2.8.2 to resolve 
> [CVE-2022-34917|https://nvd.nist.gov/vuln/detail/CVE-2022-34917] 






[GitHub] [hadoop] brahmareddybattula merged pull request #5164: HADOOP-18538. Upgrade kafka to 2.8.2

2022-12-06 Thread GitBox


brahmareddybattula merged PR #5164:
URL: https://github.com/apache/hadoop/pull/5164





[jira] [Commented] (HADOOP-18538) Upgrade kafka to 2.8.2

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643959#comment-17643959
 ] 

ASF GitHub Bot commented on HADOOP-18538:
-

brahmareddybattula commented on PR #5164:
URL: https://github.com/apache/hadoop/pull/5164#issuecomment-1339675688

   +1. It looks like the build failures are unrelated.




> Upgrade kafka to 2.8.2
> --
>
> Key: HADOOP-18538
> URL: https://issues.apache.org/jira/browse/HADOOP-18538
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
>  Labels: pull-request-available
>
> Upgrade kafka to 2.8.2 to resolve 
> [CVE-2022-34917|https://nvd.nist.gov/vuln/detail/CVE-2022-34917] 






[GitHub] [hadoop] brahmareddybattula commented on pull request #5164: HADOOP-18538. Upgrade kafka to 2.8.2

2022-12-06 Thread GitBox


brahmareddybattula commented on PR #5164:
URL: https://github.com/apache/hadoop/pull/5164#issuecomment-1339675688

   +1. It looks like the build failures are unrelated.





[jira] [Updated] (HADOOP-18538) Upgrade kafka to 2.8.2

2022-12-06 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-18538:
--
Status: Patch Available  (was: Open)

> Upgrade kafka to 2.8.2
> --
>
> Key: HADOOP-18538
> URL: https://issues.apache.org/jira/browse/HADOOP-18538
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
>  Labels: pull-request-available
>
> Upgrade kafka to 2.8.2 to resolve 
> [CVE-2022-34917|https://nvd.nist.gov/vuln/detail/CVE-2022-34917] 






[GitHub] [hadoop] goiri commented on a diff in pull request #5131: YARN-11350. [Federation] Router Support DelegationToken With ZK.

2022-12-06 Thread GitBox


goiri commented on code in PR #5131:
URL: https://github.com/apache/hadoop/pull/5131#discussion_r1041207295


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestSQLFederationStateStore.java:
##
@@ -592,4 +588,14 @@ public void testRemoveStoredToken() throws IOException, 
YarnException {
   public void testGetTokenByRouterStoreToken() throws IOException, 
YarnException {
 super.testGetTokenByRouterStoreToken();
   }
+
+  @Override
+  protected void checkRouterMasterKey(DelegationKey delegationKey,
+  RouterMasterKey routerMasterKey) throws YarnException, IOException {

Review Comment:
   Comment on why we do nothing



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestSQLFederationStateStore.java:
##
@@ -18,21 +18,17 @@
 package org.apache.hadoop.yarn.server.federation.store.impl;
 
 import org.apache.commons.lang3.NotImplementedException;
+import org.apache.hadoop.security.token.delegation.DelegationKey;
 import org.apache.hadoop.test.LambdaTestUtils;
 import org.apache.hadoop.util.Time;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ReservationId;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier;
 import org.apache.hadoop.yarn.server.federation.store.FederationStateStore;
 import 
org.apache.hadoop.yarn.server.federation.store.metrics.FederationStateStoreClientMetrics;
-import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
-import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
-import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterRegisterRequest;
-import 
org.apache.hadoop.yarn.server.federation.store.records.ReservationHomeSubCluster;
-import 
org.apache.hadoop.yarn.server.federation.store.records.AddReservationHomeSubClusterRequest;
-import 
org.apache.hadoop.yarn.server.federation.store.records.UpdateReservationHomeSubClusterRequest;
-import 
org.apache.hadoop.yarn.server.federation.store.records.DeleteReservationHomeSubClusterRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.*;

Review Comment:
   Avoid



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestZookeeperFederationStateStore.java:
##
@@ -171,38 +203,117 @@ public void testMetricsInited() throws Exception {
 MetricsRecords.assertMetric(record, 
"UpdateReservationHomeSubClusterNumOps",  expectOps);
   }
 
-  @Test(expected = NotImplementedException.class)
+  @Test
   public void testStoreNewMasterKey() throws Exception {
 super.testStoreNewMasterKey();
   }
 
-  @Test(expected = NotImplementedException.class)
+  @Test

Review Comment:
   I think we can just inherit and no need to have them here.






[jira] [Commented] (HADOOP-18329) Add support for IBM Semeru OE JRE 11.0.15.0 and greater

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643951#comment-17643951
 ] 

ASF GitHub Bot commented on HADOOP-18329:
-

JackBuggins commented on PR #4537:
URL: https://github.com/apache/hadoop/pull/4537#issuecomment-1339652525

   > Has there been any movement on this pr?
   
   I'll try to carve out a few hours this week; there's some slight enhancement I think could be made in the class loader method I originally came up with (I want to test my concerns).




> Add support for IBM Semeru OE JRE 11.0.15.0 and greater
> ---
>
> Key: HADOOP-18329
> URL: https://issues.apache.org/jira/browse/HADOOP-18329
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, common
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.2.0, 3.0.2, 3.1.1, 3.0.3, 3.3.0, 
> 3.1.2, 3.2.1, 3.1.3, 3.1.4, 3.2.2, 3.3.1, 3.2.3, 3.3.2, 3.3.3
> Environment: Running Hadoop (or Apache Spark 3.2.1 or above) on IBM 
> Semeru runtimes open edition 11.0.15.0 or greater.
>Reporter: Jack
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> There are checks within the PlatformName class that use the Vendor property 
> of the provided runtime JVM specifically looking for `IBM` within the name. 
> Whilst this check worked for IBM's [java technology 
> edition|https://www.ibm.com/docs/en/sdk-java-technology] it fails to work on 
> [Semeru|https://developer.ibm.com/languages/java/semeru-runtimes/] since 
> 11.0.15.0 due to the following change:
> h4. java.vendor system property
> In this release, the {{java.vendor}} system property has been changed from 
> "International Business Machines Corporation" to "IBM Corporation".
> Modules such as the below are not provided in these runtimes.
> com.ibm.security.auth.module.JAASLoginModule
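
To illustrate the detection problem described above, a class-availability probe sidesteps the java.vendor string entirely. This is only a sketch of the idea (similar in spirit to the "class loader method" discussed on the PR), not Hadoop's actual PlatformName code; the class name VendorProbe is made up for illustration.

```java
// Sketch: detect IBM-only JAAS support by probing for the class itself
// rather than matching the java.vendor string, which changed from
// "International Business Machines Corporation" to "IBM Corporation"
// in Semeru 11.0.15.0. VendorProbe is an illustrative name, not the
// actual Hadoop PlatformName implementation.
public final class VendorProbe {

    /** Returns true if the named class can be located on this JVM. */
    static boolean classAvailable(String className) {
        try {
            // Do not initialize the class; we only care whether it exists.
            Class.forName(className, false, VendorProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Semeru reports an "IBM" vendor string but does not ship this module.
        boolean hasIbmJaas =
            classAvailable("com.ibm.security.auth.module.JAASLoginModule");
        System.out.println("IBM JAAS module present: " + hasIbmJaas);
    }
}
```

Checking for the module directly means the code keeps working whether the vendor string says "IBM Corporation", the older long form, or anything else.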






[GitHub] [hadoop] JackBuggins commented on pull request #4537: HADOOP-18329 - Support for IBM Semeru JVM v>11.0.15.0 Vendor Name Changes

2022-12-06 Thread GitBox


JackBuggins commented on PR #4537:
URL: https://github.com/apache/hadoop/pull/4537#issuecomment-1339652525

   > Has there been any movement on this pr?
   
   I'll try to carve out a few hours this week; there's some slight enhancement I think could be made in the class loader method I originally came up with (I want to test my concerns).





[GitHub] [hadoop] hadoop-yetus commented on pull request #5131: YARN-11350. [Federation] Router Support DelegationToken With ZK.

2022-12-06 Thread GitBox


hadoop-yetus commented on PR #5131:
URL: https://github.com/apache/hadoop/pull/5131#issuecomment-1339648549

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 15s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 13s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   3m 36s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 51s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 57s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m 25s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 54s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   3m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 23s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   1m 13s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5131/18/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt) |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 5 new + 10 unchanged - 0 fixed = 15 total (was 10)  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 40s | [/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5131/18/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) |  hadoop-yarn-server-common in the patch failed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javadoc  |   0m 38s | [/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5131/18/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) |  hadoop-yarn-server-common in the patch failed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  spotbugs  |   5m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 19s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 17s |  |  hadoop-yarn-server-common in the patch passed.  |
   | +1 :green_heart: |  unit  |  99m 27s |  |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 40s |  |  hadoop-yarn-server-router in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 239m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | 

[GitHub] [hadoop] goiri merged pull request #5146: YARN-11373. [Federation] Support refreshQueues refreshNodes API's for Federation.

2022-12-06 Thread GitBox


goiri merged PR #5146:
URL: https://github.com/apache/hadoop/pull/5146





[GitHub] [hadoop] goiri commented on a diff in pull request #5169: YARN-11349. [Federation] Router Support DelegationToken With SQL.

2022-12-06 Thread GitBox


goiri commented on code in PR #5169:
URL: https://github.com/apache/hadoop/pull/5169#discussion_r1041175764


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/utils/RowCountHandler.java:
##
@@ -0,0 +1,58 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.federation.store.utils;
+
+import org.apache.hadoop.util.StringUtils;
+
+import java.sql.SQLException;
+
+/**
+ * RowCount Handler.
+ * Used to parse out the rowCount information of the output parameter.
+ */
+public class RowCountHandler implements ResultSetHandler {

Review Comment:
   This class is SQL-specific, right? We should have it in the name or in a sql package.



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestSQLFederationStateStore.java:
##
@@ -558,38 +557,37 @@ public void 
testDeleteReservationHomeSubClusterAbnormalSituation() throws Except
 () -> stateStore.deleteReservationHomeSubCluster(delRequest));
   }
 
-  @Test(expected = NotImplementedException.class)
+  @Test
   public void testStoreNewMasterKey() throws Exception {

Review Comment:
   I think this gets executed by default, right? It already has the Test annotation on the parent, so we can get rid of this.
   (Check in the next run that this test actually runs.)



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/SQLFederationStateStore.java:
##
@@ -1353,45 +1384,454 @@ public Connection getConn() {
 return conn;
   }
 
+  /**
+   * SQLFederationStateStore Supports Store New MasterKey.
+   *
+   * @param request The request contains RouterMasterKey, which is an 
abstraction for DelegationKey.
+   * @return routerMasterKeyResponse, the response contains the 
RouterMasterKey.
+   * @throws YarnException if the call to the state store is unsuccessful.
+   * @throws IOException An IO Error occurred.
+   */
   @Override
   public RouterMasterKeyResponse storeNewMasterKey(RouterMasterKeyRequest 
request)
   throws YarnException, IOException {
-throw new NotImplementedException("Code is not implemented");
+
+// Step1: Verify parameters to ensure that key fields are not empty.
+FederationRouterRMTokenInputValidator.validate(request);
+
+// Step2: Parse the parameters and serialize the DelegationKey as a string.
+DelegationKey delegationKey = convertMasterKeyToDelegationKey(request);
+int keyId = delegationKey.getKeyId();
+String delegationKeyStr = 
FederationStateStoreUtils.encodeWritable(delegationKey);
+
+// Step3. store data in database.
+try {
+
+  FederationSQLOutParameter rowCountOUT =
+  new FederationSQLOutParameter<>("rowCount_OUT", 
java.sql.Types.INTEGER, Integer.class);
+
+  // Execute the query
+  long startTime = clock.getTime();
+  Integer rowCount = getRowCountByProcedureSQL(CALL_SP_ADD_MASTERKEY, 
keyId,
+  delegationKeyStr, rowCountOUT);
+  long stopTime = clock.getTime();
+
+  // We hope that 1 record can be written to the database.
+  // If the number of records is not 1, it means that the data was written 
incorrectly.
+  if (rowCount != 1) {
+FederationStateStoreUtils.logAndThrowStoreException(LOG,
+"Wrong behavior during the insertion of masterKey, keyId = %s. " +
+"please check the records of the database.", 
String.valueOf(keyId));
+  }
+  FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - 
startTime);
+} catch (SQLException e) {
+  FederationStateStoreClientMetrics.failedStateStoreCall();
+  FederationStateStoreUtils.logAndThrowRetriableException(e, LOG,
+  "Unable to insert the newly masterKey, keyId = %s.", 
String.valueOf(keyId));
+}
+
+// Step4. Query Data from the database and return the result.
+return getMasterKeyByDelegationKey(request);
   }
 
+  /**
+   * SQLFederationStateStore Supports Remove MasterKey.
+   *
+   * Defined the 

[jira] [Resolved] (HADOOP-17326) mvn verify fails due to duplicate entry in the shaded jar

2022-12-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17326.

Resolution: Cannot Reproduce

Closing. I successfully ran {{mvn verify}} in both trunk and branch-3.3 locally.

Please feel free to reopen this if it's still failing in some environment.

> mvn verify fails due to duplicate entry in the shaded jar
> -
>
> Key: HADOOP-17326
> URL: https://issues.apache.org/jira/browse/HADOOP-17326
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.2, 3.3.1, 3.4.0
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>
> Found this when I was chasing a separate shading error with [~smeng].
> In trunk:
> run mvn verify under hadoop-client-module/
> {noformat}
> [INFO] 
> 
> [INFO] Reactor Summary for Apache Hadoop Client Modules 3.4.0-SNAPSHOT:
> [INFO]
> [INFO] Apache Hadoop Client Aggregator  SUCCESS [  2.607 
> s]
> [INFO] Apache Hadoop Client API ... SUCCESS [03:16 
> min]
> [INFO] Apache Hadoop Client Runtime ... SUCCESS [01:30 
> min]
> [INFO] Apache Hadoop Client Test Minicluster .. FAILURE [04:44 
> min]
> [INFO] Apache Hadoop Client Packaging Invariants .. SKIPPED
> [INFO] Apache Hadoop Client Packaging Invariants for Test . SKIPPED
> [INFO] Apache Hadoop Client Packaging Integration Tests ... SKIPPED
> [INFO] Apache Hadoop Client Modules ... SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  09:34 min
> [INFO] Finished at: 2020-10-23T16:38:53-07:00
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.2.1:shade (default) on project 
> hadoop-client-minicluster: Error creating shaded jar: duplicate entry: 
> META-INF/services/org.apache.hadoop.shaded.com.fasterxml.jackson.core.JsonFactory
>  -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
>  {noformat}
> This is reproducible in trunk and branch-3.3. However, not reproducible in 
> branch-3.1.
> (branch-3.3 has a different error:
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.2.1:shade (default) on project 
> hadoop-client-minicluster: Error creating shaded jar: duplicate entry: 
> META-INF/services/org.apache.hadoop.shaded.javax.ws.rs.ext.MessageBodyReader 
> -> [Help 1])
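The "duplicate entry" failure happens when two input jars contribute the same `META-INF/services` path and the shaded jar tries to write that zip entry twice; maven-shade-plugin's `ServicesResourceTransformer` avoids this by merging the provider lists into a single entry instead. A minimal, illustrative Java sketch of that merge idea (this is not the plugin's actual source; class and method names are invented for the example):

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;

// Illustrative sketch only -- not maven-shade-plugin code.
// Writing the same zip entry twice raises "duplicate entry"; merging the
// provider lines from each input jar into one de-duplicated entry (the
// ServicesResourceTransformer approach) sidesteps the collision.
class ServiceEntryMerger {
    // entry path -> merged, de-duplicated provider lines, insertion-ordered
    final Map<String, LinkedHashSet<String>> merged = new LinkedHashMap<>();

    void addEntry(String entryPath, List<String> providerLines) {
        merged.computeIfAbsent(entryPath, p -> new LinkedHashSet<>())
              .addAll(providerLines);
    }
}
```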



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17326) mvn verify fails due to duplicate entry in the shaded jar

2022-12-06 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643931#comment-17643931
 ] 

Akira Ajisaka commented on HADOOP-17326:


Is this issue still valid? It's marked as a blocker, but there has been no progress for more than 2 years.

> mvn verify fails due to duplicate entry in the shaded jar
> -
>
> Key: HADOOP-17326
> URL: https://issues.apache.org/jira/browse/HADOOP-17326
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.2, 3.3.1, 3.4.0
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>
> Found this when I was chasing a separate shading error with [~smeng].
> In trunk:
> run mvn verify under hadoop-client-module/
> {noformat}
> [INFO] 
> 
> [INFO] Reactor Summary for Apache Hadoop Client Modules 3.4.0-SNAPSHOT:
> [INFO]
> [INFO] Apache Hadoop Client Aggregator  SUCCESS [  2.607 
> s]
> [INFO] Apache Hadoop Client API ... SUCCESS [03:16 
> min]
> [INFO] Apache Hadoop Client Runtime ... SUCCESS [01:30 
> min]
> [INFO] Apache Hadoop Client Test Minicluster .. FAILURE [04:44 
> min]
> [INFO] Apache Hadoop Client Packaging Invariants .. SKIPPED
> [INFO] Apache Hadoop Client Packaging Invariants for Test . SKIPPED
> [INFO] Apache Hadoop Client Packaging Integration Tests ... SKIPPED
> [INFO] Apache Hadoop Client Modules ... SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  09:34 min
> [INFO] Finished at: 2020-10-23T16:38:53-07:00
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.2.1:shade (default) on project 
> hadoop-client-minicluster: Error creating shaded jar: duplicate entry: 
> META-INF/services/org.apache.hadoop.shaded.com.fasterxml.jackson.core.JsonFactory
>  -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
>  {noformat}
> This is reproducible in trunk and branch-3.3. However, not reproducible in 
> branch-3.1.
> (branch-3.3 has a different error:
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.2.1:shade (default) on project 
> hadoop-client-minicluster: Error creating shaded jar: duplicate entry: 
> META-INF/services/org.apache.hadoop.shaded.javax.ws.rs.ext.MessageBodyReader 
> -> [Help 1])







[GitHub] [hadoop] hadoop-yetus commented on pull request #5180: HDFS-16858. Dynamically adjust max slow disks to exclude.

2022-12-06 Thread GitBox


hadoop-yetus commented on PR #5180:
URL: https://github.com/apache/hadoop/pull/5180#issuecomment-1339565473

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 36s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 462m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5180/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 584m  3s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.TestLeaseRecovery2 |
   |   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
   |   | hadoop.hdfs.TestViewDistributedFileSystemContract |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5180/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5180 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8ade4aa138f7 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 10fc2c91b6738eae807a1498aa57f4b706eba6eb |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5180/5/testReport/ |
   | Max. process+thread count | 1892 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5180/5/console |
   | versions | git=2.25.1 

[jira] [Commented] (HADOOP-18546) disable purging list of in progress reads in abfs stream closed

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643915#comment-17643915
 ] 

ASF GitHub Bot commented on HADOOP-18546:
-

steveloughran commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1339526719

   clarified the cleanup problem




> disable purging list of in progress reads in abfs stream closed
> ---
>
> Key: HADOOP-18546
> URL: https://issues.apache.org/jira/browse/HADOOP-18546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> turn off the prune of in progress reads in 
> ReadBufferManager::purgeBuffersForStream
> This will ensure active prefetches for a closed stream complete. They will
> then get to the completed list and hang around until evicted by timeout, but
> at least prefetching will be safe.
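The behaviour described above can be sketched as a simplified model. All names and fields below are illustrative assumptions, not the real ABFS `ReadBufferManager` implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified model of the purge behaviour: buffers in the
// completed list can be reclaimed when their stream closes, but buffers a
// worker thread is still filling must be left alone.
class BufferManagerSketch {
    // each entry is the id of the stream that owns the buffer
    final List<Integer> completedList = new ArrayList<>();
    final List<Integer> inProgressList = new ArrayList<>();

    /**
     * Purge buffers belonging to a closed stream, but skip in-progress
     * reads so active prefetches can complete safely.
     */
    void purgeBuffersForStream(int streamId) {
        // Safe to reclaim: nothing is writing into these buffers any more.
        completedList.removeIf(owner -> owner == streamId);
        // Deliberately NOT purging inProgressList: a worker thread may still
        // be writing into those buffers. They will reach the completed list
        // and be evicted later by timeout.
    }
}
```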







[GitHub] [hadoop] steveloughran commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

2022-12-06 Thread GitBox


steveloughran commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1339526719

   clarified the cleanup problem


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





[jira] [Commented] (HADOOP-18546) disable purging list of in progress reads in abfs stream closed

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643914#comment-17643914
 ] 

ASF GitHub Bot commented on HADOOP-18546:
-

steveloughran commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1041097514


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##
@@ -495,6 +499,63 @@ public void testSuccessfulReadAhead() throws Exception {
 checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+AbfsClient client = getMockAbfsClient();
+AbfsRestOperation successOp = getMockRestOp();
+final Long serverCommunicationMockLatency = 3_000L;
+final Long readBufferTransferToInProgressProbableTime = 1_000L;
+final Integer readBufferQueuedCount = 3;
+
+Mockito.doAnswer(invocationOnMock -> {
+  //sleeping thread to mock the network latency from client to backend.
+  Thread.sleep(serverCommunicationMockLatency);
+  return successOp;
+})
+.when(client)
+.read(any(String.class), any(Long.class), any(byte[].class),
+any(Integer.class), any(Integer.class), any(String.class),
+any(String.class), any(TracingContext.class));
+
+AbfsInputStream inputStream = getAbfsInputStream(client,
+"testSuccessfulReadAhead.txt");
+queueReadAheads(inputStream);
+
+final ReadBufferManager readBufferManager
+= ReadBufferManager.getBufferManager();
+
+final int readBufferTotal = readBufferManager.getNumBuffers();
+
+//Sleeping to give ReadBufferWorker to pick the readBuffers for processing.
+Thread.sleep(readBufferTransferToInProgressProbableTime);
+
+Assertions.assertThat(readBufferManager.getInProgressCopiedList())
+.describedAs("InProgressList should have " + readBufferQueuedCount + " elements")
+.hasSize(readBufferQueuedCount);
+final int freeListBufferCount = readBufferTotal - readBufferQueuedCount;
+Assertions.assertThat(readBufferManager.getFreeListCopy())
+.describedAs("FreeList should have " + freeListBufferCount + " elements")
+.hasSize(freeListBufferCount);
+Assertions.assertThat(readBufferManager.getCompletedReadListCopy())
+.describedAs("CompletedList should have 0 elements")
+.hasSize(0);
+
+inputStream.close();

Review Comment:
   the problem with the close() here is that it will only be reached if the 
assertions hold. if anything goes wrong, an exception is raised and the stream 
kept open, with whatever resources it consumes.
   
   it should be closed in a finally block *or* the stream opened in a 
try-with-resources clause. thanks
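A minimal sketch of that suggestion. A stand-in `Closeable` is used here to keep the example self-contained (the real test would open the `AbfsInputStream` itself in the try-with-resources header):

```java
import java.io.Closeable;

// try-with-resources guarantees close() runs even when an assertion throws
// mid-test, so no stream is leaked on failure.
class StubStream implements Closeable {
    boolean closed = false;
    @Override public void close() { closed = true; }
}

class CloseOnFailureDemo {
    // Runs some checks against the stream; returns it so callers can verify
    // that close() happened regardless of the assertion outcome.
    static StubStream runChecks(boolean failAssertion) {
        StubStream stream = new StubStream();
        try (StubStream s = stream) {
            if (failAssertion) {
                throw new AssertionError("simulated failing assertion");
            }
        } catch (AssertionError expected) {
            // swallowed for the demo; close() already ran on exit of try
        }
        return stream;
    }
}
```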





> disable purging list of in progress reads in abfs stream closed
> ---
>
> Key: HADOOP-18546
> URL: https://issues.apache.org/jira/browse/HADOOP-18546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> turn off the prune of in progress reads in 
> ReadBufferManager::purgeBuffersForStream
> This will ensure active prefetches for a closed stream complete. They will
> then get to the completed list and hang around until evicted by timeout, but
> at least prefetching will be safe.







[GitHub] [hadoop] steveloughran commented on a diff in pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

2022-12-06 Thread GitBox


steveloughran commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1041097514


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##
@@ -495,6 +499,63 @@ public void testSuccessfulReadAhead() throws Exception {
 checkEvictedStatus(inputStream, 0, true);
   }
 
+  /**
+   * This test expects InProgressList is not purged by the inputStream close.
+   */
+  @Test
+  public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception {
+AbfsClient client = getMockAbfsClient();
+AbfsRestOperation successOp = getMockRestOp();
+final Long serverCommunicationMockLatency = 3_000L;
+final Long readBufferTransferToInProgressProbableTime = 1_000L;
+final Integer readBufferQueuedCount = 3;
+
+Mockito.doAnswer(invocationOnMock -> {
+  //sleeping thread to mock the network latency from client to backend.
+  Thread.sleep(serverCommunicationMockLatency);
+  return successOp;
+})
+.when(client)
+.read(any(String.class), any(Long.class), any(byte[].class),
+any(Integer.class), any(Integer.class), any(String.class),
+any(String.class), any(TracingContext.class));
+
+AbfsInputStream inputStream = getAbfsInputStream(client,
+"testSuccessfulReadAhead.txt");
+queueReadAheads(inputStream);
+
+final ReadBufferManager readBufferManager
+= ReadBufferManager.getBufferManager();
+
+final int readBufferTotal = readBufferManager.getNumBuffers();
+
+//Sleeping to give ReadBufferWorker to pick the readBuffers for processing.
+Thread.sleep(readBufferTransferToInProgressProbableTime);
+
+Assertions.assertThat(readBufferManager.getInProgressCopiedList())
+.describedAs("InProgressList should have " + readBufferQueuedCount + " elements")
+.hasSize(readBufferQueuedCount);
+final int freeListBufferCount = readBufferTotal - readBufferQueuedCount;
+Assertions.assertThat(readBufferManager.getFreeListCopy())
+.describedAs("FreeList should have " + freeListBufferCount + " elements")
+.hasSize(freeListBufferCount);
+Assertions.assertThat(readBufferManager.getCompletedReadListCopy())
+.describedAs("CompletedList should have 0 elements")
+.hasSize(0);
+
+inputStream.close();

Review Comment:
   the problem with the close() here is that it will only be reached if the 
assertions hold. if anything goes wrong, an exception is raised and the stream 
kept open, with whatever resources it consumes.
   
   it should be closed in a finally block *or* the stream opened in a 
try-with-resources clause. thanks








[GitHub] [hadoop] K0K0V0K opened a new pull request, #5190: YARN-11390. TestResourceTrackerService.testNodeRemovalNormally ...

2022-12-06 Thread GitBox


K0K0V0K opened a new pull request, #5190:
URL: https://github.com/apache/hadoop/pull/5190

   …Shutdown nodes should be 0 now expected: <1> but was: <0>
   
   - The hardcoded sleep that was used in the test was not a stable solution
   - It was replaced with a polling assert
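A polling assert retries a condition until a deadline instead of sleeping a fixed amount. Hadoop provides this as `GenericTestUtils.waitFor`; the helper below is a generic, self-contained sketch in that spirit (names and timings are illustrative, not the YARN test's actual code):

```java
import java.util.function.Supplier;

// Minimal polling-assert helper: re-check the condition every intervalMs
// until it holds or timeoutMs elapses, then fail. Unlike a hardcoded
// Thread.sleep, this passes as soon as the condition is true and only
// fails when the condition genuinely never holds within the timeout.
class PollingAssert {
    static void waitFor(Supplier<Boolean> check, long intervalMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (check.get()) {
                return;  // condition reached; no fixed sleep needed
            }
            Thread.sleep(intervalMs);
        }
        throw new AssertionError("condition not met within " + timeoutMs + " ms");
    }
}
```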
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   







[jira] [Commented] (HADOOP-18146) ABFS: Add changes for expect hundred continue header with append requests

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643897#comment-17643897
 ] 

ASF GitHub Bot commented on HADOOP-18146:
-

hadoop-yetus commented on PR #4039:
URL: https://github.com/apache/hadoop/pull/4039#issuecomment-1339462660

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m 53s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 24s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  98m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4039/29/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4039 |
   | JIRA Issue | HADOOP-18146 |
   | Optional Tests | dupname asflicense codespell detsecrets xmllint compile 
javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle 
markdownlint |
   | uname | Linux 0b5f90e2f10a 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 457fda0e5d4834b5687170023e502854d3e25b24 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4039/29/testReport/ |
   | Max. process+thread count | 682 (vs. ulimit of 5500) |
   | modules | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4039: HADOOP-18146: ABFS: Added changes for expect hundred continue header

2022-12-06 Thread GitBox


hadoop-yetus commented on PR #4039:
URL: https://github.com/apache/hadoop/pull/4039#issuecomment-1339462660

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m 53s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 24s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  98m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4039/29/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4039 |
   | JIRA Issue | HADOOP-18146 |
   | Optional Tests | dupname asflicense codespell detsecrets xmllint compile 
javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle 
markdownlint |
   | uname | Linux 0b5f90e2f10a 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 457fda0e5d4834b5687170023e502854d3e25b24 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4039/29/testReport/ |
   | Max. process+thread count | 682 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4039/29/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache 

[GitHub] [hadoop] slfan1989 commented on pull request #5169: YARN-11349. [Federation] Router Support DelegationToken With SQL.

2022-12-06 Thread GitBox


slfan1989 commented on PR #5169:
URL: https://github.com/apache/hadoop/pull/5169#issuecomment-1339461796

   @goiri Can you help to review this PR again? Thank you very much!







[GitHub] [hadoop] slfan1989 commented on pull request #5146: YARN-11373. [Federation] Support refreshQueues refreshNodes API's for Federation.

2022-12-06 Thread GitBox


slfan1989 commented on PR #5146:
URL: https://github.com/apache/hadoop/pull/5146#issuecomment-1339458982

   @goiri Can you help merge this PR into the trunk branch? Thank you very much!
   
   We have fixed the javadoc issue.
   The Jenkins javadoc check compiles the code twice.

   The first time, the trunk branch is compiled directly.

   ![image](https://user-images.githubusercontent.com/55643692/205936021-c999d448-50e9-4ee1-ac10-f7b6b706eca1.png)

   The second time, after merging our PR code, it compiles again.
   The errors reported come from the trunk branch code: the test report shows 3 problems, which are fixed in the PR code, so the second compilation passed.
   
   
![image](https://user-images.githubusercontent.com/55643692/205936103-8cc93602-952f-40ac-b97c-d758c68d2be6.png)
   
   
   




-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #5175: YARN-11226. [Federation] Add createNewReservation, submitReservation, updateReservation, deleteReservation REST APIs for Router.

2022-12-06 Thread GitBox


slfan1989 commented on PR #5175:
URL: https://github.com/apache/hadoop/pull/5175#issuecomment-1339446096

   @goiri Can you help review this PR? Thank you very much!
   
   In PR (https://github.com/apache/hadoop/pull/4892), we completed the four 
methods createNewReservation, submitReservation, updateReservation, and 
deleteReservation. In this PR, we improved the code to use 
FederationActionRetry, which makes it more readable.
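   As a rough illustration of the retry pattern mentioned above (the actual 
FederationActionRetry signature may differ; this generic helper and its names 
are assumptions for illustration only):
   
   ```java
   import java.util.concurrent.Callable;
   
   public class RetrySketch {
     // Run an action, retrying on exception with a simple linear backoff.
     static <T> T runWithRetries(Callable<T> action, int maxAttempts, long backoffMs)
         throws Exception {
       Exception last = null;
       for (int attempt = 1; attempt <= maxAttempts; attempt++) {
         try {
           return action.call();
         } catch (Exception e) {
           last = e;
           Thread.sleep(backoffMs * attempt); // wait longer after each failure
         }
       }
       throw last; // all attempts failed; surface the last exception
     }
   
     public static void main(String[] args) throws Exception {
       int[] calls = {0};
       // Fails twice, then succeeds on the third attempt.
       String result = runWithRetries(() -> {
         if (++calls[0] < 3) {
           throw new IllegalStateException("transient failure");
         }
         return "submitted";
       }, 5, 1L);
       System.out.println(result + " after " + calls[0] + " attempts");
     }
   }
   ```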





[jira] [Updated] (HADOOP-18538) Upgrade kafka to 2.8.2

2022-12-06 Thread D M Murali Krishna Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

D M Murali Krishna Reddy updated HADOOP-18538:
--
Affects Version/s: 3.4.0

> Upgrade kafka to 2.8.2
> --
>
> Key: HADOOP-18538
> URL: https://issues.apache.org/jira/browse/HADOOP-18538
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
>  Labels: pull-request-available
>
> Upgrade kafka to 2.8.2 to resolve 
> [CVE-2022-34917|https://nvd.nist.gov/vuln/detail/CVE-2022-34917] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)




[jira] [Updated] (HADOOP-18539) Upgrade ojalgo to 51.4.1

2022-12-06 Thread D M Murali Krishna Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

D M Murali Krishna Reddy updated HADOOP-18539:
--
Component/s: build

> Upgrade ojalgo to  51.4.1
> -
>
> Key: HADOOP-18539
> URL: https://issues.apache.org/jira/browse/HADOOP-18539
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
>  Labels: pull-request-available
>
> Upgrade ojalgo to  51.4.1 to resolve CWE-327: [Use of a Broken or Risky 
> Cryptographic Algorithm|https://cwe.mitre.org/data/definitions/327.html] 






[jira] [Updated] (HADOOP-18541) Upgrade grizzly version to 2.4.4

2022-12-06 Thread D M Murali Krishna Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

D M Murali Krishna Reddy updated HADOOP-18541:
--
Component/s: build

> Upgrade grizzly version to 2.4.4
> 
>
> Key: HADOOP-18541
> URL: https://issues.apache.org/jira/browse/HADOOP-18541
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
>  Labels: pull-request-available
>
> Upgrade grizzly version to 2.4.4 to resolve
> |[[sonatype-2016-0415] CWE-79: Improper Neutralization of Input During Web 
> Page Generation ('Cross-site 
> Scripting')|https://ossindex.sonatype.org/vulnerability/sonatype-2016-0415?component-type=maven=org.glassfish.grizzly/grizzly-http-server]|
> [CVE-2014-0099|https://nvd.nist.gov/vuln/detail/CVE-2014-0099], 
> [CVE-2014-0075|https://nvd.nist.gov/vuln/detail/CVE-2014-0075], 
> [CVE-2017-128|https://nvd.nist.gov/vuln/detail/CVE-2017-128]






[jira] [Updated] (HADOOP-18538) Upgrade kafka to 2.8.2

2022-12-06 Thread D M Murali Krishna Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

D M Murali Krishna Reddy updated HADOOP-18538:
--
Component/s: build

> Upgrade kafka to 2.8.2
> --
>
> Key: HADOOP-18538
> URL: https://issues.apache.org/jira/browse/HADOOP-18538
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
>  Labels: pull-request-available
>
> Upgrade kafka to 2.8.2 to resolve 
> [CVE-2022-34917|https://nvd.nist.gov/vuln/detail/CVE-2022-34917] 






[jira] [Updated] (HADOOP-18541) Upgrade grizzly version to 2.4.4

2022-12-06 Thread D M Murali Krishna Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

D M Murali Krishna Reddy updated HADOOP-18541:
--
Affects Version/s: 3.4.0

> Upgrade grizzly version to 2.4.4
> 
>
> Key: HADOOP-18541
> URL: https://issues.apache.org/jira/browse/HADOOP-18541
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
>  Labels: pull-request-available
>
> Upgrade grizzly version to 2.4.4 to resolve
> |[[sonatype-2016-0415] CWE-79: Improper Neutralization of Input During Web 
> Page Generation ('Cross-site 
> Scripting')|https://ossindex.sonatype.org/vulnerability/sonatype-2016-0415?component-type=maven=org.glassfish.grizzly/grizzly-http-server]|
> [CVE-2014-0099|https://nvd.nist.gov/vuln/detail/CVE-2014-0099], 
> [CVE-2014-0075|https://nvd.nist.gov/vuln/detail/CVE-2014-0075], 
> [CVE-2017-128|https://nvd.nist.gov/vuln/detail/CVE-2017-128]






[jira] [Updated] (HADOOP-18539) Upgrade ojalgo to 51.4.1

2022-12-06 Thread D M Murali Krishna Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

D M Murali Krishna Reddy updated HADOOP-18539:
--
Affects Version/s: 3.4.0

> Upgrade ojalgo to  51.4.1
> -
>
> Key: HADOOP-18539
> URL: https://issues.apache.org/jira/browse/HADOOP-18539
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
>  Labels: pull-request-available
>
> Upgrade ojalgo to  51.4.1 to resolve CWE-327: [Use of a Broken or Risky 
> Cryptographic Algorithm|https://cwe.mitre.org/data/definitions/327.html] 






[jira] [Updated] (HADOOP-18540) Upgrade Bouncy Castle to 1.70

2022-12-06 Thread D M Murali Krishna Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

D M Murali Krishna Reddy updated HADOOP-18540:
--
Component/s: build

> Upgrade Bouncy Castle to 1.70
> -
>
> Key: HADOOP-18540
> URL: https://issues.apache.org/jira/browse/HADOOP-18540
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
>  Labels: pull-request-available
>
> Upgrade Bouncycastle to 1.70 to resolve
>  
> |[[sonatype-2021-4916] CWE-327: Use of a Broken or Risky Cryptographic 
> Algorithm|https://ossindex.sonatype.org/vulnerability/sonatype-2021-4916?component-type=maven=org.bouncycastle/bcprov-jdk15on]|
> |[[sonatype-2019-0673] CWE-400: Uncontrolled Resource Consumption ('Resource 
> Exhaustion')|https://ossindex.sonatype.org/vulnerability/sonatype-2019-0673?component-type=maven=org.bouncycastle/bcprov-jdk15on]|






[jira] [Updated] (HADOOP-18540) Upgrade Bouncy Castle to 1.70

2022-12-06 Thread D M Murali Krishna Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

D M Murali Krishna Reddy updated HADOOP-18540:
--
Affects Version/s: 3.4.0

> Upgrade Bouncy Castle to 1.70
> -
>
> Key: HADOOP-18540
> URL: https://issues.apache.org/jira/browse/HADOOP-18540
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
>  Labels: pull-request-available
>
> Upgrade Bouncycastle to 1.70 to resolve
>  
> |[[sonatype-2021-4916] CWE-327: Use of a Broken or Risky Cryptographic 
> Algorithm|https://ossindex.sonatype.org/vulnerability/sonatype-2021-4916?component-type=maven=org.bouncycastle/bcprov-jdk15on]|
> |[[sonatype-2019-0673] CWE-400: Uncontrolled Resource Consumption ('Resource 
> Exhaustion')|https://ossindex.sonatype.org/vulnerability/sonatype-2019-0673?component-type=maven=org.bouncycastle/bcprov-jdk15on]|






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5169: YARN-11349. [Federation] Router Support DelegationToken With SQL.

2022-12-06 Thread GitBox


slfan1989 commented on code in PR #5169:
URL: https://github.com/apache/hadoop/pull/5169#discussion_r1040937723


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/security/RouterDelegationTokenSecretManager.java:
##
@@ -137,6 +141,29 @@ public void storeNewToken(RMDelegationTokenIdentifier 
identifier,
 }
   }
 
+  /**
+   * The Router Supports Store new Token.
+   *
+   * @param identifier RMDelegationToken.
+   * @param tokenInfo DelegationTokenInformation.
+   */
+  public void storeNewToken(RMDelegationTokenIdentifier identifier,
+  DelegationTokenInformation tokenInfo) {
+try {
+  String token =
+  
RouterDelegationTokenSupport.encodeDelegationTokenInformation(tokenInfo);
+  long renewDate = tokenInfo.getRenewDate();
+
+  federationFacade.storeNewToken(identifier, renewDate, token);
+} catch (Exception e) {
+  if (!shouldIgnoreException(e)) {
+LOG.error("Error in storing RMDelegationToken with sequence number: 
{}.",
+identifier.getSequenceNumber());
+ExitUtil.terminate(1, e);

Review Comment:
   Thank you for your question. In YARN Federation, the Router plays a role 
similar to the RM from the client's perspective, and the code here follows the 
RM's code. In my opinion, it is better to keep the same processing logic as 
the RM.
   
   RMDelegationTokenSecretManager#storeNewToken
   ```java
   protected void storeNewToken(RMDelegationTokenIdentifier identifier,
       long renewDate) {
     try {
       LOG.info("storing RMDelegation token with sequence number: "
           + identifier.getSequenceNumber());
       rm.getRMContext().getStateStore().storeRMDelegationToken(identifier,
           renewDate);
     } catch (Exception e) {
       if (!shouldIgnoreException(e)) {
         LOG.error("Error in storing RMDelegationToken with sequence number: "
             + identifier.getSequenceNumber());
         ExitUtil.terminate(1, e);
       }
     }
   }
   ```
   






[jira] [Commented] (HADOOP-18146) ABFS: Add changes for expect hundred continue header with append requests

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643864#comment-17643864
 ] 

ASF GitHub Bot commented on HADOOP-18146:
-

anmolanmol1234 commented on code in PR #4039:
URL: https://github.com/apache/hadoop/pull/4039#discussion_r1040936094


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##
@@ -314,18 +314,21 @@ public void sendRequest(byte[] buffer, int offset, int 
length) throws IOExceptio
 if (this.isTraceEnabled) {
   startTime = System.nanoTime();
 }
-try (OutputStream outputStream = this.connection.getOutputStream()) {
-  // update bytes sent before they are sent so we may observe
-  // attempted sends as well as successful sends via the
-  // accompanying statusCode
-  this.bytesSent = length;
+OutputStream outputStream;
+try {
+  try {
+outputStream = this.connection.getOutputStream();
+  } catch (IOException e) {
+// If getOutputStream fails with an exception due to 100-continue

Review Comment:
   1. The first point is valid. I have made the change so that getOutputStream 
throws an exception for the cases where 100-continue is not enabled, and 
returns to the caller when it catches an IOException caused by 100-continue 
being enabled. processResponse then gets the correct status code, and the 
retry logic eventually comes into play.
   
   2. We need to update the bytes sent for failed as well as passed cases. The 
current change will not swallow any exceptions.
   The handling for the various status codes with 100-continue enabled is as 
follows:
   
   1. Case 1: getOutputStream doesn't throw any exception; the response is 
processed and gives a status code of 200. No retry is needed, so the request 
succeeds.
   2. Case 2: getOutputStream throws an exception; we return to the caller, 
and this.connection.getResponseCode() in processResponse gives a status code 
of 404 (user error). Exponential retry is not needed; we retry without 
100-continue enabled.
   3. Case 3: getOutputStream throws an exception; we return to the caller, 
and processResponse gives a status code of 503, which indicates throttling, so 
we back off with exponential retry. Since each append request waits for the 
100-continue response, the stress on the server is reduced.
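   The three cases above can be sketched as a small decision helper (the names 
and structure are illustrative only, not the actual ABFS implementation):
   
   ```java
   public class ExpectHundredContinueRetryPolicy {
     enum Action { SUCCEED, RETRY_WITHOUT_EXPECT, RETRY_WITH_BACKOFF }
   
     // Map the status code observed by processResponse to the next action.
     static Action onResponse(int statusCode) {
       if (statusCode >= 200 && statusCode < 300) {
         return Action.SUCCEED;              // case 1: request went through
       }
       if (statusCode >= 400 && statusCode < 500) {
         return Action.RETRY_WITHOUT_EXPECT; // case 2: user error, e.g. 404
       }
       return Action.RETRY_WITH_BACKOFF;     // case 3: e.g. 503 throttling
     }
   
     public static void main(String[] args) {
       System.out.println(onResponse(200)); // SUCCEED
       System.out.println(onResponse(404)); // RETRY_WITHOUT_EXPECT
       System.out.println(onResponse(503)); // RETRY_WITH_BACKOFF
     }
   }
   ```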





> ABFS: Add changes for expect hundred continue header with append requests
> -
>
> Key: HADOOP-18146
> URL: https://issues.apache.org/jira/browse/HADOOP-18146
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.1
>Reporter: Anmol Asrani
>Assignee: Anmol Asrani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
>  Heavy load from a Hadoop cluster leads to high resource utilization at FE 
> nodes. Investigations from the server side indicate payload buffering at 
> Http.Sys as the cause. Payloads of requests that eventually fail due to 
> throttling limits are also getting buffered, since buffering is triggered 
> before the FE can start request processing.
> Approach: the client sends the Append HTTP request with the Expect header, 
> but holds back on payload transmission until the server replies with HTTP 
> 100. We add this header to all append requests so as to reduce this 
> buffering.
> We made several workload runs with and without hundred continue enabled, and 
> the overall observations are:
>  # The ratio of TCP SYN packet count with and without expect hundred continue 
> enabled is 0.32 : 3 on average.
>  # The ingress into the machine at the TCP level is almost 3 times lower with 
> hundred continue enabled, which implies a significant bandwidth saving.
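The Expect/100-continue handshake described above, where the client holds back 
the payload until the server replies with HTTP 100, can be sketched with a raw 
socket round trip (a hypothetical, minimal demo against an in-process server; 
this is not the ABFS client code):

```java
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ExpectContinueDemo {
  public static void main(String[] args) throws Exception {
    // Tiny single-request server: reads headers, replies 100 Continue,
    // then reads the body and replies 200 OK.
    try (ServerSocket server = new ServerSocket(0)) {
      Thread t = new Thread(() -> {
        try (Socket s = server.accept()) {
          BufferedReader in = new BufferedReader(
              new InputStreamReader(s.getInputStream(), StandardCharsets.US_ASCII));
          OutputStream out = s.getOutputStream();
          int contentLength = 0;
          String line;
          while (!(line = in.readLine()).isEmpty()) {
            if (line.toLowerCase().startsWith("content-length:")) {
              contentLength = Integer.parseInt(line.split(":")[1].trim());
            }
          }
          // Server signals that the client may now transmit the payload.
          out.write("HTTP/1.1 100 Continue\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
          out.flush();
          char[] body = new char[contentLength];
          in.read(body, 0, contentLength); // demo only; ignores partial reads
          out.write("HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
              .getBytes(StandardCharsets.US_ASCII));
          out.flush();
        } catch (IOException ignored) { }
      });
      t.start();

      byte[] payload = "hello append".getBytes(StandardCharsets.US_ASCII);
      try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
        OutputStream out = client.getOutputStream();
        BufferedReader in = new BufferedReader(
            new InputStreamReader(client.getInputStream(), StandardCharsets.US_ASCII));
        // Send headers only; the payload is held back.
        out.write(("PUT /file?action=append HTTP/1.1\r\n"
            + "Host: 127.0.0.1\r\n"
            + "Content-Length: " + payload.length + "\r\n"
            + "Expect: 100-continue\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
        out.flush();
        String status = in.readLine(); // interim response line
        System.out.println("interim: " + status);
        in.readLine(); // blank line ending the interim response
        if (status.contains("100")) {
          out.write(payload); // only now transmit the body
          out.flush();
        }
        System.out.println("final: " + in.readLine());
      }
      t.join();
    }
  }
}
```

A throttled server could instead answer the headers with 503 before any 
payload is sent, which is the bandwidth saving the description measures.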









[GitHub] [hadoop] anmolanmol1234 commented on a diff in pull request #4039: HADOOP-18146: ABFS: Added changes for expect hundred continue header

2022-12-06 Thread GitBox


anmolanmol1234 commented on code in PR #4039:
URL: https://github.com/apache/hadoop/pull/4039#discussion_r1040936094


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##
@@ -314,18 +314,21 @@ public void sendRequest(byte[] buffer, int offset, int 
length) throws IOExceptio
 if (this.isTraceEnabled) {
   startTime = System.nanoTime();
 }
-try (OutputStream outputStream = this.connection.getOutputStream()) {
-  // update bytes sent before they are sent so we may observe
-  // attempted sends as well as successful sends via the
-  // accompanying statusCode
-  this.bytesSent = length;
+OutputStream outputStream;
+try {
+  try {
+outputStream = this.connection.getOutputStream();
+  } catch (IOException e) {
+// If getOutputStream fails with an exception due to 100-continue

Review Comment:
   1. The first point is valid. I have made the change so that getOutputStream 
throws an exception for the cases where 100-continue is not enabled, and 
returns to the caller when it catches an IOException caused by 100-continue 
being enabled. processResponse then gets the correct status code, and the 
retry logic eventually comes into play.
   
   2. We need to update the bytes sent for failed as well as passed cases. The 
current change will not swallow any exceptions.
   The handling for the various status codes with 100-continue enabled is as 
follows:
   
   1. Case 1: getOutputStream doesn't throw any exception; the response is 
processed and gives a status code of 200. No retry is needed, so the request 
succeeds.
   2. Case 2: getOutputStream throws an exception; we return to the caller, 
and this.connection.getResponseCode() in processResponse gives a status code 
of 404 (user error). Exponential retry is not needed; we retry without 
100-continue enabled.
   3. Case 3: getOutputStream throws an exception; we return to the caller, 
and processResponse gives a status code of 503, which indicates throttling, so 
we back off with exponential retry. Since each append request waits for the 
100-continue response, the stress on the server is reduced.









[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5169: YARN-11349. [Federation] Router Support DelegationToken With SQL.

2022-12-06 Thread GitBox


slfan1989 commented on code in PR #5169:
URL: https://github.com/apache/hadoop/pull/5169#discussion_r1040926729


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/security/token/delegation/RouterDelegationTokenSupport.java:
##
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.security.token.delegation;
+
+import org.apache.hadoop.io.WritableUtils;
+import 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.DelegationTokenInformation;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.Base64;
+
+/**
+ * Workaround for serialization of {@link DelegationTokenInformation} through 
package access.
+ * Future version of Hadoop should add this to DelegationTokenInformation 
itself.
+ */
+public final class RouterDelegationTokenSupport {
+
+  private RouterDelegationTokenSupport() {
+  }
+
+  public static String 
encodeDelegationTokenInformation(DelegationTokenInformation token) {
+try {
+  ByteArrayOutputStream bos = new ByteArrayOutputStream();
+  DataOutputStream out = new DataOutputStream(bos);
+  WritableUtils.writeVInt(out, token.password.length);
+  out.write(token.password);
+  out.writeLong(token.renewDate);
+  out.flush();
+  byte[] tokenInfoBytes = bos.toByteArray();
+  return Base64.getUrlEncoder().encodeToString(tokenInfoBytes);
+} catch (IOException ex) {
+  throw new RuntimeException("Failed to encode token.", ex);

Review Comment:
   Thank you very much for helping to review the code, I will modify the code.






[GitHub] [hadoop] slfan1989 commented on pull request #5182: YARN-11385. Fix hadoop-yarn-server-common module Java Doc Errors.

2022-12-06 Thread GitBox


slfan1989 commented on PR #5182:
URL: https://github.com/apache/hadoop/pull/5182#issuecomment-1339264728

   @cnauroth Can you help to review this PR again? Thank you very much! 
   
   @goiri Thank you very much for helping to review the code!





[GitHub] [hadoop] GauthamBanasandra merged pull request #3558: YARN-10978. Fixing ApplicationClassLoader to Correctly Expand Glob for Windows Path

2022-12-06 Thread GitBox


GauthamBanasandra merged PR #3558:
URL: https://github.com/apache/hadoop/pull/3558





[GitHub] [hadoop] GauthamBanasandra commented on pull request #3558: YARN-10978. Fixing ApplicationClassLoader to Correctly Expand Glob for Windows Path

2022-12-06 Thread GitBox


GauthamBanasandra commented on PR #3558:
URL: https://github.com/apache/hadoop/pull/3558#issuecomment-1339156347

   LGTM as well.





[GitHub] [hadoop] GauthamBanasandra merged pull request #5183: YARN-11386. Fix issue with classpath resolution

2022-12-06 Thread GitBox


GauthamBanasandra merged PR #5183:
URL: https://github.com/apache/hadoop/pull/5183





[jira] [Commented] (HADOOP-18546) disable purging list of in progress reads in abfs stream closed

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643788#comment-17643788
 ] 

ASF GitHub Bot commented on HADOOP-18546:
-

pranavsaxena-microsoft commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1339119467

   > one final change; the cleanup of the input stream in the test.
   > 
   > giving a +1 pending that, and I'm going to test this through spark today ... writing a test to replicate the failure and then verify that all is good when the jar is updated
   
   Thanks. We are doing inputStream.close() at 
https://github.com/apache/hadoop/pull/5176/files#diff-bdc464e1bfa3d270e552bdf740fc29ec808be9ab2c4f77a99bf896ac605a5698R546.
 Kindly advise what is expected from the inputStream cleanup. I agree with the comment on String.format; I shall refactor the code accordingly.
   
   Regards.
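[Editorial note: the stream cleanup under discussion can be made unconditional with try-with-resources. A minimal sketch, assuming a hypothetical openStream() helper in place of the real AbfsInputStream setup, which is not shown in this thread:]

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamCleanupSketch {
    // Hypothetical helper standing in for the test's stream setup;
    // the real test opens an AbfsInputStream.
    static InputStream openStream() {
        return new ByteArrayInputStream(new byte[]{1, 2, 3});
    }

    public static void main(String[] args) throws IOException {
        // try-with-resources guarantees close() runs even if an
        // assertion inside the block throws, which a trailing
        // inputStream.close() call at the end of a test does not.
        int total = 0;
        try (InputStream in = openStream()) {
            int b;
            while ((b = in.read()) != -1) {
                total += b;
            }
        } // in.close() is invoked here automatically
        System.out.println(total);
    }
}
```

This is the usual reviewer expectation for "cleanup of the input stream in the test": close on all paths, not just the success path.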




> disable purging list of in progress reads in abfs stream closed
> ---
>
> Key: HADOOP-18546
> URL: https://issues.apache.org/jira/browse/HADOOP-18546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> turn off the prune of in progress reads in 
> ReadBufferManager::purgeBuffersForStream
> this will ensure active prefetches for a closed stream complete. they will 
> then get to the completed list and hang around until evicted by timeout, but 
> at least prefetching will be safe.
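[Editorial note: the purge behavior described above can be sketched with plain lists. The class and method names below are illustrative stand-ins for ReadBufferManager state, not the real Hadoop types:]

```java
import java.util.ArrayList;
import java.util.List;

public class PurgeSketch {
    // Minimal stand-in for a read-ahead buffer tracked per stream.
    record Buffer(String stream, String state) {}

    // After the change, purging a closed stream drops its buffers from
    // the completed list but deliberately leaves in-progress reads
    // untouched; they finish, move to completed, and age out by timeout.
    static void purgeBuffersForStream(List<Buffer> completed,
                                      List<Buffer> inProgress,
                                      String closedStream) {
        completed.removeIf(b -> b.stream().equals(closedStream));
        // inProgress is intentionally not modified here
    }

    public static void main(String[] args) {
        List<Buffer> completed = new ArrayList<>(List.of(
            new Buffer("s1", "done"), new Buffer("s2", "done")));
        List<Buffer> inProgress = new ArrayList<>(List.of(
            new Buffer("s1", "reading")));
        purgeBuffersForStream(completed, inProgress, "s1");
        // s1's completed buffer is purged; its in-flight read survives
        System.out.println(completed.size() + "," + inProgress.size());
    }
}
```

The design trade-off: buffers for a closed stream may linger until eviction, but a worker thread never writes into a buffer that was recycled out from under it.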



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] pranavsaxena-microsoft commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

2022-12-06 Thread GitBox


pranavsaxena-microsoft commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1339119467

   > one final change; the cleanup of the input stream in the test.
   > 
   > giving a +1 pending that, and I'm going to test this through spark today ... writing a test to replicate the failure and then verify that all is good when the jar is updated
   
   Thanks. We are doing inputStream.close() at 
https://github.com/apache/hadoop/pull/5176/files#diff-bdc464e1bfa3d270e552bdf740fc29ec808be9ab2c4f77a99bf896ac605a5698R546.
 Kindly advise what is expected from the inputStream cleanup. I agree with the comment on String.format; I shall refactor the code accordingly.
   
   Regards.





[GitHub] [hadoop] hadoop-yetus commented on pull request #5189: update cloud 123

2022-12-06 Thread GitBox


hadoop-yetus commented on PR #5189:
URL: https://github.com/apache/hadoop/pull/5189#issuecomment-1339113759

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   2m 23s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  shadedclient  |  37m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  shadedclient  |  26m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  70m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5189/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5189 |
   | Optional Tests | dupname asflicense codespell detsecrets |
   | uname | Linux da2b9e1c817a 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1942eef8061406d8b738a957d25c0006b0aa9473 |
   | Max. process+thread count | 641 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5189/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-18546) disable purging list of in progress reads in abfs stream closed

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643779#comment-17643779
 ] 

ASF GitHub Bot commented on HADOOP-18546:
-

steveloughran commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1040769983


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##
@@ -524,30 +527,33 @@ public void testStreamPurgeDuringReadAheadCallExecuting() 
throws Exception {
 final ReadBufferManager readBufferManager
 = ReadBufferManager.getBufferManager();
 
+final int readBufferTotal = readBufferManager.getNumBuffers();
+
 //Sleeping to give ReadBufferWorker to pick the readBuffers for processing.
-Thread.sleep(1_000L);
+Thread.sleep(readBufferTransferToInProgressProbableTime);
 
 Assertions.assertThat(readBufferManager.getInProgressCopiedList())
-.describedAs("InProgressList should have 3 elements")
-.hasSize(3);
+.describedAs("InProgressList should have " + readBufferQueuedCount + " 
elements")
+.hasSize(readBufferQueuedCount);
+final int freeListBufferCount = readBufferTotal - readBufferQueuedCount;
 Assertions.assertThat(readBufferManager.getFreeListCopy())
-.describedAs("FreeList should have 13 elements")
-.hasSize(13);
+.describedAs("FreeList should have " + freeListBufferCount + 
"elements")

Review Comment:
   you can actually use string.format patterns here; most relevant for on 
demand toString calls which are more expensive. I'm not worrying about it here 
though
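[Editorial note: the point about format patterns can be shown with plain String.format. This sketch also exposes the missing space in the concatenated message from the diff above:]

```java
public class FormatSketch {
    public static void main(String[] args) {
        int freeListBufferCount = 13;
        // Concatenation builds the string eagerly and, in the diff
        // above, drops the space before "elements".
        String concatenated =
            "FreeList should have " + freeListBufferCount + "elements";
        // A format pattern keeps the message as one readable template.
        String formatted =
            String.format("FreeList should have %d elements", freeListBufferCount);
        System.out.println(concatenated);
        System.out.println(formatted);
    }
}
```

AssertJ's describedAs also has a varargs overload, `describedAs(String description, Object... args)`, which (as the reviewer notes) defers the formatting work until the description is actually needed, so expensive toString calls only happen on assertion failure.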





> disable purging list of in progress reads in abfs stream closed
> ---
>
> Key: HADOOP-18546
> URL: https://issues.apache.org/jira/browse/HADOOP-18546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> turn off the prune of in progress reads in 
> ReadBufferManager::purgeBuffersForStream
> this will ensure active prefetches for a closed stream complete. they wiill 
> then get to the completed list and hang around until evicted by timeout, but 
> at least prefetching will be safe.






[GitHub] [hadoop] steveloughran commented on a diff in pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

2022-12-06 Thread GitBox


steveloughran commented on code in PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#discussion_r1040769983


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##
@@ -524,30 +527,33 @@ public void testStreamPurgeDuringReadAheadCallExecuting() 
throws Exception {
 final ReadBufferManager readBufferManager
 = ReadBufferManager.getBufferManager();
 
+final int readBufferTotal = readBufferManager.getNumBuffers();
+
 //Sleeping to give ReadBufferWorker to pick the readBuffers for processing.
-Thread.sleep(1_000L);
+Thread.sleep(readBufferTransferToInProgressProbableTime);
 
 Assertions.assertThat(readBufferManager.getInProgressCopiedList())
-.describedAs("InProgressList should have 3 elements")
-.hasSize(3);
+.describedAs("InProgressList should have " + readBufferQueuedCount + " 
elements")
+.hasSize(readBufferQueuedCount);
+final int freeListBufferCount = readBufferTotal - readBufferQueuedCount;
 Assertions.assertThat(readBufferManager.getFreeListCopy())
-.describedAs("FreeList should have 13 elements")
-.hasSize(13);
+.describedAs("FreeList should have " + freeListBufferCount + 
"elements")

Review Comment:
   you can actually use string.format patterns here; most relevant for on 
demand toString calls which are more expensive. I'm not worrying about it here 
though



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18560) AvroFSInput opens a stream twice and discards the second one without closing

2022-12-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18560.
-
Fix Version/s: 3.4.0
   3.3.5
   Resolution: Fixed

> AvroFSInput opens a stream twice and discards the second one without closing
> 
>
> Key: HADOOP-18560
> URL: https://issues.apache.org/jira/browse/HADOOP-18560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
>
> late breaking blocker for 3.3.5; AvroFSInput can leak input streams because 
> the change of HADOOP-16202 failed to comment out the original open() call.
> noticed during a code review with [~harshit.gupta]
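[Editorial note: the leak pattern described above is easy to reproduce in miniature. The open() counter below is a hypothetical stand-in for FileSystem.open(); the real AvroFSInput code is not shown in this thread:]

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class DoubleOpenSketch {
    static int openCount = 0;

    // Stand-in for FileSystem.open(); counts streams handed out.
    static InputStream open() {
        openCount++;
        return new ByteArrayInputStream(new byte[0]);
    }

    public static void main(String[] args) throws IOException {
        // Buggy shape: the original open() call was left in place
        // alongside the new one, so two streams are created and the
        // first is never closed.
        InputStream leaked = open();  // leftover call, result discarded
        InputStream used = open();    // the call actually used
        used.close();
        System.out.println(openCount + " opened, 1 closed");
        leaked.close(); // the fix is to delete the leftover open() call
    }
}
```

Against object stores this kind of leak ties up connections until GC, which is why it was treated as a release blocker.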






[jira] [Commented] (HADOOP-18560) AvroFSInput opens a stream twice and discards the second one without closing

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643766#comment-17643766
 ] 

ASF GitHub Bot commented on HADOOP-18560:
-

steveloughran merged PR #5186:
URL: https://github.com/apache/hadoop/pull/5186




> AvroFSInput opens a stream twice and discards the second one without closing
> 
>
> Key: HADOOP-18560
> URL: https://issues.apache.org/jira/browse/HADOOP-18560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
>
> late breaking blocker for 3.3.5; AvroFSInput can leak input streams because 
> the change of HADOOP-16202 failed to comment out the original open() call.
> noticed during a code review with [~harshit.gupta]






[GitHub] [hadoop] steveloughran merged pull request #5186: HADOOP-18560. AvroFSInput opens a stream twice and discards the second one without closing

2022-12-06 Thread GitBox


steveloughran merged PR #5186:
URL: https://github.com/apache/hadoop/pull/5186





[GitHub] [hadoop] hadoop-yetus commented on pull request #5184: HDFS-16861. RBF. Truncate API always fails when dirs use AllResolver order on Router

2022-12-06 Thread GitBox


hadoop-yetus commented on PR #5184:
URL: https://github.com/apache/hadoop/pull/5184#issuecomment-1339054149

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 26s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 42s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5184/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs-rbf in trunk failed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javadoc  |   0m 50s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5184/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-hdfs-rbf in trunk failed with JDK Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  spotbugs  |   0m 50s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5184/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   6m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 32s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5184/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch failed.  |
   | -1 :x: |  compile  |   0m 34s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5184/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs-rbf in the patch failed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javac  |   0m 34s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5184/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs-rbf in the patch failed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 28s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5184/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 4 new + 2 
unchanged - 0 fixed = 6 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 54s | 
[/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5184/2/artifact/out/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt)
 |  
hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04
 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 99 new + 0 
unchanged - 0 fixed = 99 total (was 

[jira] [Commented] (HADOOP-18560) AvroFSInput opens a stream twice and discards the second one without closing

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643763#comment-17643763
 ] 

ASF GitHub Bot commented on HADOOP-18560:
-

steveloughran commented on PR #5186:
URL: https://github.com/apache/hadoop/pull/5186#issuecomment-1339053451

   thanks chris. was my fault in the first place though -don't thank me too 
much 




> AvroFSInput opens a stream twice and discards the second one without closing
> 
>
> Key: HADOOP-18560
> URL: https://issues.apache.org/jira/browse/HADOOP-18560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
>
> late breaking blocker for 3.3.5; AvroFSInput can leak input streams because 
> the change of HADOOP-16202 failed to comment out the original open() call.
> noticed during a code review with [~harshit.gupta]






[GitHub] [hadoop] steveloughran commented on pull request #5186: HADOOP-18560. AvroFSInput opens a stream twice and discards the second one without closing

2022-12-06 Thread GitBox


steveloughran commented on PR #5186:
URL: https://github.com/apache/hadoop/pull/5186#issuecomment-1339053451

   thanks chris. was my fault in the first place though -don't thank me too 
much 





[jira] [Commented] (HADOOP-18546) disable purging list of in progress reads in abfs stream closed

2022-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643760#comment-17643760
 ] 

ASF GitHub Bot commented on HADOOP-18546:
-

pranavsaxena-microsoft commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1339051008

   > sorry, should have been clearer: a local spark build and spark-shell 
process is ideal for replication and validation -as all splits are processed in 
different worker threads in that process, it recreates the exact failure mode.
   > 
   > script you can take and tune for your system; uses the mkcsv command in 
cloudstore JAR.
   > 
   > I am going to add this as a scalatest suite in the same module 
https://github.com/hortonworks-spark/cloud-integration/blob/master/spark-cloud-integration/src/scripts/validating-csv-record-io.sc
   
   Thanks for the script. I had applied following changes on the script: 
https://github.com/pranavsaxena-microsoft/cloud-integration/commit/1d779f22150be3102635819e4525967573602dd9.
   
   On trunk's jar, got exception:
   ```
   22/12/05 23:51:27 ERROR Executor: Exception in task 4.0 in stage 1.0 (TID 5)
   java.lang.NullPointerException: Null value appeared in non-nullable field:
   - field (class: "scala.Long", name: "rowId")
   - root class: 
"$line85.$read.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.CsvRecord"
   If the schema is inferred from a Scala tuple/case class, or a Java bean, 
please try to use scala.Option[_] or other nullable types (e.g. 
java.lang.Integer instead of int/scala.Int).
   at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply_0_0$(Unknown
 Source)
   at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown
 Source)
   at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
   at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
   at scala.collection.Iterator.foreach(Iterator.scala:943)
   at scala.collection.Iterator.foreach$(Iterator.scala:943)
   at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
   at org.apache.spark.rdd.RDD.$anonfun$foreach$2(RDD.scala:1001)
   at 
org.apache.spark.rdd.RDD.$anonfun$foreach$2$adapted(RDD.scala:1001)
   at 
org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2302)
   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
   at 
org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
   at org.apache.spark.scheduler.Task.run(Task.scala:139)
   at 
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1502)
   at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   at java.lang.Thread.run(Thread.java:750)
   ```
   
   Using the jar of the PR's code:
   ```
   minimums=((action_http_get_request.min=-1) 
(action_http_get_request.failures.min=-1));
   maximums=((action_http_get_request.max=-1) 
(action_http_get_request.failures.max=-1));
   means=((action_http_get_request.failures.mean=(samples=0, sum=0, 
mean=0.)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.)));
   }} 
   22/12/06 01:04:22 INFO TaskSetManager: Finished task 8.0 in stage 1.0 (TID 
9) in 14727 ms on snvijaya-Virtual-Machine.mshome.net (executor driver) (9/9)
   22/12/06 01:04:22 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks 
have all completed, from pool 
   22/12/06 01:04:22 INFO DAGScheduler: ResultStage 1 (foreach at 
/home/snvijaya/Desktop/cloud-integration/spark-cloud-integration/src/scripts/validating-csv-record-io.sc:46)
 finished in 115.333 s
   22/12/06 01:04:22 INFO DAGScheduler: Job 1 is finished. Cancelling potential 
speculative or zombie tasks for this job
   22/12/06 01:04:22 INFO TaskSchedulerImpl: Killing all running tasks in stage 
1: Stage finished
   22/12/06 01:04:22 INFO DAGScheduler: Job 1 finished: foreach at 
/home/snvijaya/Desktop/cloud-integration/spark-cloud-integration/src/scripts/validating-csv-record-io.sc:46,
 took 115.337621 s
   res35: String = validation completed [start: string, rowId: bigint ... 6 
more fields]
   ```
   
   Commands executed:
   ```
   :load 
/home/snvijaya/Desktop/cloud-integration/spark-cloud-integration/src/scripts/validating-csv-record-io.sc
   validateDS(rowsDS)
   ```




> disable purging list of in progress reads in abfs stream closed
> ---
>
> Key: HADOOP-18546
> URL: 

[GitHub] [hadoop] pranavsaxena-microsoft commented on pull request #5176: HADOOP-18546. ABFS:disable purging list of in progress reads in abfs stream closed

2022-12-06 Thread GitBox


pranavsaxena-microsoft commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1339051008

   > sorry, should have been clearer: a local spark build and spark-shell 
process is ideal for replication and validation -as all splits are processed in 
different worker threads in that process, it recreates the exact failure mode.
   > 
   > script you can take and tune for your system; uses the mkcsv command in 
cloudstore JAR.
   > 
   > I am going to add this as a scalatest suite in the same module 
https://github.com/hortonworks-spark/cloud-integration/blob/master/spark-cloud-integration/src/scripts/validating-csv-record-io.sc
   
   Thanks for the script. I had applied following changes on the script: 
https://github.com/pranavsaxena-microsoft/cloud-integration/commit/1d779f22150be3102635819e4525967573602dd9.
   
   On trunk's jar, got exception:
   ```
   22/12/05 23:51:27 ERROR Executor: Exception in task 4.0 in stage 1.0 (TID 5)
   java.lang.NullPointerException: Null value appeared in non-nullable field:
   - field (class: "scala.Long", name: "rowId")
   - root class: 
"$line85.$read.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.CsvRecord"
   If the schema is inferred from a Scala tuple/case class, or a Java bean, 
please try to use scala.Option[_] or other nullable types (e.g. 
java.lang.Integer instead of int/scala.Int).
   at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply_0_0$(Unknown
 Source)
   at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown
 Source)
   at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
   at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
   at scala.collection.Iterator.foreach(Iterator.scala:943)
   at scala.collection.Iterator.foreach$(Iterator.scala:943)
   at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
   at org.apache.spark.rdd.RDD.$anonfun$foreach$2(RDD.scala:1001)
   at 
org.apache.spark.rdd.RDD.$anonfun$foreach$2$adapted(RDD.scala:1001)
   at 
org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2302)
   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
   at 
org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
   at org.apache.spark.scheduler.Task.run(Task.scala:139)
   at 
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1502)
   at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   at java.lang.Thread.run(Thread.java:750)
   ```
   
   Using the jar of the PR's code:
   ```
   minimums=((action_http_get_request.min=-1) 
(action_http_get_request.failures.min=-1));
   maximums=((action_http_get_request.max=-1) 
(action_http_get_request.failures.max=-1));
   means=((action_http_get_request.failures.mean=(samples=0, sum=0, 
mean=0.)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.)));
   }} 
   22/12/06 01:04:22 INFO TaskSetManager: Finished task 8.0 in stage 1.0 (TID 
9) in 14727 ms on snvijaya-Virtual-Machine.mshome.net (executor driver) (9/9)
   22/12/06 01:04:22 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks 
have all completed, from pool 
   22/12/06 01:04:22 INFO DAGScheduler: ResultStage 1 (foreach at 
/home/snvijaya/Desktop/cloud-integration/spark-cloud-integration/src/scripts/validating-csv-record-io.sc:46)
 finished in 115.333 s
   22/12/06 01:04:22 INFO DAGScheduler: Job 1 is finished. Cancelling potential 
speculative or zombie tasks for this job
   22/12/06 01:04:22 INFO TaskSchedulerImpl: Killing all running tasks in stage 
1: Stage finished
   22/12/06 01:04:22 INFO DAGScheduler: Job 1 finished: foreach at 
/home/snvijaya/Desktop/cloud-integration/spark-cloud-integration/src/scripts/validating-csv-record-io.sc:46,
 took 115.337621 s
   res35: String = validation completed [start: string, rowId: bigint ... 6 
more fields]
   ```
   
   Commands executed:
   ```
   :load 
/home/snvijaya/Desktop/cloud-integration/spark-cloud-integration/src/scripts/validating-csv-record-io.sc
   validateDS(rowsDS)
   ```



[GitHub] [hadoop] DMMinhas opened a new pull request, #5189: update cloud 123

2022-12-06 Thread GitBox


DMMinhas opened a new pull request, #5189:
URL: https://github.com/apache/hadoop/pull/5189

   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   

