[jira] [Updated] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread xinqiu.hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinqiu.hu updated HADOOP-18534:
---
Attachment: (was: 未命名文件 (1).png)

> Propose a mechanism to free the direct memory occupied by RPC Connections
> -
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>
>   In the RPC Client, a thread called RpcRequestSender is responsible for
> writing connection requests to the socket. Every time a request is sent, a
> direct buffer is allocated in sun.nio.ch.IOUtil#write() and cached.
>   If the Connection and RpcRequestSender are promoted to the old generation,
> they will not be reclaimed until a full GC runs, so the DirectByteBuffers
> cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied
> by these DirectByteBuffers grows too large, the JVM process may be killed
> before it ever gets a chance to run a full GC.
>   Unfortunately, there is no easy way to free these DirectByteBuffers.
> Perhaps we can free them manually with the following method when the
> Connection is closed.
> {code:java}
> private void freeDirectBuffer() {
>   try {
>     // sun.nio.ch.Util returns a buffer from the calling thread's cache if
>     // one exists (removing it from the cache); only when the cache is empty
>     // does it allocate a fresh buffer of exactly the requested capacity.
>     // So keep requesting 1-byte buffers and cleaning whatever comes back
>     // until a fresh capacity-1 buffer appears, then clean that one too.
>     ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
>     int i = 0;
>     while (buffer.capacity() != 1 && i < 1024) {
>       ((DirectBuffer) buffer).cleaner().clean();
>       buffer = Util.getTemporaryDirectBuffer(1);
>       i++;
>     }
>     ((DirectBuffer) buffer).cleaner().clean();
>   } catch (Throwable t) {
>     LOG.error("free direct memory error, connectionId: " + remoteId, t);
>   }
> }{code}
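
For context, a minimal sketch (an illustration only, not part of the issue;
the class name, host, and port are placeholders) of how the per-thread cache
fills up: writing a heap ByteBuffer through a SocketChannel makes NIO copy it
into a temporary direct buffer, which sun.nio.ch.Util then caches for the
writing thread.
{code:java}
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class DirectCacheDemo {
  public static void main(String[] args) throws Exception {
    try (SocketChannel ch = SocketChannel.open(
        new InetSocketAddress("localhost", 8020))) {
      // Heap buffer: sun.nio.ch.IOUtil#write copies it into a direct buffer
      // obtained from the thread-local cache in sun.nio.ch.Util.
      ByteBuffer heap = ByteBuffer.wrap(new byte[1024 * 1024]);
      ch.write(heap);
      // The 1 MB direct copy is now cached for this thread until the thread
      // dies or the buffer is garbage collected. On newer JDKs the system
      // property jdk.nio.maxCachedBufferSize can bound this cache.
    }
  }
}{code}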






[jira] [Updated] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread xinqiu.hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinqiu.hu updated HADOOP-18534:
---
Description: 
  In the RPC Client, a thread called RpcRequestSender is responsible for
writing connection requests to the socket. Every time a request is sent, a
direct buffer is allocated in sun.nio.ch.IOUtil#write() and cached.

  If the Connection and RpcRequestSender are promoted to the old generation,
they will not be reclaimed until a full GC runs, so the DirectByteBuffers
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied
by these DirectByteBuffers grows too large, the JVM process may be killed
before it ever gets a chance to run a full GC.

  Unfortunately, there is no easy way to free these DirectByteBuffers.
Perhaps we can free them manually with the following method when the
Connection is closed.
{code}
private void freeDirectBuffer() {
  try {
    DirectBuffer buffer = (DirectBuffer) Util.getTemporaryDirectBuffer(1);
    buffer.cleaner().clean();
  } catch (Throwable t) {
    LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}{code}

  was:
  In the RPC Client, a thread called RpcRequestSender is responsible for
writing connection requests to the socket. Every time a request is sent, a
direct buffer is allocated in sun.nio.ch.IOUtil#write() and cached.

  If the Connection and RpcRequestSender are promoted to the old generation,
they will not be reclaimed until a full GC runs, so the DirectByteBuffers
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied
by these DirectByteBuffers grows too large, the JVM process may be killed
before it ever gets a chance to run a full GC.

  Unfortunately, there is no easy way to free these DirectByteBuffers.
Perhaps we can free them manually with the following method when the
Connection is closed.
{code:java}
private void freeDirectBuffer() {
  try {
    DirectBuffer buffer = (DirectBuffer) Util.getTemporaryDirectBuffer(1);
    buffer.cleaner().clean();
  } catch (Throwable t) {
    LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}{code}


> Propose a mechanism to free the direct memory occupied by RPC Connections
> -
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>
>   In the RPC Client, a thread called RpcRequestSender is responsible for
> writing connection requests to the socket. Every time a request is sent, a
> direct buffer is allocated in sun.nio.ch.IOUtil#write() and cached.
>   If the Connection and RpcRequestSender are promoted to the old generation,
> they will not be reclaimed until a full GC runs, so the DirectByteBuffers
> cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied
> by these DirectByteBuffers grows too large, the JVM process may be killed
> before it ever gets a chance to run a full GC.
>   Unfortunately, there is no easy way to free these DirectByteBuffers.
> Perhaps we can free them manually with the following method when the
> Connection is closed.
> {code}
> private void freeDirectBuffer() {
>   try {
>     DirectBuffer buffer = (DirectBuffer) Util.getTemporaryDirectBuffer(1);
>     buffer.cleaner().clean();
>   } catch (Throwable t) {
>     LOG.error("free direct memory error, connectionId: " + remoteId, t);
>   }
> }{code}






[jira] [Updated] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread xinqiu.hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinqiu.hu updated HADOOP-18534:
---
Description: 
  In the RPC Client, a thread called RpcRequestSender is responsible for
writing connection requests to the socket. Every time a request is sent, a
direct buffer is allocated in sun.nio.ch.IOUtil#write() and cached.

  If the Connection and RpcRequestSender are promoted to the old generation,
they will not be reclaimed until a full GC runs, so the DirectByteBuffers
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied
by these DirectByteBuffers grows too large, the JVM process may be killed
before it ever gets a chance to run a full GC.

  Unfortunately, there is no easy way to free these DirectByteBuffers.
Perhaps we can free them manually with the following method when the
Connection is closed.
{code:java}
private void freeDirectBuffer() {
  try {
    DirectBuffer buffer = (DirectBuffer) Util.getTemporaryDirectBuffer(1);
    buffer.cleaner().clean();
  } catch (Throwable t) {
    LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}{code}

  was:
  In the RPC Client, a thread called RpcRequestSender is responsible for
writing connection requests to the socket. Every time a request is sent, a
direct buffer is allocated in sun.nio.ch.IOUtil#write() and cached.

  If the Connection and RpcRequestSender are promoted to the old generation,
they will not be reclaimed until a full GC runs, so the DirectByteBuffers
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied
by these DirectByteBuffers grows too large, the JVM process may be killed
before it ever gets a chance to run a full GC.

  Unfortunately, there is no easy way to free these DirectByteBuffers.
Perhaps we can free them manually with the following method when the
Connection is closed.
{code:java}
private void freeDirectBuffer() {
  try {
    ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
    int i = 0;
    while (buffer.capacity() != 1 && i < 1024) {
      ((DirectBuffer) buffer).cleaner().clean();
      buffer = Util.getTemporaryDirectBuffer(1);
      i++;
    }
    ((DirectBuffer) buffer).cleaner().clean();
  } catch (Throwable t) {
    LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}{code}


> Propose a mechanism to free the direct memory occupied by RPC Connections
> -
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>
>   In the RPC Client, a thread called RpcRequestSender is responsible for
> writing connection requests to the socket. Every time a request is sent, a
> direct buffer is allocated in sun.nio.ch.IOUtil#write() and cached.
>   If the Connection and RpcRequestSender are promoted to the old generation,
> they will not be reclaimed until a full GC runs, so the DirectByteBuffers
> cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied
> by these DirectByteBuffers grows too large, the JVM process may be killed
> before it ever gets a chance to run a full GC.
>   Unfortunately, there is no easy way to free these DirectByteBuffers.
> Perhaps we can free them manually with the following method when the
> Connection is closed.
> {code:java}
> private void freeDirectBuffer() {
>   try {
>     DirectBuffer buffer = (DirectBuffer) Util.getTemporaryDirectBuffer(1);
>     buffer.cleaner().clean();
>   } catch (Throwable t) {
>     LOG.error("free direct memory error, connectionId: " + remoteId, t);
>   }
> }{code}






[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.

2022-11-20 Thread GitBox


ayushtkn commented on code in PR #5152:
URL: https://github.com/apache/hadoop/pull/5152#discussion_r1027665142


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/package-info.java:
##
@@ -1,4 +1,4 @@
-/*
+/**

Review Comment:
   this isn't required; it isn't a javadoc comment AFAIK. Is it also causing failures?
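
   For reference, the javadoc tool only processes comments that start with
`/**` and immediately precede a declaration; a plain `/* ... */` block
comment (such as a license header) is ignored. A minimal illustration
(hypothetical package name):
   ```java
   /* Block comment: ignored by the javadoc tool (typical for licenses). */
   /** Javadoc comment: attached to the package declaration below. */
   package org.example.demo; // hypothetical, for illustration only
   ```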






[GitHub] [hadoop] aajisaka closed pull request #3322: HDFS-6874. Add GETFILEBLOCKLOCATIONS operation to HttpFS.

2022-11-20 Thread GitBox


aajisaka closed pull request #3322: HDFS-6874. Add GETFILEBLOCKLOCATIONS 
operation to HttpFS.
URL: https://github.com/apache/hadoop/pull/3322





[GitHub] [hadoop] aajisaka commented on pull request #3322: HDFS-6874. Add GETFILEBLOCKLOCATIONS operation to HttpFS.

2022-11-20 Thread GitBox


aajisaka commented on PR #3322:
URL: https://github.com/apache/hadoop/pull/3322#issuecomment-1321579547

   Fixed by #4750 





[GitHub] [hadoop] slfan1989 commented on pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.

2022-11-20 Thread GitBox


slfan1989 commented on PR #5152:
URL: https://github.com/apache/hadoop/pull/5152#issuecomment-1321574600

   @ayushtkn @aajisaka Can you help to review the code again? Thank you very 
much! I have fixed the java doc problem of hadoop-yarn-api module.





[GitHub] [hadoop] K0K0V0K commented on a diff in pull request #5119: YARN-5607. Document TestContainerResourceUsage#waitForContainerCompletion

2022-11-20 Thread GitBox


K0K0V0K commented on code in PR #5119:
URL: https://github.com/apache/hadoop/pull/5119#discussion_r1027655301


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/CommonUtil.java:
##
@@ -0,0 +1,410 @@
+package org.apache.hadoop.yarn.server.resourcemanager;
+
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.ContainerStatus;
+import org.apache.hadoop.yarn.api.records.NodeState;
+import org.apache.hadoop.yarn.server.api.protocolrecords.NMContainerStatus;
+import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
+import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState;
+import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttempt;
+import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptState;
+import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
+import 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerState;
+import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplication;
+import org.junit.Assert;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+
+public class CommonUtil {
+static final Logger LOG = LoggerFactory.getLogger(MockRM.class);
+private static final int SECOND = 1000;
+private static final int TIMEOUT_MS_FOR_ATTEMPT = 40 * SECOND;
+private static final int TIMEOUT_MS_FOR_APP_REMOVED = 40 * SECOND;
+private static final int TIMEOUT_MS_FOR_CONTAINER_AND_NODE = 20 * SECOND;
+private static final int WAIT_MS_PER_LOOP = 10;
+
+/**
+ * Wait until an application has reached a specified state.
+ * The timeout is 80 seconds.
+ *
+ * @param appId  the id of an application
+ * @param finalState the application state waited
+ * @throws InterruptedException if interrupted while waiting for the state 
transition
+ */
+public static void waitForState(MockRM rm, ApplicationId appId, RMAppState 
finalState)
+throws InterruptedException {
+rm.drainEventsImplicitly();
+RMApp app = rm.getRMContext().getRMApps().get(appId);
+Assert.assertNotNull("app shouldn't be null", app);
+final int timeoutMsecs = 80 * SECOND;
+int timeWaiting = 0;
+while (!finalState.equals(app.getState())) {
+if (timeWaiting >= timeoutMsecs) {
+break;
+}
+
+LOG.info("App : " + appId + " State is : " + app.getState() +
+" Waiting for state : " + finalState);
+Thread.sleep(WAIT_MS_PER_LOOP);
+timeWaiting += WAIT_MS_PER_LOOP;
+}
+
+LOG.info("App State is : " + app.getState());
+Assert.assertEquals("App State is not correct (timeout).", finalState,
+app.getState());
+}
+
+/**
+ * Wait until an attempt has reached a specified state.
+ * The timeout is 40 seconds.
+ *
+ * @param attemptId  the id of an attempt
+ * @param finalState the attempt state waited
+ * @throws InterruptedException if interrupted while waiting for the state 
transition
+ */
+public static void waitForState(MockRM rm, ApplicationAttemptId attemptId,
+ RMAppAttemptState finalState) throws 
InterruptedException {
+waitForState(rm, attemptId, finalState, TIMEOUT_MS_FOR_ATTEMPT);
+}
+
+/**
+ * Wait until an attempt has reached a specified state.
+ * The timeout can be specified by the parameter.
+ *
+ * @param attemptIdthe id of an attempt
+ * @param finalState   the attempt state waited
+ * @param timeoutMsecs the length of timeout in milliseconds
+ * @throws InterruptedException if interrupted while waiting for the state 
transition
+ */
+private static void waitForState(MockRM rm, ApplicationAttemptId attemptId,
+ RMAppAttemptState finalState, int timeoutMsecs)
+throws InterruptedException {
+rm.start();
+rm.drainEventsImplicitly();
+RMApp app = 
rm.getRMContext().getRMApps().get(attemptId.getApplicationId());
+Assert.assertNotNull("app shouldn't be null", app);
+RMAppAttempt attempt = app.getRMAppAttempt(attemptId);
+waitForState(attempt, finalState, timeoutMsecs);
+}
+
+/**
+ * Wait until an attempt has reached a specified state.
+ * The timeout is 40 seconds.
+ 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.

2022-11-20 Thread GitBox


hadoop-yetus commented on PR #5152:
URL: https://github.com/apache/hadoop/pull/5152#issuecomment-1321559011

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 57s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/3/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-api in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api: The patch generated 0 new + 
133 unchanged - 27 fixed = 133 total (was 160)  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04
 with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 0 new + 0 
unchanged - 107 fixed = 0 total (was 107)  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08
 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 0 
unchanged - 146 fixed = 0 total (was 146)  |
   | +1 :green_heart: |  spotbugs  |   2m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  8s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  98m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5152 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux dfda7194caec 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 93c628570fa118cde0386e352a94a320ddde1bd2 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 

[GitHub] [hadoop] susheel-gupta commented on a diff in pull request #5119: YARN-5607. Document TestContainerResourceUsage#waitForContainerCompletion

2022-11-20 Thread GitBox


susheel-gupta commented on code in PR #5119:
URL: https://github.com/apache/hadoop/pull/5119#discussion_r1027642975


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/CommonUtil.java:
##
@@ -0,0 +1,410 @@
+package org.apache.hadoop.yarn.server.resourcemanager;
+
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.ContainerStatus;
+import org.apache.hadoop.yarn.api.records.NodeState;
+import org.apache.hadoop.yarn.server.api.protocolrecords.NMContainerStatus;
+import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
+import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState;
+import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttempt;
+import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptState;
+import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
+import 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerState;
+import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplication;
+import org.junit.Assert;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+
+public class CommonUtil {
+static final Logger LOG = LoggerFactory.getLogger(MockRM.class);
+private static final int SECOND = 1000;
+private static final int TIMEOUT_MS_FOR_ATTEMPT = 40 * SECOND;
+private static final int TIMEOUT_MS_FOR_APP_REMOVED = 40 * SECOND;
+private static final int TIMEOUT_MS_FOR_CONTAINER_AND_NODE = 20 * SECOND;
+private static final int WAIT_MS_PER_LOOP = 10;
+
+/**
+ * Wait until an application has reached a specified state.
+ * The timeout is 80 seconds.
+ *
+ * @param appId  the id of an application
+ * @param finalState the application state waited
+ * @throws InterruptedException if interrupted while waiting for the state 
transition
+ */
+public static void waitForState(MockRM rm, ApplicationId appId, RMAppState 
finalState)
+throws InterruptedException {
+rm.drainEventsImplicitly();
+RMApp app = rm.getRMContext().getRMApps().get(appId);
+Assert.assertNotNull("app shouldn't be null", app);
+final int timeoutMsecs = 80 * SECOND;
+int timeWaiting = 0;
+while (!finalState.equals(app.getState())) {
+if (timeWaiting >= timeoutMsecs) {
+break;
+}
+
+LOG.info("App : " + appId + " State is : " + app.getState() +
+" Waiting for state : " + finalState);
+Thread.sleep(WAIT_MS_PER_LOOP);
+timeWaiting += WAIT_MS_PER_LOOP;
+}
+
+LOG.info("App State is : " + app.getState());
+Assert.assertEquals("App State is not correct (timeout).", finalState,
+app.getState());
+}
+
+/**
+ * Wait until an attempt has reached a specified state.
+ * The timeout is 40 seconds.
+ *
+ * @param attemptId  the id of an attempt
+ * @param finalState the attempt state waited
+ * @throws InterruptedException if interrupted while waiting for the state 
transition
+ */
+public static void waitForState(MockRM rm, ApplicationAttemptId attemptId,
+ RMAppAttemptState finalState) throws 
InterruptedException {
+waitForState(rm, attemptId, finalState, TIMEOUT_MS_FOR_ATTEMPT);
+}
+
+/**
+ * Wait until an attempt has reached a specified state.
+ * The timeout can be specified by the parameter.
+ *
+ * @param attemptIdthe id of an attempt
+ * @param finalState   the attempt state waited
+ * @param timeoutMsecs the length of timeout in milliseconds
+ * @throws InterruptedException if interrupted while waiting for the state 
transition
+ */
+private static void waitForState(MockRM rm, ApplicationAttemptId attemptId,
+ RMAppAttemptState finalState, int timeoutMsecs)
+throws InterruptedException {
+rm.start();
+rm.drainEventsImplicitly();
+RMApp app = 
rm.getRMContext().getRMApps().get(attemptId.getApplicationId());
+Assert.assertNotNull("app shouldn't be null", app);
+RMAppAttempt attempt = app.getRMAppAttempt(attemptId);
+waitForState(attempt, finalState, timeoutMsecs);
+}
+
+/**
+ * Wait until an attempt has reached a specified state.
+ * The timeout is 40 seconds.

[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636482#comment-17636482
 ] 

ASF GitHub Bot commented on HADOOP-15327:
-

aajisaka closed pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to 
use Netty4
URL: https://github.com/apache/hadoop/pull/3259




> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> hades-results-20221108.zip, testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)






[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636481#comment-17636481
 ] 

ASF GitHub Bot commented on HADOOP-15327:
-

aajisaka commented on PR #3259:
URL: https://github.com/apache/hadoop/pull/3259#issuecomment-1321546109

   Closing this PR as it's already merged into trunk. Thank you.




> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> hades-results-20221108.zip, testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)






[GitHub] [hadoop] aajisaka closed pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2022-11-20 Thread GitBox


aajisaka closed pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to 
use Netty4
URL: https://github.com/apache/hadoop/pull/3259





[GitHub] [hadoop] aajisaka commented on pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2022-11-20 Thread GitBox


aajisaka commented on PR #3259:
URL: https://github.com/apache/hadoop/pull/3259#issuecomment-1321546109

   Closing this PR as it's already merged into trunk. Thank you.





[GitHub] [hadoop] aajisaka commented on pull request #5125: HDFS-16838. Fix NPE in testAddRplicaProcessorForAddingReplicaInMap

2022-11-20 Thread GitBox


aajisaka commented on PR #5125:
URL: https://github.com/apache/hadoop/pull/5125#issuecomment-1321544345

   I'm +1 on @xinglin's proposal. @ZanderXu could you update the PR?





[jira] [Updated] (HADOOP-18532) Update command usage in FileSystemShell.md

2022-11-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-18532:
---
Summary: Update command usage in FileSystemShell.md  (was: fix typos in 
FileSystemShell)

> Update command usage in FileSystemShell.md
> --
>
> Key: HADOOP-18532
> URL: https://issues.apache.org/jira/browse/HADOOP-18532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.3.4
>Reporter: guophilipse
>Assignee: guophilipse
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Fix typos in FileSystemShell.md






[jira] [Resolved] (HADOOP-18532) fix typos in FileSystemShell

2022-11-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18532.

Fix Version/s: 3.4.0
   3.3.9
   Resolution: Fixed

Committed to trunk and branch-3.3.

> fix typos in FileSystemShell
> 
>
> Key: HADOOP-18532
> URL: https://issues.apache.org/jira/browse/HADOOP-18532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.3.4
>Reporter: guophilipse
>Assignee: guophilipse
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Fix typos in FileSystemShell.md






[jira] [Commented] (HADOOP-18532) fix typos in FileSystemShell

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636469#comment-17636469
 ] 

ASF GitHub Bot commented on HADOOP-18532:
-

aajisaka commented on PR #5141:
URL: https://github.com/apache/hadoop/pull/5141#issuecomment-1321541024

   Thank you @GuoPhilipse for your contribution and thank you @slfan1989 
@ashutoshcipher for your reviews!




> fix typos in FileSystemShell
> 
>
> Key: HADOOP-18532
> URL: https://issues.apache.org/jira/browse/HADOOP-18532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.3.4
>Reporter: guophilipse
>Priority: Trivial
>  Labels: pull-request-available
>
> Fix typos in FileSystemShell.md






[jira] [Assigned] (HADOOP-18532) fix typos in FileSystemShell

2022-11-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-18532:
--

Assignee: guophilipse

> fix typos in FileSystemShell
> 
>
> Key: HADOOP-18532
> URL: https://issues.apache.org/jira/browse/HADOOP-18532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.3.4
>Reporter: guophilipse
>Assignee: guophilipse
>Priority: Trivial
>  Labels: pull-request-available
>
> Fix typos in FileSystemShell.md






[GitHub] [hadoop] aajisaka commented on pull request #5141: HADOOP-18532. Update command usage in FileSystemShell.md

2022-11-20 Thread GitBox


aajisaka commented on PR #5141:
URL: https://github.com/apache/hadoop/pull/5141#issuecomment-1321541024

   Thank you @GuoPhilipse for your contribution and thank you @slfan1989 
@ashutoshcipher for your reviews!





[jira] [Commented] (HADOOP-18532) fix typos in FileSystemShell

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636468#comment-17636468
 ] 

ASF GitHub Bot commented on HADOOP-18532:
-

aajisaka merged PR #5141:
URL: https://github.com/apache/hadoop/pull/5141




> fix typos in FileSystemShell
> 
>
> Key: HADOOP-18532
> URL: https://issues.apache.org/jira/browse/HADOOP-18532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.3.4
>Reporter: guophilipse
>Priority: Trivial
>  Labels: pull-request-available
>
> Fix typos in FileSystemShell.md






[GitHub] [hadoop] aajisaka merged pull request #5141: HADOOP-18532. Update command usage in FileSystemShell.md

2022-11-20 Thread GitBox


aajisaka merged PR #5141:
URL: https://github.com/apache/hadoop/pull/5141





[jira] [Updated] (HADOOP-18532) fix typos in FileSystemShell

2022-11-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-18532:
---
Issue Type: Bug  (was: Improvement)

> fix typos in FileSystemShell
> 
>
> Key: HADOOP-18532
> URL: https://issues.apache.org/jira/browse/HADOOP-18532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.3.4
>Reporter: guophilipse
>Priority: Trivial
>  Labels: pull-request-available
>
> Fix typos in FileSystemShell.md






[GitHub] [hadoop] slfan1989 opened a new pull request, #5153: YARN-11381. Fix hadoop-yarn-common module Java Doc Errors.

2022-11-20 Thread GitBox


slfan1989 opened a new pull request, #5153:
URL: https://github.com/apache/hadoop/pull/5153

   JIRA: YARN-11381. Fix hadoop-yarn-common module Java Doc Errors.





[GitHub] [hadoop] aajisaka commented on a diff in pull request #4797: YARN-11277. Trigger log-dir deletion by size for NonAggregatingLogHandler

2022-11-20 Thread GitBox


aajisaka commented on code in PR #4797:
URL: https://github.com/apache/hadoop/pull/4797#discussion_r1027616899


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/NonAggregatingLogHandler.java:
##
@@ -149,47 +156,52 @@ private void recover() throws IOException {
   @Override
   public void handle(LogHandlerEvent event) {
 switch (event.getType()) {
-  case APPLICATION_STARTED:
-LogHandlerAppStartedEvent appStartedEvent =
-(LogHandlerAppStartedEvent) event;
-this.appOwners.put(appStartedEvent.getApplicationId(),
-appStartedEvent.getUser());
-this.dispatcher.getEventHandler().handle(
-new ApplicationEvent(appStartedEvent.getApplicationId(),
-ApplicationEventType.APPLICATION_LOG_HANDLING_INITED));
+case APPLICATION_STARTED:
+  LogHandlerAppStartedEvent appStartedEvent =
+  (LogHandlerAppStartedEvent) event;
+  this.appOwners.put(appStartedEvent.getApplicationId(),
+  appStartedEvent.getUser());
+  this.dispatcher.getEventHandler().handle(
+  new ApplicationEvent(appStartedEvent.getApplicationId(),
+  ApplicationEventType.APPLICATION_LOG_HANDLING_INITED));
+  break;
+case CONTAINER_FINISHED:
+  // Ignore
+  break;
+case APPLICATION_FINISHED:
+  LogHandlerAppFinishedEvent appFinishedEvent =
+  (LogHandlerAppFinishedEvent) event;
+  ApplicationId appId = appFinishedEvent.getApplicationId();
+  String user = appOwners.remove(appId);
+  if (user == null) {
+LOG.error("Unable to locate user for " + appId);
+// send LOG_HANDLING_FAILED out
+NonAggregatingLogHandler.this.dispatcher.getEventHandler().handle(
+new ApplicationEvent(appId,
+ApplicationEventType.APPLICATION_LOG_HANDLING_FAILED));
 break;
-  case CONTAINER_FINISHED:
-// Ignore
-break;
-  case APPLICATION_FINISHED:
-LogHandlerAppFinishedEvent appFinishedEvent =
-(LogHandlerAppFinishedEvent) event;
-ApplicationId appId = appFinishedEvent.getApplicationId();
+  }
+  LogDeleterRunnable logDeleter = new LogDeleterRunnable(user, appId);
+  long appLogSize = calculateSizeOfAppLogs(user, appId);
+  long deletionTimestamp = System.currentTimeMillis()
+  + this.deleteDelaySeconds * 1000;
+  LogDeleterProto deleterProto = LogDeleterProto.newBuilder()
+  .setUser(user)
+  .setDeletionTime(deletionTimestamp)
+  .build();
+  try {
+stateStore.storeLogDeleter(appId, deleterProto);
+  } catch (IOException e) {
+LOG.error("Unable to record log deleter state", e);
+  }
+  // delete no delay if log size exceed deleteThreshold
+  if (enableTriggerDeleteBySize && appLogSize >= deleteThreshold) {

Review Comment:
   Hi @leixm thank you for your update.
   
   1. Can we calculate the size of the application log directory only if the 
feature is enabled?
   2. Can we use `sched.schedule(logDeleter, 0, TimeUnit.SECONDS);` to delete 
the files in background?
   
   The code will be like
   ```java
   try {
 boolean logDeleterStarted = false;
 if (enableTriggerDeleteBySize) {
   final long appLogSize = calculateSizeOfAppLogs(user, appId);
   if (appLogSize >= threshold) {
 ...
 sched.schedule(logDeleter, 0, TimeUnit.SECONDS);
 logDeleterStarted = true;
   }
 }
 if (!logDeleterStarted) {
   sched.schedule(logDeleter, this.deleteDelaySeconds, TimeUnit.SECONDS);
 }
   } catch (RejectedExecutionException e) {
 logDeleter.run();
   }
   ```






[jira] [Commented] (HADOOP-8728) Display (fs -text) shouldn't hard-depend on Writable serialized sequence files.

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636457#comment-17636457
 ] 

ASF GitHub Bot commented on HADOOP-8728:


ashutoshcipher commented on PR #5010:
URL: https://github.com/apache/hadoop/pull/5010#issuecomment-1321499379

   Thanks @aajisaka for reviewing and merging. 




> Display (fs -text) shouldn't hard-depend on Writable serialized sequence 
> files.
> ---
>
> Key: HADOOP-8728
> URL: https://issues.apache.org/jira/browse/HADOOP-8728
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: BB2015-05-TBR, pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-8728-002.patch, HADOOP-8728-003.patch, 
> HADOOP-8728.patch
>
>
> The Display command (fs -text) currently reads only Writable-based 
> SequenceFiles. This isn't necessary to do, and prevents reading 
> non-Writable-based serialization in SequenceFiles from the shell.






[jira] [Commented] (HADOOP-8728) Display (fs -text) shouldn't hard-depend on Writable serialized sequence files.

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636456#comment-17636456
 ] 

ASF GitHub Bot commented on HADOOP-8728:


aajisaka commented on PR #5010:
URL: https://github.com/apache/hadoop/pull/5010#issuecomment-1321498932

   Thank you @ashutoshcipher for your contribution.




> Display (fs -text) shouldn't hard-depend on Writable serialized sequence 
> files.
> ---
>
> Key: HADOOP-8728
> URL: https://issues.apache.org/jira/browse/HADOOP-8728
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: BB2015-05-TBR, pull-request-available
> Attachments: HADOOP-8728-002.patch, HADOOP-8728-003.patch, 
> HADOOP-8728.patch
>
>
> The Display command (fs -text) currently reads only Writable-based 
> SequenceFiles. This isn't necessary to do, and prevents reading 
> non-Writable-based serialization in SequenceFiles from the shell.






[jira] [Updated] (HADOOP-8728) Display (fs -text) shouldn't hard-depend on Writable serialized sequence files.

2022-11-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-8728:
--
Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Display (fs -text) shouldn't hard-depend on Writable serialized sequence 
> files.
> ---
>
> Key: HADOOP-8728
> URL: https://issues.apache.org/jira/browse/HADOOP-8728
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: BB2015-05-TBR, pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-8728-002.patch, HADOOP-8728-003.patch, 
> HADOOP-8728.patch
>
>
> The Display command (fs -text) currently reads only Writable-based 
> SequenceFiles. This isn't necessary to do, and prevents reading 
> non-Writable-based serialization in SequenceFiles from the shell.






[GitHub] [hadoop] ashutoshcipher commented on pull request #5010: HADOOP-8728. Display (fs -text) shouldn't hard-depend on Writable serialized sequence files.

2022-11-20 Thread GitBox


ashutoshcipher commented on PR #5010:
URL: https://github.com/apache/hadoop/pull/5010#issuecomment-1321499379

   Thanks @aajisaka for reviewing and merging. 





[GitHub] [hadoop] aajisaka commented on pull request #5010: HADOOP-8728. Display (fs -text) shouldn't hard-depend on Writable serialized sequence files.

2022-11-20 Thread GitBox


aajisaka commented on PR #5010:
URL: https://github.com/apache/hadoop/pull/5010#issuecomment-1321498932

   Thank you @ashutoshcipher for your contribution.





[jira] [Commented] (HADOOP-8728) Display (fs -text) shouldn't hard-depend on Writable serialized sequence files.

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636455#comment-17636455
 ] 

ASF GitHub Bot commented on HADOOP-8728:


aajisaka merged PR #5010:
URL: https://github.com/apache/hadoop/pull/5010




> Display (fs -text) shouldn't hard-depend on Writable serialized sequence 
> files.
> ---
>
> Key: HADOOP-8728
> URL: https://issues.apache.org/jira/browse/HADOOP-8728
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: BB2015-05-TBR, pull-request-available
> Attachments: HADOOP-8728-002.patch, HADOOP-8728-003.patch, 
> HADOOP-8728.patch
>
>
> The Display command (fs -text) currently reads only Writable-based 
> SequenceFiles. This isn't necessary to do, and prevents reading 
> non-Writable-based serialization in SequenceFiles from the shell.






[GitHub] [hadoop] aajisaka merged pull request #5010: HADOOP-8728. Display (fs -text) shouldn't hard-depend on Writable serialized sequence files.

2022-11-20 Thread GitBox


aajisaka merged PR #5010:
URL: https://github.com/apache/hadoop/pull/5010





[GitHub] [hadoop] ashutoshcipher commented on pull request #4717: YARN-6946. Upgrade JUnit from 4 to 5 in hadoop-yarn-common

2022-11-20 Thread GitBox


ashutoshcipher commented on PR #4717:
URL: https://github.com/apache/hadoop/pull/4717#issuecomment-1321498614

   Thanks @aajisaka for reviewing and merging.





[jira] [Comment Edited] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636453#comment-17636453
 ] 

fanshilun edited comment on HADOOP-18534 at 11/21/22 5:50 AM:
--

Usually an RPC request returns in milliseconds, so I think the impact of GC
can be ignored. In a large HDFS system the audit log will record 100-200
million requests, and I have not seen any abnormalities in RPC client GC.

The RPC service is the core service of the whole system. We need to clarify
the benefits before modifying it; otherwise it will bring great risk.


was (Author: slfan1989):
Usually an RPC request returns in milliseconds, so I think the impact of GC
can be ignored.
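
One way to check whether cached direct buffers are actually accumulating
before changing the RPC code is the standard BufferPoolMXBean; a minimal
sketch (an illustration, not code from this thread):
{code:java}
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectPoolProbe {
  public static void main(String[] args) {
    for (BufferPoolMXBean pool :
        ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
      // The "direct" pool counts all DirectByteBuffers, including the
      // per-thread temporary buffers cached by sun.nio.ch.Util.
      System.out.printf("%s: count=%d, used=%d bytes, capacity=%d bytes%n",
          pool.getName(), pool.getCount(), pool.getMemoryUsed(),
          pool.getTotalCapacity());
    }
  }
}{code}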

> Propose a mechanism to free the direct memory occupied by RPC Connections
> -
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
> Attachments: 未命名文件 (1).png
>
>
>   In the RPC Client, a thread called RpcRequestSender is responsible for
> writing connection requests to the socket. Every time a request is sent, a
> direct buffer is allocated in sun.nio.ch.IOUtil#write() and cached.
>   If the Connection and RpcRequestSender are promoted to the old generation,
> they will not be reclaimed until a full GC runs, so the DirectByteBuffers
> cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied
> by these DirectByteBuffers grows too large, the JVM process may be killed
> before it ever gets a chance to run a full GC.
>   Unfortunately, there is no easy way to free these DirectByteBuffers.
> Perhaps we can free them manually with the following method when the
> Connection is closed.
> {code:java}
> private void freeDirectBuffer() {
>   try {
>     ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
>     int i = 0;
>     while (buffer.capacity() != 1 && i < 1024) {
>       ((DirectBuffer) buffer).cleaner().clean();
>       buffer = Util.getTemporaryDirectBuffer(1);
>       i++;
>     }
>     ((DirectBuffer) buffer).cleaner().clean();
>   } catch (Throwable t) {
>     LOG.error("free direct memory error, connectionId: " + remoteId, t);
>   }
> }{code}






[jira] [Commented] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636453#comment-17636453
 ] 

fanshilun commented on HADOOP-18534:


Usually an RPC request returns in milliseconds, so I think the impact of GC
can be ignored.

> Propose a mechanism to free the direct memory occupied by RPC Connections
> -
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
> Attachments: 未命名文件 (1).png
>
>
>   In the RPC Client, a thread called RpcRequestSender is responsible for
> writing connection requests to the socket. Every time a request is sent, a
> direct buffer is allocated in sun.nio.ch.IOUtil#write() and cached.
>   If the Connection and RpcRequestSender are promoted to the old generation,
> they will not be reclaimed until a full GC runs, so the DirectByteBuffers
> cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied
> by these DirectByteBuffers grows too large, the JVM process may be killed
> before it ever gets a chance to run a full GC.
>   Unfortunately, there is no easy way to free these DirectByteBuffers.
> Perhaps we can free them manually with the following method when the
> Connection is closed.
> {code:java}
> private void freeDirectBuffer() {
>   try {
> ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
> int i = 0;
> while (buffer.capacity() != 1 && i < 1024) {
>   ((DirectBuffer) buffer).cleaner().clean();
>   buffer = Util.getTemporaryDirectBuffer(1);
>   i++;
> }
> ((DirectBuffer) buffer).cleaner().clean();
>   } catch (Throwable t) {
> LOG.error("free direct memory error, connectionId: " + remoteId, t);
>   }
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #4717: YARN-6946. Upgrade JUnit from 4 to 5 in hadoop-yarn-common

2022-11-20 Thread GitBox


aajisaka commented on PR #4717:
URL: https://github.com/apache/hadoop/pull/4717#issuecomment-1321492007

   Thank you @ashutoshcipher for your contribution and thank you @slfan1989 for 
your review! Also, thank you for cleaning up a bunch of checkstyle issues :)
   
   > hadoop-yarn-project: The patch generated 12 new + 196 unchanged - 171 
fixed = 208 total (was 367)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka merged pull request #4717: YARN-6946. Upgrade JUnit from 4 to 5 in hadoop-yarn-common

2022-11-20 Thread GitBox


aajisaka merged PR #4717:
URL: https://github.com/apache/hadoop/pull/4717


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.

2022-11-20 Thread GitBox


slfan1989 commented on PR #5152:
URL: https://github.com/apache/hadoop/pull/5152#issuecomment-1321491555

   > The following error remains: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/2/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt
   > 
   > ```
   > [ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-5152/ubuntu-focal/src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/factory/providers/package-info.java:18:
 error: unknown tag: InterfaceAudience.Private
   > [ERROR] @InterfaceAudience.Private
   > [ERROR] ^
   > ```
   > 
   > Would you fix it?
   
   Thank you very much for your help reviewing the code; I will fix this 
problem.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread xinqiu.hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinqiu.hu updated HADOOP-18534:
---
Description: 
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.

  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

  Unfortunately, there is no easy way to free these DirectByteBuffers. 
Perhaps we can free them manually with the following method when the 
Connection is closed.
{code:java}
private void freeDirectBuffer() {
  try {
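// Drain the calling thread's direct-buffer cache in sun.nio.ch.Util:
// getTemporaryDirectBuffer(1) hands back a cached buffer when one exists,
// and such a cached buffer will (in practice) have a capacity other than 1.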
ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
int i = 0;
while (buffer.capacity() != 1 && i < 1024) {
  ((DirectBuffer) buffer).cleaner().clean();
  buffer = Util.getTemporaryDirectBuffer(1);
  i++;
}
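// Clean the final buffer as well (freshly allocated once the cache is empty).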
((DirectBuffer) buffer).cleaner().clean();
  } catch (Throwable t) {
LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}{code}

  was:
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

  Unfortunately, there is no easy way to free these DirectByteBuffers. 
Perhaps we can free them manually with the following method when the 
Connection is closed.

 
{code:java}
private void freeDirectBuffer() {
  try {
ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
int i = 0;
while (buffer.capacity() != 1 && i < 1024) {
  ((DirectBuffer) buffer).cleaner().clean();
  buffer = Util.getTemporaryDirectBuffer(1);
  i++;
}
((DirectBuffer) buffer).cleaner().clean();
  } catch (Throwable t) {
LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}{code}


> Propose a mechanism to free the direct memory occupied by RPC Connections
> -
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
> Attachments: 未命名文件 (1).png
>
>
>   In the RPC Client, a thread called RpcRequestSender is responsible for 
> writing the connection request to the socket. Every time a request is sent, 
> a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and 
> cached.
>   If the Connection and RpcRequestSender are promoted to the old 
> generation, they will not be reclaimed until a full GC runs, so the 
> DirectByteBuffers cached in sun.nio.ch.Util are not reclaimed either. When 
> the memory occupied by these DirectByteBuffers grows too large, the JVM 
> process may be killed before it has a chance to run a full GC.
>   Unfortunately, there is no easy way to free these DirectByteBuffers. 
> Perhaps we can free them manually with the following method when the 
> Connection is closed.
> {code:java}
> private void freeDirectBuffer() {
>   try {
> ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
> int i = 0;
> while (buffer.capacity() != 1 && i < 1024) {
>   ((DirectBuffer) buffer).cleaner().clean();
>   buffer = Util.getTemporaryDirectBuffer(1);
>   i++;
> }
> ((DirectBuffer) buffer).cleaner().clean();
>   } catch (Throwable t) {
> LOG.error("free direct memory error, connectionId: " + remoteId, t);
>   }
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread xinqiu.hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinqiu.hu updated HADOOP-18534:
---
Description: 
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

  Unfortunately, there is no easy way to free these DirectByteBuffers. 
Perhaps we can free them manually with the following method when the 
Connection is closed.

 
{code:java}
private void freeDirectBuffer() {
  try {
ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
int i = 0;
while (buffer.capacity() != 1 && i < 1024) {
  ((DirectBuffer) buffer).cleaner().clean();
  buffer = Util.getTemporaryDirectBuffer(1);
  i++;
}
((DirectBuffer) buffer).cleaner().clean();
  } catch (Throwable t) {
LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}{code}

  was:
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png|width=494,height=267!

  Unfortunately, there is no easy way to free these DirectByteBuffers. 
Perhaps we can free them manually with the following method when the 
Connection is closed.

 
{code}
private void freeDirectBuffer() {
  try {
ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
int i = 0;
while (buffer.capacity() != 1 && i < 1024) {
  ((DirectBuffer) buffer).cleaner().clean();
  buffer = Util.getTemporaryDirectBuffer(1);
  i++;
}
((DirectBuffer) buffer).cleaner().clean();
  } catch (Throwable t) {
LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}{code}


> Propose a mechanism to free the direct memory occupied by RPC Connections
> -
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
> Attachments: 未命名文件 (1).png
>
>
>   In the RPC Client, a thread called RpcRequestSender is responsible for 
> writing the connection request to the socket. Every time a request is sent, 
> a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and 
> cached.
>   If the Connection and RpcRequestSender are promoted to the old 
> generation, they will not be reclaimed until a full GC runs, so the 
> DirectByteBuffers cached in sun.nio.ch.Util are not reclaimed either. When 
> the memory occupied by these DirectByteBuffers grows too large, the JVM 
> process may be killed before it has a chance to run a full GC.
>   Unfortunately, there is no easy way to free these DirectByteBuffers. 
> Perhaps we can free them manually with the following method when the 
> Connection is closed.
>  
> {code:java}
> private void freeDirectBuffer() {
>   try {
> ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
> int i = 0;
> while (buffer.capacity() != 1 && i < 1024) {
>   ((DirectBuffer) buffer).cleaner().clean();
>   buffer = Util.getTemporaryDirectBuffer(1);
>   i++;
> }
> ((DirectBuffer) buffer).cleaner().clean();
>   } catch (Throwable t) {
> LOG.error("free direct memory error, connectionId: " + remoteId, t);
>   }
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ashutoshcipher commented on pull request #5029: MAPREDUCE-7422. Upgrade Junit 4 to 5 in hadoop-mapreduce-examples

2022-11-20 Thread GitBox


ashutoshcipher commented on PR #5029:
URL: https://github.com/apache/hadoop/pull/5029#issuecomment-1321490540

   Thanks for reviewing and merging @aajisaka 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #5029: MAPREDUCE-7422. Upgrade Junit 4 to 5 in hadoop-mapreduce-examples

2022-11-20 Thread GitBox


aajisaka commented on PR #5029:
URL: https://github.com/apache/hadoop/pull/5029#issuecomment-1321490052

   Thank you @ashutoshcipher 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka merged pull request #5029: MAPREDUCE-7422. Upgrade Junit 4 to 5 in hadoop-mapreduce-examples

2022-11-20 Thread GitBox


aajisaka merged PR #5029:
URL: https://github.com/apache/hadoop/pull/5029


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread xinqiu.hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinqiu.hu updated HADOOP-18534:
---
Description: 
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png|width=494,height=267!

  Unfortunately, there is no easy way to free these DirectByteBuffers. 
Perhaps we can free them manually with the following method when the 
Connection is closed.

 
{code}
private void freeDirectBuffer() {
  try {
ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
int i = 0;
while (buffer.capacity() != 1 && i < 1024) {
  ((DirectBuffer) buffer).cleaner().clean();
  buffer = Util.getTemporaryDirectBuffer(1);
  i++;
}
((DirectBuffer) buffer).cleaner().clean();
  } catch (Throwable t) {
LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}{code}

  was:
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png|width=494,height=267!

  Unfortunately, there is no easy way to free these DirectByteBuffers. 
Perhaps we can free them manually with the following method when the 
Connection is closed.

 
{code:keyword}
private void freeDirectBuffer() {
  try {
ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
int i = 0;
while (buffer.capacity() != 1 && i < 1024) {
  ((DirectBuffer) buffer).cleaner().clean();
  buffer = Util.getTemporaryDirectBuffer(1);
  i++;
}
((DirectBuffer) buffer).cleaner().clean();
  } catch (Throwable t) {
LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}{code}


> Propose a mechanism to free the direct memory occupied by RPC Connections
> -
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
> Attachments: 未命名文件 (1).png
>
>
>   In the RPC Client, a thread called RpcRequestSender is responsible for 
> writing the connection request to the socket. Every time a request is sent, 
> a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and 
> cached.
>   If the Connection and RpcRequestSender are promoted to the old 
> generation, they will not be reclaimed until a full GC runs, so the 
> DirectByteBuffers cached in sun.nio.ch.Util are not reclaimed either. When 
> the memory occupied by these DirectByteBuffers grows too large, the JVM 
> process may be killed before it has a chance to run a full GC.
> !未命名文件 (1).png|width=494,height=267!
>   Unfortunately, there is no easy way to free these DirectByteBuffers. 
> Perhaps we can free them manually with the following method when the 
> Connection is closed.
>  
> {code}
> private void freeDirectBuffer() {
>   try {
> ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
> int i = 0;
> while (buffer.capacity() != 1 && i < 1024) {
>   ((DirectBuffer) buffer).cleaner().clean();
>   buffer = Util.getTemporaryDirectBuffer(1);
>   i++;
> }
> ((DirectBuffer) buffer).cleaner().clean();
>   } catch (Throwable t) {
> LOG.error("free direct memory error, connectionId: " + remoteId, t);
>   }
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread xinqiu.hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinqiu.hu updated HADOOP-18534:
---
Description: 
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png|width=494,height=267!

  Unfortunately, there is no easy way to free these DirectByteBuffers. 
Perhaps we can free them manually with the following method when the 
Connection is closed.

 
{code:keyword}
private void freeDirectBuffer() {
  try {
ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
int i = 0;
while (buffer.capacity() != 1 && i < 1024) {
  ((DirectBuffer) buffer).cleaner().clean();
  buffer = Util.getTemporaryDirectBuffer(1);
  i++;
}
((DirectBuffer) buffer).cleaner().clean();
  } catch (Throwable t) {
LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}{code}

  was:
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png|width=494,height=267!

  Unfortunately, there is no easy way to free these DirectByteBuffers. 
Perhaps we can free them manually with the following method when the 
Connection is closed.

```java

private void freeDirectBuffer() {
  try {
    ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
    int i = 0;
    while (buffer.capacity() != 1 && i < 1024) {
      ((DirectBuffer) buffer).cleaner().clean();
      buffer = Util.getTemporaryDirectBuffer(1);
      i++;
    }
    ((DirectBuffer) buffer).cleaner().clean();
  } catch (Throwable t) {
    LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}

```


> Propose a mechanism to free the direct memory occupied by RPC Connections
> -
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
> Attachments: 未命名文件 (1).png
>
>
>   In the RPC Client, a thread called RpcRequestSender is responsible for 
> writing the connection request to the socket. Every time a request is sent, 
> a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and 
> cached.
>   If the Connection and RpcRequestSender are promoted to the old 
> generation, they will not be reclaimed until a full GC runs, so the 
> DirectByteBuffers cached in sun.nio.ch.Util are not reclaimed either. When 
> the memory occupied by these DirectByteBuffers grows too large, the JVM 
> process may be killed before it has a chance to run a full GC.
> !未命名文件 (1).png|width=494,height=267!
>   Unfortunately, there is no easy way to free these DirectByteBuffers. 
> Perhaps we can free them manually with the following method when the 
> Connection is closed.
>  
> {code:keyword}
> private void freeDirectBuffer() {
>   try {
> ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
> int i = 0;
> while (buffer.capacity() != 1 && i < 1024) {
>   ((DirectBuffer) buffer).cleaner().clean();
>   buffer = Util.getTemporaryDirectBuffer(1);
>   i++;
> }
> ((DirectBuffer) buffer).cleaner().clean();
>   } catch (Throwable t) {
> LOG.error("free direct memory error, connectionId: " + remoteId, t);
>   }
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread xinqiu.hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinqiu.hu updated HADOOP-18534:
---
Description: 
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png|width=494,height=267!

  Unfortunately, there is no easy way to free these DirectByteBuffers. 
Perhaps we can free them manually with the following method when the 
Connection is closed.

```java

private void freeDirectBuffer() {
  try {
    ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
    int i = 0;
    while (buffer.capacity() != 1 && i < 1024) {
      ((DirectBuffer) buffer).cleaner().clean();
      buffer = Util.getTemporaryDirectBuffer(1);
      i++;
    }
    ((DirectBuffer) buffer).cleaner().clean();
  } catch (Throwable t) {
    LOG.error("free direct memory error, connectionId: " + remoteId, t);
  }
}

```

  was:
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png|width=494,height=267!

  Unfortunately, there is no easy way to free these DirectByteBuffers. 
Perhaps we can free them manually with the following method when the 
Connection is closed.


> Propose a mechanism to free the direct memory occupied by RPC Connections
> -
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
> Attachments: 未命名文件 (1).png
>
>
>   In the RPC Client, a thread called RpcRequestSender is responsible for 
> writing the connection request to the socket. Every time a request is sent, 
> a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and 
> cached.
>   If the Connection and RpcRequestSender are promoted to the old 
> generation, they will not be reclaimed until a full GC runs, so the 
> DirectByteBuffers cached in sun.nio.ch.Util are not reclaimed either. When 
> the memory occupied by these DirectByteBuffers grows too large, the JVM 
> process may be killed before it has a chance to run a full GC.
> !未命名文件 (1).png|width=494,height=267!
>   Unfortunately, there is no easy way to free these DirectByteBuffers. 
> Perhaps we can free them manually with the following method when the 
> Connection is closed.
> ```java
> private void freeDirectBuffer() {
>   try {
>     ByteBuffer buffer = Util.getTemporaryDirectBuffer(1);
>     int i = 0;
>     while (buffer.capacity() != 1 && i < 1024) {
>       ((DirectBuffer) buffer).cleaner().clean();
>       buffer = Util.getTemporaryDirectBuffer(1);
>       i++;
>     }
>     ((DirectBuffer) buffer).cleaner().clean();
>   } catch (Throwable t) {
>     LOG.error("free direct memory error, connectionId: " + remoteId, t);
>   }
> }
> ```



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread xinqiu.hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinqiu.hu updated HADOOP-18534:
---
Description: 
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png|width=494,height=267!

  Unfortunately, there is no easy way to free these DirectByteBuffers. 
Perhaps we can free them manually with the following method when the 
Connection is closed.

  was:
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png|width=494,height=267!


> Propose a mechanism to free the direct memory occupied by RPC Connections
> -
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
> Attachments: 未命名文件 (1).png
>
>
>   In the RPC Client, a thread called RpcRequestSender is responsible for 
> writing the connection request to the socket. Every time a request is sent, 
> a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and 
> cached.
>   If the Connection and RpcRequestSender are promoted to the old 
> generation, they will not be reclaimed until a full GC runs, so the 
> DirectByteBuffers cached in sun.nio.ch.Util are not reclaimed either. When 
> the memory occupied by these DirectByteBuffers grows too large, the JVM 
> process may be killed before it has a chance to run a full GC.
> !未命名文件 (1).png|width=494,height=267!
>   Unfortunately, there is no easy way to free these DirectByteBuffers. 
> Perhaps we can free them manually with the following method when the 
> Connection is closed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.

2022-11-20 Thread GitBox


aajisaka commented on PR #5152:
URL: https://github.com/apache/hadoop/pull/5152#issuecomment-1321483958

   The following error remains: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/2/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt
   ```
   [ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-5152/ubuntu-focal/src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/factory/providers/package-info.java:18:
 error: unknown tag: InterfaceAudience.Private
   [ERROR] @InterfaceAudience.Private
   [ERROR] ^
   ```
   Would you fix it?
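   
   For reference, a sketch of the usual shape of the fix (an assumption about 
the file's layout, not the actual Hadoop diff): javadoc parses a comment line 
that begins with `@InterfaceAudience.Private` as an unknown block tag, so the 
annotation should sit on the package declaration itself, outside the javadoc 
comment:
   
   ```java
   // package-info.java (illustrative sketch, not the real Hadoop file).
   // Keeping the annotation outside the javadoc comment stops javadoc from
   // parsing "@InterfaceAudience.Private" as an unknown block tag.
   /**
    * Factory providers for YARN records.
    */
   @InterfaceAudience.Private
   package org.apache.hadoop.yarn.factory.providers;
   
   import org.apache.hadoop.classification.InterfaceAudience;
   ```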


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ashutoshcipher commented on pull request #4717: YARN-6946. Upgrade JUnit from 4 to 5 in hadoop-yarn-common

2022-11-20 Thread GitBox


ashutoshcipher commented on PR #4717:
URL: https://github.com/apache/hadoop/pull/4717#issuecomment-1321483949

   > Thank you @ashutoshcipher. The test result looks good to me. Could you 
revert the empty line change?
   
   @aajisaka - Done in my last commit.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread xinqiu.hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinqiu.hu updated HADOOP-18534:
---
Description: 
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png|width=494,height=267!

  was:
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png|width=409,height=221!


> Propose a mechanism to free the direct memory occupied by RPC Connections
> -
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
> Attachments: 未命名文件 (1).png
>
>
>   In the RPC Client, a thread called RpcRequestSender is responsible for 
> writing the connection request to the socket. Every time a request is sent, 
> a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and 
> cached.
>   If the Connection and RpcRequestSender are promoted to the old 
> generation, they will not be reclaimed until a full GC runs, so the 
> DirectByteBuffers cached in sun.nio.ch.Util are not reclaimed either. When 
> the memory occupied by these DirectByteBuffers grows too large, the JVM 
> process may be killed before it has a chance to run a full GC.
> !未命名文件 (1).png|width=494,height=267!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread xinqiu.hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinqiu.hu updated HADOOP-18534:
---
Description: 
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png|width=409,height=221!

  was:
  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png!


> Propose a mechanism to free the direct memory occupied by RPC Connections
> -
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
> Attachments: 未命名文件 (1).png
>
>
>   In the RPC Client, a thread called RpcRequestSender is responsible for 
> writing the connection request to the socket. Every time a request is sent, 
> a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and 
> cached.
>   If the Connection and RpcRequestSender are promoted to the old 
> generation, they will not be reclaimed until a full GC runs, so the 
> DirectByteBuffers cached in sun.nio.ch.Util are not reclaimed either. When 
> the memory occupied by these DirectByteBuffers grows too large, the JVM 
> process may be killed before it has a chance to run a full GC.
> !未命名文件 (1).png|width=409,height=221!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections

2022-11-20 Thread xinqiu.hu (Jira)
xinqiu.hu created HADOOP-18534:
--

 Summary: Propose a mechanism to free the direct memory occupied by 
RPC Connections
 Key: HADOOP-18534
 URL: https://issues.apache.org/jira/browse/HADOOP-18534
 Project: Hadoop Common
  Issue Type: Improvement
  Components: rpc-server
Reporter: xinqiu.hu
 Attachments: 未命名文件 (1).png

  In the RPC Client, a thread called RpcRequestSender is responsible for 
writing the connection request to the socket. Every time a request is sent, 
a piece of direct memory is allocated in sun.nio.ch.IOUtil#write() and cached.
  If the Connection and RpcRequestSender are promoted to the old generation, 
they will not be reclaimed until a full GC runs, so the DirectByteBuffers 
cached in sun.nio.ch.Util are not reclaimed either. When the memory occupied 
by these DirectByteBuffers grows too large, the JVM process may be killed 
before it has a chance to run a full GC.

!未命名文件 (1).png!
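
  One way to watch this growth from inside the process is to poll the JDK's 
"direct" BufferPoolMXBean, which counts all live DirectByteBuffers, including 
the ones cached in sun.nio.ch.Util. A minimal sketch (the class name and the 
polling interval are illustrative, not part of this proposal):
{code:java}
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectPoolWatcher {
  public static void main(String[] args) throws InterruptedException {
    while (true) {
      for (BufferPoolMXBean pool :
          ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
        // The "direct" pool reports the count and bytes of all live
        // DirectByteBuffers in the JVM.
        if ("direct".equals(pool.getName())) {
          System.out.printf("direct buffers: count=%d, used=%d bytes%n",
              pool.getCount(), pool.getMemoryUsed());
        }
      }
      Thread.sleep(5000);
    }
  }
}
{code}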



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ashutoshcipher commented on pull request #4717: YARN-6946. Upgrade JUnit from 4 to 5 in hadoop-yarn-common

2022-11-20 Thread GitBox


ashutoshcipher commented on PR #4717:
URL: https://github.com/apache/hadoop/pull/4717#issuecomment-1321480128

   Thanks @aajisaka, I will revert that.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] riyakhdl commented on pull request #5113: YARN-6971 Clean up different ways to create resources

2022-11-20 Thread GitBox


riyakhdl commented on PR #5113:
URL: https://github.com/apache/hadoop/pull/5113#issuecomment-1321479727

   @ashutoshcipher Thanks for the review.
   1. I have done a small performance test to prove it, hence I am attaching 
the logs.
   
[code_Benchmarking.txt](https://github.com/apache/hadoop/files/10053607/code_Benchmarking.txt)
   
[log_Benchmarking.txt](https://github.com/apache/hadoop/files/10053608/log_Benchmarking.txt)
   
   2. Yes, I have made the changes. 
   
   
   
   > Thanks @riyakhdl for your contribution. I have a couple of comments.
   > 
   > 1. Possible to add any performance stats due to this?
   > 2. I can see you haven't replaced `BuilderUtils.newResource` in all the 
places in YARN module. Can you do that?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #4717: YARN-6946. Upgrade JUnit from 4 to 5 in hadoop-yarn-common

2022-11-20 Thread GitBox


aajisaka commented on PR #4717:
URL: https://github.com/apache/hadoop/pull/4717#issuecomment-1321479607

   Thank you @ashutoshcipher. The test result looks good to me. Could you 
revert the empty line change?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.

2022-11-20 Thread GitBox


hadoop-yetus commented on PR #5152:
URL: https://github.com/apache/hadoop/pull/5152#issuecomment-1321345447

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 58s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/2/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-api in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api: The patch generated 0 new + 
133 unchanged - 26 fixed = 133 total (was 159)  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 35s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/2/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-api in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08
 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 0 
unchanged - 146 fixed = 0 total (was 146)  |
   | +1 :green_heart: |  spotbugs  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  2s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  98m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5152 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 58baf41df37e 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f1a136b5ef1cdcad3760c50bdbba3802fb4e63af |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 

[GitHub] [hadoop] tomscut commented on pull request #4209: HDFS-16550. Improper cache-size for journal node may cause cluster crash

2022-11-20 Thread GitBox


tomscut commented on PR #4209:
URL: https://github.com/apache/hadoop/pull/4209#issuecomment-1321325256

   Hi @tasanuma @ayushtkn, could you also please take a look? Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on a diff in pull request #5129: HDFS-16840. Enhance the usage description about oiv in HDFSCommands.md and OfflineImageViewerPB

2022-11-20 Thread GitBox


tomscut commented on code in PR #5129:
URL: https://github.com/apache/hadoop/pull/5129#discussion_r1027393176


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java:
##
@@ -101,8 +103,24 @@ public class OfflineImageViewerPB {
   + "   against image file. (XML|FileDistribution|\n"
   + "   ReverseXML|Web|Delimited|DetectCorruption)\n"
   + "   The default is Web.\n"
+  + "-addr Specify the address(host:port) to listen.\n"
+  + "   (localhost:5978 by default). This option is\n"
+  + "   used with Web processor.\n"
+  + "-maxSize  Specify the range [0, maxSize] of file sizes\n"
+  + "   to be analyzed in bytes (128GB by default).\n"
+  + "   This option is used with FileDistribution 
processor.\n"
+  + "-step Specify the granularity of the distribution in 
bytes\n"
+  + "   (2MB by default). This option is used\n"
+  + "   with FileDistribution processor.\n"
+  + "-formatFormat the output result in a human-readable 
fashion rather\n"
+  + "   than a number of bytes. (false by default).\n"
+  + "   This option is used with FileDistribution 
processor.\n"
   + "-delimiterDelimiting string to use with Delimited or \n"
   + "   DetectCorruption processor. \n"
+  + "-spWhether to print Storage policy (default is 
false). \n"
+  + "   Is used by Delimited processor only. \n"
+  + "-ecWhether to print Erasure coding policy 
(default is false). \n"

Review Comment:
   nit: `Storage policy` -> `storage policy`,  `Erasure coding policy` -> 
`erasure coding policy`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on a diff in pull request #5129: HDFS-16840. Enhance the usage description about oiv in HDFSCommands.md and OfflineImageViewerPB

2022-11-20 Thread GitBox


tomscut commented on code in PR #5129:
URL: https://github.com/apache/hadoop/pull/5129#discussion_r1027391813


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java:
##
@@ -81,6 +81,8 @@ public class OfflineImageViewerPB {
   + "changed via the -delimiter argument.\n"
   + "-sp print storage policy, used by delimiter only.\n"
   + "-ec print erasure coding policy, used by delimiter only.\n"
+  + "-m,--multiThread defines multiThread to process sub-sections, \n"

Review Comment:
   Are there some duplicated options here?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on a diff in pull request #5129: HDFS-16840. Enhance the usage description about oiv in HDFSCommands.md and OfflineImageViewerPB

2022-11-20 Thread GitBox


tomscut commented on code in PR #5129:
URL: https://github.com/apache/hadoop/pull/5129#discussion_r1027391813


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java:
##
@@ -81,6 +81,8 @@ public class OfflineImageViewerPB {
   + "changed via the -delimiter argument.\n"
   + "-sp print storage policy, used by delimiter only.\n"
   + "-ec print erasure coding policy, used by delimiter only.\n"
+  + "-m,--multiThread defines multiThread to process sub-sections, \n"

Review Comment:
   Duplicated option `-m`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636371#comment-17636371
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

slfan1989 commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321285981

   @huxinqiu I will deal with the javadoc problem; we can focus on the RPC 
code logic.




> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the request 
> to be sent, and then writes it to the socket buffer.
>   But if the RPC engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size and send the request directly to the socket buffer, saving one 
> memory copy and avoiding the allocation of a 1024-byte ResponseBuffer each 
> time a request is sent.
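
A rough sketch of the pre-calculation idea (assuming protobuf messages and a 
4-byte length prefix followed by two varint-delimited messages; FrameWriter, 
writeFrame, and the exact frame layout are illustrative, not the actual 
patch):
{code:java}
import java.io.DataOutputStream;
import java.io.IOException;

import com.google.protobuf.CodedOutputStream;
import com.google.protobuf.Message;

public final class FrameWriter {
  // Compute the total frame length up front from getSerializedSize()
  // instead of materializing header and request in an intermediate
  // ByteArrayOutputStream.
  static void writeFrame(DataOutputStream out, Message header,
      Message request) throws IOException {
    int headerLen = header.getSerializedSize();
    int requestLen = request.getSerializedSize();
    int total =
        CodedOutputStream.computeUInt32SizeNoTag(headerLen) + headerLen
        + CodedOutputStream.computeUInt32SizeNoTag(requestLen) + requestLen;
    out.writeInt(total);            // 4-byte frame length
    header.writeDelimitedTo(out);   // varint length + header bytes
    request.writeDelimitedTo(out);  // varint length + request bytes
  }
}
{code}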



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #5151: HADOOP-18533. RPC Client performance improvement

2022-11-20 Thread GitBox


slfan1989 commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321285981

   @huxinqiu I will deal with the javadoc problem; we can focus on the RPC 
code logic.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.

2022-11-20 Thread GitBox


slfan1989 commented on PR #5152:
URL: https://github.com/apache/hadoop/pull/5152#issuecomment-1321285035

   > The JDK 11 one is still failing
   > 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/1/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt
   
   Thank you very much for reviewing the code! I will fix it.





[GitHub] [hadoop] hadoop-yetus commented on pull request #5143: HDFS-16846. EC: Only EC blocks should be effected by max-streams-hard-limit configuration

2022-11-20 Thread GitBox


hadoop-yetus commented on PR #5143:
URL: https://github.com/apache/hadoop/pull/5143#issuecomment-1321230672

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  0s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5143/6/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 86 unchanged - 
0 fixed = 88 total (was 86)  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 387m 18s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5143/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  3s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 510m 11s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5143/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5143 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 90c154170cfa 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 86140acf2a8c7147861f8373c8f28e9f4c2fa459 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5143/6/testReport/ |
   | Max. process+thread count | 2227 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5137: HDFS-16841. Enhance the function of DebugAdmin#VerifyECCommand

2022-11-20 Thread GitBox


hadoop-yetus commented on PR #5137:
URL: https://github.com/apache/hadoop/pull/5137#issuecomment-1321226138

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 309m 12s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5137/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 12s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 418m  9s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5137/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5137 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fa0cc0849bb9 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2dbc42694a803f94cd118766d4fdbf4d58118fc2 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5137/4/testReport/ |
   | Max. process+thread count | 3042 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5137/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on pull request #5113: YARN-6971 Clean up different ways to create resources

2022-11-20 Thread GitBox


hadoop-yetus commented on PR #5113:
URL: https://github.com/apache/hadoop/pull/5113#issuecomment-1321206502

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 34 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 56s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  22m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   3m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   7m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  24m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  22m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   4m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  12m 50s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 26s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | -1 :x: |  unit  | 104m 22s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5113/8/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |  25m 27s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  unit  |   8m 56s |  |  hadoop-mapreduce-client-app in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 12s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 398m 19s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMService |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5113/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5113 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1ada1405bae6 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8d93cd3be03440d22f88067bb156f78cb7a69e4a |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 

[GitHub] [hadoop] ayushtkn commented on pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.

2022-11-20 Thread GitBox


ayushtkn commented on PR #5152:
URL: https://github.com/apache/hadoop/pull/5152#issuecomment-1321204292

   The Jdk-11 one is still failing
   
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/1/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt





[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636309#comment-17636309
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

hadoop-yetus commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321171513

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  21m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 54s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 28s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/7/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  24m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  21m 49s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 176 unchanged 
- 1 fixed = 176 total (was 177)  |
   | +1 :green_heart: |  mvnsite  |   1m 47s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 12s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/7/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 25s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 215m 18s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5151 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 049ca5ea60c8 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c2233e55d5e068ee5c07086a98700df37faef809 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 


[GitHub] [hadoop] hadoop-yetus commented on pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.

2022-11-20 Thread GitBox


hadoop-yetus commented on PR #5152:
URL: https://github.com/apache/hadoop/pull/5152#issuecomment-1321169830

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  1s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 59s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/1/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-api in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api: The patch generated 0 new + 
133 unchanged - 19 fixed = 133 total (was 152)  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 33s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/1/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-api in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08
 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 0 
unchanged - 146 fixed = 0 total (was 146)  |
   | +1 :green_heart: |  spotbugs  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  7s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 103m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5152 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fcbe6bd5fa7d 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 39a5c9fb0c4171ade6ec822ed2a72d30ef8a54b1 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 

[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636304#comment-17636304
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

hadoop-yetus commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321168319

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  21m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  2s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 29s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/6/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 56s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 54s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 33s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 176 unchanged 
- 1 fixed = 176 total (was 177)  |
   | +1 :green_heart: |  mvnsite  |   1m 53s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 24s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/6/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 47s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 19s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 219m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5151 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux acc74e3ecf85 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / da321a733bdbbfb74caf19d32b3a6d9bbb992361 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 


[GitHub] [hadoop] slfan1989 commented on pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.

2022-11-20 Thread GitBox


slfan1989 commented on PR #5152:
URL: https://github.com/apache/hadoop/pull/5152#issuecomment-1321147063

   @ayushtkn Can you help review this PR? Thank you very much!





[GitHub] [hadoop] slfan1989 opened a new pull request, #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.

2022-11-20 Thread GitBox


slfan1989 opened a new pull request, #5152:
URL: https://github.com/apache/hadoop/pull/5152

   JIRA: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.
   
   While finishing 
[YARN-11373](https://issues.apache.org/jira/browse/YARN-11373), javadoc 
compilation errors turned up, so I will fix all javadoc compilation errors in 
the hadoop-yarn-api module, covering the javadoc builds of both JDK 8 and 
JDK 11.
   
   ```
   [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc-no-fork 
(default-cli) on project hadoop-yarn-api: An error has occurred in Javadoc 
report generation: 
   [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML 
version as HTML 4.01 by using the -html4 option.
   [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
removed
   [ERROR] in a future release. To suppress this warning, please ensure that 
any HTML constructs
   [ERROR] in your comments are valid in HTML5, and remove the -html4 option.
   [ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-5146/ubuntu-focal/src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationBaseProtocol.java:97:
 warning: no description for @throws
   [ERROR]* @throws YarnException
   [ERROR]  ^
   [ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-5146/ubuntu-focal/src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationBaseProtocol.java:98:
 warning: no description for @throws
   [ERROR]* @throws IOException
   [ERROR]  ^
   [ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-5146/ubuntu-focal/src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationBaseProtocol.java:129:
 warning: no description for @throws
   [ERROR]* @throws YarnException
   [ERROR]  ^
   [ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-5146/ubuntu-focal/src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationBaseProtocol.java:130:
 warning: no description for @throws
   [ERROR]* @throws IOException 
   ```
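   
   The usual fix is to give every @throws tag a description. A hedged example 
against ApplicationBaseProtocol#getApplicationReport (the descriptions are 
illustrative, not the exact text of the patch):
   
   ```
   /**
    * Example only: each @throws tag now carries a description, which
    * silences javadoc's "no description for @throws" warning.
    *
    * @throws YarnException if the request cannot be processed by YARN.
    * @throws IOException if an I/O error occurs while contacting the server.
    */
   GetApplicationReportResponse getApplicationReport(
       GetApplicationReportRequest request) throws YarnException, IOException;
   ```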





[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636297#comment-17636297
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

slfan1989 commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321142480

   @huxinqiu Thank you very much for your contribution!
   
   We need to discuss something:
   
   1. It seems the main benefit is avoiding the `ResponseBuffer` allocation (and 
its initial 1024-byte backing array); the length calculation that used to happen 
inside it is simply moved outside.
   
   > modified code
   ```
   int computedSize = connectionContextHeader.getSerializedSize();
   computedSize += CodedOutputStream.computeUInt32SizeNoTag(computedSize);
   int messageSize = message.getSerializedSize();
   computedSize += messageSize;
   computedSize += CodedOutputStream.computeUInt32SizeNoTag(messageSize);
   byte[] dataLengthBuffer = new byte[4];
   dataLengthBuffer[0] = (byte)((computedSize >>> 24) & 0xFF);
   dataLengthBuffer[1] = (byte)((computedSize >>> 16) & 0xFF);
   dataLengthBuffer[2] = (byte)((computedSize >>>  8) & 0xFF);
   dataLengthBuffer[3] = (byte)(computedSize & 0xFF);
   ```
   
   > The original calculation, inside 
connectionContextHeader.writeDelimitedTo(buf), looks like this:
   ```
   int serialized = this.getSerializedSize();
   int bufferSize = CodedOutputStream.computePreferredBufferSize(
       CodedOutputStream.computeRawVarint32Size(serialized) + serialized);
   CodedOutputStream codedOutput = CodedOutputStream.newInstance(output, bufferSize);
   codedOutput.writeRawVarint32(serialized);
   this.writeTo(codedOutput);
   codedOutput.flush();
   ```
   
   > ResponseBuffer#setSize
   ```
   @Override
   public int size() {
     return count - FRAMING_BYTES;
   }

   void setSize(int size) {
     buf[0] = (byte)((size >>> 24) & 0xFF);
     buf[1] = (byte)((size >>> 16) & 0xFF);
     buf[2] = (byte)((size >>>  8) & 0xFF);
     buf[3] = (byte)((size >>>  0) & 0xFF);
   }
   ```
   
   2. Code duplication
   The following length-framing logic appears 3 times, in the three places 
listed below (a shared helper is sketched after the list):
   
   ```
   this.dataLengthBuffer = new byte[4];
   dataLengthBuffer[0] = (byte)((computedSize >>> 24) & 0xFF);
   dataLengthBuffer[1] = (byte)((computedSize >>> 16) & 0xFF);
   dataLengthBuffer[2] = (byte)((computedSize >>>  8) & 0xFF);
   dataLengthBuffer[3] = (byte)(computedSize & 0xFF);
   this.header = header;
   this.rpcRequest = rpcRequest;
   ```
   
   RpcProtobufRequestWithHeader#Constructor
   SaslRpcClient#sendSaslMessage
   Client#writeConnectionContext
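   
   A minimal sketch of such a shared helper, assuming nothing beyond the JDK 
(the class and method names are illustrative, not part of the patch):
   
   ```
   import java.nio.ByteBuffer;
   
   // Illustrative only: one shared helper for the 4-byte big-endian length frame.
   final class LengthFraming {
     static byte[] frame(int size) {
       return new byte[] {
           (byte) ((size >>> 24) & 0xFF),
           (byte) ((size >>> 16) & 0xFF),
           (byte) ((size >>>  8) & 0xFF),
           (byte) (size & 0xFF)
       };
     }
   
     // Equivalent framing via ByteBuffer, whose default byte order is big-endian.
     static byte[] frameWithByteBuffer(int size) {
       return ByteBuffer.allocate(4).putInt(size).array();
     }
   }
   ```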
   




> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing a memory copy. And avoid allocating 1024 bytes of ResponseBuffer 
> each time a request is sent.









[GitHub] [hadoop] haiyang1987 commented on pull request #4902: HDFS-16775.Improve BlockPlacementPolicyRackFaultTolerant's chooseOnce

2022-11-20 Thread GitBox


haiyang1987 commented on PR #4902:
URL: https://github.com/apache/hadoop/pull/4902#issuecomment-1321129513

   Hi @Hexiaoqiao @ZanderXu @tomscut @ayushtkn,
   please help review this PR when you are available. Thanks.





[GitHub] [hadoop] haiyang1987 commented on pull request #5137: HDFS-16841. Enhance the function of DebugAdmin#VerifyECCommand

2022-11-20 Thread GitBox


haiyang1987 commented on PR #5137:
URL: https://github.com/apache/hadoop/pull/5137#issuecomment-1321125773

   Updated the PR:
   1. Changed the option to -skipFailureBlocks.
   2. Kept the previous return-value logic (return 0 if there are no failures, 
else return 1).
   
   @ZanderXu @tomscut @tasanuma please help review it again. Thanks.
   
   





[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636285#comment-17636285
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

hadoop-yetus commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321113348

   :confetti_ball: **+1 overall**


   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 16s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  22m  7s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 14s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 36s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  24m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  5s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  22m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 50s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 58s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 25s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 49s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 10s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 228m 46s |  |  |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5151 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9c0fc2e22b97 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f9a1e4c5eaf7191559f4fe2548c0e5b74ae499c6 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/5/testReport/ |
   | Max. process+thread count | 1410 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |


   This message was automatically generated.




> RPC Client performance improvement


[jira] [Commented] (HADOOP-18531) assertion failure in ITestS3APrefetchingInputStream

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636281#comment-17636281
 ] 

ASF GitHub Bot commented on HADOOP-18531:
-

slfan1989 commented on PR #5149:
URL: https://github.com/apache/hadoop/pull/5149#issuecomment-1321102666

   LGTM.




> assertion failure in ITestS3APrefetchingInputStream
> ---
>
> Key: HADOOP-18531
> URL: https://issues.apache.org/jira/browse/HADOOP-18531
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
>
> assert failure in 
> {{ITestS3APrefetchingInputStream.testReadLargeFileFullyLazySeek}}; looks like 
> the executor was acquired faster than the test expected.
> {code}
> java.lang.AssertionError: 
> [Maxiumum named action_executor_acquired.max] 
> Expecting:
>  <0L>
> to be greater than:
>  <0L> 
> {code}
> proposed: cut that assert as it doesn't seem needed
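
For illustration, a minimal AssertJ sketch of the race described above. The statistic name and message text are taken from the quoted failure; the class and variable names are assumptions, not the actual test source.
{code:java}
import static org.assertj.core.api.Assertions.assertThat;

// If the prefetch executor is acquired immediately, the recorded maximum
// wait time can legitimately be 0, so asserting "max > 0" is racy.
public class FlakyMaxWaitSketch {
  public static void main(String[] args) {
    long actionExecutorAcquiredMax = 0L; // instantaneous acquisition
    assertThat(actionExecutorAcquiredMax)
        .describedAs("Maxiumum named action_executor_acquired.max")
        .isGreaterThan(0L); // fails exactly as quoted above when the max is 0
  }
}
{code}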



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[GitHub] [hadoop] tomscut commented on a diff in pull request #5137: HDFS-16841. Enhance the function of DebugAdmin#VerifyECCommand

2022-11-20 Thread GitBox


tomscut commented on code in PR #5137:
URL: https://github.com/apache/hadoop/pull/5137#discussion_r1027256611


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java:
##
@@ -432,8 +432,16 @@ private class VerifyECCommand extends DebugCommand {
 
 VerifyECCommand() {
   super("verifyEC",
-  "verifyEC -file ",
-  "  Verify HDFS erasure coding on all block groups of the file.");
+  "verifyEC -file  [-blockId ] [-ignoreFailures]",

Review Comment:
   > If the option `-verifyAllFailures` is not specified and all block groups of the file are healthy, it will actually verify all blocks. So `verifyAll` may not be very easy to understand.
   > 
   > @ZanderXu @tomscut @tasanuma how about changing it to `-skipFailureBlocks`?
   
   I think `-skipFailureBlocks` is ok.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tasanuma commented on a diff in pull request #5137: HDFS-16841. Enhance the function of DebugAdmin#VerifyECCommand

2022-11-20 Thread GitBox


tasanuma commented on code in PR #5137:
URL: https://github.com/apache/hadoop/pull/5137#discussion_r1027248133


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java:
##
@@ -432,8 +432,16 @@ private class VerifyECCommand extends DebugCommand {
 
 VerifyECCommand() {
   super("verifyEC",
-  "verifyEC -file ",
-  "  Verify HDFS erasure coding on all block groups of the file.");
+  "verifyEC -file  [-blockId ] [-ignoreFailures]",

Review Comment:
   @haiyang1987 `-skipFailureBlocks` seems good to me. If we use the words "skip" or "ignore", I don't mind the return value. (I mean, either "always return 0" or "return 0 if there are no failures, else return 1" would be fine.)
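
   For illustration, a minimal sketch of the two return-value conventions mentioned above. All names here are hypothetical and `verify()` stands in for the real erasure-coding check; this is not the actual DebugAdmin code.
{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class VerifyExitCodeSketch {
  static int runVerify(List<String> args, List<String> blockGroups) {
    // With -skipFailureBlocks, keep going past bad block groups instead of
    // failing fast on the first one.
    boolean skipFailures = args.remove("-skipFailureBlocks");
    int failures = 0;
    for (String blockGroup : blockGroups) {
      if (!verify(blockGroup)) {
        if (!skipFailures) {
          return 1; // fail fast when the flag is absent
        }
        failures++; // record the failure and continue
      }
    }
    // "Always return 0" would simply be `return 0;` here; the alternative
    // returns 1 if any failures were skipped.
    return failures == 0 ? 0 : 1;
  }

  static boolean verify(String blockGroup) {
    return true; // placeholder for the real EC verification of one block group
  }

  public static void main(String[] unused) {
    List<String> args = new ArrayList<>(Arrays.asList("-skipFailureBlocks"));
    System.out.println(runVerify(args, Arrays.asList("bg0", "bg1")));
  }
}
{code}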



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org