[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2024-01-04 Thread Shilun Fan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802847#comment-17802847
 ] 

Shilun Fan commented on HADOOP-18533:
-

Bulk update: moved all 3.4.0 non-blocker issues; please move this back if it is a 
blocker. Retargeting to 3.5.0.

> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: asdfgh19
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing one memory copy and avoiding the allocation of a 1024-byte 
> ResponseBuffer each time a request is sent.
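For illustration, here is a minimal sketch of the size pre-calculation described above, using the protobuf APIs that appear later in this thread (Message#getSerializedSize and CodedOutputStream#computeUInt32SizeNoTag). The class and method names below are hypothetical, not the actual Hadoop code:

```java
import org.apache.hadoop.thirdparty.protobuf.CodedOutputStream;
import org.apache.hadoop.thirdparty.protobuf.Message;

// Sketch only: compute the on-the-wire size of a length-delimited header plus
// request without serializing them into an intermediate buffer first.
final class FrameSizeSketch {
  static int computeTotalFrameSize(Message header, Message rpcRequest) {
    int headerSize = header.getSerializedSize();
    int requestSize = rpcRequest.getSerializedSize();
    // Each message is written "delimited": a varint length followed by the body.
    return CodedOutputStream.computeUInt32SizeNoTag(headerSize) + headerSize
        + CodedOutputStream.computeUInt32SizeNoTag(requestSize) + requestSize;
  }
}
```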






[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17637730#comment-17637730
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

huxinqiu commented on code in PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#discussion_r1027073648


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine2.java:
##
@@ -27,15 +27,13 @@
 import org.apache.hadoop.ipc.Client.ConnectionId;
 import org.apache.hadoop.ipc.RPC.RpcInvoker;
 import 
org.apache.hadoop.ipc.protobuf.ProtobufRpcEngine2Protos.RequestHeaderProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.SecretManager;
 import org.apache.hadoop.security.token.TokenIdentifier;
 import org.apache.hadoop.classification.VisibleForTesting;
-import org.apache.hadoop.thirdparty.protobuf.BlockingService;
+import org.apache.hadoop.thirdparty.protobuf.*;

Review Comment:
   I have fixed it.





> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing one memory copy and avoiding the allocation of a 1024-byte 
> ResponseBuffer each time a request is sent.






[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17637012#comment-17637012
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

huxinqiu closed pull request #5151: HADOOP-18533. RPC Client performance 
improvement
URL: https://github.com/apache/hadoop/pull/5151




> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing one memory copy and avoiding the allocation of a 1024-byte 
> ResponseBuffer each time a request is sent.






[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17637011#comment-17637011
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

hadoop-yetus commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1323121288

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  21m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  24m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  22m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 185 unchanged 
- 1 fixed = 185 total (was 186)  |
   | +1 :green_heart: |  mvnsite  |   1m 50s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 34s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  8s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 228m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--------------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5151 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 23938719269c 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 57eda6889798acb592616de4c920fde477f2e392 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/9/testReport/ |
   | Max. process+thread count | 1331 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/9/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org 

[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17637008#comment-17637008
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

huxinqiu commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1323110323

   [HADOOP-18536](https://github.com/apache/hadoop/pull/5156) may be more 
suitable




> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing one memory copy and avoiding the allocation of a 1024-byte 
> ResponseBuffer each time a request is sent.






[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636744#comment-17636744
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

hadoop-yetus commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1322263878

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  21m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 52s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 48s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/8/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  21m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 36s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/8/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 2 new + 185 
unchanged - 1 fixed = 187 total (was 186)  |
   | +1 :green_heart: |  mvnsite  |   1m 53s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 16s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/8/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 44s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 20s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 213m 24s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--------------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5151 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux bb2eeff8f248 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d4d033ddbaf44b451ad1f0d317747b74eb876b55 

[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636570#comment-17636570
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

huxinqiu commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321736995

   > @huxinqiu Thank you very much for your contribution!
   > 
   > We need to discuss something:
   > 
   > 1. It seems that the benefit is avoiding the `ResponseBuffer` variable and 
its initial 1024-byte allocation.
   >    Then the original internal size calculation is moved directly to the 
caller.
   > 
   > > modified code
   > 
   > ```
   > int computedSize = connectionContextHeader.getSerializedSize();
   > computedSize += CodedOutputStream.computeUInt32SizeNoTag(computedSize);
   > int messageSize = message.getSerializedSize();
   > computedSize += messageSize;
   > computedSize += CodedOutputStream.computeUInt32SizeNoTag(messageSize);
   > byte[] dataLengthBuffer = new byte[4];
   > dataLengthBuffer[0] = (byte)((computedSize >>> 24) & 0xFF);
   > dataLengthBuffer[1] = (byte)((computedSize >>> 16) & 0xFF);
   > dataLengthBuffer[2] = (byte)((computedSize >>>  8) & 0xFF);
   > dataLengthBuffer[3] = (byte)(computedSize & 0xFF);
   > ```
   > 
   > > The original calculation code is like this 
connectionContextHeader.writeDelimitedTo(buf)
   > 
   > ```
   > int serialized = this.getSerializedSize();
   > int bufferSize = 
CodedOutputStream.computePreferredBufferSize(CodedOutputStream.computeRawVarint32Size(serialized)
 + serialized);
   > CodedOutputStream codedOutput = CodedOutputStream.newInstance(output, 
bufferSize);
   > codedOutput.writeRawVarint32(serialized);
   > this.writeTo(codedOutput);
   > codedOutput.flush();
   > ```
   > 
   > > ResponseBuffer#setSize
   > 
   > ```
   > @Override
   > public int size() {
   >   return count - FRAMING_BYTES;
   > }
   > void setSize(int size) {
   >   buf[0] = (byte)((size >>> 24) & 0xFF);
   >   buf[1] = (byte)((size >>> 16) & 0xFF);
   >   buf[2] = (byte)((size >>>  8) & 0xFF);
   >   buf[3] = (byte)((size >>>  0) & 0xFF);
   > }
   > ```
   > 
   > 2. code duplication
   >The following calculation logic appears 3 times
   > 
   > ```
   > this.dataLengthBuffer = new byte[4];
   >   dataLengthBuffer[0] = (byte)((computedSize >>> 24) & 0xFF);
   >   dataLengthBuffer[1] = (byte)((computedSize >>> 16) & 0xFF);
   >   dataLengthBuffer[2] = (byte)((computedSize >>>  8) & 0xFF);
   >   dataLengthBuffer[3] = (byte)(computedSize & 0xFF);
   >   this.header = header;
   >   this.rpcRequest = rpcRequest;
   > ```
   > 
   > RpcProtobufRequestWithHeader#Constructor SaslRpcClient#sendSaslMessage 
Client#writeConnectionContext
   @slfan1989 
 1. Yes. IpcStreams#out is a BufferedOutputStream, which already has an internal 
byte array, and protobuf's CodedOutputStream also keeps an internal byte-array 
cache to optimize writing, so we don't need to aggregate dataLength, header and 
rpcRequest into a ResponseBuffer, which is itself just a byte array. 
 The only extra cost in the RpcRequestSender thread is the protobuf 
serialization; a request is usually only a few hundred bytes, so serializing it 
takes only tens of microseconds. I therefore think it is better to first 
calculate the request size and then write the dataLength, header and rpcRequest 
to the BufferedOutputStream one by one, avoiding the allocation of a 1024-byte 
array for each request.
 2. I'll fix the code duplication afterwards.
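
As a rough illustration of the "calculate the size first, then write the pieces one by one" approach described above, here is a simplified sketch. It is not the actual patch, and the exact framing in Hadoop's Client/IpcStreams may differ in detail:

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.thirdparty.protobuf.CodedOutputStream;
import org.apache.hadoop.thirdparty.protobuf.Message;

// Sketch: write the 4-byte frame length, the header and the request straight
// into the buffered socket stream, with no intermediate byte-array copy.
final class DirectWriteSketch {
  static void send(OutputStream socketOut, Message header, Message rpcRequest)
      throws IOException {
    int headerSize = header.getSerializedSize();
    int requestSize = rpcRequest.getSerializedSize();
    int total = CodedOutputStream.computeUInt32SizeNoTag(headerSize) + headerSize
        + CodedOutputStream.computeUInt32SizeNoTag(requestSize) + requestSize;

    DataOutputStream out =
        new DataOutputStream(new BufferedOutputStream(socketOut));
    out.writeInt(total);              // big-endian frame length, like dataLengthBuffer
    header.writeDelimitedTo(out);     // varint length + header bytes
    rpcRequest.writeDelimitedTo(out); // varint length + request bytes
    out.flush();
  }
}
```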




> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing one memory copy and avoiding the allocation of a 1024-byte 
> ResponseBuffer each time a request is sent.






[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636371#comment-17636371
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

slfan1989 commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321285981

   @huxinqiu I will deal with the javadoc problem; we can focus on the RPC 
code logic.




> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing one memory copy and avoiding the allocation of a 1024-byte 
> ResponseBuffer each time a request is sent.






[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636309#comment-17636309
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

hadoop-yetus commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321171513

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  21m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 54s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 28s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/7/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  24m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  21m 49s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 176 unchanged 
- 1 fixed = 176 total (was 177)  |
   | +1 :green_heart: |  mvnsite  |   1m 47s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 12s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/7/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 25s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 215m 18s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--------------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5151 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 049ca5ea60c8 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c2233e55d5e068ee5c07086a98700df37faef809 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 

[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636304#comment-17636304
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

hadoop-yetus commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321168319

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  21m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  2s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 29s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/6/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 56s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 54s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 33s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 176 unchanged 
- 1 fixed = 176 total (was 177)  |
   | +1 :green_heart: |  mvnsite  |   1m 53s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 24s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/6/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 47s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 19s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 219m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--------------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5151 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux acc74e3ecf85 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / da321a733bdbbfb74caf19d32b3a6d9bbb992361 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 

[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636297#comment-17636297
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

slfan1989 commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321142480

   @huxinqiu Thank you very much for your contribution!
   
   We need to discuss something:
   
   1.  It seems that the benefit is avoiding the `ResponseBuffer` variable and 
its initial 1024-byte allocation.
   Then the original internal size calculation is moved directly to the caller.
   
   > modified code
   ```
   int computedSize = connectionContextHeader.getSerializedSize();
   computedSize += CodedOutputStream.computeUInt32SizeNoTag(computedSize);
   int messageSize = message.getSerializedSize();
   computedSize += messageSize;
   computedSize += CodedOutputStream.computeUInt32SizeNoTag(messageSize);
   byte[] dataLengthBuffer = new byte[4];
   dataLengthBuffer[0] = (byte)((computedSize >>> 24) & 0xFF);
   dataLengthBuffer[1] = (byte)((computedSize >>> 16) & 0xFF);
   dataLengthBuffer[2] = (byte)((computedSize >>>  8) & 0xFF);
   dataLengthBuffer[3] = (byte)(computedSize & 0xFF);
   ```
   
   > The original calculation code is like this 
connectionContextHeader.writeDelimitedTo(buf)
   ```
   int serialized = this.getSerializedSize();
   int bufferSize = 
CodedOutputStream.computePreferredBufferSize(CodedOutputStream.computeRawVarint32Size(serialized)
 + serialized);
   CodedOutputStream codedOutput = CodedOutputStream.newInstance(output, 
bufferSize);
   codedOutput.writeRawVarint32(serialized);
   this.writeTo(codedOutput);
   codedOutput.flush();
   ```
   
   > ResponseBuffer#setSize
   ```
   @Override
   public int size() {
 return count - FRAMING_BYTES;
   }
   void setSize(int size) {
 buf[0] = (byte)((size >>> 24) & 0xFF);
 buf[1] = (byte)((size >>> 16) & 0xFF);
 buf[2] = (byte)((size >>>  8) & 0xFF);
 buf[3] = (byte)((size >>>  0) & 0xFF);
   }
   ```
   
   2. code duplication
   The following calculation logic appears 3 times
   
   ```
   this.dataLengthBuffer = new byte[4];
 dataLengthBuffer[0] = (byte)((computedSize >>> 24) & 0xFF);
 dataLengthBuffer[1] = (byte)((computedSize >>> 16) & 0xFF);
 dataLengthBuffer[2] = (byte)((computedSize >>>  8) & 0xFF);
 dataLengthBuffer[3] = (byte)(computedSize & 0xFF);
 this.header = header;
 this.rpcRequest = rpcRequest;
   ```
   
   RpcProtobufRequestWithHeader#Constructor
   SaslRpcClient#sendSaslMessage
   Client#writeConnectionContext
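
One possible way to factor out the duplicated length-encoding logic noted above would be a small helper like the following sketch. The class and method names are hypothetical and not part of the patch:

```java
// Hypothetical helper (not in the patch) for the duplicated 4-byte big-endian
// length prefix built in the three places listed above.
final class LengthPrefix {
  static byte[] toBigEndianBytes(int size) {
    return new byte[] {
        (byte) ((size >>> 24) & 0xFF),
        (byte) ((size >>> 16) & 0xFF),
        (byte) ((size >>> 8) & 0xFF),
        (byte) (size & 0xFF)
    };
  }
}
// e.g. this.dataLengthBuffer = LengthPrefix.toBigEndianBytes(computedSize);
```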
   




> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing one memory copy and avoiding the allocation of a 1024-byte 
> ResponseBuffer each time a request is sent.






[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636285#comment-17636285
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

hadoop-yetus commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321113348

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  22m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  24m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  22m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 50s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 58s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 49s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 10s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 228m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--------------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5151 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9c0fc2e22b97 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f9a1e4c5eaf7191559f4fe2548c0e5b74ae499c6 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/5/testReport/ |
   | Max. process+thread count | 1410 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> RPC Client performance 

[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636250#comment-17636250
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

slfan1989 commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321000642

   @huxinqiu Can we add some unit tests?




> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing one memory copy and avoiding the allocation of a 1024-byte 
> ResponseBuffer each time a request is sent.






[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636248#comment-17636248
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

slfan1989 commented on code in PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#discussion_r1027163886


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:
##
@@ -1941,6 +1963,20 @@ public ByteBuffer readResponse() throws IOException {
 public void sendRequest(byte[] buf) throws IOException {
   out.write(buf);
 }
+  
+public void sendRequest(ProtobufRpcEngine2.RpcProtobufRequestWithHeader 
rpcRequest)
+throws IOException {
+  out.writeInt(rpcRequest.getLength());
+  rpcRequest.getHeader().writeDelimitedTo(out);
+  rpcRequest.getRpcRequest().writeTo(out);
+}
+  
+public void sendRequest(int totalSize, RpcRequestHeaderProto header,
+Message rpcRequest) throws IOException {

Review Comment:
   5 character indentation is usually required
   
   ```
public void sendRequest(int totalSize, RpcRequestHeaderProto header,
   Message rpcRequest) throws IOException {
   ```





> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing one memory copy and avoiding the allocation of a 1024-byte 
> ResponseBuffer each time a request is sent.






[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636249#comment-17636249
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

slfan1989 commented on code in PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#discussion_r1027163886


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:
##
@@ -1941,6 +1963,20 @@ public ByteBuffer readResponse() throws IOException {
 public void sendRequest(byte[] buf) throws IOException {
   out.write(buf);
 }
+  
+public void sendRequest(ProtobufRpcEngine2.RpcProtobufRequestWithHeader 
rpcRequest)
+throws IOException {
+  out.writeInt(rpcRequest.getLength());
+  rpcRequest.getHeader().writeDelimitedTo(out);
+  rpcRequest.getRpcRequest().writeTo(out);
+}
+  
+public void sendRequest(int totalSize, RpcRequestHeaderProto header,
+Message rpcRequest) throws IOException {

Review Comment:
   5 character indentation is usually required
   
   ```
public void sendRequest(int totalSize, RpcRequestHeaderProto header,
Message rpcRequest) throws IOException {
   ```





> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing one memory copy and avoiding the allocation of a 1024-byte 
> ResponseBuffer each time a request is sent.






[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636194#comment-17636194
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

hadoop-yetus commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1320922086

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  22m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 52s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 46s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/4/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  25m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  25m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 59s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 21s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/4/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 50s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 32s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 234m 15s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--------------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5151 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5c1f30cad1fa 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 89917b19fc0758352acefa65a465a8d53ffd834e |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 

[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636190#comment-17636190
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

hadoop-yetus commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1320919572

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  22m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  25m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  22m  4s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/3/artifact/out/blanks-eol.txt)
 |  The patch has 9 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 52s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 56s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 34s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  8s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 226m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--------------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5151 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux dada20d15998 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8602d3e1cb87a60792a9ff0f5c1c5dfac19da453 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/3/testReport/ |
   | Max. process+thread count | 3137 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 

[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636170#comment-17636170
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

huxinqiu closed pull request #5151: HADOOP-18533. RPC Client performance 
improvement
URL: https://github.com/apache/hadoop/pull/5151




> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size and then send the request directly to the socket buffer, 
> reducing a memory copy and avoiding the allocation of a 1024-byte 
> ResponseBuffer each time a request is sent.
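
For illustration, a minimal sketch of the buffered path described above. The 
shape is assumed for clarity and is not the exact Hadoop code; the class, 
method, and stream names are illustrative.

```java
// Hedged sketch of the buffered write path: header and request are first
// serialized into a temporary buffer only to learn the frame length, then
// copied again to the socket stream. Names and framing details are assumed.
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.thirdparty.protobuf.Message;

final class BufferedRpcWriteSketch {
  static void send(OutputStream socketOut, Message header, Message request)
      throws IOException {
    // First copy: an intermediate buffer (the real code uses a ResponseBuffer
    // allocating 1024 bytes per request) exists only to obtain the length.
    ByteArrayOutputStream tmp = new ByteArrayOutputStream(1024);
    header.writeDelimitedTo(tmp);
    request.writeDelimitedTo(tmp);

    // Second copy: the length-prefixed frame is written to the socket stream.
    DataOutputStream out = new DataOutputStream(socketOut);
    out.writeInt(tmp.size());
    tmp.writeTo(out);
    out.flush();
  }
}
```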



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636171#comment-17636171
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

huxinqiu opened a new pull request, #5151:
URL: https://github.com/apache/hadoop/pull/5151

   ### Description of PR
   JIRA - [HADOOP-18533](https://issues.apache.org/jira/browse/HADOOP-18533)
   The current implementation copies the rpcRequest and header to a 
ByteArrayOutputStream in order to calculate the total length of the sent 
request, and then writes it to the socket buffer.
   Perhaps, if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
request size and then send the request directly to the socket buffer, saving 
one memory copy and avoiding the allocation of a 1024-byte ResponseBuffer 
each time a request is sent.
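   A hedged sketch of the direct-write idea: with ProtobufRpcEngine2 both the 
header and the request expose getSerializedSize(), so the frame length can be 
computed up front and the bytes streamed straight to the socket. The class and 
method names below are illustrative, and the exact framing used by the patch 
may differ.

```java
// Hedged sketch of pre-computing the request size and writing directly to the
// socket stream, skipping the intermediate buffer. Assumes both header and
// request are written length-delimited; the real wire format may differ.
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.thirdparty.protobuf.CodedOutputStream;
import org.apache.hadoop.thirdparty.protobuf.Message;

final class DirectRpcWriteSketch {
  static void send(OutputStream socketOut, Message header, Message request)
      throws IOException {
    int headerLen = header.getSerializedSize();
    int requestLen = request.getSerializedSize();
    // Each delimited message is preceded by a varint holding its own length.
    int total = CodedOutputStream.computeUInt32SizeNoTag(headerLen) + headerLen
        + CodedOutputStream.computeUInt32SizeNoTag(requestLen) + requestLen;

    DataOutputStream out = new DataOutputStream(socketOut);
    out.writeInt(total);            // overall frame length prefix
    header.writeDelimitedTo(out);   // varint(headerLen) + header bytes
    request.writeDelimitedTo(out);  // varint(requestLen) + request bytes
    out.flush();
  }
}
```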
   
   




> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size and then send the request directly to the socket buffer, 
> reducing a memory copy and avoiding the allocation of a 1024-byte 
> ResponseBuffer each time a request is sent.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636167#comment-17636167
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

hadoop-yetus commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1320892363

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 54s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 27s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  6s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  21m 10s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/2/artifact/out/blanks-eol.txt)
 |  The patch has 9 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  1s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 18s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/2/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 33s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 216m  4s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5151 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5ad6d256c7cd 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 

[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636158#comment-17636158
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

huxinqiu commented on code in PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#discussion_r1027073201


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:
##
@@ -392,7 +394,7 @@ private class Connection extends Thread {
 private IOException closeException; // close reason
 
 private final Thread rpcRequestThread;
-private final SynchronousQueue> rpcRequestQueue 
=
+private final SynchronousQueue> rpcRequestQueue =

Review Comment:
   Because for WritableRpcEngine there is no easy way to calculate the size of 
the request parameters, ResponseBuffer remains the better choice there, so 
Object represents either a ResponseBuffer or an RpcProtobufRequestWithHeader.
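   As a hedged illustration (field layout assumed, not taken from the patch), 
the protobuf path could enqueue a small holder like the one sketched below, 
whose getters mirror the getLength()/getHeader()/getRpcRequest() calls shown 
in the sendRequest overload elsewhere in this thread, while WritableRpcEngine 
keeps enqueueing a ResponseBuffer:

```java
// Hedged sketch of a holder pairing the pre-computed frame length with the
// header and request messages; the constructor and fields are assumptions
// made for illustration only.
import org.apache.hadoop.thirdparty.protobuf.Message;

final class RpcProtobufRequestWithHeaderSketch {
  private final Message header;      // e.g. an RpcRequestHeaderProto
  private final Message rpcRequest;  // the protobuf request body
  private final int length;          // pre-computed total frame length

  RpcProtobufRequestWithHeaderSketch(Message header, Message rpcRequest,
      int length) {
    this.header = header;
    this.rpcRequest = rpcRequest;
    this.length = length;
  }

  Message getHeader() { return header; }
  Message getRpcRequest() { return rpcRequest; }
  int getLength() { return length; }
}
```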





> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing a memory copy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636152#comment-17636152
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

hadoop-yetus commented on PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1320873743

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  21m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 54s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 40s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 55s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/1/artifact/out/blanks-eol.txt)
 |  The patch has 11 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 19s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 44s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 214m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5151 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 7d5514b79fb7 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 

[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636143#comment-17636143
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

huxinqiu commented on code in PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#discussion_r1027073648


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine2.java:
##
@@ -27,15 +27,13 @@
 import org.apache.hadoop.ipc.Client.ConnectionId;
 import org.apache.hadoop.ipc.RPC.RpcInvoker;
 import 
org.apache.hadoop.ipc.protobuf.ProtobufRpcEngine2Protos.RequestHeaderProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.SecretManager;
 import org.apache.hadoop.security.token.TokenIdentifier;
 import org.apache.hadoop.classification.VisibleForTesting;
-import org.apache.hadoop.thirdparty.protobuf.BlockingService;
+import org.apache.hadoop.thirdparty.protobuf.*;

Review Comment:
   I have fixed it.





> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing a memory copy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636142#comment-17636142
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

huxinqiu commented on code in PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#discussion_r1027073201


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:
##
@@ -392,7 +394,7 @@ private class Connection extends Thread {
 private IOException closeException; // close reason
 
 private final Thread rpcRequestThread;
-private final SynchronousQueue> rpcRequestQueue 
=
+private final SynchronousQueue> rpcRequestQueue =

Review Comment:
   Because for WritableRpcEngine there is no easy way to calculate the size of 
the request parameters, ResponseBuffer remains the better choice there, so 
Object represents either a ResponseBuffer or an RpcProtobufRequestWithHeader.





> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing a memory copy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636134#comment-17636134
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

slfan1989 commented on code in PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#discussion_r1027064922


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine2.java:
##
@@ -27,15 +27,13 @@
 import org.apache.hadoop.ipc.Client.ConnectionId;
 import org.apache.hadoop.ipc.RPC.RpcInvoker;
 import 
org.apache.hadoop.ipc.protobuf.ProtobufRpcEngine2Protos.RequestHeaderProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.SecretManager;
 import org.apache.hadoop.security.token.TokenIdentifier;
 import org.apache.hadoop.classification.VisibleForTesting;
-import org.apache.hadoop.thirdparty.protobuf.BlockingService;
+import org.apache.hadoop.thirdparty.protobuf.*;

Review Comment:
   avoid *





> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing a memory copy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636133#comment-17636133
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

slfan1989 commented on code in PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#discussion_r1027064892


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:
##
@@ -1941,6 +1963,20 @@ public ByteBuffer readResponse() throws IOException {
 public void sendRequest(byte[] buf) throws IOException {
   out.write(buf);
 }
+  
+public void sendRequest(ProtobufRpcEngine2.RpcProtobufRequestWithHeader 
rpcRequest)
+throws IOException {
+  out.writeInt(rpcRequest.getLength());
+  rpcRequest.getHeader().writeDelimitedTo(out);
+  rpcRequest.getRpcRequest().writeTo(out);
+}
+  
+public void sendRequest(int totalSize, RpcRequestHeaderProto header,
+Message rpcRequest) throws IOException {

Review Comment:
   indentation



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:
##
@@ -1941,6 +1963,20 @@ public ByteBuffer readResponse() throws IOException {
 public void sendRequest(byte[] buf) throws IOException {
   out.write(buf);
 }
+  
+public void sendRequest(ProtobufRpcEngine2.RpcProtobufRequestWithHeader 
rpcRequest)
+throws IOException {

Review Comment:
   indentation
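
   For context on the totalSize parameter in the overloads quoted above, a 
hedged back-of-the-envelope helper: if the header is written with 
writeDelimitedTo and the request body with writeTo, the frame length would be 
the header's varint length prefix plus both serialized sizes. This helper is 
illustrative only; the patch may compute the size elsewhere.

```java
// Hedged helper showing one way the totalSize for the quoted
// writeInt / writeDelimitedTo / writeTo sequence could be pre-computed.
// Not taken from the actual patch.
import org.apache.hadoop.thirdparty.protobuf.CodedOutputStream;
import org.apache.hadoop.thirdparty.protobuf.Message;

final class FrameSizeSketch {
  static int totalSize(Message header, Message rpcRequest) {
    int headerLen = header.getSerializedSize();
    int bodyLen = rpcRequest.getSerializedSize();
    // writeDelimitedTo(out) emits varint(headerLen) followed by the header
    // bytes; writeTo(out) emits the body bytes with no extra length prefix.
    return CodedOutputStream.computeUInt32SizeNoTag(headerLen) + headerLen
        + bodyLen;
  }
}
```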





> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing a memory copy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636132#comment-17636132
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

slfan1989 commented on code in PR #5151:
URL: https://github.com/apache/hadoop/pull/5151#discussion_r1027064855


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:
##
@@ -392,7 +394,7 @@ private class Connection extends Thread {
 private IOException closeException; // close reason
 
 private final Thread rpcRequestThread;
-private final SynchronousQueue> rpcRequestQueue 
=
+private final SynchronousQueue> rpcRequestQueue =

Review Comment:
   Why should it be replaced by Object?





> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>  Labels: pull-request-available
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing a memory copy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18533) RPC Client performance improvement

2022-11-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636125#comment-17636125
 ] 

ASF GitHub Bot commented on HADOOP-18533:
-

huxinqiu opened a new pull request, #5151:
URL: https://github.com/apache/hadoop/pull/5151

   ### Description of PR
   JIRA - [HADOOP-18533](https://issues.apache.org/jira/browse/HADOOP-18533)
 The current implementation copies the rpcRequest and header to a 
ByteArrayOutputStream in order to calculate the total length of the sent 
request, and then writes it to the socket buffer.
     Perhaps if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
request size, and then send the request directly to the socket buffer, reducing 
one memory copy.
   
   




> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Reporter: xinqiu.hu
>Priority: Minor
>
>   The current implementation copies the rpcRequest and header to a 
> ByteArrayOutputStream in order to calculate the total length of the sent 
> request, and then writes it to the socket buffer.
>   But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the 
> request size, and then send the request directly to the socket buffer, 
> reducing a memory copy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org