dpcollins-google commented on code in PR #13162:
URL: https://github.com/apache/kafka/pull/13162#discussion_r1090733865


##########
clients/src/main/java/org/apache/kafka/common/utils/Utils.java:
##########
@@ -1225,13 +1226,11 @@ public static long tryWriteTo(TransferableChannel destChannel,
      * @param length The number of bytes to write
      * @throws IOException For any errors writing to the output
      */
-    public static void writeTo(DataOutput out, ByteBuffer buffer, int length) throws IOException {
+    public static void writeTo(DataOutputStream out, ByteBuffer buffer, int length) throws IOException {
         if (buffer.hasArray()) {
             out.write(buffer.array(), buffer.position() + buffer.arrayOffset(), length);
         } else {
-            int pos = buffer.position();
-            for (int i = pos; i < length + pos; i++)
-                out.writeByte(buffer.get(i));
+            Channels.newChannel(out).write(buffer);

Review Comment:
   Per 1): This parameter is always buffer.remaining(), so I've cleaned up the call sites and removed it.
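   
   As a rough, hypothetical sketch (the updated call sites aren't in this hunk, so treat this as illustrative rather than the actual diff), the simplified method would presumably look something like:
   
   ```java
   // Illustrative only: the length is derived from the buffer instead of being passed in.
   public static void writeTo(DataOutputStream out, ByteBuffer buffer) throws IOException {
       if (buffer.hasArray()) {
           out.write(buffer.array(), buffer.position() + buffer.arrayOffset(), buffer.remaining());
       } else {
           Channels.newChannel(out).write(buffer);
       }
   }
   ```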
   
   Per 2): Yes, it's substantial. The reason is that WritableByteChannelImpl writes in 8 KB chunks when feasible, instead of one byte at a time: https://github.com/AdoptOpenJDK/openjdk-jdk8u/blob/2544d2a351eca1a3d62276f969dd2d95e4a4d2b6/jdk/src/share/classes/java/nio/channels/Channels.java#L442
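   
   A minimal, self-contained sketch of the two paths (names and sizes are illustrative, not taken from the PR): for a direct buffer, the old code issues one stream write per byte, while the channel returned by Channels.newChannel(out) drains the buffer through an internal transfer array of up to 8192 bytes per OutputStream.write() call.
   
   ```java
   import java.io.ByteArrayOutputStream;
   import java.io.DataOutputStream;
   import java.io.IOException;
   import java.nio.ByteBuffer;
   import java.nio.channels.Channels;
   
   public class ChannelWriteSketch {
       // Old path: one DataOutput.writeByte() call per byte of the buffer.
       static void writeByteByByte(DataOutputStream out, ByteBuffer buffer) throws IOException {
           for (int i = buffer.position(); i < buffer.limit(); i++)
               out.writeByte(buffer.get(i));
       }
   
       // New path: the generic WritableByteChannel wrapper copies the buffer
       // through an 8 KB transfer array, so far fewer stream writes happen.
       static void writeViaChannel(DataOutputStream out, ByteBuffer buffer) throws IOException {
           Channels.newChannel(out).write(buffer);
       }
   
       public static void main(String[] args) throws IOException {
           ByteBuffer direct = ByteBuffer.allocateDirect(1 << 20); // not array-backed, so the else branch applies
           DataOutputStream out = new DataOutputStream(new ByteArrayOutputStream());
           writeViaChannel(out, direct.duplicate());   // ~128 chunked writes of 8 KB each
           writeByteByByte(out, direct.duplicate());   // ~1,048,576 single-byte writes
       }
   }
   ```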
   
   Unfortunately I can't share benchmarks to demonstrate this, as they were collected from a production application using internal tooling.
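   
   For anyone who wants to reproduce the comparison locally, a hedged JMH-style sketch is below (this is not the internal benchmark referenced above, the class name is hypothetical, and absolute numbers will vary by JDK and hardware):
   
   ```java
   import java.io.DataOutputStream;
   import java.io.IOException;
   import java.io.OutputStream;
   import java.nio.ByteBuffer;
   import java.nio.channels.Channels;
   import java.util.concurrent.TimeUnit;
   
   import org.openjdk.jmh.annotations.Benchmark;
   import org.openjdk.jmh.annotations.BenchmarkMode;
   import org.openjdk.jmh.annotations.Mode;
   import org.openjdk.jmh.annotations.OutputTimeUnit;
   import org.openjdk.jmh.annotations.Scope;
   import org.openjdk.jmh.annotations.Setup;
   import org.openjdk.jmh.annotations.State;
   
   @State(Scope.Thread)
   @BenchmarkMode(Mode.AverageTime)
   @OutputTimeUnit(TimeUnit.MICROSECONDS)
   public class WriteToBench {
   
       private ByteBuffer directBuffer;
       private DataOutputStream out;
   
       @Setup
       public void setup() {
           directBuffer = ByteBuffer.allocateDirect(1 << 20); // 1 MiB, not array-backed
           // Discard all output so the benchmark measures the write path, not the sink.
           out = new DataOutputStream(new OutputStream() {
               @Override public void write(int b) { }
               @Override public void write(byte[] b, int off, int len) { }
           });
       }
   
       @Benchmark
       public void byteByByte() throws IOException {
           ByteBuffer buf = directBuffer.duplicate();
           for (int i = buf.position(); i < buf.limit(); i++)
               out.writeByte(buf.get(i));
       }
   
       @Benchmark
       public void viaChannel() throws IOException {
           Channels.newChannel(out).write(directBuffer.duplicate());
       }
   }
   ```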


