liuzqt commented on code in PR #38064:
URL: https://github.com/apache/spark/pull/38064#discussion_r1006141237
##########
core/src/main/scala/org/apache/spark/util/io/ChunkedByteBuffer.scala:
##########
@@ -84,6 +91,74 @@ private[spark] class ChunkedByteBuffer(var chunks: Array[ByteBuffer]) {
}
}
+ /**
+ * write to ObjectOutput with zero copy if possible
+ */
+ override def writeExternal(out: ObjectOutput): Unit = {
+ // we want to keep the chunks layout
+ out.writeInt(chunks.length)
+ chunks.foreach(buffer => out.writeInt(buffer.limit()))
+ chunks.foreach(buffer => out.writeBoolean(buffer.isDirect))
+ var buffer: Array[Byte] = null
+ val bufferLen = ChunkedByteBuffer.COPY_BUFFER_LEN
+
+ getChunks().foreach { chunk => {
+ if (chunk.hasArray) {
+ // zero copy if the bytebuffer is backed by bytes array
+ out.write(chunk.array(), chunk.arrayOffset(), chunk.limit())
+ } else {
+ // fallback to copy approach
+ if (buffer == null) {
+ buffer = new Array[Byte](bufferLen)
+ }
+ var bytesToRead = Math.min(chunk.remaining(), bufferLen)
+ while (bytesToRead > 0) {
+ chunk.get(buffer, 0, bytesToRead)
+ out.write(buffer, 0, bytesToRead)
+ bytesToRead = Math.min(chunk.remaining(), bufferLen)
+ }
Review Comment:
I tried to reuse `Utils.writeByteBuffer` and noticed that there are two
versions of it, one for `OutputStream` and one for `DataOutput`, with
identical bodies. So I added a `Utils.writeByteBufferImpl` to extract the
common logic, and also added a `ThreadLocal[Array[Byte]]` so the copy
buffer is reused across calls instead of being reallocated each time.
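For context, here is a minimal sketch of the shape such a refactor could take. The name `writeByteBufferImpl` comes from the comment above; everything else (object name, buffer size, position handling) is an illustrative assumption, not the actual Spark implementation.

```scala
import java.io.{DataOutput, OutputStream}
import java.nio.ByteBuffer

// Hypothetical sketch of extracting the common copy loop shared by the
// OutputStream and DataOutput overloads of Utils.writeByteBuffer.
object WriteByteBufferSketch {
  private val COPY_BUFFER_LEN = 64 * 1024

  // Per-thread scratch buffer, so repeated calls avoid reallocating.
  private val threadLocalBuffer = new ThreadLocal[Array[Byte]] {
    override def initialValue(): Array[Byte] = new Array[Byte](COPY_BUFFER_LEN)
  }

  // Single implementation parameterized over a write function, so both
  // public overloads can delegate to it.
  private def writeByteBufferImpl(
      bb: ByteBuffer,
      writer: (Array[Byte], Int, Int) => Unit): Unit = {
    if (bb.hasArray) {
      // Heap-backed buffer: write the backing array directly, no extra copy.
      writer(bb.array(), bb.arrayOffset() + bb.position(), bb.remaining())
    } else {
      // Direct buffer: fall back to copying through the scratch array.
      val buffer = threadLocalBuffer.get()
      val originalPosition = bb.position()
      var bytesToCopy = Math.min(bb.remaining(), buffer.length)
      while (bytesToCopy > 0) {
        bb.get(buffer, 0, bytesToCopy)
        writer(buffer, 0, bytesToCopy)
        bytesToCopy = Math.min(bb.remaining(), buffer.length)
      }
      // Restore the position so the caller's buffer is not consumed.
      bb.position(originalPosition)
    }
  }

  def writeByteBuffer(bb: ByteBuffer, out: OutputStream): Unit =
    writeByteBufferImpl(bb, (a, off, len) => out.write(a, off, len))

  def writeByteBuffer(bb: ByteBuffer, out: DataOutput): Unit =
    writeByteBufferImpl(bb, (a, off, len) => out.write(a, off, len))
}
```

One design note on the sketch: passing a `(Array[Byte], Int, Int) => Unit` closure is the simplest way to unify the two overloads, since `OutputStream` and `DataOutput` share no common supertype with a three-argument `write`.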
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]