Victsm commented on a change in pull request #30433:
URL: https://github.com/apache/spark/pull/30433#discussion_r542684269
##########
File path:
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java
##########
@@ -586,6 +592,7 @@ public void onData(String streamId, ByteBuffer buf) throws
IOException {
deferredBufs = null;
return;
}
+ abortIfNecessary();
Review comment:
@Ngone51 I think that's an existing behavior in Spark with how it uses
`StreamInterceptor` to stream data coming from an RPC message that's outside
the frame.
This is also documented with SPARK-6237 in #21346:
https://github.com/apache/spark/blob/82aca7eb8f2501dceaf610f1aaa86082153ef5ee/common/network-common/src/main/java/org/apache/spark/network/server/RpcHandler.java#L62-L66
It is the server that closes the channel if an exception is thrown from
`onData`.
Once an exception is thrown from `onData` while `StreamInterceptor` hasn't
finished processing all the out-of-frame bytes for a given RPC message, the
`TransportFrameDecoder` will no longer be able to successfully decode
subsequent RPC messages from this channel, because the stream is left
positioned mid-frame. Thus, the server needs to close the channel at that
point.
Once the channel is closed, the client can no longer transfer any more data
to the server over the same channel. The connection needs to be reestablished
at that point, which resets the state on the client side.
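To make the failure mode concrete, here is a minimal toy sketch (not Spark's actual `TransportFrameDecoder` or `StreamInterceptor`; all names here are hypothetical) of a length-prefixed frame stream. If the payload handler throws partway through a frame, the stream cursor is left mid-frame, and the next "frame length" read picks up payload bytes instead of a real length, so the only safe recovery is to close the channel:

```java
import java.nio.ByteBuffer;

public class FrameDecoderSketch {
    // Hypothetical handler, standing in for an onData-style callback.
    interface PayloadHandler {
        void onData(ByteBuffer chunk) throws Exception;
    }

    // Build a stream of two [int length][payload] frames.
    static ByteBuffer twoFrames(byte[] p1, byte[] p2) {
        ByteBuffer buf = ByteBuffer.allocate(8 + p1.length + p2.length);
        buf.putInt(p1.length).put(p1);
        buf.putInt(p2.length).put(p2);
        buf.flip();
        return buf;
    }

    // Consume one frame, feeding the payload to the handler in small
    // chunks. If the handler throws mid-frame, we rethrow without
    // draining the rest of the payload, leaving the buffer positioned
    // in the middle of the frame.
    static void decodeFrame(ByteBuffer buf, PayloadHandler handler) throws Exception {
        int remaining = buf.getInt();
        while (remaining > 0) {
            byte[] data = new byte[Math.min(4, remaining)];
            buf.get(data);
            remaining -= data.length;
            handler.onData(ByteBuffer.wrap(data)); // may throw mid-frame
        }
    }

    public static void main(String[] args) {
        byte[] p1 = new byte[10];          // frame 1: 10 zero bytes
        byte[] p2 = "hello".getBytes();    // frame 2: real length is 5
        ByteBuffer stream = twoFrames(p1, p2);

        try {
            decodeFrame(stream, chunk -> { throw new Exception("boom"); });
        } catch (Exception e) {
            // Handler failed mid-frame; the cursor is now inside frame 1's
            // payload, not at a frame boundary.
        }
        // The next "frame length" read is actually zero-filled payload
        // bytes of frame 1, not frame 2's real length (5).
        int bogusLen = stream.getInt();
        System.out.println(bogusLen);
    }
}
```

Running this prints `0` rather than `5`: once frame alignment is lost, every subsequent decode reads garbage, which is why the server closes the channel and forces the client to reconnect with fresh state.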
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]