otterc commented on a change in pull request #33613:
URL: https://github.com/apache/spark/pull/33613#discussion_r682962892



##########
File path: common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java
##########
@@ -471,9 +488,10 @@ public void onData(String streamId, ByteBuffer buf) {
         public void onComplete(String streamId) {
           if (isStaleBlockOrTooLate) {
            // Throw an exception here so the block data is drained from channel and server
-            // responds RpcFailure to the client.
-            throw new RuntimeException(String.format("Block %s %s", streamId,
-              ErrorHandler.BlockPushErrorHandler.TOO_LATE_OR_STALE_BLOCK_PUSH_MESSAGE_SUFFIX));
+            // responds the error code to the client.
+            throw new BlockPushNonFatalFailure(
+              new PushBlockNonFatalErrorCode(ErrorCode.TOO_LATE_OR_STALE_BLOCK_PUSH.id())

Review comment:
       @Victsm that's a good point, but we can still replace the usage of `StaleBlockPushException`, which is created with a message built via `String.format`, with `BlockPushNonFatalFailure`. We can then catch `BlockPushNonFatalFailure` and, based on the `ErrorCode` enum, differentiate between stale and too-late block pushes.
   
   Also, for stale block pushes from prior application attempts, isn't closing the channel better? For clients that are pushing blocks from a prior application attempt, we just want them to stop pushing altogether, correct?
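   To illustrate the pattern being proposed, here is a minimal, hypothetical sketch (not the actual Spark classes; `BlockPushNonFatalFailure`, `ErrorCode`, and the enum constant names are simplified stand-ins) showing how a caller could catch the single exception type and branch on the error code instead of parsing a `String.format` message:

```java
// Hypothetical, simplified sketch of catching one exception type and
// differentiating stale vs. too-late pushes via an error-code enum.
// The real BlockPushNonFatalFailure lives in Spark's network-shuffle module.
public class BlockPushErrorCodeDemo {

  // Illustrative error codes; the actual enum in Spark may differ.
  enum ErrorCode { TOO_LATE_BLOCK_PUSH, STALE_BLOCK_PUSH }

  // Minimal stand-in for a non-fatal failure carrying an error code
  // rather than encoding the cause in a formatted message string.
  static class BlockPushNonFatalFailure extends RuntimeException {
    final ErrorCode errorCode;
    BlockPushNonFatalFailure(ErrorCode errorCode) {
      this.errorCode = errorCode;
    }
  }

  // Classify the failure based on its ErrorCode, as the review suggests,
  // instead of inspecting a message suffix.
  static String classify(BlockPushNonFatalFailure failure) {
    switch (failure.errorCode) {
      case STALE_BLOCK_PUSH:
        return "stale";
      case TOO_LATE_BLOCK_PUSH:
        return "too late";
      default:
        return "unknown";
    }
  }

  public static void main(String[] args) {
    try {
      throw new BlockPushNonFatalFailure(ErrorCode.STALE_BLOCK_PUSH);
    } catch (BlockPushNonFatalFailure e) {
      // The server can now choose its response (or close the channel)
      // based on the code alone.
      System.out.println(classify(e)); // prints "stale"
    }
  }
}
```

   The design advantage is that the distinction between stale and too-late pushes stays machine-readable, so callers never need to match on message text.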




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


