attilapiros commented on issue #24499: [SPARK-25888][Core] Serve local disk 
persisted blocks by the external service after releasing executor by dynamic 
allocation
URL: https://github.com/apache/spark/pull/24499#issuecomment-490562212
 
 
   I think it is important to mention why the previous commit sets `spark.shuffle.io.maxRetries` to 0 for testing:
   
https://github.com/apache/spark/blob/dfeeda24c0f5d60bf6d2e1868c5290a1f62dc558/common/network-shuffle/src/test/java/org/apache/spark/network/shuffle/ExternalShuffleIntegrationSuite.java#L103
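   
   As a side note, here is a minimal sketch (not the exact test code) of what that config amounts to, assuming the `MapConfigProvider` and `TransportConf` helpers from `network-common`:
   
   ```java
   import java.util.Collections;
   
   import org.apache.spark.network.util.MapConfigProvider;
   import org.apache.spark.network.util.TransportConf;
   
   public class NoRetryConfSketch {
     public static void main(String[] args) {
       // Build a shuffle-module TransportConf with retries disabled,
       // which is the effect the test config is after.
       TransportConf conf = new TransportConf("shuffle",
           new MapConfigProvider(
               Collections.singletonMap("spark.shuffle.io.maxRetries", "0")));
   
       // With 0 retries the shuffle client skips the retry wrapper, so a fetch
       // that fails on a closed channel surfaces immediately in the test.
       System.out.println("maxIORetries = " + conf.maxIORetries());
     }
   }
   ```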
   
   Without this setting the runtime of the test with the corrupt file (ExternalShuffleIntegrationSuite#testFetchCorruptRddBlock) increases dramatically: with 0 retries it takes only 0.2 seconds, but with 3 retries it goes up to about 15 seconds. I think that is because the error is detected at a very deep level within Netty and the channel is closed right here:
   
   
https://github.com/apache/spark/blob/cc7aea020a45adda9a464b3bb9300a6b35ec77ca/common/network-common/src/main/java/org/apache/spark/network/server/ChunkFetchRequestHandler.java#L131-L133
 
   
   So the quick `ChunkFetchFailure` is not sent back for this request; the client only sees the channel being closed.
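   
   To put rough numbers on it (an assumption on my part, based on the default 5 second `spark.shuffle.io.retryWait` between attempts), here is where the ~15 seconds could come from when retries are left at 3:
   
   ```java
   public class RetryDelaySketch {
     public static void main(String[] args) {
       int maxRetries = 3;        // spark.shuffle.io.maxRetries (default)
       long retryWaitMs = 5_000;  // spark.shuffle.io.retryWait (default 5s)
   
       // Each retry only starts after the wait elapses, so a fetch that keeps
       // failing on a closed channel blocks the test for roughly this long
       // before the failure is finally reported.
       System.out.println("worst-case wait ~ " + (maxRetries * retryWaitMs) + " ms");
     }
   }
   ```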
