prakharjain09 commented on a change in pull request #27539: [SPARK-30786][CORE] Fix Block replication failure propogation issue in BlockManager
URL: https://github.com/apache/spark/pull/27539#discussion_r380089887
##########
File path: core/src/main/scala/org/apache/spark/network/netty/NettyBlockRpcServer.scala
##########
@@ -105,8 +105,14 @@ class NettyBlockRpcServer(
         val blockId = BlockId(uploadBlock.blockId)
         logDebug(s"Receiving replicated block $blockId with level ${level} " +
           s"from ${client.getSocketAddress}")
-        blockManager.putBlockData(blockId, data, level, classTag)
-        responseContext.onSuccess(ByteBuffer.allocate(0))
+        val blockStored = blockManager.putBlockData(blockId, data, level, classTag)
+        if (blockStored) {
+          responseContext.onSuccess(ByteBuffer.allocate(0))
+        } else {
+          val exception = new Exception(s"Upload block for $blockId failed. This mostly happens " +
Review comment:
I think @karuppayya is trying to say that we can throw specific exceptions from "blockManager.putBlockData()" in order to pass the exact failure message from the server to the client.
The "putBlockData" method of the "BlockDataManager" interface has a boolean return type, so it can still return false when it is not able to store a block. If this line throws an exception, that exception's message is already passed to the client as an RPC failure - no change in that behavior.
This PR handles the scenario where putBlockData returns false; in that case too, we should send an RPC failure to the client.
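
To make the paths being discussed concrete, here is a minimal, self-contained Scala sketch of the pattern: store succeeds, store returns false, or store throws. `ResponseContext`, `storeBlock`, `handleUpload`, and the simplified argument types are stand-ins invented for this example; they are not Spark's actual NettyBlockRpcServer or BlockDataManager APIs.

import java.nio.ByteBuffer
import scala.util.control.NonFatal

// Stand-in for the RPC response callback: onSuccess replies with a payload,
// onFailure surfaces the throwable's message to the client as an RPC failure.
trait ResponseContext {
  def onSuccess(response: ByteBuffer): Unit
  def onFailure(error: Throwable): Unit
}

object UploadBlockSketch {

  // Stand-in for a block store: returns true when the block is stored,
  // false when it cannot be stored, and may also throw.
  def storeBlock(blockId: String, data: Array[Byte]): Boolean = data.nonEmpty

  def handleUpload(blockId: String, data: Array[Byte], ctx: ResponseContext): Unit = {
    try {
      if (storeBlock(blockId, data)) {
        // Stored successfully: reply with an empty success payload, as in the diff above.
        ctx.onSuccess(ByteBuffer.allocate(0))
      } else {
        // The case this PR adds: the store returned false, so report an RPC
        // failure instead of silently replying with success.
        ctx.onFailure(new Exception(s"Upload block for $blockId failed."))
      }
    } catch {
      // Pre-existing behavior: an exception thrown while storing is propagated
      // to the client as an RPC failure carrying its message.
      case NonFatal(e) => ctx.onFailure(e)
    }
  }
}

For example, handleUpload("rdd_0_0", Array.emptyByteArray, ctx) would take the false branch and report the failure to the client, while an exception thrown inside storeBlock would reach the client through the same onFailure path.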