ivoson commented on code in PR #39459:
URL: https://github.com/apache/spark/pull/39459#discussion_r1096526714


##########
core/src/main/scala/org/apache/spark/storage/BlockManager.scala:
##########
@@ -1424,6 +1457,16 @@ private[spark] class BlockManager(
     blockStoreUpdater.save()
   }
 
+  // Check whether an RDD block is visible or not.
+  private[spark] def isRDDBlockVisible(blockId: RDDBlockId): Boolean = {
+    // If the rdd block visibility information is not available in the block
+    // manager, ask the master for the information.
+    if (blockInfoManager.isRDDBlockVisible(blockId)) {
+      return true
+    }
+    master.isRDDBlockVisible(blockId)

Review Comment:
   One way in my mind is that we cache the results (for visible RDD blocks) in the block manager. Once the RDD is removed, a broadcast message would be sent to each BlockManager to clean the cache.
   
   I am wondering whether it is worth doing this: since there is locality scheduling, if tasks get scheduled to the executor where the cached block exists, there will be no calls to the master anyway.
   
   Let me know your thoughts about this, thanks. cc @mridulm @Ngone51 
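
   A minimal sketch of the caching idea described above, outside of Spark: a local set of block ids known to be visible is consulted first, the master is asked only on a miss, and an "RDD removed" broadcast clears the stale entries. `MasterStub` and `VisibilityCache` are hypothetical names for illustration, and `RDDBlockId` is redefined here as a plain case class rather than Spark's real class:
   
   ```scala
   import scala.collection.mutable
   
   // Simplified stand-in for Spark's RDDBlockId.
   final case class RDDBlockId(rddId: Int, splitIndex: Int)
   
   // Stand-in for the BlockManagerMaster RPC endpoint; counts round trips.
   class MasterStub {
     private val visible = mutable.Set.empty[RDDBlockId]
     var callCount = 0
     def markVisible(id: RDDBlockId): Unit = visible += id
     def isRDDBlockVisible(id: RDDBlockId): Boolean = {
       callCount += 1
       visible.contains(id)
     }
   }
   
   class VisibilityCache(master: MasterStub) {
     private val cached = mutable.Set.empty[RDDBlockId]
   
     def isRDDBlockVisible(id: RDDBlockId): Boolean = {
       // Fast path: answer from the local cache, no master round trip.
       if (cached.contains(id)) return true
       // Slow path: ask the master; cache only positive answers, since
       // visibility never transitions back from visible to invisible.
       val visible = master.isRDDBlockVisible(id)
       if (visible) cached += id
       visible
     }
   
     // Invoked when the broadcast "RDD removed" message arrives.
     def onRDDRemoved(rddId: Int): Unit = {
       val stale = cached.filter(_.rddId == rddId)
       cached --= stale
     }
   }
   ```
   
   With this shape, a second lookup for the same block is served locally, which matches the point about locality scheduling: once a task lands on the executor that already resolved the block's visibility, no further master calls are needed until the RDD is removed.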



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

