mccheah commented on a change in pull request #28618:
URL: https://github.com/apache/spark/pull/28618#discussion_r487545374



##########
File path: core/src/main/java/org/apache/spark/shuffle/sort/io/LocalDiskShuffleExecutorComponents.java
##########
@@ -17,69 +17,64 @@
 
 package org.apache.spark.shuffle.sort.io;
 
-import java.util.Map;
 import java.util.Optional;
 
 import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Supplier;
+import com.google.common.base.Suppliers;
 
 import org.apache.spark.SparkConf;
 import org.apache.spark.SparkEnv;
+import org.apache.spark.shuffle.IndexShuffleBlockResolver;
 import org.apache.spark.shuffle.api.ShuffleExecutorComponents;
 import org.apache.spark.shuffle.api.ShuffleMapOutputWriter;
-import org.apache.spark.shuffle.IndexShuffleBlockResolver;
 import org.apache.spark.shuffle.api.SingleSpillShuffleMapOutputWriter;
-import org.apache.spark.storage.BlockManager;
 
 public class LocalDiskShuffleExecutorComponents implements ShuffleExecutorComponents {
 
   private final SparkConf sparkConf;
-  private BlockManager blockManager;
-  private IndexShuffleBlockResolver blockResolver;
+  private final Supplier<IndexShuffleBlockResolver> blockResolver;

Review comment:
       Hm, I think that now that the laziness is handled at the level above (MemoizingShuffleDataIO), it should be fine to make these non-lazy. I'll try that, push, and check the build afterwards. I recall there being something weird with local mode, but that may have been a state that's no longer possible with the new memoizing layer above.




