advancedxy commented on a change in pull request #26040: [SPARK-9853][Core] Optimize shuffle fetch of continuous partition IDs
URL: https://github.com/apache/spark/pull/26040#discussion_r333855219
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/shuffle/BlockStoreShuffleReader.scala
 ##########
 @@ -41,6 +43,20 @@ private[spark] class BlockStoreShuffleReader[K, C](
 
   private val dep = handle.dependency
 
 +  private def fetchContinuousBlocksInBatch: Boolean = {
 +    val conf = SparkEnv.get.conf
 +    val compressed = conf.get(config.SHUFFLE_COMPRESS)
 +    val featureEnabled = conf.get(config.SHUFFLE_FETCH_CONTINUOUS_BLOCKS_IN_BATCH)
 +    val serializerRelocatable = dep.serializer.supportsRelocationOfSerializedObjects
 +    // The batch fetching feature only works when reading the consolidated file
 +    // written by SortShuffleWriter or UnsafeShuffleWriter.
 +    val readConsolidateFile = !handle.isInstanceOf[BypassMergeSortShuffleHandle[_, _]]
 +
 +    readConsolidateFile && featureEnabled && endPartition - startPartition > 1 &&
 
 Review comment:
   How about adding a log here for the case where featureEnabled is true but the final decision is false, to indicate to users that continuous-block fetching is not actually used due to an incompatible serializer or compression codec configuration?
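
A rough sketch of what such a warning could look like (illustrative only, not the actual patch; it assumes the values computed in the diff above are in scope, and `codecConcatenation` is a hypothetical name for the compression-codec compatibility check):

```scala
// Illustrative sketch only: combine the conditions from the diff above into
// the final decision, then warn when the feature flag is on but the decision
// ends up false. `codecConcatenation` is a hypothetical helper value.
val doBatchFetch = readConsolidateFile && featureEnabled &&
  endPartition - startPartition > 1 && serializerRelocatable &&
  (!compressed || codecConcatenation)
if (featureEnabled && !doBatchFetch) {
  logWarning("Continuous shuffle block fetching is enabled via " +
    s"${config.SHUFFLE_FETCH_CONTINUOUS_BLOCKS_IN_BATCH.key}, but it is not " +
    "used because other conditions are not satisfied. " +
    s"compressed: $compressed, serializer relocatable: $serializerRelocatable, " +
    s"codec supports concatenation: $codecConcatenation")
}
doBatchFetch
```

Logging only when `featureEnabled && !doBatchFetch` keeps the log quiet for users who never opted in, while giving opted-in users the specific condition that disabled the feature.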

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
