[ https://issues.apache.org/jira/browse/SPARK-22144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16182317#comment-16182317 ]

Lijia Liu commented on SPARK-22144:
-----------------------------------

cc Yin Huai

> ExchangeCoordinator will not combine the partitions of a 0-sized pre-shuffle
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-22144
>                 URL: https://issues.apache.org/jira/browse/SPARK-22144
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.2.0
>         Environment: spark: version:Spark 2.2
> master: yarn
> deploy-mode: cluster
>            Reporter: Lijia Liu
>
> A simple case:
>
>     spark.conf.set("spark.sql.adaptive.enabled", "true")
>     val df = spark.range(0, 0, 1, 10).selectExpr("id as key1")
>       .groupBy("key1").count()
>     val exchange = df.queryExecution.executedPlan.collect {
>       case e: org.apache.spark.sql.execution.exchange.ShuffleExchange => e
>     }(0)
>     println(exchange.outputPartitioning.numPartitions)
>
> The printed value will be spark.sql.shuffle.partitions, i.e. the
> ExchangeCoordinator did not take effect. At the same time, a job with
> spark.sql.shuffle.partitions tasks will be submitted.
> In my opinion, when the data is empty, this job is useless and superfluous.
> It wastes resources, especially when spark.sql.shuffle.partitions is set
> very large.
> So, as far as I'm concerned, when the number of pre-shuffle partitions is
> 0, the number of post-shuffle partitions should be 0 instead of
> spark.sql.shuffle.partitions.
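For illustration, the size-based coalescing that ExchangeCoordinator performs can be sketched roughly as below. This is a minimal Python sketch, not Spark's actual code: the function name and the greedy packing rule are assumptions, standing in for Spark's packing of map-output statistics up to spark.sql.adaptive.shuffle.targetPostShuffleInputSize. The point is the proposed edge case: an empty pre-shuffle should yield 0 post-shuffle partitions rather than falling back to spark.sql.shuffle.partitions.

```python
def estimate_post_shuffle_partitions(partition_sizes, target_size):
    """Greedily pack consecutive pre-shuffle partition sizes (bytes) into
    post-shuffle partitions of at most target_size bytes each."""
    # Proposed behaviour from this issue: an empty pre-shuffle produces
    # 0 post-shuffle partitions, so no useless job is submitted.
    if not partition_sizes:
        return 0
    count = 1
    current = 0
    for size in partition_sizes:
        if current + size > target_size and current > 0:
            count += 1       # start a new post-shuffle partition
            current = size
        else:
            current += size  # keep packing into the current partition
    return count

print(estimate_post_shuffle_partitions([], 64))            # 0
print(estimate_post_shuffle_partitions([10, 10, 10], 25))  # 2
```

With this change, the empty df in the repro above would report 0 output partitions and skip the spark.sql.shuffle.partitions-sized job entirely.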



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
