wypoon commented on a change in pull request #2512:
URL: https://github.com/apache/iceberg/pull/2512#discussion_r657350965



##########
File path: spark3-extensions/src/main/scala/org/apache/spark/sql/catalyst/utils/DistributionAndOrderingUtils.scala
##########
@@ -80,7 +94,11 @@ object DistributionAndOrderingUtils {
       // the conversion to catalyst expressions above produces SortOrder expressions
       // for OrderedDistribution and generic expressions for ClusteredDistribution
       // this allows RepartitionByExpression to pick either range or hash partitioning
-      RepartitionByExpression(distribution, query, numShufflePartitions)
+      if (Spark3VersionUtil.isSpark30) {
+        repartitionByExpressionCtor.newInstance(distribution.toSeq, query, new Integer(numShufflePartitions))
+      } else {
+        repartitionByExpressionCtor.newInstance(distribution.toSeq, query, Some(numShufflePartitions))

Review comment:
       You're right. There is a difference between passing `Some(conf.numShufflePartitions)` and passing `None`: in the latter case, `ShuffleExchangeExec` will have `canChangeNumPartitions` set to `true`.
   I can see that there will be opportunities to vary the logic depending on the Spark 3 version to take advantage of new Spark 3 features. However, as you remarked, that is out of the scope of this PR. Here we just want to support building and running against both 3.0 and 3.1 while keeping the logic the same.
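   For context, a minimal, self-contained sketch of the reflection pattern the diff uses. The dummy case classes below stand in for Spark's `RepartitionByExpression`, whose last constructor parameter changed from `Int` (Spark 3.0) to `Option[Int]` (Spark 3.1); the class names, a `build` helper, and the placeholder `Seq[String]`/`String` argument types are all illustrative, not Spark's real API:

```scala
// Stand-ins for the Spark 3.0 and 3.1 shapes of RepartitionByExpression
// (real Spark takes Seq[Expression] and a LogicalPlan; strings keep this runnable).
case class Repartition30(exprs: Seq[String], query: String, numPartitions: Int)
case class Repartition31(exprs: Seq[String], query: String, numPartitions: Option[Int])

object ReflectiveCtorDemo {
  // Mirrors the version dispatch in the diff: pick the constructor matching
  // the detected Spark version, then box the last argument accordingly.
  def build(isSpark30: Boolean, numPartitions: Int): Any = {
    val ctor =
      if (isSpark30) classOf[Repartition30].getConstructors.head
      else classOf[Repartition31].getConstructors.head
    if (isSpark30) {
      // Integer.valueOf avoids the deprecated `new Integer(...)` boxing
      ctor.newInstance(Seq("expr"), "query", Integer.valueOf(numPartitions))
    } else {
      ctor.newInstance(Seq("expr"), "query", Some(numPartitions))
    }
  }

  def main(args: Array[String]): Unit = {
    println(build(isSpark30 = true, 200))
    println(build(isSpark30 = false, 200))
  }
}
```

   Because both branches produce the same logical repartitioning, this keeps behavior identical across versions, which is exactly the stated goal of the PR.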




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
