RussellSpitzer commented on a change in pull request #2512:
URL: https://github.com/apache/iceberg/pull/2512#discussion_r658040759
##########
File path: spark3-extensions/src/main/scala/org/apache/spark/sql/catalyst/utils/DistributionAndOrderingUtils.scala
##########
@@ -80,7 +94,11 @@ object DistributionAndOrderingUtils {
// the conversion to catalyst expressions above produces SortOrder expressions
// for OrderedDistribution and generic expressions for ClusteredDistribution
// this allows RepartitionByExpression to pick either range or hash partitioning
- RepartitionByExpression(distribution, query, numShufflePartitions)
+ if (Spark3VersionUtil.isSpark30) {
+   repartitionByExpressionCtor.newInstance(distribution.toSeq, query, new Integer(numShufflePartitions))
+ } else {
+   repartitionByExpressionCtor.newInstance(distribution.toSeq, query, Some(numShufflePartitions))
Review comment:
I'm +1. I think as a follow-up we should check whether we are in Spark 3 and do something like
```
val numPartitions = write.requiredNumPartitions()
val finalNumPartitions = if (numPartitions > 0) {
numPartitions
} else {
conf.numShufflePartitions
}
```
which is what we are doing, but I think that's a bit of a perf improvement and not necessary for getting this compatibility in. Maybe @wypoon you want to make a follow-up?
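To illustrate the reflection trick the diff relies on: the third constructor parameter of `RepartitionByExpression` changed from a plain `Int` in Spark 3.0 to an `Option[Int]` in Spark 3.1, so a single compiled artifact has to pick the matching constructor at runtime. Here is a minimal, self-contained sketch of that pattern; `Repartition30`, `Repartition31`, and `RepartitionCompat` are hypothetical stand-ins for the real Spark classes, not code from this PR.

```scala
// Hypothetical stubs mirroring the signature change between Spark versions:
// Spark 3.0-style ctor takes Int, Spark 3.1-style ctor takes Option[Int].
case class Repartition30(exprs: Seq[String], query: String, numPartitions: Int)
case class Repartition31(exprs: Seq[String], query: String, numPartitions: Option[Int])

object RepartitionCompat {
  // Pick the constructor matching the runtime version, analogous to what
  // repartitionByExpressionCtor does in the diff above.
  def create(isSpark30: Boolean, exprs: Seq[String], query: String, n: Int): Any = {
    if (isSpark30) {
      // Int parameter: box the value so it fits Constructor.newInstance(Object*)
      val ctor = classOf[Repartition30].getConstructor(
        classOf[Seq[_]], classOf[String], classOf[Int])
      ctor.newInstance(exprs, query, Integer.valueOf(n))
    } else {
      // Option[Int] parameter: pass Some(n) instead
      val ctor = classOf[Repartition31].getConstructor(
        classOf[Seq[_]], classOf[String], classOf[Option[_]])
      ctor.newInstance(exprs, query, Some(n))
    }
  }
}
```

The caller only supplies the version flag; the boxing (`Integer.valueOf`) versus wrapping (`Some`) difference stays contained in one place, which is exactly why the diff branches on `Spark3VersionUtil.isSpark30`.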
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]