maropu commented on a change in pull request #29473:
URL: https://github.com/apache/spark/pull/29473#discussion_r475587292



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
##########
@@ -166,15 +167,24 @@ case class FileSourceScanExec(
     requiredSchema: StructType,
     partitionFilters: Seq[Expression],
     optionalBucketSet: Option[BitSet],
-    optionalNumCoalescedBuckets: Option[Int],
+    optionalNewNumBuckets: Option[Int],
     dataFilters: Seq[Expression],
     tableIdentifier: Option[TableIdentifier])
   extends DataSourceScanExec {
 
   // Note that some vals referring the file-based relation are lazy intentionally
   // so that this plan can be canonicalized on executor side too. See SPARK-23731.
   override lazy val supportsColumnar: Boolean = {
-    relation.fileFormat.supportBatch(relation.sparkSession, schema)
+    // `RepartitioningBucketRDD` converts columnar batches to rows to calculate bucket id for each
+    // row, thus columnar is not supported when `RepartitioningBucketRDD` is used to avoid
+    // conversions from batches to rows and back to batches.
+    relation.fileFormat.supportBatch(relation.sparkSession, schema) && !isRepartitioningBuckets

Review comment:
       I'm not sure how much performance gain this columnar execution provides, but is the proposed idea to give up that gain and use bucket repartitioning instead?

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/bucketing/CoalesceOrRepartitionBucketsInJoin.scala
##########
@@ -83,23 +82,39 @@ case class CoalesceBucketsInJoin(conf: SQLConf) extends Rule[SparkPlan] {
   }
 
   def apply(plan: SparkPlan): SparkPlan = {
-    if (!conf.coalesceBucketsInJoinEnabled) {
+    if (!conf.coalesceBucketsInJoinEnabled && !conf.repartitionBucketsInJoinEnabled) {
       return plan
     }
 
+    if (conf.coalesceBucketsInJoinEnabled && conf.repartitionBucketsInJoinEnabled) {
+      throw new AnalysisException("Both 'spark.sql.bucketing.coalesceBucketsInJoin.enabled' and " +
+        "'spark.sql.bucketing.repartitionBucketsInJoin.enabled' cannot be set to true at the" +
+        "same time")

Review comment:
       Could we use `Enumeration` and `checkValues` instead? I think this check should be done in `SQLConf`.
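
For reference, a rough sketch of the `Enumeration` + `checkValues` pattern already used elsewhere in `SQLConf` (the config key and object names below are made up for illustration, not taken from this PR):

```scala
import java.util.Locale

// Illustrative only: a single enum-valued config instead of two boolean flags,
// so invalid combinations are rejected by checkValues when the conf is set.
object BucketReadStrategy extends Enumeration {
  val COALESCE, REPARTITION, OFF = Value
}

// This would live inside SQLConf, where buildConf is available:
val BUCKET_READ_STRATEGY_IN_JOIN =
  buildConf("spark.sql.bucketing.bucketReadStrategyInJoin")
    .doc("How to align the bucket counts of the two sides of a join: coalesce " +
      "the side with more buckets, repartition the side with fewer buckets, or do nothing.")
    .stringConf
    .transform(_.toUpperCase(Locale.ROOT))
    .checkValues(BucketReadStrategy.values.map(_.toString))
    .createWithDefault(BucketReadStrategy.OFF.toString)
```

With something like that, the rule could branch on a single strategy value and the cross-flag `AnalysisException` above would no longer be needed.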

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
##########
@@ -166,15 +167,24 @@ case class FileSourceScanExec(
     requiredSchema: StructType,
     partitionFilters: Seq[Expression],
     optionalBucketSet: Option[BitSet],
-    optionalNumCoalescedBuckets: Option[Int],
+    optionalNewNumBuckets: Option[Int],
     dataFilters: Seq[Expression],
     tableIdentifier: Option[TableIdentifier])
   extends DataSourceScanExec {
 
   // Note that some vals referring the file-based relation are lazy intentionally
   // so that this plan can be canonicalized on executor side too. See SPARK-23731.
   override lazy val supportsColumnar: Boolean = {
-    relation.fileFormat.supportBatch(relation.sparkSession, schema)
+    // `RepartitioningBucketRDD` converts columnar batches to rows to calculate bucket id for each
+    // row, thus columnar is not supported when `RepartitioningBucketRDD` is used to avoid
+    // conversions from batches to rows and back to batches.
+    relation.fileFormat.supportBatch(relation.sparkSession, schema) && !isRepartitioningBuckets
+  }
+
+  @transient private lazy val isRepartitioningBuckets: Boolean = {

Review comment:
       nit: we don't need the explicit `: Boolean` here?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
