andygrove commented on code in PR #731:
URL: https://github.com/apache/datafusion-comet/pull/731#discussion_r1693967998


##########
spark/src/main/scala/org/apache/spark/sql/comet/CometRowToColumnarExec.scala:
##########
@@ -60,8 +62,17 @@ case class CometRowToColumnarExec(child: SparkPlan)
     val timeZoneId = conf.sessionLocalTimeZone
     val schema = child.schema
 
-    child
-      .execute()
+    val rdd: RDD[InternalRow] = if (child.supportsColumnar) {
+      child
+        .executeColumnar()
+        .mapPartitionsInternal { iter =>
+          iter.flatMap(_.rowIterator().asScala)
+        }
+    } else {
+      child.execute()
+    }

Review Comment:
   My understanding is that this allows us to read Parquet files containing
structs in tests by falling back to Spark, and that this new code wouldn't be
executed outside of tests. Is that correct? If so, could you add some comments
here to that effect?
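
   For reference, a minimal sketch of what such commenting might look like,
assuming the surrounding CometRowToColumnarExec context and imports from the
diff, and with the comment wording reflecting the understanding stated above
(a hedged illustration, not final wording for the PR):

       // NOTE (assumption, per review discussion): this columnar branch is
       // believed to be exercised only in tests, e.g. when reading Parquet
       // files containing structs by falling back to Spark's columnar scan.
       val rdd: RDD[InternalRow] = if (child.supportsColumnar) {
         child
           .executeColumnar()
           .mapPartitionsInternal { iter =>
             // ColumnarBatch.rowIterator() yields java.util.Iterator[InternalRow];
             // asScala (scala.collection.JavaConverters) adapts it for flatMap.
             iter.flatMap(_.rowIterator().asScala)
           }
       } else {
         // Row-based child: consume its InternalRow output directly.
         child.execute()
       }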


