revans2 commented on a change in pull request #34499:
URL: https://github.com/apache/spark/pull/34499#discussion_r745574708
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala
##########
@@ -322,7 +322,12 @@ abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializable
*/
private def getByteArrayRdd(
n: Int = -1, takeFromEnd: Boolean = false): RDD[(Long, Array[Byte])] = {
- execute().mapPartitionsInternal { iter =>
+ val rdd = if (supportsColumnar) {
+ ColumnarToRowExec(this).execute()
Review comment:
From my experience that is what happens already: `executeCollect` will be
called on the `ColumnarToRowExec`, not on anything that supports columnar
processing. But I don't know what issues others have run into. For us, one of
the main issues comes up with operators like `CollectLimitExec`, which want
their `executeCollect` to be called so they can limit the number of tasks that
have to run to produce a result. If the `ColumnarToRowExec` is inserted after a
columnar version of `CollectLimitExec`, as is done in this patch, then
`execute` is called instead and you get less desirable behavior.
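To illustrate the concern, here is a minimal plain-Scala sketch, not Spark's actual implementation: `LimitSketch`, `executeAll`, and `executeCollectLimit` are hypothetical names, and real `CollectLimitExec` scales up the number of partitions it scans per round rather than scanning one at a time. The point it models is why a limit operator's `executeCollect` can launch far fewer tasks than a plain `execute` that materializes every partition:

```scala
// Hypothetical model: each inner Seq is a partition, and "running" a
// partition counts as one task.
object LimitSketch {
  // execute()-style path: every partition is materialized, so every
  // partition costs a task regardless of any limit downstream.
  def executeAll(partitions: Seq[Seq[Int]]): (Seq[Int], Int) =
    (partitions.flatten, partitions.length)

  // executeCollect()-style path for a limit: scan partitions incrementally
  // and stop as soon as `limit` rows have been collected, so the remaining
  // partitions never run as tasks.
  def executeCollectLimit(
      partitions: Seq[Seq[Int]], limit: Int): (Seq[Int], Int) = {
    var collected = Vector.empty[Int]
    var tasks = 0
    val it = partitions.iterator
    while (collected.length < limit && it.hasNext) {
      collected ++= it.next()
      tasks += 1
    }
    (collected.take(limit), tasks)
  }
}
```

With four partitions and a limit of 2, the `executeCollect`-style path runs a single task while the `execute`-style path runs all four, which is the behavior difference described above.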
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]