cloud-fan commented on a change in pull request #34499:
URL: https://github.com/apache/spark/pull/34499#discussion_r745891358
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala
##########
@@ -322,7 +322,12 @@ abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializable
*/
   private def getByteArrayRdd(
       n: Int = -1, takeFromEnd: Boolean = false): RDD[(Long, Array[Byte])] = {
-    execute().mapPartitionsInternal { iter =>
+    val rdd = if (supportsColumnar) {
+      ColumnarToRowExec(this).execute()
##########
Review comment:
I'm a bit uncomfortable with creating query plans on the fly in an
execution code path. People may need to apply a final transformation to a
physical plan before running it, but now there is no chance to do so. How
about this:
```
class SparkPlan {
  def toRowBased: SparkPlan =
    if (supportsColumnar) ColumnarToRowExec(this) else this
}
```
Then during debugging we can do `plan.toRowBased.executeCollect()`.
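
For illustration, a minimal sketch of how this could look, assuming the
helper is added to `SparkPlan` itself; the serialization body of
`getByteArrayRdd` is elided with `???` since it is unchanged by the
suggestion, and this is a sketch, not the PR's actual code:
```
// Assumes the existing imports in SparkPlan.scala (RDD, QueryPlan,
// Logging, ColumnarToRowExec, and the internal mapPartitionsInternal).
abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializable {

  // Suggested helper: decide once whether a columnar-to-row conversion is
  // needed, instead of constructing ColumnarToRowExec inline on the
  // execution path.
  def toRowBased: SparkPlan =
    if (supportsColumnar) ColumnarToRowExec(this) else this

  private def getByteArrayRdd(
      n: Int = -1, takeFromEnd: Boolean = false): RDD[(Long, Array[Byte])] = {
    // The helper replaces the inline `if (supportsColumnar)` branch from
    // the diff above.
    toRowBased.execute().mapPartitionsInternal { iter =>
      // ... existing row-serialization logic, unchanged ...
      ???
    }
  }
}
```
This keeps the execution path from building new plan nodes ad hoc, and gives
a single place where any final transformation could be applied before a plan
runs.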
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]