Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/21815#discussion_r203679895
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala ---
@@ -166,10 +166,10 @@ case class FileSourceScanExec(
override val tableIdentifier: Option[TableIdentifier])
extends DataSourceScanExec with ColumnarBatchScan {
-  override val supportsBatch: Boolean = relation.fileFormat.supportBatch(
+  override lazy val supportsBatch: Boolean = relation.fileFormat.supportBatch(
     relation.sparkSession, StructType.fromAttributes(output))
- override val needsUnsafeRowConversion: Boolean = {
+ override lazy val needsUnsafeRowConversion: Boolean = {
if (relation.fileFormat.isInstanceOf[ParquetSource]) {
SparkSession.getActiveSession.get.sessionState.conf.parquetVectorizedReaderEnabled
--- End diff ---
Let's leave this out of this PR's scope. That's more about making the plan
workable, whereas this PR targets making the plan canonicalizable.
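
For context, a minimal sketch (not Spark code; the class and field names below are made up) of why the `val` -> `lazy val` change matters for canonicalization: a strict `val` that reads runtime state such as the active session is evaluated while the canonicalized copy is being constructed, where that state may be missing, whereas a `lazy val` defers evaluation until the field is actually used.

```scala
object LazyValSketch {
  // Stand-in for SparkSession.getActiveSession; purely illustrative.
  var activeSession: Option[String] = None

  class EagerPlan {
    // Evaluated eagerly when the instance is constructed (e.g. while building
    // a canonicalized copy of the plan), so it throws if no session is set.
    val needsConversion: Boolean = activeSession.get.nonEmpty
  }

  class LazyPlan {
    // Deferred until first access, by which time the session may be available.
    lazy val needsConversion: Boolean = activeSession.get.nonEmpty
  }

  def main(args: Array[String]): Unit = {
    val lazyCopy = new LazyPlan       // fine: the lazy field is not evaluated yet
    // new EagerPlan                  // would throw NoSuchElementException here
    activeSession = Some("spark")
    println(lazyCopy.needsConversion) // true: evaluated only now
  }
}
```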