monkeyboy123 edited a comment on pull request #35662: URL: https://github.com/apache/spark/pull/35662#issuecomment-1053498753
> On the other hand, if we backport [SPARK-35798](https://issues.apache.org/jira/browse/SPARK-35798) to branch-3.1, can this issue be solved?

I tried backporting [SPARK-35798](https://issues.apache.org/jira/browse/SPARK-35798) to branch-3.1, and it does not solve the issue: a new NullPointerException is thrown here:

```scala
case class FileSourceScanExec(
    @transient relation: HadoopFsRelation,
    output: Seq[Attribute],
    requiredSchema: StructType,
    partitionFilters: Seq[Expression],
    optionalBucketSet: Option[BitSet],
    optionalNumCoalescedBuckets: Option[Int],
    dataFilters: Seq[Expression],
    tableIdentifier: Option[TableIdentifier],
    disableBucketedScan: Boolean = false)
  extends DataSourceScanExec {

  // Note that some vals referring the file-based relation are lazy intentionally
  // so that this plan can be canonicalized on executor side too. See SPARK-23731.
  override lazy val supportsColumnar: Boolean = {
    relation.fileFormat.supportBatch(relation.sparkSession, schema)
  }
```

because `relation` is null.
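As background on why `relation` ends up null: it is marked `@transient`, so it is dropped during Java serialization and comes back as null when the plan is deserialized on the executor side. The following is a minimal standalone sketch of that mechanism (the `Relation`/`Scan` classes are hypothetical stand-ins, not Spark code):

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

// Hypothetical stand-in for HadoopFsRelation.
case class Relation(name: String)

// Hypothetical stand-in for FileSourceScanExec: the @transient field is
// skipped by Java serialization, so it is null after a round trip.
case class Scan(@transient relation: Relation, output: Seq[String]) {
  // Dereferencing the transient field after deserialization throws an NPE,
  // just as supportsColumnar does via relation.fileFormat in the PR.
  def supportsColumnar: Boolean = relation.name.nonEmpty
}

object TransientDemo {
  // Serialize and deserialize a value with plain Java serialization.
  def roundTrip[T](value: T): T = {
    val buffer = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(buffer)
    out.writeObject(value)
    out.close()
    val in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray))
    in.readObject().asInstanceOf[T]
  }

  def main(args: Array[String]): Unit = {
    val scan = Scan(Relation("hdfs"), Seq("a", "b"))
    println(scan.supportsColumnar)          // fine on the "driver" side
    val deserialized = roundTrip(scan)
    println(deserialized.relation == null)  // transient field was lost
    try deserialized.supportsColumnar
    catch { case _: NullPointerException => println("NPE, as reported above") }
  }
}
```

This is why the `lazy val` trick from SPARK-23731 matters: the lazy vals must not be forced on the executor, since any path that touches `relation` there hits this null.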
