rahil-c commented on code in PR #8885:
URL: https://github.com/apache/hudi/pull/8885#discussion_r1224609025


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/BaseFileOnlyRelation.scala:
##########
@@ -66,17 +66,21 @@ case class BaseFileOnlyRelation(override val sqlContext: SQLContext,
   // NOTE: This override has to mirror semantic of whenever this Relation is converted into [[HadoopFsRelation]],
   //       which is currently done for all cases, except when Schema Evolution is enabled
   override protected val shouldExtractPartitionValuesFromPartitionPath: Boolean =
-    internalSchemaOpt.isEmpty
+  internalSchemaOpt.isEmpty
 
   override lazy val mandatoryFields: Seq[String] = Seq.empty
 
+  // Before Spark 3.4.0: PartitioningAwareFileIndex.BASE_PATH_PARAM
+  // Since Spark 3.4.0: FileIndexOptions.BASE_PATH_PARAM
+  val BASE_PATH_PARAM = "basePath"
+
   override def updatePrunedDataSchema(prunedSchema: StructType): Relation =
     this.copy(prunedDataSchema = Some(prunedSchema))
 
   override def imbueConfigs(sqlContext: SQLContext): Unit = {
     super.imbueConfigs(sqlContext)
     // TODO Issue with setting this to true in spark 332
-    if (!HoodieSparkUtils.gteqSpark3_3_2) {
+    if (HoodieSparkUtils.gteqSpark3_4 || !HoodieSparkUtils.gteqSpark3_3_2) {

Review Comment:
   @yihua Now that I think about it, the changes we made for Spark 3.4 should also be brought to our Spark 3.3.2 implementation. That would let us remove this if check altogether and allow the vectorized reader to be enabled regardless of Spark version.
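
   For illustration, the simplification being proposed might look like the sketch below. This assumes the version-gated block only toggles Spark's `spark.sql.parquet.enableVectorizedReader` flag, which is an assumption here, not confirmed by the diff:
   ```scala
   // Hypothetical sketch: once the Spark 3.4 fix is backported to the 3.3.2
   // module, the HoodieSparkUtils version gate can be dropped and the
   // vectorized Parquet reader enabled unconditionally.
   override def imbueConfigs(sqlContext: SQLContext): Unit = {
     super.imbueConfigs(sqlContext)
     // No gteqSpark3_3_2 / gteqSpark3_4 check needed any more
     sqlContext.sparkSession.sessionState.conf
       .setConfString("spark.sql.parquet.enableVectorizedReader", "true")
   }
   ```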



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to