TheR1sing3un commented on code in PR #12677:
URL: https://github.com/apache/hudi/pull/12677#discussion_r1928007792


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieBaseRelation.scala:
##########
@@ -278,13 +278,21 @@ abstract class HoodieBaseRelation(val sqlContext: SQLContext,
 
   /**
    * Columns that relation has to read from the storage to properly execute on its semantic: for ex,
-   * for Merge-on-Read tables key fields as well and precombine field comprise mandatory set of columns,
+   * for performing incremental read, the {@link HoodieRecord.COMMIT_TIME_METADATA_FIELD} is required for filtering the out-of-range records
+   *
+   * @VisibleInTests
+   */
+  lazy val mandatoryFields: Seq[String] = Seq.empty
+
+  /**
+   * Columns that relation may need to read from the storage to properly execute on its semantic: for ex,
+   * for Merge-on-Read tables key fields as well and pre-combine field comprise mandatory set of columns,
    * meaning that regardless of whether this columns are being requested by the query they will be fetched
-   * regardless so that relation is able to combine records properly (if necessary)
+   * regardless so that relation is able to combine records properly (when performing Snapshot-Read on the file-groups with log files)
    *
    * @VisibleInTests
    */
-  val mandatoryFields: Seq[String]
+  lazy val optionalExtraFields: Seq[String] = Seq.empty

Review Comment:
   > Not sure how the fields got set up and whether they are required or not.
   
   `mandatory` is for fields that must always be read regardless of the read behavior, such as `_hoodie_commit_time` for incremental reads, while `optional` is for fields that are only needed to merge records during snapshot reads, such as `_hoodie_record_key` and the pre-combine key.
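
   To make the distinction concrete, here is a minimal, hypothetical Scala sketch (simplified; not the actual `HoodieBaseRelation` code, and the field names `mandatoryFields`, `optionalExtraFields`, and `projectedColumns` are stand-ins) of how the two field sets could be combined when resolving which columns to fetch from storage:

   ```scala
   // Hypothetical sketch of combining mandatory and optional extra fields.
   object FieldResolutionSketch {
     // Always read, e.g. `_hoodie_commit_time` for incremental-read filtering.
     val mandatoryFields: Seq[String] = Seq("_hoodie_commit_time")
     // Only needed when records must be merged (snapshot read over file
     // groups with log files), e.g. record key and pre-combine field.
     val optionalExtraFields: Seq[String] = Seq("_hoodie_record_key", "ts")

     // Resolve the columns actually fetched from storage: the query's
     // requested columns, plus mandatory fields, plus the optional extras
     // when merging is required.
     def projectedColumns(requested: Seq[String], needsMerging: Boolean): Seq[String] = {
       val extras =
         if (needsMerging) mandatoryFields ++ optionalExtraFields
         else mandatoryFields
       (requested ++ extras).distinct
     }

     def main(args: Array[String]): Unit = {
       println(projectedColumns(Seq("uuid", "fare"), needsMerging = false))
       println(projectedColumns(Seq("uuid", "fare"), needsMerging = true))
     }
   }
   ```

   Under this reading, `mandatoryFields` is unconditional while `optionalExtraFields` is gated on whether the scan must combine base and log records.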



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
