danny0405 commented on code in PR #13208:
URL: https://github.com/apache/hudi/pull/13208#discussion_r2061009361


##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/common/SparkReaderContextFactory.java:
##########
@@ -88,6 +90,9 @@ class SparkReaderContextFactory implements ReaderContextFactory<InternalRow> {
     // Spark parquet reader has to be instantiated on the driver and broadcast to the executors
     SparkParquetReader parquetFileReader = sparkAdapter.createParquetFileReader(false, sqlConf, options, configs);
     parquetReaderBroadcast = jsc.broadcast(parquetFileReader);
+    // Broadcast the table config to the executors as well.
+    HoodieTableConfig tableConfig = metaClient.getTableConfig();

Review Comment:
   > The SerializableConfiguration is just the hadoop and IO focused props.
   
   Not really: `SparkReaderContextFactory#getHadoopConfiguration` also sets up options like schema evolution and parquet.
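   
   (For context, Hadoop's `Configuration` is `Writable` but not `java.io.Serializable`, which is why it has to be wrapped before broadcasting. A minimal sketch of that pattern; the option key below is made up, standing in for the schema-evolution/parquet flags mentioned above:)
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.spark.api.java.JavaSparkContext;
   import org.apache.spark.broadcast.Broadcast;
   import org.apache.spark.util.SerializableConfiguration;
   
   class ConfBroadcastSketch {
     // Wrap the Hadoop conf so each executor fetches one read-only copy
     // instead of re-serializing it with every task.
     static Broadcast<SerializableConfiguration> broadcastConf(
         JavaSparkContext jsc, Configuration hadoopConf) {
       // Hypothetical key, standing in for the reader options discussed above.
       hadoopConf.setBoolean("example.schema.evolution.enabled", true);
       return jsc.broadcast(new SerializableConfiguration(hadoopConf));
     }
     // Executor side: broadcast.getValue().value() yields the Configuration.
   }
   ```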
   
   I think the `ReaderContext` is really a developer API; should we expose the components directly, instead of Hudi-related configs, in case the developer has no good insight into the numerous Hudi options?
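   
   To make that concrete, a purely illustrative sketch of a component-oriented surface (the interface and accessor names are hypothetical, not existing Hudi API):
   
   ```java
   // Stand-ins so the sketch is self-contained; the real types live in
   // hudi-spark-client and hudi-common respectively.
   interface SparkParquetReader {}
   interface HoodieTableConfig {}
   
   // Hypothetical component-oriented ReaderContext surface: hand callers the
   // already-constructed pieces rather than the raw Hudi option maps.
   interface ComponentReaderContext {
     // The parquet reader instantiated on the driver and broadcast to executors.
     SparkParquetReader parquetReader();
   
     // The resolved table config, rather than its underlying key/value props.
     HoodieTableConfig tableConfig();
   }
   ```
   
   The trade-off: callers get fully constructed objects and never touch the option maps, at the cost of a wider interface to keep stable.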
   
   


