boneanxs commented on code in PR #6028:
URL: https://github.com/apache/hudi/pull/6028#discussion_r928368597


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala:
##########
@@ -523,17 +523,19 @@ object HoodieSparkSqlWriter {
     val params: mutable.Map[String, String] = collection.mutable.Map(parameters.toSeq: _*)
     params(HoodieWriteConfig.AVRO_SCHEMA_STRING.key) = schema.toString
     val writeConfig = DataSourceUtils.createHoodieConfig(schema.toString, path, tblName, mapAsJavaMap(params))
-    val bulkInsertPartitionerRows: BulkInsertPartitioner[Dataset[Row]] = if (populateMetaFields) {
+    val bulkInsertPartitionerRows: BulkInsertPartitioner[Dataset[Row]] = {
       val userDefinedBulkInsertPartitionerOpt = DataSourceUtils.createUserDefinedBulkInsertPartitionerWithRows(writeConfig)

Review Comment:
   Should we add a new method to the partitioner interface to validate that the columns meet its requirements (e.g., expose the mandatory fields so we can check against them)? Currently, if a user configures a user-defined partitioner that requires meta fields, we still accept it and don't throw an error... A rough sketch of what this could look like is below.
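   For illustration only, here is a minimal sketch of such a validation hook. `RequiresFields`, `mandatoryFields`, and `validatePartitioner` are hypothetical names for this sketch, not existing Hudi APIs:
   ```scala
   import scala.collection.JavaConverters._
   import org.apache.hudi.common.model.HoodieRecord
   import org.apache.hudi.exception.HoodieException
   import org.apache.hudi.table.BulkInsertPartitioner
   import org.apache.spark.sql.{Dataset, Row}

   /** Hypothetical mix-in letting a partitioner declare the columns it depends on. */
   trait RequiresFields {
     def mandatoryFields: Seq[String]
   }

   /** Hypothetical check the writer could run before accepting a user-defined partitioner. */
   def validatePartitioner(partitioner: BulkInsertPartitioner[Dataset[Row]],
                           populateMetaFields: Boolean): Unit = partitioner match {
     case p: RequiresFields if !populateMetaFields =>
       // HoodieRecord.HOODIE_META_COLUMNS lists _hoodie_commit_time, _hoodie_record_key, etc.
       val metaFields = HoodieRecord.HOODIE_META_COLUMNS.asScala.toSet
       val missing = p.mandatoryFields.filter(metaFields.contains)
       if (missing.nonEmpty) {
         throw new HoodieException(
           s"Partitioner requires meta fields [${missing.mkString(", ")}] " +
             "but hoodie.populate.meta.fields is disabled")
       }
     case _ => // partitioner declares no requirements; accept it as before
   }
   ```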
   


