wombatu-kun commented on code in PR #10949:
URL: https://github.com/apache/hudi/pull/10949#discussion_r1554482596
##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/ddl/TestSpark3DDL.scala:
##########
@@ -742,6 +744,8 @@ class TestSpark3DDL extends HoodieSparkSqlTestBase {
option("hoodie.schema.on.read.enable","true").
option("hoodie.datasource.write.reconcile.schema","true").
option(DataSourceWriteOptions.TABLE_NAME.key(), tableName).
+ option(HoodieWriteConfig.WRITE_PAYLOAD_CLASS_NAME.key(),
classOf[OverwriteWithLatestAvroPayload].getName).
Review Comment:
> Are you saying the discrepancy is in #combineAndGetUpdateValue?

Exactly. `OverwriteWithLatestAvroPayload` always overwrites the stored record
with the latest delta record, while `DefaultHoodieRecordPayload` picks the
latest record based on the ordering field value.
So for this test to pass with `DefaultHoodieRecordPayload` we must not use
randomly generated values in the ordering field (ts). That's what I did in my last fix.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]