rkwagner commented on code in PR #13291:
URL: https://github.com/apache/hudi/pull/13291#discussion_r2093541105
##########
hudi-client/hudi-spark-client/src/main/scala/org/apache/spark/sql/HoodieDataTypeUtils.scala:
##########
@@ -39,7 +39,9 @@ object HoodieDataTypeUtils {
StructType.fromString(jsonSchema)
def canUseRowWriter(schema: Schema, conf: Configuration): Boolean = {
- if (conf.getBoolean(AvroWriteSupport.WRITE_OLD_LIST_STRUCTURE, true)) {
+ if (HoodieAvroUtils.hasTimestampMillisField(schema)) {
Review Comment:
This fixes the following issue:
https://github.com/apache/hudi/issues/13233
When creating a new table, Hudi streamers always coerce timestamps to micros,
regardless of the precision the user specifies in the output schema. In the
internal converter, whichever timestamp precision the output schema declares
(millis or micros), you always end up with micros.
The OR clause here makes that clear:
https://github.com/apache/hudi/pull/13291/files#diff-2d823101c425b4f9fbc444d1def5b6ebe1607bf19b532c80f5b0851cfd27a292
The behavior is reproducible with the script in the linked issue.
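For readers following along, here is a minimal, hypothetical sketch of what a check like `HoodieAvroUtils.hasTimestampMillisField` does: recursively walk an Avro schema looking for the `timestamp-millis` logical type. This is illustrative Python over the parsed schema JSON, not the actual Scala helper in Hudi.

```python
import json

def has_timestamp_millis_field(schema):
    """Recursively check a parsed Avro schema (JSON) for any field
    carrying the timestamp-millis logical type. Illustrative only --
    the real check lives in HoodieAvroUtils.hasTimestampMillisField."""
    if isinstance(schema, dict):
        if schema.get("logicalType") == "timestamp-millis":
            return True
        # Records nest under "fields"; named/union/array/map types nest
        # under "type", "items", or "values" respectively.
        for key in ("fields", "type", "items", "values"):
            if key in schema and has_timestamp_millis_field(schema[key]):
                return True
        return False
    if isinstance(schema, list):  # union type: check every branch
        return any(has_timestamp_millis_field(s) for s in schema)
    return False  # primitive type name, e.g. "long"

schema = json.loads("""
{
  "type": "record", "name": "rec",
  "fields": [
    {"name": "ts", "type": {"type": "long", "logicalType": "timestamp-millis"}}
  ]
}
""")
print(has_timestamp_millis_field(schema))  # True
```

With a schema whose timestamps are already `timestamp-micros`, the check returns False, which is why it can gate the row-writer path on the millis case specifically.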
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]