blackcheckren commented on issue #9748:
URL: https://github.com/apache/hudi/issues/9748#issuecomment-1762783089

   @ad1happy2go With help from friends in the Hudi technical communication 
group, I have located the problem. It occurs when Spark TimestampType data is 
written to the Hudi table's parquet files, which causes data corruption and 
loss. Casting the column to string before writing avoids the problem; two 
other users have verified this workaround. But I still have a question: the 
problem only appears with the bulk_insert operation, not with insert. Do these 
two operation types write data to files differently? I am not familiar with 
the source code, so I hope to get your reply. Thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
