JoyJoyJo commented on issue #12561: URL: https://github.com/apache/hudi/issues/12561#issuecomment-2570670632
BTW, I have encountered a failure before when I used Spark to backfill some historical partition data into a COW table. The table was originally appended to by Flink, and I did not specify a record key. By default, the `_hoodie_record_key` metadata field is filled with the placeholder `__empty__` in the Flink job. However, Spark cannot generate the `_hoodie_record_key` metadata field without a primary key or record key. If the table is not only appended to by Flink but also needs to be processed by Spark or another engine, I think the record key should be required.
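To make the mismatch concrete, here is a minimal sketch of the kind of Spark backfill write that fails for me when no record key is set, and the option that avoids it. The table path, table name, and key column (`uuid`) are hypothetical examples, not taken from the actual job; this is a config sketch, not a verified reproduction.

```python
# Hypothetical PySpark backfill into an existing COW table (names are examples).
# `df` is assumed to be the DataFrame holding the historical partition data.

hudi_options = {
    "hoodie.table.name": "my_cow_table",                  # example table name
    "hoodie.datasource.write.operation": "insert",
    "hoodie.datasource.write.partitionpath.field": "dt",  # example partition column
    # Without this line, the Spark writer cannot populate _hoodie_record_key,
    # while the keyless Flink append path just writes the __empty__ placeholder:
    "hoodie.datasource.write.recordkey.field": "uuid",    # example key column
}

(df.write.format("hudi")
   .options(**hudi_options)
   .mode("append")
   .save("/tmp/hudi/my_cow_table"))  # example base path
```

Specifying the same `recordkey.field` on both the Flink and Spark sides keeps `_hoodie_record_key` consistent across engines.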
