KnightChess created HUDI-4348:
---------------------------------
Summary: MERGE INTO can cause data quality issues in concurrent scenarios
Key: HUDI-4348
URL: https://issues.apache.org/jira/browse/HUDI-4348
Project: Apache Hudi
Issue Type: Bug
Components: spark-sql
Reporter: KnightChess
A Hudi table holds 15 billion records and receives about 30 million updated
records every day; after the merge, roughly 1000 records differ from the
corresponding Hive table.
When I set `--executor-cores 1` and `spark.task.cpus=1` there is no problem, but
when the parallelism per executor is greater than 1, the data quality issue appears.
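For context, a minimal sketch of the kind of statement involved. Table and
column names here are hypothetical illustrations, not taken from the report;
the reported behavior is that the same job is correct with one task per
executor but diverges from Hive when an executor runs tasks in parallel.

```sql
-- Hypothetical daily upsert into the Hudi table via Spark SQL.
-- Launch settings from the report (correct case):
--   spark-sql --executor-cores 1 --conf spark.task.cpus=1 ...
-- The mismatch reportedly appears once executor-cores / spark.task.cpus > 1.
MERGE INTO hudi_db.target_table AS t
USING hudi_db.daily_updates AS s
ON t.record_id = s.record_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```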
--
This message was sent by Atlassian Jira
(v8.20.10#820010)