jmnatzaganian opened a new issue, #9742: URL: https://github.com/apache/hudi/issues/9742
**Describe the problem you faced**

When using the bucket index in insert-only mode, data is written only when it targets a new file group; new records that target a pre-existing file group are not inserted. The settings used to define the insert-only operation are below, with the expectation that a record whose key already exists will not be inserted, while records with new keys will be:

```
"hoodie.datasource.write.operation": "insert",
"hoodie.sql.insert.mode": "strict",
"hoodie.datasource.write.insert.drop.duplicates": True,
"hoodie.datasource.write.payload.class": "org.apache.hudi.common.model.DefaultHoodieRecordPayload",
"hoodie.merge.allow.duplicate.on.inserts": False,
"hoodie.combine.before.insert": True,
"hoodie.payload.ordering.field": "ts",
```

**To Reproduce**

See the attached script [hudi_bucket_ix_issue.py](https://github.com/apache/hudi/files/12650059/hudi_bucket_ix_issue.py.txt). Its output is [here](https://github.com/apache/hudi/files/12650057/script_output.txt).

**Expected behavior**

See the attached script. The simple index shows the expected behavior: duplicates are dropped and new records are inserted.

**Environment Description**

* Hudi version: 0.13.1
* Spark version: 3.1.1
* Hive version: N/A
* Hadoop version:
* Storage (HDFS/S3/GCS..): Local
* Running on Docker? (yes/no): No
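For reference, a minimal sketch of the write configuration that triggers the reported behavior. The insert-mode settings are the ones listed in the issue; the table name, record key field, and bucket count are hypothetical placeholders added to make the example self-contained:

```python
# Sketch of the write options for an insert-only write with the bucket index.
# Table name, record key, and bucket count are illustrative assumptions,
# not taken from the attached reproduction script.
write_options = {
    "hoodie.table.name": "bucket_ix_demo",            # hypothetical
    "hoodie.datasource.write.recordkey.field": "id",  # hypothetical
    "hoodie.datasource.write.precombine.field": "ts",
    "hoodie.index.type": "BUCKET",                    # the index under test
    "hoodie.bucket.index.num.buckets": "4",           # hypothetical
    # Insert-only settings from the issue:
    "hoodie.datasource.write.operation": "insert",
    "hoodie.sql.insert.mode": "strict",
    "hoodie.datasource.write.insert.drop.duplicates": "true",
    "hoodie.datasource.write.payload.class": "org.apache.hudi.common.model.DefaultHoodieRecordPayload",
    "hoodie.merge.allow.duplicate.on.inserts": "false",
    "hoodie.combine.before.insert": "true",
    "hoodie.payload.ordering.field": "ts",
}

# With a SparkSession that has the Hudi bundle on its classpath, the write
# itself would look like:
#   df.write.format("hudi").options(**write_options).mode("append").save(path)
```

Per the report, running two such writes with overlapping keys drops the second batch entirely for keys hashed to existing file groups, whereas with `"hoodie.index.type": "SIMPLE"` only the duplicate keys are dropped and new keys are inserted.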
