punish-yh opened a new issue, #9587: URL: https://github.com/apache/hudi/issues/9587
**_Tips before filing an issue_**

- Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
- Join the mailing list to engage in conversations and get faster support at [email protected].
- If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.

**Describe the problem you faced**

When I ingest data in bulk_insert mode, the `hoodie.datasource.write.keygenerator.class` configuration does not take effect. Here is the Hudi table DDL:

```sql
CREATE TEMPORARY TABLE IF NOT EXISTS table_name (
  `eid` STRING,
  `_time` BIGINT,
  `_db` STRING,
  `_table` STRING,
  PRIMARY KEY (eid) NOT ENFORCED
)
PARTITIONED BY (`_db`, `_table`)
WITH (
  'connector' = 'hudi',
  'path' = 'xxxx',
  'hoodie.datasource.write.keygenerator.class' = 'org.apache.hudi.keygen.ComplexAvroKeyGenerator',
  'hoodie.datasource.write.recordkey.field' = 'eid',
  'hoodie.datasource.write.partitionpath.field' = '_db,_table',
  'write.precombine' = 'true',
  'write.precombine.field' = '_time',
  'write.tasks' = '1',
  'write.bucket_assign.tasks' = '1',
  'write.operation' = 'bulk_insert',
  'changelog.enabled' = 'true',
  'metadata.enabled' = 'false',
  'compaction.async.enabled' = 'false',
  'index.type' = 'BUCKET',
  'hoodie.bucket.index.hash.field' = 'eid',
  'hoodie.bucket.index.num.buckets' = '4',
  'table.type' = 'MERGE_ON_READ',
  'hoodie.parquet.compression.codec' = 'zstd'
)
```

I then start a Flink batch job that writes records to this table. The job completes successfully with no exception. But when I select from the table including the metadata fields, I find that `_hoodie_record_key` contains only the value, with no key prefix. With the same configuration except `write.operation` = 'upsert', `_hoodie_record_key` is in `key:value` format.

The following figure shows the specific query results:

*(screenshot of the query output, attached in the original issue)*

Is this a problem with the bulk_insert job or with the upsert job?

**To Reproduce**

Steps to reproduce the behavior:

1. Create the Hudi sink table with the SQL above.
2. Ingest data into the Hudi table.
3. Select the data, including the metadata fields.

**Expected behavior**

`_hoodie_record_key` should have the same `key:value` format under bulk_insert as it does under upsert, since both use the same key generator configuration.

**Environment Description**

* Hudi version : 0.12.3
* Flink version : 1.15.4
* Spark version : no
* Hive version : no
* Hadoop version : 6.3.2
* Storage (HDFS/S3/GCS..) : HDFS
* Running on Docker? (yes/no) : no

**Additional context**

None.

**Stacktrace**

None; the write job completes without any exception.
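For anyone reproducing this, below is a minimal sketch of the query that surfaces the difference, assuming the table above holds a single row with `eid = 'e001'` (the row value is hypothetical, for illustration only):

```sql
-- Read back the Hudi metadata key column alongside the record key field.
SELECT `_hoodie_record_key`, eid, `_db`, `_table`
FROM table_name;

-- Reported with 'write.operation' = 'bulk_insert' (hypothetical row):
--   _hoodie_record_key = 'e001'       -- raw value only; key generator config ignored
-- Reported with 'write.operation' = 'upsert':
--   _hoodie_record_key = 'eid:e001'   -- ComplexAvroKeyGenerator's key:value format
```

The `eid:e001` form matches what ComplexAvroKeyGenerator is expected to produce (`field:value`, comma-separated when there are multiple record key fields), which is why the bare `e001` under bulk_insert suggests the configured key generator class is not being applied on that path.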
