medivh511 commented on issue #2701: URL: https://github.com/apache/paimon/issues/2701#issuecomment-2133783298
> > > @zhangjun0x01 I remember this feature is already supported. Is there any problem now?
> >
> > The JSON emitted by flink-cdc does not contain the field `pkNames`, but the debezium-json parser in the source code requires `pkNames`; otherwise, when synchronizing multiple tables, it throws an exception saying the primary key is missing. When it parses debezium-json, the required fields are as follows:
> >
> > ```java
> > private static final String FIELD_SCHEMA = "schema";
> > private static final String FIELD_PAYLOAD = "payload";
> > private static final String FIELD_BEFORE = "before";
> > private static final String FIELD_AFTER = "after";
> > private static final String FIELD_SOURCE = "source";
> > private static final String FIELD_PRIMARY = "pkNames";
> > private static final String FIELD_DB = "db";
> > private static final String FIELD_TYPE = "op";
> > private static final String OP_INSERT = "c";
> > private static final String OP_UPDATE = "u";
> > private static final String OP_DELETE = "d";
> > private static final String OP_READE = "r";
> > ```
>
> At present, the synchronization mode only supports specifying IDs for a single table. Since a Debezium record's primary key is stored in the Kafka message key, recognizing primary keys across multiple tables requires special compatibility handling.

Your commit describes it like this: "When Debezium's data is written into Kafka, the primary key will be automatically stored in the key. When Paimon parses Kafka messages, the data in the key will be attached to the `pkNames` field in the value. There are some demos in unit testing."

We use Oracle CDC to Kafka. Besides the standard format of the value, what format does the key have? Each `key.converter` setting produces a different format.
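To make the mechanism from the commit message concrete, here is a minimal sketch of attaching the field names found in a Debezium Kafka message key to the value as `pkNames`. The class and method names are hypothetical (not from the Paimon codebase), and JSON payloads are represented as plain `Map`s to keep the example self-contained:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PkNamesFromKeyDemo {

    // Hypothetical helper: Debezium stores the primary-key columns in the
    // Kafka message key, so the key payload's field names are taken as the
    // primary-key names and attached to the value payload under "pkNames".
    static Map<String, Object> attachPkNames(Map<String, Object> keyPayload,
                                             Map<String, Object> valuePayload) {
        List<String> pkNames = new ArrayList<>(keyPayload.keySet());
        valuePayload.put("pkNames", pkNames);
        return valuePayload;
    }

    public static void main(String[] args) {
        // Simulated Debezium key payload: the PK column(s) of the row.
        Map<String, Object> key = new LinkedHashMap<>();
        key.put("id", 42);

        // Simulated Debezium value payload (insert event).
        Map<String, Object> value = new LinkedHashMap<>();
        value.put("op", "c");
        value.put("after", Map.of("id", 42, "name", "a"));

        Map<String, Object> enriched = attachPkNames(key, value);
        System.out.println(enriched.get("pkNames")); // [id]
    }
}
```

Note that this only works if `key.converter` produces a structured key whose field names are the primary-key columns; a raw string or schemaless key would need different handling, which is exactly the compatibility question raised above.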
