[ https://issues.apache.org/jira/browse/FLINK-23613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17393755#comment-17393755 ]

zoucao commented on FLINK-23613:
--------------------------------

Hi [~jark], sorry for the late reply. Most of the changelog streams come from 
mysql-cdc or from Kafka with debezium-json/canal-json; the upstream of Kafka is 
the MySQL binlog or TiDB (TiCDC). The rest comes from an unbounded aggregation, 
but I think that part is trivial.

> debezium and canal support read medata op and type
> --------------------------------------------------
>
>                 Key: FLINK-23613
>                 URL: https://issues.apache.org/jira/browse/FLINK-23613
>             Project: Flink
>          Issue Type: New Feature
>          Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>            Reporter: Ward Harris
>            Priority: Major
>
> In our scenario, two types of database data are delivered to the data 
> warehouse:
>  1. the first type is exactly the same as the online table
>  2. the second type adds two columns to the previous table, representing 
> action_type and action_time respectively, in order to record events
>  To solve this with Flink SQL, it must be possible to read action_type and 
> action_time from the Debezium or Canal metadata. action_time can be read 
> from the ingestion-timestamp metadata, but action_type cannot be read from 
> metadata.
> The database actions are insert/update/delete, while Flink's RowKind has 
> insert/update_before/update_after/delete, so exposing the RowKind as 
> action_type would work better for us. At the same time, Flink needs to 
> change the RowKind to insert when recording into this event table.
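For context, the request above could be sketched as a Flink SQL DDL along these lines. This is a hypothetical sketch: the `value.op` metadata key does not exist in debezium-json today and is exactly what this issue asks for, while `value.ingestion-timestamp` is already exposed by the format; the table, topic, and column names are made up:

```sql
-- Hypothetical sketch of the requested feature, assuming the Kafka connector
-- with the debezium-json format. 'value.ingestion-timestamp' is supported
-- today; 'value.op' is the metadata key this issue proposes to add.
CREATE TABLE event_table (
  id BIGINT,
  name STRING,
  -- supported today: ingestion timestamp from the Debezium envelope
  action_time TIMESTAMP(3) WITH LOCAL TIME ZONE
    METADATA FROM 'value.ingestion-timestamp' VIRTUAL,
  -- requested: the change type (insert/update_before/update_after/delete)
  action_type STRING METADATA FROM 'value.op' VIRTUAL
) WITH (
  'connector' = 'kafka',
  'topic' = 'db.events',                            -- made-up topic
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'debezium-json'
);
```

With such a column, the sink side could record every change event as an ordinary insert row carrying its original RowKind in action_type, which is the "modify RowKind to insert" behavior described above.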



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
