[ https://issues.apache.org/jira/browse/FLINK-23613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17393551#comment-17393551 ]

Jark Wu commented on FLINK-23613:
---------------------------------

>  If I have a changelog stream and want to do some aggregation by 'op', I have 
> to sink the changelog stream to Kafka in JSON format first, or resort to some 
> tricky workaround.

Hi [~zoucao], where does the "changelog stream" come from? Is it from a 
"debezium-json" source, a "mysql-cdc" source, or an unbounded aggregation? 

> debezium and canal support reading metadata op and type
> -------------------------------------------------------
>
>                 Key: FLINK-23613
>                 URL: https://issues.apache.org/jira/browse/FLINK-23613
>             Project: Flink
>          Issue Type: New Feature
>          Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>            Reporter: Ward Harris
>            Priority: Major
>
> In our scenario, two kinds of database data are delivered to the data 
> warehouse:
>  1. the first kind is exactly the same as the online table;
>  2. the second kind adds two columns on top of the previous table, 
> action_type and action_time, which are used to record change events.
>  To solve this requirement with Flink SQL, it must be possible to read 
> action_type and action_time from the Debezium or Canal metadata. action_time 
> can already be read from the ingestion-timestamp metadata, but action_type 
> cannot currently be read from metadata at all.
> The database actions are insert/update/delete, while Flink's RowKind 
> distinguishes insert/update_before/update_after/delete, so exposing the 
> RowKind as action_type would work better for us. At the same time, Flink 
> needs to change the RowKind to insert when writing records into this event 
> table.
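
A minimal sketch of what the requested feature could look like in DDL. Only 
'ingestion-timestamp' is an existing debezium-json metadata key today; the 
'op' metadata key below is the proposal in this ticket, not something that 
works yet, and the table and column names are only examples:

CREATE TABLE user_events (
  user_id     BIGINT,
  user_name   STRING,
  -- existing metadata key of the debezium-json format
  action_time TIMESTAMP_LTZ(3) METADATA FROM 'ingestion-timestamp' VIRTUAL,
  -- proposed metadata key: would expose the change kind (RowKind) per record
  action_type STRING METADATA FROM 'op' VIRTUAL
) WITH (
  'connector' = 'kafka',
  'topic'     = 'users',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format'    = 'debezium-json'
);

As described above, records written into such an "event table" would also need 
their RowKind changed to insert so that every change is kept as a separate row.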



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
