[ 
https://issues.apache.org/jira/browse/FLINK-19149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Knauf updated FLINK-19149:
-------------------------------------
    Description: 
I would like to be able to interpret a compacted Kafka Topic as an upsert stream 
in Apache Flink. Similarly, I would like to be able to write an upsert stream 
to Kafka (into a compacted topic). 

I would like to be able to interpret a compacted Kafka Topic as a versioned 
table without creating an additional view (similar to Debezium/Canal; see 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-132+Temporal+Table+DDL+and+Temporal+Table+Join)
 

In both cases, the (possibly implicit) primary key of the Flink SQL Table would 
need to correspond to the fields that make up the keys of the Kafka records.

A message for an existing key (with a higher offset) corresponds to an update. 
A message for an existing key with a null value is interpreted as a delete.
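The update/delete semantics described above (the latest record per key wins; a null value acts as a tombstone) can be sketched independently of Flink. The following is an illustrative Python sketch of that interpretation, not part of any proposed Flink API; the record values are made up for the example:

```python
def apply_record(state: dict, key, value):
    """Apply one Kafka record to the materialized per-key state.

    A non-null value upserts the row for `key`; a null (None) value
    deletes it. Records are assumed to arrive in offset order, so a
    later record for the same key wins.
    """
    if value is None:
        state.pop(key, None)   # tombstone: delete the row for this key
    else:
        state[key] = value     # insert or update (upsert)
    return state

# Records in offset order: (key, value); None marks a tombstone.
records = [("user1", "a"), ("user2", "b"), ("user1", "c"), ("user2", None)]
state = {}
for k, v in records:
    apply_record(state, k, v)

print(state)  # {'user1': 'c'}
```

After replaying the topic, `state` holds exactly the latest non-deleted value per key, which is the table a reader of a compacted topic should see.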



  was:
I would like to be able to interpret a compacted Kafka Topic as an upsert stream 
in Apache Flink. Similarly, I would like to be able to write an upsert stream 
to Kafka (into a compacted topic). 

I would like to be able to interpret a compacted Kafka Topic as a versioned 
table without an additional view (similar to Debezium/Canal; see 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-132+Temporal+Table+DDL+and+Temporal+Table+Join)
 

In both cases, the (possibly implicit) primary key of the Flink SQL Table would 
need to correspond to the fields that make up the keys of the Kafka records.

A message for an existing key (with a higher offset) corresponds to an update. 
A message for an existing key with a null value is interpreted as a delete.




> Compacted Kafka Topic can be interpreted as Changelog Stream
> ------------------------------------------------------------
>
>                 Key: FLINK-19149
>                 URL: https://issues.apache.org/jira/browse/FLINK-19149
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Kafka
>            Reporter: Konstantin Knauf
>            Priority: Critical
>
> I would like to be able to interpret a compacted Kafka Topic as an upsert 
> stream in Apache Flink. Similarly, I would like to be able to write an upsert 
> stream to Kafka (into a compacted topic). 
> I would like to be able to interpret a compacted Kafka Topic as a versioned 
> table without creating an additional view (similar to Debezium/Canal; see 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-132+Temporal+Table+DDL+and+Temporal+Table+Join)
>  
> In both cases, the (possibly implicit) primary key of the Flink SQL Table 
> would need to correspond to the fields that make up the keys of the Kafka 
> records.
> A message for an existing key (with a higher offset) corresponds to an update. 
> A message for an existing key with a null value is interpreted as a delete.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
