[ https://issues.apache.org/jira/browse/FLINK-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833576#comment-16833576 ]
Dawid Wysakowicz commented on FLINK-12256:
------------------------------------------
A few comments from just a quick skim over the document:
What I would really like to see in the design document is a more in-depth
discussion of the mapping between {{topic <> subject}}, both for reading and
for writing. Some problems, off the top of my head, that we should solve:
* what happens if the schema id in the record does not correspond to the
subject name derived from the catalog, and how do we check for that (see the
sketch after this list)
* which part is responsible for creating entries in the catalog
* how do we store the information whether the stream is an append stream or a
changelog
* how do we define schema for key and value of a Kafka message?
* the JIRA issue claims you want to implement only ReadableCatalog, but you
also discuss creating entries in the Schema Registry; this looks to me like a
design flaw. We should really think about where and how we create Tables.
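To make the first point concrete, here is a minimal sketch of how such a check
could look, assuming the Confluent wire format (magic byte followed by a 4-byte
big-endian schema id), the default TopicNameStrategy subject naming
({{<topic>-value}}), and Confluent's {{SchemaRegistryClient}}; the class and
helper names here are made up for illustration:
{code:java}
import java.nio.ByteBuffer;

import io.confluent.kafka.schemaregistry.client.SchemaMetadata;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;

public final class SubjectCheck {

    // Confluent wire format: magic byte 0x0 followed by a 4-byte big-endian schema id.
    static int extractSchemaId(byte[] encodedValue) {
        ByteBuffer buf = ByteBuffer.wrap(encodedValue);
        if (buf.get() != 0x0) {
            throw new IllegalArgumentException("Not Confluent-encoded: unknown magic byte");
        }
        return buf.getInt();
    }

    // Does the schema id embedded in the record belong to the subject that the
    // catalog derives from the topic name (TopicNameStrategy: "<topic>-value")?
    static boolean recordMatchesSubject(
            SchemaRegistryClient client, String topic, byte[] value) throws Exception {
        int recordSchemaId = extractSchemaId(value);
        SchemaMetadata latest = client.getLatestSchemaMetadata(topic + "-value");
        return latest.getId() == recordSchemaId;
    }
}
{code}
Note this only compares against the latest version; a real check would probably
have to walk all registered versions of the subject.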
> Implement KafkaReadableCatalog
> ------------------------------
>
> Key: FLINK-12256
> URL: https://issues.apache.org/jira/browse/FLINK-12256
> Project: Flink
> Issue Type: New Feature
> Components: Connectors / Kafka, Table SQL / Client
> Affects Versions: 1.9.0
> Reporter: Artsem Semianenka
> Assignee: Artsem Semianenka
> Priority: Major
>
> KafkaReadableCatalog is a special implementation of the ReadableCatalog
> interface (introduced in
> [FLIP-30|https://cwiki.apache.org/confluence/display/FLINK/FLIP-30%3A+Unified+Catalog+APIs])
> that retrieves meta information, such as topic names and their schemas, from
> Apache Kafka and the Confluent Schema Registry.
> The new ReadableCatalog allows a user to run SQL queries like:
> {code:sql}
> SELECT * FROM kafka.topic_name
> {code}
> without the need for manual definition of the table schema.
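For concreteness, a minimal sketch of how such a catalog could expose topics as
tables and derive a table's schema from the registry, assuming Kafka's
{{AdminClient}} and Confluent's {{CachedSchemaRegistryClient}}; the class and
method names are illustrative only and do not follow the exact FLIP-30
interface:
{code:java}
import java.util.Properties;
import java.util.Set;

import org.apache.avro.Schema;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;

/** Hypothetical read-only catalog backed by Kafka and the Schema Registry. */
public class KafkaReadableCatalogSketch {

    private final AdminClient admin;
    private final CachedSchemaRegistryClient registry;

    public KafkaReadableCatalogSketch(String bootstrapServers, String registryUrl) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        this.admin = AdminClient.create(props);
        this.registry = new CachedSchemaRegistryClient(registryUrl, 100);
    }

    /** Every Kafka topic is exposed as a table. */
    public Set<String> listTables() throws Exception {
        return admin.listTopics().names().get();
    }

    /** Derive the table schema from the latest value schema of "<topic>-value". */
    public Schema getTableSchema(String topic) throws Exception {
        String avro = registry.getLatestSchemaMetadata(topic + "-value").getSchema();
        return new Schema.Parser().parse(avro);
    }
}
{code}
This only resolves the value schema; how the key schema maps to table columns
is exactly one of the open questions raised above.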