[ https://issues.apache.org/jira/browse/FLINK-19517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17209525#comment-17209525 ]

Kevin Kwon edited comment on FLINK-19517 at 10/7/20, 1:11 PM:
--------------------------------------------------------------

[~jark] thanks for the quick support

First, apologies for not reading through the 1.12 snapshot docs before asking for 
this wishlist item, since Avro Confluent support and schema registry integration 
are already there.
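
For reference, this is roughly the DDL I now expect to work based on the snapshot 
docs. It is only a sketch: the format identifier and the option name 
'avro-confluent.schema-registry.url' are my reading of the docs, and the registry 
URL is a placeholder.

{code:sql}
CREATE TABLE kafkaTable (
  user_id BIGINT,
  item_id BIGINT,
  category_id BIGINT,
  behavior STRING,
  ts TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'user_behavior',
  'properties.bootstrap.servers' = 'localhost:9092',
  'properties.group.id' = 'testGroup',
  -- format options as I understand them from the 1.12 snapshot docs
  'format' = 'avro-confluent',
  'avro-confluent.schema-registry.url' = 'http://schema-registry.com',
  'scan.startup.mode' = 'earliest-offset'
)
{code}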

However, I somehow had trouble setting properties.* via DDL; the options were 
ignored without any warning. I'll check again whether the configuration actually 
gets applied.

If properties.* are fully supported, can we manually enable exactly-once delivery 
by setting *enable.idempotence* and *transactional.id*? This is just out of 
curiosity, to know whether it is possible.
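
For context, this is the kind of sink DDL I have in mind. It is only a sketch under 
the assumption that properties.* keys are forwarded verbatim to the Kafka producer 
config; whether the connector actually honors these keys for exactly-once semantics 
is exactly my question. The table name and transactional id are made up.

{code:sql}
CREATE TABLE kafkaSink (
  user_id BIGINT,
  behavior STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'user_behavior_out',
  'properties.bootstrap.servers' = 'localhost:9092',
  -- assumption: passed through as-is to the underlying Kafka producer
  'properties.enable.idempotence' = 'true',
  'properties.transactional.id' = 'flink-sql-sink-1',
  'format' = 'avro'
)
{code}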

 

Aside from that, it'd be nice if we could add a one-liner to the documentation at 
[https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html] 
noting that other properties are also supported besides 
*properties.bootstrap.servers* and *properties.group.id*, because right now it can 
look as if those are the only properties the connector supports. What is your 
opinion?

 



> Support for Confluent Kafka of Table Creation in Flink SQL Client
> -----------------------------------------------------------------
>
>                 Key: FLINK-19517
>                 URL: https://issues.apache.org/jira/browse/FLINK-19517
>             Project: Flink
>          Issue Type: Wish
>          Components: Connectors / Kafka, Table SQL / Ecosystem
>    Affects Versions: 1.12.0
>            Reporter: Kevin Kwon
>            Priority: Major
>
> Currently, table creation from the SQL client, such as the example below, works well:
> {code:sql}
> CREATE TABLE kafkaTable (
>   user_id BIGINT,
>   item_id BIGINT,
>   category_id BIGINT,
>   behavior STRING,
>   ts TIMESTAMP(3)
> ) WITH (
>   'connector' = 'kafka',
>   'topic' = 'user_behavior',
>   'properties.bootstrap.servers' = 'localhost:9092',
>   'properties.group.id' = 'testGroup',
>   'format' = 'avro',
>   'scan.startup.mode' = 'earliest-offset'
> )
> {code}
> However, I would like table creation to support Confluent Kafka 
> configuration as well, for example something like:
> {code:sql}
> CREATE TABLE kafkaTable (
>   user_id BIGINT,
>   item_id BIGINT,
>   category_id BIGINT,
>   behavior STRING,
>   ts TIMESTAMP(3)
> ) WITH (
>   'connector' = 'confluent-kafka',
>   'topic' = 'user_behavior',
>   'properties.bootstrap.servers' = 'localhost:9092',
>   'properties.group.id' = 'testGroup',
>   'schema-registry' = 'http://schema-registry.com',
>   'scan.startup.mode' = 'earliest-offset'
> )
> {code}
> If this is supported, it will be much more convenient to run the on-the-fly 
> queries that business analysts want to test against Confluent Kafka.
> Additionally, it would be better if we could:
>  - specify 'parallelism' within the WITH clause to support parallel partition 
> processing
>  - specify custom consumer properties within the WITH clause, as listed in 
> [https://docs.confluent.io/5.4.2/installation/configuration/consumer-configs.html] 
> (a sketch follows after this list)
>  - have remote access to the SQL client running in the cluster from a local 
> environment
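>
> As a rough illustration of the custom-properties point above, here is a sketch 
> that assumes the existing properties.* pass-through is the mechanism; the 
> specific keys (max.poll.records, isolation.level) are just example consumer 
> configs taken from the Confluent list:
> {code:sql}
> CREATE TABLE kafkaTable (
>   user_id BIGINT,
>   behavior STRING
> ) WITH (
>   'connector' = 'kafka',
>   'topic' = 'user_behavior',
>   'properties.bootstrap.servers' = 'localhost:9092',
>   'properties.group.id' = 'testGroup',
>   -- assumed pass-through of arbitrary Kafka consumer configs
>   'properties.max.poll.records' = '1000',
>   'properties.isolation.level' = 'read_committed',
>   'format' = 'avro',
>   'scan.startup.mode' = 'earliest-offset'
> )
> {code}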


