[ https://issues.apache.org/jira/browse/FLINK-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339532#comment-16339532 ]

ASF GitHub Bot commented on FLINK-8240:
---------------------------------------

Github user twalthr commented on the issue:

    https://github.com/apache/flink/pull/5240
  
    Thanks for the feedback @fhueske. I hope I have addressed most of it. I 
think we should merge this PR (if you agree) and follow up with more PRs for 
this issue as the next steps. I suggest the following subtasks:
    
    - Add validation for the CSV format
    - Add full CsvTableSourceFactory support (incl. proctime, rowtime, and 
schema mapping)
    - Add a JSON schema parser to the JSON format and logic for creating a 
table source from it
    - Add validation for the JSON format
    - Add validation for the Rowtime descriptor
    - Add validation for StreamTableDescriptor
    - Add validation for BatchTableDescriptor
    - Add KafkaTableSource factories 
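
    For illustration, such a factory could look roughly like the sketch below. 
The interface and method names are only placeholders for discussion, not the 
final API; the idea is that factories are discovered as services (e.g. via 
java.util.ServiceLoader) and instantiate a table source from a normalized 
string-to-string property map.

        // Sketch for discussion only: the interface and method names are
        // placeholders, not an agreed-upon API.
        import java.util.List;
        import java.util.Map;
        import org.apache.flink.table.sources.TableSource;

        public interface TableSourceFactory<T> {

          // fixed properties that identify this factory,
          // e.g. "connector.type" -> "kafka"
          Map<String, String> requiredContext();

          // all property keys this factory understands, e.g. "format.type"
          List<String> supportedProperties();

          // instantiates the table source from the normalized properties
          TableSource<T> create(Map<String, String> properties);
        }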
    
    What do you think?


> Create unified interfaces to configure and instantiate TableSources
> -------------------------------------------------------------------
>
>                 Key: FLINK-8240
>                 URL: https://issues.apache.org/jira/browse/FLINK-8240
>             Project: Flink
>          Issue Type: New Feature
>          Components: Table API & SQL
>            Reporter: Timo Walther
>            Assignee: Timo Walther
>            Priority: Major
>
> At the moment, every table source has its own way of being configured and 
> instantiated. Some table sources are tailored to a specific encoding (e.g., 
> {{KafkaAvroTableSource}}, {{KafkaJsonTableSource}}) or only support one 
> encoding for reading (e.g., {{CsvTableSource}}). Each of them might implement 
> a builder or support table source converters for external catalogs.
> Table sources should have a unified interface for discovery, definition of 
> common properties, and instantiation. The {{TableSourceConverters}} provide 
> similar functionality but use an external catalog. We might generalize this 
> interface.
> In general, a table source declaration depends on the following parts:
> {code}
> - Source
>   - Type (e.g. Kafka, Custom)
>   - Properties (e.g. topic, connection info)
> - Encoding
>   - Type (e.g. Avro, JSON, CSV)
>   - Schema (e.g. Avro class, JSON field names/types)
> - Rowtime descriptor/Proctime
>   - Watermark strategy and Watermark properties
>   - Time attribute info
> - Bucketization
> {code}
> This issue needs a design document before implementation. Any discussion is 
> very welcome.
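
To make the parts above concrete: a declaration could be normalized into a 
flat string-to-string property map that the factories match on and validate. 
The key names below are purely illustrative, not a fixed naming scheme.

{code}
// Illustrative sketch only: the key names are hypothetical, not a fixed
// naming scheme. Each part of the declaration (source, encoding, rowtime)
// becomes a group of flat string properties.
import java.util.LinkedHashMap;
import java.util.Map;

public class TableSourceDeclarationExample {

  public static void main(String[] args) {
    Map<String, String> props = new LinkedHashMap<>();

    // Source
    props.put("connector.type", "kafka");
    props.put("connector.topic", "sensors");
    props.put("connector.properties.bootstrap.servers", "localhost:9092");

    // Encoding
    props.put("format.type", "json");
    props.put("format.fields.0.name", "id");
    props.put("format.fields.0.type", "LONG");
    props.put("format.fields.1.name", "temperature");
    props.put("format.fields.1.type", "DOUBLE");

    // Rowtime descriptor
    props.put("rowtime.timestamps.type", "from-field");
    props.put("rowtime.timestamps.from", "eventTime");
    props.put("rowtime.watermarks.type", "periodic-bounded");
    props.put("rowtime.watermarks.delay", "2000");

    props.forEach((k, v) -> System.out.println(k + "=" + v));
  }
}
{code}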


