[ 
https://issues.apache.org/jira/browse/FLINK-20445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241644#comment-17241644
 ] 

Timo Walther commented on FLINK-20445:
--------------------------------------

[~15652768278] Only legacy connectors (declared via `connector.type=kafka` instead of 
`connector=kafka`) are supported in the YAML environment file at the moment. The 
`datagen` connector only ships a new-style factory, so it cannot be declared there. 
You can use regular DDL instead.
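
For example, the `SourceTable` from the YAML above can be created in the SQL Client 
session with regular DDL instead (a sketch using the Flink 1.11 `datagen` connector 
options; the table and field names are taken from the YAML and can be adjusted):

{code:sql}
CREATE TABLE SourceTable (
  f_sequence INT,
  f_random INT,
  f_random_str STRING
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '5',
  'fields.f_sequence.kind' = 'sequence',
  'fields.f_sequence.start' = '1',
  'fields.f_sequence.end' = '1000',
  'fields.f_random.min' = '1',
  'fields.f_random.max' = '1000',
  'fields.f_random_str.length' = '10'
);
{code}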

> NoMatchingTableFactoryException
> -------------------------------
>
>                 Key: FLINK-20445
>                 URL: https://issues.apache.org/jira/browse/FLINK-20445
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Client
>    Affects Versions: 1.11.2
>         Environment: * Version:1.11.2
>            Reporter: Ke Li
>            Priority: Major
>
> When I start the SQL Client with a YAML environment file, an error is reported. 
> The command is as follows:
> {code:bash}
> ./sql-client.sh embedded -e /root/flink-sql-client/sql-client-demo.yml
> {code}
> sql-client-demo.yml:
> {code:yaml}
> tables:
>   - name: SourceTable
>     type: source-table
>     update-mode: append
>     connector:
>       type: datagen
>       rows-per-second: 5
>       fields:
>         f_sequence:
>           kind: sequence
>           start: 1
>           end: 1000
>         f_random:
>           min: 1
>           max: 1000
>         f_random_str:
>           length: 10
>     schema:
>       - name: f_sequence
>         data-type: INT
>       - name: f_random
>         data-type: INT
>       - name: f_random_str
>         data-type: STRING
> {code}
> The error is as follows:
> {code:java}
> No default environment specified.
> Searching for '/data/data_gas/flink/flink-1.11.2/conf/sql-client-defaults.yaml'...found.
> Reading default environment from: file:/data/data_gas/flink/flink-1.11.2/conf/sql-client-defaults.yaml
> Reading session environment from: file:/root/flink-sql-client/sql-client-demo.yml
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: Unexpected exception. This is a bug. Please consider filing an issue.
> 	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:213)
> Caused by: org.apache.flink.table.client.gateway.SqlExecutionException: Could not create execution context.
> 	at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:870)
> 	at org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:227)
> 	at org.apache.flink.table.client.SqlClient.start(SqlClient.java:108)
> 	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath.
> Reason: Required context properties mismatch.
> The matching candidates:
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> Mismatched properties:
> 'connector.type' expects 'filesystem', but is 'datagen'
> 'format.type' expects 'csv', but is 'json'
> The following properties are requested:
> connector.fields.f_random.max=1000
> connector.fields.f_random.min=1
> connector.fields.f_random_str.length=10
> connector.fields.f_sequence.end=1000
> connector.fields.f_sequence.kind=sequence
> connector.fields.f_sequence.start=1
> connector.rows-per-second=5
> connector.type=datagen
> format.type=json
> schema.0.data-type=INT
> schema.0.name=f_sequence
> schema.1.data-type=INT
> schema.1.name=f_random
> schema.2.data-type=STRING
> schema.2.name=f_random_str
> update-mode=append
> The following factories have been considered:
> org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
> org.apache.flink.connector.jdbc.table.JdbcTableSourceSinkFactory
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.filesystem.FileSystemTableFactory
> 	at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
> 	at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
> 	at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
> 	at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:113)
> 	at org.apache.flink.table.client.gateway.local.ExecutionContext.createTableSource(ExecutionContext.java:384)
> 	at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$initializeCatalogs$7(ExecutionContext.java:638)
> 	at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
> 	at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeCatalogs(ExecutionContext.java:636)
> 	at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeTableEnvironment(ExecutionContext.java:523)
> 	at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:183)
> 	at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:136)
> 	at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:859)
> 	... 3 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
