chenhaonan313 opened a new issue, #2929:
URL: https://github.com/apache/incubator-streampark/issues/2929

   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/incubator-streampark/issues?q=is%3Aissue+label%3A%22bug%22)
 and found no similar issues.
   
   
   ### Java Version
   
   1.8
   
   ### Scala Version
   
   2.12.x
   
   ### StreamPark Version
   
   2.1.1
   
   ### Flink Version
   
   1.14.5
   
   ### deploy mode
   
   remote
   
   ### What happened
   
   Following the official website, I deployed StreamPark with Docker and then ran the Quick Start example, but the job submission failed with the error below. How can I resolve this?
   
   ### Error Exception
   
   ```log
   java.util.concurrent.CompletionException: 
java.lang.reflect.InvocationTargetException
        at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
        at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
        at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
   Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.apache.streampark.flink.client.FlinkClient$.$anonfun$proxy$1(FlinkClient.scala:80)
        at 
org.apache.streampark.flink.proxy.FlinkShimsProxy$.$anonfun$proxy$1(FlinkShimsProxy.scala:60)
        at 
org.apache.streampark.common.util.ClassLoaderUtils$.runAsClassLoader(ClassLoaderUtils.scala:38)
        at 
org.apache.streampark.flink.proxy.FlinkShimsProxy$.proxy(FlinkShimsProxy.scala:60)
        at 
org.apache.streampark.flink.client.FlinkClient$.proxy(FlinkClient.scala:75)
        at 
org.apache.streampark.flink.client.FlinkClient$.submit(FlinkClient.scala:49)
        at 
org.apache.streampark.flink.client.FlinkClient.submit(FlinkClient.scala)
        at 
org.apache.streampark.console.core.service.impl.ApplicationServiceImpl.lambda$start$10(ApplicationServiceImpl.java:1544)
        at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
        ... 3 more
   Caused by: org.apache.flink.client.program.ProgramInvocationException: The 
main method caused an error: findAndCreateTableSource failed.
        at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
        at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
        at 
org.apache.flink.client.program.PackagedProgramUtils.getPipelineFromProgram(PackagedProgramUtils.java:158)
        at 
org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:82)
        at 
org.apache.streampark.flink.client.trait.FlinkClientTrait.getJobGraph(FlinkClientTrait.scala:242)
        at 
org.apache.streampark.flink.client.trait.FlinkClientTrait.getJobGraph$(FlinkClientTrait.scala:222)
        at 
org.apache.streampark.flink.client.impl.RemoteClient$.jobGraphSubmit(RemoteClient.scala:147)
        at 
org.apache.streampark.flink.client.impl.RemoteClient$.$anonfun$doSubmit$2(RemoteClient.scala:49)
        at 
org.apache.streampark.flink.client.trait.FlinkClientTrait.$anonfun$trySubmit$5(FlinkClientTrait.scala:212)
        at scala.util.Try$.apply(Try.scala:209)
        at 
org.apache.streampark.flink.client.trait.FlinkClientTrait.$anonfun$trySubmit$3(FlinkClientTrait.scala:212)
        at scala.util.Failure.getOrElse(Try.scala:218)
        at 
org.apache.streampark.flink.client.trait.FlinkClientTrait.trySubmit(FlinkClientTrait.scala:210)
        at 
org.apache.streampark.flink.client.trait.FlinkClientTrait.trySubmit$(FlinkClientTrait.scala:203)
        at 
org.apache.streampark.flink.client.impl.RemoteClient$.doSubmit(RemoteClient.scala:49)
        at 
org.apache.streampark.flink.client.trait.FlinkClientTrait.submit(FlinkClientTrait.scala:125)
        at 
org.apache.streampark.flink.client.trait.FlinkClientTrait.submit$(FlinkClientTrait.scala:62)
        at 
org.apache.streampark.flink.client.impl.RemoteClient$.submit(RemoteClient.scala:36)
        at 
org.apache.streampark.flink.client.FlinkClientHandler$.submit(FlinkClientHandler.scala:40)
        at 
org.apache.streampark.flink.client.FlinkClientHandler.submit(FlinkClientHandler.scala)
        ... 16 more
   Caused by: org.apache.flink.table.api.TableException: 
findAndCreateTableSource failed.
        at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:45)
        at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:76)
        at 
org.apache.flink.table.planner.plan.schema.LegacyCatalogSourceTable.findAndCreateLegacyTableSource(LegacyCatalogSourceTable.scala:187)
        at 
org.apache.flink.table.planner.plan.schema.LegacyCatalogSourceTable.toRel(LegacyCatalogSourceTable.scala:96)
        at 
org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3585)
        at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2507)
        at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2144)
        at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2093)
        at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2050)
        at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:663)
        at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:644)
        at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3438)
        at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:570)
        at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:177)
        at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:169)
        at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:1057)
        at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:1026)
        at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:301)
        at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:639)
        at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:290)
        at 
org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
        at 
org.apache.flink.table.api.internal.StatementSetImpl.addInsertSql(StatementSetImpl.java:53)
        at 
org.apache.flink.table.api.bridge.scala.internal.StreamStatementSetImpl.addInsertSql(StreamStatementSetImpl.scala:33)
        at 
org.apache.flink.table.api.bridge.scala.internal.StreamStatementSetImpl.addInsertSql(StreamStatementSetImpl.scala:28)
        at 
org.apache.streampark.flink.core.FlinkSqlExecutor$.$anonfun$executeSql$3(FlinkSqlExecutor.scala:118)
        at 
org.apache.streampark.flink.core.FlinkSqlExecutor$.$anonfun$executeSql$3$adapted(FlinkSqlExecutor.scala:57)
        at scala.collection.immutable.List.foreach(List.scala:388)
        at 
org.apache.streampark.flink.core.FlinkSqlExecutor$.executeSql(FlinkSqlExecutor.scala:57)
        at 
org.apache.streampark.flink.core.FlinkStreamTableTrait.sql(FlinkStreamTableTrait.scala:87)
        at 
org.apache.streampark.flink.cli.SqlClient$StreamSqlApp$.handle(SqlClient.scala:69)
        at 
org.apache.streampark.flink.core.scala.FlinkStreamTable.main(FlinkStreamTable.scala:48)
        at 
org.apache.streampark.flink.core.scala.FlinkStreamTable.main$(FlinkStreamTable.scala:45)
        at 
org.apache.streampark.flink.cli.SqlClient$StreamSqlApp$.main(SqlClient.scala:68)
        at 
org.apache.streampark.flink.cli.SqlClient$.delayedEndpoint$org$apache$streampark$flink$cli$SqlClient$1(SqlClient.scala:58)
        at 
org.apache.streampark.flink.cli.SqlClient$delayedInit$body.apply(SqlClient.scala:31)
        at scala.Function0.apply$mcV$sp(Function0.scala:34)
        at scala.Function0.apply$mcV$sp$(Function0.scala:34)
        at 
scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
        at scala.App.$anonfun$main$1$adapted(App.scala:76)
        at scala.collection.immutable.List.foreach(List.scala:388)
        at scala.App.main(App.scala:76)
        at scala.App.main$(App.scala:74)
        at org.apache.streampark.flink.cli.SqlClient$.main(SqlClient.scala:31)
        at org.apache.streampark.flink.cli.SqlClient.main(SqlClient.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
        ... 35 more
   Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could 
not find a suitable table factory for 
'org.apache.flink.table.factories.TableSourceFactory' in
   the classpath.
   
   Reason: Required context properties mismatch.
   
   The matching candidates:
   org.apache.flink.table.sources.CsvAppendTableSourceFactory
   Mismatched properties:
   'connector.type' expects 'filesystem', but is 'kafka'
   
   The following properties are requested:
   connector.properties.bootstrap.servers=10.100.0.46:6667
   connector.startup-mode=earliest-offset
   connector.topic=user_behavior
   connector.type=kafka
   connector.version=universal
   format.derive-schema=true
   format.type=csv
   schema.0.data-type=VARCHAR(2147483647)
   schema.0.name=user_id
   schema.1.data-type=VARCHAR(2147483647)
   schema.1.name=item_id
   schema.2.data-type=VARCHAR(2147483647)
   schema.2.name=category_id
   schema.3.data-type=VARCHAR(2147483647)
   schema.3.name=behavior
   schema.4.data-type=TIMESTAMP(3)
   schema.4.name=ts
   update-mode=append
   
   The following factories have been considered:
   org.apache.flink.connector.jdbc.table.JdbcTableSourceSinkFactory
   org.apache.flink.table.sources.CsvBatchTableSourceFactory
   org.apache.flink.table.sources.CsvAppendTableSourceFactory
        at 
org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:315)
        at 
org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:193)
        at 
org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:154)
        at 
org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:108)
        at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:41)
        ... 83 more
   ```
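   
   For context, the root cause (`NoMatchingTableFactoryException`) says no Kafka `TableSourceFactory` is on the classpath: only the CSV and JDBC factories were considered. The "requested properties" in the log correspond to a legacy-style (pre-Flink-1.11 `connector.*`) source definition roughly like the sketch below. This is a hypothetical reconstruction from the logged properties, not the exact quick-start SQL:
   
   ```sql
   -- Reconstructed from the "requested properties" above; names and values
   -- are taken from the log, the exact quick-start DDL may differ.
   CREATE TABLE user_behavior (
     user_id     STRING,
     item_id     STRING,
     category_id STRING,
     behavior    STRING,
     ts          TIMESTAMP(3)
   ) WITH (
     -- legacy 'connector.type' keys trigger the old TableSourceFactory lookup
     'connector.type' = 'kafka',
     'connector.version' = 'universal',
     'connector.topic' = 'user_behavior',
     'connector.properties.bootstrap.servers' = '10.100.0.46:6667',
     'connector.startup-mode' = 'earliest-offset',
     'format.type' = 'csv',
     'format.derive-schema' = 'true',
     'update-mode' = 'append'
   );
   ```
   
   A definition in this legacy style needs a matching Kafka table factory on the Flink classpath (e.g. the `flink-sql-connector-kafka` jar for Flink 1.14); its absence from the "factories have been considered" list is consistent with the error.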
   
   
   ### Screenshots
   
   <img width="608" alt="40ce656c79dc6aba2b26a6664352be1" src="https://github.com/apache/incubator-streampark/assets/79635793/cdbcee84-6614-48a5-a251-ad7fbd1a6856">
   <img width="647" alt="7e35ef93098243de1dfeb4b2ec4284f" src="https://github.com/apache/incubator-streampark/assets/79635793/3ffb7482-5a88-4677-ab65-8478ce6114d0">
   <img width="618" alt="00c88b60a28797c1406ee6e42f3a97c" src="https://github.com/apache/incubator-streampark/assets/79635793/b720a86f-6f62-430e-b57d-68f7fb6ca76d">
   
   
   ### Are you willing to submit PR?
   
   - [X] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

