[ 
https://issues.apache.org/jira/browse/FLINK-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chun111111 updated FLINK-14667:
-------------------------------
    Description: 
[root@mj flink-1.9.1]# ./bin/flink run -m yarn-cluster 
/root/cp19/streaming-view-0.0.1-SNAPSHOT-jar-with-dependencies.jar

2019-11-07 16:48:57,616 INFO  org.apache.hadoop.yarn.client.RMProxy             
            - Connecting to ResourceManager at /0.0.0.0:8032

2019-11-07 16:48:57,789 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli     
            - No path for the flink jar passed. Using the location of class 
org.apache.flink.yarn.YarnClusterDescriptor to locate the jar

2019-11-07 16:48:57,789 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli     
            - No path for the flink jar passed. Using the location of class 
org.apache.flink.yarn.YarnClusterDescriptor to locate the jar

2019-11-07 16:48:57,986 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Cluster 
specification: ClusterSpecification{masterMemoryMB=1024, 
taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}

2019-11-07 16:48:58,657 WARN  
org.apache.flink.yarn.AbstractYarnClusterDescriptor           - The 
configuration directory ('/opt/app/flink-1.9.1/conf') contains both LOG4J and 
Logback configuration files. Please delete or rename one of them.

2019-11-07 16:49:00,954 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Submitting 
application master application_1573090964983_0039

2019-11-07 16:49:00,986 INFO  
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl         - Submitted 
application application_1573090964983_0039

2019-11-07 16:49:00,986 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Waiting for the 
cluster to be allocated

2019-11-07 16:49:00,988 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Deploying 
cluster, current state ACCEPTED

2019-11-07 16:49:06,534 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor           - YARN 
application has been deployed successfully.

Starting execution of program

 

------------------------------------------------------------

 The program finished with the following exception:

 

org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: findAndCreateTableSource failed.

       at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)

       at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)

       at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)

       at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)

       at 
org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)

       at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)

       at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)

       at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)

       at java.security.AccessController.doPrivileged(Native Method)

       at javax.security.auth.Subject.doAs(Subject.java:422)

       at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)

       at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)

       at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)

Caused by: org.apache.flink.table.api.TableException: findAndCreateTableSource 
failed.

       at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:67)

       at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:54)

       at 
org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.java:69)

       at 
com.streaming.activity.task.PaymentViewIndex.registerKafkaTable(PaymentViewIndex.java:205)

       at 
com.streaming.activity.task.PaymentViewIndex.registerSourceTable(PaymentViewIndex.java:106)

       at 
com.streaming.activity.task.PaymentViewIndex.main(PaymentViewIndex.java:68)

       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

       at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

       at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

       at java.lang.reflect.Method.invoke(Method.java:498)

       at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)

       ... 12 more

{color:#de350b}*Caused by: 
org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
suitable table factory for 
'org.apache.flink.table.factories.TableSourceFactory' in*{color}

{color:#de350b}*the classpath.*{color}

 

Reason: No context matches.

 

The following properties are requested:

connector.properties.0.key=key.deserializer

connector.properties.0.value=org.apache.kafka.common.serialization.StringDeserializer

connector.properties.1.key=value.deserializer

connector.properties.1.value=org.apache.kafka.common.serialization.StringDeserializer

connector.properties.2.key=group.id

connector.properties.2.value=stream_01

connector.properties.3.key=bootstrap.servers

connector.properties.3.value=192.168.163.129:9092

connector.property-version=1

connector.startup-mode=latest-offset

connector.topic=ac

connector.type=kafka

connector.version=0.11

format.derive-schema=true

format.fail-on-missing-field=false

format.property-version=1

format.type=json

schema.0.name=rowtime

schema.0.rowtime.timestamps.from=timestamp

schema.0.rowtime.timestamps.type=from-field

schema.0.rowtime.watermarks.delay=60000

schema.0.rowtime.watermarks.type=periodic-bounded

schema.0.type=TIMESTAMP

schema.1.name=proctime

schema.1.proctime=true

schema.1.type=TIMESTAMP

schema.2.name=tradeNo

schema.2.type=VARCHAR

update-mode=append
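
For context, the property set requested above is what the Flink 1.9 descriptor API serializes a Kafka/JSON table source into. A minimal sketch of a registration that would produce an equivalent property set (class and table names are hypothetical; the reporter's PaymentViewIndex.registerKafkaTable presumably looks similar, and it needs flink-connector-kafka-0.11 and flink-json on the classpath):

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.table.descriptors.Json;
import org.apache.flink.table.descriptors.Kafka;
import org.apache.flink.table.descriptors.Rowtime;
import org.apache.flink.table.descriptors.Schema;

public class RegisterKafkaTableSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Mirrors the requested properties from the log above.
        tableEnv.connect(new Kafka()
                .version("0.11")                                   // connector.version=0.11
                .topic("ac")                                       // connector.topic=ac
                .property("bootstrap.servers", "192.168.163.129:9092")
                .property("group.id", "stream_01")
                .startFromLatest())                                // connector.startup-mode=latest-offset
            .withFormat(new Json()
                .failOnMissingField(false)
                .deriveSchema())
            .withSchema(new Schema()
                .field("rowtime", Types.SQL_TIMESTAMP)
                    .rowtime(new Rowtime()
                        .timestampsFromField("timestamp")
                        .watermarksPeriodicBounded(60000))
                .field("proctime", Types.SQL_TIMESTAMP).proctime()
                .field("tradeNo", Types.STRING))
            .inAppendMode()
            .registerTableSource("payment_source");                // table name is hypothetical
    }
}
```

The exception is thrown while matching these properties against the factories on the classpath, which is why the list of considered factories below matters.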

 

The following factories have been considered:

org.apache.flink.table.catalog.GenericInMemoryCatalogFactory

org.apache.flink.table.sources.CsvBatchTableSourceFactory

org.apache.flink.table.sources.CsvAppendTableSourceFactory

org.apache.flink.table.sinks.CsvBatchTableSinkFactory

org.apache.flink.table.sinks.CsvAppendTableSinkFactory

org.apache.flink.table.planner.StreamPlannerFactory

org.apache.flink.table.executor.StreamExecutorFactory

org.apache.flink.table.planner.delegation.BlinkPlannerFactory

org.apache.flink.table.planner.delegation.BlinkExecutorFactory

org.apache.flink.formats.json.JsonRowFormatFactory

       at 
org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:283)

       at 
org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:191)

       at 
org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:144)

       at 
org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:97)

       at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:64)

       ... 22 more

[root@mj flink-1.9.1]#
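
Note that no Kafka factory appears in the "factories considered" list, even though the fat jar bundles the connector. Flink's TableFactoryService discovers factories with java.util.ServiceLoader via META-INF/services/org.apache.flink.table.factories.TableFactory files. A common cause of this symptom is building the fat jar without merging those service files (e.g. maven-shade-plugin without its ServicesResourceTransformer), so that only one jar's copy survives and the Kafka factory entry is dropped. A stdlib-only sketch (hypothetical class name) to inspect which entries actually made it into a jar, run with that jar on the classpath:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;

public class TableFactoryServiceCheck {

    /** Collects the non-comment lines of every copy of a service file on the classpath. */
    static List<String> serviceEntries(String serviceInterface) throws IOException {
        List<String> entries = new ArrayList<>();
        Enumeration<URL> urls = TableFactoryServiceCheck.class.getClassLoader()
                .getResources("META-INF/services/" + serviceInterface);
        while (urls.hasMoreElements()) {
            URL url = urls.nextElement();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    line = line.trim();
                    if (!line.isEmpty() && !line.startsWith("#")) {
                        entries.add(line);
                    }
                }
            }
        }
        return entries;
    }

    public static void main(String[] args) throws IOException {
        // A healthy fat jar should list a Kafka factory here
        // (in Flink 1.9 with connector.version=0.11, typically
        // org.apache.flink.streaming.connectors.kafka.Kafka011TableSourceSinkFactory).
        List<String> entries =
                serviceEntries("org.apache.flink.table.factories.TableFactory");
        if (entries.isEmpty()) {
            System.out.println("No TableFactory service entries found on the classpath.");
        }
        for (String entry : entries) {
            System.out.println(entry);
        }
    }
}
```

If the Kafka entry is missing, rebuilding the fat jar with merged service files (or submitting the connector jars separately) should make the factory discoverable.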



> Flink 1.9.1 / 1.8.2: running a Flink SQL fat jar cannot load the right TableFactory 
> (TableSourceFactory) for Kafka 
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-14667
>                 URL: https://issues.apache.org/jira/browse/FLINK-14667
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Client
>    Affects Versions: 1.8.2, 1.9.1
>            Reporter: chun111111
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
