tobezhou33 opened a new issue #1498:
URL: https://github.com/apache/incubator-seatunnel/issues/1498


   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/incubator-seatunnel/issues?q=is%3Aissue+label%3A%22bug%22)
 and found no similar issues.
   
   
   ### What happened
   
   When I try to read Kafka data and print it to the console, an error occurs:
   
    org.apache.flink.table.api.ValidationException: Temporary table 
'`default_catalog`.`default_database`.`test`' already exists
   
   In KafkaTableStream, the getData method calls createTemporaryTable to create a temporary table named after the config field result_table_name.
   
![image](https://user-images.githubusercontent.com/32997128/158797298-c17f141a-b16c-4fd4-9738-f6a248a61920.png)
   However, the FlinkStreamExecution.start method also calls registerResultTable(source, dataStream), and TableUtil.tableExists only checks the catalog's tables, excluding temporary tables. So the error shown in the last screenshot occurs.
   
   
![image](https://user-images.githubusercontent.com/32997128/158797852-9bd28f7c-1f00-49a1-a08e-0490a753d77d.png)
   
   
![image](https://user-images.githubusercontent.com/32997128/158797737-8c83cf00-6099-46c5-9732-22908188be34.png)
   
   
![image](https://user-images.githubusercontent.com/32997128/158798190-5e60798b-145b-4ca2-aafa-2dd5873bfb5a.png)
   
   
   I suggest using tableEnvironment.listTables() to check all tables that have been created, including temporary tables.
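   A minimal sketch of the suggested check (method name and signature assumed; the result of tableEnvironment.listTables() is simulated with a plain array here, since the real call needs a running TableEnvironment):

   ```java
   import java.util.Arrays;

   public class TableExistsSketch {
       // Hypothetical replacement for TableUtil.tableExists: instead of
       // scanning only the catalog's tables, check the array returned by
       // tableEnvironment.listTables(), which also lists temporary tables
       // in the current catalog and database.
       static boolean tableExists(String[] listedTables, String tableName) {
           return Arrays.asList(listedTables).contains(tableName);
       }

       public static void main(String[] args) {
           // Simulated result of tableEnvironment.listTables() after
           // KafkaTableStream.getData registered the temporary table "test".
           String[] tables = {"test"};
           System.out.println(tableExists(tables, "test"));   // true
           System.out.println(tableExists(tables, "other"));  // false
       }
   }
   ```

   With this check in place, FlinkStreamExecution could skip (or fail fast on) registering a result table whose name was already taken by the temporary table, instead of hitting the ValidationException later.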
   
   
   ### SeaTunnel Version
   
   2.0.5 (latest)
   
   ### SeaTunnel Config
   
   ```conf
   env {
     # You can set flink configuration here
     execution.parallelism = 1
     #execution.checkpoint.interval = 10000
     #execution.checkpoint.data-uri = "hdfs://localhost:9000/checkpoint"
   }
   
   source {
  # This is an example source plugin **only to test and demonstrate the feature source plugin**
       KafkaTableStream {
             result_table_name = test
             topics = test
             consumer.group.id = "seatunnel#2"
             consumer.bootstrap.servers = "127.0.0.1:9092"
             schema = "{\"uid\":111,\"event_name\":\"aaaa\"}"
             format.type = json
             format.field-delimiter = ","
             format.allow-comments = "true"
             format.ignore-parse-errors = "true"
       }
   
     # If you would like to get more information about how to configure 
seatunnel and see full list of source plugins,
     # please go to 
https://seatunnel.apache.org/docs/flink/configuration/source-plugins/Fake
   }
   
   transform {
   
   
     # If you would like to get more information about how to configure 
seatunnel and see full list of transform plugins,
     # please go to 
https://seatunnel.apache.org/docs/flink/configuration/transform-plugins/Sql
   }
   
   sink {
     ConsoleSink {}
   
     # If you would like to get more information about how to configure 
seatunnel and see full list of sink plugins,
     # please go to 
https://seatunnel.apache.org/docs/flink/configuration/sink-plugins/Console
   }
   ```
   
   
   ### Running Command
   
   ```shell
    Run LocalFlinkExample.java in IDEA with the config file changed to
kafka_to_console.cnf, matching the SeaTunnel Config above.
   ```
   
   
   ### Error Exception
   
   ```log
   org.apache.flink.table.api.ValidationException: Temporary table 
'`default_catalog`.`default_database`.`test`' already exists
   ```
   
   
   ### Flink or Spark Version
   
   flink 1.13
   
   ### Java or Scala Version
   
   jdk8
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [X] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

