Hi all,

Following the release of Flink 1.6.0, I'm trying to set up a simple CSV file reader to load its content into a SQL table. I'm using the new table connectors, the code recommended in Jira FLINK-9813, and this doc: https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/table/connect.html#connectors
Csv csv_format = new Csv()
    .schema(TableSchema.fromTypeInfo(
        AvroSchemaConverter.convertToTypeInfo("My avro schema text")));
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
TableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);
tEnv.withFormat(csv_format);
tEnv.connect(new FileSystem().path("my_local_file.csv"));
Table csv_table = tEnv.scan();

Is this right, or am I missing something? I'm not sure the format, file path, etc. should be set on the TableEnvironment object itself. Should I create as many TableEnvironment objects as I have different CSV files?

Thanks in advance for your input, all the best

François Lacombe
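For context, here is how I currently read the connector pattern from the linked docs page: format, schema and path are all chained on the descriptor returned by connect(), and the result is registered under a name before scan(). This is an untested sketch; the exact descriptor methods and the table name "csv_source" are my own guesses.

```java
// Untested sketch of my reading of the 1.6 connector docs (names and
// descriptor methods are assumptions, not verified against the API):
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

TableSchema schema = TableSchema.fromTypeInfo(
    AvroSchemaConverter.convertToTypeInfo("My avro schema text"));

tEnv.connect(new FileSystem().path("my_local_file.csv"))  // source location
    .withFormat(new Csv().schema(schema))                 // how to parse it
    .withSchema(new Schema().schema(schema))              // resulting table schema
    .registerTableSource("csv_source");                   // one name per file

Table csv_table = tEnv.scan("csv_source");
```

If that reading is correct, each CSV file would simply get its own registered table name, rather than its own TableEnvironment.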