Hi Kamil,
I guess `statement_set.execute()` should be enough. You could also verify
that the job graph is what you expect in one of the following ways:
- Call `print(statement_set.explain())`
- Check the Flink web UI to see the job graph of the running job
For your problems, could you double check wh
Hello,
In my PyFlink job I have the following flow:
1. Use table API to get messages from Kafka
2. Convert the table to a data stream
3. Convert the data stream back to the table API
4. Use a statement set to write the data to two filesystem sinks (avro and
parquet)
I'm able to run the job and everything