Hello,
In my PyFlink job I have the following flow (a simplified sketch follows the
list):

1. Use the Table API to read messages from Kafka
2. Convert the table to a data stream
3. Convert the data stream back to a table
4. Use a statement set to write the data to two filesystem sinks (Avro and
Parquet)
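
Roughly, the script looks like this (heavily simplified; the table names,
schemas, connector options, and DataStream logic below are placeholders, not
my real code):

from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment

streaming_environment = StreamExecutionEnvironment.get_execution_environment()
table_env = StreamTableEnvironment.create(streaming_environment)

# 1. Table API source reading from Kafka (placeholder schema and options)
table_env.execute_sql("""
    CREATE TABLE kafka_source (
        id STRING,
        payload STRING
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'my-topic',
        'properties.bootstrap.servers' = 'kafka:9092',
        'format' = 'json'
    )
""")

# 2. Table -> DataStream (custom DataStream processing happens here)
ds = table_env.to_data_stream(table_env.from_path("kafka_source"))

# 3. DataStream -> Table, registered as a view for the INSERTs below
table_env.create_temporary_view("results", table_env.from_data_stream(ds))

# 4. Statement set writing to the two filesystem sinks
#    (avro_sink and parquet_sink are 'filesystem' tables with
#    'format' = 'avro' and 'format' = 'parquet' respectively)
statement_set = table_env.create_statement_set()
statement_set.add_insert_sql("INSERT INTO avro_sink SELECT * FROM results")
statement_set.add_insert_sql("INSERT INTO parquet_sink SELECT * FROM results")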

I'm able to run the job and everything seems to be working, but the output
files never receive any data and remain in the in-progress state.

I suspect I'm doing something wrong with how I call .execute().

Currently, at the end of my script, I call:
statement_set.execute()
streaming_environment.execute("My job")

My question is: what is the correct way to submit a job with the flow
described above? I can share the full code if needed.

I would appreciate any help.

Kind regards
Kamil
