Hi Sumeet,
Yes, this approach also works in the Table API.
Could you share which API you use to execute the job? For jobs with multiple
sinks, StatementSet should be used. You can refer to [1] for more details on
this.
Regards,
Dian
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1
Hi,
I would like to split data streamed from Kafka into two streams based on some
filter criteria, using the PyFlink Table API. As described in [1], one way to
do this is to use .filter(), which should split the stream for parallel
processing.
Does this approach work in the Table API as well? I'm doing the