Hmmm, that behavior is strange and unexpected (to me).
Flink optimizes the Table API / SQL queries when a Table is converted into
a DataStream (or DataSet) or emitted to a TableSink.
So, given that you convert the result tables in addSink() into a DataStream
and write them to a sink function, ...
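As a minimal sketch of what I mean (assuming a Flink 1.8/1.9-style Table API;
the "clicks" table, its fields, and the query are made up for illustration,
and package and registration names differ in newer releases):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class TableToDataStreamExample {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

    // Small in-memory source so the sketch is self-contained.
    DataStream<Tuple2<String, Integer>> clicks = env.fromElements(
        Tuple2.of("alice", 1), Tuple2.of("bob", 1), Tuple2.of("alice", 1));
    tEnv.registerDataStream("clicks", clicks, "userId, cnt");

    Table result = tEnv.sqlQuery(
        "SELECT userId, COUNT(*) AS clickCount FROM clicks GROUP BY userId");

    // The SQL query is optimized and translated at this point, i.e. when the
    // Table is converted into a DataStream (a retract stream, since it aggregates).
    DataStream<Tuple2<Boolean, Row>> converted = tEnv.toRetractStream(result, Row.class);

    // Write the converted stream to a sink function; print() is just a stand-in.
    converted.print();

    env.execute();
  }
}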
Hi, have there been any changes to state handling with Flink SQL? Is anything
planned? I didn't find anything about it at
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql.html.
Recently I ran into problems when trying to restore the state after changes
that I thought wouldn't change the execution ...
Hi Juan,
usually the Flink operators contain the optimized expression that was
defined in SQL. You can also name the entire job using
env.execute("Your Name") if that helps to identify the query.
Regarding checkpoints, it depends on how you define "small changes". You
must ensure that ...