Hi all,

SQL modules like the state processor API could potentially expose SQL
functions that come from external connectors such as Kafka, Iceberg, etc.
The reason to load them dynamically, instead of hardcoding them into the
module, is that we don't intend to add direct external connector
dependencies to Flink itself (the state processor API in this case).

One of the most obvious examples of such functionality is getting the Kafka
offsets.
The intended end-user interaction would look like the following:
- Add state processor API jar to the classpath
- Add Kafka connector jar to the classpath
- LOAD MODULE state
- SELECT * FROM get_kafka_offsets(...)

In the background nothing super complex would happen: a service loader would
load function definitions dynamically into the state module. It's worth
highlighting that there is no intention to change any existing APIs; this
would simply be added as a new feature.
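
To make the idea more concrete, here is a minimal sketch of how the
discovery could look. The provider interface (DynamicFunctionProvider) and
the module class name are made up for illustration; the only existing API it
relies on is the table Module interface's listFunctions() /
getFunctionDefinition() hooks, which would stay untouched:

import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.Optional;
import java.util.ServiceLoader;
import java.util.Set;

import org.apache.flink.table.functions.FunctionDefinition;
import org.apache.flink.table.module.Module;

/** Hypothetical SPI that connector jars (e.g. the Kafka connector) would implement. */
public interface DynamicFunctionProvider {
    /** Function name -> definition, e.g. "get_kafka_offsets" -> its definition. */
    Map<String, FunctionDefinition> getFunctions();
}

/** Sketch of the state module discovering functions from providers on the classpath. */
class StateModule implements Module {
    private final Map<String, FunctionDefinition> functions = new HashMap<>();

    StateModule() {
        // Pick up every provider that connector jars register via
        // META-INF/services, without any compile-time dependency on them.
        for (DynamicFunctionProvider provider :
                ServiceLoader.load(DynamicFunctionProvider.class)) {
            provider.getFunctions()
                    .forEach((name, def) ->
                            functions.put(name.toLowerCase(Locale.ROOT), def));
        }
    }

    @Override
    public Set<String> listFunctions() {
        return functions.keySet();
    }

    @Override
    public Optional<FunctionDefinition> getFunctionDefinition(String name) {
        return Optional.ofNullable(functions.get(name.toLowerCase(Locale.ROOT)));
    }
}

With something like that in place, the Kafka connector jar could ship its own
provider that registers get_kafka_offsets, and the state module itself would
stay connector-agnostic.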

Please share your thoughts on this.

BR,
G
