Hi Mayank,

Thank you for starting the discussion! In general, I think such functionality
would be a really great addition to Flink.

Could you please elaborate a bit more on the reasoning behind defining the
`connection` resource at the database level instead of the catalog level?
If I think about `JdbcCatalog` or `HiveCatalog`, the catalog maps 1-to-1 to
an RDBMS or a Hive Metastore, so my initial thinking is that a `connection`
seems more like a catalog-level resource.
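To illustrate what I mean, a catalog-level connection could look something
like the sketch below. Note that this syntax is purely hypothetical on my
side (the names `CREATE CONNECTION`, `my_pg_conn`, and the option keys are
made up for the example, not taken from the FLIP):

```sql
-- Hypothetical sketch: the connection is owned by the catalog, since the
-- catalog already maps 1-to-1 to the external system.
CREATE CONNECTION my_pg_conn
WITH (
  'url' = 'jdbc:postgresql://host:5432/db',
  'username' = '...',
  'secret' = '...'
);

-- The catalog then references the connection instead of embedding
-- endpoint and credentials in its own options.
CREATE CATALOG my_jdbc_catalog
WITH (
  'type' = 'jdbc',
  'connection' = 'my_pg_conn'
);
```

With this shape, every table resolved through the catalog would inherit the
connection, so no per-table credential handling would be needed.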

WDYT?

Thanks,
Ferenc



On Tuesday, April 29th, 2025 at 17:08, Mayank Juneja <mayankjunej...@gmail.com> 
wrote:

> 
> 
> Hi all,
> 
> I would like to open up for discussion a new FLIP-529 [1].
> 
> Motivation:
> Currently, Flink SQL handles external connectivity by defining endpoints
> and credentials in the table configuration. This approach prevents these
> connections from being reused and makes table definitions less secure by
> exposing sensitive information.
> We propose the introduction of a new "connection" resource in Flink. This
> will be a pluggable resource configured with a remote endpoint and
> associated access key. Once defined, connections can be reused across table
> definitions, and eventually for model definition (as discussed in FLIP-437)
> for inference, enabling seamless and secure integration with external
> systems.
> The connection resource will provide a new, optional way to manage external
> connectivity in Flink. Existing methods for table definitions will remain
> unchanged.
> 
> [1] https://cwiki.apache.org/confluence/x/cYroF
> 
> Best Regards,
> Mayank Juneja
