Could anyone give me permission to create a FLIP in the wiki
https://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+Home? I
found that I don't have permission.


On Tue, Feb 3, 2026 at 16:03, roryqi <[email protected]> wrote:

> Thanks for your input.
>
> 1. I can create a FLIP for this discussion.
> 2. CDC only uses insert data to imitate delete and update data, so it
> should use the `insert` privilege from the catalog's perspective.
> 3. The behavior depends on the security model. There are two security
> models: invoker and definer. With the definer security model, the privileges
> of the view definer are used to access the underlying tables; with the
> invoker security model, your own privileges are used instead (a small sketch
> follows below). For more information, see
> https://www.postgresql.org/docs/current/sql-createview.html
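>
> A minimal sketch of that distinction, with made-up names (this is just an
> illustration, not an existing Flink or PostgreSQL API):
>
>     // Illustrative only: which user's privileges are checked against the
>     // tables underneath a view, depending on the view's security model.
>     final class ViewSecurityModels {
>         enum Model { DEFINER, INVOKER }
>
>         /** Principal whose privileges are checked against the base tables. */
>         static String principalForBaseTables(Model model, String definer, String invoker) {
>             // DEFINER: the view creator's privileges apply to the base tables.
>             // INVOKER: the querying user's own privileges apply instead.
>             return model == Model.DEFINER ? definer : invoker;
>         }
>     }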
>
>
> On Tue, Feb 3, 2026 at 15:02, Xuyang <[email protected]> wrote:
>
>> Hi, Rory. Thanks for sharing. I've taken a look at your PR and have a few
>> questions:
>>
>>
>> 1. I believe this warrants a FLIP, as it enriches the (public) Catalog
>> interface.
>> 2. I noticed that in `convertSqlInsert`, the sink's privileges are set to
>> INSERT only. I'm a bit curious: in the context of an INSERT INTO ...
>> statement, when the sink processes CDC data, it not only inserts new rows
>> but may also update or delete existing rows (see the sketch after this
>> list).
>> 3. Moreover, aside from write privileges, I can think of an interesting
>> scenario regarding read privileges: the user of the current pipeline has
>> privileges on a persistent view in the catalog but lacks privileges on the
>> underlying tables of that view. I’m wondering how we should handle this
>> case — should we reject it outright, or leave it to the catalog to handle?
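>>
>> To make question 2 concrete, here is a minimal sketch using Flink's
>> org.apache.flink.types.RowKind: a changelog (CDC) stream carries update and
>> delete rows in addition to plain inserts, which is what an INSERT-only
>> privilege at the sink may not cover.
>>
>>     import org.apache.flink.types.Row;
>>     import org.apache.flink.types.RowKind;
>>
>>     public class ChangelogKindsSketch {
>>         public static void main(String[] args) {
>>             // A changelog stream can contain all four change kinds,
>>             // not only inserts.
>>             Row[] changelog = {
>>                 Row.ofKind(RowKind.INSERT, 1, "a"),
>>                 Row.ofKind(RowKind.UPDATE_BEFORE, 1, "a"),
>>                 Row.ofKind(RowKind.UPDATE_AFTER, 1, "b"),
>>                 Row.ofKind(RowKind.DELETE, 1, "b")
>>             };
>>             for (Row r : changelog) {
>>                 System.out.println(r.getKind() + " -> " + r);
>>             }
>>         }
>>     }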
>>
>>
>>
>>
>>
>> --
>>
>>     Best!
>>     Xuyang
>>
>>
>>
>> At 2026-02-01 16:25:57, "roryqi" <[email protected]> wrote:
>> >Hi Flink Community,
>> >
>> >I'd like to propose an enhancement to the Catalog interface to better
>> >support access control scenarios.
>> >
>> >Problem Statement
>> >
>> >For custom catalogs that implement access control, read and write
>> >permissions often need to be distinguished. Currently, Flink always
>> >invokes Catalog#getTable to look up tables, regardless of whether the
>> >operation is for reading or writing. This limitation makes it challenging
>> >for catalogs to enforce proper write-level access control.
>> >
>> >Proposed Solution
>> >
>> >I've submitted a PR that adds a new variant of the getTable method which
>> >explicitly indicates when write privileges are required (a rough sketch
>> >follows after the list below). Key aspects of this implementation:
>> >
>> >   - *Backward Compatibility*: The new method includes a default
>> >     implementation that calls the existing getTable, ensuring no breaking
>> >     changes.
>> >   - *Write Operation Coverage*: All write operations (INSERT, UPDATE,
>> >     DELETE, etc.) will use this new method for table lookup.
>> >   - *Industry Validation*: This approach aligns with Apache Spark's similar
>> >     interface enhancement (see https://github.com/apache/spark/pull/47772).
>> >
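>> >A minimal sketch of what such a variant could look like. The method name,
>> >parameter, and interface wrapper below are assumptions for illustration
>> >only; the actual signature in the PR may differ. The point is the
>> >backward-compatible default that delegates to the existing getTable:
>> >
>> >    // Sketch only: the real change would live directly on
>> >    // org.apache.flink.table.catalog.Catalog; names here are illustrative.
>> >    import org.apache.flink.table.catalog.Catalog;
>> >    import org.apache.flink.table.catalog.CatalogBaseTable;
>> >    import org.apache.flink.table.catalog.ObjectPath;
>> >    import org.apache.flink.table.catalog.exceptions.CatalogException;
>> >    import org.apache.flink.table.catalog.exceptions.TableNotExistException;
>> >
>> >    interface WriteAwareCatalog extends Catalog {
>> >        /** Table lookup that also signals whether write privileges are needed. */
>> >        default CatalogBaseTable getTable(ObjectPath tablePath, boolean forWrite)
>> >                throws TableNotExistException, CatalogException {
>> >            // Default implementation delegates to the existing lookup, so
>> >            // catalogs that do not enforce access control need no change;
>> >            // access-controlled catalogs can override this and check write
>> >            // privileges when forWrite is true.
>> >            return getTable(tablePath);
>> >        }
>> >    }
>> >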
>> >Reference Materials
>> >
>> >   - JIRA Issue: https://issues.apache.org/jira/browse/FLINK-38848
>> >   - Pull Request: https://github.com/apache/flink/pull/27389
>> >
>> >This enhancement would significantly improve catalog implementations'
>> >ability to enforce fine-grained access control, which is particularly
>> >important in multi-tenant environments. I'm happy to start a discussion on the dev
>> >mailing list if that would be more appropriate for this type of interface
>> >change.
>> >
>> >Looking forward to the community's feedback on this discussion.
>> >Best,
>> >
>> >Rory
>>
>
