JingsongLi opened a new issue #1303:
URL: https://github.com/apache/iceberg/issues/1303


   Unlike Spark, where the `Table` object is only required on the client/driver side, Flink needs to obtain the `Table` object in the Job Manager or in a task:
   - For the writer (https://github.com/apache/iceberg/pull/1185): Flink needs to obtain the `Table` in the committer task for appending files.
   - For the reader (https://github.com/apache/iceberg/pull/1293): Flink needs to obtain the `Table` in the Job Manager for planning tasks.
   
   So we can introduce an `IcebergCatalogLoader` for the reader and writer; users can define a custom catalog loader in `FlinkCatalogFactory`.
   ```
   public interface IcebergCatalogLoader extends Serializable {
     // Recreate the Iceberg Catalog on the Job Manager or in a task,
     // so the Table can be loaded where it is actually needed.
     Catalog loadCatalog(Configuration hadoopConf);
   }
   ```
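
   As a rough sketch of how a user-defined loader might look (the `HadoopCatalogLoader` class name and the warehouse path below are hypothetical, and it assumes a `HadoopCatalog` constructor that takes a Hadoop `Configuration` and a warehouse location), the loader only needs to hold serializable fields and rebuild the catalog from them plus the Hadoop configuration that Flink ships to the cluster:
   ```
   import org.apache.hadoop.conf.Configuration;
   import org.apache.iceberg.catalog.Catalog;
   import org.apache.iceberg.hadoop.HadoopCatalog;

   // Hypothetical example: keeps only a serializable warehouse path and
   // recreates the HadoopCatalog wherever loadCatalog() is called.
   public class HadoopCatalogLoader implements IcebergCatalogLoader {

     private final String warehouseLocation;

     public HadoopCatalogLoader(String warehouseLocation) {
       this.warehouseLocation = warehouseLocation;
     }

     @Override
     public Catalog loadCatalog(Configuration hadoopConf) {
       // Assumes the HadoopCatalog(Configuration, String) constructor.
       return new HadoopCatalog(hadoopConf, warehouseLocation);
     }
   }
   ```
   The committer task or the Job Manager could then call `loader.loadCatalog(hadoopConf)` and load the table via `catalog.loadTable(...)` on whichever side needs it.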

