[https://issues.apache.org/jira/browse/FLINK-20416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17245195#comment-17245195]
Jark Wu commented on FLINK-20416:
---------------------------------
Hi [~shared_ptr], thanks for the contribution. However, before moving to the
pull request, it would be better to reach a consensus on the proposal.
First of all, I agree caching is good in this situation. However, I'm wondering
whether it is necessary to have a generic cache layer in the framework.
If this is a framework-level cache, how would it be enabled? Via job
configuration? If it is enabled by a catalog option (i.e., the WITH options
used when creating the catalog), then it should belong to the specific catalog
implementation, just like the current lookup cache implementation.
From my point of view, caching in the specific catalog would be better, because
this won't affect other catalogs. Besides, the connector implementation can
receive modification notifications to achieve better freshness, as sketched
below.
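To make the delegation idea concrete, here is a minimal sketch, not the
implementation from the linked PR; the class and method names are illustrative.
It wraps an existing catalog, memoizes {{getTable()}} results, and exposes an
explicit invalidation hook for such modification notifications:
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.flink.table.catalog.Catalog;
import org.apache.flink.table.catalog.CatalogBaseTable;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.exceptions.TableNotExistException;

/** Illustrative wrapper that caches table metadata fetched from a delegate catalog. */
public final class CachingCatalogDelegate {

    private final Catalog delegate;
    private final Map<ObjectPath, CatalogBaseTable> tableCache = new ConcurrentHashMap<>();

    public CachingCatalogDelegate(Catalog delegate) {
        this.delegate = delegate;
    }

    /** Returns cached table metadata, loading it from the delegate on first access. */
    public CatalogBaseTable getTable(ObjectPath path) throws TableNotExistException {
        CatalogBaseTable cached = tableCache.get(path);
        if (cached != null) {
            return cached;
        }
        // computeIfAbsent cannot propagate the checked exception cleanly, so we
        // accept a benign race where two threads may each load the same table once.
        CatalogBaseTable loaded = delegate.getTable(path);
        tableCache.put(path, loaded);
        return loaded;
    }

    /** Drops one entry, e.g. when the connector reports a table modification. */
    public void invalidate(ObjectPath path) {
        tableCache.remove(path);
    }
}
{code}
A real version would also delegate the remaining {{Catalog}} methods and likely
add a TTL or maximum size to bound staleness and memory usage.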
What do you think?
> Need a cached catalog for batch SQL job
> ---------------------------------------
>
> Key: FLINK-20416
> URL: https://issues.apache.org/jira/browse/FLINK-20416
> Project: Flink
> Issue Type: Improvement
> Components: Table SQL / API, Table SQL / Planner
> Reporter: Sebastian Liu
> Priority: Major
> Labels: pull-request-available
>
> For OLAP scenarios, there are usually analytical queries whose running time
> is relatively short. These queries are also sensitive to latency. In the
> current Blink SQL processing, the parse/validate/optimize stages all need
> metadata from the catalog API, but each request to the catalog requires a
> re-run of the underlying meta query.
>
> We may need a cached catalog which can cache the table schema and statistics
> to avoid unnecessary repeated meta requests.
> I have submitted a related PR that adds a generic cached catalog, which can
> delegate to other implementations of {{AbstractCatalog}}:
> [https://github.com/apache/flink/pull/14260]