[
https://issues.apache.org/jira/browse/IMPALA-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762954#comment-17762954
]
Maxwell Guo commented on IMPALA-12402:
--------------------------------------
[~MikaelSmith] thanks for your reply. I think it is better to make the Guava
cache's concurrencyLevel parameter (and possibly more than just this one
parameter) configurable instead of keeping the default value of 4.
With many tables, I think the value should be larger than 4, such as 128 or
256. When we looked at the jstack output for impalad during startup, we found
that the threads were all waiting for the lock; see
https://github.com/google/guava/blob/master/guava/src/com/google/common/cache/CacheBuilder.java#L432
A lower value leads to more thread contention.
In this cache, the concurrency level is used as the number of internal
segments (buckets), so more buckets mean less thread contention, assuming the
keys hash randomly enough.
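As a rough sketch (not Impala's actual code; the helper name and the value
passed in are only illustrative), the change amounts to passing a configurable
value to CacheBuilder.concurrencyLevel() instead of relying on the default of 4:

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;

    public class CatalogCacheExample {
      // Builds a cache whose internal segment count follows the given
      // concurrency level.
      static Cache<String, Object> buildCache(int concurrencyLevel) {
        return CacheBuilder.newBuilder()
            // More segments -> less chance that concurrent loads block on
            // the same segment lock during startup.
            .concurrencyLevel(concurrencyLevel)
            .build();
      }

      public static void main(String[] args) {
        // e.g. 128 or 256 for catalogs with ~100000 tables, as suggested above.
        Cache<String, Object> cache = buildCache(128);
        cache.put("db1.table1", new Object());
        System.out.println(cache.getIfPresent("db1.table1"));
      }
    }

Since the segments are independent, threads loading entries that hash to
different segments do not contend for the same lock, which is why a higher
level should help when startup loads many tables concurrently.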
> Add some configurations for CatalogdMetaProvider's cache_
> ---------------------------------------------------------
>
> Key: IMPALA-12402
> URL: https://issues.apache.org/jira/browse/IMPALA-12402
> Project: IMPALA
> Issue Type: Improvement
> Components: fe
> Reporter: Maxwell Guo
> Assignee: Maxwell Guo
> Priority: Minor
> Labels: pull-request-available
>
> When the cluster contains many databases and tables, for example more than
> 100000 tables, and impalad is restarted, CatalogdMetaProvider's local cache_
> needs to go through a loading process.
> As we know, Google Guava cache's concurrencyLevel is set to 4 by default,
> but with many tables the loading process takes more time and increases the
> probability of lock contention; see
> [here|https://github.com/google/guava/blob/master/guava/src/com/google/common/cache/CacheBuilder.java#L437].
>
> So we propose to add some configurations here, the first being the
> concurrency level of the cache.