[
https://issues.apache.org/jira/browse/IMPALA-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041062#comment-17041062
]
Zoltán Borók-Nagy commented on IMPALA-9405:
-------------------------------------------
Yeah, that makes sense to me. I don't think we need multiple connection pools
of the same thing. It just makes it more difficult to correctly set up the
connections, as it turned out.
> Improvements for Frontend#metaStoreClientPool_
> ----------------------------------------------
>
> Key: IMPALA-9405
> URL: https://issues.apache.org/jira/browse/IMPALA-9405
> Project: IMPALA
> Issue Type: Improvement
> Components: Frontend
> Reporter: Sahil Takiar
> Priority: Major
>
> While trying to resurrect {{tests/experiments/test_catalog_hms_failures.py}}
> I noticed the test {{TestCatalogHMSFailures::test_start_catalog_before_hms}}
> has started to fail. The reason is that when this test was written, only the
> catalogd was connecting to HMS, but with catalog v2 and ACID integration this
> is no longer the case.
> It looks like catalog v2 honors {{initial_hms_cnxn_timeout_s}}, (at least
> {{DirectMetaProvider}} honors the flag, and I *think* that is part of the
> metadata v2 code), but the {{Frontend}} Java class has a member variable
> {{metaStoreClientPool_}} that does not use the flag. It looks like that pool
> was added for ACID integration.
> The flag {{initial_hms_cnxn_timeout_s}} was added in IMPALA-4278 to help with
> concurrent startup of Impala and HMS.
> Somewhat related to this issue, there seem to be multiple places where
> Impala creates a {{MetaStoreClientPool}}. I think it would make more sense
> to have just one global pool that is used across the process. Doing so
> would improve connection re-use and possibly decrease the number of HMS
> connections. There is actually a TODO in {{DirectMetaProvider}} as well
> that says {{msClientPool_}} should be a process-wide singleton.
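The behavior behind a flag like {{initial_hms_cnxn_timeout_s}} can be sketched as a retry loop that keeps attempting the initial HMS connection until it succeeds or the timeout elapses, which is what makes concurrent startup of Impala and HMS possible. This is a minimal illustrative sketch, not Impala's actual code; the class and method names here are hypothetical.

```java
import java.util.function.Supplier;

public final class RetryingConnector {
  /**
   * Hypothetical sketch: retry {@code connect} until it returns a non-null
   * connection or {@code timeoutSeconds} elapse. Returns the connection,
   * or null if the deadline passes first.
   */
  public static <T> T connectWithTimeout(
      Supplier<T> connect, long timeoutSeconds, long retryIntervalMillis)
      throws InterruptedException {
    long deadline = System.nanoTime() + timeoutSeconds * 1_000_000_000L;
    while (true) {
      try {
        T conn = connect.get();
        if (conn != null) return conn;
      } catch (RuntimeException e) {
        // HMS may not be up yet; swallow the failure and retry below.
      }
      if (System.nanoTime() >= deadline) return null;
      Thread.sleep(retryIntervalMillis);
    }
  }

  private RetryingConnector() {}
}
```

A pool constructed through a helper like this would tolerate HMS starting after the Impala process, which is exactly what the failing {{test_start_catalog_before_hms}} test exercises.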
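The process-wide singleton suggested by the TODO in {{DirectMetaProvider}} could be done with the standard initialization-on-demand holder idiom, so the pool is created lazily, exactly once, and with thread safety guaranteed by JVM class-loading rules. The sketch below uses a stand-in {{HmsClientPool}} class; the real {{MetaStoreClientPool}} API is not reproduced here.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public final class GlobalHmsPool {
  /** Placeholder for a pooled HMS connection. */
  public static final class HmsClient {}

  /** Simplified stand-in for Impala's MetaStoreClientPool. */
  public static final class HmsClientPool {
    private final Deque<HmsClient> clients = new ArrayDeque<>();

    HmsClientPool(int initialSize) {
      for (int i = 0; i < initialSize; i++) clients.push(new HmsClient());
    }

    public synchronized HmsClient acquire() {
      // Grow on demand if the pool is empty.
      return clients.isEmpty() ? new HmsClient() : clients.pop();
    }

    public synchronized void release(HmsClient c) { clients.push(c); }
  }

  // Holder idiom: INSTANCE is initialized the first time getInstance()
  // triggers loading of the Holder class, and never again.
  private static final class Holder {
    static final HmsClientPool INSTANCE = new HmsClientPool(5);
  }

  private GlobalHmsPool() {}

  public static HmsClientPool getInstance() { return Holder.INSTANCE; }
}
```

With a single accessor like this, both the {{Frontend}} pool and the one in {{DirectMetaProvider}} would share connections, and startup flags such as {{initial_hms_cnxn_timeout_s}} would only need to be honored in one place.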
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]