[ https://issues.apache.org/jira/browse/IMPALA-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18008049#comment-18008049 ]

Quanlong Huang commented on IMPALA-14220:
-----------------------------------------

Now catalogd can accept requests right after the is_active flag is set to true:
{code:cpp}
void CatalogServer::UpdateActiveCatalogd(bool is_registration_reply,
    int64_t active_catalogd_version, const TCatalogRegistration& catalogd_registration) {
  ...
  if (is_matching) {
    if (!is_active_.Load()) {
      is_active_.Store(true);  // <-- Requests are admitted after this point,
                               //     but metadata is not reset yet.
      ...
      if (FLAGS_catalogd_ha_reset_metadata_on_failover) {
        triggered_first_reset_ = false;  // <-- After this, the TriggerResetMetadata thread
                                         //     will run once the current thread releases
                                         //     catalog_lock_.
{code}
A request will use the old metadata cache if it arrives before the catalog 
reset starts. To be specific, requests that come after 
[is_active_.Store(true)|https://github.com/apache/impala/blob/d41d325b4154f9526991b6fb568b59fa1ffe5501/be/src/catalog/catalog-server.cc#L820] 
and before 
[ResetMetadata|https://github.com/apache/impala/blob/d41d325b4154f9526991b6fb568b59fa1ffe5501/be/src/catalog/catalog-server.cc#L885] 
holds the [Java catalog versionLock_|https://github.com/apache/impala/blob/d41d325b4154f9526991b6fb568b59fa1ffe5501/fe/src/main/java/org/apache/impala/catalog/CatalogServiceCatalog.java#L2435] 
can see stale metadata.

Requests that come after that point are OK since they will be blocked on the 
Java side, i.e. in the wait methods of CatalogResetManager.
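
To make the window concrete, here is a minimal standalone C++ sketch of the same pattern (illustrative only, not Impala code; all names and timings are made up): an activation flag is published before the cached state is rebuilt, so a request that checks only the flag can be served from the stale cache:
{code:cpp}
#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

std::atomic<bool> is_active{false};
std::mutex cache_mutex;
std::string metadata_cache = "STALE (from previous active catalogd)";

// Models the reset thread: it only runs after the activation path returns.
void ResetMetadata() {
  std::this_thread::sleep_for(std::chrono::milliseconds(100));  // models the delay
  std::lock_guard<std::mutex> l(cache_mutex);
  metadata_cache = "FRESH (reset after failover)";
}

// Models request admission: the request is served as soon as the flag is set.
void ServeRequest(int id) {
  if (!is_active.load()) return;  // standby: reject
  std::lock_guard<std::mutex> l(cache_mutex);
  std::cout << "request " << id << " served from cache: " << metadata_cache << "\n";
}

int main() {
  // Failover: the flag is stored before the reset thread gets a chance to run.
  is_active.store(true);
  std::thread resetter(ResetMetadata);

  ServeRequest(1);  // lands in the window: almost certainly sees the stale cache
  std::this_thread::sleep_for(std::chrono::milliseconds(200));
  ServeRequest(2);  // after the reset: sees the fresh cache

  resetter.join();
  return 0;
}
{code}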

I can reproduce the issue by adding a sleep in that window and running 
TestCatalogdHA.test_metadata_after_failover in local-catalog mode with a debug 
action that delays HMS event processing:
{code:python}
diff --git a/be/src/catalog/catalog-server.cc b/be/src/catalog/catalog-server.cc
index f74e1419b..48295c7f6 100644
--- a/be/src/catalog/catalog-server.cc
+++ b/be/src/catalog/catalog-server.cc
@@ -965,6 +965,7 @@ void CatalogServer::WaitUntilHmsEventsSynced(const unique_lock<std::mutex>& lock
       }
     }
 
+    SleepForMs(5000);
     // Run ResetMetadata without holding 'catalog_lock_' so that it does not block
     // gathering thread from starting. Note that gathering thread will still compete
     // for CatalogServiceCatalog.versionLock_.writeLock() in JVM.
diff --git a/tests/custom_cluster/test_catalogd_ha.py b/tests/custom_cluster/test_catalogd_ha.py
index 289d8ec4f..eb4a0e9e2 100644
--- a/tests/custom_cluster/test_catalogd_ha.py
+++ b/tests/custom_cluster/test_catalogd_ha.py
@@ -522,7 +522,10 @@ class TestCatalogdHA(CustomClusterTestSuite):
 
   @CustomClusterTestSuite.with_args(
     statestored_args="--use_subscriber_id_as_catalogd_priority=true",
-    catalogd_args="--catalogd_ha_reset_metadata_on_failover=true",
+    catalogd_args="--catalogd_ha_reset_metadata_on_failover=true "
+                  "--debug_actions=catalogd_event_processing_delay:SLEEP@3000 "
+                  "--catalog_topic_mode=minimal",
+    impalad_args="--use_local_catalog=true",
     start_args="--enable_catalogd_ha")
   def test_metadata_after_failover(self, unique_database):
     self._test_metadata_after_failover(unique_database){code}
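
For context on the repro, the --debug_actions flag injects delays or failures at labeled points in the code; the diff above uses catalogd_event_processing_delay:SLEEP@3000 to slow down HMS event processing. Below is a rough standalone sketch of how such a SLEEP@<ms> action can be interpreted. This is an assumption-laden simplification (the real parsing lives in Impala's debug-util; the '|' separator and helper name here are illustrative):
{code:cpp}
#include <chrono>
#include <iostream>
#include <sstream>
#include <string>
#include <thread>

// Hypothetical helper: sleeps if 'actions' contains "<label>:SLEEP@<ms>"
// for the given label. Not Impala's actual implementation.
void MaybeDebugSleep(const std::string& actions, const std::string& label) {
  std::istringstream ss(actions);
  std::string entry;
  while (std::getline(ss, entry, '|')) {  // assume '|'-separated action list
    auto colon = entry.find(':');
    if (colon == std::string::npos || entry.substr(0, colon) != label) continue;
    const std::string action = entry.substr(colon + 1);
    const std::string prefix = "SLEEP@";
    if (action.rfind(prefix, 0) == 0) {
      int ms = std::stoi(action.substr(prefix.size()));
      std::cout << "debug action " << label << ": sleeping " << ms << " ms\n";
      std::this_thread::sleep_for(std::chrono::milliseconds(ms));
    }
  }
}

int main() {
  // Matches the flag value used in the diff above.
  MaybeDebugSleep("catalogd_event_processing_delay:SLEEP@3000",
      "catalogd_event_processing_delay");
  return 0;
}
{code}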

> IsActive checks blocked by the getCatalogDelta operation when there are slow 
> DDLs
> ---------------------------------------------------------------------------------
>
>                 Key: IMPALA-14220
>                 URL: https://issues.apache.org/jira/browse/IMPALA-14220
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Backend, Catalog
>            Reporter: Quanlong Huang
>            Assignee: Riza Suminto
>            Priority: Blocker
>             Fix For: Impala 5.0.0
>
>
> When catalogd HA is enabled, catalogd will check whether it's the active one 
> before serving each request, i.e. in 
> [AcceptRequest()|https://github.com/apache/impala/blob/8d56eea72518aa11a36aa086dc8961bc8cdbd1fd/be/src/catalog/catalog-server.cc#L593]:
> {code:cpp}
>   Status AcceptRequest(CatalogServiceVersion::type client_version) {
>     ...
>     } else if (FLAGS_enable_catalogd_ha && !catalog_server_->IsActive()) {
>       status = Status(Substitute("Request for Catalog service is rejected since "
>           "catalogd $0 is in standby mode", server_address_));
>     }
> {code}
> This check requires holding the catalog_lock_:
> {code:cpp}
> bool CatalogServer::IsActive() {
>   lock_guard<mutex> l(catalog_lock_);
>   return is_active_;
> }{code}
> [https://github.com/apache/impala/blob/8d56eea72518aa11a36aa086dc8961bc8cdbd1fd/be/src/catalog/catalog-server.cc#L896]
> This lock is also held by 
> [GatherCatalogUpdatesThread|https://github.com/apache/impala/blob/8d56eea72518aa11a36aa086dc8961bc8cdbd1fd/be/src/catalog/catalog-server.cc#L905] 
> (a.k.a. the topic update thread), which invokes the JNI method GetCatalogDelta 
> to collect catalog updates.
> It's known that collecting catalog updates can be blocked by slow DDLs that 
> hold the table lock for a long time (IMPALA-6671). The topic update thread 
> usually waits up to 1 minute (configured as topic_update_tbl_max_wait_time_ms / 
> 2) on the table lock and then skips the table with a warning like this:
> {noformat}
> Table tpch.lineitem (version=2373, lastSeen=2373) is skipping topic update (2387, 2388] due to lock contention{noformat}
> If the table has been skipped 3 consecutive times (configured by 
> catalog_max_lock_skipped_topic_updates), the topic update thread will wait on 
> it indefinitely the next time.
> So when the topic update thread is slow in collecting one round of updates, it 
> holds the catalog_lock_ for a long time and blocks all new requests on this 
> catalogd. This impacts performance for all queries that require loading 
> metadata from catalogd.
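
To illustrate the contention described in the issue, here is a minimal standalone C++ sketch (not Impala code; names and timings are made up): a status check that shares a mutex with a slow update collection stalls behind it, while an atomic flag read, the direction suggested by the is_active_.Load() call in the comment above, returns immediately:
{code:cpp}
#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex catalog_lock;                   // shared with the topic-update thread
bool is_active_locked = true;              // protected by catalog_lock
std::atomic<bool> is_active_atomic{true};  // lock-free alternative

// Models IsActive() as described in the issue: must wait for catalog_lock.
bool IsActiveLocked() {
  std::lock_guard<std::mutex> l(catalog_lock);
  return is_active_locked;
}

// Models a lock-free check: no dependency on catalog_lock.
bool IsActiveAtomic() { return is_active_atomic.load(); }

// Models the topic-update thread holding the lock through a slow GetCatalogDelta.
void SlowTopicUpdate() {
  std::lock_guard<std::mutex> l(catalog_lock);
  std::this_thread::sleep_for(std::chrono::seconds(2));  // slow DDL holds things up
}

int main() {
  std::thread updater(SlowTopicUpdate);
  std::this_thread::sleep_for(std::chrono::milliseconds(100));  // let it grab the lock

  auto t0 = std::chrono::steady_clock::now();
  IsActiveAtomic();  // returns immediately
  auto t1 = std::chrono::steady_clock::now();
  IsActiveLocked();  // blocks until SlowTopicUpdate releases catalog_lock
  auto t2 = std::chrono::steady_clock::now();

  using ms = std::chrono::milliseconds;
  std::cout << "atomic check: " << std::chrono::duration_cast<ms>(t1 - t0).count()
            << " ms, locked check: " << std::chrono::duration_cast<ms>(t2 - t1).count()
            << " ms\n";
  updater.join();
  return 0;
}
{code}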


