[
https://issues.apache.org/jira/browse/IMPALA-14227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18007937#comment-18007937
]
ASF subversion and git services commented on IMPALA-14227:
----------------------------------------------------------
Commit 64abca481ffefcc67cee9e8c20de51e68238be95 in impala's branch
refs/heads/master from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=64abca481 ]
IMPALA-14227: In HA failover, passive catalogd should apply pending HMS events
before being active
After IMPALA-14074, the passive catalogd can have a warmed-up metadata
cache during failover (with catalogd_ha_reset_metadata_on_failover=false
and a non-empty warmup_tables_config_file). However, it could still
serve a stale metadata cache when pending HMS events generated by the
previous active catalogd have not yet been applied.
This patch adds a wait during HA failover to ensure that all HMS events
generated before the failover are applied on the new active catalogd.
The timeout for this wait is configured by a new flag,
catalogd_ha_failover_catchup_timeout_s, which defaults to 300 (5
minutes). When the timeout is reached, catalogd by default falls back
to resetting all metadata. Users can instead choose to continue using
the current cache via another flag,
catalogd_ha_reset_metadata_on_failover_catchup_timeout.
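A minimal sketch of this catch-up wait (the helper names below are
hypothetical stand-ins, not Impala's actual API):
{code:python}
import time

def wait_for_failover_catchup(events_processor, target_event_id,
                              timeout_s=300, reset_on_timeout=True):
    """Block until all HMS events up to target_event_id (the latest HMS
    event id observed at failover time) are applied, or time out."""
    deadline = time.monotonic() + timeout_s
    while events_processor.last_synced_event_id() < target_event_id:
        if time.monotonic() >= deadline:
            if reset_on_timeout:
                # Default behavior on timeout: drop the possibly stale cache.
                events_processor.reset_all_metadata()
            return False
        time.sleep(1)
    return True
{code}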
Since the passive catalogd depends on HMS event processing to keep its
metadata up to date with the active catalogd, this patch adds validation
that prevents starting catalogd with
catalogd_ha_reset_metadata_on_failover set to false while
hms_event_polling_interval_s <= 0.
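A minimal sketch of that validation (illustrative, not the actual
flag-handling code):
{code:python}
def validate_catalogd_ha_flags(reset_metadata_on_failover,
                               hms_event_polling_interval_s):
    # The passive catalogd can only keep a non-reset cache in sync via
    # HMS event processing, so this flag combination is rejected.
    if not reset_metadata_on_failover and hms_event_polling_interval_s <= 0:
        raise SystemExit(
            "catalogd_ha_reset_metadata_on_failover=false requires "
            "hms_event_polling_interval_s > 0")
{code}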
This patch also makes catalogd_ha_reset_metadata_on_failover a
non-hidden flag so that it shows up on the /varz web page.
Tests:
- Ran test_warmed_up_metadata_after_failover 200 times. Without the
fix, it usually fails within several runs.
- Added new tests for the new flags (see the configuration sketch
below).
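For illustration, a custom-cluster test could wire up the new flags as
below (the decorator style follows Impala's custom-cluster test
framework; the exact arguments here are assumptions, not the patch's
actual tests):
{code:python}
from tests.common.custom_cluster_test_suite import CustomClusterTestSuite

class TestCatalogdHACatchup(CustomClusterTestSuite):

  @CustomClusterTestSuite.with_args(
      catalogd_args="--catalogd_ha_reset_metadata_on_failover=false "
                    "--hms_event_polling_interval_s=1 "
                    "--catalogd_ha_failover_catchup_timeout_s=300 "
                    "--catalogd_ha_reset_metadata_on_failover_catchup_timeout"
                    "=true",
      start_args="--enable_catalogd_ha")
  def test_failover_catchup(self):
    # Exercise a failover and verify the new active catalogd caught up
    # on pending HMS events before serving metadata.
    pass
{code}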
Change-Id: Icf4fcb0e27c14197f79625749949b47c033a5f31
Reviewed-on: http://gerrit.cloudera.org:8080/23174
Reviewed-by: Impala Public Jenkins <[email protected]>
Tested-by: Impala Public Jenkins <[email protected]>
> In HA failover, passive catalogd should apply pending HMS events before being active
> -------------------------------------------------------------------------------------
>
> Key: IMPALA-14227
> URL: https://issues.apache.org/jira/browse/IMPALA-14227
> Project: IMPALA
> Issue Type: Bug
> Reporter: Quanlong Huang
> Assignee: Quanlong Huang
> Priority: Blocker
>
> After IMPALA-14074, the passive catalogd can have a warmed-up metadata cache
> during failover (with catalogd_ha_reset_metadata_on_failover=false). However,
> it could still have pending HMS events that are not yet applied, and so would
> be using a stale metadata cache.
> For instance, the active catalogd creates a table and then crashes. The passive
> catalogd should apply the CREATE_TABLE event before becoming active. Otherwise,
> Impala queries might see stale metadata for a while (until the new catalogd
> catches up with the HMS events generated by the previous active catalogd).
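> A minimal sketch of this sequence (the cluster/client helpers here are
> hypothetical, not the actual test utilities):
> {code:python}
> def reproduce_stale_metadata(cluster, client, db):
>     # Handled by the active catalogd, which generates a CREATE_TABLE
>     # HMS event.
>     client.execute("create table %s.tbl (i int)" % db)
>     # The active catalogd crashes before the standby applies the event.
>     cluster.catalogds()[0].kill()
>     # The standby becomes the new active catalogd.
>     cluster.wait_for_failover()
>     # Without the fix, this may fail with "Could not resolve path"
>     # until the new active catalogd catches up on HMS events.
>     client.execute("describe %s.tbl" % db)
> {code}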
> There is a test failure caused by this:
> {code:python}
> custom_cluster/test_catalogd_ha.py:540: in test_warmed_up_metadata_after_failover
>     latest_catalogd = self._test_metadata_after_failover(unique_database, True)
> custom_cluster/test_catalogd_ha.py:584: in _test_metadata_after_failover
>     self.execute_query_expect_success(self.client, "describe %s.tbl" % unique_database)
> common/impala_test_suite.py:1121: in wrapper
>     return function(*args, **kwargs)
> common/impala_test_suite.py:1131: in execute_query_expect_success
>     result = cls.__execute_query(impalad_client, query, query_options, user)
> common/impala_test_suite.py:1294: in __execute_query
>     return impalad_client.execute(query, user=user)
> common/impala_connection.py:687: in execute
>     cursor.execute(sql_stmt, configuration=self.__query_options)
> ../infra/python/env-gcc10.4.0/lib/python2.7/site-packages/impala/hiveserver2.py:392: in execute
>     configuration=configuration)
> ../infra/python/env-gcc10.4.0/lib/python2.7/site-packages/impala/hiveserver2.py:443: in execute_async
>     self._execute_async(op)
> ../infra/python/env-gcc10.4.0/lib/python2.7/site-packages/impala/hiveserver2.py:462: in _execute_async
>     operation_fn()
> ../infra/python/env-gcc10.4.0/lib/python2.7/site-packages/impala/hiveserver2.py:440: in op
>     run_async=True)
> ../infra/python/env-gcc10.4.0/lib/python2.7/site-packages/impala/hiveserver2.py:1324: in execute
>     return self._operation('ExecuteStatement', req, False)
> ../infra/python/env-gcc10.4.0/lib/python2.7/site-packages/impala/hiveserver2.py:1244: in _operation
>     resp = self._rpc(kind, request, safe_to_retry)
> ../infra/python/env-gcc10.4.0/lib/python2.7/site-packages/impala/hiveserver2.py:1181: in _rpc
>     err_if_rpc_not_ok(response)
> ../infra/python/env-gcc10.4.0/lib/python2.7/site-packages/impala/hiveserver2.py:867: in err_if_rpc_not_ok
>     raise HiveServer2Error(resp.status.errorMessage)
> E   HiveServer2Error: Query eb405217bbb418ee:a1033c0000000000 failed:
> E   AnalysisException: Could not resolve path: 'test_warmed_up_metadata_after_failover_452d93b4.tbl'{code}