Hello Tamas Mate, Qifan Chen, Riza Suminto, [email protected], Impala 
Public Jenkins,

I'd like you to reexamine a change. Please visit

    http://gerrit.cloudera.org:8080/19234

to look at the new patch set (#3).

Change subject: IMPALA-11721: Impala query keeps being retried over frequently
updated Iceberg table
......................................................................

IMPALA-11721: Impala query keeps being retried over frequently updated Iceberg
table

Iceberg table loading can fail in local catalog mode if the table gets
updated frequently. This is what happens during table loading in local
catalog mode: every query starts with its own empty local catalog.
Table metadata is fetched in multiple requests via a MetaProvider, which
is always a CatalogdMetaProvider. CatalogdMetaProvider caches requests,
and the cache key also includes the table's catalog version.

The Iceberg table is loaded by the following requests:

1 CatalogdMetaProvider.loadTable()
2 CatalogdMetaProvider.loadIcebergTable()
3 CatalogdMetaProvider.loadIcebergApiTable() # This actually directly
                                             # loads the Iceberg table
                                             # via Iceberg API
                                             # (no CatalogD involved)
4 CatalogdMetaProvider.loadTableColumnStatistics()
5 CatalogdMetaProvider.loadPartitionList()
6 CatalogdMetaProvider.loadPartitionsByRefs()

Steps 1-4 happen during table loading, steps 5-6 happen during
planning. We cannot really reorder these invocations, but since
CatalogdMetaProvider caches them, only the very first invocations need
to reach out to CatalogD and check the table's catalog version.
Subsequent invocations, i.e. subsequent queries that use the Iceberg
table, can use the cached metadata. There is no need to check the
catalog version of the cached metadata, because the cache key already
includes the catalog version, so a cache hit guarantees we have the
corresponding metadata.

This patch resolves the issue by pre-warming the metaprovider's cache
before issuing loadIcebergApiTable() so the CatalogdMetaProvider.load*()
operations can be served from cache.
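The effect of pre-warming can be illustrated with a minimal sketch (class
and method names below are hypothetical, not the actual Impala code): a
tiny cache whose keys include the catalog version, mimicking
CatalogdMetaProvider's fine-grained caching. Once the load*() calls have
been warmed at version v, later lookups for version v are served locally
and never reach the backend again.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the pre-warming idea (not the actual Impala
// classes). The cache key includes the catalog version, so entries
// warmed at one version stay valid for the rest of that query even if
// the table is updated concurrently.
class PrewarmSketch {
  private final Map<String, String> cache = new HashMap<>();
  private int backendFetches = 0;

  // Load metadata for (table, catalogVersion); only a cache miss
  // reaches out to the (simulated) catalogd backend.
  String load(String table, long catalogVersion) {
    String key = table + "@" + catalogVersion;
    return cache.computeIfAbsent(key, k -> {
      backendFetches++;  // stands in for an RPC to CatalogD
      return "metadata-of-" + k;
    });
  }

  int getBackendFetches() { return backendFetches; }
}
```

Repeated load() calls for the same table and catalog version cost a
single backend fetch; only a new catalog version triggers another one.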

So what happens when the metaprovider's cache gets invalidated due to
concurrent updates to the table and we are still processing the query?
No problem: only the top-level TableCacheKey gets invalidated. The
cache will still be able to answer the fine-grained load requests that
are keyed by the now outdated catalog version. E.g. ColStatsCacheKey
hashes db name, table name, catalog version, and column name as a key
in the cache. Therefore the current query processing can be finished
using a consistent state of the metadata. Subsequent queries will use
a newer version of the table.
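A minimal sketch of such a version-aware key (names simplified, not the
actual ColStatsCacheKey implementation) shows why invalidating the
top-level table entry leaves the fine-grained entries reachable: keys
built for the old catalog version still compare equal to themselves, and
a newer version simply produces distinct keys.

```java
import java.util.Objects;

// Hypothetical sketch modeled on CatalogdMetaProvider's
// ColStatsCacheKey: db name, table name, catalog version, and column
// name together identify one cached column-stats entry.
class VersionedColStatsKey {
  private final String dbName;
  private final String tableName;
  private final long catalogVersion;
  private final String columnName;

  VersionedColStatsKey(String dbName, String tableName,
      long catalogVersion, String columnName) {
    this.dbName = dbName;
    this.tableName = tableName;
    this.catalogVersion = catalogVersion;
    this.columnName = columnName;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof VersionedColStatsKey)) return false;
    VersionedColStatsKey other = (VersionedColStatsKey) o;
    return catalogVersion == other.catalogVersion
        && dbName.equals(other.dbName)
        && tableName.equals(other.tableName)
        && columnName.equals(other.columnName);
  }

  @Override
  public int hashCode() {
    // The catalog version participates in the hash, so entries for
    // different table versions never collide into one logical key.
    return Objects.hash(dbName, tableName, catalogVersion, columnName);
  }
}
```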

Testing:
 * modified test_insert_stress.py so it won't tolerate inconsistent
   metadata fetch exceptions (Frontend already tolerates them
   to some degree)

Change-Id: Iac28224b2b6d67725eeb17f3e9d813ba622edb43
---
M fe/src/main/java/org/apache/impala/catalog/local/LocalIcebergTable.java
M tests/stress/test_insert_stress.py
2 files changed, 42 insertions(+), 26 deletions(-)


  git pull ssh://gerrit.cloudera.org:29418/Impala-ASF refs/changes/34/19234/3
--
To view, visit http://gerrit.cloudera.org:8080/19234
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: Impala-ASF
Gerrit-Branch: master
Gerrit-MessageType: newpatchset
Gerrit-Change-Id: Iac28224b2b6d67725eeb17f3e9d813ba622edb43
Gerrit-Change-Number: 19234
Gerrit-PatchSet: 3
Gerrit-Owner: Zoltan Borok-Nagy <[email protected]>
Gerrit-Reviewer: Anonymous Coward <[email protected]>
Gerrit-Reviewer: Impala Public Jenkins <[email protected]>
Gerrit-Reviewer: Qifan Chen <[email protected]>
Gerrit-Reviewer: Riza Suminto <[email protected]>
Gerrit-Reviewer: Tamas Mate <[email protected]>
Gerrit-Reviewer: Zoltan Borok-Nagy <[email protected]>
