Hello Andrew Sherman, Riza Suminto, Impala Public Jenkins,

I'd like you to reexamine a change. Please visit

    http://gerrit.cloudera.org:8080/20673

to look at the new patch set (#5).

Change subject: IMPALA-12486: Add catalog metrics for metadata loading
......................................................................

IMPALA-12486: Add catalog metrics for metadata loading

This patch adds the following catalog metrics, which indicate the load
on HDFS for loading file metadata (see the sketch after this list):
 - catalog-server.metadata.file.num-loading-threads: The total size of
   all thread pools used in loading file metadata.
 - catalog-server.metadata.file.num-loading-tasks: The total number of
   unfinished file metadata loading tasks. Each task corresponds to a
   partition.
 - catalog-server.metadata.table.num-loading-file-metadata: The total
   number of tables that are loading file metadata.
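
A minimal sketch (not part of this patch) of how these metrics could be
polled, assuming catalogd's default webserver port (25020) and the flat
/jsonmetrics debug endpoint:

    import json
    import urllib.request

    CATALOGD_WEB = "http://localhost:25020"  # assumed default catalogd web UI

    def get_metric(name):
        # Assumes /jsonmetrics returns a flat map of metric name -> value.
        with urllib.request.urlopen(CATALOGD_WEB + "/jsonmetrics") as resp:
            return json.load(resp).get(name)

    for m in ["catalog-server.metadata.file.num-loading-threads",
              "catalog-server.metadata.file.num-loading-tasks",
              "catalog-server.metadata.table.num-loading-file-metadata"]:
        print(m, "=", get_metric(m))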

Also adds some metrics for metadata loading on all tables. Note that
metadata loading of an HDFS table includes loading HMS metadata, HDFS
file metadata, etc.
 - catalog-server.metadata.table.num-loading-metadata: The total number
   of tables that are loading metadata.
 - catalog-server.metadata.table.async-loading.num-in-progress: The
   total number of tables that are loading metadata asynchronously,
   e.g. the initial metadata loading triggered by the first access to a
   table.
 - catalog-server.metadata.table.async-loading.queue-len: The total
   number of tables that are waiting for asynchronous loading. If this
   number keeps rising, consider bumping --num_metadata_loading_threads
   (see the example below).
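
For example (hypothetical value), if the queue length stays high,
catalogd can be restarted with a larger loading pool:

    catalogd --num_metadata_loading_threads=32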

Three metrics about the catalog cache are also added:
 - catalog.num-databases
 - catalog.num-tables
 - catalog.num-functions
Note that the first two are also shown in the WebUI of coordinators; we
plan to deprecate them there and show them only in catalogd's WebUI.

The numbers of idle and in-use HMS clients are also exposed in this
patch:
 - catalog.hms-client-pool.num-idle
 - catalog.hms-client-pool.num-in-use

Tests
 - Launched catalogd locally with load_catalog_in_background=true and
   verified the metrics.
 - Added e2e tests in tests/webserver/test_web_pages.py (a sketch of the
   kind of check follows).
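
A minimal sketch (hypothetical, not the actual test code) of the kind of
check such an e2e test can make, assuming catalogd's default webserver
port and the /metrics?json rendering of the metrics page:

    import requests

    def test_new_catalog_metrics_exposed():
        # Assumed default catalogd debug webserver port.
        resp = requests.get("http://localhost:25020/metrics?json")
        resp.raise_for_status()
        for metric in ["catalog.num-databases",
                       "catalog.num-tables",
                       "catalog.num-functions",
                       "catalog.hms-client-pool.num-idle",
                       "catalog.hms-client-pool.num-in-use"]:
            assert metric in resp.text, metric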

Change-Id: Icef7b123bdcb0f5b8572635eeaacd8294990f9ba
---
M be/src/catalog/catalog-server.cc
M be/src/catalog/catalog-server.h
M common/thrift/JniCatalog.thrift
M common/thrift/metrics.json
M fe/src/main/java/org/apache/impala/catalog/Catalog.java
M fe/src/main/java/org/apache/impala/catalog/CatalogObjectCache.java
M fe/src/main/java/org/apache/impala/catalog/CatalogServiceCatalog.java
M fe/src/main/java/org/apache/impala/catalog/DataSourceTable.java
M fe/src/main/java/org/apache/impala/catalog/Db.java
M fe/src/main/java/org/apache/impala/catalog/FileMetadataLoader.java
M fe/src/main/java/org/apache/impala/catalog/HBaseTable.java
M fe/src/main/java/org/apache/impala/catalog/HdfsTable.java
M fe/src/main/java/org/apache/impala/catalog/IcebergFileMetadataLoader.java
M fe/src/main/java/org/apache/impala/catalog/KuduTable.java
M fe/src/main/java/org/apache/impala/catalog/MetaStoreClientPool.java
M fe/src/main/java/org/apache/impala/catalog/ParallelFileMetadataLoader.java
M fe/src/main/java/org/apache/impala/catalog/Table.java
M fe/src/main/java/org/apache/impala/catalog/TableLoadingMgr.java
M fe/src/main/java/org/apache/impala/catalog/View.java
M fe/src/main/java/org/apache/impala/service/JniCatalog.java
M tests/webserver/test_web_pages.py
21 files changed, 366 insertions(+), 16 deletions(-)


  git pull ssh://gerrit.cloudera.org:29418/Impala-ASF refs/changes/73/20673/5
--
To view, visit http://gerrit.cloudera.org:8080/20673
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: Impala-ASF
Gerrit-Branch: master
Gerrit-MessageType: newpatchset
Gerrit-Change-Id: Icef7b123bdcb0f5b8572635eeaacd8294990f9ba
Gerrit-Change-Number: 20673
Gerrit-PatchSet: 5
Gerrit-Owner: Quanlong Huang <[email protected]>
Gerrit-Reviewer: Andrew Sherman <[email protected]>
Gerrit-Reviewer: Impala Public Jenkins <[email protected]>
Gerrit-Reviewer: Quanlong Huang <[email protected]>
Gerrit-Reviewer: Riza Suminto <[email protected]>
