[
https://issues.apache.org/jira/browse/PHOENIX-7111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17794527#comment-17794527
]
ASF GitHub Bot commented on PHOENIX-7111:
-----------------------------------------
palashc commented on PR #1744:
URL: https://github.com/apache/phoenix/pull/1744#issuecomment-1846580671
> Also the newly introduced metric failed in the latest run.
https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1744/9/testReport/org.apache.phoenix.cache/ServerMetadataCacheTest/testServerSideMetrics/
@shahrs87 The test passed locally but failed here because of one extra
background query from
[TaskRegionObserver](https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TaskRegionObserver.java#L166),
which runs every minute and looks for tasks to execute. In the Jenkins run,
its SYSTEM.TASK query coincided with the new test and skewed the expected
metric counts, causing the failure. I have two questions about this:
1. Do you think System tables should never be replaced in the
ServerMetadataCache?
2. Should we exclude System tables from these new metrics, or should I create
separate metrics for tracking validateDDL requests and cache hits/misses for
system tables?
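One way to realize option 2's first alternative is to skip SYSTEM tables when recording the metric, so background traffic such as TaskRegionObserver's SYSTEM.TASK scan cannot perturb test counts. The sketch below is illustrative only; the class and method names (`MetadataCacheMetricsSketch`, `recordValidateRequest`) are assumptions, not Phoenix APIs.

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch: exclude SYSTEM-schema tables from the
// validateLastDDLTimestamp request counter. Not the Phoenix implementation.
public class MetadataCacheMetricsSketch {
    private final LongAdder validateRequests = new LongAdder();

    /** Record a validateLastDDLTimestamp request, ignoring SYSTEM tables. */
    public void recordValidateRequest(String schemaName, String tableName) {
        if ("SYSTEM".equalsIgnoreCase(schemaName)) {
            return; // background system-table queries do not count
        }
        validateRequests.increment();
    }

    public long getValidateRequestCount() {
        return validateRequests.sum();
    }
}
```

With this shape, a test asserting on `getValidateRequestCount()` would be insensitive to when the TaskRegionObserver timer happens to fire.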
> Metrics for server-side metadata cache
> --------------------------------------
>
> Key: PHOENIX-7111
> URL: https://issues.apache.org/jira/browse/PHOENIX-7111
> Project: Phoenix
> Issue Type: Sub-task
> Reporter: Palash Chauhan
> Assignee: Palash Chauhan
> Priority: Major
>
> Add metrics for monitoring the new metadata caching design.
> # Time taken to invalidate cache on all region servers during a DDL operation
> # Number of failed DDL operations due to failure in cache invalidation
> # Number of validateLastDDLTimestamp requests per region-server
> # Cache hits/misses per region-server when validating timestamps
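The four metrics listed in the ticket could be sketched as simple counters and a timer, roughly as below. All names here are illustrative assumptions, not the actual Phoenix metric implementation.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of the four proposed metrics as plain counters.
public class DdlCacheMetricsSketch {
    // 1. Time taken by the last cache-invalidation broadcast during a DDL.
    final AtomicLong lastInvalidationNanos = new AtomicLong();
    // 2. DDL operations failed due to cache-invalidation failure.
    final LongAdder failedDdlInvalidations = new LongAdder();
    // 3. validateLastDDLTimestamp requests on this region server.
    final LongAdder validateRequests = new LongAdder();
    // 4. Cache hits/misses while validating timestamps.
    final LongAdder cacheHits = new LongAdder();
    final LongAdder cacheMisses = new LongAdder();

    /** Time an invalidation broadcast and record a failure if it throws. */
    void recordInvalidation(Runnable broadcast) {
        long start = System.nanoTime();
        try {
            broadcast.run();
        } catch (RuntimeException e) {
            failedDdlInvalidations.increment();
            throw e;
        } finally {
            lastInvalidationNanos.set(System.nanoTime() - start);
        }
    }

    /** Record one timestamp-validation lookup and whether it hit the cache. */
    void recordLookup(boolean hit) {
        validateRequests.increment();
        (hit ? cacheHits : cacheMisses).increment();
    }
}
```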
--
This message was sent by Atlassian Jira
(v8.20.10#820010)