bobhan1 opened a new pull request, #61036: URL: https://github.com/apache/doris/pull/61036
## Proposed changes

This PR adds visibility into "hidden" time sinks in SyncRowset that are not covered by existing RPC metrics.

### Problem

The query profile shows `SyncRowsetTime: 3sec828ms` while all RPC detail metrics are 0, making it difficult to diagnose where the time is spent.

### Solution

Added 3 new profile metrics under `SyncRowsetTime`:

- **SyncRowsetBthreadScheduleWaitTime**: bthread scheduling delay (from task creation to execution start)
- **SyncRowsetMetaLockWaitTime**: total wait time for `_meta_lock` (`std::shared_mutex`) acquisitions
- **SyncRowsetSyncMetaLockWaitTime**: total wait time for `_sync_meta_lock` (`bthread::Mutex`) acquisitions

These metrics track lock wait times in:

- `CloudTablet::sync_rowsets()`
- `CloudTablet::sync_if_not_running()`
- `CloudMetaMgr::sync_tablet_rowsets_unlocked()`

### Checklist

- [x] Compiles and builds successfully
- [ ] Existing UT passed (needs verification)
- [ ] Manual testing done

## Types of changes

- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Docs Update / Doc Refactor
- [ ] Code Refactor (internal modification that does not change functionality)
- [ ] Test Case
- [ ] CI (CD related, like github action, k8s yaml, etc.)

## Further comments

Helps diagnose cases where high SyncRowsetTime is caused by:

- Lock contention between concurrent scans on the same tablet
- bthread worker pool saturation
