[
https://issues.apache.org/jira/browse/HDDS-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Mohammad Arafat Khan updated HDDS-9052:
---------------------------------------
Description:
Summary for Jira Ticket:
While analyzing recent GitHub acceptance test runs, it has been observed that
there is an issue with the new OM DB insights in Recon. The logs are flooded
with messages like the following during tests such as acceptance HA and
acceptance compat:
```
recon_1 | 2023-07-13 10:09:48,563 [pool-27-thread-1] INFO impl.OzoneManagerServiceProviderImpl: Number of updates received from OM : 10, SequenceNumber diff: 30, SequenceNumber Lag from OM 0.
recon_1 | 2023-07-13 10:09:48,563 [pool-27-thread-1] INFO impl.OzoneManagerServiceProviderImpl: Delta updates received from OM : 1 loops, 30 records
recon_1 | 2023-07-13 10:09:48,636 [pool-49-thread-1] WARN tasks.OmTableInsightTask: Update event does not have the old Key Info for #TRANSACTIONINFO.
recon_1 | 2023-07-13 10:09:48,636 [pool-49-thread-1] WARN tasks.OmTableInsightTask: Update event does not have the old Key Info for /vol-xgtxr/buc-oplxd.
recon_1 | 2023-07-13 10:09:48,638 [pool-49-thread-1] WARN tasks.OmTableInsightTask: Update event does not have the old Key Info for #TRANSACTIONINFO.
recon_1 | 2023-07-13 10:09:48,638 [pool-49-thread-1] WARN tasks.OmTableInsightTask: Update event does not have the old Key Info for /vol-xgtxr/buc-oplxd.
...
```
In some cases, the warning appears to be logged for every key received in the
update, which causes rapid log growth and makes the logs difficult to review.
This issue needs to be triaged, and appropriate action should be taken.
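Since the warning fires once per key, one possible mitigation is to count the
events and emit a single summary line per batch of delta updates instead of a
WARN per key. The sketch below is purely illustrative: the class and method
names are hypothetical and do not reflect Ozone's actual `OmTableInsightTask`
API.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: aggregate "missing old Key Info" events and log
// one summary per batch rather than one WARN per key.
public class MissingOldKeyInfoSummary {
    // Counter for update events that arrived without old key info.
    private final AtomicLong missingOldKeyInfoCount = new AtomicLong();

    // Called for each update event lacking old key info; per-key detail
    // would be logged at DEBUG level only, not WARN.
    void onUpdateEventWithoutOldKeyInfo(String key) {
        missingOldKeyInfoCount.incrementAndGet();
    }

    // Called once at the end of a delta-update batch: resets the counter
    // and returns a single summary message to log.
    String summarizeBatch() {
        long n = missingOldKeyInfoCount.getAndSet(0);
        return "Update events without old Key Info in this batch: " + n;
    }

    public static void main(String[] args) {
        MissingOldKeyInfoSummary summary = new MissingOldKeyInfoSummary();
        summary.onUpdateEventWithoutOldKeyInfo("#TRANSACTIONINFO");
        summary.onUpdateEventWithoutOldKeyInfo("/vol-xgtxr/buc-oplxd");
        // Prints: Update events without old Key Info in this batch: 2
        System.out.println(summary.summarizeBatch());
    }
}
```

This keeps one informative line per batch in the logs while the per-key detail
stays available at a lower log level if needed.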
was:
Summary for Jira Ticket:
While analyzing recent GitHub acceptance test runs, it has been observed that
there is an issue with the new OM DB insights in Recon. The logs are flooded
with messages like the following during tests such as acceptance HA and
acceptance compat:
```
recon_1 | 2023-07-13 10:09:48,563 [pool-27-thread-1] INFO impl.OzoneManagerServiceProviderImpl: Number of updates received from OM : 10, SequenceNumber diff: 30, SequenceNumber Lag from OM 0.
recon_1 | 2023-07-13 10:09:48,563 [pool-27-thread-1] INFO impl.OzoneManagerServiceProviderImpl: Delta updates received from OM : 1 loops, 30 records
recon_1 | 2023-07-13 10:09:48,636 [pool-49-thread-1] WARN tasks.OmTableInsightTask: Update event does not have the old Key Info for #TRANSACTIONINFO.
recon_1 | 2023-07-13 10:09:48,636 [pool-49-thread-1] WARN tasks.OmTableInsightTask: Update event does not have the old Key Info for /vol-xgtxr/buc-oplxd.
recon_1 | 2023-07-13 10:09:48,638 [pool-49-thread-1] WARN tasks.OmTableInsightTask: Update event does not have the old Key Info for #TRANSACTIONINFO.
recon_1 | 2023-07-13 10:09:48,638 [pool-49-thread-1] WARN tasks.OmTableInsightTask: Update event does not have the old Key Info for /vol-xgtxr/buc-oplxd.
...
```
In some cases, the warning appears to be logged for every key received in the
update, which causes rapid log growth and makes the logs difficult to review.
This issue needs to be triaged, and appropriate action should be taken.
> The logs in Recon are flooded with excessive warning messages, resulting in
> log overload
> -----------------------------------------------------------------------------------------
>
> Key: HDDS-9052
> URL: https://issues.apache.org/jira/browse/HDDS-9052
> Project: Apache Ozone
> Issue Type: Bug
> Reporter: Mohammad Arafat Khan
> Priority: Critical
>
> Summary for Jira Ticket:
> While analyzing recent GitHub acceptance test runs, it has been observed that
> there is an issue with the new OM DB insights in Recon. The logs are flooded
> with messages like the following during tests such as acceptance HA and
> acceptance compat:
>
> ```
> recon_1 | 2023-07-13 10:09:48,563 [pool-27-thread-1] INFO impl.OzoneManagerServiceProviderImpl: Number of updates received from OM : 10, SequenceNumber diff: 30, SequenceNumber Lag from OM 0.
> recon_1 | 2023-07-13 10:09:48,563 [pool-27-thread-1] INFO impl.OzoneManagerServiceProviderImpl: Delta updates received from OM : 1 loops, 30 records
> recon_1 | 2023-07-13 10:09:48,636 [pool-49-thread-1] WARN tasks.OmTableInsightTask: Update event does not have the old Key Info for #TRANSACTIONINFO.
> recon_1 | 2023-07-13 10:09:48,636 [pool-49-thread-1] WARN tasks.OmTableInsightTask: Update event does not have the old Key Info for /vol-xgtxr/buc-oplxd.
> recon_1 | 2023-07-13 10:09:48,638 [pool-49-thread-1] WARN tasks.OmTableInsightTask: Update event does not have the old Key Info for #TRANSACTIONINFO.
> recon_1 | 2023-07-13 10:09:48,638 [pool-49-thread-1] WARN tasks.OmTableInsightTask: Update event does not have the old Key Info for /vol-xgtxr/buc-oplxd.
> ...
> ```
> In some cases, the warning appears to be logged for every key received in
> the update, which causes rapid log growth and makes the logs difficult to
> review. This issue needs to be triaged, and appropriate action should be
> taken.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]