[
https://issues.apache.org/jira/browse/HDDS-9051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Attila Doroszlai resolved HDDS-9051.
------------------------------------
Fix Version/s: 1.5.0
Resolution: Done
> Change the log level when no nodes are available in NetworkTopologyImpl
> -----------------------------------------------------------------------
>
> Key: HDDS-9051
> URL: https://issues.apache.org/jira/browse/HDDS-9051
> Project: Apache Ozone
> Issue Type: Task
> Reporter: Siddhant Sangwan
> Assignee: Tejaskriya Madhan
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.5.0
>
>
> We currently have this log message in NetworkTopologyImpl set at the WARN
> level:
> {code}
> LOG.warn("No available node in (scope=\"{}\" excludedScope=\"{}\" " +
> "excludedNodes=\"{}\" ancestorGen=\"{}\").",
> scopeNode.getNetworkFullPath(), excludedScopes, excludedNodes,
> ancestorGen);
> {code}
> It usually shows up alongside a series of related, repeating log lines in
> this pattern:
> {code}
> 2023-07-19 13:43:47,381 [main] WARN net.NetworkTopologyImpl
> (NetworkTopologyImpl.java:chooseNodeInternal(651)) - No available node in
> (scope="/rack2" excludedScope="null"
> excludedNodes="[e7cd8996-5fad-46a0-b8e2-9debede578d6(node0/87.128.240.111),
> a683a02f-94fd-4a3a-b059-6771e1dad24e(node1/183.114.115.141),
> 439665c0-1063-4a60-abe5-ff48a4687a1b(node5/204.110.22.90),
> 3eb4ccab-97f2-41a9-88d6-75bb4c436010(node3/206.217.64.205),
> d306212d-3cfa-43f6-99e4-f577d2f53d3c(node4/84.38.152.124)]" ancestorGen="0").
> 2023-07-19 13:43:47,382 [main] DEBUG
> algorithms.SCMContainerPlacementRackScatter
> (SCMContainerPlacementRackScatter.java:chooseNode(466)) - Failed to find the
> datanode for container. excludedNodes:
> [e7cd8996-5fad-46a0-b8e2-9debede578d6(node0/87.128.240.111),
> a683a02f-94fd-4a3a-b059-6771e1dad24e(node1/183.114.115.141),
> 439665c0-1063-4a60-abe5-ff48a4687a1b(node5/204.110.22.90),
> 3eb4ccab-97f2-41a9-88d6-75bb4c436010(node3/206.217.64.205),
> d306212d-3cfa-43f6-99e4-f577d2f53d3c(node4/84.38.152.124)], rack /rack2
> 2023-07-19 13:43:47,382 [main] WARN net.NetworkTopologyImpl
> (NetworkTopologyImpl.java:chooseNodeInternal(651)) - No available node in
> (scope="/rack2" excludedScope="null"
> excludedNodes="[e7cd8996-5fad-46a0-b8e2-9debede578d6(node0/87.128.240.111),
> a683a02f-94fd-4a3a-b059-6771e1dad24e(node1/183.114.115.141),
> 439665c0-1063-4a60-abe5-ff48a4687a1b(node5/204.110.22.90),
> 3eb4ccab-97f2-41a9-88d6-75bb4c436010(node3/206.217.64.205),
> d306212d-3cfa-43f6-99e4-f577d2f53d3c(node4/84.38.152.124)]" ancestorGen="0").
> 2023-07-19 13:43:47,382 [main] DEBUG
> algorithms.SCMContainerPlacementRackScatter
> (SCMContainerPlacementRackScatter.java:chooseNode(466)) - Failed to find the
> datanode for container. excludedNodes:
> [e7cd8996-5fad-46a0-b8e2-9debede578d6(node0/87.128.240.111),
> a683a02f-94fd-4a3a-b059-6771e1dad24e(node1/183.114.115.141),
> 439665c0-1063-4a60-abe5-ff48a4687a1b(node5/204.110.22.90),
> 3eb4ccab-97f2-41a9-88d6-75bb4c436010(node3/206.217.64.205),
> d306212d-3cfa-43f6-99e4-f577d2f53d3c(node4/84.38.152.124)], rack /rack2
> 2023-07-19 13:43:47,382 [main] WARN net.NetworkTopologyImpl
> (NetworkTopologyImpl.java:chooseNodeInternal(651)) - No available node in
> (scope="/rack2" excludedScope="null"
> excludedNodes="[e7cd8996-5fad-46a0-b8e2-9debede578d6(node0/87.128.240.111),
> a683a02f-94fd-4a3a-b059-6771e1dad24e(node1/183.114.115.141),
> 439665c0-1063-4a60-abe5-ff48a4687a1b(node5/204.110.22.90),
> 3eb4ccab-97f2-41a9-88d6-75bb4c436010(node3/206.217.64.205),
> d306212d-3cfa-43f6-99e4-f577d2f53d3c(node4/84.38.152.124)]" ancestorGen="0").
> 2023-07-19 13:43:47,382 [main] DEBUG
> algorithms.SCMContainerPlacementRackScatter
> (SCMContainerPlacementRackScatter.java:chooseNode(466)) - Failed to find the
> datanode for container. excludedNodes:
> [e7cd8996-5fad-46a0-b8e2-9debede578d6(node0/87.128.240.111),
> a683a02f-94fd-4a3a-b059-6771e1dad24e(node1/183.114.115.141),
> 439665c0-1063-4a60-abe5-ff48a4687a1b(node5/204.110.22.90),
> 3eb4ccab-97f2-41a9-88d6-75bb4c436010(node3/206.217.64.205),
> d306212d-3cfa-43f6-99e4-f577d2f53d3c(node4/84.38.152.124)], rack /rack2
> 2023-07-19 13:43:47,382 [main] WARN net.NetworkTopologyImpl
> (NetworkTopologyImpl.java:chooseNodeInternal(651)) - No available node in
> (scope="/rack2" excludedScope="null"
> excludedNodes="[e7cd8996-5fad-46a0-b8e2-9debede578d6(node0/87.128.240.111),
> a683a02f-94fd-4a3a-b059-6771e1dad24e(node1/183.114.115.141),
> 439665c0-1063-4a60-abe5-ff48a4687a1b(node5/204.110.22.90),
> 3eb4ccab-97f2-41a9-88d6-75bb4c436010(node3/206.217.64.205),
> d306212d-3cfa-43f6-99e4-f577d2f53d3c(node4/84.38.152.124)]" ancestorGen="0").
> 2023-07-19 13:43:47,383 [main] DEBUG
> algorithms.SCMContainerPlacementRackScatter
> (SCMContainerPlacementRackScatter.java:chooseNode(466)) - Failed to find the
> datanode for container. excludedNodes:
> [e7cd8996-5fad-46a0-b8e2-9debede578d6(node0/87.128.240.111),
> a683a02f-94fd-4a3a-b059-6771e1dad24e(node1/183.114.115.141),
> 439665c0-1063-4a60-abe5-ff48a4687a1b(node5/204.110.22.90),
> 3eb4ccab-97f2-41a9-88d6-75bb4c436010(node3/206.217.64.205),
> d306212d-3cfa-43f6-99e4-f577d2f53d3c(node4/84.38.152.124)], rack /rack2
> 2023-07-19 13:43:47,383 [main] INFO
> algorithms.SCMContainerPlacementRackScatter
> (SCMContainerPlacementRackScatter.java:chooseNode(472)) - No satisfied
> datanode to meet the constraints. Metadatadata size required: 0 Data size
> required: 5, scope /rack2, excluded nodes
> [e7cd8996-5fad-46a0-b8e2-9debede578d6(node0/87.128.240.111),
> a683a02f-94fd-4a3a-b059-6771e1dad24e(node1/183.114.115.141),
> 439665c0-1063-4a60-abe5-ff48a4687a1b(node5/204.110.22.90),
> 3eb4ccab-97f2-41a9-88d6-75bb4c436010(node3/206.217.64.205),
> d306212d-3cfa-43f6-99e4-f577d2f53d3c(node4/84.38.152.124)]
> {code}
> I wonder if we’re better off setting it to the DEBUG level and relying on
> the INFO message at the end as a summary of what happened.
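> A minimal sketch of the proposed demotion (assuming the existing SLF4J LOG
> field in NetworkTopologyImpl; only the level changes):
> {code}
> // Demote from WARN to DEBUG: exhausting a single scope is expected during
> // placement retries, and SCMContainerPlacementRackScatter already emits an
> // INFO summary when no node can satisfy the constraints.
> LOG.debug("No available node in (scope=\"{}\" excludedScope=\"{}\" " +
>         "excludedNodes=\"{}\" ancestorGen=\"{}\").",
>     scopeNode.getNetworkFullPath(), excludedScopes, excludedNodes,
>     ancestorGen);
> {code}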