[
https://issues.apache.org/jira/browse/HDFS-16179?focusedWorklogId=640121&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-640121
]
ASF GitHub Bot logged work on HDFS-16179:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 20/Aug/21 05:13
Start Date: 20/Aug/21 05:13
Worklog Time Spent: 10m
Work Description: tomscut commented on pull request #3313:
URL: https://github.com/apache/hadoop/pull/3313#issuecomment-902436338
> @tomscut Thanks for comments.
>
> > but there are no redundancies in most cases
>
> I see that if shouldProcessExtraRedundancy returns true, it will enter this method.
> If there are no redundancies, it will not enter this method, is that right?
> Are there any reasons why it prints so many logs? Maybe there is a hidden reason?
Thanks @ferhui for your comments and careful consideration. I found this
related issue [HDFS-9876](https://issues.apache.org/jira/browse/HDFS-9876), which we can
refer to.
Hi @Jing9, could you please take a look at this? Thanks.
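To make the gating in that question concrete, here is a rough sketch of the
control flow as described in the comments above (only the method names come
from this thread; the parameters are placeholders, not the real BlockManager
signatures):
{code:java}
// Rough sketch of the flow discussed above. Parameter names are placeholders;
// the real BlockManager code differs in detail.
if (shouldProcessExtraRedundancy(/* replica counts for the block */)) {
  // Only blocks that appear to have extra redundancy reach this point, so
  // chooseExcessRedundancyStriped does not run for every block.
  chooseExcessRedundancyStriped(bc, nonExcess, storedBlock, delNodeHint);
}
{code}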
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 640121)
Time Spent: 1.5h (was: 1h 20m)
> Update loglevel for BlockManager#chooseExcessRedundancyStriped to avoid too
> much logs
> -------------------------------------------------------------------------------------
>
> Key: HDFS-16179
> URL: https://issues.apache.org/jira/browse/HDFS-16179
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 3.1.0
> Reporter: tomscut
> Assignee: tomscut
> Priority: Minor
> Labels: pull-request-available
> Attachments: log-count.jpg, logs.jpg
>
> Time Spent: 1.5h
> Remaining Estimate: 0h
>
> {code:java}
> private void chooseExcessRedundancyStriped(BlockCollection bc,
>     final Collection<DatanodeStorageInfo> nonExcess,
>     BlockInfo storedBlock,
>     DatanodeDescriptor delNodeHint) {
>   ...
>   // cardinality of found indicates the expected number of internal blocks
>   final int numOfTarget = found.cardinality();
>   final BlockStoragePolicy storagePolicy = storagePolicySuite.getPolicy(
>       bc.getStoragePolicyID());
>   final List<StorageType> excessTypes = storagePolicy.chooseExcess(
>       (short) numOfTarget, DatanodeStorageInfo.toStorageTypes(nonExcess));
>   if (excessTypes.isEmpty()) {
>     LOG.warn("excess types chosen for block {} among storages {} is empty",
>         storedBlock, nonExcess);
>     return;
>   }
>   ...
> }
> {code}
>
> IMO, this code is only detecting excess StorageTypes here, so lowering the
> log level to DEBUG has no adverse effect.
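>
> A minimal sketch of what the change could look like, assuming the fix is
> simply to demote this message from WARN to DEBUG (the actual change is in PR #3313):
> {code:java}
> // Sketch only, not the actual patch: the same check as above, with the
> // message demoted to DEBUG so it stays out of the NameNode log by default.
> if (excessTypes.isEmpty()) {
>   LOG.debug("excess types chosen for block {} among storages {} is empty",
>       storedBlock, nonExcess);
>   return;
> }
> {code}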
>
> We have a cluster that uses the EC policy to store data. The current log
> level here is WARN, and in about 50 minutes, 286,093 such log lines were
> printed, which can drown out other important logs.
>
> !logs.jpg|width=1167,height=62!
>
> !log-count.jpg|width=760,height=30!
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]