[ 
https://issues.apache.org/jira/browse/HDFS-16179?focusedWorklogId=640096&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-640096
 ]

ASF GitHub Bot logged work on HDFS-16179:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 20/Aug/21 02:07
            Start Date: 20/Aug/21 02:07
    Worklog Time Spent: 10m 
      Work Description: tomscut edited a comment on pull request #3313:
URL: https://github.com/apache/hadoop/pull/3313#issuecomment-902367933


   > @tomscut Thanks for the contribution.
   > I'm confused here: is the log necessary or not? If it is necessary, then many
   > logs, as you said, drown out other logs. If it is not necessary, I think DEBUG
   > level is OK.
   
   Thanks @ferhui for your review. This method just detects excess redundancies 
   and deletes them, and in most cases there are no redundancies. IMO, the log is 
   not necessary here.
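   The change under discussion demotes the message from WARN to DEBUG. As a 
   self-contained sketch of why that silences the flood (using `java.util.logging` 
   instead of Hadoop's SLF4J logger, with hypothetical names), a DEBUG-level 
   message simply fails the logger's threshold check at a typical production level:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical stand-in for the empty-excess branch in
// chooseExcessRedundancyStriped: whether the message appears depends only on
// whether its level passes the logger's configured threshold.
public class LogLevelSketch {
    private static final Logger LOG = Logger.getLogger("LogLevelSketch");

    static boolean wouldEmit(Level messageLevel) {
        LOG.setLevel(Level.INFO);            // typical production threshold
        return LOG.isLoggable(messageLevel); // FINE (~DEBUG) is filtered out
    }

    public static void main(String[] args) {
        System.out.println(wouldEmit(Level.WARNING)); // prints true: old level floods the log
        System.out.println(wouldEmit(Level.FINE));    // prints false: proposed level is suppressed
    }
}
```

   With SLF4J the equivalent change is `LOG.warn(...)` to `LOG.debug(...)`, which 
   the logging backend filters the same way.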


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 640096)
    Time Spent: 1h 10m  (was: 1h)

> Update loglevel for BlockManager#chooseExcessRedundancyStriped to avoid too 
> many logs
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-16179
>                 URL: https://issues.apache.org/jira/browse/HDFS-16179
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 3.1.0
>            Reporter: tomscut
>            Assignee: tomscut
>            Priority: Minor
>              Labels: pull-request-available
>         Attachments: log-count.jpg, logs.jpg
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {code:java}
> private void chooseExcessRedundancyStriped(BlockCollection bc,
>     final Collection<DatanodeStorageInfo> nonExcess,
>     BlockInfo storedBlock,
>     DatanodeDescriptor delNodeHint) {
>   ...
>   // cardinality of found indicates the expected number of internal blocks
>   final int numOfTarget = found.cardinality();
>   final BlockStoragePolicy storagePolicy = storagePolicySuite.getPolicy(
>       bc.getStoragePolicyID());
>   final List<StorageType> excessTypes = storagePolicy.chooseExcess(
>       (short) numOfTarget, DatanodeStorageInfo.toStorageTypes(nonExcess));
>   if (excessTypes.isEmpty()) {
>     LOG.warn("excess types chosen for block {} among storages {} is empty",
>         storedBlock, nonExcess);
>     return;
>   }
>   ...
> }
> {code}
>  
> IMO, this code only detects excess StorageTypes, so lowering the log level to 
> DEBUG here has no adverse effect.
>  
> We have a cluster that uses the EC policy to store data. With the current WARN 
> level here, 286,093 of these logs were printed in about 50 minutes, which can 
> drown out other important logs.
>  
> !logs.jpg|width=1167,height=62!
>  
> !log-count.jpg|width=760,height=30!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
