[ https://issues.apache.org/jira/browse/HDFS-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15340851#comment-15340851 ]

Colin Patrick McCabe commented on HDFS-10448:
---------------------------------------------

Hi [~linyiqun],

Sorry, I misread the patch the first time around.  You are indeed changing 
computeNeeded to take the replication factor into account, which seems like a 
better way to go.

+1

> CacheManager#addInternal tracks bytesNeeded incorrectly when dealing with 
> replication factors other than 1
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10448
>                 URL: https://issues.apache.org/jira/browse/HDFS-10448
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: caching
>    Affects Versions: 2.7.1
>            Reporter: Yiqun Lin
>            Assignee: Yiqun Lin
>         Attachments: HDFS-10448.001.patch
>
>
> The logic in {{CacheManager#checkLimit}} is not correct. The method performs 
> three steps:
> First, it computes the bytes needed for the given path.
> {code}
> CacheDirectiveStats stats = computeNeeded(path, replication);
> {code}
> But the {{replication}} parameter is not applied inside {{computeNeeded}}, so 
> the returned bytesNeeded covers only a single replica.
> {code}
> return new CacheDirectiveStats.Builder()
>         .setBytesNeeded(requestedBytes)
>         .setFilesCached(requestedFiles)
>         .build();
> {code}
> Second, the result must therefore be multiplied by the replication factor 
> before being compared against the pool limit, because {{computeNeeded}} did 
> not apply it:
> {code}
> pool.getBytesNeeded() + (stats.getBytesNeeded() * replication) > 
> pool.getLimit()
> {code}
> Third, if the size exceeds the limit, an exception is thrown. The message 
> divides by replication here, but {{stats.getBytesNeeded()}} already holds the 
> single-replica value, so the division is incorrect.
> {code}
>       throw new InvalidRequestException("Caching path " + path + " of size "
>           + stats.getBytesNeeded() / replication + " bytes at replication "
>           + replication + " would exceed pool " + pool.getPoolName()
>           + "'s remaining capacity of "
>           + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
> {code}
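To make the fix concrete, here is a minimal sketch, not the real CacheManager code, of the approach the patch takes: computeNeeded itself accounts for the replication factor, so checkLimit neither multiplies nor divides by replication afterwards. The names (computeNeededBytes, checkLimit's flattened parameters) are simplified stand-ins for illustration; the real code walks the namesystem and returns a CacheDirectiveStats.

```java
// Sketch of the corrected accounting: replication is applied once,
// inside computeNeededBytes, and nowhere else.
public class CacheLimitSketch {

    // Bytes needed to cache a file: file size times replication factor.
    static long computeNeededBytes(long fileBytes, short replication) {
        return fileBytes * replication;
    }

    // Throws if caching would exceed the pool's remaining capacity.
    static void checkLimit(long poolBytesNeeded, long poolLimit,
                           long fileBytes, short replication) {
        long needed = computeNeededBytes(fileBytes, replication);
        if (poolBytesNeeded + needed > poolLimit) {
            // 'needed' already accounts for replication, so it is
            // reported as-is, with no division by replication.
            throw new IllegalStateException("Caching of size " + needed
                + " bytes at replication " + replication
                + " would exceed remaining capacity of "
                + (poolLimit - poolBytesNeeded) + " bytes.");
        }
    }

    public static void main(String[] args) {
        // A 100-byte file at replication 3 needs 300 cached bytes.
        System.out.println(computeNeededBytes(100, (short) 3));
    }
}
```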



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
