Yiqun Lin created HDFS-10448:
--------------------------------

             Summary: CacheManager#checkLimit is not correct
                 Key: HDFS-10448
                 URL: https://issues.apache.org/jira/browse/HDFS-10448
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: caching
    Affects Versions: 2.7.1
            Reporter: Yiqun Lin
            Assignee: Yiqun Lin


The logic in {{CacheManager#checkLimit}} is not correct. The method does three 
things:

First, it computes the bytes needed for the given path:
{code}
CacheDirectiveStats stats = computeNeeded(path, replication);
{code}
But the {{replication}} param is never actually used by {{computeNeeded}}, so 
the bytesNeeded it returns is only a single replica's value:
{code}
return new CacheDirectiveStats.Builder()
        .setBytesNeeded(requestedBytes)
        .setFilesCached(requestedFiles)
        .build();
{code}

Second, the result therefore has to be multiplied by the replication factor 
when comparing against the pool limit, because {{computeNeeded}} did not apply 
the replication:
{code}
pool.getBytesNeeded() + (stats.getBytesNeeded() * replication) > pool.getLimit()
{code}
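
A quick arithmetic example (hypothetical numbers) of why the multiplication is 
needed in this check:

{code}
// Assumed numbers, for illustration only.
long poolBytesNeeded = 50L * 1024 * 1024;   // pool already needs 50 MB
long poolLimit = 250L * 1024 * 1024;        // pool limit is 250 MB
long statsBytesNeeded = 100L * 1024 * 1024; // single-replica size from computeNeeded
short replication = 3;

// With the factor: 50 MB + 100 MB * 3 = 350 MB > 250 MB -> rejected, as it should be.
boolean exceeds = poolBytesNeeded + (statsBytesNeeded * replication) > poolLimit;

// Without the factor: 50 MB + 100 MB = 150 MB <= 250 MB -> wrongly accepted.
boolean wrongCheck = poolBytesNeeded + statsBytesNeeded > poolLimit;
{code}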

Third, if the size exceeds the limit, it throws an exception with a warning 
message. But the size is divided by replication here, even though 
{{stats.getBytesNeeded()}} is already just one replica's value, so the 
reported size is wrong:
{code}
      throw new InvalidRequestException("Caching path " + path + " of size "
          + stats.getBytesNeeded() / replication + " bytes at replication "
          + replication + " would exceed pool " + pool.getPoolName()
          + "'s remaining capacity of "
          + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
{code}
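
A minimal sketch of one possible fix, assuming the message is meant to report 
the single-replica size: simply drop the division, since 
{{stats.getBytesNeeded()}} already holds one replica's bytes.

{code}
      throw new InvalidRequestException("Caching path " + path + " of size "
          + stats.getBytesNeeded() + " bytes at replication "
          + replication + " would exceed pool " + pool.getPoolName()
          + "'s remaining capacity of "
          + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
{code}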



