[jira] [Updated] (HDFS-10448) CacheManager#addInternal tracks bytesNeeded incorrectly when dealing with replication factors other than 1

2016-06-20 Thread Colin Patrick McCabe (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HDFS-10448:

  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

> CacheManager#addInternal tracks bytesNeeded incorrectly when dealing with 
> replication factors other than 1
> --
>
> Key: HDFS-10448
> URL: https://issues.apache.org/jira/browse/HDFS-10448
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 2.8.0
>
> Attachments: HDFS-10448.001.patch
>
>
> The logic in {{CacheManager#checkLimit}} is not correct. The method performs 
> three steps:
> First, it computes the bytes needed for the given path.
> {code}
> CacheDirectiveStats stats = computeNeeded(path, replication);
> {code}
> However, the {{replication}} parameter is not used there, so the returned 
> bytesNeeded covers only a single replica's value.
> {code}
> return new CacheDirectiveStats.Builder()
> .setBytesNeeded(requestedBytes)
> .setFilesCached(requestedFiles)
> .build();
> {code}
> Second, the result must be multiplied by the replication factor before 
> comparing against the pool limit, because {{computeNeeded}} did not apply 
> replication:
> {code}
> pool.getBytesNeeded() + (stats.getBytesNeeded() * replication) > 
> pool.getLimit()
> {code}
> Third, if the size exceeds the limit, an exception is thrown. But the 
> message divides by replication, even though {{stats.getBytesNeeded()}} is 
> already a single-replica value:
> {code}
>   throw new InvalidRequestException("Caching path " + path + " of size "
>   + stats.getBytesNeeded() / replication + " bytes at replication "
>   + replication + " would exceed pool " + pool.getPoolName()
>   + "'s remaining capacity of "
>   + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
> {code}
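The intended behavior described above can be sketched as follows. This is a minimal, simplified illustration, not the actual Hadoop code: the class and parameter names here are stand-ins for {{CachePool}} and {{CacheDirectiveStats}}. The idea is to apply the replication factor exactly once when computing bytesNeeded, so the limit comparison and the error message agree on the same value.

```java
// Simplified sketch of a replication-aware limit check.
// Names are hypothetical; the real logic lives in CacheManager#checkLimit.
public class CheckLimitSketch {

    // Total bytes needed = single-replica bytes * replication factor,
    // applied exactly once.
    static long bytesNeeded(long perReplicaBytes, short replication) {
        return perReplicaBytes * replication;
    }

    // Returns an error message if caching would exceed the pool limit,
    // or null if it fits. The message reports the already-multiplied
    // size directly, with no division by replication.
    static String checkLimit(long poolBytesNeeded, long poolLimit,
                             long perReplicaBytes, short replication) {
        long needed = bytesNeeded(perReplicaBytes, replication);
        if (poolBytesNeeded + needed > poolLimit) {
            return "Caching of size " + needed + " bytes at replication "
                    + replication + " would exceed remaining capacity of "
                    + (poolLimit - poolBytesNeeded) + " bytes.";
        }
        return null;
    }
}
```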



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10448) CacheManager#addInternal tracks bytesNeeded incorrectly when dealing with replication factors other than 1

2016-06-20 Thread Colin Patrick McCabe (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HDFS-10448:

Summary: CacheManager#addInternal tracks bytesNeeded incorrectly when 
dealing with replication factors other than 1  (was: CacheManager#checkLimit 
always assumes a replication factor of 1)
