[ https://issues.apache.org/jira/browse/HDFS-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13938232#comment-13938232 ]

Hudson commented on HDFS-6107:
------------------------------

SUCCESS: Integrated in Hadoop-trunk-Commit #5342 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5342/])
HDFS-6107: fix comment (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1578511)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java


> When a block can't be cached due to limited space on the DataNode, that block 
> becomes uncacheable
> -------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-6107
>                 URL: https://issues.apache.org/jira/browse/HDFS-6107
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.4.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>             Fix For: 2.4.0
>
>         Attachments: HDFS-6107.001.patch
>
>
> When a block can't be cached due to limited space on the DataNode, that block 
> becomes uncacheable.  This is because the CachingTask fails to reset the 
> block state in this error handling case.
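The failure mode can be sketched as follows. This is a hypothetical, simplified model of the DataNode caching state machine, not the actual FsDatasetImpl/CachingTask code: the class name `CachingSketch`, the `State` enum, and all fields are illustrative assumptions. The key point is that on an out-of-space failure the state must be reset to UNCACHED, or the skip check at the top will reject every later attempt.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified model of DataNode block caching (illustrative only).
public class CachingSketch {
    enum State { UNCACHED, CACHING, CACHED }

    final Map<Long, State> blockState = new HashMap<>();
    final long cacheCapacity;
    long cacheUsed = 0;

    CachingSketch(long cacheCapacity) {
        this.cacheCapacity = cacheCapacity;
    }

    // Attempt to cache a block; returns true on success.
    boolean cacheBlock(long blockId, long length) {
        // Blocks already CACHING or CACHED are skipped. This is why a
        // stale CACHING state makes a block permanently uncacheable.
        if (blockState.getOrDefault(blockId, State.UNCACHED) != State.UNCACHED) {
            return false;
        }
        blockState.put(blockId, State.CACHING);
        if (cacheUsed + length > cacheCapacity) {
            // The fix: reset the state in the error-handling path.
            // The bug was the moral equivalent of omitting this line,
            // leaving the block stuck in CACHING after a failure.
            blockState.put(blockId, State.UNCACHED);
            return false;
        }
        cacheUsed += length;
        blockState.put(blockId, State.CACHED);
        return true;
    }
}
```

With the reset in place, a block that failed to cache when space was tight can be cached successfully on a later attempt.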



--
This message was sent by Atlassian JIRA
(v6.2#6252)