[ 
https://issues.apache.org/jira/browse/HDFS-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-6107:
---------------------------------------

    Status: Patch Available  (was: Open)

> When a block can't be cached due to limited space on the DataNode, that block 
> becomes uncacheable
> -------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-6107
>                 URL: https://issues.apache.org/jira/browse/HDFS-6107
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.4.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-6107.001.patch
>
>
> When a block can't be cached due to limited space on the DataNode, that block 
> becomes uncacheable: the CachingTask fails to reset the block's state in this 
> error-handling path, so caching is never retried for that block.
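
The fix pattern the description points at can be sketched as follows. This is an illustrative sketch only; the class and method names below (BlockStateTracker, tryCache, stateOf) are hypothetical and are not the actual HDFS CachingTask internals.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: on a caching failure, the block's state must be
// reset so the block remains cacheable later -- the step the report
// says is missing from the CachingTask's error handling.
public class BlockStateTracker {
    enum State { UNCACHED, CACHING, CACHED }

    private final ConcurrentHashMap<Long, State> states = new ConcurrentHashMap<>();
    private final long capacityBytes;
    private long usedBytes = 0;

    BlockStateTracker(long capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    /** Attempt to cache a block; returns false if space is insufficient. */
    synchronized boolean tryCache(long blockId, long blockBytes) {
        states.put(blockId, State.CACHING);
        try {
            if (usedBytes + blockBytes > capacityBytes) {
                throw new IllegalStateException("insufficient cache space");
            }
            usedBytes += blockBytes;
            states.put(blockId, State.CACHED);
            return true;
        } catch (IllegalStateException e) {
            // Without this reset, the block would be stuck in CACHING
            // and never become cacheable again -- the reported bug.
            states.put(blockId, State.UNCACHED);
            return false;
        }
    }

    synchronized State stateOf(long blockId) {
        return states.getOrDefault(blockId, State.UNCACHED);
    }
}
```

After a failed tryCache, the block is back in UNCACHED rather than stranded in an intermediate state, so a later caching attempt can succeed once space frees up.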



--
This message was sent by Atlassian JIRA
(v6.2#6252)
