[
https://issues.apache.org/jira/browse/HDFS-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13938147#comment-13938147
]
Andrew Wang commented on HDFS-6107:
-----------------------------------
Code looks good. The only nit is that the new unit test still references "HDFS-XXXX",
which should be updated to this JIRA (HDFS-6107).
Otherwise, +1 pending.
> When a block can't be cached due to limited space on the DataNode, that block
> becomes uncacheable
> -------------------------------------------------------------------------------------------------
>
> Key: HDFS-6107
> URL: https://issues.apache.org/jira/browse/HDFS-6107
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.4.0
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Attachments: HDFS-6107.001.patch
>
>
> When a block can't be cached due to limited space on the DataNode, that block
> becomes uncacheable. This is because the CachingTask fails to reset the
> block state in this error handling case.
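For context on the failure mode described above, here is a minimal, hypothetical sketch of the
pattern, not the actual FsDatasetCache/CachingTask code from the patch: the CachingSketch class,
its State enum, and the reserve() helper are illustrative stand-ins. The point is only that when
the space reservation fails, the error path has to clear the per-block "caching in progress"
state; if it returns without doing so, every later cache attempt for that block is skipped and
the block stays uncacheable.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Minimal sketch of the bug pattern; all names here are hypothetical
 * stand-ins for the DataNode's real caching bookkeeping.
 */
public class CachingSketch {
  enum State { CACHING, CACHED }

  private final Map<Long, State> blockState = new ConcurrentHashMap<>();
  private final AtomicLong usedBytes = new AtomicLong(0);
  private final long maxBytes;

  CachingSketch(long maxBytes) {
    this.maxBytes = maxBytes;
  }

  /** Try to reserve cache space; returns false when the cache is full. */
  private boolean reserve(long len) {
    while (true) {
      long cur = usedBytes.get();
      if (cur + len > maxBytes) {
        return false;
      }
      if (usedBytes.compareAndSet(cur, cur + len)) {
        return true;
      }
    }
  }

  /** Buggy pattern: the error path forgets to clear the CACHING state. */
  void cacheBlockBuggy(long blockId, long len) {
    if (blockState.putIfAbsent(blockId, State.CACHING) != null) {
      return; // already being cached (or stuck in CACHING forever)
    }
    if (!reserve(len)) {
      // BUG: returning here leaves the block marked CACHING, so every
      // later cache attempt is silently skipped -- the block is uncacheable.
      return;
    }
    // mmap/mlock of the block would happen here in the real DataNode
    blockState.put(blockId, State.CACHED);
  }

  /** Fixed pattern: reset the block state in the error-handling case. */
  void cacheBlockFixed(long blockId, long len) {
    if (blockState.putIfAbsent(blockId, State.CACHING) != null) {
      return;
    }
    if (!reserve(len)) {
      blockState.remove(blockId); // reset state so a retry can succeed
      return;
    }
    blockState.put(blockId, State.CACHED);
  }
}

The fixed variant mirrors what the description calls for: resetting the block state when the
caching attempt fails, so the block can be cached again once space frees up.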