[ https://issues.apache.org/jira/browse/HDFS-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13861804#comment-13861804 ]
Andrew Wang commented on HDFS-5589:
-----------------------------------
When we're populating {{possibilities}}, we check the DNs for validity,
including whether they have enough remaining cache capacity, so I think this
is technically correct. I agree, though, that it reads poorly, so I'll
refactor it and also add a test that tries to cache some large files.
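The validity check can be pictured with a minimal standalone sketch. This is plain Java, not the actual HDFS code; {{DatanodeDescriptorStub}}, its fields, and {{findPossibilities}} are hypothetical names used only for illustration:

{code:java}
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch (not the real CacheReplicationMonitor code) of the
 * validity check described above: only DNs with enough remaining cache
 * capacity ever enter the possibilities list.
 */
public class CacheCandidateSketch {

  /** Hypothetical stand-in for a DataNode's cache accounting. */
  static class DatanodeDescriptorStub {
    final String name;
    final long cacheCapacity; // total mlock-able bytes on this DN
    final long cacheUsed;     // bytes already pinned in the page cache

    DatanodeDescriptorStub(String name, long cacheCapacity, long cacheUsed) {
      this.name = name;
      this.cacheCapacity = cacheCapacity;
      this.cacheUsed = cacheUsed;
    }

    long cacheRemaining() {
      return cacheCapacity - cacheUsed;
    }
  }

  /**
   * Keep only DNs that could actually hold the block: the "check the
   * DNs for validity, including remaining capacity" step, so every
   * entry in possibilities is already known-good.
   */
  static List<DatanodeDescriptorStub> findPossibilities(
      List<DatanodeDescriptorStub> live, long blockSize) {
    List<DatanodeDescriptorStub> possibilities = new ArrayList<>();
    for (DatanodeDescriptorStub dn : live) {
      if (dn.cacheRemaining() >= blockSize) {
        possibilities.add(dn);
      }
    }
    return possibilities;
  }

  public static void main(String[] args) {
    List<DatanodeDescriptorStub> live = List.of(
        new DatanodeDescriptorStub("dn1", 1L << 30, 900L << 20), // ~124 MB free
        new DatanodeDescriptorStub("dn2", 1L << 30, 0L));        // 1 GB free
    // A 256 MB block fits only on dn2, so dn1 never enters possibilities.
    for (DatanodeDescriptorStub dn : findPossibilities(live, 256L << 20)) {
      System.out.println(dn.name + " remaining=" + dn.cacheRemaining());
    }
  }
}
{code}

Because every DN in {{possibilities}} has already passed the capacity check, picking any entry from the list can't oversubscribe a DN's cache, which is why the existing logic is technically correct even though it reads poorly.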
> Namenode loops caching and uncaching when data should be uncached
> -----------------------------------------------------------------
>
> Key: HDFS-5589
> URL: https://issues.apache.org/jira/browse/HDFS-5589
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: caching, namenode
> Affects Versions: 3.0.0
> Reporter: Andrew Wang
> Assignee: Andrew Wang
> Attachments: hdfs-5589-1.patch
>
>
> This was reported by [~cnauroth] and [~brandonli], and [~schu] reproduced it as well.
> If you add a new caching directive and then remove it, the NameNode will
> sometimes get stuck in a loop where it sends DNA_CACHE and then DNA_UNCACHE
> repeatedly to the DataNodes where the data was previously cached.
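To picture the loop, here is a toy model of the rescan logic. It is hypothetical Java, not the real NameNode code, and all names in it are made up; it only illustrates how a stale "pending cache" entry that survives the directive's removal would produce exactly this DNA_CACHE/DNA_UNCACHE ping-pong:

{code:java}
import java.util.HashSet;
import java.util.Set;

/**
 * Toy model of the reported loop (hypothetical; not NameNode code).
 * The directive has been removed, but a stale pending-cache entry for
 * its block was never cleaned up, so every rescan re-issues DNA_CACHE,
 * and the now-cached replica matches no directive and draws DNA_UNCACHE.
 */
public class CacheLoopSketch {
  // Bug: this entry should have been dropped with the directive.
  static final Set<String> pendingCached = new HashSet<>(Set.of("blk_1"));
  // Blocks the DN currently reports as cached.
  static final Set<String> cachedOnDn = new HashSet<>();
  // Blocks that active directives still want cached (none, post-removal).
  static final Set<String> wantedByDirectives = new HashSet<>();

  static void rescan(int round) {
    // Phase 1: ask DNs to cache anything still marked pending.
    for (String blk : pendingCached) {
      if (!cachedOnDn.contains(blk)) {
        System.out.println("round " + round + ": DNA_CACHE   " + blk);
        cachedOnDn.add(blk); // the DN complies
      }
    }
    // Phase 2: ask DNs to uncache anything no directive wants.
    for (String blk : new HashSet<>(cachedOnDn)) {
      if (!wantedByDirectives.contains(blk)) {
        System.out.println("round " + round + ": DNA_UNCACHE " + blk);
        cachedOnDn.remove(blk); // the DN complies; the cycle restarts
      }
    }
  }

  public static void main(String[] args) {
    for (int round = 1; round <= 3; round++) {
      rescan(round); // prints an endless CACHE/UNCACHE ping-pong
    }
    pendingCached.clear(); // fix sketch: drop stale state with the directive
    rescan(4);             // steady state: prints nothing
  }
}
{code}

In the toy model, rounds 1 through 3 print the alternating commands; dropping the stale pending entry at the same time as the directive is what breaks the cycle, and round 4 prints nothing.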