[
https://issues.apache.org/jira/browse/HDFS-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817838#comment-13817838
]
Colin Patrick McCabe commented on HDFS-5366:
--------------------------------------------
The eclipse:eclipse target failure doesn't have anything to do with this patch.
The stray {{hs_err_pid4577.log}} file is also what's causing the bogus release audit warning:
{code}
!????? hs_err_pid4577.log
{code}
Lines that start with ????? in the release audit report indicate files that do
not have an Apache license header.
> recaching improvements
> ----------------------
>
> Key: HDFS-5366
> URL: https://issues.apache.org/jira/browse/HDFS-5366
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: namenode
> Affects Versions: 3.0.0
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Attachments: HDFS-5366-caching.001.patch, HDFS-5366.002.patch
>
>
> There are a few things about our HDFS-4949 recaching strategy that could be
> improved.
> * We should monitor the DN's maximum and current mlock'ed memory consumption
> levels, so that we don't ask the DN to do stuff it can't.
> * We should not try to initiate caching on stale or decommissioning DataNodes
> (although we should not recache things stored on such nodes until they're
> declared dead).
> * We might want to resend the {{DNA_CACHE}} or {{DNA_UNCACHE}} command a few
> times before giving up. Currently, we only send it once.
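The retry idea in the last bullet above could be as simple as a per-command attempt counter on the NameNode side. A minimal sketch follows; the class and field names are hypothetical and not taken from the HDFS-4949 code:

{code:java}
// Hypothetical sketch: cap how many times a DNA_CACHE/DNA_UNCACHE
// command is resent for a block before the NameNode gives up.
public class CacheCommandRetries {
    private final int maxAttempts;
    private int attempts = 0;

    public CacheCommandRetries(int maxAttempts) {
        this.maxAttempts = maxAttempts;
    }

    /** Returns true if the command should be (re)sent, false once we give up. */
    public boolean shouldSend() {
        if (attempts >= maxAttempts) {
            return false;
        }
        attempts++;
        return true;
    }
}
{code}

With {{maxAttempts}} set to 1 this degenerates to the current send-once behavior, so the change would be purely additive.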
--
This message was sent by Atlassian JIRA
(v6.1#6144)