[ https://issues.apache.org/jira/browse/HDFS-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793100#comment-13793100 ]

Colin Patrick McCabe commented on HDFS-5348:
--------------------------------------------

This also fixes an issue where the faulty configuration set in 
{{TestDatanodeConfig#testMemlockLimit}} was mistakenly being reused by other 
unit tests in {{TestDatanodeConfig}}, causing intermittent failures depending 
on test ordering.  The issue occurs because the test class reuses the same 
{{Configuration}} object for all of its tests.  (This is the existing design 
of {{TestDatanodeConfig}}, not something we added.)
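
For illustration, here is a minimal sketch of the sharing pattern that causes 
this; the setup details and the second test method are illustrative rather 
than a copy of the actual test class:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.junit.BeforeClass;
import org.junit.Test;

public class TestDatanodeConfig {
  // One Configuration instance shared by every test method in the class.
  private static Configuration conf;

  @BeforeClass
  public static void setUp() throws Exception {
    conf = new HdfsConfiguration();
  }

  @Test
  public void testMemlockLimit() throws Exception {
    // Deliberately mis-configures the limit to exercise the error path.
    conf.setLong("dfs.datanode.max.locked.memory", Long.MAX_VALUE);
    // ... start a DataNode with conf and assert that startup fails ...
    // If the key is not reset here, the bad value is still present in the
    // shared conf when later tests run.
  }

  @Test
  public void testSomethingElse() throws Exception {
    // Whether this passes depends on whether testMemlockLimit ran first,
    // since it starts a DataNode with the same shared conf.
  }
}
{code}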

> Fix error message when dfs.datanode.max.locked.memory is improperly configured
> ------------------------------------------------------------------------------
>
>                 Key: HDFS-5348
>                 URL: https://issues.apache.org/jira/browse/HDFS-5348
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>    Affects Versions: HDFS-4949
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-5348-caching.001.patch
>
>
> We need to fix the error message when dfs.datanode.max.locked.memory is 
> improperly configured.  Currently it says the size is "less than the 
> datanode's available RLIMIT_MEMLOCK limit" when it really means "more".
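>
> A minimal sketch of the intended check follows; the helper for reading the 
> OS limit is a hypothetical stand-in, not the actual DataNode code:
>
> {code:java}
> // Hypothetical illustration of the startup check; getOsMemlockLimit() is a
> // stand-in for however the datanode obtains its RLIMIT_MEMLOCK value.
> long maxLockedMemory = conf.getLong("dfs.datanode.max.locked.memory", 0);
> long memlockUlimit = getOsMemlockLimit();
> if (maxLockedMemory > memlockUlimit) {
>   // The old message said "less than" here even though the configured value
>   // exceeds the ulimit; the corrected wording is "more than".
>   throw new RuntimeException("Cannot start datanode because the configured"
>       + " max locked memory size (" + maxLockedMemory + ") is more than the"
>       + " datanode's available RLIMIT_MEMLOCK ulimit of " + memlockUlimit);
> }
> {code}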



