[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544212 ]

Colin Patrick McCabe commented on HDFS-8157:
--------------------------------------------

I still don't understand why we would ever round down.  Memory is allocated in 
whole pages, so if a block contains 11 KB, 12 KB of RAM will actually be used 
to cache it.  That is why HDFS read caching (HDFS-4949) always rounds up to 
the nearest 4 KB (or whatever the OS page size is).  Perhaps I am 
misunderstanding a detail here.
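
For illustration, a minimal sketch of the page-granular round-up described 
above (PageRounding, PAGE_SIZE, and roundUpToPageSize are hypothetical names, 
not the actual FsDatasetCache API; a fixed 4 KB page size is assumed):

    public class PageRounding {
      // Assumed 4 KB page size; real code would query the OS page size.
      private static final long PAGE_SIZE = 4096;

      // Round numBytes up to the next multiple of PAGE_SIZE.
      static long roundUpToPageSize(long numBytes) {
        return ((numBytes + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;
      }

      public static void main(String[] args) {
        // An 11 KB block (11264 bytes) occupies three full pages,
        // so 12 KB (12288 bytes) of RAM is consumed.
        System.out.println(roundUpToPageSize(11 * 1024)); // prints 12288
      }
    }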

> Writes to RAM DISK reserve locked memory for block files
> --------------------------------------------------------
>
>                 Key: HDFS-8157
>                 URL: https://issues.apache.org/jira/browse/HDFS-8157
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>         Attachments: HDFS-8157.01.patch, HDFS-8157.02.patch, 
> HDFS-8157.03.patch
>
>
> Per discussion on HDFS-6919, the first step is that writes to RAM disk will 
> reserve locked memory via the FsDatasetCache.


