[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800592#comment-16800592
 ] 

Feilong He edited comment on HDFS-14355 at 3/25/19 12:51 PM:
-------------------------------------------------------------

Thanks [~Sammi] for your valuable comments.
{quote}User would like to know the relationship between 
dfs.datanode.cache.pmem.capacity and dfs.datanode.max.locked.memory by reading 
the descriptions in hdfs-default.xml
{quote}
I will proofread the descriptions to make them clear to users.
{quote}PmemUsedBytesCount, is there any forsee issue to reuse UsedBytesCount 
instead?  Also byte roundup is not addressed in PmemUsedBytesCount
{quote}
As you know, UsedBytesCount is used to count DRAM cache bytes. It ensures that 
after reserving bytes for the DRAM cache, the used bytes will not exceed maxBytes 
(dfs.datanode.max.locked.memory). We found that, besides the HDFS DRAM cache, Lazy 
Persist Writes also uses UsedBytesCount to reserve/release bytes. Since 
supporting Lazy Persist Writes on pmem is not the target of this jira, we 
introduced PmemUsedBytesCount to separate pmem's cache byte management from 
DRAM's. Thus Lazy Persist Writes will not be affected, and users can still 
enable Lazy Persist Writes by configuring dfs.datanode.max.locked.memory. 
Pmem may not have a page-size mechanism like DRAM's (we will confirm this), so we 
didn't round the bytes up to a page-size-aligned value. Because of this 
difference, the DRAM cache and pmem cache have different reserve/release methods, 
which also makes adding PmemUsedBytesCount necessary.
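To illustrate the reservation pattern described above, here is a minimal, hypothetical sketch (not the actual FsDatasetCache code): a CAS loop on an AtomicLong that only admits a reservation while the running total stays under the configured maximum, with the DRAM-style page-size roundup. The class and method names are illustrative only.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the DRAM-style reserve/release accounting:
// reservations are rounded up to a page-size multiple and capped at
// maxBytes (dfs.datanode.max.locked.memory).
public class UsedBytesSketch {
    private final AtomicLong usedBytes = new AtomicLong(0);
    private final long maxBytes;
    private final long pageSize; // DRAM reservations round up to this

    public UsedBytesSketch(long maxBytes, long pageSize) {
        this.maxBytes = maxBytes;
        this.pageSize = pageSize;
    }

    /** Round count up to a multiple of the page size. */
    private long roundUp(long count) {
        return ((count + pageSize - 1) / pageSize) * pageSize;
    }

    /** Returns the new total on success, or -1 if the cap would be exceeded. */
    public long reserve(long count) {
        long rounded = roundUp(count);
        while (true) {
            long cur = usedBytes.get();
            long next = cur + rounded;
            if (next > maxBytes) {
                return -1; // reservation rejected
            }
            if (usedBytes.compareAndSet(cur, next)) {
                return next;
            }
        }
    }

    public long release(long count) {
        return usedBytes.addAndGet(-roundUp(count));
    }
}
```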
{quote}FsDatasetCached is not a good place to put specific memory loader 
implemetation functions like reservePmem, releasePmem. FsDatasetCached should 
be generic.
{quote}
Good suggestion; we are aware of the issue you pointed out. In the new 
patch, we will move PmemUsedBytesCount, reservePmem, and releasePmem to a new 
class, PmemVolumeManager, to keep FsDatasetCache generic.
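A rough sketch of what the planned refactoring might look like, assuming the class and method names from the comment above (the bodies are illustrative, not the actual patch): pmem byte accounting lives in its own class, and, per the earlier point, reservations are not rounded to a page-size multiple.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the planned PmemVolumeManager: pmem byte
// accounting moved out of FsDatasetCache. Unlike the DRAM counter,
// pmem reservations are not rounded up to a page-size multiple.
public class PmemVolumeManagerSketch {
    private final AtomicLong pmemUsedBytesCount = new AtomicLong(0);
    private final long cacheCapacity; // dfs.datanode.cache.pmem.capacity

    public PmemVolumeManagerSketch(long cacheCapacity) {
        this.cacheCapacity = cacheCapacity;
    }

    /** Reserve exactly count bytes; no page-size roundup for pmem. */
    public long reservePmem(long count) {
        while (true) {
            long cur = pmemUsedBytesCount.get();
            long next = cur + count;
            if (next > cacheCapacity) {
                return -1; // capacity exceeded, reservation rejected
            }
            if (pmemUsedBytesCount.compareAndSet(cur, next)) {
                return next;
            }
        }
    }

    public long releasePmem(long count) {
        return pmemUsedBytesCount.addAndGet(-count);
    }
}
```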
{quote}As [~daryn] suggested, more elegant error handling.
{quote}
We are reviewing our code to ensure exceptions are handled elegantly.

 

Thanks again for your great effort in reviewing this patch. We will seriously 
consider your suggestions.



> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -----------------------------------------------------------------
>
>                 Key: HDFS-14355
>                 URL: https://issues.apache.org/jira/browse/HDFS-14355
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: caching, datanode
>            Reporter: Feilong He
>            Assignee: Feilong He
>            Priority: Major
>         Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support 
> isn't available or convenient in some environments or platforms.
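
The pure-Java approach named in this sub-task can be sketched as follows, assuming a cache file on a pmem-backed (e.g. DAX-mounted) filesystem; the path and sizes below are illustrative only, not the actual patch code.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Minimal sketch: map a file on a (pmem-backed) filesystem with
// java.nio.MappedByteBuffer and write block bytes into it, with no
// native/PMDK dependency.
public class MappedByteBufferCacheSketch {
    public static MappedByteBuffer mapCacheFile(Path cacheFile, long length)
            throws IOException {
        try (FileChannel channel = FileChannel.open(cacheFile,
                StandardOpenOption.CREATE,
                StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            // The mapping remains valid after the channel is closed.
            return channel.map(FileChannel.MapMode.READ_WRITE, 0, length);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("pmem-cache", ".bin");
        MappedByteBuffer buf = mapCacheFile(tmp, 4096);
        buf.put("block-data".getBytes()); // copy cached block bytes in
        buf.force();                      // flush the region to backing store
    }
}
```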



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
