[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792605#comment-16792605
 ] 

Rakesh R edited comment on HDFS-14355 at 3/14/19 12:11 PM:
-----------------------------------------------------------

Thanks [~PhiloHe] for the incremental patch. Below are a few quick comments; 
I will continue reviewing the patch.

# Please rename the configs:
  {{dfs.datanode.cache.loader.impl.classname}} => {{dfs.datanode.cache.loader.class}}
  {{DFS_DATANODE_CACHE_LOADER_IMPL_CLASSNAME}} => {{DFS_DATANODE_CACHE_LOADER_CLASS}}
  {{DFS_DATANODE_CACHE_LOADER_IMPL_CLASSNAME_DEFAULT}} => {{DFS_DATANODE_CACHE_LOADER_CLASS_DEFAULT}}
# Replace the config reading logic as below. This also avoids if-else checks 
such as {{if (cacheLoader.equals(MemoryMappableBlockLoader.class.getSimpleName()))}} 
to determine which class the user configured.
{code:java}
DFSConfigKeys.java
  public static final String DFS_DATANODE_CACHE_LOADER_CLASS =
      "dfs.datanode.cache.loader.class";
  public static final Class<MemoryMappableBlockLoader>
      DFS_DATANODE_CACHE_LOADER_CLASS_DEFAULT = MemoryMappableBlockLoader.class;

  // You can use the following way to instantiate the cache loader:
    this.cacheLoader = getConf().getClass(
        DFSConfigKeys.DFS_DATANODE_CACHE_LOADER_CLASS,
        DFSConfigKeys.DFS_DATANODE_CACHE_LOADER_CLASS_DEFAULT,
        MappableBlockLoader.class);
{code}
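To illustrate the pattern outside of Hadoop, here is a minimal, self-contained sketch of the same "configured class with a default" idea using only the JDK. The names ({{BlockLoader}}, {{MemoryLoader}}, {{getLoaderClass}}) are illustrative stand-ins, not the actual HDFS classes; in the patch itself, Hadoop's {{Configuration.getClass(key, default, iface)}} does this work.

{code:java}
import java.util.Properties;

// Illustrative stand-in for the loader hierarchy (not the HDFS classes).
interface BlockLoader {}

public class LoaderConfigDemo {
  static class MemoryLoader implements BlockLoader {
    public MemoryLoader() {}
  }

  static final String LOADER_CLASS_KEY = "dfs.datanode.cache.loader.class";

  // Mirrors the spirit of Configuration.getClass(key, default, iface):
  // read a class name, fall back to the default, check assignability.
  static Class<? extends BlockLoader> getLoaderClass(
      Properties conf, Class<? extends BlockLoader> defaultClass) {
    String name = conf.getProperty(LOADER_CLASS_KEY);
    if (name == null) {
      return defaultClass;
    }
    try {
      return Class.forName(name).asSubclass(BlockLoader.class);
    } catch (ClassNotFoundException e) {
      throw new IllegalArgumentException("Unknown loader class: " + name, e);
    }
  }

  public static void main(String[] args) throws Exception {
    Properties conf = new Properties();
    // No key set: the default class wins, with no if-else on class names.
    Class<? extends BlockLoader> cls = getLoaderClass(conf, MemoryLoader.class);
    BlockLoader loader = cls.getDeclaredConstructor().newInstance();
    System.out.println(loader.getClass().getSimpleName());
  }
}
{code}

Resolving the class once and instantiating it reflectively keeps the datanode code independent of any particular loader implementation.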
# Add the config name to the message: {{"The persistent memory volumes are not 
configured!"}} => {{"The persistent memory volume, " + 
DFSConfigKeys.DFS_DATANODE_CACHE_PMEM_DIR_KEY + " is not configured!"}}
# It would be good to unmap the block's {{mappedoutbuffer}} before deleting the 
file, like below:
{code:java}
FileMappableBlockLoader.java

  verifyIfValidPmemVolume() {
        ...
        if (file != null) {
          IOUtils.closeQuietly(file);
          NativeIO.POSIX.munmap(out);
        }
        try {
          FsDatasetUtil.deleteMappedFile(testFilePath);
        } catch (IOException e) {
          LOG.warn("Failed to delete test file " + testFilePath +
              " from persistent memory", e);
        }
{code}
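A self-contained sketch of that cleanup ordering, using only the JDK: write through a mapping to probe the volume, then release resources before deleting the probe file. Note the JDK has no supported explicit unmap, so {{force()}} plus GC stands in here for the patch's {{NativeIO.POSIX.munmap(out)}}; the class and method names are illustrative, not the patch's API.

{code:java}
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class PmemProbeCleanupDemo {
  // Probe a directory by mmap-writing a temp file, then clean up in order:
  // flush/unmap the buffer, close the channel, delete the file last.
  public static boolean probe(Path dir) throws IOException {
    Path testFilePath = Files.createTempFile(dir, "pmem-probe", ".tmp");
    FileChannel file = null;
    MappedByteBuffer out = null;
    try {
      file = FileChannel.open(testFilePath,
          StandardOpenOption.READ, StandardOpenOption.WRITE);
      out = file.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
      out.put((byte) 1);   // verify the volume is writable via mmap
      return true;
    } finally {
      if (out != null) {
        out.force();       // real code: NativeIO.POSIX.munmap(out)
      }
      if (file != null) {
        file.close();      // real code: IOUtils.closeQuietly(file)
      }
      try {
        Files.deleteIfExists(testFilePath);  // delete only after unmapping
      } catch (IOException e) {
        System.err.println("Failed to delete test file " + testFilePath
            + ": " + e);
      }
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println(probe(Paths.get(System.getProperty("java.io.tmpdir"))));
  }
}
{code}

Deleting a still-mapped file works on Linux but leaks the mapping until GC, which is why unmapping first is the safer order.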
# FileMappableBlockLoader - Please remove the {{assert 
NativeIO.isAvailable();}} check; it's not needed, right?
# Briefly describe the file path formation pattern 
{{'PmemDir/BlockPoolId-BlockId'}} in the class- or function-level javadoc.
{code}
FileMappableBlockLoader#load()
    ....
        filePath = getOneLocation() + "/" + key.getBlockPoolId() +
                          "-" + key.getBlockId();
{code}
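For the javadoc, the naming scheme could be demonstrated with a small sketch like the following; the class and method names are illustrative, not the patch's API.

{code:java}
public class PmemPathDemo {
  // Cache files are laid out as 'PmemDir/BlockPoolId-BlockId', so a
  // block's cache file can be located from its key alone.
  static String cacheFilePath(String pmemDir, String blockPoolId,
      long blockId) {
    return pmemDir + "/" + blockPoolId + "-" + blockId;
  }

  public static void main(String[] args) {
    // Hypothetical pmem mount point and block pool id for illustration.
    System.out.println(
        cacheFilePath("/mnt/pmem0", "BP-1234-127.0.0.1-100", 1073741825L));
  }
}
{code}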
# Add {{@VisibleForTesting}} to the {{public static void verifyIfValidPmemVolume(File 
pmemDir)}} function.
# Add annotations to the new classes FileMappedBlock and FileMappableBlockLoader:
{code}
        @InterfaceAudience.Private
        @InterfaceStability.Unstable
{code}
# Comments on TestCacheWithFileMappableBlockLoader:
## Remove the MLOCK config, which is not required:
{code}
    myConf.setLong(DFSConfigKeys.DFS_DATANODE_MAX_LOCKED_MEMORY_KEY,
        CACHE_CAPACITY);
{code}
## Move the test class TestCacheWithFileMappableBlockLoader.java to {{package 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl}}. This avoids making the 
FsDatasetImpl class public; in fact, no changes to FsDatasetImpl are required.



> Implement SCM cache using pure java mapped byte buffer
> ------------------------------------------------------
>
>                 Key: HDFS-14355
>                 URL: https://issues.apache.org/jira/browse/HDFS-14355
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: caching, datanode
>            Reporter: Feilong He
>            Assignee: Feilong He
>            Priority: Major
>         Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support 
> isn't available or convenient in some environments or platforms.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
