[
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794827#comment-16794827
]
Rakesh R commented on HDFS-14355:
---------------------------------
Thanks [~PhiloHe] for the good progress. Adding a second set of review
comments, please go through them.
# Please close {{file = new RandomAccessFile(filePath, "rw");}} after use:
{code:java}
IOUtils.closeQuietly(file);
{code}
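For illustration, a minimal sketch of the close-in-finally pattern (class and method names here are hypothetical, not from the patch; the inlined null-checked close is the equivalent of {{IOUtils.closeQuietly(file)}}):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch: make sure the RandomAccessFile opened in load() is always
// released, even when the mapping step fails.
class RafCloseSketch {
  static long writeAndClose(File path) throws IOException {
    RandomAccessFile file = null;
    try {
      file = new RandomAccessFile(path, "rw");
      file.write(new byte[]{1, 2, 3});
      return file.length();
    } finally {
      // Equivalent of IOUtils.closeQuietly(file): best-effort close,
      // swallowing any secondary IOException from close().
      if (file != null) {
        try {
          file.close();
        } catch (IOException ignored) {
          // already propagating the primary failure, if any
        }
      }
    }
  }
}
```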
# Looks like unused code, please remove it.
{code:java}
private FsDatasetImpl dataset;
public MemoryMappableBlockLoader(FsDatasetImpl dataset) {
this.dataset = dataset;
}
{code}
# FileMappableBlockLoader#loadVolumes exception handling. I feel this is not
required, please remove it. If you still need it for some purpose, then
please add a message arg to {{IOException("Failed to parse persistent memory
location " + location, e)}}
{code:java}
} catch (IllegalArgumentException e) {
LOG.error("Failed to parse persistent memory location " + location +
" for " + e.getMessage());
throw new IOException(e);
}
{code}
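A minimal sketch of the suggested wrapping (the parsing body is a hypothetical stand-in, not the real {{loadVolumes}} logic), so the failing location survives in the message and the original cause stays chained:

```java
import java.io.IOException;

// Sketch: wrap a parse failure so both the message and the cause are kept.
class PmemParseSketch {
  static void parseLocation(String location) throws IOException {
    try {
      // Hypothetical stand-in for the real capacity/path parsing.
      Long.parseLong(location.substring(location.indexOf(':') + 1));
    } catch (IllegalArgumentException e) {
      // Message carries the bad input; constructor chains the cause.
      throw new IOException(
          "Failed to parse persistent memory location " + location, e);
    }
  }
}
```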
# Debuggability: FileMappableBlockLoader#verifyIfValidPmemVolume. Here, add
an exception message arg to {{throw new IOException(t);}}
{code:java}
throw new IOException(
"Exception while writing data to persistent storage dir: " + pmemDir,
t);
{code}
# Debuggability: FileMappableBlockLoader#load. Here, add blockFileName to the
exception message.
{code:java}
if (out == null) {
  throw new IOException("Failed to map the block " + blockFileName
      + " to persistent storage.");
}
{code}
# Debuggability: FileMappableBlockLoader#verifyChecksumAndMapBlock. Similarly,
include the block file name in the checksum failure message:
{code:java}
throw new IOException(
    "Checksum verification failed for the block file " + blockFileName
    + ": premature EOF");
{code}
# FileMappedBlock#afterCache. Suppressing the exception may give wrong
statistics, right? Assume {{afterCache}} throws an exception and the file path
is not cached. Then the cached block won't be readable but still consumes
space unnecessarily. How about moving the {{mappableBlock.afterCache();}} call
right after the {{mappableBlockLoader.load()}} call and adding a throws
IOException clause to {{afterCache}}?
{code:java}
LOG.warn("Fail to find the replica file of PoolID = " +
key.getBlockPoolId() + ", BlockID = " + key.getBlockId() +
" for :" + e.getMessage());
{code}
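The suggested reordering could look roughly like this (interface, field, and method names are assumptions based on the snippets above, not the actual patch); if {{afterCache}} fails, the load is treated as failed instead of silently leaving an unreadable block cached:

```java
import java.io.IOException;

// Sketch of the suggested control flow: record the cached file path
// immediately after a successful load, and let failures propagate so the
// caching task can release the reserved space.
class AfterCacheSketch {
  interface MappableBlock {
    void afterCache() throws IOException;  // now declares IOException
  }

  static boolean cached;

  static void load(MappableBlock block) throws IOException {
    // ... mappableBlockLoader.load() succeeded here ...
    block.afterCache();   // called right after load(); may throw
    cached = true;        // statistics updated only on full success
  }
}
```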
# FsDatasetCache.java: the reserve() and release() OS page size math is not
required for FileMappedBlock. I'd appreciate it if you could avoid these
calls. Also, can you re-visit the caching and un-caching logic (for example,
the datanode.getMetrics() updates etc.) present in this class.
{code:java}
CachingTask#run() {
  ...
  long newUsedBytes = reserve(length);
  ...
  if (reservedBytes) {
    release(length);
  }
}

UncachingTask#run() {
  ...
  long newUsedBytes = release(value.mappableBlock.getLength());
}
{code}
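For context, the DRAM cache path rounds each reservation up to the OS page size because mmap works in whole pages, while a file-backed pmem cache can reserve the exact block length. A rough sketch of the contrast (the page size constant and method names are illustrative, not from FsDatasetCache):

```java
// Sketch contrasting the DRAM-cache page rounding with the exact
// reservation that suffices for the file-backed pmem path.
class ReserveSketch {
  static final long PAGE_SIZE = 4096;  // illustrative OS page size

  // DRAM path: usage is rounded up to whole OS pages.
  static long roundedReserve(long length) {
    return ((length + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;
  }

  // Pmem-file path: the backing file is exactly `length` bytes long,
  // so no rounding is needed.
  static long exactReserve(long length) {
    return length;
  }
}
```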
# I have changed the jira status and triggered QA. Please fix the checkstyle
warnings and test case failures. Also, can you restore the timeout in the two
{{@Test//(timeout=120000)}} occurrences in the test.
> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -----------------------------------------------------------------
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: caching, datanode
> Reporter: Feilong He
> Assignee: Feilong He
> Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch,
> HDFS-14355.002.patch
>
>
> This task is to implement the caching to persistent memory using pure
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support
> isn't available or convenient in some environments or platforms.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]