[ https://issues.apache.org/jira/browse/HDFS-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13795541#comment-13795541 ]

Chris Nauroth commented on HDFS-5096:
-------------------------------------

I just looked at patch version 9.  For the sake of completeness, here is a 
full list of my remaining feedback up to this point, though I realize you're 
already working on some of it.

hdfs-default.xml: Let's document 
{{dfs.namenode.path.based.cache.refresh.interval.ms}}.
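Something along these lines, perhaps (the value and description below are 
placeholders I made up -- the real default and wording should be taken from 
{{DFSConfigKeys}}):
{code}
<property>
  <name>dfs.namenode.path.based.cache.refresh.interval.ms</name>
  <value>30000</value>
  <description>
    How often, in milliseconds, the CacheReplicationMonitor rescans the
    path-based cache entries on the NameNode.
  </description>
</property>
{code}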

{{CacheReplicationMonitor#addNewPendingCached}}
{code}
    // To figure out which replicas can be cached, we consult the
    // blocksMap.  We don't want to try to cache a corrupt replica, though.
    BlockInfo blockInfo = blockManager.
          getStoredBlock(new Block(cachedBlock.getBlockId()));
    if (!blockInfo.isComplete()) {
{code}
I think there is a risk of {{NullPointerException}} on the call to 
{{blockInfo#isComplete}}, because {{rescan}} releases and reacquires the 
write lock between the two steps.  Imagine the following sequence:
# {{CacheReplicationMonitor#rescan}} acquires the lock and identifies file X 
needs to be cached.
# {{FSNamesystem#delete}} is called independently by an HDFS client.  This 
acquires the write lock and deletes the inode for X and its blocks.
# {{CacheReplicationMonitor#rescan}} acquires the lock a second time and 
tries to add the now-deleted file's blocks to the pending-cached list.  The 
block might no longer be in the {{BlockManager}}.
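A minimal, self-contained model of that race (hypothetical names, not the 
real HDFS classes -- a map stands in for the blocksMap, and a null lookup 
result models a block whose file was deleted while the lock was dropped):
{code}
import java.util.HashMap;
import java.util.Map;

class StaleBlockLookupDemo {
    static final Map<Long, String> blocksMap = new HashMap<>();

    static String addNewPendingCached(long blockId) {
        // Re-lookup after the lock was released and reacquired; the result
        // may be null if FSNamesystem#delete ran in between.
        String blockInfo = blocksMap.get(blockId);
        if (blockInfo == null) {
            // Guard: skip the vanished block instead of hitting an NPE
            // on the isComplete() call.
            return "skipped";
        }
        return "pending-cached";
    }
}
{code}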


{{CacheReplicationMonitor#rescanCachedBlockMap}}
{code}
      if (neededCached <= numCached) {
        // If we have all the replicas we need, or too few, drop all 
        // pending cached.
        for (DatanodeDescriptor datanode : pendingCached) {
          datanode.getPendingCached().removeElement(cblock);
        }
      }
      if (neededCached >= numCached) {
        // If we have all the replicas we need, or too many, drop all
        // pending uncached.
        for (DatanodeDescriptor datanode : pendingUncached) {
          datanode.getPendingUncached().removeElement(cblock);
        }
      }
{code}
Are those comments still inverted?
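For reference, here is how I read the intended pairing of condition and 
comment (a sketch only; these boolean helpers are hypothetical and not in 
the patch):
{code}
class ReplicaCountDemo {
    // neededCached <= numCached: we have all the replicas we need, or too
    // many -- so there is no reason to keep any pending *cached* work.
    static boolean dropPendingCached(int neededCached, int numCached) {
        return neededCached <= numCached;
    }

    // neededCached >= numCached: we have all the replicas we need, or too
    // few -- so there is no reason to keep any pending *uncached* work.
    static boolean dropPendingUncached(int neededCached, int numCached) {
        return neededCached >= numCached;
    }
}
{code}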


{{CacheReplicationMonitor#rescanFile}}
{code}
        } else {
          // Mark already set in this scan.  Set replication to highest value in
          // // any PathBasedCacheEntry that covers this file.
          ocblock.setReplicationAndMark((short)Math.max(
              pce.getReplication(), ocblock.getReplication()), mark);
        }
{code}
Super-minor: The comment has two sets of double-slashes.

{{CacheManager#setCachedLocations}}: I suspect some kind of synchronization is 
needed here, perhaps acquiring the namesystem read lock?  {{cachedBlocks}} is 
getting modified inside {{processCacheReportImpl}} with the write lock held.  
{{CacheManager#setCachedLocations}} gets called from 
{{FSNamesystem#getBlockLocations}} with *no lock held*, so this would cause 
concurrent reading and writing.
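To illustrate the pairing I have in mind, here is a minimal sketch using 
{{java.util.concurrent}} locks (the actual {{FSNamesystem}} lock API is 
different, and the names below are made up):
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class CachedLocationsLockDemo {
    static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    static final List<String> cachedBlocks = new ArrayList<>();

    // Writer path, analogous to processCacheReportImpl, which mutates
    // cachedBlocks with the write lock held.
    static void processCacheReport(String block) {
        lock.writeLock().lock();
        try {
            cachedBlocks.add(block);
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Reader path, analogous to setCachedLocations: taking the read lock
    // keeps the read from interleaving with a concurrent cache report.
    static List<String> getCachedLocations() {
        lock.readLock().lock();
        try {
            return new ArrayList<>(cachedBlocks);
        } finally {
            lock.readLock().unlock();
        }
    }
}
{code}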


> Automatically cache new data added to a cached path
> ---------------------------------------------------
>
>                 Key: HDFS-5096
>                 URL: https://issues.apache.org/jira/browse/HDFS-5096
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, namenode
>            Reporter: Andrew Wang
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-5096-caching.005.patch, 
> HDFS-5096-caching.006.patch, HDFS-5096-caching.009.patch
>
>
> For some applications, it's convenient to specify a path to cache, and have 
> HDFS automatically cache new data added to the path without sending a new 
> caching request or a manual refresh command.
> One example is new data appended to a cached file. It would be nice to 
> re-cache a block at the new appended length, and cache new blocks added to 
> the file.
> Another example is a cached Hive partition directory, where a user can drop 
> new files directly into the partition. It would be nice if these new files 
> were cached.
> In both cases, this automatic caching would happen after the file is closed, 
> i.e., once the block replica is finalized.



--
This message was sent by Atlassian JIRA
(v6.1#6144)
