[ 
https://issues.apache.org/jira/browse/HDFS-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13795734#comment-13795734
 ] 

Andrew Wang commented on HDFS-5096:
-----------------------------------

Some more patch notes. I haven't yet gone back and verified that my previous 
review comments were addressed, but will do so when Colin signals that it's 
ready. I think we're 
getting close though, and I really like the net of this diff stat:

{code}
 23 files changed, 1354 insertions(+), 2129 deletions(-)
{code}

* In DatanodeManager, we pop commands off of the pending lists when we send 
them. This means the NN no longer knows what's in-flight on the cluster, 
potentially leading to a few issues if there are lost/delayed messages:
** If a command gets popped then lost, it won't be retried until the next rescan
** If the CRMon rescan interval is too short, it could over-cache by 
re-issuing commands before caching finishes or cacheReports arrive
** Could under-cache if it asks the same DN twice, or a cache request is lost
** Could retry a failed caching attempt on the same DN again
* I guess it's okay to be a bit sloppy, but we probably want pending / timeout 
/ failed information for error reporting and statistics too
* Do we clear all the pending queues before a rescan? We don't want a DN 
getting commands from two different scans.
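The bookkeeping I have in mind could look roughly like this — a standalone sketch, not HDFS code, and the names ({{PendingCacheTracker}}, {{markSent}}, {{markConfirmed}}) are hypothetical. The idea is to keep a sent command in an in-flight map with a timestamp rather than popping it, so a lost or delayed reply can be detected and retried before the next full rescan:

```java
import java.util.*;

// Hypothetical sketch: track sent cache commands as "in-flight" with a send
// timestamp, instead of dropping them from the pending list at send time.
public class PendingCacheTracker {
  private final Map<Long, Long> inFlight = new HashMap<>(); // blockId -> sendTimeMs
  private final long timeoutMs;

  public PendingCacheTracker(long timeoutMs) {
    this.timeoutMs = timeoutMs;
  }

  // Called when the NN sends a caching command for a block.
  public void markSent(long blockId, long nowMs) {
    inFlight.put(blockId, nowMs);
  }

  // Called when a cache report confirms the block is cached on the DN.
  public void markConfirmed(long blockId) {
    inFlight.remove(blockId);
  }

  // Commands whose confirmations are overdue; these can be retried right
  // away instead of waiting for the next rescan.
  public List<Long> expired(long nowMs) {
    List<Long> overdue = new ArrayList<>();
    for (Map.Entry<Long, Long> e : inFlight.entrySet()) {
      if (nowMs - e.getValue() > timeoutMs) {
        overdue.add(e.getKey());
      }
    }
    return overdue;
  }
}
```

This would also give us the pending/timeout/failed counts mentioned below essentially for free.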

I think it makes sense to have different periods for the replication monitor, 
caching request timeout, and the PCE scanner. The first can be a couple of 
seconds (BlockManager uses 3) to quickly generate new caching work on DN 
failures; the second maybe a minute, since it's easier than block replication; 
the third 5 or 10 seconds, like it currently is. It'd also be nice to 
re-introduce some edge triggering 
via an incremental {{#kick()}} taking a set of {{PBCEntries}} or {{Paths}} 
since we could sprinkle it around FSN to improve our reaction time. It'd 
also be good to edge trigger on DN failures like 
{{BlockManager#removeStoredBlock}} does. Depending on how hard these tasks are, 
we could move them to follow-on JIRAs.
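The {{#kick()}} shape I'm imagining is roughly the following — a hypothetical standalone sketch (the class name {{RescanMonitor}} and the {{rescan()}} hook are made up, and it ignores the entry/path arguments for brevity). The monitor wakes on its interval as a level trigger, but callers can kick it to scan immediately:

```java
// Hypothetical sketch: an interval-driven rescan thread that can also be
// edge-triggered via kick(), e.g. on a new cache directive or a DN failure.
public class RescanMonitor implements Runnable {
  private final long intervalMs;
  private boolean kicked = false;          // guarded by this
  private volatile boolean running = true;

  public RescanMonitor(long intervalMs) {
    this.intervalMs = intervalMs;
  }

  /** Edge trigger: request an immediate rescan without waiting out the interval. */
  public synchronized void kick() {
    kicked = true;
    notifyAll();
  }

  @Override
  public void run() {
    while (running) {
      synchronized (this) {
        long deadline = System.currentTimeMillis() + intervalMs;
        // Sleep until kicked or the interval elapses, whichever comes first.
        while (!kicked && running) {
          long waitMs = deadline - System.currentTimeMillis();
          if (waitMs <= 0) {
            break;                         // level trigger: interval elapsed
          }
          try {
            wait(waitMs);
          } catch (InterruptedException e) {
            return;
          }
        }
        kicked = false;
      }
      if (!running) {
        break;
      }
      rescan();
    }
  }

  /** Placeholder for the real work: walk cache directives and queue commands. */
  protected void rescan() { }

  public void shutdown() {
    running = false;
    synchronized (this) {
      notifyAll();
    }
  }
}
```

An incremental variant would pass the kicked {{PBCEntries}}/{{Paths}} through to {{rescan()}} so only the affected subtrees get rescanned.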

* Decommission status isn't taken into account when doing cache replication
* I think scan and cur need to be reversed here:
{code}
              LOG.info("Rescanning after " + (scanTimeMs - curTimeMs) +
                  " milliseconds");
{code}
* Right below that, I think we can move setting {{scanTimeMs}} and {{mark}} out 
of the synchronized block
* Maybe rename {{TestPathBasedCacheRequests}} to {{TestCacheManager}} ? It 
deserves a more generic name now.
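On the log-ordering point above, what I'd expect is elapsed = current minus last-scan, with both field updates outside the lock — a minimal standalone sketch, assuming the {{scanTimeMs}} and {{mark}} fields from the patch and that only the monitor thread writes them:

```java
// Hypothetical sketch of the suggested fix: elapsed time is (cur - scan),
// and scanTimeMs/mark are updated without holding the monitor lock, which
// is safe if only the rescan thread ever writes them.
public class RescanTiming {
  private long scanTimeMs;   // when the previous scan ran
  private boolean mark;      // flip-flop marker from the patch

  public long rescanElapsed(long curTimeMs) {
    long elapsed = curTimeMs - scanTimeMs;   // cur minus scan, not scan minus cur
    scanTimeMs = curTimeMs;                  // set outside any synchronized block
    mark = !mark;
    return elapsed;
  }
}
```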

> Automatically cache new data added to a cached path
> ---------------------------------------------------
>
>                 Key: HDFS-5096
>                 URL: https://issues.apache.org/jira/browse/HDFS-5096
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, namenode
>            Reporter: Andrew Wang
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-5096-caching.005.patch, 
> HDFS-5096-caching.006.patch, HDFS-5096-caching.009.patch
>
>
> For some applications, it's convenient to specify a path to cache, and have 
> HDFS automatically cache new data added to the path without sending a new 
> caching request or a manual refresh command.
> One example is new data appended to a cached file. It would be nice to 
> re-cache a block at the new appended length, and cache new blocks added to 
> the file.
> Another example is a cached Hive partition directory, where a user can drop 
> new files directly into the partition. It would be nice if these new files 
> were cached.
> In both cases, this automatic caching would happen after the file is closed, 
> i.e. block replica is finalized.



--
This message was sent by Atlassian JIRA
(v6.1#6144)