[
https://issues.apache.org/jira/browse/HDFS-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13794601#comment-13794601
]
Chris Nauroth commented on HDFS-5096:
-------------------------------------
bq. Let me know if you think the comments in the code could be improved.
I think some version of what Andrew just said would do the job nicely, i.e.:
{code}
if (mark != ocblock.getMark()) {
  // Mark hasn't been set in this scan, so update replication and mark.
  ocblock.setReplicationAndMark(pce.getReplication(), mark);
} else {
  // Mark already set in this scan. Set replication to the highest value in
  // any PathBasedCacheEntry that covers this file.
  ocblock.setReplicationAndMark((short)Math.max(
      pce.getReplication(), ocblock.getReplication()), mark);
}
{code}
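For illustration only, here is a minimal standalone sketch of that mark-based "highest replication wins" update, using simplified stand-ins for CachedBlock and PathBasedCacheEntry rather than the real classes from the patch:
{code}
/** Simplified stand-in for a cached block tracked by the monitor (illustration only). */
class Block {
  private short replication;
  private boolean mark;

  boolean getMark() { return mark; }
  short getReplication() { return replication; }

  void setReplicationAndMark(short replication, boolean mark) {
    this.replication = replication;
    this.mark = mark;
  }
}

class RescanSketch {
  /**
   * Apply one cache entry's replication to a block during a scan.
   * The first entry seen in a scan flips the mark and overwrites the
   * replication; later entries in the same scan only raise it, never lower it.
   */
  static void update(Block block, short entryReplication, boolean scanMark) {
    if (scanMark != block.getMark()) {
      // Mark hasn't been set in this scan, so update replication and mark.
      block.setReplicationAndMark(entryReplication, scanMark);
    } else {
      // Mark already set in this scan. Keep the highest replication of any
      // entry that covers this block.
      block.setReplicationAndMark(
          (short) Math.max(entryReplication, block.getReplication()), scanMark);
    }
  }

  public static void main(String[] args) {
    Block b = new Block();
    boolean scanMark = true;          // mark value for this scan
    update(b, (short) 2, scanMark);   // first covering entry: replication = 2
    update(b, (short) 3, scanMark);   // second entry raises it to 3
    update(b, (short) 1, scanMark);   // lower entry does not reduce it
    System.out.println(b.getReplication());  // prints 3
  }
}
{code}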
bq. thanks for your patience, both of you
No problem, thanks for your code. :-)
> Automatically cache new data added to a cached path
> ---------------------------------------------------
>
> Key: HDFS-5096
> URL: https://issues.apache.org/jira/browse/HDFS-5096
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode, namenode
> Reporter: Andrew Wang
> Assignee: Colin Patrick McCabe
> Attachments: HDFS-5096-caching.005.patch, HDFS-5096-caching.006.patch
>
>
> For some applications, it's convenient to specify a path to cache, and have
> HDFS automatically cache new data added to the path without sending a new
> caching request or a manual refresh command.
> One example is new data appended to a cached file. It would be nice to
> re-cache a block at the new appended length, and cache new blocks added to
> the file.
> Another example is a cached Hive partition directory, where a user can drop
> new files directly into the partition. It would be nice if these new files
> were cached.
> In both cases, this automatic caching would happen after the file is closed,
> i.e. when the block replica is finalized.
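As a rough illustration of the behavior described above (re-cache appended files up to their new finalized length, cache new files once they are closed), a periodic rescan over a cached path might look like the sketch below. AutoCacheRescanSketch, FileView, and requestCaching are illustrative placeholders, not the actual NameNode classes from this patch:
{code}
import java.util.*;

/** Hypothetical view of one cached path during a rescan (illustration only). */
class AutoCacheRescanSketch {

  /** Minimal stand-in for a file's state as seen by the scan. */
  static class FileView {
    final String path;
    final long finalizedLength;   // length covered by finalized block replicas
    FileView(String path, long finalizedLength) {
      this.path = path;
      this.finalizedLength = finalizedLength;
    }
  }

  /** Length already cached for each file; absent means not cached yet. */
  private final Map<String, Long> cachedLength = new HashMap<>();

  /**
   * One pass over the files currently under a cached path. New files are
   * cached from scratch; appended files are re-cached up to the new
   * finalized length. Data still being written is left alone.
   */
  void rescan(List<FileView> filesUnderCachedPath) {
    for (FileView f : filesUnderCachedPath) {
      long already = cachedLength.getOrDefault(f.path, 0L);
      if (f.finalizedLength > already) {
        requestCaching(f.path, f.finalizedLength);
        cachedLength.put(f.path, f.finalizedLength);
      }
    }
  }

  /** Placeholder for asking the DataNodes to cache the finalized blocks. */
  private void requestCaching(String path, long upToLength) {
    System.out.println("cache " + path + " up to byte " + upToLength);
  }

  public static void main(String[] args) {
    AutoCacheRescanSketch scan = new AutoCacheRescanSketch();
    // First scan sees one closed file; a later scan sees it appended plus a new file.
    scan.rescan(Arrays.asList(new FileView("/warehouse/part-0", 128L)));
    scan.rescan(Arrays.asList(
        new FileView("/warehouse/part-0", 256L),   // appended and re-closed
        new FileView("/warehouse/part-1", 64L)));  // new file dropped into the directory
  }
}
{code}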