[
https://issues.apache.org/jira/browse/HADOOP-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745431#comment-13745431
]
Sangjin Lee commented on HADOOP-9639:
-------------------------------------
{quote}
How does the client determine which job submission files are eligible for
participating in the public cache?
{quote}
Right now, it is a binary choice. If the binary key is set, the job jar and
libjars (if any) will all be sharable/shared. With the APIs, however, you
should be able to have finer-grained control. Is that acceptable? Could you
give me a scenario under which the client may want finer-grained control? Is
that what you were getting at?
{quote}
The algorithm refers to a file being uploaded and "fully written." If files are
only renamed to the advertised location after they have finished uploading to a
temporary location first, how does a file at its final name end up not being
"fully written"?
{quote}
Maybe the term "fully written" is a bit misleading. It is really a defensive
check to guard against an *erroneous* file. You're right that if the file is
uploaded by a client with this functionality, a partially written file at the
final name shouldn't be possible. This is more about guarding against bad
files.
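To illustrate the defensive check described above, here is a minimal sketch
(illustrative names only, not the actual patch's API) that compares a cache
entry's on-disk length with the size recorded at upload time before trusting
it:

```java
import java.io.File;

/**
 * Hypothetical sketch of the defensive "fully written" check: before using
 * a cache entry, compare its on-disk length with the expected length known
 * from upload time. A mismatch means the entry is treated as a bad file.
 */
public class CacheEntryCheck {
    /** Returns true if the cached file is present and matches the expected
     *  size; false means treat it as a bad (e.g. truncated) file. */
    static boolean looksFullyWritten(File cached, long expectedSize) {
        return cached.isFile() && cached.length() == expectedSize;
    }

    public static void main(String[] args) throws Exception {
        File tmp = File.createTempFile("jobjar", ".jar");
        tmp.deleteOnExit();
        java.nio.file.Files.write(tmp.toPath(), new byte[]{1, 2, 3});
        System.out.println(looksFullyWritten(tmp, 3));  // matching size
        System.out.println(looksFullyWritten(tmp, 99)); // truncated/bad file
    }
}
```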
{quote}
Speaking of temporary files, there probably should be an algorithm clients
should use to try to derive a unique temporary name to avoid collisions,
similar to the mechanism to derive a unique reader lock. Bonus points if the
cleaner service recognizes and acts upon orphaned temporary files from clients
that crashed/disconnected during upload.
{quote}
Agreed on having a common (or similar) algorithm for deriving unique names.
I've thought about cleaning up orphaned temporary files. The difficulty is
determining authoritatively whether a given temporary file is truly unused. One
could check whether the file is closed, but even a closed temp file may still
be in use (e.g. a client may be using a temp file while it is being localized).
Beyond that, heuristics get hairy; e.g. "old enough" (how old is old enough?).
One mitigating factor is that orphaned temporary files should be pretty
uncommon (unlike the orphaned reader lock files).
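As a rough sketch of the kind of common unique-name scheme discussed above (the
naming convention here is an assumption, in the same spirit as the unique
reader-lock names, not the actual patch): suffix the target name with a
per-process identifier and a random component so concurrent clients cannot
collide.

```java
import java.lang.management.ManagementFactory;
import java.util.UUID;

/**
 * Illustrative sketch of deriving a unique temporary upload name.
 * The "<finalName>.tmp.<pid@host>.<uuid>" layout is hypothetical.
 */
public class TempNameUtil {
    static String uniqueTempName(String finalName) {
        // RuntimeMXBean#getName() is typically "pid@hostname"
        String jvm = ManagementFactory.getRuntimeMXBean().getName();
        return finalName + ".tmp." + jvm + "." + UUID.randomUUID();
    }

    public static void main(String[] args) {
        System.out.println(uniqueTempName("abc123.jar"));
    }
}
```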
{quote}
When a client uploads a new file, loses the race with another client, and
determines for some reason that the version that won is bad, do we really want
that client removing the file? That seems like it will break the job that won
the race.
{quote}
I believe the file must necessarily be a bad one in this situation. For this to
happen, the size of the uploaded jar must not match the size of the copy I
have; but given the right checksum, the sizes cannot differ. I included this
step to clean up the bad jar so that other clients can "repair" it.
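The repair step above can be sketched as follows (illustrative names, not the
real API): a client that loses the rename race compares the winner's size with
its own checksum-verified copy; since entries are keyed by checksum, a mismatch
can only mean the winner is a bad file, so it is removed to let a later client
re-upload it.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Hedged sketch of the "repair" step: delete the existing cache entry only
 * when its size disagrees with the size of the locally verified copy.
 */
public class LoserRepair {
    /** Returns true if the existing entry was deleted as bad. */
    static boolean repairIfBad(Path existing, long mySize) throws IOException {
        if (Files.size(existing) != mySize) {
            Files.delete(existing);   // bad jar: remove so others can repair
            return true;
        }
        return false;                 // sizes match: the winner is fine
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("winner", ".jar");
        Files.write(p, new byte[]{1, 2, 3});
        System.out.println(repairIfBad(p, 3)); // sizes match, entry kept
        Files.deleteIfExists(p);
    }
}
```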
{quote}
Many workflows use "fire-and-forget" clients that do not have the job
submission client waiting for job completion. In that case the read lock will
be leaked by the client and the cleaner must remove any read lock files that
are considered stale even if it finds there is an active reference to the cache
entry. Failure to do so for a consistently active entry in the cache leads to
unbounded namespace leaks.
{quote}
Yes, that's a good point. We thought about this scenario (see the tasks
section), and I think it should be safe to have the cleaner service remove all
reader locks except the latest one. I'll consider doing that. We just need to
be careful in selecting the directories to clean this way, as removing these
files also updates the directory modification time and artificially
"refreshes" the directory.
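The cleanup just described could look roughly like this (the one-".readlock."-
file-per-reader layout is an assumption for illustration): remove every
reader-lock file in an entry's directory except the most recently modified one,
bounding the namespace leak from fire-and-forget clients.

```java
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

/**
 * Sketch of stale-reader-lock cleanup under a hypothetical layout where
 * each reader creates a ".readlock." file in the cache entry's directory.
 */
public class ReaderLockCleaner {
    static void removeAllButLatestLock(File entryDir) {
        File[] locks = entryDir.listFiles(
            (dir, name) -> name.contains(".readlock."));
        if (locks == null || locks.length <= 1) {
            return;
        }
        // Keep only the newest lock. Note these deletes bump the parent
        // directory's mtime, so the cleaner must not mistake that for a
        // genuine "refresh" of the cache entry.
        Arrays.sort(locks, Comparator.comparingLong(File::lastModified));
        for (int i = 0; i < locks.length - 1; i++) {
            locks[i].delete();
        }
    }

    public static void main(String[] args) throws Exception {
        File dir = java.nio.file.Files.createTempDirectory("entry").toFile();
        new File(dir, "h1.readlock.a").createNewFile();
        new File(dir, "h1.readlock.b").createNewFile();
        removeAllButLatestLock(dir);
        System.out.println(dir.listFiles().length); // one lock remains
    }
}
```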
> truly shared cache for jars (jobjar/libjar)
> -------------------------------------------
>
> Key: HADOOP-9639
> URL: https://issues.apache.org/jira/browse/HADOOP-9639
> Project: Hadoop Common
> Issue Type: New Feature
> Components: filecache
> Affects Versions: 2.0.4-alpha
> Reporter: Sangjin Lee
> Assignee: Sangjin Lee
> Attachments: shared_cache_design.pdf
>
>
> Currently there is the distributed cache that enables you to cache jars and
> files so that attempts from the same job can reuse them. However, sharing is
> limited with the distributed cache because it is normally on a per-job basis.
> On a large cluster, sometimes copying of jobjars and libjars becomes so
> prevalent that it consumes a large portion of the network bandwidth, not to
> mention defeating the purpose of "bringing compute to where data is". This
> is wasteful because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss feasibility of introducing a truly shared
> cache so that multiple jobs from multiple users can share and cache jars.
> This JIRA is to open the discussion.