[ https://issues.apache.org/jira/browse/MAPREDUCE-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12860786#action_12860786 ]
Hemanth Yamijala commented on MAPREDUCE-1288:
---------------------------------------------
bq. Why should an old job fail because of what is, essentially, an external
event?
The job failing is unlikely, right? Please note that I said tasks fail. I hope
someone can clarify (given the two statuses we have - i.e. tasks failed vs
killed) whether this condition can lead Hadoop to abort after a sufficient
number of failures. Even if it can, it would require at least one task's
attempts to get scheduled on 4 such nodes and fail on all four. I am
thinking this is unlikely. But let's hope someone (alias Amarsri *smile*) can
clarify this.
> DistributedCache localizes only once per cache URI
> --------------------------------------------------
>
> Key: MAPREDUCE-1288
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1288
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: security, tasktracker
> Affects Versions: 0.21.0
> Reporter: Devaraj Das
> Priority: Blocker
> Fix For: 0.21.0
>
>
> As part of file localization, the distributed cache localizer creates a
> copy of the file in the corresponding user's private directory. The
> localization in DistributedCache uses the URI of the cache file as the key,
> and if the key already exists in the map, the localization is not done
> again. This means that another user cannot access the same distributed
> cache file. We should change the key to include the username so that
> localization is done for every user.
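The keying problem described above can be sketched as follows. This is a hypothetical illustration, not the actual TrackerDistributedCacheManager code; the class name, key format, and paths are made up for the example:

```java
import java.util.HashMap;
import java.util.Map;

public class CacheKeySketch {
    // Old behavior: the map key is the cache URI alone.
    static String oldKey(String user, String uri) {
        return uri;
    }

    // Proposed fix: qualify the key with the username so each user
    // gets their own localized copy.
    static String newKey(String user, String uri) {
        return user + "#" + uri;
    }

    public static void main(String[] args) {
        Map<String, String> localized = new HashMap<>();
        String uri = "hdfs://nn/cache/libs.jar";

        // User alice localizes the file; user bob then requests the same URI.
        localized.put(oldKey("alice", uri), "/local/alice/libs.jar");
        // With the URI-only key, bob's lookup hits alice's private copy,
        // which he cannot read, so his localization is skipped.
        System.out.println(localized.containsKey(oldKey("bob", uri)));

        localized.clear();
        localized.put(newKey("alice", uri), "/local/alice/libs.jar");
        // With the user-qualified key, bob's lookup misses, so
        // localization runs again into his own private directory.
        System.out.println(localized.containsKey(newKey("bob", uri)));
    }
}
```

With the URI-only key the first lookup prints `true` (bob is wrongly served alice's copy); with the user-qualified key it prints `false`, forcing a per-user localization.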
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.