[
https://issues.apache.org/jira/browse/MAPREDUCE-4820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614463#comment-13614463
]
Robert Joseph Evans commented on MAPREDUCE-4820:
------------------------------------------------
The following is an excerpt from the git log for branch-2.
{noformat}
commit 9aba3ebb2d455932981cc37fe8e3fa7a6ec4da82
Author: Alejandro Abdelnur <[email protected]>
Date:   Tue Dec 11 19:50:32 2012 +0000

    MAPREDUCE-4549. Distributed cache conflicts breaks backwards compatability.
    (Robert Evans via tucu)

    git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/branches/branch-2@1420363 13f79535-47bb-0310-9956-ffa450edef68

commit 686cf9752f8416cc513fbea3d846953192e28a67
Author: Jonathan Turner Eagles <[email protected]>
Date:   Fri Aug 3 20:42:18 2012 +0000

    svn merge -c 1369197 FIXES: MAPREDUCE-4503. Should throw
    InvalidJobConfException if duplicates found in cacheArchives or cacheFiles
    (Robert Evans via jeagles)

    git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/branches/branch-2@1369201 13f79535-47bb-0310-9956-ffa450edef68
{noformat}
Both of these patches only touched
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
(specifically the parseDistributedCacheArtifacts method of MRApps) and some tests.
MAPREDUCE-4503 added the following check:
{code}
LocalResource orig = localResources.get(linkName);
if(orig != null && !orig.getResource().equals(
    ConverterUtils.getYarnUrlFromURI(p.toUri()))) {
  throw new InvalidJobConfException(
      getResourceDescription(orig.getType()) + orig.getResource() +
      " conflicts with " + getResourceDescription(type) + u);
}
{code}
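For a concrete sense of what that check rejects, here is a minimal sketch of a
job configuration that would trip it. The HDFS paths and class name are made up
for illustration; Job.addCacheFile is the standard Hadoop 2 API for adding
distributed-cache files.
{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Sketch only: the paths below are hypothetical. Both cache files share the
// basename "foo.jar", so both would be symlinked as "foo.jar" in the task's
// working directory and collide on the same linkName in the check above.
public class DuplicateCacheFileExample {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration());
    job.addCacheFile(new URI("hdfs:///user/a/lib/foo.jar"));
    job.addCacheFile(new URI("hdfs:///user/b/other/foo.jar"));
    // With the MAPREDUCE-4503 check, submitting such a job fails with
    // InvalidJobConfException; after MAPREDUCE-4549 it only logs a warning
    // and the second entry is skipped.
  }
}
{code}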
MAPREDUCE-4549 changed it as follows:
{code}
-    if(orig != null && !orig.getResource().equals(
-        ConverterUtils.getYarnUrlFromURI(p.toUri()))) {
-      throw new InvalidJobConfException(
-          getResourceDescription(orig.getType()) + orig.getResource() +
-          " conflicts with " + getResourceDescription(type) + u);
+    org.apache.hadoop.yarn.api.records.URL url =
+        ConverterUtils.getYarnUrlFromURI(p.toUri());
+    if(orig != null && !orig.getResource().equals(url)) {
+      LOG.warn(
+          getResourceDescription(orig.getType()) +
+          toString(orig.getResource()) + " conflicts with " +
+          getResourceDescription(type) + toString(url) +
+          " This will be an error in Hadoop 2.0");
+      continue;
{code}
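Reassembling the hunk, the check ends up looking roughly like this after
MAPREDUCE-4549. This is a sketch reconstructed from the diff above; the
surrounding loop variables (linkName, p, u, type) are assumed from
parseDistributedCacheArtifacts, and the closing brace of the if falls outside
the hunk.
{code}
// Reconstructed sketch of the post-MAPREDUCE-4549 check, not a verbatim copy.
LocalResource orig = localResources.get(linkName);
org.apache.hadoop.yarn.api.records.URL url =
    ConverterUtils.getYarnUrlFromURI(p.toUri());
if(orig != null && !orig.getResource().equals(url)) {
  // Instead of failing the job, the conflicting entry is logged and skipped.
  LOG.warn(
      getResourceDescription(orig.getType()) +
      toString(orig.getResource()) + " conflicts with " +
      getResourceDescription(type) + toString(url) +
      " This will be an error in Hadoop 2.0");
  continue;
}
{code}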
The end result of both of these JIRAs is a no-op, except for a warning message
that names the wrong version of Hadoop as the one where this becomes an error
instead of a warning. I don't think these changes fixed the issue, but I also
don't think that either of these JIRAs is the cause of the problems you are
currently seeing. Looking at the logs you posted, I don't see that warning
anywhere in them.
> MRApps distributed-cache duplicate checks are incorrect
> -------------------------------------------------------
>
> Key: MAPREDUCE-4820
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4820
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: mr-am
> Affects Versions: 2.0.2-alpha
> Reporter: Alejandro Abdelnur
> Priority: Blocker
> Fix For: 2.0.4-alpha
>
> Attachments: launcher-job.conf.xml, launcher-job.logs.txt,
> mr-job.conf.xml, mr-job.logs.txt
>
>
> This seems to be a combination of issues that are being exposed in 2.0.2-alpha
> by MAPREDUCE-4549.
> MAPREDUCE-4549 introduces a check to ensure there are no duplicate JARs in the
> distributed-cache (using the JAR name as identity).
> In Hadoop 2 (different from Hadoop 1), all JARs in the distributed-cache are
> symlink-ed to the current directory of the task.
> MRApps, when setting up the DistributedCache
> (MRApps#setupDistributedCache->parseDistributedCacheArtifacts) assumes that
> the local resources (this includes files in the CURRENT_DIR/,
> CURRENT_DIR/classes/ and files in CURRENT_DIR/lib/) are part of the
> distributed-cache already.
> For systems like Oozie, which use a launcher job to submit the real job, this
> poses a problem because MRApps is run from the launcher job to submit the real
> job. The configuration of the real job has the correct distributed-cache
> entries (no duplicates), but because the current dir has the same files, the
> submission fails.
> It seems that MRApps should not be checking for dups in the distributed-cache
> against JARs in the CURRENT_DIR/ or CURRENT_DIR/lib/. The dup check should be
> done among distributed-cache entries only.
> It seems YARNRunner is symlinking all files in the distributed cache into the
> current directory. In Hadoop 1 this was done only for files added to the
> distributed-cache using a fragment (i.e. "#FOO") to trigger symlink creation.
> Marking as a blocker because without a fix for this, Oozie cannot submit jobs
> to Hadoop 2. (I've debugged Oozie in a live cluster being used by BigTop
> -thanks Roman- to test their release work, and I've verified that Oozie 3.3
> does not create duplicate entries in the distributed-cache.)