[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13472173#comment-13472173
 ] 

Arun C Murthy commented on MAPREDUCE-4568:
------------------------------------------

Ok, I spent an inordinate amount of time on this, but I'm finally ready to 
give up.

Unfortunately, none of the DistributedCache APIs (i.e. 
DistributedCache.addCache(File|Archive) or Job.addCache(File|Archive)) has an 
exception specification - this means we'd need to resort to throwing a 
RuntimeException or the like, which I'm not a fan of...
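
To make the constraint concrete, here is a rough sketch of what a fail-fast 
check would have to look like. The CacheEntryValidator helper below is 
hypothetical (not actual Hadoop code); since the existing method signatures 
can't change, an unchecked exception such as IllegalArgumentException is the 
only kind of exception we could throw:

    import java.net.URI;
    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical client-side helper, for illustration only.
    public class CacheEntryValidator {
      private final Set<URI> seen = new HashSet<URI>();

      // Fails fast with an unchecked exception, since the addCache* methods
      // declare no checked exceptions and their signatures can't be changed.
      public void checkNotDuplicate(URI uri) {
        if (!seen.add(uri)) {
          throw new IllegalArgumentException(
              "Duplicate distributed cache entry: " + uri);
        }
      }
    }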

For now, I feel the best we can do (without breaking compat) is to just 
document this and leave it as it is... 

Thoughts?
                
> Throw "early" exception when duplicate files or archives are found in 
> distributed cache
> ---------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-4568
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4568
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Mohammad Kamrul Islam
>            Assignee: Arun C Murthy
>
> According to #MAPREDUCE-4549, Hadoop 2.x throws an exception if duplicates 
> are found in cacheFiles or cacheArchives. The exception is thrown during job 
> submission.
> This JIRA is to throw the exception *early*, when the duplicate entry is 
> first added to the Distributed Cache through addCacheFile or 
> addFileToClassPath.
> This will help the client decide whether to fail fast or continue without 
> the duplicated entries.
> Alternatively, Hadoop could provide a knob that lets the user choose whether 
> to throw an error (new behavior) or silently ignore duplicates (old behavior).
