[ https://issues.apache.org/jira/browse/MAPREDUCE-2384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284747#comment-13284747 ]

Hudson commented on MAPREDUCE-2384:
-----------------------------------

Integrated in Hadoop-Hdfs-trunk #1060 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1060/])
    MAPREDUCE-2384. The job submitter should make sure to validate jobs before 
creation of necessary files. (harsh) (Revision 1343240)

     Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1343240
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestMRJobClient.java

                
> The job submitter should make sure to validate jobs before creation of 
> necessary files
> --------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-2384
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2384
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: job submission, test
>    Affects Versions: 0.21.0
>            Reporter: Denny Ye
>            Assignee: Harsh J
>             Fix For: 3.0.0
>
>         Attachments: MAPREDUCE-2384.r1.diff, MAPREDUCE-2384.r2.diff, 
> MAPREDUCE-2384.r3.diff, MAPREDUCE-2384.r4.diff
>
>
> In 0.20.x/1.x, 0.21 and 0.22, the MapReduce job submitter writes some 
> job-necessary files to the JT filesystem before checking the output specs and 
> other job validation items. This appears unnecessary.
> This has since been silently fixed in the MRApp rewrite (MRv2) from the 
> MAPREDUCE-279 code drop that has now replaced the older MR framework (MRv1). 
> However, we could still use a test case to prevent regressing again; a minimal 
> sketch of the intended ordering is included after the quoted description below.
> Original description below:
> {quote}
> While reading the MapReduce source code in Hadoop 0.21.0, I was sometimes 
> confused by how errors are reported. For example:
>         1. JobSubmitter output checking. MapReduce requires that a job's 
> output directory must not already exist, so that existing data is not 
> accidentally overwritten. In my opinion, MR should verify the output at the 
> point of client submission. Instead, it copies the related files to the 
> specified target first and only then does the verification.
>         2. JobTracker. After a job has been submitted to the JobTracker, the 
> JT first creates a JobInProgress object, which is very "huge", and only then 
> starts to verify the job's queue authorization and memory requirements.
>
>         Normally, client input should be verified first, with an immediate 
> response if anything is wrong; the regular logic should only run once all the 
> inputs have passed.
>         This code seems hard to understand. Is that only my personal opinion? 
> I would appreciate it if someone could explain the details. Thanks!
> {quote}
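
For reference, a minimal sketch of the submission ordering this issue asks for:
validate the output specification first, and only copy job files to the staging
area once validation has passed. This is not the actual JobSubmitter code; the
class name and the copyJobFiles() helper are placeholders for illustration.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.OutputFormat;
import org.apache.hadoop.util.ReflectionUtils;

public class ValidateBeforeCopySketch {

  // Desired submission order: fail fast on a bad output spec before anything
  // is written to the staging/JT filesystem.
  static void submit(Job job, Path stagingDir) throws Exception {
    Configuration conf = job.getConfiguration();

    // 1. Validate first. For FileOutputFormat, checkOutputSpecs() throws
    //    FileAlreadyExistsException when the output directory already exists,
    //    so an invalid job is rejected before any files are created.
    OutputFormat<?, ?> outputFormat =
        ReflectionUtils.newInstance(job.getOutputFormatClass(), conf);
    outputFormat.checkOutputSpecs(job);

    // 2. Only after validation passes, copy the job jar, job.xml, split data,
    //    etc. into the staging directory. copyJobFiles() is a placeholder for
    //    the submitter's real file-copying step.
    copyJobFiles(job, stagingDir);
  }

  private static void copyJobFiles(Job job, Path stagingDir) {
    // Placeholder: stands in for the real copy logic.
  }
}
{code}

The MRv2 submitter already checks the output specs before copying any files, as
noted above; the test added to TestMRJobClient.java by this change is intended
to guard that ordering against future regressions.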


        
