[ https://issues.apache.org/jira/browse/HADOOP-5977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12716507#action_12716507 ]

Vinod K V commented on HADOOP-5977:
-----------------------------------

+1, given that we are doing 2-3 round trips to the DFS in the JobInProgress 
constructor itself.

Further, the current code has a bug. The JT downloads the job files from the 
DFS onto its local file system in the JobInProgress constructor. When a job is 
rejected by the access/queue checks, the job files on the DFS are deleted, but 
the copies on the JT's local FS never are, because 
JobInProgress.garbageCollect() is never called.
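
A minimal sketch of what the failure path should do if the checks stay after 
JIP creation; apart from garbageCollect() and the constructor call, the method 
and field names here (checkQueueState, checkAccess, ugi, addJob) are 
illustrative, not necessarily the actual JobTracker internals:

    // Sketch of JobTracker.submitJob's rejection path.
    public synchronized JobStatus submitJob(JobID jobId) throws IOException {
      JobInProgress job = new JobInProgress(jobId, this, this.conf);
      try {
        checkQueueState(job);   // hypothetical: queue exists and is running
        checkAccess(job, ugi);  // hypothetical: submitter passes the queue ACL
      } catch (IOException ioe) {
        // Clean up the job files on the DFS *and* the copies the
        // constructor placed on the JobTracker's local FS.
        job.garbageCollect();
        throw ioe;
      }
      return addJob(jobId, job);
    }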

> Avoid creating JobInProgress objects before Access checks and Queues checks 
> are done in JobTracker submitJob 
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5977
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5977
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: rahul k singh
>
> In JobTracker.submitJob, a JobInProgress instance is created first, and only 
> after that are the access and queue-state checks done. If the checks fail, 
> there is no use for the JIP object; the only reason it was created was to 
> provide conf data before being deleted.
> We should fetch only the information needed for the checks instead of 
> creating a JobInProgress object.
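
The reordering the description asks for might look roughly like the sketch 
below. JobConf, its getQueueName()/getUser() accessors, and FileSystem.open() 
are real APIs; the remaining names (jobSubmitDir, fs, the check helpers) and 
the job.xml layout are assumptions for illustration only:

    // Read just the submitted job.xml from the DFS, run the queue/access
    // checks against it, and construct the (expensive) JobInProgress only
    // once the job has been accepted.
    Path jobFile = new Path(jobSubmitDir, "job.xml");   // assumed layout
    JobConf userConf = new JobConf();
    FSDataInputStream in = fs.open(jobFile);
    try {
      userConf.addResource(in);   // parse the submitted job.xml
    } finally {
      in.close();
    }
    String queue = userConf.getQueueName();
    String user = userConf.getUser();

    // Hypothetical checks; the real ones belong to QueueManager/ACLs.
    ensureQueueIsRunning(queue);
    ensureUserCanSubmit(queue, user);

    // Only now pay for the DFS round trips in the JobInProgress constructor.
    JobInProgress job = new JobInProgress(jobId, this, this.conf);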

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
