[ https://issues.apache.org/jira/browse/OOZIE-588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Virag Kothari updated OOZIE-588:
--------------------------------


For the patch committed in trunk with this JIRA ID (also available under the 
same JIRA ID in the Apache ReviewBoard system), I grant license to the ASF for 
inclusion in ASF works (as per the Apache License §5) 
                
> Oozie to allow drill down to hadoop job's details  
> ---------------------------------------------------
>
>                 Key: OOZIE-588
>                 URL: https://issues.apache.org/jira/browse/OOZIE-588
>             Project: Oozie
>          Issue Type: New Feature
>            Reporter: Mohammad Kamrul Islam
>            Assignee: Virag Kothari
>             Fix For: 3.2.0
>
>
> High-level Requirements:
> -----------------------------------
> Since Oozie is designed as the gateway to the grid, we need to support WS APIs 
> for the most common Hadoop commands through Oozie. Users don't want to go to 
> multiple systems to get the required data. Based on this, we propose to 
> implement the following requirements in Oozie.
>  
> R1: Oozie will provide WS endpoints to get Hadoop job details (including job 
> counters); an illustrative client call is sketched right after this list.
> R2: It will support both types of Hadoop jobs: MR jobs created for the MR action 
> and MR jobs created as part of a Pig script.
> R3: In addition, for the Pig action, Oozie will provide a way to query the Pig 
> stats.
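> A purely illustrative sketch of such a client call, assuming a hypothetical 
> endpoint and parameter (show=counters); the actual URL shape and parameters are 
> part of this work and are not decided here:
>
> import java.io.BufferedReader;
> import java.io.InputStreamReader;
> import java.net.HttpURLConnection;
> import java.net.URL;
>
> public class CountersClientSketch {
>     public static void main(String[] args) throws Exception {
>         // Hypothetical endpoint and action id, for illustration only.
>         URL url = new URL("http://oozie-host:11000/oozie/v1/job/"
>                 + "0000001-120101000000000-oozie-oozi-W@mr-node?show=counters");
>         HttpURLConnection conn = (HttpURLConnection) url.openConnection();
>         conn.setRequestMethod("GET");
>         BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
>         String line;
>         while ((line = in.readLine()) != null) {
>             System.out.println(line); // expected: a JSON document with the summary counters
>         }
>         in.close();
>         conn.disconnect();
>     }
> }
>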
> Proposed design:
> ----------------------
>      D1: Oozie will store the *summary* job counters/Pig stats in the Oozie DB. 
> The items in the summary stats will be determined by Oozie to limit the size. 
> However, the commonly used stats will be included in the summary. It is 
> important to note that the summary information will be collected *after* the job 
> has finished.
>      
>     D2: If the user asks for *detailed* Hadoop job stats, the user needs to 
> query using a different WS API. In this query, a user will specify *a* Hadoop 
> job id. Oozie will directly query the Hadoop JT/RM/HS. Since it is an 
> external call with undetermined response time, Oozie will accept only one 
> Hadoop job id per request to avoid timeouts in the WS call (a rough sketch of 
> this lookup follows below). Caveat: if Hadoop is down or the job is no longer 
> in the JT/RM/History Server, Oozie will fail to collect the details. 
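>
> A minimal sketch of that lookup, assuming the standard Hadoop JobClient API is 
> used to reach the JobTracker (this is only a sketch, not the committed 
> implementation):
>
> import org.apache.hadoop.mapred.Counters;
> import org.apache.hadoop.mapred.JobClient;
> import org.apache.hadoop.mapred.JobConf;
> import org.apache.hadoop.mapred.JobID;
> import org.apache.hadoop.mapred.RunningJob;
>
> public class JobCountersSketch {
>     // Returns the counters for one externally supplied Hadoop job id, or null if the
>     // JT/History Server no longer knows about the job (the caveat noted above).
>     public static Counters getCounters(JobConf conf, String hadoopJobId) throws Exception {
>         JobClient jobClient = new JobClient(conf);            // conf points at the JT
>         RunningJob job = jobClient.getJob(JobID.forName(hadoopJobId));
>         return (job == null) ? null : job.getCounters();
>     }
> }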
>     
>      D3: For Pig, Oozie will store the Pig-generated Hadoop ids in its DB and 
> will expose them to the user through the "verbose" query.
>      D4: Oozie will need to collect those summary Pig stats and the corresponding 
> job counters and store them in the Oozie DB. PigStats has a way of getting the job 
> counters for each Hadoop job that it submits. We could use that API to collect 
> summary counters for Pig-created jobs (a sketch follows this item).
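>
> A minimal sketch of that collection step, assuming the PigRunner/PigStats API 
> from Pig 0.8+; we believe JobStats.getHadoopCounters() is how the per-job 
> counters are exposed, but the exact method names should be verified against the 
> Pig version Oozie bundles:
>
> import org.apache.hadoop.mapred.Counters;
> import org.apache.pig.PigRunner;
> import org.apache.pig.tools.pigstats.JobStats;
> import org.apache.pig.tools.pigstats.PigStats;
>
> public class PigCountersSketch {
>     public static void main(String[] args) {
>         // Run the Pig script exactly as the launcher would; args are the usual pig CLI args.
>         PigStats stats = PigRunner.run(args, null);
>         // Walk every MR job Pig submitted and pull its id and counters for the summary.
>         for (JobStats js : stats.getJobGraph().getJobList()) {
>             Counters counters = js.getHadoopCounters();
>             System.out.println(js.getJobId() + " -> " + counters);
>         }
>     }
> }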
>      D5: The complete/detailed Pig stats will be stored in the Pig launcher mapper 
> as a job counter, so that if a user wants the detailed Pig stats, we could 
> get them from the launcher mapper (LM) directly.
>      
> Open questions:
> ----------------------
> * What should be in the summary counters/stats? 
> * What is the max size of stats?
> Advanced planning: <not in the scope of this task, but the design might need to 
> support it later>
> --------------------------
> * Some users are asking to query the job stats while the job is RUNNING. They 
> need this to make decisions about subsequent job submissions.
> * With the above design, a user could use D2 to get the counters while the MR 
> action is running.
> * However, for Pig it is not that straightforward, because Pig submits the 
> jobs during execution. But the new PigRunner provides a listener concept where 
> the user can get notifications, such as when a new MR job is submitted, along 
> with its id (a listener sketch follows this list).
> * By using this, Oozie could get the running Hadoop job id instantly. In the 
> future, users might want to query it using D2.
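>
> A rough sketch of that listener hook, assuming the PigProgressNotificationListener 
> interface that ships with PigRunner (callback names per Pig 0.8/0.9; they may need 
> adjusting for the bundled Pig version):
>
> import java.util.ArrayList;
> import java.util.List;
> import org.apache.pig.tools.pigstats.JobStats;
> import org.apache.pig.tools.pigstats.OutputStats;
> import org.apache.pig.tools.pigstats.PigProgressNotificationListener;
>
> // Captures each MR job id as soon as Pig submits it, so Oozie could record it while
> // the action is still RUNNING.
> public class JobIdCaptureListener implements PigProgressNotificationListener {
>     private final List<String> runningJobIds = new ArrayList<String>();
>
>     public void jobStartedNotification(String scriptId, String assignedJobId) {
>         runningJobIds.add(assignedJobId);   // Oozie would persist this id here
>     }
>
>     public List<String> getRunningJobIds() { return runningJobIds; }
>
>     // The remaining callbacks of the interface are no-ops for this sketch.
>     public void launchStartedNotification(String scriptId, int numJobsToLaunch) { }
>     public void jobsSubmittedNotification(String scriptId, int numJobsSubmitted) { }
>     public void jobFinishedNotification(String scriptId, JobStats jobStats) { }
>     public void jobFailedNotification(String scriptId, JobStats jobStats) { }
>     public void outputCompletedNotification(String scriptId, OutputStats outputStats) { }
>     public void progressUpdatedNotification(String scriptId, int progress) { }
>     public void launchCompletedNotification(String scriptId, int numJobsSucceeded) { }
> }
>
> The listener instance would then be passed as the second argument to PigRunner.run(...).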
>  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

