[ https://issues.apache.org/jira/browse/HIVE-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12805365#action_12805365 ]

Zheng Shao commented on HIVE-1100:
----------------------------------

What will be the semantics of pre-execution and post-execution hooks?

Shall we re-execute pre-execution hooks every time we resume the job?
Shall we fail the query if the post-execution hooks fail?

My preference is to treat each hook as a task in the query plan: once it 
succeeds, we don't run it again when resuming the query.
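
To make that concrete, here is a minimal, self-contained sketch of the idea 
(plain Java, not Hive code; HookTask, QueryState, and runOnce are illustrative 
names for this comment only): the hook is modeled as a plan task whose success 
is recorded, so a resumed query skips it instead of re-executing it.

import java.util.HashSet;
import java.util.Set;

public class HookResumeSketch {

    // A unit of work in the query plan; hooks are modeled the same way.
    interface HookTask {
        String id();
        void run() throws Exception;
    }

    // Completion state persisted across a resume of the same query.
    static class QueryState {
        private final Set<String> completedTaskIds = new HashSet<>();

        boolean isDone(String taskId) {
            return completedTaskIds.contains(taskId);
        }

        void markDone(String taskId) {
            completedTaskIds.add(taskId);
        }
    }

    // Runs a hook task only if it has not already succeeded in this query.
    static void runOnce(HookTask task, QueryState state) throws Exception {
        if (state.isDone(task.id())) {
            return; // already succeeded before the failure; skip on resume
        }
        task.run();
        state.markDone(task.id());
    }

    public static void main(String[] args) throws Exception {
        QueryState state = new QueryState();

        HookTask preHook = new HookTask() {
            public String id() { return "pre-exec-hook"; }
            public void run() { System.out.println("pre-execution hook ran"); }
        };

        runOnce(preHook, state);  // first attempt: hook executes
        runOnce(preHook, state);  // simulated resume: hook is skipped
    }
}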



> Make it possible for users to retry map-reduce jobs in a single Hive query
> --------------------------------------------------------------------------
>
>                 Key: HIVE-1100
>                 URL: https://issues.apache.org/jira/browse/HIVE-1100
>             Project: Hadoop Hive
>          Issue Type: New Feature
>    Affects Versions: 0.6.0
>            Reporter: Zheng Shao
>            Assignee: Zheng Shao
>
> Sometimes a single Hive query gets compiled into several map-reduce jobs, and 
> one of the jobs fails because of a transient error.
> Today the user has to restart the whole query from scratch.
> We should allow the user to resume the query from the point of failure.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.