[ https://issues.apache.org/jira/browse/HIVE-480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12718736#action_12718736 ]

Namit Jain commented on HIVE-480:
---------------------------------

The changes look good - I had a question about the usage. When would the default 
number of retries ever be greater than 1?
If a long job gets retried after running for 5 hours, it can significantly increase 
the load on the cluster.
So, if the cluster is unhealthy for some transient reason, the retry may put further 
strain on it.

That said, this is moot as long as max retries is 1, where the current behavior is 
preserved.
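
For illustration, here is a minimal sketch of the bounded-retry behavior being 
discussed. It is not the HIVE-480.1.patch itself; ExecTask, execute(), and 
RetryingTaskRunner are hypothetical stand-ins for Hive's task interface.

public class RetryingTaskRunner {

  // Hypothetical stand-in for Hive's task interface.
  public interface ExecTask {
    int execute() throws Exception;   // returns 0 on success, non-zero on failure
  }

  // Re-submit the task up to maxRetries times; with maxRetries == 1 the
  // current single-attempt behavior is preserved.
  public int runWithRetries(ExecTask task, int maxRetries) throws Exception {
    int rc = -1;
    for (int attempt = 1; attempt <= maxRetries; attempt++) {
      rc = task.execute();            // submit the map-reduce job
      if (rc == 0) {
        return 0;                     // success, no retry needed
      }
      // A failed long-running job is re-submitted here; as noted above,
      // this adds load to a cluster that may already be unhealthy.
    }
    return rc;                        // give up after maxRetries attempts
  }
}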


> allow option to retry map-reduce tasks
> --------------------------------------
>
>                 Key: HIVE-480
>                 URL: https://issues.apache.org/jira/browse/HIVE-480
>             Project: Hadoop Hive
>          Issue Type: New Feature
>          Components: Query Processor
>            Reporter: Joydeep Sen Sarma
>         Attachments: HIVE-480.1.patch
>
>
> for long-running queries with multiple map-reduce jobs - this should help in 
> dealing with any transient cluster failures without having to re-run all 
> the tasks.
> ideally - the entire plan can be serialized out and the actual process of 
> executing the workflow can be left to a pluggable workflow execution engine 
> (since this is a problem that has been solved many times already).
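
For the idea in the issue description above, a hedged illustration of serializing 
the plan and delegating execution to a pluggable workflow engine. QueryPlan, 
WorkflowEngine, and PluggableExecution are hypothetical, not real Hive classes.

import java.io.Serializable;
import java.util.List;

public class PluggableExecution {

  // Hypothetical serialized plan: an ordered list of per-task plan files.
  public static class QueryPlan implements Serializable {
    public List<String> taskPlanFiles;
  }

  // Hypothetical engine interface; an external workflow system would
  // implement it and own retries, restarts, and bookkeeping.
  public interface WorkflowEngine {
    void submit(QueryPlan plan) throws Exception;
  }

  // Hive would only build and serialize the plan; executing it (including
  // retrying transiently failed map-reduce jobs) is left to the engine.
  public static void run(QueryPlan plan, WorkflowEngine engine) throws Exception {
    engine.submit(plan);
  }
}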

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
