[
https://issues.apache.org/jira/browse/PIG-1249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12983462#action_12983462
]
Anup commented on PIG-1249:
---------------------------
One thing we didn't take care of is the use of the Hadoop parameter
"mapred.reduce.tasks".
If I specify -Dmapred.reduce.tasks=450 for all the MR jobs, it is
overwritten by estimateNumberOfReducers(conf, mro), which in my case
yields 15.
I am not specifying default_parallel or any PARALLEL statements.
Ideally, the number of reducers should be 450.
I think we should prioritize this parameter above the reducer
estimation; a sketch of the proposed ordering follows the list below.
The priority list should be:
1. PARALLEL statement
2. default_parallel statement
3. mapred.reduce.tasks Hadoop parameter
4. estimateNumberOfReducers()
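To make the intended precedence concrete, here is a minimal sketch in Java. This is not Pig's actual implementation: chooseReducers, requestedParallelism and defaultParallel are hypothetical names standing in for the PARALLEL statement and the default_parallel value, and only estimateNumberOfReducers(conf, mro) is taken from the behaviour described above.
{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper, not Pig's code: sketches the proposed precedence.
// Pass -1 for requestedParallelism / defaultParallel when the script did
// not set a PARALLEL statement / default_parallel.
static int chooseReducers(int requestedParallelism, int defaultParallel,
                          Configuration conf, MapReduceOper mro) {
    if (requestedParallelism > 0) {
        return requestedParallelism;             // 1. PARALLEL always wins
    }
    if (defaultParallel > 0) {
        return defaultParallel;                  // 2. then default_parallel
    }
    int fromConf = conf.getInt("mapred.reduce.tasks", -1);
    if (fromConf > 0) {
        return fromConf;                         // 3. then -Dmapred.reduce.tasks
    }
    return estimateNumberOfReducers(conf, mro);  // 4. estimate as last resort
}
{code}
One wrinkle with step 3: Hadoop ships mapred.reduce.tasks with a default of 1 in mapred-default.xml, so the check would need to tell an explicitly passed value apart from that shipped default, otherwise the estimator would never run.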
> Safe-guards against misconfigured Pig scripts without PARALLEL keyword
> ----------------------------------------------------------------------
>
> Key: PIG-1249
> URL: https://issues.apache.org/jira/browse/PIG-1249
> Project: Pig
> Issue Type: Improvement
> Affects Versions: 0.8.0
> Reporter: Arun C Murthy
> Assignee: Jeff Zhang
> Priority: Critical
> Fix For: 0.8.0
>
> Attachments: PIG-1249-4.patch, PIG-1249.patch, PIG-1249_5.patch,
> PIG_1249_2.patch, PIG_1249_3.patch
>
>
> It would be *very* useful for Pig to have safe-guards against naive scripts
> which process a *lot* of data without the use of the PARALLEL keyword.
> We've seen a fair number of instances where naive users process huge
> data-sets (>10TB) with a badly mis-configured number of reduces, e.g. 1 reduce.