[
https://issues.apache.org/jira/browse/HADOOP-3956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12622777#action_12622777
]
Suhas Gogate commented on HADOOP-3956:
--------------------------------------
1. I have an agent written in Perl, which I call the "Hadoop Performance
Adviser". It provides an extensible framework for evaluating the performance of
a map/reduce job. It generates a report indicating potential problems affecting
job performance and, where applicable, advice on corrective actions to rectify
them.
2. The framework is extensible in the following sense (a rough sketch follows this list):
-- It allows adding new entries to a pre-defined list of performance and
cluster-utilization hints, which the framework evaluates against job execution
counters and configuration parameters parsed from the job log files.
-- The hint subroutines are also written in such a fashion that more
complex hints can be built as boolean expressions over the existing set of
hints.
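To make the hint mechanism concrete, here is a minimal sketch of how such hint
subroutines and their boolean composition might look. This is not the actual
Adviser code; the counter names, configuration keys, and thresholds are
illustrative assumptions only.

#!/usr/bin/env perl
# Hypothetical sketch only -- not the actual Hadoop Performance Adviser code.
# Counter names, config keys, and thresholds below are illustrative assumptions.
use strict;
use warnings;

# Job execution counters and configuration parameters, as they might be
# parsed out of a job's history/log files.
my %job = (
    'mapred.reduce.tasks' => 1,
    MAP_OUTPUT_RECORDS    => 50_000_000,
    SPILLED_RECORDS       => 150_000_000,
    FAILED_MAPS           => 12,
    TOTAL_MAPS            => 400,
);

# Pre-defined list of hints: each entry pairs a check subroutine with the
# advice reported when the check fires.  Adding a new hint is just pushing
# another entry onto this list.
my @hints = (
    {
        name   => 'single-reducer',
        check  => sub { $_[0]->{'mapred.reduce.tasks'} == 1 },
        advice => 'Only one reduce task is configured; consider raising mapred.reduce.tasks.',
    },
    {
        name   => 'excessive-spills',
        check  => sub { $_[0]->{SPILLED_RECORDS} > 2 * $_[0]->{MAP_OUTPUT_RECORDS} },
        advice => 'Map side is spilling heavily; consider a larger sort buffer (io.sort.mb).',
    },
    {
        name   => 'many-failed-maps',
        check  => sub { $_[0]->{FAILED_MAPS} > 0.02 * $_[0]->{TOTAL_MAPS} },
        advice => 'More than 2% of map tasks failed; check task logs and black-listed trackers.',
    },
);

# More complex hints are boolean expressions over the existing ones.
my %by_name = map { $_->{name} => $_ } @hints;
push @hints, {
    name   => 'spills-plus-failures',
    check  => sub {
        $by_name{'excessive-spills'}{check}->($_[0])
            && $by_name{'many-failed-maps'}{check}->($_[0]);
    },
    advice => 'Heavy spilling combined with task failures; the job may be memory-starved.',
};

# Evaluate every hint against the parsed job data and emit the report.
for my $hint (@hints) {
    printf "[%s] %s\n", $hint->{name}, $hint->{advice}
        if $hint->{check}->(\%job);
}

Keeping each hint as an independent, named check subroutine is what makes the
boolean composition in the last hint cheap to add.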
I agree that there is a lot of potential in such tools to help users (as
well as grid service providers) get more targeted advice on job
efficiency.
> map-reduce doctor (Mr Doctor)
> -----------------------------
>
> Key: HADOOP-3956
> URL: https://issues.apache.org/jira/browse/HADOOP-3956
> Project: Hadoop Core
> Issue Type: New Feature
> Reporter: Amir Youssefi
>
> Problem Description:
> Users typically submit jobs with sub-optimal parameters, resulting in
> under-utilization, black-listed task-trackers, time-outs, re-tries, etc.
> The issue can be mitigated by submitting the job with custom Hadoop parameters.