[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16326703#comment-16326703
 ] 

Johan Gustavsson commented on MAPREDUCE-7022:
---------------------------------------------

Thanks for the detailed feedback, [~jlowe].

I believe I have addressed all the points you brought up by:
 # Merging fatalError and fatalErrorFailFast into a single fatalError with an 
added boolean parameter that determines fail-fast behavior (sketched below).
 # Changing all instances of "job" to "task", except in mapred-default.xml.
 # Updating TestTaskImpl to send the proper event type for all 
T_ATTEMPT_FAILED events.
 # Rooting all configs under mapreduce.job.local-fs.single-disk-limit 
(also sketched below).
 # Removing stray comments and adding consistent logging around the disk 
monitor; since we are using slf4j, which has no FATAL level, it is logged 
at ERROR.
 # Additionally, adding the missing Apache license headers to the 2 new 
classes in this patch.
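
For reference, a minimal sketch of points 1 and 4; the parameter name and 
the sub-keys under the root are shorthand for this discussion, not 
necessarily the exact names in the patch:

{code:java}
// Point 1: fatalError and fatalErrorFailFast merged into one umbilical
// call; the boolean decides whether to fail the job fast instead of
// retrying the attempt (parameter name is illustrative):
//
//   void fatalError(TaskAttemptID taskId, String msg, boolean failFast)
//       throws IOException;

// Point 4: every limit-related key hangs off one root; the sub-keys below
// are hypothetical examples of that rooting.
public final class DiskLimitKeys {
  public static final String ROOT =
      "mapreduce.job.local-fs.single-disk-limit";
  public static final String LIMIT_BYTES = ROOT + ".bytes";
  public static final String CHECK_INTERVAL_MS = ROOT + ".check.interval-ms";
}
{code}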

As for the failed tests below:
 * org.apache.hadoop.mapreduce.v2.TestUberAM is failing on trunk too.
 * org.apache.hadoop.mapred.TestTaskProgressReporter runs without issue when 
I rerun it locally against the latest trunk.

> Fast fail rogue jobs based on task scratch dir size
> ---------------------------------------------------
>
>                 Key: MAPREDUCE-7022
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7022
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: task
>    Affects Versions: 2.7.0, 2.8.0, 2.9.0
>            Reporter: Johan Gustavsson
>            Assignee: Johan Gustavsson
>            Priority: Major
>         Attachments: MAPREDUCE-7022.001.patch, MAPREDUCE-7022.002.patch, 
> MAPREDUCE-7022.003.patch, MAPREDUCE-7022.004.patch, MAPREDUCE-7022.005.patch, 
> MAPREDUCE-7022.006.patch, MAPREDUCE-7022.007.patch, MAPREDUCE-7022.008.patch
>
>
> With the introduction of MAPREDUCE-6489 there are some options to kill rogue 
> tasks based on writes to local disk. In our environment, where we mainly run 
> Hive-based jobs, we noticed that this counter and the size of the local 
> scratch dirs were very different. We had tasks where the BYTES_WRITTEN 
> counter was at 300 GB and others where it was at 10 TB, both producing 
> around 200 GB on local disk, so it didn't help us much. To extend this 
> feature, tasks should monitor local scratch dir size and fail if they pass 
> the limit. In these cases the tasks should not be retried either; instead 
> the job should fail fast.
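
To illustrate the kind of check the description above asks for, a minimal 
sketch; the class and method names are placeholders, not the actual classes 
in the patch:

{code:java}
import java.io.File;

// Illustrative only: sums the on-disk size of a task's local scratch dir
// and reports when a configured byte limit is exceeded.
public class ScratchDirSizeCheck {
  // e.g. a limit read from mapreduce.job.local-fs.single-disk-limit.bytes
  // (hypothetical sub-key); <= 0 disables the check
  private final long limitBytes;

  public ScratchDirSizeCheck(long limitBytes) {
    this.limitBytes = limitBytes;
  }

  // Recursively sum file sizes under dir.
  static long du(File dir) {
    long total = 0;
    File[] entries = dir.listFiles();
    if (entries == null) {
      return 0; // not a directory or unreadable
    }
    for (File f : entries) {
      total += f.isDirectory() ? du(f) : f.length();
    }
    return total;
  }

  // True when the task should be failed fast (no retry), per the
  // description above.
  boolean limitExceeded(File scratchDir) {
    return limitBytes > 0 && du(scratchDir) > limitBytes;
  }
}
{code}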



