[ https://issues.apache.org/jira/browse/MAPREDUCE-7457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mudit Sharma updated MAPREDUCE-7457:
------------------------------------
    Priority: Critical  (was: Major)

> Limit number of spill files getting created
> --------------------------------------------
>
>                 Key: MAPREDUCE-7457
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7457
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>            Reporter: Mudit Sharma
>            Priority: Critical
>              Labels: pull-request-available
>
> Hi,
>
> We have been facing issues where many of our cluster nodes' disks fill up
> because rogue applications create a large amount of spill data.
> We would like to fail the application if more than a threshold number of
> spill files is written.
> Please let us know whether any such capability is already supported.
>
> If it is not, we propose supporting it via a config; we have opened a PR for
> this: [https://github.com/apache/hadoop/pull/6155]. Please let us know your
> thoughts on it.
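For illustration only, here is a minimal sketch of how a spill-file cap could be enforced in the map-side spill path. The property name mapreduce.task.spill.files.count.limit and the class/method names below are assumptions made for this sketch, not the actual change; the real property and enforcement logic are defined in the linked PR.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

/**
 * Sketch (not the PR's code): fail a map task once it has created more
 * spill files than a configured limit, instead of filling the local disk.
 */
public class SpillFileLimitSketch {
  // Hypothetical key; -1 (the default) means "no limit".
  static final String SPILL_FILES_LIMIT_KEY = "mapreduce.task.spill.files.count.limit";
  static final int SPILL_FILES_LIMIT_DEFAULT = -1;

  private final int spillFilesLimit;
  private int numSpills = 0;

  public SpillFileLimitSketch(Configuration conf) {
    this.spillFilesLimit = conf.getInt(SPILL_FILES_LIMIT_KEY, SPILL_FILES_LIMIT_DEFAULT);
  }

  /** Called just before the in-memory buffer is spilled to a new file on disk. */
  public void checkSpillLimit() throws IOException {
    if (spillFilesLimit > 0 && numSpills >= spillFilesLimit) {
      // Throwing here fails the task attempt, which surfaces the rogue job
      // before the node's local disk goes full.
      throw new IOException("Number of spill files " + numSpills
          + " reached the configured limit of " + spillFilesLimit);
    }
    numSpills++;
    // ... proceed to write the next spill file to local disk ...
  }
}
{code}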