[ http://issues.apache.org/jira/browse/HADOOP-313?page=comments#action_12417837 ]
Owen O'Malley commented on HADOOP-313:
--------------------------------------

An option to keep failed reduces would be useful, but I've certainly seen cases before where I wanted to run a particular fragment in the debugger, even if it didn't crash.

> running a reduce task standalone
> --------------------------------
>
>          Key: HADOOP-313
>          URL: http://issues.apache.org/jira/browse/HADOOP-313
>      Project: Hadoop
>         Type: Bug
>     Reporter: Michel Tourn
>     Assignee: Michel Tourn
> Attachments: sareduce.patch
>
> This is a tool to reproduce problems and to run unit tests involving a reduce task.
> You just give it a reduce directory on the command line.
>
> Usage: java org.apache.hadoop.mapred.StandaloneReduceTask <taskdir> [<limitmaps>]
>
> The taskdir name encodes: task_<jobid>_r_<partition>_<attempt>
> The taskdir contains job.xml and one or more input files named: map_<dddd>.out
> You should run with the same -Xmx option as the TaskTracker child JVM.
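The usage notes above amount to decoding the taskdir name and enumerating its contents before driving the reduce. A minimal sketch of that bookkeeping is below; it is not the attached sareduce.patch, and the class and variable names are illustrative assumptions only -- the real runner would go on to build a JobConf from job.xml and run the reduce itself.

    // Hypothetical sketch, not the attached patch: decode the taskdir layout
    // described in the issue (task_<jobid>_r_<partition>_<attempt>, job.xml,
    // map_<dddd>.out inputs) and report what a standalone reduce run would use.
    import java.io.File;
    import java.io.FilenameFilter;
    import java.util.Arrays;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class StandaloneReduceSketch {

      // taskdir name encodes: task_<jobid>_r_<partition>_<attempt>
      private static final Pattern TASK_DIR =
          Pattern.compile("task_(\\w+)_r_(\\d+)_(\\d+)");

      public static void main(String[] args) {
        if (args.length < 1) {
          System.err.println("Usage: java StandaloneReduceSketch <taskdir> [<limitmaps>]");
          System.exit(1);
        }
        File taskDir = new File(args[0]);
        int limitMaps = (args.length > 1) ? Integer.parseInt(args[1]) : Integer.MAX_VALUE;

        Matcher m = TASK_DIR.matcher(taskDir.getName());
        if (!m.matches()) {
          throw new IllegalArgumentException(
              "taskdir must be named task_<jobid>_r_<partition>_<attempt>");
        }
        String jobId = m.group(1);
        int partition = Integer.parseInt(m.group(2));

        // The taskdir is expected to contain job.xml plus map_<dddd>.out input files.
        File jobXml = new File(taskDir, "job.xml");
        File[] mapOutputs = taskDir.listFiles(new FilenameFilter() {
          public boolean accept(File dir, String name) {
            return name.matches("map_\\d+\\.out");
          }
        });
        if (mapOutputs == null || !jobXml.exists()) {
          throw new IllegalArgumentException("taskdir must contain job.xml and map_<dddd>.out files");
        }
        Arrays.sort(mapOutputs);
        int nMaps = Math.min(mapOutputs.length, limitMaps);

        System.out.println("job=" + jobId + " partition=" + partition
            + " conf=" + jobXml + " maps=" + nMaps);
        // A real runner would now load job.xml into the job configuration and feed
        // the first nMaps map outputs to the reduce; that part depends on the patch.
      }
    }

As the issue notes, any such runner should be started with the same -Xmx setting the TaskTracker child JVM uses, so the standalone run reproduces the original memory conditions.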
