[
https://issues.apache.org/jira/browse/HADOOP-8904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479073#comment-13479073
]
Tom White commented on HADOOP-8904:
-----------------------------------
This change makes the behaviour the same as the old API, where the Mapper's
close() method is called in a finally block - which is a good thing. Will it
cause incompatibilities with existing code, i.e. any code that assumes cleanup()
won't be called if map() throws an exception?
We should at least mark this as an incompatible change with a note saying that
you need to override the Mapper's (or Reducer's) run() method to restore the
old behaviour.
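For anyone who does depend on the old semantics, overriding run() is straightforward. A minimal sketch, assuming a hypothetical MyMapper and arbitrary key/value types (not taken from the patch) - the loop mirrors the pre-patch run(), where cleanup() is only reached if map() completes normally:

{code:java}
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper restoring the pre-patch behaviour: cleanup() is only
// reached when the record loop finishes without throwing.
public class MyMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

  @Override
  public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    while (context.nextKeyValue()) {
      map(context.getCurrentKey(), context.getCurrentValue(), context);
    }
    cleanup(context);   // skipped if map() throws - the old behaviour
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // application logic here
    context.write(value, key);
  }
}
{code}

The same override would apply to a Reducer subclass, since Reducer.run() has the same structure.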
> Hadoop does not close output file / does not call Mapper.cleanup if
> exception in map
> -------------------------------------------------------------------------------------
>
> Key: HADOOP-8904
> URL: https://issues.apache.org/jira/browse/HADOOP-8904
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 1-win
> Reporter: Daniel Dai
> Assignee: Daniel Dai
> Attachments: HADOOP-23-2.patch, HADOOP-8904-1.patch
>
>
> Found this in the Pig unit test TestStore on Windows. There are dangling files
> because the map task does not close its output file when an exception happens
> in map(). On Windows, Hadoop cannot remove a file that has not been closed.
> The same happens in reduce().
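For reference, the general technique under discussion for the quoted problem is to wrap the record loop in try/finally so that cleanup() - and whatever output-stream closing it performs - always runs. A minimal sketch (the class name and key/value types are placeholders; this is not the attached patch):

{code:java}
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper illustrating the try/finally pattern: cleanup() runs
// even when map() throws, so open output files are not left dangling.
public class CleanupAlwaysMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

  @Override
  public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    try {
      while (context.nextKeyValue()) {
        map(context.getCurrentKey(), context.getCurrentValue(), context);
      }
    } finally {
      cleanup(context);  // always reached, even if map() throws
    }
  }
}
{code}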
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira