[
https://issues.apache.org/jira/browse/HIVE-1492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12893782#action_12893782
]
He Yongqiang commented on HIVE-1492:
------------------------------------
The assumption of MapReduce is that, given the same input and the same
map/reduce function, the output should always be the same.
Otherwise the MapReduce fault-tolerance mechanism (re-running failed or
speculative attempts) would produce incorrect results.
> FileSinkOperator should remove duplicated files from the same task based on
> file sizes
> --------------------------------------------------------------------------------------
>
> Key: HIVE-1492
> URL: https://issues.apache.org/jira/browse/HIVE-1492
> Project: Hadoop Hive
> Issue Type: Bug
> Affects Versions: 0.7.0
> Reporter: Ning Zhang
> Assignee: Ning Zhang
> Fix For: 0.7.0
>
> Attachments: HIVE-1492.patch, HIVE-1492_branch-0.6.patch
>
>
> FileSinkOperator.jobClose() calls Utilities.removeTempOrDuplicateFiles() to
> retain only one file for each task. A task could produce multiple files due
> to failed attempts or speculative runs. The largest file should be retained
> rather than the first file for each task.
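The described fix amounts to grouping a task's output files by task id and keeping only the largest one, deleting the rest. A minimal standalone sketch of that selection logic (the `TaskFile` record and `filesToRemove` helper are hypothetical illustrations, not Hive's actual `Utilities.removeTempOrDuplicateFiles` code):

```java
import java.util.*;

public class DedupBySize {
    // Hypothetical record of one task-output file: owning task id, name, size in bytes.
    static final class TaskFile {
        final String taskId; final String name; final long size;
        TaskFile(String taskId, String name, long size) {
            this.taskId = taskId; this.name = name; this.size = size;
        }
    }

    // For each task id, retain only the largest file (assumed to be the one
    // complete attempt); return the names of duplicates that should be deleted.
    static List<String> filesToRemove(List<TaskFile> files) {
        Map<String, TaskFile> largest = new HashMap<>();
        List<String> remove = new ArrayList<>();
        for (TaskFile f : files) {
            TaskFile cur = largest.get(f.taskId);
            if (cur == null) {
                largest.put(f.taskId, f);        // first file seen for this task
            } else if (f.size > cur.size) {
                remove.add(cur.name);            // displace the smaller earlier file
                largest.put(f.taskId, f);
            } else {
                remove.add(f.name);              // duplicate from a failed/speculative attempt
            }
        }
        return remove;
    }
}
```

Note the contrast with the pre-patch behavior, which kept whichever file was listed first; a failed attempt can leave a shorter, truncated file, so size is used as the tie-breaker between attempts of the same task.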
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.