[
https://issues.apache.org/jira/browse/FLINK-11733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791513#comment-16791513
]
vinoyang commented on FLINK-11733:
----------------------------------
Hi [~fhueske] sorry, I did not provide enough description of this issue. I have
added more information.
About your last question: yes, your idea could also achieve this. However, that
means users would need to extend a subclass of Mapper, right? Here, I think a
better experience is to let the old program (a MapReduce program written against
Hadoop's new MapReduce API) run with as few changes as possible. What do you
think? If we use reflection, we only need to look the method up once.
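To make the reflection idea concrete, here is a rough sketch (not an actual patch for this ticket) of how a wrapper for the new-API Mapper might cache the protected {{map()}} method once per task instead of per record. The class name {{NewApiHadoopMapFunction}} and the proxy-based {{MapContext}} are placeholders I made up for illustration; real context plumbing (configuration, counters, setup/cleanup) is omitted.
{code:java}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

import org.apache.hadoop.mapreduce.MapContext;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.map.WrappedMapper;

/**
 * Hypothetical sketch: wraps an org.apache.hadoop.mapreduce.Mapper so the user
 * does not have to extend a dedicated subclass. The protected map() method is
 * resolved via reflection exactly once in open().
 */
public class NewApiHadoopMapFunction<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
        extends RichFlatMapFunction<Tuple2<KEYIN, VALUEIN>, Tuple2<KEYOUT, VALUEOUT>> {

    private final Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> mapper;

    private transient Method mapMethod;
    private transient Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context context;
    private transient Collector<Tuple2<KEYOUT, VALUEOUT>> out;

    public NewApiHadoopMapFunction(Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> mapper) {
        this.mapper = mapper;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        // Reflection lookup happens once per task, not per record.
        mapMethod = Mapper.class.getDeclaredMethod(
                "map", Object.class, Object.class, Mapper.Context.class);
        mapMethod.setAccessible(true);

        // Minimal MapContext proxy: only write(k, v) is forwarded to the Flink
        // collector; everything else returns null. A real implementation would
        // back this with proper configuration, counters, etc.
        @SuppressWarnings("unchecked")
        MapContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT> mapContext =
                (MapContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT>) Proxy.newProxyInstance(
                        getClass().getClassLoader(),
                        new Class<?>[]{MapContext.class},
                        new InvocationHandler() {
                            @Override
                            @SuppressWarnings("unchecked")
                            public Object invoke(Object proxy, Method method, Object[] args) {
                                if ("write".equals(method.getName()) && args != null && args.length == 2) {
                                    out.collect(new Tuple2<>((KEYOUT) args[0], (VALUEOUT) args[1]));
                                }
                                return null;
                            }
                        });

        context = new WrappedMapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>().getMapContext(mapContext);
    }

    @Override
    public void flatMap(Tuple2<KEYIN, VALUEIN> value, Collector<Tuple2<KEYOUT, VALUEOUT>> collector)
            throws Exception {
        this.out = collector;
        // Reflection dispatches to the user's override of the protected map().
        mapMethod.invoke(mapper, value.f0, value.f1, context);
    }
}
{code}
With something like this, an existing Mapper subclass could be passed in unchanged, which is the "least changes" experience I meant above.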
> Provide HadoopMapFunction for org.apache.hadoop.mapreduce.Mapper
> ----------------------------------------------------------------
>
> Key: FLINK-11733
> URL: https://issues.apache.org/jira/browse/FLINK-11733
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / Hadoop Compatibility
> Reporter: vinoyang
> Assignee: vinoyang
> Priority: Major
>
> Currently, Flink only supports
> {{org.apache.flink.hadoopcompatibility.mapred.Mapper}} in the
> flink-hadoop-compatibility module. I think we also need to support the new Hadoop
> Mapper API: {{org.apache.hadoop.mapreduce.Mapper}}.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)