[ https://issues.apache.org/jira/browse/FLINK-1076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Artem Tsikiridis updated FLINK-1076:
------------------------------------

    Description: While the Flink wrappers for Hadoop Map and Reduce tasks are 
implemented in https://github.com/apache/incubator-flink/pull/37, it is 
currently not possible to use the {{HadoopMapFunction}} and the 
{{HadoopReduceFunction}} without a {{JobConf}}. It would be useful if we could 
specify a Hadoop Mapper, Reducer (or Combiner) and use them as separate 
components in a Flink job.  (was: While the Flink wrappers for Hadoop Map and 
Reduce tasks are implemented in 
https://github.com/apache/incubator-flink/pull/37, it is currently not possible 
to use the {{ HadoopMapFunction }} and the {{ HadoopReduceFunction }} without a 
{{ JobConf }}. It would be useful if we could specify a Hadoop Mapper, Reducer 
(or Combiner) and use them as separate components in a Flink job.)

>  Support function-level compatibility for Hadoop's wrapper functions
> ----------------------------------------------------------------------
>
>                 Key: FLINK-1076
>                 URL: https://issues.apache.org/jira/browse/FLINK-1076
>             Project: Flink
>          Issue Type: New Feature
>          Components: Hadoop Compatibility
>    Affects Versions: 0.7-incubating
>            Reporter: Artem Tsikiridis
>              Labels: features
>
> While the Flink wrappers for Hadoop Map and Reduce tasks are implemented in 
> https://github.com/apache/incubator-flink/pull/37, it is currently not 
> possible to use the {{HadoopMapFunction}} and the {{HadoopReduceFunction}} 
> without a {{JobConf}}. It would be useful if we could specify a Hadoop 
> Mapper, Reducer (or Combiner) and use them as separate components in a Flink 
> job.



--
This message was sent by Atlassian JIRA
(v6.2#6252)