[ https://issues.apache.org/jira/browse/HADOOP-3149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12584312#action_12584312 ]

Owen O'Malley commented on HADOOP-3149:
---------------------------------------

Looking over your and Runping's patches, I'd suggest defining a subclass that 
looks like:

{code}
package org.apache.hadoop.mapred.lib;

public class KeyValue<K,V> {
  private K key;
  private V value;
  public KeyValue();
  public KeyValue(K key, V value);
  public K getKey();
  public V getValue();
  public void setKey(K k);
  public void setValue(V v);
}

public class MultipleOutputStreams extends MultipleOutputFormat {
  // modify the job conf to control how a given stream is formatted;
  // should be called once for each stream kind
  public static void addOutputStream(JobConf conf,
                                     String kind,
                                     Class<? extends OutputFormat> outFormat,
                                     Class<?> keyClass,
                                     Class<?> valueClass);
}
{code}
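One way {{addOutputStream}} could record the per-stream settings is as namespaced conf properties keyed by stream kind. A rough sketch of that idea, using {{java.util.Properties}} in place of {{JobConf}} and hypothetical property names ({{mos.<kind>.format}} etc.), not the actual Hadoop API:

```java
import java.util.Properties;

public class StreamConfSketch {
    // Hypothetical property-name scheme; the real patch may store
    // the per-stream format/key/value classes differently.
    static void addOutputStream(Properties conf, String kind,
                                String formatClass,
                                String keyClass, String valueClass) {
        conf.setProperty("mos." + kind + ".format", formatClass);
        conf.setProperty("mos." + kind + ".key", keyClass);
        conf.setProperty("mos." + kind + ".value", valueClass);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        addOutputStream(conf, "foo",
                "SequenceFileOutputFormat", "Text", "IntWritable");
        // at output time, the format for stream "foo" is looked up by kind
        System.out.println(conf.getProperty("mos.foo.format"));
        // prints "SequenceFileOutputFormat"
    }
}
```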

So client code would look like:
{code}
In launcher:
  MultipleOutputStreams.addOutputStream(job, "foo",
      SequenceFileOutputFormat.class, Text.class, IntWritable.class);
  MultipleOutputStreams.addOutputStream(job, "bar",
      TextOutputFormat.class, Text.class, Text.class);

In reducer:
  out.collect("foo", new KeyValue(new Text("hi"), new IntWritable(12)));
  out.collect("bar", new KeyValue(k2, v2));
{code}
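For what it's worth, the {{KeyValue}} holder is trivial to fill in. A minimal sketch matching the declaration above, with plain generics and the Writable/serialization plumbing omitted:

```java
// Minimal sketch of the KeyValue holder declared above;
// Writable/serialization support is deliberately left out.
public class KeyValue<K, V> {
    private K key;
    private V value;

    public KeyValue() { }

    public KeyValue(K key, V value) {
        this.key = key;
        this.value = value;
    }

    public K getKey() { return key; }
    public V getValue() { return value; }
    public void setKey(K k) { this.key = k; }
    public void setValue(V v) { this.value = v; }
}
```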

> supporting multiple outputs for M/R jobs
> ----------------------------------------
>
>                 Key: HADOOP-3149
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3149
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>         Environment: all
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>             Fix For: 0.17.0
>
>         Attachments: patch3149.txt
>
>
> The OutputCollector supports writing data to a single output: the 'part' 
> files in the output path.
> We found it quite common that our M/R jobs have to write data to different 
> outputs, for example when classifying data as NEW, UPDATE, DELETE or 
> NO-CHANGE to later do different processing on it.
> Handling the initialization of additional outputs from within the M/R code 
> complicates the code and is counterintuitive given the notion of job 
> configuration.
> It would be desirable to:
> # Configure the additional outputs in the jobconf, potentially specifying 
> different outputformats, key and value classes for each one.
> # Write to the additional outputs in a similar way as data is written to the 
> outputcollector.
> # Support the speculative execution semantics for the output files, only 
> visible in the final output for promoted tasks.
> To support multiple outputs the following classes would be added to 
> mapred/lib:
> * {{MOJobConf}} : extends {{JobConf}} adding methods to define named outputs 
> (name, outputformat, key class, value class)
> * {{MOOutputCollector}} : extends {{OutputCollector}} adding a 
> {{collect(String outputName, WritableComparable key, Writable value)}} method.
> * {{MOMapper}} and {{MOReducer}}: implement {{Mapper}} and {{Reducer}} adding 
> new {{configure}}, {{map}} and {{reduce}} signatures that take the 
> corresponding {{MO}} classes and perform the proper initialization.
> The data flow behavior would be: key/values written to the default (unnamed) 
> output (using the original {{OutputCollector}} {{collect}} signature) take part 
> in the shuffle/sort/reduce processing phases; key/values written to a named 
> output from within a map don't.
> The named output files would be named using the task type and task ID to 
> avoid collision among tasks (i.e. 'new-m-00002' and 'new-r-00001').
> Together with the setInputPathFilter feature introduced by HADOOP-2055 it 
> would become very easy to chain jobs working on particular named outputs 
> within a single directory.
> We are using this pattern heavily and it has greatly simplified our M/R code 
> as well as the chaining of different M/R jobs.
> We wanted to contribute this back to Hadoop as we think it is a generic 
> feature many could benefit from.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
