[ https://issues.apache.org/jira/browse/HADOOP-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12536048 ]

Doug Cutting commented on HADOOP-1986:
--------------------------------------

I think I'd opt for another level of indirection, for two reasons:
- to bind together serializers and deserializers, which are almost always paired;
- to permit generation of more specialized serializers and deserializers.

E.g.,

{noformat}
<property>
  <name>io.serialization</name>
  <value>WritableSerialization</value>
</property>

public interface Serialization {
  Serializer getSerializer();
  Deserializer getDeserializer();
}

public class SerializationFactory {
  public Serializer getSerializer(Class c) {
    return getSerialization(c).getSerializer();
  }
  public Deserializer getDeserializer(Class c) {
    return getSerialization(c).getDeserializer();
  }
  public Serialization getSerialization(Class c) {
    // ... infer from c's superclasses & interfaces ...
  }
}
{noformat}
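
For concreteness, here is a minimal sketch of how a WritableSerialization might plug into this scheme, assuming the Serialization interface from the snippet above. The Serializer/Deserializer method signatures below are assumptions (they are not spelled out here), and the stream-passing style is just one possible shape, not the committed design:

{noformat}
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

// Assumed shapes for the two halves of a Serialization; the real
// signatures would be settled by the patch, not by this sketch.
interface Serializer {
  void serialize(Object o, DataOutputStream out) throws IOException;
}

interface Deserializer {
  Object deserialize(Object reuse, DataInputStream in) throws IOException;
}

// One way WritableSerialization could pair the two directions.
public class WritableSerialization implements Serialization {
  public Serializer getSerializer() {
    return new Serializer() {
      public void serialize(Object o, DataOutputStream out) throws IOException {
        ((Writable) o).write(out);      // a Writable knows how to write itself
      }
    };
  }
  public Deserializer getDeserializer() {
    return new Deserializer() {
      public Object deserialize(Object reuse, DataInputStream in) throws IOException {
        Writable w = (Writable) reuse;  // caller supplies an instance to fill
        w.readFields(in);
        return w;
      }
    };
  }
}
{noformat}

SerializationFactory.getSerialization(Class c) could then pick this implementation whenever Writable.class.isAssignableFrom(c), with other Serializations (e.g. one wrapping Thrift) registered through io.serialization.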


> Add support for a general serialization mechanism for Map Reduce
> ----------------------------------------------------------------
>
>                 Key: HADOOP-1986
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1986
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: mapred
>            Reporter: Tom White
>            Assignee: Tom White
>             Fix For: 0.16.0
>
>         Attachments: SerializableWritable.java, serializer-v1.patch
>
>
> Currently Map Reduce programs have to use WritableComparable-Writable 
> key-value pairs. While it's possible to write Writable wrappers for other 
> serialization frameworks (such as Thrift), this is not very convenient: it 
> would be nicer to be able to use arbitrary types directly, without explicit 
> wrapping and unwrapping.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
