[ https://issues.apache.org/jira/browse/HADOOP-3788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12630531#action_12630531 ]

Tom White commented on HADOOP-3788:
-----------------------------------

bq. But reading until EOF seems a poor way of communicating this. Perhaps we 
could change the deserialize API to be optionally passed a length or somesuch.

+1

How about adding a method to Deserializer (an interface, sigh), which is called 
when the length of the serialized object is known:
{noformat}
T deserialize(T t, int length) throws IOException;
{noformat}

This still needs help from Protocol Buffers to work. I wonder if 
CodedInputStream#pushLimit() would do the trick?
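To make the idea concrete, here is a minimal, self-contained sketch of the proposed length-aware contract. It is not Hadoop's actual Deserializer (which takes its stream via open()); the stream parameter, the BytesRecord type, and the demo class are hypothetical, added only so the example compiles and runs. The point is that when the caller supplies the serialized length, the deserializer can stop at the record boundary instead of reading to EOF; with Protocol Buffers, CodedInputStream#pushLimit() would impose the same kind of bound.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class BoundedDeserializerDemo {

  // Hypothetical length-aware variant of the proposed API. The stream is
  // passed explicitly here (unlike Hadoop's Deserializer) to keep the
  // sketch self-contained.
  interface Deserializer<T> {
    T deserialize(T t, DataInputStream in, int length) throws IOException;
  }

  // Toy record type standing in for a serialized object.
  static class BytesRecord {
    byte[] data;
  }

  static class BytesDeserializer implements Deserializer<BytesRecord> {
    public BytesRecord deserialize(BytesRecord t, DataInputStream in, int length)
        throws IOException {
      if (t == null) {
        t = new BytesRecord();
      }
      // Read exactly 'length' bytes and stop: the record boundary comes
      // from the caller, never from hitting EOF.
      t.data = new byte[length];
      in.readFully(t.data);
      return t;
    }
  }

  public static void main(String[] args) throws IOException {
    // Two records packed back to back in one stream. A read-to-EOF
    // deserializer would swallow both; the length bound separates them.
    byte[] stream = new byte[] {1, 2, 3, 4, 5};
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(stream));
    BytesDeserializer d = new BytesDeserializer();
    BytesRecord first = d.deserialize(null, in, 3);
    BytesRecord second = d.deserialize(null, in, 2);
    System.out.println(first.data.length + " " + second.data.length);
  }
}
```

For the PB case, the same shape would presumably wrap the deserialize body in a pushLimit(length)/popLimit(oldLimit) pair around Builder#mergeFrom(CodedInputStream), so the parser sees a virtual EOF at the record boundary.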

> Add serialization for Protocol Buffers
> --------------------------------------
>
>                 Key: HADOOP-3788
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3788
>             Project: Hadoop Core
>          Issue Type: Wish
>          Components: examples, mapred
>    Affects Versions: 0.19.0
>            Reporter: Tom White
>            Assignee: Alex Loddengaard
>             Fix For: 0.19.0
>
>         Attachments: hadoop-3788-v1.patch, hadoop-3788-v2.patch, 
> protobuf-java-2.0.1.jar
>
>
> Protocol Buffers (http://code.google.com/p/protobuf/) are a way of encoding 
> data in a compact binary format. This issue is to write a 
> ProtocolBuffersSerialization to support using Protocol Buffers types in 
> MapReduce programs, including an example program. This should probably go 
> into contrib. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.