You should verify that the bytes that come out of the InputStream really are
the exact same bytes that were written by the serializer to the OutputStream
originally.  You could do this by computing a checksum at both ends and
printing both, then comparing them visually.  You'll probably find that the bytes
differ somehow, or don't end at the same point.
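
Here's a minimal sketch of that check using java.util.zip's
CheckedOutputStream/CheckedInputStream.  The stand-in byte array and class
name are just for illustration; in your code, wrap the actual streams the
SequenceFile hands you and call message.writeTo(out) / mergeFrom(in) as
usual:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.CRC32;
import java.util.zip.CheckedInputStream;
import java.util.zip.CheckedOutputStream;

public class ChecksumCheck {
  public static void main(String[] args) throws Exception {
    // Stand-in for the serializer's output.
    byte[] original = "pretend this is a serialized message".getBytes("UTF-8");

    // Write side: checksum the bytes exactly as they are written.
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    CheckedOutputStream out = new CheckedOutputStream(buf, new CRC32());
    out.write(original);
    out.flush();
    System.out.println("wrote " + buf.size() + " bytes, checksum="
        + out.getChecksum().getValue());

    // Read side: checksum the bytes exactly as the parser would see them.
    CheckedInputStream in = new CheckedInputStream(
        new ByteArrayInputStream(buf.toByteArray()), new CRC32());
    byte[] readBack = new byte[original.length];
    int total = 0;
    while (total < readBack.length) {
      int n = in.read(readBack, total, readBack.length - total);
      if (n < 0) break;  // stream ended early
      total += n;
    }
    System.out.println("read " + total + " bytes, checksum="
        + in.getChecksum().getValue());

    // Mismatched counts or checksums mean the compressor is altering or
    // re-framing the byte stream between the writer and the parser.
  }
}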

On Thu, Feb 18, 2010 at 2:48 AM, Yang <teddyyyy...@gmail.com> wrote:

> I tried to use protocol buffers in Hadoop;
>
> so far it works fine with SequenceFile, once I hook it up with a simple
> wrapper,
>
> but after I put a compressor into the SequenceFile, it fails: it reads all
> the messages and yet still tries to advance the read pointer, then
> readTag() returns 0, so mergeFrom() returns a message with no fields set.
>
> Does anybody familiar with both SequenceFile and protocol buffers have an
> idea why it fails like this?
> I find it hard to understand, because the InputStream is simply the same
> whether or not it goes through a compressor.
>
>
> thanks
> Yang
>
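
For what it's worth, readTag() returning 0 is simply how the parser signals
end of input, and mergeFrom(InputStream) consumes everything up to EOF; if
the wrapper relies on the stream ending exactly at a message boundary, a
compressor that buffers or re-frames bytes can break that assumption.
Length-delimited framing removes the dependence on EOF.  A minimal sketch,
where MyMessage is a placeholder for the actual protoc-generated class the
wrapper holds:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// MyMessage stands in for whatever protoc-generated class the wrapper holds.
public class DelimitedFraming {

  // Write side: writeDelimitedTo prefixes each message with a varint
  // length, so the reader never depends on the stream ending exactly at
  // a message boundary.
  public static void writeMessage(MyMessage msg, OutputStream out)
      throws IOException {
    msg.writeDelimitedTo(out);
  }

  // Read side: parseDelimitedFrom reads the length prefix and then
  // exactly that many bytes.  It returns null at a clean end of stream,
  // which is a clearer signal than a message with no fields set.
  public static MyMessage readMessage(InputStream in) throws IOException {
    return MyMessage.parseDelimitedFrom(in);
  }
}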

