I tried to use Protocol Buffers in Hadoop.

So far it works fine with SequenceFile, after I hooked it up through a
simple wrapper.

But after I enable a compressor on the SequenceFile, it fails: the reader
has already consumed all the messages yet still tries to advance the read
pointer, so readTag() returns 0 and mergeFrom() returns a message with no
fields set.

Does anybody familiar with both SequenceFile and Protocol Buffers have an
idea why it fails like this?
I find it hard to understand, because the InputStream is the same whether
or not it goes through a compressor.
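For what it's worth, my current guess is that parsing from the raw stream
relies on the stream ending exactly at a message boundary, which the
decompressed stream doesn't guarantee. Below is a minimal sketch (plain
java.io, no Hadoop or protobuf; the class and method names are just
placeholders I made up) of the length-prefixed framing I think would avoid
this, since the reader then reads exactly as many bytes as each message
occupies instead of reading to EOF:

```java
import java.io.*;
import java.util.zip.*;

// Sketch: length-prefix each message so the reader never depends on EOF
// (or readTag() returning 0) to find a message boundary.
public class FramedMessages {

    // Write the payload length first, then the bytes. A Writable's
    // write(DataOutput) could do this with message.toByteArray().
    static void writeFramed(DataOutputStream out, byte[] payload)
            throws IOException {
        out.writeInt(payload.length);
        out.write(payload);
    }

    // Read exactly the advertised number of bytes; parseFrom(byte[])
    // could then decode the message from the buffer without ever
    // reading past the end of the message.
    static byte[] readFramed(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] buf = new byte[len];
        in.readFully(buf);
        return buf;
    }

    public static void main(String[] args) throws IOException {
        // Round-trip two "messages" through GZIP to mimic a compressed
        // stream: the frames survive even though the stream boundaries
        // no longer line up with message boundaries.
        ByteArrayOutputStream raw = new ByteArrayOutputStream();
        DataOutputStream out =
                new DataOutputStream(new GZIPOutputStream(raw));
        writeFramed(out, "first message".getBytes("UTF-8"));
        writeFramed(out, "second message".getBytes("UTF-8"));
        out.close();

        DataInputStream in = new DataInputStream(new GZIPInputStream(
                new ByteArrayInputStream(raw.toByteArray())));
        System.out.println(new String(readFramed(in), "UTF-8"));
        System.out.println(new String(readFramed(in), "UTF-8"));
    }
}
```

(I believe newer protobuf releases also ship writeDelimitedTo() /
parseDelimitedFrom() that do this framing for you, but I haven't tried
them here.)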


thanks
Yang

-- 
You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/protobuf?hl=en.
