The official implementation does #1: it assumes that if ten or more
consecutive bytes have the high bit set where a varint is expected, some
sort of corruption must have occurred.  There's no reasonable way to
recover from such corruption, so the parser just gives up and returns an
error.
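For illustration, here's a minimal sketch of that behavior in Python (a hypothetical `decode_varint` helper, not the actual C++ implementation): each byte contributes 7 payload bits, a 64-bit value therefore fits in at most 10 bytes, and an 11th continuation byte is treated as corruption.

```python
def decode_varint(data, pos=0):
    """Decode a base-128 varint from `data` starting at `pos`.

    Returns (value, next_pos). Raises ValueError if the varint is
    truncated or runs past the 10-byte limit, mirroring option #1.
    """
    result = 0
    shift = 0
    for i in range(10):  # a 64-bit varint never needs more than 10 bytes
        if pos + i >= len(data):
            raise ValueError("truncated varint")
        b = data[pos + i]
        result |= (b & 0x7F) << shift
        shift += 7
        if not (b & 0x80):  # high bit clear: this was the last byte
            return result & 0xFFFFFFFFFFFFFFFF, pos + i + 1
    # 10 bytes consumed, high bit still set: give up, report corruption
    raise ValueError("varint too long; input is likely corrupt")
```

For example, the two bytes `0x96 0x01` decode to 150, while eleven `0x80` bytes in a row raise an error instead of being silently consumed.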

On Sat, Jan 30, 2010 at 7:56 PM, kfitc...@gmail.com <kfitc...@gmail.com> wrote:

> When reading about the varint encoding I wondered if there was a
> specific guidance on how to handle a badly encoded/out-of-bounds
> varint. In particular, what if I get more than ten consecutive bytes
> with the high bit set?
>
> 1) Treat the whole message as bad, and drop it
>
> 2) Treat the field as bad, do I keep eating as long as there are high
> bits set, or stop after ten and try to decode a new
> tag?
>
> 3) Keep eating bytes until a high bit isn't set, and just throw away
> the bits that won't fit into the type I am trying to decode into.
>
> 4) Eat up to 10 bytes, decode the least significant 64 bits from the
> encoded field, and try to decode a new tag
>
> Kevin
>
> --
> You received this message because you are subscribed to the Google Groups
> "Protocol Buffers" group.
> To post to this group, send email to proto...@googlegroups.com.
> To unsubscribe from this group, send email to
> protobuf+unsubscr...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/protobuf?hl=en.
>
>
