On Mon, Jan 6, 2014 at 2:07 PM,  <jonathan.w...@gree.co.jp> wrote:
> Sorry if this has been covered before. I searched but couldn't find a
> complete answer (or at least what I thought was complete).
> When I write a varint to a coded output stream via
> coded_stream.WriteVarint32([some value]) is it possible to just do a quick
> calculation to find the number of bytes that would be written to the stream
> in that scenario just based on the value of the integer passed in?
> Is there any additional overhead to indicate that it is a varint when
> encoded to the stream or is the varint size just the same calculation as
> dictated in the language docs (here
> https://developers.google.com/protocol-buffers/docs/encoding?hl=zh-CN#varints).
> Obviously one easy way to find the size would be to create a coded output
> stream, write the varint to it, and then find the byte size difference. I'm
> just wondering if there is a better/faster way than having to construct and
> delete a coded output stream just for a calculation.

Find the bit length of your integer (position of the highest set bit,
plus one), divide by 7 and round up (i.e. (bits + 6) / 7) -- that is the
number of bytes it takes to encode the varint. There is no extra framing
inside the varint itself; it's exactly the encoding described in the docs
you linked. On x86, there is a bsr instruction which computes ilog2; with
gcc you could get the bit length with e.g. 32 - __builtin_clz(var) (note
that __builtin_clz(0) is undefined, and zero encodes as a single byte, so
treat that case specially).


You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.
Visit this group at http://groups.google.com/group/protobuf.