I have a structure, something like

message A {
  <Meta info about A>
  repeated B b = ...;
}

Where there can be many many B objects. I am using C++ and want to
generate a message that is small enough to send in a single UDP
datagram of around 1024 bytes in size. I might have up to 2000 B
blocks, and they can be up to 300 bytes in size (with a minimum size
of 5 bytes). In each datagram I need to have one 'A' header, and I
want to maximise the number of B objects per packet.

Is the most efficient way to do this something like this:

A a;
<set a meta data>
for ( i = start; i < end; ++i ) {
  *a.add_b() = b[i];
  if ( a.ByteSize() > 1024 ) {
    // over budget: serialize what we have and start a new packet
    uint8_t data[4096];
    uint8_t *end = a.SerializeWithCachedSizesToArray ( data );
    // send [data, end), clear a, re-add the B that didn't fit
  }
}
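One alternative worth considering: on the wire, each element of a repeated embedded-message field costs its own tag byte plus a varint-encoded length plus the payload, so rather than calling ByteSize() on the whole message every iteration, the per-element cost can be accumulated directly. A minimal sketch of that idea (the helper names, the one-byte-tag assumption for field numbers <= 15, and the 1024-byte budget are my assumptions, not real protobuf API):

```cpp
#include <cstddef>
#include <vector>

// Bytes needed to varint-encode a length (B blocks are <= 300 bytes,
// so this is 1 or 2).
static std::size_t varintSize(std::size_t v) {
    std::size_t n = 1;
    while (v >= 0x80) { v >>= 7; ++n; }
    return n;
}

// How many B payloads, given their individual serialized sizes, fit in
// one datagram alongside a header of headerSize bytes. Each embedded
// message costs tagSize + varint(len) + len bytes on the wire.
std::size_t numThatFit(std::size_t headerSize,
                       const std::vector<std::size_t>& sizes,
                       std::size_t start,
                       std::size_t budget,
                       std::size_t tagSize = 1) {
    std::size_t total = headerSize;
    std::size_t count = 0;
    for (std::size_t i = start; i < sizes.size(); ++i) {
        std::size_t cost = tagSize + varintSize(sizes[i]) + sizes[i];
        if (total + cost > budget) break;
        total += cost;
        ++count;
    }
    return count;
}
```

With the header's serialized size known up front (one ByteSize() call on the header fields alone), this tells you how many B's to add before serializing once, instead of re-measuring the whole message on every append.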

And repeat until my list of B objects has all been sent? I've written
a really simple test program which compares the time taken by code
similar to the above against a version that knows in advance how many
items will fit and so never calls ByteSize() in the loop, and the
ByteSize() version takes a very large performance hit.

If I serialised each 'B' block on its own into a large block of
memory, remembered each of the sizes, and then just iterated over the
resulting block of memory, sending off blocks by size - would that
be more efficient? Is there a large efficiency gain (in memory or wire
size) from encoding 10 'B' blocks at once?
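A sketch of that second approach, under the same assumptions as before (the helper names and the hard-coded field tag are illustrative, not from the real generated code): each pre-serialized B payload is prefixed with the tag and varint length that protobuf emits for a repeated embedded-message field, so every frame still parses as a single A message. On the wire-size question: each element always carries its own tag + length prefix in the protobuf wire format, so there is no wire-size saving from encoding 10 'B' blocks together versus one at a time.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Append a varint to out (protobuf wire encoding).
static void appendVarint(std::string& out, std::uint64_t v) {
    while (v >= 0x80) { out.push_back(char((v & 0x7F) | 0x80)); v >>= 7; }
    out.push_back(char(v));
}

// Pack pre-serialized B payloads into frames of at most maxBytes each.
// headerBytes is the serialized A header (without any B fields); each B
// is emitted as tag + varint length + payload, so each frame parses as
// one A message (protobuf accepts fields in any order). fieldTag is the
// one-byte tag for the repeated B field: (field_number << 3) | 2, which
// assumes field_number <= 15. A single oversized block is still emitted
// in its own frame, which may then exceed maxBytes.
std::vector<std::string> packFrames(const std::string& headerBytes,
                                    const std::vector<std::string>& blocks,
                                    std::size_t maxBytes,
                                    char fieldTag = char((2 << 3) | 2)) {
    std::vector<std::string> frames;
    std::string frame = headerBytes;
    for (const std::string& b : blocks) {
        std::string prefix(1, fieldTag);
        appendVarint(prefix, b.size());
        if (frame.size() + prefix.size() + b.size() > maxBytes
            && frame.size() > headerBytes.size()) {
            frames.push_back(frame);   // current frame is full; start a new one
            frame = headerBytes;
        }
        frame += prefix;
        frame += b;
    }
    if (frame.size() > headerBytes.size()) frames.push_back(frame);
    return frames;
}
```

Each B is serialized exactly once here; building a frame is then just memcpy-style concatenation, which is the efficiency gain over re-measuring and re-serializing a growing A message.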

Any suggestions would be greatly appreciated - thanks!

You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.
To post to this group, send email to protobuf@googlegroups.com.