Speed varies a lot depending on the precise content. My benchmarks
generally show serialization performance somewhere between 100 MB/s and 1
GB/s, whereas you're seeing 33 MB/s, but my benchmarks do not include any
kind of I/O. Maybe you could separate the serialization step from the I/O
(by serializing to one huge in-memory buffer) so that you can measure the
two costs independently.
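That measurement could be sketched like this, assuming the generated classes from the .proto quoted below (the header name is hypothetical, and the Id message's fields are not shown in the thread, so that part is left as a placeholder):

```cpp
#include <chrono>
#include <iostream>
#include <string>

#include "neighbors.pb.h"  // hypothetical header produced by protoc

int main() {
  // Build a response roughly comparable to the reported workload
  // (~150k repeated items, 16-byte ids).
  GetNeighborsResponse response;
  for (int i = 0; i < 150000; ++i) {
    NodeWithNeighbors* node = response.add_nodeswithneighbors();
    // (populate node->mutable_nodeid() here; Id's fields aren't shown)
    IdConfidence* neighbor = node->add_neighbors();
    neighbor->set_id(std::string(16, 'x'));
    neighbor->set_confidence(0.5f);
  }

  // Serialize to one in-memory buffer so no I/O is included in the timing.
  std::string buffer;
  auto start = std::chrono::steady_clock::now();
  response.SerializeToString(&buffer);
  auto stop = std::chrono::steady_clock::now();

  double seconds = std::chrono::duration<double>(stop - start).count();
  std::cout << (buffer.size() / (1024.0 * 1024.0)) / seconds
            << " MB/s, serialization only\n";
  return 0;
}
```

If this number comes out near the 100 MB/s–1 GB/s range, the remaining time is going to the stream/socket path rather than to protobuf itself.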
On Mon, Jul 13, 2009 at 9:36 PM, Alex Black <a...@alexblack.ca> wrote:
> I'm sending a message with about ~150k repeated items in it, total
> size is about 3.3mb, and its taking me about 100ms to serialize it and
> send it out.
> Can I expect to do any better than this? What could I look into to
> improve this?
> - I have "option optimize_for = SPEED;" set in my proto file
> - I'm compiling with -O3
> - I'm sending my message in batches of 1000
> - I'm using C++, on ubuntu, x64
> - I'm testing all on one machine (e.g. client and server are on one
> machine)
> My message looks like:
>
>   message NodeWithNeighbors {
>     required Id nodeId = 1;
>     repeated IdConfidence neighbors = 2;
>   }
>
>   message GetNeighborsResponse {
>     repeated NodeWithNeighbors nodesWithNeighbors = 1;
>   }
>
>   message IdConfidence {
>     required bytes id = 1;
>     required float confidence = 2;
>   }
>
> Where "bytes id" is used to send 16-byte IDs (UUIDs).
> I'm writing each message (batch) out like this:
>
>   CodedOutputStream codedOutputStream(&m_ProtoBufStream);
>   // Write out the size of the message
>   codedOutputStream.WriteVarint32(message.ByteSize());
>   // Ask the message to serialize itself to our stream adapter, which
>   // ultimately calls Write on us, which we then call Write on our
>   // composed stream
>   message.SerializeToCodedStream(&codedOutputStream);
>
> In my stream implementation I'm buffering every 16 KB, and calling send
> on the socket once I have 16 KB.
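For reference, the matching read side of this size-prefixed framing usually pushes a limit so each message parses exactly its own bytes. A sketch, assuming the same generated classes (header name hypothetical):

```cpp
#include <google/protobuf/io/coded_stream.h>

#include "neighbors.pb.h"  // hypothetical header produced by protoc

using google::protobuf::io::CodedInputStream;

// Reads one size-prefixed GetNeighborsResponse from the stream.
// Returns false on end-of-stream or parse failure.
bool ReadBatch(CodedInputStream* input, GetNeighborsResponse* response) {
  uint32_t size;
  if (!input->ReadVarint32(&size)) return false;
  // Restrict parsing to exactly `size` bytes so the parser stops at the
  // message boundary instead of consuming the next batch's prefix.
  CodedInputStream::Limit limit = input->PushLimit(size);
  bool ok = response->ParseFromCodedStream(input);
  input->PopLimit(limit);
  return ok;
}
```

Keeping write and read framing symmetric (varint size, then exactly that many payload bytes) is what lets many batches share one socket stream.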
> - Alex
You received this message because you are subscribed to the Google Groups
"Protocol Buffers" group.