FYI, I reran my comparison benchmark using optimize_for = SPEED, and
got the following results:

1) dns.proto with key/value/first/last/type/ttl (mostly strings),
5,000 elements in DnsRecord:

0.019223 seconds to serialize as 658124 bytes with JSON::XS
0.0092 seconds to deserialize 658124 bytes with JSON::XS
0.018292 seconds to serialize as 414859 bytes with protobuf
0.006274 seconds to deserialize 414859 bytes with protobuf
0.028614 seconds to serialize as 692824 bytes with Storable
0.009033 seconds to deserialize 692824 bytes with Storable

2) dns.proto with sip/dip/sport/dport, 10,000 elements in DnsRecord:

0.003612 seconds to serialize as 300330 bytes with JSON::XS
0.004833 seconds to deserialize 300330 bytes with JSON::XS
0.002075 seconds to serialize as 85841 bytes with protobuf
0.000549 seconds to deserialize 85841 bytes with protobuf
0.013752 seconds to serialize as 340907 bytes with Storable
0.004676 seconds to deserialize 340907 bytes with Storable
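For anyone who wants to try something similar without the Perl/XS setup, here is an analogous micro-benchmark sketch in Python, timing json against pickle (Python's rough analog of Storable) on records shaped like case 2. The record shape and field names are assumptions based on the post; this is not a rerun of the Perl benchmark, and the numbers will differ.

```python
# Analogous micro-benchmark (sketch): json vs pickle on 10,000 records
# shaped like case 2 (sip/dip/sport/dport). Field values are made up.
import json
import pickle
import time

records = [
    {"sip": "10.0.0.1", "dip": "10.0.0.2",
     "sport": 1024 + i % 60000, "dport": 53}
    for i in range(10_000)
]

def bench(name, dump, load):
    t0 = time.perf_counter()
    blob = dump(records)
    t1 = time.perf_counter()
    out = load(blob)
    t2 = time.perf_counter()
    assert out == records  # round-trip must be lossless
    print(f"{t1 - t0:.6f} seconds to serialize as {len(blob)} bytes with {name}")
    print(f"{t2 - t1:.6f} seconds to deserialize {len(blob)} bytes with {name}")

bench("json", lambda r: json.dumps(r).encode(), lambda b: json.loads(b))
bench("pickle", pickle.dumps, pickle.loads)
```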

So, I guess PB isn't kidding around when they say optimize_for = SPEED.
Straight across the board, it's faster than JSON::XS or Storable.  It
looks like for packing and unpacking messages with a lot of varint
data, protobuf blows the doors off of the other Perl serialization
mechanisms, but even for string-heavy messages, it packs at least as
fast as the others, and unpacks significantly faster (probably due to
the smaller message size where the message has a lot of small strings).

In summary, I don't think there is any faster way to serialize
structured data from Perl (as long as you're willing to write
the .proto files and use protobuf-perlxs to compile them into Perl/XS
extension modules, of course).


On Mar 5, 4:34 pm, Justin Azoff <> wrote:
> On Mar 4, 10:24 pm, David Anderson <> wrote:
> > I think the major point to take away from the comparison is: use the
> > correct tool for your needs. If you need backward/forward
> > compatibility, heterogeneous versions of software interacting and some
> > structural validation (just structure, not talking about the higher
> > level semantics of fields), PB/Thrift is what you need. If you don't
> > care about the above points, by all means use json (and don't forget
> > to get your web server to gzip traffic).
> > - Dave
> I definitely agree!  I have also been looking at this from
> another angle:
> Right now JSON is faster than protobuf (at least in python), but
> protobuf produces smaller output.  Protobuf will only get faster, but
> JSON cannot get any smaller.  Looking forward, protobuf definitely
> has an advantage.
> --
> - Justin
You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.