Alkis is quite right, sorry for the typo.

What I meant is that my results are:
Without optimize_for = SPEED

ProtoBuf Loop  : 10,000,000
Get object     : 15,130msec
Serdes protobuf: 68,600msec
Objs per second: 145,772
Total bytes    : 829,996,683

Thrift Loop    : 10,000,000
Get object     : 12,651msec
Serdes thrift  : 36,904msec
Objs per second: 270,973
Total bytes    : 1,130,000,000


With optimize_for = SPEED

ProtoBuf Loop  : 10,000,000
Get object     : 8,170msec
Serdes protobuf: 33,054msec
Objs per second: 302,535
Total bytes    : 829,997,866

Thrift Loop    : 10,000,000
Get object     : 14,394msec
Serdes thrift  : 37,671msec
Objs per second: 265,456
Total bytes    : 1,130,000,000
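For context, each run times a tight serialize-then-deserialize loop ("Serdes") over the same message. Below is a minimal Python sketch of that measurement shape; the record contents and the use of pickle as a stand-in codec are my own assumptions, not the actual Java harness, which used generated protobuf/Thrift classes.

```python
import pickle
import time

# Hypothetical stand-in record; the real benchmark used generated
# protobuf/Thrift Java message classes.
record = {"key": "k1", "value": "v" * 50, "ttl": 3600}

N = 100_000  # the Java runs above used 10,000,000 iterations

start = time.perf_counter()
total_bytes = 0
for _ in range(N):
    data = pickle.dumps(record)  # serialize
    pickle.loads(data)           # deserialize ("Serdes" = both together)
    total_bytes += len(data)
elapsed = time.perf_counter() - start

print(f"Serdes         : {elapsed * 1000:,.0f}msec")
print(f"Objs per second: {N / elapsed:,.0f}")
print(f"Total bytes    : {total_bytes:,}")
```

"Objs per second" and "Total bytes" above are derived the same way: iterations divided by wall-clock time, and the sum of serialized sizes.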

On Apr 17, 10:25 pm, Alkis Evlogimenos ('Αλκης Ευλογημένος)
<evlogime...@gmail.com> wrote:
> Are the with/without optimize_for = SPEED flipped? It seems that what you
> suggest is that protobuf with optimize_for = SPEED is slower than without.
>
>
>
> On Fri, Apr 17, 2009 at 4:45 AM, TimYang <iso1...@gmail.com> wrote:
>
> > I've made two similar tests in Java, comparing Thrift and Protocol
> > Buffers, and here is the result.
>
> > Without optimize_for = SPEED
>
> > Thrift Loop    : 10,000,000
> > Get object     : 14,394msec
> > Serdes thrift  : 37,671msec
> > Objs per second: 265,456
> > Total bytes    : 1,130,000,000
>
> > ProtoBuf Loop  : 10,000,000
> > Get object     : 8,170msec
> > Serdes protobuf: 33,054msec
> > Objs per second: 302,535
> > Total bytes    : 829,997,866
>
> > With optimize_for = SPEED
>
> > ProtoBuf Loop  : 10,000,000
> > Get object     : 15,130msec
> > Serdes protobuf: 68,600msec
> > Objs per second: 145,772
> > Total bytes    : 829,996,683
>
> > Thrift Loop    : 10,000,000
> > Get object     : 12,651msec
> > Serdes thrift  : 36,904msec
> > Objs per second: 270,973
> > Total bytes    : 1,130,000,000
>
> > For details of the tests, see
> > Round 1:
> > http://timyang.net/programming/thrift-protocol-buffers-performance-java/
> > Round 2:
> > http://timyang.net/programming/thrift-protocol-buffers-performance-2/
>
> > On Mar 6, 3:37 pm, Dave Bailey <d...@daveb.net> wrote:
> > > FYI, I reran my comparison benchmark using optimize_for = SPEED, and
> > > got the following results:
>
> > > 1) dns.proto with key/value/first/last/type/ttl (mostly strings),
> > > 5,000 elements in DnsRecord:
>
> > > 0.019223 seconds to serialize as 658124 bytes with JSON::XS
> > > 0.0092 seconds to deserialize 658124 bytes with JSON::XS
> > > 0.018292 seconds to serialize as 414859 bytes with protobuf
> > > 0.006274 seconds to deserialize 414859 bytes with protobuf
> > > 0.028614 seconds to serialize as 692824 bytes with Storable
> > > 0.009033 seconds to deserialize 692824 bytes with Storable
>
> > > 2) dns.proto with sip/dip/sport/dport, 10,000 elements in DnsRecord:
>
> > > 0.003612 seconds to serialize as 300330 bytes with JSON::XS
> > > 0.004833 seconds to deserialize 300330 bytes with JSON::XS
> > > 0.002075 seconds to serialize as 85841 bytes with protobuf
> > > 0.000549 seconds to deserialize 85841 bytes with protobuf
> > > 0.013752 seconds to serialize as 340907 bytes with Storable
> > > 0.004676 seconds to deserialize 340907 bytes with Storable
>
> > > So, I guess PB isn't kidding around when they say optimize_for = SPEED.
> > > Straight across the board, faster than JSON::XS or Storable.  It looks
> > > like for packing and unpacking messages with a lot of varint data,
> > > protobuf blows the doors off of the other Perl serialization
> > > mechanisms, but even for string-heavy messages, it packs at least as
> > > fast as the others, and unpacks significantly faster (probably due to
> > > the smaller message size where the message has a lot of small
> > > strings).
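The varint effect is easy to see in isolation. Here is a small Python sketch of protobuf-style base-128 varint encoding (a toy implementation of the documented wire format, not the protobuf library itself): small integers take one or two bytes where a fixed 32-bit field or a JSON decimal string takes four or more.

```python
def encode_varint(n: int) -> bytes:
    """Protobuf-style base-128 varint: 7 data bits per byte,
    high bit set on every byte except the last."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

print(len(encode_varint(53)))     # a small port number fits in 1 byte
print(len(encode_varint(40000)))  # 3 bytes
print(encode_varint(300).hex())   # "ac02", the example from the protobuf docs
```

This is why the sip/dip/sport/dport test shrinks from ~300 KB of JSON to ~86 KB: most field values are small integers.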
>
> > > In summary, I don't think there is any faster way to serialize
> > > structured data from Perl (as long as you're willing to write
> > > the .proto files and use protobuf-perlxs to compile them into Perl/XS
> > > extension modules, of course).
>
> > > -dave
>
> > > On Mar 5, 4:34 pm, Justin  Azoff <justin.az...@gmail.com> wrote:
>
> > > > On Mar 4, 10:24 pm, David Anderson <d...@natulte.net> wrote:
>
> > > > > I think the major point to take away from the comparison is: use the
> > > > > correct tool for your needs. If you need backward/forward
> > > > > compatibility, heterogeneous versions of software interacting and some
> > > > > structural validation (just structure, not talking about the higher
> > > > > level semantics of fields), PB/Thrift is what you need. If you don't
> > > > > care about the above points, by all means use json (and don't forget
> > > > > to get your web server to gzip traffic).
>
> > > > > - Dave
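On the gzip point: repetitive JSON compresses very well, which narrows the size gap on the wire. A quick Python illustration (the sample payload is made up, loosely modeled on the record shapes discussed above):

```python
import gzip
import json

# Hypothetical payload: many structurally similar records.
records = [{"sip": i, "dip": i + 1, "sport": 53, "dport": 40000}
           for i in range(1000)]

raw = json.dumps(records).encode()
packed = gzip.compress(raw)
print(len(raw), len(packed))  # compressed size is a fraction of the raw size
```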
>
> > > > I definitely agree!  I have also been looking at this from
> > > > another angle:
> > > > Right now JSON is faster than protobuf (at least in Python), but
> > > > protobuf produces smaller output.  Protobuf will only get faster, but
> > > > JSON cannot get any smaller.  Looking forward, protobuf definitely
> > > > has an advantage.
>
> > > > --
> > > > - Justin
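Justin's size point can be illustrated directly: a JSON number costs one byte per decimal digit plus key names and punctuation, while a binary encoding costs a few bytes regardless. A Python sketch using struct as a rough stand-in for a binary wire format (the field names are hypothetical; real protobuf varints would often be smaller still for small values):

```python
import json
import struct

# Hypothetical record of four integer fields (cf. the sip/dip/sport/dport test).
record = {"sip": 167772161, "dip": 3232235777, "sport": 53, "dport": 40000}

json_size = len(json.dumps(record).encode())
binary_size = len(struct.pack("<4I", *record.values()))  # four fixed 32-bit ints

print(json_size, binary_size)  # the JSON encoding is several times larger
```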
>
> --
>
> Alkis
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.
To post to this group, send email to protobuf@googlegroups.com
To unsubscribe from this group, send email to 
protobuf+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/protobuf?hl=en
-~----------~----~----~----~------~----~------~--~---