On Tue, Dec 7, 2010 at 7:08 PM, Kenton Varda <ken...@google.com> wrote:
> Cool.  Serialization and parsing themselves should actually be improved even
> more than that, but having other Python code around it waters down the
> numbers.  :)

The times are from a minimal microbenchmark using Python's timeit module:

import timeit

nruns = 1000
nwarmups = 100

es = ...  # the protobuf messages (pb.Email instances)

def ser():
  return [e.SerializeToString() for e in es]

def parse(ses):
  for se in ses:
    pb.Email().ParseFromString(se)

# Baseline: the overhead of the timing harness itself.
t = timeit.Timer(lambda: None)
t.timeit(nwarmups)
print 'noop:', t.timeit(nruns) / nruns

# Serialization time per message.
t = timeit.Timer(ser)
t.timeit(nwarmups)
print 'ser:', t.timeit(nruns) / nruns / len(es)

# Parse time per message, over the serialized strings.
ses = ser()
t = timeit.Timer(lambda: parse(ses))
t.timeit(nwarmups)
print 'parse:', t.timeit(nruns) / nruns / len(es)

# Average serialized message size in bytes.
print 'msg size:', sum(len(se) for se in ses) / len(ses)

> Also, note that if you explicitly compile C++ versions of your
> messages and link them into the process, they'll be even faster.  (If you
> don't, the library falls back to DynamicMessage which is not as fast as
> generated code.)

I'm trying to decipher that last hint but having some trouble: what
exactly do you mean, and how do I do that? I'm just using protoc
--py_out=... and PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp.
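In case it helps, here's a minimal sketch of what I'm doing now
(email_pb2 is a placeholder module name; I set the environment variable
before the generated module, and hence google.protobuf, is first
imported):

import os
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'cpp'
import email_pb2 as pb  # generated via protoc --py_out=...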

> As for when 2.4.0 might be released, it's hard to say.  There's a lot of
> work to do, and we have a new person doing this release so he has to learn
> the process.  Also, holidays are coming up.  So, I'd guess it will be ready
> sometime in January.

Thanks for the estimate; even a ballpark without commitment is useful.

-- 
Yang Zhang
http://yz.mit.edu/
