The plan is to modify the current Python implementation so that message
objects can wrap C++ protobuf objects via the reflection interface.  This
means a few things:
* The Python API will be exactly the same, except for setting an option
saying that you want this wrapping (see the sketch after this list).

* If we wrap DynamicMessage, then you should not have to compile any C/C++
code specific to your message type at all.  So, once the protocol buffer
extension itself is installed, users will not have to install their own
extensions.

* There would also be the option of wrapping generated C++ classes specific
to your message types, which would squeeze out a few more drops of
performance at the expense of having to compile C++ code.
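
To make the first bullet concrete, here is a rough sketch of what this
might look like from Python.  The option in the comment is purely
hypothetical (neither its name nor the mechanism for setting it has been
decided); everything else is just the existing pure-Python API, which would
not change:

from google.protobuf import descriptor_pb2

# Hypothetical: some option would tell the runtime to back message objects
# with C++ protobuf objects via the reflection interface.  The exact name
# and mechanism are undecided, so this line is illustration only:
# protobuf_options.use_cpp_message_backing = True

msg = descriptor_pb2.FileDescriptorProto()
msg.name = "example.proto"       # same attribute access as today
data = msg.SerializeToString()   # under the plan, done by the wrapped C++ object

parsed = descriptor_pb2.FileDescriptorProto()
parsed.ParseFromString(data)
assert parsed.name == "example.proto"

The only user-visible difference would be flipping that option; parsing and
serialization would then be handled by the underlying C++ message.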

On Wed, Jun 3, 2009 at 6:49 AM, Nicholas Reid <njr...@gmail.com> wrote:

>
> Hi Kenton,
>
> If you're in the process of reviewing accelerated Python support for
> PB2, were you thinking about something like the following approach?
>
> 1. Add another output format to protoc (say --pylib_out) which
> generates both the existing C++ code and whatever additional C++ is
> necessary to create a compiled Python module from the specified .proto
> file.
>
> The kind of thing mentioned here:
> http://docs.python.org/extending/extending.html#a-simple-example.
>
> 2. Ideally, it would even be possible to automatically generate the
> Python distutils configuration
> (http://docs.python.org/extending/building.html#building) so that
>
> $> python setup.py install
>
> would compile and install the generated C module automatically.
>
> I appreciate this probably means a lot of work, but it would give the
> highest possible performance from within Python and may even surpass
> Thrift's Python performance.
>
> If you were able to maintain both the --python_out and the --pylib_out
> options, then users who need pure Python PB support (say, for App
> Engine, etc.) could use one, and performance-critical applications could
> use the other.
>
> Would it also be a good time to look at what would need to be changed
> to support Python 3.0?
>
> Cheers,
>
> Nicholas Reid
>
>
> On Jun 2, 6:41 pm, Kenton Varda <ken...@google.com> wrote:
> > Yeah, we've dropped the ball on this.  Unfortunately the people in charge of
> > the Python protocol buffers implementation also have other, apparently very
> > demanding projects, and thus Python protobuf hasn't gotten the attention it
> > deserves.  Sadly even Google doesn't have infinite engineering resources.
> > I'm trying to find a solution to this -- it's a management issue, though,
> > not a technical one.
> >
> > On Tue, Jun 2, 2009 at 1:15 AM, Daniel L <unem...@gmail.com> wrote:
> >
> > > Indeed. Python performance is quite low according to my tests.
> > > Definitely workable for small objects, although very slow for larger
> > > structures. When comparing to C++ or Apache Thrift Python bindings on
> > > the same data structure (basically a 2D array of strings @ >10MB) we
> > > saw several orders of magnitude of difference, both serializing and
> > > deserializing.
> >
> > > This was on Python 2.4. I'm sure it's faster on 2.6.
> >
> > > An alternate set of python bindings to the C++ interface would be a
> > > lovely thing.
> >
> > > Cheers,
> > > Daniel Lundin
> >
> > > On May 15, 4:21 pm, Nicholas Reid <njr...@gmail.com> wrote:
> > > > Have there been any recent developments on the Python C
> > > > implementation? I'd be more than happy to run a prototype through its
> > > > paces for testing purposes and provide feedback - it would be great to
> > > > have closer-to-C++ speeds in a Python environment.
> >
>
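
Regarding the setup.py idea in point 2 of the quoted message: nothing like
that exists yet, but as a rough illustration, a generated distutils
configuration for a compiled module would presumably look something like
the following (all file and module names here are made up):

# Illustrative only: a distutils setup.py for a hypothetical
# protoc-generated extension module.  File and module names are invented.
from distutils.core import setup, Extension

fast_module = Extension(
    "example_pb_fast",                      # hypothetical module name
    sources=["example_pb_fast_module.cpp",  # hypothetical generated wrapper
             "example.pb.cc"],              # ordinary protoc --cpp_out output
    libraries=["protobuf"],                 # link against installed libprotobuf
)

setup(
    name="example-pb-fast",
    version="0.1",
    description="Compiled Python module for example.proto (illustrative)",
    ext_modules=[fast_module],
)

With something like that emitted alongside the C++ code, "python setup.py
install" would build and install the module as described above.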
