On Apr 18, 2013 9:12 AM, "Alex Leach" <beamesle...@gmail.com> wrote:
>
> Dear list,
>
> Apologies if this has been asked before, but I'm struggling to find
anything directly related.
>
> Background
> ----------
>
> This library I'm trying to wrap uses its own memory management model,
where almost every class derives from an object with loads of memory
management-related member functions; it also has a couple of friend classes
related to counting and locking. I don't intend to expose any of these
memory-related functions or friend classes to Python, but I was thinking
that performance could suffer badly if Python and the C++ library each run
their own memory management.
>
> Optimal memory usage
> --------------------
>
> I would also expect class instances to carry some unnecessary memory
bloat, as I think each exposed class instantiation would allocate memory
for a normal PyObject as well as for the underlying C++ object and all of
its unexposed machinery.
>
> Right thing to do
> -----------------
>
> I initially hoped to use a 'return_internal_reference' CallPolicy on the
class_<..> init calls, but I doubt that is The Right Thing To Do.
>
> Would it be a better design to define a PyTypeObject for this C++ base
class and its friends? If I did, could I still use functions in
boost::python? I don't think PyTypeObjects are supposed to be derived from,
so I don't have a clue what extra work I'd have to do to make it play well
with Boost.Python.
>
>
>
> How should one proceed with this? Links to archived emails or
documentation would be great. If I can conjure up something good enough
for Boost, I'd be happy to contribute, if possible.
>

If you write your own PyTypeObject, you will indeed have a lot more
control, but it will greatly limit how much of Boost.Python you can use
(no class_, for instance), and you'll need to dive deep into the
Boost.Python implementation to learn how and when you can still use it.
I'd only recommend this approach if you wanted to wrap one or two simple
classes this way and then use regular Boost.Python for everything else.

I think the best solution would probably be to use shared_ptr with a
custom deleter, as that gives you control over how your objects are
allocated and freed while handing Boost.Python an object it already knows
how to handle extremely well.  One key ingredient is that instead of
wrapping the C++ constructors, you wrap factory functions that return
shared_ptrs.  You can even expose such factories as Python constructors
using make_constructor.
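
To make that concrete, here's a rough sketch of the idiom.  Widget,
pool_allocate, pool_release, PoolDeleter and make_widget are all
hypothetical stand-ins for whatever your library actually provides; the
class_/make_constructor plumbing is the part that matters:

#include <boost/python.hpp>
#include <boost/python/make_constructor.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/noncopyable.hpp>

namespace bp = boost::python;

// Hypothetical library class.
struct Widget {
    explicit Widget(int size) : size(size) {}
    int size;
};

// Stand-ins for the library's own allocation hooks.
Widget* pool_allocate(int size) { return new Widget(size); }
void    pool_release(Widget* w) { delete w; }

// Custom deleter: routes destruction back through the library's allocator.
struct PoolDeleter {
    void operator()(Widget* w) const { pool_release(w); }
};

// Factory function wrapped in place of the C++ constructor.
boost::shared_ptr<Widget> make_widget(int size) {
    return boost::shared_ptr<Widget>(pool_allocate(size), PoolDeleter());
}

BOOST_PYTHON_MODULE(example)
{
    bp::class_<Widget, boost::shared_ptr<Widget>, boost::noncopyable>(
            "Widget", bp::no_init)
        // make_constructor exposes the factory as Widget.__init__ in Python.
        .def("__init__", bp::make_constructor(&make_widget))
        .def_readonly("size", &Widget::size);
}

Because the held type is shared_ptr<Widget>, the deleter runs when the
last reference goes away, whether that reference lives on the Python or
the C++ side, and shared_ptr<Widget> values returned from other wrapped
functions convert to Python automatically.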

All that said, my first recommendation would be to wrap it (or at least a
subset of it) without worrying about optimal memory performance, and only
fix that if it actually turns out to be a performance problem.  You might
be surprised at where the time ends up going.

Jim