>> I have some code that takes member function pointers of the form
>>   R (Obj::*)( A1, A2, A3, ..., An )
>> and converts them to a function object Q with the following signature:
>>   Rnew Q( Obj&, B1, B2, ..., Bn )
>> where
>>   Rnew = result_conversion_metafunction<R>::type
>>   Bi = arg_conversion_metafunction<Ai>::type
>> and Q is exposed via boost.python as a member of Obj.
>>
>> For example, Ai could be a fixed-point number, with Bi being a double so
>> that the python side does not know anything about fixed-point numbers.
>
> I gather that custom converters aren't preferred because:
>
> 1.  It is a hassle to talk directly to the underlying raw PyObject
> pointers and manage storage in the converter methods
>
> 2.  The converted type still leaks out to python in docstrings
>
> Are there others?  Or am I completely off?

Neither of the above is a big issue. In fact, python-side knowledge of the
converted type would even be mildly useful. The main problem is the number
of types:
  my_fixed_point_type< int bit_width,           // 1 to 63
                       int binary_pt_location,  // -63 to 63
                       bool is_signed,
                       typename property_tag    // policies for rounding, etc.
                     >
  std::complex< my_fixed_point_type<...> >
  boost::numeric::ublas::array< ... >
  boost::numeric::ublas::matrix< ... >
There are simply far too many types to create converters for and to
register explicitly. Therefore, I use inline functions/functors to do all
the conversions on the C++ side, and python knows only about real and
complex doubles.
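
For illustration, the conversion helpers are just overloaded inline
functions roughly along these lines (hypothetical names, not my actual
code; it assumes the fixed-point type has a conversion operator and a
converting constructor):

  // sketch: fixed-point <-> double, done entirely on the C++ side
  template <int Width, int BinaryPt, bool IsSigned, typename PropertyTag>
  inline double convert_to_double(
      my_fixed_point_type<Width, BinaryPt, IsSigned, PropertyTag> const& x )
  {
    return static_cast<double>( x );  // assumes operator double()
  }

  template <typename Fixed>
  inline Fixed convert_from_double( double d )
  {
    return Fixed( d );                // assumes Fixed( double )
  }

The complex and ublas cases are handled by similar overloads on top of
these, so no converter ever has to be registered with boost.python.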

>> An instance
>> of Q would convert the floating point numbers passed from the python
>> side into fixed-point numbers, call the member function, and convert
>> the returned value to a double for use on the python side.
>>
>> For these "converters", metafunctions from boost.function_types are used
>> to
>> obtain the mpl::vector specifying result type and argument types.
>> Extending
>> the technique above to function objects which take Obj* as their first
>> argument, I have a protocol which relies on the presence of a typedef
>> called
>> 'components' in the function object so that I can use the converter when
>> exposing via boost.python:
>>
>> struct my_functor {
>>   typedef mpl::vector<R, Obj*, A1, A2, A3, A4> components;
>>   R operator()( Obj*, A1, A2, A3, A4 ) { ... }
>> };
>
> To be sure I'm following you, correct me:
>
> * for each member function signature that you wrap, you have a matching
> my_functor that calls it
> * my_functor does nothing but call the member function it wraps
> * my_converter<my_functor>::type does the conversion from Rnew (B1, B2,
> ... Bn) to R (A1, A2, ...An) and calls my_functor.
>
> Have I got it?

Yes.

> It is all very interesting, and to a certain degree duplicates some
> functionality in detail/caller.hpp.  How do you generate these
> my_functors...   preprocessor?  fusion?

Mostly boost.preprocessor (with some template metaprogramming using mpl).
Fusion doesn't work very well for me since the effort needed to get my
functors into a form usable from fusion is greater than just coding my own
invocation functions via boost.preprocessor; pseudo-code:

template <typename Func> struct my_converter
{
  Func f_;

  explicit my_converter( Func f ) : f_( f ) {}

  // the functor provides its raw (unconverted) types via the protocol's
  // nested typedef
  typedef typename Func::components components;
  typedef typename arg_converter<components>::type arg_types;
  typedef typename result_converter<components>::type result_type;

  // Shown for a non-void return with one argument; can be generalized
  // using enable_if for void returns and boost.preprocessor for
  // different numbers of arguments.

  result_type operator()( typename at_c<arg_types, 1>::type a1 )
  {
    return convert_result<result_type>( f_(
        convert_arg<typename at_c<components, 1>::type>( a1 ) ) );
  }
};

template <typename Func>
my_converter<Func> make_converted_func( Func f )
{
  return my_converter<Func>( f );
}
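
For completeness, the exposure step then looks roughly like this (the
signature below is hypothetical and picks double for Rnew and B1;
boost.python needs the converted signature spelled out explicitly for an
arbitrary function object):

  #include <boost/python.hpp>
  #include <boost/mpl/vector.hpp>

  namespace bp = boost::python;

  // hypothetical: a member function taking one fixed-point argument,
  // exposed to python as taking and returning double
  bp::class_<Obj>( "Obj" )
    .def( "f",
          bp::make_function(
              make_converted_func( my_functor() ),
              bp::default_call_policies(),
              boost::mpl::vector<double, Obj*, double>() ) );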


> I'm wondering if there isn't motivation here to cleanly integrate a
> general facility for additional c++-side type conversions.  The
> following came to mind, which imitates a function type style that
> boost::proto uses extensively:
>
>    struct hidden_type;  // python shouldn't see this at all
>
>    struct int_to_hidden  // converts python-side 'int' to hidden
>    {
>      typedef hidden_type result_type;
>
>      hidden_type operator()(int) const;
>    };
>
>    // fnobj takes a hidden, doesn't know it is wrapped in python
>    struct fnobj {
>      typedef void result_type;
>      void operator()(float, hidden_type);
>    };
>
>    def("myfunctor", as<void(float, int_to_hidden(int))>(fnobj()));
>
> where int_to_hidden(int) is a function type (not a function call, but it
> later becomes a function call), indicating that what python passes
> should be converted to int, then the int converted to hidden_type via an
> instance of int_to_hidden, then the hidden_type passed to the underlying
> instance of fnobj.
>
> I realize this doesn't involve using the mpl::vectors you've already
> calculated, just throwing it out there.

Such a general facility would probably be better than my code and would be
very useful to me, but are there enough people out there with this use
case? I am not attached to mpl::vector if there is a more general
solution.

>> // my_converter uses the components typedef
>> typedef typename my_converter<my_functor>::type Q;
>
> so Q could have a nested typedef (note I say Rnew, not R):
>
>   typedef mpl::vector<Rnew, Obj*, B1, B2, B3, B4> components;
>
> or use that in combination with function_types to synthesize
>
>   typedef Rnew(Obj*, B1, B2, B3, B4) signature;

Yes.
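
Roughly, with the converted types picked arbitrarily here, the synthesis
would be:

  #include <boost/function_types/function_type.hpp>

  // components of Q, i.e. the *converted* types: Rnew, Obj*, B1, B2
  typedef mpl::vector<double, Obj*, double, double> converted_components;

  // yields double (Obj*, double, double), i.e. Rnew (Obj*, B1, B2)
  typedef ft::function_type<converted_components>::type signature;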

Regards,
Ravi


