On Friday, 24 October 2008, Robert Dailey wrote:
> What happens if I do the following?
>
> using namespace boost::python;
>
> import( "__main__" ).attr( "new_global" ) = 40.0f;
> import( "__main__" ).attr( "another_global" ) = 100.0f;
>
>
> My main concern here is performance. I'm wondering if each call to
> import() results in a disk query for the script in question and loads it
> from there.

As Stefan said, bp::import is only a wrapper around the standard Python 
import, which (as you can easily check) does not load a module twice.
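That caching behaviour is easy to verify in plain Python: the interpreter keeps every imported module in sys.modules, and a repeated import just returns the cached object (sketch using the stdlib json module as an arbitrary example):

```python
import sys

# The first import executes the module's top-level code and caches it.
import json
first = sys.modules["json"]

# A second import statement only fetches the cached object from
# sys.modules -- nothing is re-read from disk.
import json
assert sys.modules["json"] is first  # same module object, no reload
```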

However, IIRC from some performance-tuning page, an import statement inside a 
frequently called function still takes measurable time, since Python must 
look the module up in sys.modules on every call.  Again, this is easy to 
check (disclaimer: I did not switch my cpufreq governor from conservative to 
performance, so the absolute numbers should be taken with a grain of salt):

In [2]: %timeit import numpy
100000 loops, best of 3: 4.49 µs per loop

In [3]: %timeit None # for comparison
1000000 loops, best of 3: 216 ns per loop

In your code above, though, it should be trivial to import once, assign the 
module to a variable (of type bp::object), and set both attributes through it.
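In Python terms, the suggested pattern looks like the sketch below: import (or fetch from the cache) once, bind the module object to a name, and set the attributes through that binding. The attribute names mirror the hypothetical ones from the C++ snippet quoted above:

```python
import importlib

# Import once (a no-op lookup if __main__ is already cached) and keep
# a reference, instead of re-running the import machinery per assignment.
main = importlib.import_module("__main__")

# Equivalent of import("__main__").attr("new_global") = 40.0f; etc.,
# but going through the single cached module reference.
main.new_global = 40.0
main.another_global = 100.0

# The attributes are now ordinary globals of the __main__ module.
assert main.new_global == 40.0
assert main.another_global == 100.0
```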

-- 
Ciao, /  /                                                    .o.
     /--/                                                     ..o
    /  / ANS                                                  ooo

_______________________________________________
Cplusplus-sig mailing list
Cplusplus-sig@python.org
http://mail.python.org/mailman/listinfo/cplusplus-sig
