Re: [Numpy-discussion] Do ufuncs returned by frompyfunc() have the out arg?

2010-04-06 Thread Ken Basye

From: Anne Archibald 

On 6 April 2010 15:42, Ken Basye  wrote:


> Folks,
>  I hope this is a simple question.  When I created a ufunc with
> np.frompyfunc(), I got an error when I called the result with an 'out'
> argument:
  


In fact, ordinary ufuncs do not accept names for their arguments. This
is annoying, but fixing it involves rooting around in the bowels of
the ufunc machinery, which are not hacker-friendly.

Anne



>  >>> def foo(x): return x * x + 1
>  >>> ufoo = np.frompyfunc(foo, 1, 1)
>  >>> arr = np.arange(9).reshape(3,3)
>  >>> ufoo(arr, out=arr)
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> TypeError: 'out' is an invalid keyword to foo (vectorized)
>
> But I notice that if I just put the array there as a second argument, it
> seems to work:
>  >>> ufoo(arr, arr)
> array([[2, 5, 26],
>        [101, 290, 677],
>        [1370, 2501, 4226]], dtype=object)
>
> # and now arr is the same as the return value
>
>
> Is it reasonable to conclude that there is an out-arg in the resulting
> ufunc and I just don't know the right name for it?  I also tried putting
> some other right-shaped array as a second argument and it did indeed get
> filled in.
>
>   Thanks as always,
>   Ken


Thanks - I hadn't noticed that it's apparently only the array methods 
that can take keyword arguments.  So I assume that if I call a '1-arg' 
ufunc (whether from frompyfunc or an already existing one) with a second 
argument, the second argument will be used as the output location.
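The positional form described above can be sketched as follows. This is a minimal sketch of the behavior reported in this thread (the output array passed positionally after the inputs), not a claim about keyword support in any particular NumPy version:

```python
import numpy as np

def foo(x):
    return x * x + 1

ufoo = np.frompyfunc(foo, 1, 1)
src = np.arange(9).reshape(3, 3)
dst = np.empty((3, 3), dtype=object)

# The second positional argument acts as the output location.
res = ufoo(src, dst)
assert res is dst        # the thread notes the output is also the return value
assert dst[2, 2] == 65   # foo(8) = 8*8 + 1
```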

  Ken

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numpy build system questions for use in another project (fwrap)

2010-04-06 Thread Kurt Smith
Briefly, I'm encountering difficulties getting things working in numpy
distutils for fwrap's build system.

Here are the steps I want the build system to accomplish:

1) Compile a directory of Fortran 90 source code -- this works.
- The .mod files generated by this compilation step are put in the
build directory.

2) Fwrap needs to call numpy.distutils.config.try_compile with some
generated fortran source code that depends on the .mod files generated
in 1).

For example:

Suppose a fortran 90 module 'type_params.mod' is generated in step 1).

Fwrap would generate this fortran source code and try to compile it:

subroutine s0(a)
use, intrinsic :: iso_c_binding
integer(c_int) :: a
interface
subroutine s1(a)
use type_params, only : int_param
integer(int_param) :: a
end subroutine s1
end interface
call s1(a)
end subroutine s0

If this compiles correctly with no errors about type
incompatibilities, then we know that the int_param in type_params
module corresponds to the C int type.  If it doesn't, then we try the
other C integer types ('c_long', 'c_long_long', 'c_short', 'c_char')
until no error is generated (otherwise fail).  This then ensures that
the type mappings between Fortran and C are correct, and we can then
go on to generate appropriate config files with this information.
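The probing loop described above can be sketched in Python. Here `try_fortran_compile` is a hypothetical stand-in for whatever callable ends up wrapping numpy.distutils' config.try_compile; everything else follows the text:

```python
# Fortran source template from the description above; only {ctype} varies.
TEMPLATE = """\
subroutine s0(a)
use, intrinsic :: iso_c_binding
integer({ctype}) :: a
interface
subroutine s1(a)
use type_params, only : int_param
integer(int_param) :: a
end subroutine s1
end interface
call s1(a)
end subroutine s0
"""

def find_c_type(try_fortran_compile):
    # Try each C interoperability kind until one compiles cleanly.
    for ctype in ('c_int', 'c_long', 'c_long_long', 'c_short', 'c_char'):
        if try_fortran_compile(TEMPLATE.format(ctype=ctype)):
            return ctype
    raise RuntimeError("no C integer type matches int_param")

# Illustration with a fake compiler that only accepts c_long:
fake_compiler = lambda src: 'integer(c_long)' in src
print(find_c_type(fake_compiler))   # -> c_long
```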

So, this generated fortran source code depends on step 1), since the
type_params.mod file must exist first.  Then we call
config.try_compile and see if it works.

My problem is in instantiating numpy.distutils.config such that it is
appropriately configured with command line flags.

I've tried the following with no success:

('self' is a build_ext instance)
cfg = self.distribution.get_command_obj('config')
cfg.initialize_options()
cfg.finalize_options()  # doesn't do what I hoped it would do.

This creates a config object, but it doesn't use the command line
flags (e.g. --fcompiler=gfortran doesn't affect the fortran compiler
used).

Any pointers?  More generally -- seeing the above, any ideas on how to
go about doing what I'm trying to do better?

Thanks,

Kurt


Re: [Numpy-discussion] Do ufuncs returned by frompyfunc() have the out arg?

2010-04-06 Thread Anne Archibald
On 6 April 2010 15:42, Ken Basye  wrote:
> Folks,
>  I hope this is a simple question.  When I created a ufunc with
> np.frompyfunc(), I got an error when I called the result with an 'out'
> argument:

In fact, ordinary ufuncs do not accept names for their arguments. This
is annoying, but fixing it involves rooting around in the bowels of
the ufunc machinery, which are not hacker-friendly.

Anne

>  >>> def foo(x): return x * x + 1
>  >>> ufoo = np.frompyfunc(foo, 1, 1)
>  >>> arr = np.arange(9).reshape(3,3)
>  >>> ufoo(arr, out=arr)
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> TypeError: 'out' is an invalid keyword to foo (vectorized)
>
> But I notice that if I just put the array there as a second argument, it
> seems to work:
>  >>> ufoo(arr, arr)
> array([[2, 5, 26],
>        [101, 290, 677],
>        [1370, 2501, 4226]], dtype=object)
>
> # and now arr is the same as the return value
>
>
> Is it reasonable to conclude that there is an out-arg in the resulting
> ufunc and I just don't know the right name for it?  I also tried putting
> some other right-shaped array as a second argument and it did indeed get
> filled in.
>
>   Thanks as always,
>   Ken
>


Re: [Numpy-discussion] Possible improvement to numpy.mean()

2010-04-06 Thread Bruce Southey
On 04/06/2010 04:07 PM, Michael Gilbert wrote:
> Hi,
>
> I am applying Monte Carlo for a problem involving mixed deterministic
> and random values.  In order to avoid a lot of special handling and
> corner cases, I am using numpy arrays full of a single value to
> represent the deterministic quantities.
>
> Anyway, I found that the standard deviation turns out to be non-zero
> when these deterministic sets take on large values, which is wrong.
> This is due to machine precision loss.
>
> It turns out to be fairly straightforward to check for this situation
> upfront. See attached code. I've also shown a more accurate algorithm
> for computing the mean, but it adds an additional multiplication for
> every term in the sum, which is obviously undesirable from a
> performance perspective. Would it make sense to automatically detect
> the precision loss and use the more accurate approach when that is the
> case?
>
> If that seems ok, I can take a look at the numpy code, and submit a
> patch.
>
> Best wishes,
> Mike
>
>
Hi,
Actually, you fail to check for the reverse situation, when your 'norm' 
value is greater than the expected eps value. But these differences are 
within the expected numerical precision, even for float128. In the last 
two cases you are unfairly comparing float64 arrays with float128 
arrays; it would be fairer to use float128 as the dtype argument in the 
mean function.

Also, any patch has to work with the axis argument, so any 'pre-division 
by the number of observations' has to be done across the user-selected 
axis. For some situations I expect that this 'pre-division' will create 
its own numerical imprecision - try using the inverse of your inputs.

Bruce






[Numpy-discussion] Possible improvement to numpy.mean()

2010-04-06 Thread Michael Gilbert
Hi,

I am applying Monte Carlo for a problem involving mixed deterministic
and random values.  In order to avoid a lot of special handling and
corner cases, I am using numpy arrays full of a single value to
represent the deterministic quantities.

Anyway, I found that the standard deviation turns out to be non-zero
when these deterministic sets take on large values, which is wrong.
This is due to machine precision loss.

It turns out to be fairly straightforward to check for this situation
upfront. See attached code. I've also shown a more accurate algorithm
for computing the mean, but it adds an additional multiplication for
every term in the sum, which is obviously undesirable from a
performance perspective. Would it make sense to automatically detect
the precision loss and use the more accurate approach when that is the
case?
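A sketch of the "pre-division" mean described above (an illustration of the idea, not the attached code): dividing each term by n before summing keeps the running sum near the magnitude of the mean instead of letting it grow to n times it, at the cost of one extra multiply per element.

```python
import numpy as np

def predivided_mean(x, axis=None):
    # Divide first, then sum: the accumulator stays near the mean's magnitude.
    n = x.size if axis is None else x.shape[axis]
    return np.sum(x / n, axis=axis)

x = np.full(4, 1e8)
print(predivided_mean(x))   # -> 100000000.0
```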

If that seems ok, I can take a look at the numpy code, and submit a
patch.

Best wishes,
Mike


mean-problem
Description: Binary data


Re: [Numpy-discussion] Getting data from NDarrays to Blitz++ and back again

2010-04-06 Thread Fernando Perez
On Sat, Apr 3, 2010 at 2:28 PM, Philip Sterne  wrote:
> I hope this is the correct place to post my question.  I'd like python to
> interface with c++ code that makes heavy use of Blitz++ arrays. After a
> day's hacking I appear to have a simple working solution which I am
> attaching.  (It uses Boost.Python and CMake.)

You may want to have a look at the blitz support in scipy.weave:

http://svn.scipy.org/svn/scipy/trunk/scipy/weave/blitz_spec.py
http://svn.scipy.org/svn/scipy/trunk/scipy/weave/blitz_tools.py

Cheers,

f


[Numpy-discussion] Do ufuncs returned by frompyfunc() have the out arg?

2010-04-06 Thread Ken Basye
Folks,
  I hope this is a simple question.  When I created a ufunc with 
np.frompyfunc(), I got an error when I called the result with an 'out' 
argument:

 >>> def foo(x): return x * x + 1
 >>> ufoo = np.frompyfunc(foo, 1, 1)
 >>> arr = np.arange(9).reshape(3,3)
 >>> ufoo(arr, out=arr)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'out' is an invalid keyword to foo (vectorized)

But I notice that if I just put the array there as a second argument, it 
seems to work:
 >>> ufoo(arr, arr)
array([[2, 5, 26],
       [101, 290, 677],
       [1370, 2501, 4226]], dtype=object)

# and now arr is the same as the return value


Is it reasonable to conclude that there is an out-arg in the resulting 
ufunc and I just don't know the right name for it?  I also tried putting 
some other right-shaped array as a second argument and it did indeed get 
filled in.

   Thanks as always,
   Ken



Re: [Numpy-discussion] Math Library

2010-04-06 Thread Travis Oliphant


On Apr 6, 2010, at 9:08 AM, David Cournapeau wrote:


Hi Travis,

On Tue, Apr 6, 2010 at 7:43 AM, Travis Oliphant wrote:



I should have some time over the next couple of weeks, and I am very
interested in refactoring the NumPy code to separate out the Python
interface layer from the "library" layer as much as possible.   I had
some discussions with people at PyCon about making it easier for
Jython, IronPython, and perhaps even other high-level languages to
utilize NumPy.

Is there a willingness to consider as part of this reorganization
creating a clear boundary between the NumPy library code and the
Python-specific interface to it?   What other re-organization thoughts
are you having, David?


This is mainly it, reorganizing the code for clearer boundaries
between boilerplate (python C API) and actual computational code.
Besides helping other python implementations, I think this would
benefit NumPy itself in the long run (code maintainability), as well
as scipy (and other C extensions). I think the npymath effort is a
good example: albeit simple in nature (the API and boundaries are
obvious), it has already helped a lot to solve numerous platform
specific issues in numpy and scipy, and I think the overall code
quality is better.

My own goals were:
- exposing core computational parts through an exported C API, so
that other C extensions may use it (for example, exposing basic
blas/lapack operations)
- dynamic loading of the code (for example depending on the CPU
capabilities - I have a git branch somewhere where I started exposing
a simple C API to query cpu capabilities like cache size or SSE
dynamically to that intent)
- more amenable codebase: I think multiarray in particular is too
big. I don't know the code well enough to know what can be split and
how, but I would have hoped that the scalartypes, the type descriptor
could be put out of multiarray proper. Also, exposing an API for
things like fancy indexing would be very useful, but I don't know if
it even makes sense - I think a pure python implementation of fancy
indexing as a reference would be very useful for array-like classes (I
am thinking about scipy.sparse, for example).

Unfortunately, I won't be able to help much in the near future (except
maybe for the fancy indexing as this could be useful for my job),



I understand.  It just happens that there is some significant time
for me to look at this over the next few weeks, and I would really like
to make progress on re-factoring.  I think it's O.K. if you don't
have time right now to help, as long as you have time to offer
criticism and suggestions.


We could even do that over Skype with whomever else wanted to join us
(we could do a GoToMeeting discussion as well) if you think it would
be faster to just talk in a group setting instead of email.  Of
course, a summary of any off-line discussion should be sent to the list.


Thanks for the input,

-Travis






Re: [Numpy-discussion] Math Library

2010-04-06 Thread David Cournapeau
Hi Travis,

On Tue, Apr 6, 2010 at 7:43 AM, Travis Oliphant  wrote:
>
>
> I should have some time over the next couple of weeks, and I am very
> interested in refactoring the NumPy code to separate out the Python
> interface layer from the "library" layer as much as possible.   I had
> some discussions with people at PyCon about making it easier for
> Jython, IronPython, and perhaps even other high-level languages to
> utilize NumPy.
>
> Is there a willingness to consider as part of this reorganization
> creating a clear boundary between the NumPy library code and the
> Python-specific interface to it?   What other re-organization thoughts
> are you having David?

This is mainly it, reorganizing the code for clearer boundaries
between boilerplate (python C API) and actual computational code.
Besides helping other python implementations, I think this would
benefit NumPy itself in the long run (code maintainability), as well
as scipy (and other C extensions). I think the npymath effort is a
good example: albeit simple in nature (the API and boundaries are
obvious), it has already helped a lot to solve numerous platform
specific issues in numpy and scipy, and I think the overall code
quality is better.

My own goals were:
 - exposing core computational parts through an exported C API, so
that other C extensions may use it (for example, exposing basic
blas/lapack operations)
 - dynamic loading of the code (for example depending on the CPU
capabilities - I have a git branch somewhere where I started exposing
a simple C API to query cpu capabilities like cache size or SSE
dynamically to that intent)
 - more amenable codebase: I think multiarray in particular is too
big. I don't know the code well enough to know what can be split and
how, but I would have hoped that the scalartypes, the type descriptor
could be put out of multiarray proper. Also, exposing an API for
things like fancy indexing would be very useful, but I don't know if
it even makes sense - I think a pure python implementation of fancy
indexing as a reference would be very useful for array-like classes (I
am thinking about scipy.sparse, for example).
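As an illustration of the "pure python reference implementation" idea, here is a hedged sketch covering only the 1-D integer-array case of fancy indexing; real NumPy fancy indexing also covers booleans, broadcasting, and multidimensional index tuples:

```python
# Reference semantics for 1-D integer fancy indexing: bounds-check each
# index, allow negative indices, and gather element by element.
def fancy_index_1d(seq, indices):
    n = len(seq)
    out = []
    for i in indices:
        if not -n <= i < n:
            raise IndexError("index %d is out of bounds for size %d" % (i, n))
        out.append(seq[i])
    return out

print(fancy_index_1d([10, 20, 30], [2, -1, 0]))   # -> [30, 30, 10]
```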

Unfortunately, I won't be able to help much in the near future (except
maybe for the fancy indexing as this could be useful for my job),

David


Re: [Numpy-discussion] Extracting values from one array corresponding to argmax elements in another array

2010-04-06 Thread josef . pktd
On Tue, Apr 6, 2010 at 9:22 AM, Ken Basye  wrote:
> From: Vincent Schut 
>
> On 04/05/2010 06:06 PM, Keith Goodman wrote:
>
>
> On Mon, Apr 5, 2010 at 8:44 AM, Ken Basye  wrote:
>
>
> Hi Folks,
>   I have two arrays, A and B, with the same shape.  I want to find the
> highest values in A along some axis, then extract the corresponding
> values from B.  I can get the highest values in A with A.max(axis=0) and
> the indices of these highest values with A.argmax(axis=0).  I'm trying
> to figure out a loop-free way to extract the corresponding elements from
> B using these indices.  Here's code with a loop that will do what I want
> for two-dimensional arrays:
>
>   >>>  a
> array([[ 100.,    0.,    0.],
>        [   0.,  100.,  100.],
>        [   0.,    0.,    0.]])
>
>   >>>  a.max(axis=0)
> array([ 100.,  100.,  100.])
>
>   >>>  sel = a.argmax(axis=0)
>   >>>sel
> array([0, 1, 1])
>
>   >>>  b = np.arange(9).reshape((3,3))
>   >>>  b
> array([[0, 1, 2],
>        [3, 4, 5],
>        [6, 7, 8]])
>
>   >>>  b_best = np.empty(3)
>   >>>  for i in xrange(3):
> ...b_best[i] = b[sel[i], i]
> ...
>   >>>  b_best
> array([ 0.,  4.,  5.])
>
>
> Here's one way:
>
>
>
> b[a.argmax(axis=0), range(3)]
>
>
> array([0, 4, 5])
>
>
> Which does not work anymore when your arrays become more-dimensional
> (like in my case: 4 or more) and the axis you want to select on is not
> the first/last one. If I recall correctly, I needed to construct the
> full index arrays for the other dimensions too (like with ogrid I
> think). So: create the ogrid, replace the one for the dimensions you
> want the argmax selection to take place on with the argmax parameter,
> and use those index arrays to index your b array.
> I'd need to look up my source code to be more sure/precise. If anyone
> would like me to, please let me know. If anyone knows a less elaborate
> way, also please let us know! :-)
>
>
> Hi Vincent,
>   I'd like to see more about your solution.  For my present purposes,
> Keith's solution was sufficient, but I'm still very interested in a solution
> that's independent of dimension and axis.
>   Thanks (and thanks, Keith),
>  Ken

An alternative to Vincent's general solution, if you have a unique max
(or want all argmax positions), is to use a mask:

>>> a
array([[ 100.,    0.,    0.],
       [   0.,  100.,  100.],
       [   0.,    0.,    0.]])
>>> b
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])

>>> ax=0; b[a==np.expand_dims(a.max(ax),ax)]
array([0, 4, 5])
>>> ax=1; b[a==np.expand_dims(a.max(ax),ax)]
array([0, 4, 5, 6, 7, 8])

>>> aa=np.eye(3)
>>> ax=1; b[aa==np.expand_dims(aa.max(ax),ax)]
array([0, 4, 8])
>>> ax=0; b[aa==np.expand_dims(aa.max(ax),ax)]
array([0, 4, 8])
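For completeness, the dimension- and axis-independent approach Vincent describes can be sketched with np.indices (a sketch of the idea, not his actual code): build index arrays for every dimension of the argmax result, then insert the argmax indices at the chosen axis.

```python
import numpy as np

def argmax_select(b, a, axis):
    # Index arrays for all other dimensions, with argmax along `axis`.
    sel = a.argmax(axis=axis)
    idx = list(np.indices(sel.shape))
    idx.insert(axis, sel)
    return b[tuple(idx)]

a = np.array([[100., 0., 0.],
              [0., 100., 100.],
              [0., 0., 0.]])
b = np.arange(9).reshape(3, 3)
print(argmax_select(b, a, axis=0))   # -> [0 4 5]
```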

Josef
>
>


Re: [Numpy-discussion] Extracting values from one array corresponding to argmax elements in another array

2010-04-06 Thread Ken Basye

From: Vincent Schut 

On 04/05/2010 06:06 PM, Keith Goodman wrote:
  

On Mon, Apr 5, 2010 at 8:44 AM, Ken Basye  wrote:


Hi Folks,
  I have two arrays, A and B, with the same shape.  I want to find the
highest values in A along some axis, then extract the corresponding
values from B.  I can get the highest values in A with A.max(axis=0) and
the indices of these highest values with A.argmax(axis=0).  I'm trying
to figure out a loop-free way to extract the corresponding elements from
B using these indices.  Here's code with a loop that will do what I want
for two-dimensional arrays:

  >>>  a
array([[ 100.,    0.,    0.],
       [   0.,  100.,  100.],
       [   0.,    0.,    0.]])

  >>>  a.max(axis=0)
array([ 100.,  100.,  100.])

  >>>  sel = a.argmax(axis=0)
  >>>sel
array([0, 1, 1])

  >>>  b = np.arange(9).reshape((3,3))
  >>>  b
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])

  >>>  b_best = np.empty(3)
  >>>  for i in xrange(3):
...b_best[i] = b[sel[i], i]
...
  >>>  b_best
array([ 0.,  4.,  5.])
  

Here's one way:



b[a.argmax(axis=0), range(3)]


array([0, 4, 5])



Which does not work anymore when your arrays become more-dimensional 
(like in my case: 4 or more) and the axis you want to select on is not 
the first/last one. If I recall correctly, I needed to construct the 
full index arrays for the other dimensions too (like with ogrid I 
think). So: create the ogrid, replace the one for the dimensions you 
want the argmax selection to take place on with the argmax parameter, 
and use those index arrays to index your b array.
I'd need to look up my source code to be more sure/precise. If anyone 
would like me to, please let me know. If anyone knows a less elaborate 
way, also please let us know! :-)
  

Hi Vincent,
 I'd like to see more about your solution.  For my present purposes, 
Keith's solution was sufficient, but I'm still very interested in a 
solution that's independent of dimension and axis. 
 Thanks (and thanks, Keith),

Ken



Re: [Numpy-discussion] Numpy Segmentation Fault

2010-04-06 Thread Alan G Isaac
On 4/6/2010 6:46 AM, Yogesh Tomar wrote:
> import numpy
> import numpy.linalg
> x=numpy.eye(1000)
> for i in range(10):
>  eigenvalues,eigenvectors=numpy.linalg.eig(x)
>  eigenvalues,eigenvectors=numpy.linalg.eig(x)
>  print str(i),'---'


I'm not seeing any problem with 1.4.1rc.

Alan Isaac
(Python 2.6.5 on Vista)



Re: [Numpy-discussion] Numpy Segmentation Fault

2010-04-06 Thread Yogesh Tomar
I also think the same: there is some problem with my python installation,
because a similar installation (python-2.6.4 and numpy-1.3.0) that I did
elsewhere does not seg fault on the same code.

But I need your help to figure it out.
Can you please elaborate on the ref count bug? That might help.

Regards,
Yogesh Tomar


On Tue, Apr 6, 2010 at 5:17 PM, David Cournapeau  wrote:

> On Tue, Apr 6, 2010 at 8:28 PM, Yogesh Tomar  wrote:
> > numpy 1.3.0 also segfaults the same way.
>
> I mean building numpy 1.3.0 against python 2.6.1 instead of 2.6.4 -
> since the crash happens on a Python you built yourself, that's the
> first thing I would look into before looking into a numpy or python bug.
>
> > Is it the problem with libc library?
>
> Very unlikely, this looks like a ref count bug,
>
> cheers,
>
> David


Re: [Numpy-discussion] Numpy Segmentation Fault

2010-04-06 Thread David Cournapeau
On Tue, Apr 6, 2010 at 8:28 PM, Yogesh Tomar  wrote:
> numpy 1.3.0 also segfaults the same way.

I mean building numpy 1.3.0 against python 2.6.1 instead of 2.6.4 -
since the crash happens on a Python you built yourself, that's the
first thing I would look into before looking into a numpy or python bug.

> Is it the problem with libc library?

Very unlikely, this looks like a ref count bug,

cheers,

David


Re: [Numpy-discussion] Numpy Segmentation Fault

2010-04-06 Thread Yogesh Tomar
numpy 1.3.0 also segfaults the same way.
Is it the problem with libc library?

Gdb stacktrace.



(gdb) run /home/eqan/64bit/current/segf.py
Starting program: /home/eqan/tapas/64bit/Python/2.6.4_x641/bin/python
/home/eqan/64bit/current/segf.py
[Thread debugging using libthread_db enabled]
[New Thread 47653213733440 (LWP 24970)]
0 ---
1 ---
2 ---
3 ---
4 ---
5 ---
6 ---
7 ---
8 ---
9 ---
here

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 47653213733440 (LWP 24970)]
0x0034d8c71033 in _int_free () from /lib64/libc.so.6
(gdb) up
#1  0x0034d8c74c5c in free () from /lib64/libc.so.6
(gdb) up
#2  0x2b57203451db in code_dealloc (co=0xdab63f0) at
Objects/codeobject.c:260
260 Py_XDECREF(co->co_code);
(gdb) up
#3  0x2b5720359a73 in func_dealloc (op=0xdab67d0) at
Objects/funcobject.c:454
454 Py_DECREF(op->func_code);
(gdb) up
#4  0x2b57203691ed in insertdict (mp=0xda8ae90, key=0xda2f180,
hash=1904558708720393281, value=0x2b572066c210) at Objects/dictobject.c:459
459 Py_DECREF(old_value); /* which **CAN** re-enter */
(gdb) up
#5  0x2b572036b153 in PyDict_SetItem (op=0xda8ae90, key=0xda2f180,
value=0x2b572066c210) at Objects/dictobject.c:701
701 if (insertdict(mp, key, hash, value) != 0)
(gdb) up
#6  0x2b572036d5c8 in _PyModule_Clear (m=) at
Objects/moduleobject.c:138
138 PyDict_SetItem(d, key, Py_None);
(gdb) up
#7  0x2b57203dfa34 in PyImport_Cleanup () at Python/import.c:474
474 _PyModule_Clear(value);
(gdb) up
#8  0x2b57203ed167 in Py_Finalize () at Python/pythonrun.c:434
434 PyImport_Cleanup();
(gdb) up
#9  0x2b57203f9cdc in Py_Main (argc=,
argv=0x7fff8a7bf478) at Modules/main.c:625
625 Py_Finalize();
(gdb) up
#10 0x0034d8c1d8b4 in __libc_start_main () from /lib64/libc.so.6
(gdb) up

Regards,
Yogesh Tomar


On Tue, Apr 6, 2010 at 4:22 PM, David Cournapeau  wrote:

> On Tue, Apr 6, 2010 at 7:46 PM, Yogesh Tomar 
> wrote:
> >
> > Hi all,
> >
> > I am getting numpy segmentation fault on a custom install of python to a
> > prefix.
>
> What happens if you build numpy 1.3.0 against python 2.6.1 (instead of
> your own 2.6.4)?  For numpy, a strace is not really useful (a
> backtrace from gdb is much more so),
>
> cheers,
>
> David


Re: [Numpy-discussion] Numpy Segmentation Fault

2010-04-06 Thread David Cournapeau
On Tue, Apr 6, 2010 at 7:46 PM, Yogesh Tomar  wrote:
>
> Hi all,
>
> I am getting numpy segmentation fault on a custom install of python to a
> prefix.

What happens if you build numpy 1.3.0 against python 2.6.1 (instead of
your own 2.6.4)?  For numpy, a strace is not really useful (a
backtrace from gdb is much more so),

cheers,

David


Re: [Numpy-discussion] Extracting values from one array corresponding to argmax elements in another array

2010-04-06 Thread Vincent Schut
On 04/05/2010 06:06 PM, Keith Goodman wrote:
> On Mon, Apr 5, 2010 at 8:44 AM, Ken Basye  wrote:
>> Hi Folks,
>>   I have two arrays, A and B, with the same shape.  I want to find the
>> highest values in A along some axis, then extract the corresponding
>> values from B.  I can get the highest values in A with A.max(axis=0) and
>> the indices of these highest values with A.argmax(axis=0).  I'm trying
>> to figure out a loop-free way to extract the corresponding elements from
>> B using these indices.  Here's code with a loop that will do what I want
>> for two-dimensional arrays:
>>
>>   >>>  a
>> array([[ 100.,    0.,    0.],
>>        [   0.,  100.,  100.],
>>        [   0.,    0.,    0.]])
>>
>>   >>>  a.max(axis=0)
>> array([ 100.,  100.,  100.])
>>
>>   >>>  sel = a.argmax(axis=0)
>>   >>>sel
>> array([0, 1, 1])
>>
>>   >>>  b = np.arange(9).reshape((3,3))
>>   >>>  b
>> array([[0, 1, 2],
>>        [3, 4, 5],
>>        [6, 7, 8]])
>>
>>   >>>  b_best = np.empty(3)
>>   >>>  for i in xrange(3):
>> ...b_best[i] = b[sel[i], i]
>> ...
>>   >>>  b_best
>> array([ 0.,  4.,  5.])
>
> Here's one way:
>
>>> b[a.argmax(axis=0), range(3)]
> array([0, 4, 5])

Which does not work anymore when your arrays become more-dimensional 
(like in my case: 4 or more) and the axis you want to select on is not 
the first/last one. If I recall correctly, I needed to construct the 
full index arrays for the other dimensions too (like with ogrid I 
think). So: create the ogrid, replace the one for the dimensions you 
want the argmax selection to take place on with the argmax parameter, 
and use those index arrays to index your b array.
I'd need to look up my source code to be more sure/precise. If anyone 
would like me to, please let me know. If anyone knows a less elaborate 
way, also please let us know! :-)



[Numpy-discussion] Numpy linalg.eig seg faults

2010-04-06 Thread Yogesh Tomar
Hi all,

I am getting numpy segmentation fault on a custom install of python to a
prefix.

I am running this example code.

import numpy
import numpy.linalg
x=numpy.eye(1000)
for i in range(10):
    eigenvalues,eigenvectors=numpy.linalg.eig(x)
    eigenvalues,eigenvectors=numpy.linalg.eig(x)
    print str(i),'---'

I have been trying to debug this for the last two weeks and with no success
so far.
The same code runs fine with Python-2.6.1 and numpy-1.2.1, but seg faults
with Python-2.6.4 and numpy-1.3.0. Also, the error occurs
non-deterministically in the loop, and sometimes it does not occur at all.

I have tried reinstalling and rebuilding python but even that does not help.


Please find some information which I think might be relevant for us to
figure out the root cause.

System info

Python 2.6.4 (r264:75706, Apr  6 2010, 04:49:11)
[GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

strace output tail

) = 34
futex(0x1001f6e0, FUTEX_WAKE, 1)= 0
futex(0x1001f6e0, FUTEX_WAKE, 1)= 0
write(1, "here\n", 5here
)   = 5
rt_sigaction(SIGINT, {SIG_DFL}, {0x2b78c55fec60, [], SA_RESTORER,
0x34d940de70}, 8) = 0
brk(0x117ff000) = 0x117ff000
brk(0x108b1000) = 0x108b1000
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++

Regards,
Yogesh Tomar


[Numpy-discussion] Numpy Segmentation Fault

2010-04-06 Thread Yogesh Tomar
Hi all,

I am getting numpy segmentation fault on a custom install of python to a
prefix.

I am running this example code.

import numpy
import numpy.linalg
x=numpy.eye(1000)
for i in range(10):
    eigenvalues,eigenvectors=numpy.linalg.eig(x)
    eigenvalues,eigenvectors=numpy.linalg.eig(x)
    print str(i),'---'

I have been trying to debug this for the last two weeks and with no success
so far.
The same code runs fine with Python-2.6.1 and numpy-1.2.1, but seg faults
with Python-2.6.4 and numpy-1.3.0. Also, the error occurs
non-deterministically in the loop, and sometimes it does not occur at all.

I have tried reinstalling and rebuilding python but even that does not help.
Please find attached the information which I think might be relevant for us
to figure out the root cause.

System info

Python 2.6.4 (r264:75706, Apr  6 2010, 04:49:11)
[GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

strace output tail

) = 34
futex(0x1001f6e0, FUTEX_WAKE, 1)= 0
futex(0x1001f6e0, FUTEX_WAKE, 1)= 0
write(1, "here\n", 5here
)   = 5
rt_sigaction(SIGINT, {SIG_DFL}, {0x2b78c55fec60, [], SA_RESTORER,
0x34d940de70}, 8) = 0
brk(0x117ff000) = 0x117ff000
brk(0x108b1000) = 0x108b1000
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++


Regards,
Yogesh Tomar


Re: [Numpy-discussion] Getting data from NDarrays to Blitz++ and back again

2010-04-06 Thread Philip Sterne
If anyone was interested, I found that the easiest solution involved 
patching up the strides after calling PyArray_SimpleNewFromData().  I 
still haven't gotten any sort of memory interaction working, so any 
objects created by Blitz are deleted by Blitz, while Python objects are 
deleted by Python (irrespective of any pointers held by the other side).


I find for my purposes that this is sufficient since I just want to be 
able to peer into the workings of a c++ program.
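As a Python-side analogy of the stride-patching idea (an illustration only, not Philip's C++ code), numpy's as_strided reinterprets an existing buffer under different strides, here viewing a C-ordered array as its transpose without copying:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

base = np.arange(6, dtype=np.float64).reshape(2, 3)
# Swap the strides to view the same row-major buffer as its transpose.
view = as_strided(base, shape=(3, 2),
                  strides=(base.strides[1], base.strides[0]))
assert (view == base.T).all()   # same memory, transposed view
```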


-Philip.

Philip Sterne wrote:

Hi all,
I hope this is the correct place to post my question.  I'd like python 
to interface with c++ code that makes heavy use of Blitz++ arrays. After 
a day's hacking I appear to have a simple working solution which I am 
attaching.  (It uses Boost.Python and CMake.)


However I believe this solution won't work if the blitz array is not 
laid out contiguously in memory.  I also haven't really thought about 
reference counting issues (although the example seems to work).  I 
imagine that those sorts of issues will lead me to call the more 
complicated:

PyArray_New(...)
or:
PyArray_NewFromDescr(...)
instead of the PyArray_SimpleNewFromData(...) that I currently use. 
However I couldn't figure out some of the extra arguments from the API 
documentation.


Can someone point out all the things that will break when this code 
actually gets used in the real world (and maybe even how to avoid them)?


Thanks for your time!
-Philip.




#include <boost/python.hpp>
#include <blitz/array.h>

#include <numpy/arrayobject.h>

using namespace boost::python;
using namespace blitz;

template<typename T, int N>
Array<T,N> py_to_blitz(object arr)
{
    PyArrayObject* arr_obj = (PyArrayObject*) arr.ptr();
    TinyVector<int,N> shape(0);
    TinyVector<int,N> strides(0);
    for (int i = 0; i < N; i++)
    {
        shape[i] = arr_obj->dimensions[i];
        strides[i] = arr_obj->strides[i]/sizeof(T);
    }
    return Array<T,N>((T*) arr_obj->data, shape, strides, neverDeleteData);
}

// Map the two element types used below to NumPy type numbers.
template<typename T> struct numpy_type;
template<> struct numpy_type<bool>   { static const int num = NPY_BOOL; };
template<> struct numpy_type<double> { static const int num = NPY_DOUBLE; };

template<typename T, int N>
PyObject* blitz_to_py(Array<T,N> arr)
{
    import_array1((PyObject*)NULL);
    npy_intp dims[N];
    for (int n = 0; n < N; n++)
        dims[n] = arr.extent(n);
    PyObject* obj = PyArray_SimpleNewFromData(N, dims, numpy_type<T>::num,
                                              (void*) arr.dataFirst());
    // Patch up the strides so the NumPy view matches the blitz layout.
    npy_intp* strides = PyArray_STRIDES((PyArrayObject*) obj);
    for (int n = 0; n < N; n++)
        strides[n] = arr.stride(n) * sizeof(T);
    return obj;
}

void print(const Array<bool,1>& b)
{
    for (int n = 0; n < b.extent(0); n++)
        std::cout << b(n) << " ";
    std::cout << std::endl;
}

BOOST_PYTHON_MODULE(libsimple)
{
    class_<Array<bool,1> >("Bool1");
    class_<Array<bool,2> >("Bool2");
    class_<Array<bool,3> >("Bool3");

    class_<Array<double,1> >("Double1");
    class_<Array<double,2> >("Double2");
    class_<Array<double,3> >("Double3");

    def("tBool1", &py_to_blitz<bool,1>);
    def("tBool2", &py_to_blitz<bool,2>);
    def("tBool3", &py_to_blitz<bool,3>);
    def("fBool1", &blitz_to_py<bool,1>);
    def("fBool2", &blitz_to_py<bool,2>);
    def("fBool3", &blitz_to_py<bool,3>);

    def("tDouble1", &py_to_blitz<double,1>);
    def("tDouble2", &py_to_blitz<double,2>);
    def("tDouble3", &py_to_blitz<double,3>);
    def("fDouble1", &blitz_to_py<double,1>);
    def("fDouble2", &blitz_to_py<double,2>);
    def("fDouble3", &blitz_to_py<double,3>);

    def("cout", &print);
}
from pylab import *
from libsimple import *

py = rand(10)<0.5
print py
c = toBoolVec(py)
cout(c)
py2 = fromBoolVec(c)
del(py)
print py2
del(py2)
cout(c)
del c
cmake_minimum_required(VERSION 2.6)

PROJECT(simple)

find_package(PythonInterp)
find_package(PythonLibs)
find_package(Boost COMPONENTS python)

INCLUDE_DIRECTORIES(${PYTHON_INCLUDE_PATH})
INCLUDE_DIRECTORIES (${Boost_INCLUDE_DIR})

ADD_DEFINITIONS(-Wall -O2)

add_library (simple SHARED simple.cpp)

SET_TARGET_PROPERTIES(simple
PROPERTIES
SOVERSION 0.0.1
VERSION 0.0.1
)

TARGET_LINK_LIBRARIES (simple ${PYTHON_LIBRARIES})
TARGET_LINK_LIBRARIES (simple ${Boost_LIBRARIES})