[Numpy-discussion] Simple way to launch python processes?

2011-12-07 Thread Lou Pecora
I would like to launch python modules or functions (I don't know which is 
easier to do, modules or functions) in separate Terminal windows so I can see 
the output from each as they execute.  I need to be able to pass each module or 
function a set of parameters.  I would like to do this from a python script 
already running in a Terminal window.  In other words, I'd start up a master 
script and it would launch, say, three processes using another module or a 
function with different parameter values for each launch and each would run 
independently in its own Terminal window so stdout from each process would go 
to its own respective window.  When the process terminated the window would
remain open.

I've begun to look at the subprocess module, etc., but that's pretty confusing. I 
can do what I describe above manually, but it's gotten clumsy since I eventually 
want to run on 12 cores.
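For what it's worth, one way to do this on a Mac is to have the master script ask Terminal.app to open a window per job through osascript.  This is an untested editorial sketch, not from the thread: the AppleScript wrapping is the standard Terminal "do script" idiom, and `worker.py` and its parameters are placeholders for your own module.

```python
import subprocess

def launch_in_terminal(script, args, dry_run=False):
    # Build the shell command and wrap it in AppleScript so Terminal.app
    # opens a fresh window for it; the window stays open after the
    # command exits.  `script`/`args` are placeholders for your module
    # and its parameters.
    command = "python %s %s" % (script, " ".join(str(a) for a in args))
    applescript = 'tell application "Terminal" to do script "%s"' % command
    if dry_run:
        return applescript                      # inspect without launching
    return subprocess.Popen(["osascript", "-e", applescript])

# One window per parameter set, e.g. three workers:
for params in [(1, 0.5), (2, 0.7), (3, 0.9)]:
    launch_in_terminal("worker.py", params, dry_run=True)
```

Each launch is independent, so stdout from each worker goes to its own window.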

I have a Mac Pro running Mac OS X 10.6.

If there is a better forum to ask this question, please let me know. 

Thanks for any advice.

 
-- Lou Pecora,   my views are my own.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple way to launch python processes?

2011-12-07 Thread Lou Pecora
From: Olivier Delalleau sh...@keba.be

To: Discussion of Numerical Python numpy-discussion@scipy.org 
Sent: Wednesday, December 7, 2011 3:43 PM
Subject: Re: [Numpy-discussion] Simple way to launch python processes?
 

Maybe try Stack Overflow, since this isn't really a numpy question.
To run a command like "python myscript.py arg1 arg2" in a separate process, you 
can do:
    p = subprocess.Popen("python myscript.py arg1 arg2".split())
You can launch many of these, and if you want to know if a process p is over, 
you can call p.poll().
I'm sure there are other (and better) options though.
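A self-contained version of that pattern (Python 3 sketch; the trivial one-liner child commands are placeholders for real worker scripts):

```python
import subprocess
import sys
import time

# Launch three independent Python processes; Popen returns immediately,
# so all three run concurrently.  The one-liner children are placeholders.
cmds = [[sys.executable, "-c", "print(%d * %d)" % (i, i)] for i in range(3)]
procs = [subprocess.Popen(c) for c in cmds]

# poll() returns None while a process is still running, and its exit
# code once it has finished.
while any(p.poll() is None for p in procs):
    time.sleep(0.1)

exit_codes = [p.returncode for p in procs]
```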

-=- Olivier



Thank you.
 
-- Lou Pecora, my views are my own.




Re: [Numpy-discussion] Simple way to launch python processes?

2011-12-07 Thread Lou Pecora
From: Jean-Baptiste Marquette marqu...@iap.fr

To: Discussion of Numerical Python numpy-discussion@scipy.org 
Sent: Wednesday, December 7, 2011 4:23 PM
Subject: Re: [Numpy-discussion] Simple way to launch python processes?
 

You should consider the powerful multiprocessing package. Have a look on this 
piece of code:

import glob
import os
import multiprocessing as multi
import subprocess as sub
import time

NPROC = 4
Python = '/Library/Frameworks/EPD64.framework/Versions/Current/bin/python'
Xterm = '/usr/X11/bin/xterm '

coord = []
Size = '100x10'
XPos = 810
YPos = 170
XOffset = 0
YOffset = 0

for i in range(NPROC):
    if i % 2 == 0:
        coord.append(Size + '+' + str(YPos) + '+' + str(YOffset))
    else:
        coord.append(Size + '+' + str(XPos) + '+' + str(YOffset))
        YOffset = YOffset + YPos

def CompareColourRef(Champ):
    BaseChamp = os.path.basename(Champ)
    NameProc = int(multi.current_process().name[-1]) - 1
    print 'Processing', BaseChamp, 'on processor', NameProc+1
    # DirWrk, DirSrc, DirLog and DirImg are paths defined earlier in the
    # original script; they are not shown in this excerpt.
    os.putenv('ADAM_USER', DirWrk + 'adam_' + str(NameProc+1))
    Command = Xterm + '-geometry ' + coord[NameProc] + ' -T Proc' + \
        str(NameProc+1) + ' ' + BaseChamp + ' -e ' + Python + ' ' + DirSrc + \
        'CompareColourRef.py ' + BaseChamp + ' 2>&1 | tee ' + DirLog + \
        BaseChamp + '.log'
    Process = sub.Popen([Command], shell=True)
    Process.wait()
    print BaseChamp, 'processed on processor', NameProc+1
    return

pool = multi.Pool(processes=NPROC)

Champs = glob.glob(DirImg + '*/*')
results = pool.map_async(CompareColourRef, Champs)
pool.close()

while results._number_left > 0:
    print 'Waiting for', results._number_left, 'tasks to complete'
    time.sleep(15)
    

pool.join()

print 'Process completed'
exit(0)
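Stripped of the xterm and astronomy specifics, the multiprocessing pattern above reduces to something like this (Python 3 editorial sketch; `work` is a stand-in for the real per-field job):

```python
import multiprocessing

def work(params):
    # Stand-in for the real per-field job; in the original script this
    # is where an xterm running CompareColourRef.py would be spawned.
    name, value = params
    return name, value * value

def run_pool(jobs, nproc=2):
    # Pool.map blocks until every job has been processed.
    pool = multiprocessing.Pool(processes=nproc)
    try:
        return pool.map(work, jobs)
    finally:
        pool.close()
        pool.join()

if __name__ == "__main__":
    print(run_pool([("a", 1), ("b", 2), ("c", 3)]))
```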

Cheers
Jean-Baptiste

--

Wow.  I will have to digest that, but thank you.

 
-- Lou Pecora, my views are my own.




Re: [Numpy-discussion] SVD does not converge on clean matrix

2011-08-14 Thread Lou Pecora
Chuck wrote:




Fails here also, fedora 15 64 bits AMD 940. There should be a maximum 
iterations argument somewhere...

Chuck 

---

   ***  Here's the FIX:

Chuck is right.  There is a max iterations.  Here is a reply from a thread of 
mine in this group several years ago about this problem and some comments that 
might help you.

 From Mr. Damian Menscher who was kind enough to find the iteration 
location and provide some insight:

Ok, so after several hours of trying to read that code, I found
the parameter that needs to be tuned.  In case anyone has this
problem and finds this thread a year from now, here's your hint:

File: Src/dlapack_lite.c
Subroutine: dlasd4_
Line: 22562

There's a for loop there that limits the number of iterations to
20.  Increasing this value to 50 allows my matrix to converge.
I have not bothered to test what the best value for this number
is, though.  In any case, it appears the number just exists to
prevent infinite loops, and 50 isn't really that much closer to
infinity than 20  (Actually, I'm just going to set it to 100
so I don't have to think about it ever again.)

Damian Menscher
-- 
-=#| Physics Grad Student  SysAdmin @ U Illinois Urbana-Champaign |#=-
-=#| 488 LLP, 1110 W. Green St, Urbana, IL 61801 Ofc:(217)333-0038 |#=-
-=#| 1412 DCL, Workstation Services Group, CITES Ofc:(217)244-3862 |#=-
-=#| menscher at uiuc.edu www.uiuc.edu/~menscher/ Fax:(217)333-9819 |#=-


 My reply and a fix of sorts without changing the hard coded iteration 
max:

I have looked in Src/dlapack_lite.c and line 22562 is no longer a line that 
sets a max. iterations parameter.  There are several set in the file, but that 
code is hard to figure (sort of a Fortran-in-C hybrid).  

Here's one, for example:

    maxit = *n * 6 * *n;   // Line 887

I have no idea which parameter to tweak.  Apparently this error is still in 
numpy (at least to my version). Does anyone have a fix?  Should I start a 
ticket (I think this is what people do)?  Any help appreciated.

I'm using a Mac Book Pro (Intel chip), system 10.4.11, Python 2.4.4.

 Possible try/except === 

#  A is the original matrix
try:
    U,W,VT=linalg.svd(A)
except linalg.linalg.LinAlgError:  # Square the matrix and do SVD

    print "Got svd except, trying square of A."
    A2=dot(conj(A.T),A)
    U,W2,VT=linalg.svd(A2)


This works so far.

---

I've been using that simple fix of squaring the original matrix for several 
years and it's worked every time.  I'm not sure why.  It was just a test and it 
worked.  

You could also change the underlying C or Fortran code, but you then have to 
recompile everything in numpy.  I wasn't that brave.


-- Lou Pecora, my views are my own.


Re: [Numpy-discussion] What Requires C and what is just python

2011-03-20 Thread Lou Pecora
I'll add my $0.02 here.  Someone mentioned SAGE.  I can say that on the Mac the 
Sage package seems to install very easily and reliably.  I've done 4 
installations on Macs, 10.4 to 10.6, each with one command line.  They take a 
few hours, but all have gone flawlessly.  The installation contains a LOT of 
Python stuff (including all the packages mentioned here) and you use it just 
like any other installation, except you need to point to the Sage folder. 
There are examples in the documentation.
 -- Lou Pecora,   my views are my own.


  


[Numpy-discussion] Change in Python/Numpy numerics with Py version 2.6 ?

2010-11-12 Thread Lou Pecora
I ran across what seems to be a change in how numerics are handled in Python 
2.6 
or Numpy 1.3.0 or both, I'm not sure.  I've recently switched from using Python 
2.4 and Numpy 1.0.3 to using the Python 2.6 and Numpy 1.3.0 that comes with 
SAGE 
which is a large mathematical package.  But the issue seems to be a Python one, 
not a SAGE one.
 
Here is a short example of code that gives the new behavior:
 
#  Return the angle between two vectors 
def get_angle(v1,v2):
 '''v1 and v2 are 1 dimensional numpy arrays'''
 # Make unit vectors out of v1 and v2
 v1norm=sqrt(dot(v1,v1))
 v2norm=sqrt(dot(v2,v2))
 v1unit=v1/v1norm
 v2unit=v2/v2norm
 ang=acos(dot(v1unit,v2unit))
 return ang
 
When using Python 2.6 with Numpy 1.3.0 and v1 and v2 are parallel the dot 
product of v1unit and v2unit sometimes gives 1.0+epsilon where epsilon is the 
smallest step in the floating point numbers around 1.0 as given in the sys 
module. This causes acos to throw a Domain Exception. This does not happen when 
I use Python 2.4 with Numpy 1.0.3. 
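One way to guard against this (an editorial sketch, not from the original post; it clamps the roundoff instead of catching the exception, and uses numpy's own arccos) is to clip the dot product into acos's domain:

```python
import numpy as np

def get_angle_safe(v1, v2):
    # Angle between two 1-D vectors; the dot product is clipped to
    # [-1, 1] so roundoff like 1.0 + eps cannot push arccos out of
    # its domain.
    v1u = v1 / np.sqrt(np.dot(v1, v1))
    v2u = v2 / np.sqrt(np.dot(v2, v2))
    c = np.clip(np.dot(v1u, v2u), -1.0, 1.0)
    return np.arccos(c)
```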

 
I have put in a check for the exception situation and the code works fine 
again.  I am wondering if there are other changes that I should be aware of.  
Does anyone know the origin of the change above or other differences in the 
handling of numerics between the two versions?
Thanks for any insight.
 -- Lou Pecora,   my views are my own.



  


Re: [Numpy-discussion] Change in Python/Numpy numerics with Py version 2.6 ?

2010-11-12 Thread Lou Pecora
- Original Message 
From: Robert Kern robert.k...@gmail.com
To: Discussion of Numerical Python numpy-discussion@scipy.org
Sent: Fri, November 12, 2010 10:39:54 AM
Subject: Re: [Numpy-discussion] Change in Python/Numpy numerics with Py version 
2.6 ?

On Fri, Nov 12, 2010 at 09:02, Lou Pecora lou_boog2...@yahoo.com wrote:
 I ran across what seems to be a change in how numerics are handled in Python 
2.6
 or Numpy 1.3.0 or both, I'm not sure.  I've recently switched from using 
Python
 2.4 and Numpy 1.0.3 to using the Python 2.6 and Numpy 1.3.0 that comes with 
SAGE
 which is a large mathematical package.  But the issue seems to be a Python 
one,
 not a SAGE one.

 Here is a short example of code that gives the new behavior:

 #  Return the angle between two vectors 
 def get_angle(v1,v2):
 '''v1 and v2 are 1 dimensional numpy arrays'''
 # Make unit vectors out of v1 and v2
 v1norm=sqrt(dot(v1,v1))
 v2norm=sqrt(dot(v2,v2))
 v1unit=v1/v1norm
 v2unit=v2/v2norm
 ang=acos(dot(v1unit,v2unit))
 return ang

 When using Python 2.6 with Numpy 1.3.0 and v1 and v2 are parallel the dot
 product of v1unit and v2unit sometimes gives 1.0+epsilon where epsilon is the
 smallest step in the floating point numbers around 1.0 as given in the sys
 module. This causes acos to throw a Domain Exception. This does not happen 
when
 I use Python 2.4 with Numpy 1.0.3.

acos() is not a numpy function. It comes from the stdlib math module.
Python 2.6 tightened up many of the border cases for the math
functions. That is probably where the behavior difference comes from.

-- 
Robert Kern



Thanks, Robert.  I thought some math functions were replaced by numpy functions 
that can also operate on arrays when using "from numpy import *".  That's the 
reason I asked about numpy.  I know I should change this. 

But your explanation sounds like it is indeed in Py 2.6 where they tightened 
things up.  I'll just leave the check for exceptions in place and use it more 
often now.
 -- Lou Pecora, my views are my own.



Re: [Numpy-discussion] Change in Python/Numpy numerics with Py version 2.6 ?

2010-11-12 Thread Lou Pecora




From: Charles R Harris charlesr.har...@gmail.com
To: Discussion of Numerical Python numpy-discussion@scipy.org
Sent: Fri, November 12, 2010 10:34:58 AM
Subject: Re: [Numpy-discussion] Change in Python/Numpy numerics with Py version 
2.6 ?

On Fri, Nov 12, 2010 at 8:02 AM, Lou Pecora lou_boog2...@yahoo.com wrote:

I ran across what seems to be a change in how numerics are handled in Python 2.6
or Numpy 1.3.0 or both, I'm not sure.  I've recently switched from using Python
2.4 and Numpy 1.0.3 to using the Python 2.6 and Numpy 1.3.0 that comes with 
SAGE
which is a large mathematical package.  But the issue seems to be a Python one,
not a SAGE one.

Here is a short example of code that gives the new behavior:

#  Return the angle between two vectors 
def get_angle(v1,v2):
'''v1 and v2 are 1 dimensional numpy arrays'''
# Make unit vectors out of v1 and v2
v1norm=sqrt(dot(v1,v1))
v2norm=sqrt(dot(v2,v2))
v1unit=v1/v1norm
v2unit=v2/v2norm
ang=acos(dot(v1unit,v2unit))
return ang

When using Python 2.6 with Numpy 1.3.0 and v1 and v2 are parallel the dot
product of v1unit and v2unit sometimes gives 1.0+epsilon where epsilon is the
smallest step in the floating point numbers around 1.0 as given in the sys
module. This causes acos to throw a Domain Exception. This does not happen when
I use Python 2.4 with Numpy 1.0.3.



Probably an accident or slight difference in compiler optimization. Are you 
running on a 32 bit system by any chance?

Yes, I am running on a 32 bit system.  Mac OS X 10.4.11.
  


I have put in a check for the exception situation and the code works fine
again.  I am wondering if there are other changes that I should be aware of.
Does anyone know the origin of the change above or other differences in the
handling of numerics between the two versions?
Thanks for any insight.


The check should have been there in the first place; the straightforward method 
is subject to roundoff error. It isn't very accurate for almost identical 
vectors either, you would do better to work with the difference vector in that 
vectors either, you would do better to work with the difference vector in that 
case: 2*arcsin(||v1 - v2||/2) when v1 and v2 are normalized, or you could try 
2*arctan2(||v1 - v2||, ||v1 + v2||). It is also useful to see how the 
Householder reflection deals with the problem.
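A sketch of the difference-vector formulation Chuck suggests (the wrapper function and its normalization step are editorial additions; the formula 2*arctan2(||v1 - v2||, ||v1 + v2||) is his):

```python
import numpy as np

def angle_stable(v1, v2):
    # For unit vectors, 2*arctan2(||v1 - v2||, ||v1 + v2||) stays
    # accurate for nearly parallel AND nearly antiparallel inputs,
    # where the plain arccos(dot) form loses precision.
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    return 2.0 * np.arctan2(np.linalg.norm(v1 - v2), np.linalg.norm(v1 + v2))
```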

I left the exception catch in place.  You're right.  I hadn't thought about the 
Householder reflection.  Good point. 

Thanks.


 -- Lou Pecora, my views are my own.





Re: [Numpy-discussion] C extension compiling question

2010-10-29 Thread Lou Pecora
- Original Message 
From: Henry Gomersall wh...@cam.ac.uk
To: numpy-discussion numpy-discussion@scipy.org
Sent: Fri, October 29, 2010 9:11:41 AM
Subject: [Numpy-discussion] C extension compiling question

I'm trying to get a really simple toy example for a numpy extension
working (you may notice its based on the example in the numpy docs and
the python extension docs). The code is given below.



---

If you're trying to use numpy and standard Python types, you might want to look 
into ctypes, which allows you to write C extensions that pass arrays in a much 
easier way.  I was into writing bare C extensions from the ground up like you 
when someone put me onto ctypes.  MUCH easier and cleaner and, as a result, 
easier to debug.  I recommend it.
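As a rough illustration of the ctypes approach (an editorial sketch, not from the original message; the math-library lookup assumes a Unix-like system):

```python
import ctypes
import ctypes.util
import numpy as np

# Locate and load the C math library; on Linux "m" resolves to libm,
# and CDLL(None) falls back to symbols in the main program.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

x = libm.cos(0.0)  # calls the C library's cos() directly

# numpy arrays expose their raw buffer, so a C function expecting
# (double *data, int n) can be fed without writing a bare extension:
a = np.linspace(0.0, 1.0, 5)
ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
```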
-- Lou Pecora, my views are my own.


  


Re: [Numpy-discussion] How to solve homogeneous linear equations with NumPy?

2009-12-03 Thread Lou Pecora
From: Peter Cai newpt...@gmail.com
To: Discussion of Numerical Python numpy-discussion@scipy.org
Sent: Thu, December 3, 2009 1:13:40 AM
Subject: Re: [Numpy-discussion] How to solve homogeneous linear equations with 
NumPy?

Thanks a lot.

But my knowledge of linear equations is limited, so can you explain which 
result in your code represents the solution set?

BTW: since [1, 1, 1, 1] is an obvious non-trivial solution, can you show that 
your method finds it? 



As usual, Google is your friend.  Also check on Wikipedia, Scholarpedia, and 
http://mathworld.wolfram.com/.  If you are serious about getting a solution, 
then it is worth spending some time learning about linear systems.
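For the record, the standard recipe for a homogeneous system A x = 0 uses the SVD: the rows of V^H belonging to (numerically) zero singular values span the null space.  This sketch is editorial, not from the thread; the example matrix is chosen so that [1, 1, 1, 1] spans its null space:

```python
import numpy as np

def null_space(A, tol=1e-12):
    # Rows of vh whose singular values are (numerically) zero -- plus
    # any extra rows when A has more columns than rows -- span ker(A).
    A = np.asarray(A, dtype=float)
    u, s, vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vh[rank:].T            # columns form a basis of the null space

# Example: each row says x_i = x_{i+1}, so [1, 1, 1, 1] spans the null space.
A = np.array([[1., -1., 0., 0.],
              [0., 1., -1., 0.],
              [0., 0., 1., -1.]])
ns = null_space(A)
```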

-- Lou Pecora, my views are my own.




Re: [Numpy-discussion] finding close together points.

2009-11-12 Thread Lou Pecora


- Original Message 
From: Christopher Barker chris.bar...@noaa.gov
To: Discussion of Numerical Python numpy-discussion@scipy.org
Sent: Thu, November 12, 2009 12:37:37 PM
Subject: Re: [Numpy-discussion] finding close together points.

Lou Pecora wrote:
 a KD tree for 2D nearest neighbor seems like over kill.  You
 might want to try the simple approach of using boxes of points to
 narrow things down by sorting on the first component.

yeah, we'll probably do something like that if we have to write the code 
ourselves. At the moment, we're using geohash:

http://pypi.python.org/pypi/Geohash/

(this is for points on the earth)

and it's working OK. I was just hoping kdtree would work out of the box!

 where for a
 static data set it can match KD trees in speed

Why does it have to be static -- it doesn't look hard to insert/remove 
points.

--- Lou answers --


It doesn't have to be static, but if I remember correctly, when you add points 
you have to resort or do a smart insert (which may require a lot of data 
shifting).  Not hard, but the original paper claimed that if there are a lot 
of changes to the data set this becomes more expensive than the kd-tree.  If 
you can get the original paper I mentioned, it will have more info.  It's 
pretty clearly written and contains some nice comparisons to kd-trees for 
finding near neighbors.  

Bottom line is that you can still try it with changing data.  See what the run 
times are like.  I've used the approach for up to eight dimensions with about 
10^5 data points.  It worked nicely.  I remember looking for a kd-tree library 
that would work out of the box (C or C++), but everything I found was not as 
simple as I would have liked.  The slice searching was a nice solution.  But 
maybe I just didn't know where to look or I'm not understanding some of the 
libraries I did find.

Let us know how it goes.

-- Lou Pecora


  


Re: [Numpy-discussion] finding close together points.

2009-11-11 Thread Lou Pecora
- Original Message 
From: Christopher Barker chris.bar...@noaa.gov
To: Discussion of Numerical Python numpy-discussion@scipy.org
Sent: Tue, November 10, 2009 7:07:32 PM
Subject: [Numpy-discussion] finding close together points.

Hi all,

I have a bunch of points in 2-d space, and I need to find out which 
pairs of points are within a certain distance of one-another (regular 
old Euclidean norm).

scipy.spatial.KDTree.query_ball_tree() seems like it's built for this.




Chris,

Maybe I'm missing something simple, but if your array of 2D points is static, a 
KD tree for 2D nearest neighbor seems like over kill.  You might want to try 
the simple approach of using boxes of points to narrow things down by sorting 
on the first component.  If your distances are, say, 10% of the variance, then 
you'll *roughly* decrease the remaining points to search by a factor of 10.  
This can get more sophisticated and is useful in higher dimensions (see:  
Sameer A. Nene and Shree K. Nayar, A Simple Algorithm for Nearest Neighbor 
Search in High Dimensions, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE 
INTELLIGENCE 19 (9), 989 (1997).) where for a static data set it can match KD 
trees in speed, but is SO much easier to program.  
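A minimal sketch of the slab-search idea (an editorial illustration, not code from the Nene-Nayar paper; a real implementation would sort once and reuse the order across many queries):

```python
import numpy as np

def neighbors_slice_search(points, q, eps):
    # Points within eps of q (Euclidean): pre-sort on the first
    # coordinate, binary-search the slab |x - q_x| <= eps, then test
    # only those candidates exactly.  `points` is an (N, d) array;
    # returns indices into the original array.
    order = np.argsort(points[:, 0])          # one-time sort on x
    xs = points[order, 0]
    lo = np.searchsorted(xs, q[0] - eps, side="left")
    hi = np.searchsorted(xs, q[0] + eps, side="right")
    cand = order[lo:hi]                       # candidates in the slab
    d2 = np.sum((points[cand] - q) ** 2, axis=1)
    return cand[d2 <= eps * eps]
```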
 -- Lou Pecora, my views are my own.


  


Re: [Numpy-discussion] where does numpy get its pow function?

2009-09-29 Thread Lou Pecora
- Original Message 
From: Robert Kern robert.k...@gmail.com
To: Discussion of Numerical Python numpy-discussion@scipy.org
Sent: Tuesday, September 29, 2009 12:54:46 PM
Subject: Re: [Numpy-discussion] where does numpy get its pow function?

On Tue, Sep 29, 2009 at 11:47, Chris Colbert sccolb...@gmail.com wrote:
 Does numpy use pow from math.h or something else?

Yes.

-- 
Robert Kern


HAHAHA!  Reminds me of when my son was little and we asked, "Do you want 
vanilla or chocolate ice cream?"  He would answer, "Yes."

Robert, did you mean it used math.h, OR that it used something else?  :-) 
 -- Lou Pecora, my views are my own.


  


Re: [Numpy-discussion] Plans for Numpy 1.4.0 and scipy 0.8.0

2009-06-21 Thread Lou Pecora
I'm still using 2.4, but I plan to go to 2.5 when the project we're doing now 
reaches a stable point later this year.  Not sure after that.  I know it's real 
work to keep several versions going, but I sense there are a lot of people in 
the 2.4 - 2.5 window.  I guess 2.6 is a mini step toward 3.0.  The problem with 
each step is that all the libraries we rely on have to be upgraded to that step 
or we might lose the functionality of that library.  For me that's a killer. I 
have to take a good look at all of them before the upgrade or a big project 
will take a fatal hit.

-- Lou Pecora,   my views are my own.

--- On Sun, 6/21/09, John Reid j.r...@mail.cryst.bbk.ac.uk wrote:

From: John Reid j.r...@mail.cryst.bbk.ac.uk
Subject: Re: [Numpy-discussion] Plans for Numpy 1.4.0 and scipy 0.8.0
To: numpy-discussion@scipy.org
Date: Sunday, June 21, 2009, 10:38 AM

David Cournapeau wrote:
 (Continuing the discussion initiated in the neighborhood iterator thread)
     - Chuck suggested to drop python  2.6 support from now on. I am
 against it without a very strong and detailed rationale, because many OS
 still don't have python 2.6 (RHEL, Ubuntu LTS).

I vote against dropping support for python 2.5. Personally I have no 
incentive to upgrade to 2.6 and am very happy with 2.5.



Re: [Numpy-discussion] What is the logical value of nan?

2009-03-11 Thread Lou Pecora

--- On Wed, 3/11/09, Bruce Southey bsout...@gmail.com wrote:

 From: Bruce Southey bsout...@gmail.com
 Subject: Re: [Numpy-discussion] What is the logical value of nan?
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Date: Wednesday, March 11, 2009, 10:24 AM

 This is one link that shows the different representation of
 these 
 numbers in IEEE 754:
 http://www.psc.edu/general/software/packages/ieee/ieee.php
 It is a little clearer than Wikipedia:
 http://en.wikipedia.org/wiki/IEEE_754-1985

Thanks.  Useful sites.

 Numpy's nan/NaN/NAN, inf/Inf/PINF, and NINF are not
 nothing so not zero. 

Agreed.  +1

 Also, I think that conversion to an integer should be an
 error for all of these because there is no equivalent 
 representation of these floating 
 point numbers as integers and I think that using zero for
 NaN is wrong.

Another  +1

 Now for the other two special representations, I would
 presume that 
 Numpy's PZERO (positive zero) and NZERO (negative zero)
 are treated as 
 nothing. Conversion to integer for these should be zero.

Yet another  +1.

-- Lou Pecora,   my views are my own.




  


Re: [Numpy-discussion] Faster way to generate a rotation matrix?

2009-03-04 Thread Lou Pecora

First, do a profile.  That will tell you how much time you are spending in each 
function and where the bottlenecks are.  Easy to do in iPython.

Second, (I am guessing here -- the profile will tell you) that the bottleneck 
is the call back to the rotation matrix function from the optimizer.  That 
can be expensive if the optimizer is doing it a lot.  I had a similar situation 
with a numerical integration scheme using SciPy. When I wrote a C version of 
the integration it ran 10 times faster.  Can you get a C-optimizer?  Then use 
ctypes or something else to call it all from Python?

-- Lou Pecora,   my views are my own.


--- On Tue, 3/3/09, Jonathan Taylor jonathan.tay...@utoronto.ca wrote:

 From: Jonathan Taylor jonathan.tay...@utoronto.ca
 Subject: Re: [Numpy-discussion] Faster way to generate a rotation matrix?
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Date: Tuesday, March 3, 2009, 11:41 PM
 Thanks,  All these things make sense and I should have known
 to
 calculate the sins and cosines up front.  I managed a few
 more
 tricks and knocked off 40% of the computation
 time:
 
 def rotation(theta, R = np.zeros((3,3))):
 cx,cy,cz = np.cos(theta)
 sx,sy,sz = np.sin(theta)
 R.flat = (cx*cz - sx*cy*sz, cx*sz + sx*cy*cz, sx*sy,
 -sx*cz - cx*cy*sz, -sx*sz + cx*cy*cz,
 cx*sy, sy*sz, -sy*cz, cy)
 return R
 
 Pretty evil looking ;) but still wouldn't mind somehow
 getting it faster.
 
 Am I right in thinking that I wouldn't get much of a
 speedup by
 rewriting this in C as most of the time is spent in
 necessary python
 functions?
 
 Thanks again,
 Jon.



  


Re: [Numpy-discussion] Faster way to generate a rotation matrix?

2009-03-04 Thread Lou Pecora

Whoops.  I see you have profiled your code. Sorry to re-suggest that.  

But I agree with those who suggest a C speed-up using ctypes or Cython.

However, thanks for posting your question.  It caused a LOT of very useful 
responses that I didn't know about.  Thanks to all who replied.

-- Lou Pecora,   my views are my own.



  


Re: [Numpy-discussion] SVD errors

2009-02-02 Thread Lou Pecora
I ran into this problem a year or so ago. I suspect my messages to the list are 
in the archives somewhere. It is a known problem and involves a hard-coded 
maximum number of iterations in the SVD code.  The problem is on the LAPACK 
side.  You can go in and change it, but then you have to recompile everything 
and rebuild Numpy, etc. etc. Not sure how easy/hard this is.  I avoided it. 

What I found that worked for me (depends on your numerical situation) is to 
take the original matrix you are trying to decompose, say A, and examine, 
instead, the SVD of A^T A.  Then the singular values of that matrix are the 
square of the singular values of A.  This worked for me, but my original matrix 
was square.  Maybe that helped.  Don't know. It's worth a try.
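A sketch of that fallback (editorial; the function name and the decision to return only the V factor from the A^H A route are mine, not Lou's):

```python
import numpy as np
from numpy import linalg

def svd_via_gram(A):
    # If svd(A) fails to converge, decompose A^H A instead; its
    # singular values are the SQUARES of A's, so take the square root
    # to recover them.  Only V comes back from this route, not U.
    try:
        return linalg.svd(A)
    except linalg.LinAlgError:
        _, w2, vh = linalg.svd(np.dot(np.conj(A.T), A))
        return None, np.sqrt(w2), vh
```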

-- Lou Pecora,   my views are my own.


--- On Mon, 2/2/09, mtrum...@berkeley.edu mtrum...@berkeley.edu wrote:

 From: mtrum...@berkeley.edu mtrum...@berkeley.edu
 Subject: [Numpy-discussion] SVD errors
 To: numpy-discussion@scipy.org
 Date: Monday, February 2, 2009, 7:21 PM
 Hello list.. I've run into two SVD errors over the last
 few days. Both
 errors are identical in numpy/scipy.
 
 I've submitted a ticket for the 1st problem (numpy
 ticket #990). Summary
 is: some builds of the lapack_lite module linking against
 system LAPACK
 (not the bundled dlapack_lite.o, etc) give a
 LinAlgError: SVD did not
 converge exception on my matrix. This error does
 occur using Mac's
 Accelerate framework LAPACK, and a coworker's Ubuntu
 LAPACK version. It
 does not seem to happen using ATLAS LAPACK (nor using
 Octave/Matlab on
 said Ubuntu)
 
 Just today I've come across a negative singular value
 cropping up in an
 SVD of a different matrix. This error does occur on my
 ATLAS LAPACK based
 numpy, as well as on the Ubuntu setup. And once again, it
 does not happen
 in Octave/Matlab.
 
 I'm using numpy 1.3.0.dev6336 -- don't know what
 the Ubuntu box is running.
 
 Here are some npy files for the two different cases:
 
 https://cirl.berkeley.edu/twiki/pub/User/MikeTrumpis/noconverge_operator.npy
 https://cirl.berkeley.edu/twiki/pub/User/MikeTrumpis/negsval_operator.npy
 
 Mike
 


[Numpy-discussion] One Solution to: What to use to read and write numpy arrays to a file?

2008-12-09 Thread Lou Pecora
I found one pretty simple solution for reading and writing a numpy array 
to/from a file in a text-readable format (see my original message below): just 
use the method tolist().

e.g. a complex 2 x 2 array

arr=array([[1.0,3.0-7j],[55.2+4.0j,-95.34]])
ls=arr.tolist()

Then use the repr - eval pairings to write and later read the list from the 
file and then convert the list that is read in back to an array:

ls_str = fp.readline()
ls_in = eval(ls_str)
arr_in = array(ls_in)  # arr_in is the same as arr

Seems to work well.  Any comments?
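The round trip can be written compactly (a Python 3 editorial sketch; an in-memory StringIO stands in for the open file fp):

```python
import numpy as np
from io import StringIO

arr = np.array([[1.0, 3.0 - 7j], [55.2 + 4.0j, -95.34]])

fp = StringIO()                       # stands in for an open text file
fp.write(repr(arr.tolist()) + "\n")   # one line per array

fp.seek(0)
arr_in = np.array(eval(fp.readline()))
```

This works because repr of Python floats and complex numbers round-trips exactly, and tolist() flattens the array to nested lists that eval can rebuild.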

-- Lou Pecora,   my views are my own.


--- On Tue, 12/9/08, Lou Pecora wrote:

In looking for simple ways to read and write data (in a text readable format) 
to and from a file and later restoring the actual data when reading back in, 
I've found that numpy arrays don't seem to play well with repr and eval. 

E.g. to write some data (mixed types) to a file I can do this (fp is an open 
file),

  thedata=[3.0,-4.9+2.0j,'another string']
  repvars = repr(thedata) + "\n"
  fp.write(repvars)

Then to read it back and restore the data each to its original type,

strvars= fp.readline()
sonofdata= eval(strvars)

which gives back the original data list.

BUT when I try this with numpy arrays in the data list I find that repr of an 
array adds extra end-of-lines and that messes up the simple restoration of the 
data using eval.  

Am I missing something simple?  I know I've seen people recommend ways to save 
arrays to files, but I'm wondering what is the most straight-forward?  I really 
like the simple, pythonic approach of the repr - eval pairing.




  


[Numpy-discussion] What to use to read and write numpy arrays to a file?

2008-12-08 Thread Lou Pecora
In looking for simple ways to read and write data (in a text readable format) 
to and from a file and later restoring the actual data when reading back in, 
I've found that numpy arrays don't seem to play well with repr and eval. 

E.g. to write some data (mixed types) to a file I can do this (fp is an open 
file),

  thedata=[3.0,-4.9+2.0j,'another string']
  repvars = repr(thedata) + "\n"
  fp.write(repvars)

Then to read it back and restore the data each to its original type,

  strvars= fp.readline()
  sonofdata= eval(strvars)

which gives back the original data list.

BUT when I try this with numpy arrays in the data list I find that repr of an 
array adds extra end-of-lines and that messes up the simple restoration of the 
data using eval.  

Am I missing something simple?  I know I've seen people recommend ways to save 
arrays to files, but I'm wondering what is the most straight-forward?  I really 
like the simple, pythonic approach of the repr - eval pairing.

Thanks for any advice. (yes, I am googling, too)


-- Lou Pecora,   my views are my own.



  


Re: [Numpy-discussion] What to use to read and write numpy arrays to a file?

2008-12-08 Thread Lou Pecora
--- On Mon, 12/8/08, Matthieu Brucher [EMAIL PROTECTED] wrote:

 From: Matthieu Brucher [EMAIL PROTECTED]
 Subject: Re: [Numpy-discussion] What to use to read and write numpy arrays to 
 a file?
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Date: Monday, December 8, 2008, 3:56 PM
 Hi,
 
 The repr - eval pair does not work with numpy. You can
 simply do a
 tofile() followed by fromfile().
 
 Matthieu

Yes, I found the tofile/fromfile pair, but they don't preserve the shape.  
Sorry, I should have been clearer on that in my request.  I will be saving 
arrays whose shape I may not know later when I read them in.  I'd like that 
information to be preserved.
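One workaround until something better turns up is to carry the shape yourself; a sketch:

```python
import os
import tempfile
import numpy as np

# The tofile/fromfile limitation: the file is a flat dump, so the
# shape has to be stored separately and reapplied with reshape().
a = np.arange(12.0).reshape(3, 4)
fname = os.path.join(tempfile.mkdtemp(), 'a.txt')
a.tofile(fname, sep=' ')                       # readable text, shape is lost
b = np.fromfile(fname, sep=' ').reshape(3, 4)  # shape must be known here
assert (a == b).all()
```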

Thanks.

-- Lou Pecora,   my views are my own.




  


Re: [Numpy-discussion] What to use to read and write numpy arrays to a file?

2008-12-08 Thread Lou Pecora
--- On Mon, 12/8/08, Robert Kern [EMAIL PROTECTED] wrote:

 From: Robert Kern [EMAIL PROTECTED]
 Subject: Re: [Numpy-discussion] What to use to read and write numpy arrays to 
 a file?
 
 The most bulletproof way would be to use numpy.save() and
 numpy.load(), but this is a binary format, not a text one.
 
 -- 
 Robert Kern
 

Thanks, Robert.  I may have to go that route, assuming that the save and load 
pair preserve shape, i.e. I don't have to know the shape when I read back in.
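They do; a quick check (written against a file-like object so it is easy to run, but a filename works the same way):

```python
import io
import numpy as np

# np.save writes a .npy header that records shape and dtype,
# so np.load needs no outside information to restore the array.
a = np.arange(6, dtype=np.complex128).reshape(2, 3)
buf = io.BytesIO()           # stands in for a real file on disk
np.save(buf, a)
buf.seek(0)
b = np.load(buf)
assert b.shape == (2, 3) and b.dtype == a.dtype and (a == b).all()
```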


-- Lou Pecora,   my views are my own.




  


Re: [Numpy-discussion] huge array calculation speed

2008-07-11 Thread Lou Pecora
If your positions are static (I'm not clear on that from your message), then 
you might want to check the technique of slice searching.  It only requires 
one sort of the data for each dimension initially, then uses a simple, but 
clever look up to find neighbors within some epsilon of a chosen point.  Speeds 
appear to be about equal to k-d trees.  Programming is vastly simpler than k-d 
trees, however.  

See,

[1] A Simple Algorithm for Nearest Neighbor Search in High Dimensions, Sameer 
A. Nene and Shree K. Nayar, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE 
INTELLIGENCE 19 (9), 989 (1997).
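The core of the trick can be sketched in a few lines of numpy (my own illustration, not the paper's code): sort each coordinate once up front, then for a query intersect the per-dimension index slabs found with searchsorted before doing the exact distance test.

```python
import numpy as np

def neighbors_within(points, order, q, eps):
    """Indices of rows of `points` within eps (Euclidean) of query q.
    `order` is argsort(points, axis=0), computed once up front."""
    candidates = None
    for k in range(points.shape[1]):
        srt = points[order[:, k], k]                  # k-th coordinate, sorted
        lo = np.searchsorted(srt, q[k] - eps, side='left')
        hi = np.searchsorted(srt, q[k] + eps, side='right')
        slab = set(order[lo:hi, k])                   # points inside this slab
        candidates = slab if candidates is None else candidates & slab
    cand = np.array(sorted(candidates), dtype=int)
    if cand.size == 0:
        return cand
    # exact distance check on the few survivors only
    keep = np.linalg.norm(points[cand] - q, axis=1) <= eps
    return cand[keep]

pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [0.1, 0.1, 0.1]])
order = np.argsort(pts, axis=0)   # one sort per dimension, done once
near = neighbors_within(pts, order, np.zeros(3), 0.5)
assert set(near) == {0, 2}
```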

-- Lou Pecora,   my views are my own.


--- On Thu, 7/10/08, Dan Lussier [EMAIL PROTECTED] wrote:

 From: Dan Lussier [EMAIL PROTECTED]
 Subject: [Numpy-discussion] huge array calculation speed
 To: numpy-discussion@scipy.org
 Date: Thursday, July 10, 2008, 12:38 PM
 Hello,
 
 I am relatively new to numpy and am having trouble with the
 speed of  
 a specific array based calculation that I'm trying to
 do.
 
 What I'm trying to do is to calculate the total
 potential  
 energy and coordination number of each atom within a
 relatively large  
 simulation.  Each atom is at a position (x,y,z) given by a
 row in a  
 large array (approximately 1e6 by 3) and presently I have
 no  
 information about its nearest neighbours, so each
 position must be  
 checked against all others before cutting the list down
 prior to  
 calculating the energy.




  


Re: [Numpy-discussion] Making NumPy accessible to everyone (or no-one) (was Numpy-discussion Digest, Vol 19, Issue 44)

2008-04-10 Thread Lou Pecora

--- Alexander Michael [EMAIL PROTECTED] wrote:

 Hey! I use np *all the time* as an abbreviation for
 number of points. I don't
 really see what the problem is with using
 numpy.whatever in library code and
 published scripts and whatever you want in one-off
 throw-away scripts. It's easy
 to setup a shortcut key in almost any editor to
 alleviate the typing burden, if
 that is the main objection. If you have a section of
 an algorithm that you are
 trying to make look as much like text-book
 pseudocode as possible, than you
 can't do better than from numpy import whatever
 both for clarity and python
 coding convention. You can also say d = numpy.dot
 in the local scope at the
 top of your algorithm so you can write d(x,y) in
 the algorithm itself for very
 pithy code that doesn't require a FAQ to understand.

Yes, I use "np" = number of points, too.  But you all
might want to use something else.  That's the point of
the flexibility of "import ... as".

Trying to lock in namespaces as "np" or "N" or whatever is
a BAD idea.  Allow the flexibility.  You can admonish
against "from ... import *" for newbies and then tell
them to use "from ... import <actual names>" (as
mentioned above).  But locking people into a standard,
even an informal one, is, as someone else said, acting
a bit too much like accountants.  Stop, please!



-- Lou Pecora,   my views are my own.

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] SVD error in Numpy. NumPy Update reversed?

2008-03-25 Thread Lou Pecora
Travis,  Does that mean it's not worth starting a
ticket?  Sounds like nothing can be done, *except* to
put this in the documentation and the FAQ.  It has
bitten several people.

--- Travis E. Oliphant [EMAIL PROTECTED]
wrote:

 Stéfan van der Walt wrote:
  Lou Pecora wrote:
   Thanks, Matthieu, that's a good step.  But when the
   SVD function throws an exception is it clear that the
   user can redefine niter and recompile?  Otherwise, the
   fix remains well hidden.  Most users will be left
   puzzled.  I think a comment in the raise statement
   would be good.  Just point to the solution or where
   the user could find it.

  That's a valid concern.  We could maybe pass down
  the iteration limit as a keyword?

 This won't work without significant re-design.  This
 limit is in the
 low-level code, which is an f2c'd version of some
 LAPACK code, which is NumPy's
 default SVD implementation if it can't find a vendor
 LAPACK.
 
 -Travis O.



-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] SVD error in Numpy. NumPy Update reversed?

2008-03-19 Thread Lou Pecora
I recently had a personal email reply from Damian
Menscher who originally found the error in 2002.  He
states:

--

I explained the solution in a followup to my own post:
http://mail.python.org/pipermail/python-list/2002-August/161395.html
-- in short, find the dlasd4_ routine (for the current
1.0.4 version
it's at numpy/linalg/dlapack_lite.c:21902) and change
the max
iteration count from 20 to 100 or higher.

The basic problem was that they use an iterative
method to converge on
the solution, and they had a cutoff of the max number
of iterations
before giving up (to guard against an infinite loop or
cases where an
unlucky matrix would require an excessive number of
iterations and
therefore CPU).  The fix I used was simply to increase
the max
iteration count (from 20 to 100 -- 50 was enough to
solve my problem
but I went for overkill just to be sure I wouldn't see
it again).  It
*may* be reasonable to just leave this as an infinite
loop, or to
increase the count to 1000 or higher.  A lot depends
on your preferred
failure mode:
  - count too low - low cpu usage, but SVD did not
converge errors
somewhat common
  - very high count - some matrices will result in
high cpu usage,
non-convergence still possible
  - infinite loop - it will always converge, but may
take forever

NumPy was supposedly updated also (from 20 to 100, but
you may want to
go higher) in bug 601052.  They said the fix made it
into CVS, but
apparently it got lost or reverted when they did a
release (the oldest
release I can find is v1.0 from 2006 and has it set to
20).  I just
filed another bug (copy/paste of the previous one) in
hopes they'll
fix it for real this time:
http://scipy.org/scipy/numpy/ticket/706

Damian



I looked at line 21902  of dlapack_lite.c, it is,

for (niter = iter; niter <= 20; ++niter) {

Indeed the upper limit for iterations in the
linalg.svd code is set for 20.  For now I will go with
my method (on earlier post) of squaring the matrix and
then doing svd when the original try on the original
matrix throws the linalg.linalg.LinAlgError.  I do not
claim that this is a cure-all.  But it seems to work
fast and avoids the original code from thrashing
around in a long iteration. 

I would suggest this be made explicit in the NumPy
documentation and then the user be given the option to
reset the limit on the number of iterations.  
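For reference, here is a sketch of the square-the-matrix fallback (my own code, not what's in numpy): eigendecompose the Hermitian product A^H A, which gives the right singular vectors and the squared singular values, then recover U from them.  Accuracy suffers for tiny singular values, so this is strictly a last resort for when linalg.svd raises.

```python
import numpy as np

def svd_via_normal_eig(a):
    # Eigendecomposition of the Hermitian matrix a^H a: eigenvalues are the
    # squared singular values, eigenvectors are the right singular vectors.
    w, v = np.linalg.eigh(a.conj().T @ a)   # ascending eigenvalues
    w, v = w[::-1], v[:, ::-1]              # reorder descending, like svd()
    s = np.sqrt(np.clip(w, 0.0, None))      # clip tiny negative round-off
    u = (a @ v) / s                         # left vectors (fails if s has 0s)
    return u, s, v.conj().T

def robust_svd(a):
    try:
        return np.linalg.svd(a, full_matrices=False)
    except np.linalg.LinAlgError:
        return svd_via_normal_eig(a)

rng = np.random.RandomState(0)
a = rng.randn(6, 4) + 1j * rng.randn(6, 4)
u, s, vh = svd_via_normal_eig(a)
assert np.allclose(u @ np.diag(s) @ vh, a)
assert np.allclose(s, np.linalg.svd(a, compute_uv=False))
```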



-- Lou Pecora,   my views are my own.


  



[Numpy-discussion] SVD error in Numpy. Bug?

2008-03-18 Thread Lou Pecora
I have run into a failure of complex SVD in numpy
(version='1.0.3.1').  The error is:

  File
/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/linalg/linalg.py,
line 767, in svd
raise LinAlgError, 'SVD did not converge'
numpy.linalg.linalg.LinAlgError: SVD did not converge

The matrix is complex 36 x 36. Very slight changes in
the matrix components (~ one part in 10^4) are enough
to make the error go away.  I have never seen this
before and it goes against the fact (I think it's a
mathematical fact) that SVD always exists.  A
hard-coded upper limit on the iteration number allowed
somewhere in the SVD C code seems to be the problem.
Read on.

A google search turned up a few messages, included
this one from 2002 where the same error occurred
infrequently, but randomly (it seemed):

--
One online message in August 2002:

Ok, so after several hours of trying to read that
code, I found
the parameter that needs to be tuned.  In case anyone
has this
problem and finds this thread a year from now, here's
your hint:

File: Src/dlapack_lite.c
Subroutine: dlasd4_
Line: 22562

There's a for loop there that limits the number of
iterations to
20.  Increasing this value to 50 allows my matrix to
converge.
I have not bothered to test what the best value for
this number
is, though.  In any case, it appears the number just
exists to
prevent infinite loops, and 50 isn't really that much
closer to
infinity than 20  (Actually, I'm just going to set
it to 100
so I don't have to think about it ever again.)

Damian Menscher
-- 
-=#| Physics Grad Student  SysAdmin @ U Illinois
Urbana-Champaign |#=-
-=#| 488 LLP, 1110 W. Green St, Urbana, IL 61801
Ofc:(217)333-0038 |#=-
-=#| 1412 DCL, Workstation Services Group, CITES
Ofc:(217)244-3862 |#=-
-=#| menscher at uiuc.edu www.uiuc.edu/~menscher/
Fax:(217)333-9819 |#=-
--

I have looked in Src/dlapack_lite.c and line 22562 is
no longer a line that sets a max. iterations
parameter.  There are several set in the file, but
that code is hard to figure (sort of a Fortran-in-C
hybrid).  

Here's one, for example:

maxit = *n * 6 * *n;   // Line 887

I have no idea which parameter to tweak.  Apparently
this error is still in numpy (at least to my version).
Does anyone have a fix?  Should I start a ticket (I
think this is what people do)?  Any help appreciated.

I'm using a Mac Book Pro (Intel chip), system 10.4.11,
Python 2.4.4.




-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] C Extensions, CTypes and external code libraries

2008-02-14 Thread Lou Pecora

--- David Cournapeau [EMAIL PROTECTED]
wrote:

 On Wed, 2008-02-13 at 08:20 -0800, Lou Pecora wrote:
  Yes, a good question.  Two reasons I started off
 with
  the static library.  One is that Gnu instructions
  claimed the dynamic library did not always build
  properly on the Mac OS X.
 
 If true, that's a good argument. I don't know the
 state of libtool of
 mac os X (the part of autotools which deals with
 building libraries in a
 cross platform way). Given the history of apple with
 open source, I
 would not be surprised if the general support was
 subpar compared to
 other unices.

I ran the GSL install again and allowed the dynamic
lib to be built.  It worked fine so maybe Apple has
done better lately.  I should have tried it right from
the beginning.  Anyway, I have a make script that
seems to work fine, and I can link my shared lib to the
GSL library or, I would guess, any other library I
have.  Thanks, again for your helpful suggestions.

I will put up my code (it's small) for others to see
on this list in a separate message.  That might help
others not to make the same mistakes I did.

 I don't know what kind of applications you are
 developing, but taking
 care of the time to load the application because of
 the huge number of
 symbols seems like really premature optimization to
 me. That's the kind
 of problems you don't see if your applications are
 not huge (or
 developed with C++, which put a huge pressure on the
 linker/loader tools
 by its very nature).
 
 Also, note that all modern OS (this includes even
 windows since NT) do
 not load the whole shared library in memory, and
 that two applications
 needing the GSL will share the same version in
 memory. The same
 physical page of a shared library can be "mapped"
 into different
 address spaces (for different processes). I use
 "mapped", because that's a
 huge over-simplification, and that's where it
 reaches my own
 understanding of the thing. This sharing cannot
 happen for static
 libraries.

Good advice. You are right.  My code is not large and
it loads fast.  




-- Lou Pecora,   my views are my own.


  



[Numpy-discussion] Example: How to use ctypes and link to a C library

2008-02-14 Thread Lou Pecora
I successfully compiled a shared library for use with
CTypes and linked it to an external library (Gnu
Scientific Library) on Mac OS X 10.4. I hope this
helps Mac people and anyone else who wants to use
CTypes to access their own C extensions and use other
C libraries in the process.  I want to thank several
people on this list who gave me many helpful
suggestions and asked me good questions.  I also want
to thank the several people who kept nudging me to try
CTypes even though I was reluctant.  It is much easier
than programming an extension all in C.   

Below are 4 files that enable building of a C shared
library in Mac OS X (10.4) that can be used with
CTypes to call a function from the Gnu Scientific
Library (a Bessel function program gsl_sf_bessel_J0). 
You can see that the idea is pretty simple.  The code
requires that you have ctypes (in site-packages) and
GSL (dylib version in /usr/local/lib) or your desired
C library installed.  I suspect on other platforms
what will be different will be the make file.  I do
not know enough to provide Linux or Windows versions. 
I'm sorry.  

Note:  This works best if the libraries are shared
(e.g. the GSL library to use is the dylib version). 
That way only the code that's needed is loaded when
the C functions are called from python.

Comments welcome. Of course, I am responsible for any
and all mistakes.  So, I make no guarantees or
warranties.  These are examples and should not be used
where loss of property, life, or other dangers exist.

# Source code 'bess.c' ==

#include <stdio.h>
#include "bess.h"
#include <gsl/gsl_sf_bessel.h> /* Must include the
header to define the function for compiler */


/*  test fcns - */

#ifdef __cplusplus
extern "C" {
#endif

double J0_bess (double x) {
/* Call the GSL Bessel function order 0 of the first
kind */
  double y = gsl_sf_bessel_J0 (x);
/* Print the value right here */
  printf ("J0(%g) = %.18e\n", x, y);
  return y;
}

#ifdef __cplusplus
}
#endif

# Header file 'bess.h'  =

/*  Prototypes  */

#ifdef __cplusplus
extern "C" {
#endif

double J0_bess(double x);

#ifdef __cplusplus
}
#endif

# Make file 'bess.mak'  ===

#  Link to existing library in this directory

bess.so:  bess.o  bess.mak
	gcc -bundle -flat_namespace -undefined suppress -o bess.so  bess.o  -lgsl

#  gcc C compile --
bess.o:  bess.c bess.h bess.mak
	gcc -c bess.c -o bess.o

# Python file 'bess.py'  ===

#!/usr/local/bin/pythonw

import numpy as N
import ctypes as C

# Put the name of your library in place of 'bess.so'
and the path to 
# it in place of the path below in load_library 
_bess = N.ctypeslib.load_library('bess.so',
'/Users/loupecora/Code_py/test_folder/ctypes_tests/test3ctypes/simplelink-GSL/')
_bess.J0_bess.restype = C.c_double
_bess.J0_bess.argtypes = [C.c_double]
def fcn_J0(x):
return _bess.J0_bess(x)
x = 0.2
y = fcn_J0(x)
print "x, y: %e  %.18e" % (x, y)

# Typical output ===
# The first line is printed from the shared library
function J0_bess
# The second line is from the python code that called
the shared lib. function

J0(0.2) = 9.900249722395765284e-01
x, y: 2.00e-01  9.900249722395765284e-01





-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] C Extensions, CTypes and external code libraries

2008-02-13 Thread Lou Pecora

--- David Cournapeau [EMAIL PROTECTED] wrote:

 But the real question is : if you are concerned with
 code bload, why
 using static lib at all ? Why not using shared
 library, which is
 exactly designed to solve what you are trying to do
 ?
 cheers,
 David

Yes, a good question.  Two reasons I started off with
the static library.  One is that Gnu instructions
claimed the dynamic library did not always build
properly on the Mac OS X.  So I just built the static
GSL and figured if I got that to link up to my code, I
could then spend some time trying the dynamic build. 
The other reason is that I am just learning this and I
am probably backing into the right way to do this
rather than starting right off with the right way. 
Maybe my worries about bloat and (even more) time to
load are not important for the GSL and the code will
load fast enough and not take up too much in resources
to matter.  

Later today I will try to build the dynamic version of
GSL and see what that yields.  If I get it I will link
to that as you suggest.

Thanks, again.  Your suggestions have moved me along
nicely.






-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] C Extensions, CTypes and external code libraries

2008-02-13 Thread Lou Pecora

--- David Cournapeau [EMAIL PROTECTED]
wrote:

 Oh, I may have misunderstood what you are trying to
 do then. You just 
 want to call a shared library from another shared
 library ? This is 
 possible on any platform supporting shared library
 (including but not 
 limited to mac os x, windows, linux, most not
 ancient unices).
[cut]

David,

First, thanks very much for all the information.  I am
still digesting it, but you gave a clear explanation
about the difference between shared and dynamic
libraries on the Mac.  

I tried some of your compile/like commands, but the
Mac gcc did not understand some things like  -Bstatic
and -shared.  It seems to want to make bundles. I
guess your code was a Linux version which the Mac
doesn't like.   But encouraged by your help, I got the
following make file to work:

#  Library make ---
mysharedlib.so:  mysharedlib.o  mysharedlib.mak
	gcc -bundle -flat_namespace -undefined suppress -o mysharedlib.so  mysharedlib.o \
	fcnlib.a

#  gcc C compile --
mysharedlib.o:  mysharedlib.c mysharedlib.h mysharedlib.mak
	gcc -c mysharedlib.c -o mysharedlib.o

In the above fcnlib.a is a simple static library I
made before using the above make.  This created the
shared library mysharedlib.so which I imported and
handled with CTypes.  Calling a function in fcnlib.a
from python worked.  

A possible downside is that the shared library
contains *all* of the fcnlib.  I examined it using nm
mysharedlib.so.  That showed that all the functions
of fcnlib.a were present in mysharedlib.so even though
the function in mysharedlib.c only called one function
of fcnlib.a.  I don't know how much of a burden this
will impose at run time if do this with GSL. It would
be nice to only pick up the stuff I need. But at least
I have workable approach.

Thanks for your help.  Comments welcome.




-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] C Extensions, CTypes and external code librarie

2008-02-12 Thread Lou Pecora

--- Albert Strasheim [EMAIL PROTECTED] wrote:

 Hello,
 
 Sounds about right. I don't know the Mac that well
 as far as the
 various types of dynamic libraries go, so just check
 that you're
 working with the right type of libraries, but you've
 got the right
 idea.
 
 Regards,
 
 Albert

Thanks, Albert.  I'll report back to this thread when
I give it a try.




-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] C Extensions, CTypes and external code libraries

2008-02-12 Thread Lou Pecora

--- Jon Wright [EMAIL PROTECTED] wrote:

 Lou Pecora wrote:
  ...  This appears to be the way
  static and shared libraries work, especially on
 Mac OS
  X, maybe elsewhere.
 
 Have you tried linking against a GSL static library?
 I don't have a mac, 
 but most linkers only pull in the routines you need.
 For example, using 
 windows and mingw:
 
 #include stdio.h
 #include gsl/gsl_sf_bessel.h
 int main (void)
 {  double x = 5.0;
 double y = gsl_sf_bessel_J0 (x);
 printf (J0(%g) = %.18e\n, x, y);
 return 0; }
 
 ...compiles to a.exe which outputs:
 
 J0(5) = -1.775967713143382900e-001
 

Yes, I know about this approach if I am making an
executable.  But I want to make my code into a shared
library (my code will not have a main, just the
functions I write) and, if possible, let my code call
the GSL code it needs from the C function I write
(i.e. no python interface).  If what you did can be
done for a shared library, then that would be great. 
However, I am ignorant of how to do this.  I will try
to make my shared library using gcc and then add the
GSL library using the -l option as someone else
suggested.  Maybe that will work.  I'll report back. 
I have been searching for info on the right approach
to this on the Mac, since, as I understand, Mac OS X
does make a distinction between shared libraries and
dynamic libraries (which I don't understand fully).  

Thanks.


-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] C Extensions, CTypes and external code libraries

2008-02-12 Thread Lou Pecora


Albert Strasheim [EMAIL PROTECTED] wrote: Hello,


I only quickly read through the previous thread, but I get that idea
that what you want to do is to link your shared library against the
the GSL shared library and then access your own library using ctypes.

If done like this, you don't need to worry about wrapping GSL or
pulling GSL code into your own library.

As far as I know, this works exactly like it does when you link an
executable against a shared library.

If distutils doesn't allow you to do this easily, you could try using
SCons's SharedLibrary builder instead.

Regards,

Albert
___


Albert,

Yes, I think you got the idea right.  I want to call my own C code using CTypes 
interface, then from within my C code call GSL C code, i.e. a C function 
calling another C function directly.  I do *not* want to go back out through 
the Python interface.  So you are right, I do not want to wrap GSL.   

It sounds like I can just add  something like  -lnameofGSLdylib  (where I put 
in the real name of the GSL library after the -l) in my gcc command to make my 
shared lib.  Is that right?
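That is the idea; a sketch of the two commands (library name and install path are assumptions, adjust for your setup):

```shell
# compile, then link the OS X bundle against the GSL dynamic library
gcc -c mycode.c -o mycode.o
gcc -bundle -flat_namespace -undefined suppress -o mycode.so mycode.o \
    -L/usr/local/lib -lgsl -lgslcblas
```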

Thanks for your help.




-- Lou Pecora,   my views are my own.
   


[Numpy-discussion] C Extensions, CTypes and external code libraries

2008-02-12 Thread Lou Pecora

First, thanks to all who answered my questions about
trying to use a large library with CTypes and my own
shared library.  The bottom line seems to be this: 
There is no way to incorporate code external to your
own shared library.  You have to either pull out the
code you want from the static library's source
(painful) or you must just include the whole library
(huge!) and make it all one big shared library.  

Did I get that right?  If so, it's a sad statement
that makes shared libraries harder to write and works
against the reuse of older established code bases.  I
am not criticizing CTypes.  This appears to be the way
static and shared libraries work, especially on Mac OS
X, maybe elsewhere.

I'd really like to be wrong about this and I will
follow up on some of the suggested reading you all
gave me.   

Thanks, again.




-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] CTypes: How to incorporate a library with shared library module?

2008-02-12 Thread Lou Pecora
Damian,  Lots of good info there.  Thanks very much. 
-- Lou

--- Damian Eads [EMAIL PROTECTED] wrote:

 Dear Lou,
 
 You may want to try using distutils or setuputils,
 which makes compiling 
 extensions much easier. It does the hard work of
 finding out which flags 
 are needed to compile extensions on the host
 platform. There are many 
 examples on the web on how to use distutils to build
 C extensions 
 (http://docs.python.org/ext/building.html).

[cut]




-- Lou Pecora,   my views are my own.


  



[Numpy-discussion] CTypes: How to incorporate a library with shared library module?

2008-02-11 Thread Lou Pecora
I will be writing some C code that I will compile into a shared library (.so) 
on my MacOSX computer to use with ctypes.  That code will be calling code from 
a (big) scientific numerical library (Gnu Scientific Library - GSL) to crunch 
the numbers.  But I don't see how I incorporate that code into the  .so file so 
my shared code can get to it when I call it from Python with ctypes.  I do 
_not_ want to make the GSL callable from Python, only from my own C module.  I 
suspect this isn't a ctypes question in particular.  I'm hoping to avoid having 
to turn the whole GSL into a shared library and loading it just to use a few 
functions.  Or avoid having to track down which functions my code will call 
(all the way down the trees) and rip that out to add to my own shared lib.  
There's got to be a better way to make use of big, useful libraries when 
speeding up python with shared lib extension.  I hope. 
   
  Maybe there are ways to do this using a gcc or g++ option.  Right now my make 
file is simply
   
  gcc -bundle -flat_namespace -undefined suppress -o mycode.so mycode.o
   
  gcc -c mycode.c  -o mycode.o
   
  Any hints appreciated.  I will continue googling. Nothing so far.  Thanks.
   
   


-- Lou Pecora,   my views are my own.
   


Re: [Numpy-discussion] Numpy and C++ integration...

2008-02-05 Thread Lou Pecora

--- Gael Varoquaux [EMAIL PROTECTED]
wrote:

Re:   ctypes

 I don't use windows much. One thing I liked about
 ctypes when I used it,
 was that what I found it pretty easy to get working
 on both Linux and
 Windows.
 
 Gaël


I got ctypes to install easily on Mac OS X 10.4.11 and
it passed the test using "python setup.py test".  Now I
have to find some examples on using it and learn to
compile shared libraries (.so type I guess).
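The usage pattern is short once a library loads; the sketch below uses the system math library so it runs anywhere, but a home-built .so is handled the same way (load, declare restype/argtypes, call):

```python
import ctypes
import ctypes.util

# Load a shared library; for your own code this would be
# ctypes.CDLL('/path/to/mycode.so') instead of the system libm.
libm = ctypes.CDLL(ctypes.util.find_library('m'))

# Declare the C prototype so ctypes converts floats correctly.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

assert abs(libm.cos(0.0) - 1.0) < 1e-12
```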


 

-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] Numpy and C++ integration...

2008-02-05 Thread Lou Pecora
Hmmm... last time I tried ctypes it seemed pretty
Windows oriented and I got nowhere.  But enough people
have said how easy it is that I'll give it another
try.

Believe me, I'd be happy to be wrong and find a nice
easy way to pass NumPy arrays and such.  Thanks.

-- Lou Pecora


--- Gael Varoquaux [EMAIL PROTECTED]
wrote:

 On Tue, Feb 05, 2008 at 09:15:29AM +0100, Sebastian
 Haase wrote:
  Can ctypes do this ?
 
 No. Ctypes is only a way of loading C (and not C++)
 libraries in Python.
 That makes it very simple, but not very powerful.
 Gaël



  



[Numpy-discussion] New to ctypes. Some problems with loading shared library.

2008-02-05 Thread Lou Pecora
I got ctypes installed and passing its own tests.  But
I cannot get the shared library to load.  I am using
Mac OS X 10.4.11, Python 2.4 running through the
Terminal.

I am using Albert Strasheim's example on
http://scipy.org/Cookbook/Ctypes2 except that I had to
remove the defined 'extern' for FOO_API since the gcc
compiler complained about two 'externs' (I don't
really understand what the extern does here anyway). 

My make file for generating the library is simple,

#  Link ---
test1ctypes.so:  test1ctypes.o  test1ctypes.mak
	gcc -bundle -flat_namespace -undefined suppress -o test1ctypes.so  test1ctypes.o

#  gcc C compile --
test1ctypes.o:  test1ctypes.c test1ctypes.h test1ctypes.mak
	gcc -c test1ctypes.c -o test1ctypes.o

This generates the file test1ctypes.so.  But when I
try to load it

import numpy as N
import ctypes as C

_test1 = N.ctypeslib.load_library('test1ctypes', '.')

I get the error message,

OSError: dlopen(/Users/loupecora/test1ctypes.dylib,
6): image not found

I've been googling for two hours trying to find the
problem or other examples that would give me a clue,
but no luck.  

Any ideas what I'm doing wrong?  Thanks for any clues.






-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] New to ctypes. Some problems with loading shared library.

2008-02-05 Thread Lou Pecora


 Well, it's looking for test1ctypes.dylib, which I
 guess is a MacOSX
 shared library?  Meanwhile, you made a
 test1ctypes.so, which is why it
 can't find it.  You could try using this instead:
 
 _test1 = N.ctypeslib.load_library('test1ctypes.so',
 '.')
 
 or try to get gcc to make a test1ctypes.dylib.
 
 Ryan
 

Thanks, Ryan.  You were on the right track.  I changed
the name of the file in the load_library call to
test1ctypes.so and I had to put in the full path to
the file as the second argument.  The default path was
to my home directory.  I could probably change paths
with a python os call, too.  

Anyway, IT WORKED!  How 'bout that?  One simple
example down and now on to more complex things.
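For anyone hitting the same "image not found" error, a minimal sketch of the loading pattern (using the system libm as a stand-in, since test1ctypes.so only exists on my machine) would be:

```python
import ctypes
import ctypes.util

# Load a shared library by an explicit name or path; ctypes.util.find_library
# resolves the platform-specific file name (libm.dylib on Mac, libm.so on Linux),
# which avoids the "image not found" problem from a wrong extension or path.
libm = ctypes.CDLL(ctypes.util.find_library('m'))

# Declare the C signature so ctypes converts arguments and results correctly.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0
```

The same restype/argtypes declarations apply once your own .so loads.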

Thanks, again.




-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] Numpy and C++ integration...

2008-02-04 Thread Lou Pecora

--- Matthieu Brucher [EMAIL PROTECTED]
wrote:
 
 Whatever solution you choose (Boost.Python, ...),
 you will have to use the
 Numpy C API at least a little bit. So Travis' book
 is a good start. As Gaël
 told you, you can use ctypes if you wrap manually
 every method with a C
 function and recreate the class in Python.
 This can be avoided, but you'll have to use more
 powerful tools. I would
 advice SWIG (see my blog for some examples with C++
 and SWIG).
 
 Matthieu


Ah, yes, I will also recommend Travis' book.




-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] Numpy and C++ integration...

2008-02-04 Thread Lou Pecora
Dear Mr. Fulco , 

This may not be exactly what you want to do, but I
would recommend using the C API and then calling your
C++ programs from there (where the interface functions to
the C++ code are compiled inside an extern "C" { ... }
block).  I will be doing this soon with my own project.
Why?  Because the C interface is doable and, I think,
simple enough that it is better to take the Python to
C++ in two steps.  Anyway, worth a look.  So here are
two links that show how to use the C API:

http://www.scipy.org/Cookbook/C_Extensions  - A short
intro, this also has documentation links

http://www.scipy.org/Cookbook/C_Extensions/NumPy_arrays?highlight=%28%28%28-%2A%29%28%5Cr%29%3F%5Cn%29%28.%2A%29CategoryCookbook%5Cb%29
 - This is an article I wrote last year for the
SciPy.org site and I go into a lot of detail with a
lot of examples on how you pass and handle Numpy
arrays.  I think it is (mostly) right and works well
for me.

One warning (which I also talk about in my tutorial)
is to make sure your NumPy arrays are contiguous,
i.e. the array components are stored in order in one
memory block.  That makes things easier on the C/C++ side.
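As a quick check (my own sketch, not from the tutorial), you can test and fix contiguity from Python before handing an array to C:

```python
import numpy as np

a = np.arange(12.0).reshape(3, 4)[:, ::2]  # strided slice: not one memory block
print(a.flags['C_CONTIGUOUS'])             # False

b = np.ascontiguousarray(a)                # copies into a single C-ordered block
print(b.flags['C_CONTIGUOUS'])             # True
```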


--- Vince Fulco [EMAIL PROTECTED] wrote:

 Dear Numpy Experts-  I find myself working with
 Numpy arrays and
 wanting to access *simple* C++ functions for time
 series returning the
 results to Numpy.  As I am a relatively new user of
 Python/Numpy, the
 number of paths to use in incorporating C++ code
 into one's scripts is
 daunting.  I've attempted the Weave app but can not
 get past the
 examples.  I've also looked at all the other choices
 out there such as
 Boost, SIP, PyInline, etc.  Any trailheads for the
 simplest approach
 (assuming a very minimal understanding of C++) would
 be much
 appreciated.  At this point, I can't release the
 code however for
 review.  Thank you.
 
 -- 
 Vince Fulco

-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] Numpy and C++ integration...

2008-02-04 Thread Lou Pecora

--- Christopher Barker [EMAIL PROTECTED] wrote:

 Lou Pecora wrote:
  I
  would recommend using the C API
 
 I would recommend against this -- there is a lot of
 code to write in 
 extensions to make sure you do reference counting,
 etc, and it is hard 
 to get right.

Well, fair enough to some extent, but I didn't find it
so hard after I did a few.   I will speak for myself
here.  The reason I went to the C API is because I
tried several of the routes you suggest and I could
not get any of them to work.  And you're right, the C
API is boilerplate.  That also argues for using it.  

So, for those looking for speed up through some
external C or C++ code, I would say (trying to be fair
here), try what Chris recommends below, if you want,
but IMHO, none of it is trivial.  If you get it to
work, great.  If not, you have the fall back of the C
API.

 ctypes
 pyrex
 SWIG
 SIP
 Boost::python

I tried all of these except SIP and got nowhere.  So
maybe others will be a lot smarter than I.


 Christopher Barker, Ph.D.
 Oceanographer



-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] argsort memory problem?

2008-01-29 Thread Lou Pecora
This still occurs in numpy 1.0.3.1, so it must have been
fixed between that and your 1.0.4-5 version. 

By the way, the memory problem crashes my Intel
MacBook Pro (system 10.4.11) with the gray screen and
black dialog box telling me to restart my computer.  A
very UN-unix like and UN-Mac like way of handling a
memory problem IMHO.  Let us Mac people not be too
smug.

-- Lou Pecora


--- Alexandre Fayolle [EMAIL PROTECTED]
wrote:

 On Tue, Jan 29, 2008 at 02:58:15PM +0100, Oriol
 Vendrell wrote:
  Hi all,
  
  I've noticed something that looks like an odd
 behaviour in array.argsort().
  
  
  # test1 -
  from numpy import array
  while True:
  a=array([8.0,7.0,6.0,5.0,4.0,2.0])
  i=a.argsort()
  # ---
  
  # test2 -
  from numpy import array
  a=array([8.0,7.0,6.0,5.0,4.0,2.0])
  while True:
  i=a.argsort()
  # ---
  
  
  test1 runs out of memory after a few minutes, it
 seems that in each cycle
  some memory is allocated and never returned back.
  test2 runs fine until killed.
  
  I'm unsure if I'm missing something or if this
 could be a bug. I'm using
  numpy 1.0.1 with python 2.4.4 in a debian stable
 system.
 
 Certainly a bug, but it has been fixed and I cannot
 reproduce in debian
 sid (using 1.0.4-5) 
 
 -- 
 Alexandre Fayolle 



  



Re: [Numpy-discussion] argsort memory problem?

2008-01-29 Thread Lou Pecora
Hmmm... Interesting.  I am using Python 2.4.4.  It
would be nice to have other Mac people with same/other
Python and numpy versions try the argsort bug code. 


-- Lou Pecora

--- Francesc Altet [EMAIL PROTECTED] wrote:

 On Tuesday 29 January 2008, Lou Pecora wrote:
  This still occurs in numpy 1.0.3.1  so must have
 been
  fixed between that and your 1.0.4-5 version.
 
 It works here and I'm using NumPy 1.0.3, Python
 2.5.1 on a Ubuntu 7.10 / 
 Pentium4 machine.
 
  By the way the memory problem crashes my Intel Mac
  Book Pro (system 10.4.11) with the gray screen and
  black dialog box telling me to restart my
 computer.  A
  very UN-unix like and UN-Mac like way of handling
 a
  memory problem IMHO.  Let us Mac people not be too
  smug.
 
 Um, it would be nice if some other Mac-user can
 reproduce your problem. 
 Perhaps you are suffering some other problem that
 can be exposed by 
 this code snip.
 
 Cheers,
 
 -- 
 0,0   Francesc Altet     http://www.carabos.com/


-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] compiling c extension

2008-01-21 Thread Lou Pecora
Did you include arrayobject.h and  call import_array()
in the initialization function, after the call to
Py_InitModule() ?

--- Danny Chan [EMAIL PROTECTED] wrote:

 Hi!
 I am trying to compile a c extension that uses numpy
 arrays on windows. I can compile the extension file,
 but once I get to the linking stage I get unresolved
 references to PyArray_API and import_array. Can
 anyone help me with the correct linker flags? 
 
 Thanks, Danny

[cut]
 more undefined references to `PyArray_API' follow

build\temp.win32-2.5\Release\.\python\libaid_wrap.o:libaid_wrap.c:(.text+0xc216):
 undefined reference to `import_array'
 collect2: ld returned 1 exit status
 error: command 'gcc' failed with exit status 1



-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] numpy : your experiences?

2007-11-20 Thread Lou Pecora

--- Rahul Garg [EMAIL PROTECTED] wrote:

 hi.
 
 thanks for your responses .. so it looks like
 python/numpy is used more
 for gluing things together or doing things like
 postprocessing. is
 anyone using it for core calculations .. as in long
 running python
 calculations?
 i used numpy myself for some nonlinear dynamics and
 chaos related
 calculations but they were usually very short
 running only for a few
 seconds at a time.
 thanks,
 rahul

I've used Python a little to solve ODEs for chaotic
systems.  More for time series analysis (attractor
reconstruction and associated data analysis problems).
 These ran rather fast on the order of seconds or
minutes.

Lately, I've been coding up a package to solved
Schrodinger's Equation for 2D arbitrarily shaped,
infinite wall potentials.  I've settled on a Boundary
Element Approach to get the eigenfunctions in these
systems.  The goal is to study phenomena associated
with quantum chaos in the semiclassical regime.  These
calculations tend to run on the order of 10s of
minutes to an hour.  I eventually will be writing C
extensions for the slower running functions (bottle
necks).  I've done that before and it's a big help for
speed.  




-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] Problem with numpy.linalg.eig?

2007-11-12 Thread Lou Pecora
Works fine on my computer (Mac OS X 10.4), Python 
2.4. Runs in a second or so.

-- Lou Pecora

---Peter wrote:

Hi all,

The following code calling numpy v1.0.4 fails to
terminate on my machine, which was not the case with
v1.0.3.1

from numpy import arange, float64
from numpy.linalg import eig
a = arange(13*13, dtype = float64)
a.shape = (13,13)
a = a%17
eig(a)


Regards,
Peter






Re: [Numpy-discussion] arange and floating point arguments

2007-09-14 Thread Lou Pecora
I thought this is what the linspace function was
written for in numpy.  Why not use that?  It works
just as you would want, always including the final
point.
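A quick illustration (my own minimal example):

```python
import numpy as np

# arange with a float step excludes the stop value and can gain or lose an
# element from rounding; linspace takes a count and includes the endpoint.
a = np.arange(0.0, 1.0, 0.1)   # 10 points, 1.0 not included
b = np.linspace(0.0, 1.0, 11)  # 11 evenly spaced points, 1.0 included

print(b[-1])  # 1.0
```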


--- Joris De Ridder [EMAIL PROTECTED]
wrote:

 Might using
 
 min(ceil((stop-start)/step),
 ceil((stop-start)/step-r))
 
 with r = finfo(double).resolution instead of
 ceil((stop-start)/step)  
 perhaps be useful?
 
 Joris



-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] Vector magnitude?

2007-09-05 Thread Lou Pecora

--- Robert Kern [EMAIL PROTECTED] wrote:

 
 Besides constructing the Euclidean norm itself (as
 shown by others here), you
 can also use numpy.linalg.norm() to calculate any of
 several different norms of
 a vector or a matrix:

Right.  linalg.norm also gives the proper magnitude of
complex vectors
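For a complex vector the norm uses the conjugate, so the magnitude comes out real and non-negative (a minimal example of my own):

```python
import numpy as np

v = np.array([3.0 + 4.0j, 0.0])
print(np.linalg.norm(v))            # 5.0 -- uses |z|**2 for complex entries
print(np.sqrt(np.vdot(v, v).real))  # same result via the conjugate dot product
```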



-- Lou Pecora,   my views are my own.
---
Great spirits have always encountered violent opposition from mediocre minds. 
-Albert Einstein


   



Re: [Numpy-discussion] numpy requires tuples for indexing where Numeric allowed them?

2007-06-07 Thread Lou Pecora
Hi, Jim,

Just wondering why you would use item() rather than
index in brackets, i.e.   a[i] ?  The latter works
well in numpy.  But maybe I'm missing something.

-- Lou Pecora

--- Jim Kleckner [EMAIL PROTECTED] wrote:

 I'm fighting conversion from Numeric to numpy.
 
 One change that doesn't seem documented is that I
 used to be able to 
 select items using lists and now it seems that I
 have to convert them to 
 tuples.  Is that correct and is there a function
 buried in there that 
 will accept a list for indices?
 
 Any reason that item() can't take a list?
 
 The weird thing is that it doesn't blow up right
 away when a list is 
 passed in an array ref but rather returns something
 I don't expect.
 
 I work with lists rather than the implicit tuples of
 a function call 
 because then I can work with arbitrary dimensions.
 
 In the meantime, I guess I can just convert the list
 to an otherwise 
 unnecessary tuple.
 
 Jim
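To make the difference concrete (my own minimal example): a tuple indexes one element along successive axes, while a list triggers fancy indexing over the first axis, which is the "something I don't expect" behavior:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
idx = [1, 2]          # index list built up for arbitrary dimensions

print(a[tuple(idx)])  # 6 -- the single element a[1, 2]
print(a[idx].shape)   # (2, 4) -- fancy indexing: selects rows 1 and 2
```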


-- Lou Pecora,   my views are my own.
---
Great spirits have always encountered violent opposition from mediocre minds. 
-Albert Einstein


   



Re: [Numpy-discussion] SciPy Journal

2007-06-07 Thread Lou Pecora
Luke,  I'd love to see that code and the associated
article.  I do a lot of NLD.

--- Luke [EMAIL PROTECTED] wrote:

 I think this Journal sounds like an excellent idea. 
 I have some
 python code that calculates the Lyapunov
 Characteristic Exponents (all
 of them), for a dynamical system that I would be
 willing to write
 about and contribute.



-- Lou Pecora,   my views are my own.
---
Great spirits have always encountered violent opposition from mediocre minds. 
-Albert Einstein


  



Re: [Numpy-discussion] [SciPy-user] SciPy Journal

2007-05-31 Thread Lou Pecora
I agree with this idea. Very good.  Although I also
agree with Anne Archibald that requiring code to be
submitted with each journal article is not a good
idea.   I would be willing to contribute an article on
writing  C extensions that use numpy arrays.  I
already have something on this on the SciPy cookbook,
but I bet it would reach more people in a journal.

I also suggest that articles on using packages like
matplotlib/pylab for scientific purposes also be
included.



-- Lou Pecora,   my views are my own.
---
Great spirits have always encountered violent opposition from mediocre minds. 
-Albert Einstein


   



Re: [Numpy-discussion] Question about Optimization (Inline and Pyrex)

2007-04-18 Thread Lou Pecora

--- Anne Archibald [EMAIL PROTECTED] wrote:

 I just
 took another look at
 that code and added a parallel_map I hadn't got
 around to writing
 before, too. I'd be happy to stick it (and test
 file) on the wiki
 under some open license or other (do what thou wilt
 shall be the
 whole of the law?). It's certainly not competition
 for ipython1,
 though, it's mostly to show an example of making
 threads easy to use.
 
 Anne

Please put the parallel map code on the Wiki.  I found
your first (obvious-parallel) example very helpful.



-- Lou Pecora,   my views are my own.
---
Great spirits have always encountered violent opposition from mediocre minds. 
-Albert Einstein



Re: [Numpy-discussion] Question about Optimization (Inline and Pyrex)

2007-04-17 Thread Lou Pecora
Now, I didn't know that.  That's cool because I have a
new dual core Intel Mac Pro.  I see I have some
learning to do with multithreading.  Thanks.

--- Anne Archibald [EMAIL PROTECTED] wrote:

 On 17/04/07, Lou Pecora [EMAIL PROTECTED]
 wrote:
  You should probably look over your code and see if
 you
  can eliminate loops by using the built in
  vectorization of NumPy.  I've found this can
 really
  speed things up.  E.g. given element by element
  multiplication of two n-dimensional arrays x and y
  replace,
 
  z=zeros(n)
  for i in xrange(n):
z[i]=x[i]*y[i]
 
  with,
 
  z=x*y   # NumPy will handle this in a vector
 fashion
 
  Maybe you've already done that, but I thought I'd
  offer it.
 
 It's also worth mentioning that this sort of
 vectorization may allow
 you to avoid python's global interpreter lock.
 
 Normally, python's multithreading is effectively
 cooperative, because
 the interpreter's data structures are all stored
 under the same lock,
 so only one thread can be executing python bytecode
 at a time.
 However, many of numpy's vectorized functions
 release the lock while
 running, so on a multiprocessor or multicore machine
 you can have
 several cores at once running vectorized code.
 
 Anne M. Archibald
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org

http://projects.scipy.org/mailman/listinfo/numpy-discussion
 


-- Lou Pecora,   my views are my own.
---
I knew I was going to take the wrong train, so I left early. 
--Yogi Berra



Re: [Numpy-discussion] Question about Optimization (Inline and Pyrex)

2007-04-17 Thread Lou Pecora
I get what you are saying, but I'm not even at the
Stupidly Easy Parallel level, yet.  Eventually. 
Thanks.

--- Anne Archibald [EMAIL PROTECTED] wrote:

 On 17/04/07, Lou Pecora [EMAIL PROTECTED]
 wrote:
  Now, I didn't know that.  That's cool because I
 have a
  new dual core Intel Mac Pro.  I see I have some
  learning to do with multithreading.  Thanks.
 
 No problem. I had completely forgotten about the
 global interpreter
 lock, wrote a little multithreading tool that ran my
 code in three
 different threads, and got just about a 2x speedup
 on a dual-core
 machine. Then someone reminded me about the GIL and
 I was puzzled...
 your results will certainly depend on your code, but
 I found it useful
 to have a little parallel-for-loop idiom for all
 those cases where
 parallelism is stupidly easy.
 
 Anne
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org

http://projects.scipy.org/mailman/listinfo/numpy-discussion
 


-- Lou Pecora,   my views are my own.
---
Great spirits have always encountered violent opposition from mediocre minds. 
-Albert Einstein



Re: [Numpy-discussion] Question about Optimization (Inline and Pyrex)

2007-04-17 Thread Lou Pecora
Very nice.  Thanks.  Examples are welcome since they
are usually the best to get up to speed with
programming concepts.

--- Anne Archibald [EMAIL PROTECTED] wrote:

 On 17/04/07, Lou Pecora [EMAIL PROTECTED]
 wrote:
  I get what you are saying, but I'm not even at the
  Stupidly Easy Parallel level, yet.  Eventually.
 
 Well, it's hardly wonderful, but I wrote a little
 package to make idioms like:

[cut]
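The package itself was cut from the archive; a rough equivalent of the parallel-for-loop idiom using only the standard library (my own sketch, not Anne's code) would be:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # numpy releases the GIL inside vectorized calls like np.dot,
    # so threads can genuinely overlap on a multicore machine.
    return np.dot(x, x)

chunks = [np.ones(1000) * i for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, chunks))

print(results)  # [0.0, 1000.0, 4000.0, 9000.0]
```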


-- Lou Pecora,   my views are my own.
---
Great spirits have always encountered violent opposition from mediocre minds. 
-Albert Einstein



Re: [Numpy-discussion] 2D Arrays column operations

2007-03-29 Thread Lou Pecora
Others can correct me, but I believe the ndarrays of
numpy are *not* stored like C arrays, but rather as a
contiguous memory chunk (there are some exceptions,
but ignore for the moment).  Then along with the data
is a structure that tells numpy how to address the
array's data memory using indices (+ other array
info).  In particular, the structure tells numpy the
dimension and the strides for each dimension so it can
skip along and pick out the right components when you
write   c=a[:,1].

I'm sure Travis will correct me if I'm off, but I
think this is basically how numpy arrays operate.
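A small demonstration of that structure (my own example):

```python
import numpy as np

a = np.arange(12, dtype=np.int64).reshape(3, 4)
print(a.strides)  # (32, 8): bytes to jump for one step along each axis

c = a[:, 1]       # a view, not a copy -- built by reusing a's memory
print(c.strides)  # (32,): step a whole row to reach the next element
print(np.shares_memory(a, c))  # True
```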


--- Simon Berube [EMAIL PROTECTED] wrote:

 Awww, this is quite right. I kept using the a[0][:]
 notation and I
 assume I am simply pulling out single arrays from
 the array list.
 
 Thank you very much for the prompt reply. (And sorry
 for wasting your
 time :P)


-- Lou Pecora,   my views are my own.
---
I knew I was going to take the wrong train, so I left early. 
--Yogi Berra


 



Re: [Numpy-discussion] Where's append in NumPy??

2007-01-09 Thread Lou Pecora
That may be it.  I'll get a newer version.

--- Robert Kern [EMAIL PROTECTED] wrote:

 Lou Pecora wrote:
  After import numpy as N
  
  In [10]: print N.__version__
  1.1.2881
  
  does that look right as a recent version?
 
 No, that's very old. The version number had briefly
 gotten bumped to 1.1 in the
 repository, but we backed that out quickly.
 
 -- 
 Robert Kern



-- Lou Pecora
  My views are my own.



Re: [Numpy-discussion] Where's append in NumPy?? Can't Installing NumPy.

2007-01-09 Thread Lou Pecora

--- Lou Pecora [EMAIL PROTECTED] wrote:

 That may be it.  I'll get a newer version.
 


No Luck.  

I downloaded numpy-1.0-py2.4-macosx10.4.dmg from the
MacOSX package site, but the installer kept telling me
there was nothing to install.  I removed the previous
NumPy and numpy.pth from the site packages and got the
same message.  How do I install numpy when another
version is there (apparently erroneously marked as a
later version)?



-- Lou Pecora
  My views are my own.



Re: [Numpy-discussion] Where's append in NumPy?? Can't Installing NumPy.

2007-01-09 Thread Lou Pecora
Hi, Chris,   Some answers below (sort of):

--- Christopher Barker [EMAIL PROTECTED] wrote:

 Lou,
 
 This is odd. I've always been able to do it. I sure
 wish Apple had 
 provided a decent package management system!

Amen!

 I have a dmg for 1.0.1 that seems to work well. I
 wonder where I got it?

I got my original NumPy that wouldn't install from the
MacPython site.  It was a package with the usual
install dialog.  Even wiping the existing NumPy from
site-packages didn't work.  Go figure.

Finally got NumPy installed using the tarball from
http://www.scipy.org/Download.  Did the usual,

sudo python setup.py install

stuff and that worked.  I now have NumPy 1.01 and it
has 'append'. 

For others information:  I had to update SciPy and
Matplotlib, too.  SciPy turned out *not* to be doable
from its tarball on http://www.scipy.org/Download.  I
got the following error (after a whole LOT of output
from compiling, etc.):

/usr/local/bin/g77 -g -Wall -undefined
dynamic_lookup...blah,
blah.../scipy/fftpack/_fftpack.so

/usr/bin/ld: can't locate file for: -lcc_dynamic
collect2: ld returned 1 exit status

/usr/bin/ld: can't locate file for: -lcc_dynamic
collect2: ld returned 1 exit status

error: Command /usr/local/bin/g77 -g -Wall -undefined
dynamic_lookup ...blah,
blah.../scipy/fftpack/_fftpack.so failed with exit
status 1

Exit 1


Whatever collect2 is, ld couldn't find it.  I finally
got SciPy installed from the superpack at the
http://www.scipy.org/Download site.  That seems to
work, giving me version 0.5.3dev for SciPy.

Matplotlib installed fine from tarball and the setup
stuff.

I hope that's of use to someone out there.  Each time
I upgrade something it's an adventure.  Thanks for the
feedback.




-- Lou Pecora
  My views are my own.



Re: [Numpy-discussion] Where's append in NumPy?? Can't Installing NumPy.

2007-01-09 Thread Lou Pecora
Ah, that does ring a bell.  Sigh.  I need to upgrade
my memory banks.  Sure is tough keeping these packages
in sync.  Thanks.  I'll check it out.

--- Robert Kern [EMAIL PROTECTED] wrote:

 Lou Pecora wrote:
 
  /usr/local/bin/g77 -g -Wall -undefined
  dynamic_lookup...blah,
  blah.../scipy/fftpack/_fftpack.so
  
  /usr/bin/ld: can't locate file for: -lcc_dynamic
 
 Yup. You can't use g77 with gcc 4. See the
 instructions I give in the thread
 recompiling needed for binary module after numpy
 1.0.
 
 -- 
 Robert Kern




Re: [Numpy-discussion] Where's append in NumPy??

2007-01-08 Thread Lou Pecora
Where is  append  in NumPy?  I see it in the numpy
manual (I paid for and have the latest version),  but
when I invoke it, Python complains that it doesn't
exist.  In IPython a query like  append?  gives
'append not found' (after importing numpy).  Other
numpy functions are there (e.g. nansum on same page in
the book).  What am I missing?
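In a current numpy the function does exist at top level (a minimal example of my own, assuming numpy 1.0 or later):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.append(a, [4, 5])  # returns a new array; the original is unchanged

print(b)  # [1 2 3 4 5]
print(a)  # [1 2 3]
```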



-- Lou Pecora
  My views are my own.



Re: [Numpy-discussion] Where's append in NumPy??

2007-01-08 Thread Lou Pecora
After import numpy as N

In [10]: print N.__version__
1.1.2881

does that look right as a recent version?

I still get

In [2]: N.append?
Object `N.append` not found.




-- Lou Pecora
  My views are my own.
