Re: [Numpy-discussion] statistical model fitting comparison

2008-10-21 Thread Kevin Jacobs [EMAIL PROTECTED]
On Tue, Oct 21, 2008 at 5:01 PM, Bruce Southey [EMAIL PROTECTED] wrote:

 I think you are on your own here as it is a huge chunk to chew!
 Depending on what you really mean by linear models is also part of that
 (the Wikipedia entry is amusing). Most people probably mean stats
 applications like lm in R and glm in SAS.

 I do have an interest and the code I have (for logistic regression) is
 probably just as easy to rewrite.



I also have some fairly robust code for logistic and polytomous regression
based on NumPy and SciPy.

-Kevin


Re: [Numpy-discussion] min() of array containing NaN

2008-08-13 Thread Kevin Jacobs [EMAIL PROTECTED]
On Wed, Aug 13, 2008 at 4:01 PM, Robert Kern [EMAIL PROTECTED] wrote:

 On Wed, Aug 13, 2008 at 14:37, Joe Harrington [EMAIL PROTECTED] wrote:
 On Tue, Aug 12, 2008 at 19:28, Charles R Harris
 [EMAIL PROTECTED] wrote:
 
 
  On Tue, Aug 12, 2008 at 5:13 PM, Andrew Dalke [EMAIL PROTECTED] wrote:
 
  On Aug 12, 2008, at 9:54 AM, Anne Archibald wrote:
   Er, is this actually a bug? I would instead consider the fact that
   np.min([]) raises an exception a bug of sorts - the identity of min
   is inf.
 
  snip
 
 
  Personally, I expect that if my array 'x' has a NaN then
  min(x) must be a NaN.
 
  I suppose you could use
 
  min(a,b) = (a + b - abs(a - b))/2
 
  which would have that effect.
 
 Or we could implement the inner loop of the minimum ufunc to return
 NaN if there is a NaN. Currently it just compares the two values
 (which causes the unpredictable results since having a NaN on either
 side of the < is always False). I would be amenable to that provided
 that the C isnan() call does not cause too much slowdown in the normal
 case.
 
  While you're doing that, can you do it so that if keyword nan=False it
  returns NaN if NaNs exist, and if keyword nan=True it ignores NaNs?

 I'm doing nothing. Someone else must volunteer.



I've volunteered to implement this functionality and will have some time
over the weekend to prepare and post a patch for further discussion.
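
For concreteness, a rough Python-level sketch of the semantics under
discussion (NaN in, NaN out); the actual patch would live in the C inner
loop of the minimum ufunc, so treat this only as a statement of intent:

import numpy as np

def nan_propagating_min(a):
    # sketch only: a min() that returns NaN whenever any element is NaN
    a = np.asarray(a)
    if np.isnan(a).any():
        return np.nan
    return a.min()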

-Kevin


Re: [Numpy-discussion] min() of array containing NaN

2008-08-12 Thread Kevin Jacobs [EMAIL PROTECTED]
On Tue, Aug 12, 2008 at 1:46 AM, Andrew Dalke [EMAIL PROTECTED]wrote:

 Here's the implementation, from lib/function_base.py

 def nanmin(a, axis=None):
     """Find the minimum over the given axis, ignoring NaNs."""
     y = array(a, subok=True)
     if not issubclass(y.dtype.type, _nx.integer):
         y[isnan(a)] = _nx.inf
     return y.min(axis)


No wonder nanmin is slow.  A C implementation would run at virtually the
same speed as min.  If there is interest, I'll be happy to code C versions.
A better solution would be to just support NaNs and Inf in the generic code.
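
For comparison, two pure-NumPy sketches of the same reduction (note that the
fmin ufunc, whose reduction skips NaNs, only appears in later NumPy releases,
so that variant is an assumption here):

import numpy as np

a = np.array([[1.0, np.nan], [3.0, 2.0]])

# the masking trick nanmin uses, without mutating a copy element by element:
m1 = np.where(np.isnan(a), np.inf, a).min(axis=1)

# where available, the fmin ufunc's reduction ignores NaNs directly:
m2 = np.fmin.reduce(a, axis=1)

assert np.allclose(m1, m2)   # both give [1.0, 2.0]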

-Kevin


Re: [Numpy-discussion] import numpy is slow

2008-07-31 Thread Kevin Jacobs [EMAIL PROTECTED]
On Thu, Jul 31, 2008 at 10:14 AM, Gael Varoquaux [EMAIL PROTECTED] wrote:

 On Thu, Jul 31, 2008 at 12:43:17PM +0200, Andrew Dalke wrote:
  Startup performance has not been a numpy concern.  It is a concern for
  me, and it has been (for other packages) a concern for some of my
  clients.

 I am curious, if startup performance is a problem, I guess it is because
 you are running lots of little scripts where startup time is big compared
 to run time. Did you think of forking them from an already started
 process? I had this same problem (with libraries way slower than numpy to
 load) and used os.fork with great success.


Start up time is an issue for me, but in a larger sense than just numpy.  I
do run many scripts, some that are ephemeral and some that take significant
amounts of time.  However, numpy is just one of many many libraries that I
must import, so improvements, even minor ones, are appreciated.

The moral of this discussion, for me, is that just because _you_ don't care
about a particular aspect or feature doesn't mean that others don't or
shouldn't.  Your workarounds may not be viable for me and vice-versa.  So
let's just go with the spirit of open source and encourage those motivated
to contribute to do so, provided their suggestions are sensible and do not
break code.

-Kevin


Re: [Numpy-discussion] [ANNOUNCE] Traits 3.0 has been released

2008-07-31 Thread Kevin Jacobs [EMAIL PROTECTED]
On Wed, Jul 30, 2008 at 9:25 PM, Dave Peterson [EMAIL PROTECTED]wrote:

 Hello,

 I am very pleased to announce that Traits 3.0 has just been released!


All of the URLs on PyPI pointing to Enthought seem to be broken (e.g.,
http://code.enthought.com/traits).  Can you give an example showing how
traits work?  I'm mildly intrigued, but too lazy to dig beyond the first
broken link.

Thanks,
-Kevin


Re: [Numpy-discussion] ANN: PyCuda

2008-06-22 Thread Kevin Jacobs [EMAIL PROTECTED]
On Sun, Jun 22, 2008 at 3:58 PM, Andreas Klöckner [EMAIL PROTECTED] wrote:

 PyCuda is based on the driver API. CUBLAS uses the high-level API. One
 *can* violate this rule without crashing immediately. But sketchy stuff
 does happen. Instead, for BLAS-1 operations, PyCuda comes with a class
 called GPUArray that essentially reimplements that part of CUBLAS.



Thanks for the clarification.  That makes perfect sense.  Do you have any
feelings on the relative performance of GPUArray versus CUBLAS?



 PyUblas used to be a dependency of PyCuda, but isn't any more. Where did
 you see that reference? It should probably be removed.



The first part of install.rst still says: "This tutorial will walk you
through the process of building PyUblas."

Thanks,
-Kevin


[Numpy-discussion] Detecting phase windings

2008-06-16 Thread [EMAIL PROTECTED]
I have a speed problem with the approach I'm using to detect phase wrappings
in a 3D data set. In my application, phaseField is a 3D array containing the
phase values of a field. In order to detect the vortices/phase windings at
each point, I check for windings on each of 3 faces of a 2x2x2 cube with the
following code:

# assumes: from numpy import unwrap, allclose, r_, array

def HasVortex(cube):
    ''' cube is a 2x2x2 array containing the phase values
        returns True if any of 3 orthogonal faces contains a phase wrapping
    '''
    # extract 3 cube faces - flatten to make indexing easier
    c1 = cube[0,:,:].flatten()
    c3 = cube[:,0,:].flatten()
    c5 = cube[:,:,0].flatten()

    # now order the phases in a circuit around each face, finishing in the
    # same corner as we began (cwi, defined elsewhere, is the circuit index
    # order, presumably [0, 1, 3, 2, 0])
    # Use numpy's phase unwrapping code, which has a default threshold of pi
    u1 = unwrap(c1[cwi])
    u3 = unwrap(c3[cwi])
    u5 = unwrap(c5[cwi])

    # check whether the final unwrapped value coincides with the value in
    # the cell we started at
    return not allclose(r_[u1[0],u3[0],u5[0]], r_[u1[4],u3[4],u5[4]])

xs, ys, zs = phaseField.shape
vortexArray = array([int(HasVortex(phaseField[x:x+2,y:y+2,z:z+2]))
                     for x in range(xs-1)
                     for y in range(ys-1)
                     for z in range(zs-1)]).reshape((xs-1,ys-1,zs-1))

Whilst this does what I want, it's incredibly slow. I'm wondering whether 
anyone has done this before, or can think of a better approach.

My thoughts about a better approach are to stack the values along 3 new 
dimensions making a 6D array and use unwrap along the 3 new dimensions. 
Something like the following may give you an idea of what I mean, but it's a 
toy example trying to extend 2D into 3D, rather than 3D into 6D, because I 
haven't come to grips with enough of numpy's axis reshaping and stacking tricks 
to avoid getting a severe headache when trying to generalize it:

a = array([[1.,3.], [6.,5.]])
b = np.dstack((a, roll(a,-1,axis=1), roll(roll(a,-1,axis=0),-1,axis=1),
               roll(a,-1,axis=0), a))
print np.unwrap(b, axis=2)

A problem with this approach is that the memory requirements for the stacked 
version will be much greater, so some version using views would be preferable.
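
For the record, an untested sketch of such a view-based variant (assuming the
circuit order cwi = [0, 1, 3, 2, 0] as above): the per-cube Python loop
disappears, and only five corner-grid-sized temporaries are built per face
orientation:

import numpy as np

def vortex_array(p):
    # untested sketch: True wherever any of the 3 checked faces of the
    # 2x2x2 cube anchored at each point contains a phase winding
    result = np.zeros(tuple(s - 1 for s in p.shape), dtype=bool)
    circuit = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]  # closed loop round a face
    for axis in range(3):
        other = [ax for ax in range(3) if ax != axis]
        steps = []
        for a, b in circuit:
            idx = [slice(0, -1)] * 3                    # cube corner along `axis`
            idx[other[0]] = slice(a, p.shape[other[0]] - 1 + a)
            idx[other[1]] = slice(b, p.shape[other[1]] - 1 + b)
            steps.append(p[tuple(idx)])                 # shifted views, no copies yet
        u = np.unwrap(np.array(steps), axis=0)          # unwrap along the circuit
        result |= np.abs(u[0] - u[-1]) > 1e-8           # same test as allclose
    return result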

Any ideas?

Gary Ruben


[Numpy-discussion] Fancier indexing

2008-05-22 Thread Kevin Jacobs [EMAIL PROTECTED]
After poking around for a bit, I was wondering if there was a faster method
for the following:

# Array of index values 0..n-1
items = numpy.array([0,3,2,1,4,2],dtype=int)

# Count the number of occurrences of each index
counts = numpy.zeros(5, dtype=int)
for i in items:
  counts[i] += 1

In my real code, 'items' contain up to a million values and this loop will
be in a performance critical area of code.  If there is no simple solution,
I can trivially code this using the C-API.
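
For reference, a sketch of one way to get this counting at C speed with plain
NumPy; numpy.bincount counts non-negative integers in a single pass (the
padding step is only needed when a fixed output length is required):

import numpy

items = numpy.array([0,3,2,1,4,2], dtype=int)

counts = numpy.bincount(items)       # counts[i] == number of occurrences of i
n = 5
if len(counts) < n:                  # bincount's result stops at max(items)+1
    counts = numpy.concatenate([counts, numpy.zeros(n - len(counts), dtype=int)])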

Thanks,
-Kevin


Re: [Numpy-discussion] Fancier indexing

2008-05-22 Thread Kevin Jacobs [EMAIL PROTECTED]
On Thu, May 22, 2008 at 12:08 PM, Keith Goodman [EMAIL PROTECTED] wrote:

 How big is n? If it is much smaller than a million then loop over that
 instead.


n is always relatively small, but I'd rather not do:

for i in range(n):
  counts[i] = (items==i).sum()

If that was the best alternative, I'd just bite the bullet and code this in
C.

Thanks,
-Kevin


Re: [Numpy-discussion] varu and stdu

2008-04-07 Thread Kevin Jacobs [EMAIL PROTECTED]
I know I'm off topic and maybe a day late, but I'm pained by the naming of
ddof.

It is simply not intuitive for anyone other than the person who thought it
up (and from my recollection, maybe not even for him). For one, most
stats folk use 'df' as the abbreviation for 'degrees of freedom'.  Secondly,
we tend to think of the constant bias adjustment as an ~adjustment~ of
the sample size or df.  So 'df_adjust=0' or 'sample_adjust=0' will resonate
much more.

The other issue is to clearly describe whether 'N-1' is obtained by setting
the adjustment (whatever it is called) to +1 or -1.  There is a reason why most
stats packages have different functions or take a parameter to indicate
'sample' versus 'population' variance calculation.  Though don't take this
as a recommendation to use var and varu -- rather I'm merely pointing out
that var(X, vardef='sample') is an option (using SAS's PROC MEANS parameter
name as an arbitrary example).

In the extremely rare cases I need any other denominator, I'm fine with
computing var(x)*n/(n-adjust).
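
To make the semantics concrete, a small sketch of how the current ddof
parameter behaves (ddof is subtracted from the denominator, so ddof=1 gives
the N-1 'sample' form):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
n = x.size

pop  = np.var(x)           # denominator N   ('population' variance, ddof=0)
samp = np.var(x, ddof=1)   # denominator N-1 ('sample' variance)

assert np.allclose(samp, pop * n / (n - 1))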

-Kevin


On Mon, Apr 7, 2008 at 9:41 AM, Pierre GM [EMAIL PROTECTED] wrote:

 Anne, Travis,
 I have no problem to get rid of varu and stdu in MaskedArray: they were
 introduced for my own convenience, and they're indeed outdated with the
 introduction of the ddof parameters. I'll get rid of them next time I post
 something on the SVN.



[Numpy-discussion] improving code

2008-03-16 Thread [EMAIL PROTECTED]
hello
while trying to write a function that processes some numpy arrays and
calculates euclidean distance, i ended up with this code


#some sample values
totalimgs=17
selectedfacespaces=6
imgpixels=18750 (ie for an image of 125X150)
...
# i am using these arrays to do the calculation
facespace   #numpy.ndarray of shape (totalimgs, imgpixels)
weights     #numpy.ndarray of shape (totalimgs, selectedfacespaces)
input_wk    #numpy.ndarray of shape (selectedfacespaces,)
distance    #numpy.ndarray of shape (selectedfacespaces,), initially all 0.0's
mindistance #numpy.ndarray of shape (selectedfacespaces,), initially all 0.0's
...
...
#here is the calculations part

for image in range(numimgs):
    distance = abs(input_wk - weights[image, :])
    if image==0:
        #copy from distance to mindistance
        mindistance=distance.copy()
    if sum(mindistance) > sum(distance):
        imgindex=image
        mindistance=distance.copy()

if max(mindistance) > 0.0:
    mindistance=mindistance/(max(mindistance)+1)
dist=sum(mindistance)

this gets me the euclidean distance value. I want to know if the way i
coded it can be improved / made more compact.. if someone can give
suggestions it would be nice
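
For reference, a hedged sketch of the same search without the Python-level
loop (note that, as in the loop above, the quantity is really a sum of
absolute differences rather than a true euclidean distance):

import numpy as np

weights = np.random.randn(17, 6)       # stand-ins with the shapes quoted above
input_wk = np.random.randn(6)

dists = np.abs(input_wk - weights).sum(axis=1)   # one distance per image
imgindex = dists.argmin()                        # closest image, first on ties
mindistance = np.abs(input_wk - weights[imgindex])

m = mindistance.max()
if m > 0.0:
    mindistance = mindistance / (m + 1)
dist = mindistance.sum()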
thanks
D


Re: [Numpy-discussion] confusion about eigenvector

2008-03-03 Thread [EMAIL PROTECTED]
Arnar wrote
 I dont know if this made anything any clearer. However, a simple
 example may be clearer:
 # X is (an ndarray, *not* matrix) column centered with vectorized
 # images in rows
 # method 1:
 XX = dot(X, X.T)
 s, u = linalg.eigh(XX)
 reorder = s.argsort()[::-1]
 facespace = dot(X.T, u[:,reorder])

ok.. this and # method 2 (ie svd()) return the same facespace.. and i can
get eigenface images

i read in some document on the topic of eigenfaces that
'Multiplying the sorted eigenvector with the face vector results in
getting the face-space vector'
facespace = sortedeigenvectorsmatrix * adjustedfacematrix
(when these are numpy.matrices)

that is why the confusion about transposing X inside
facespace = dot(X.T, u[:,reorder])

if i make matrices out of sortedeigenvectors and adjustedfacematrix,
then i will get facespace = sortedeigenvectorsmatrix * adjustedfacematrix,
which has a different set of elements than that obtained by
dot(X.T, u[:,reorder]).
does the result differ by some scaling factor?  i couldn't get any clear
eigenface images out of this facespace:-(
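
As a quick numerical check of that scaling factor (a sketch with random data,
not the real images): each column of dot(X.T, u) is an eigenvector of
dot(X.T, X) whose norm is the square root of the corresponding eigenvalue, so
the facespace columns are just eigenvectors scaled by the singular values:

import numpy as np

X = np.random.randn(4, 12)            # 4 'images', 12 pixels (centering omitted)
s, u = np.linalg.eigh(np.dot(X, X.T))
order = s.argsort()[::-1]
s, u = s[order], u[:, order]

fs = np.dot(X.T, u)                   # method-1 facespace, one column per component
for i in range(4):
    assert np.allclose(np.linalg.norm(fs[:, i]), np.sqrt(s[i]))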

D




[Numpy-discussion] svd() and eigh()

2008-03-01 Thread [EMAIL PROTECTED]
hi
i have a set of images of faces which i make into a 2d array using
numpy.ndarray
each row represents a face image
faces=
[[ 173.   87.  ...   88.  165.]
 [ 158.  103.  ..   73.  143.]
 [ 180.   87.  ..   55.  143.]
 [ 155.  117.  ..   93.  155.]]

from which i can get the mean image =
avgface=average(faces,axis=0)
and calculate the adjustedfaces=faces-avgface

now if i apply svd() i get
 u, s, vt = linalg.svd(adjustedfaces, 0)
# a member posted this
facespace=vt.transpose()

and if i calculate covariance matrix
covmat=matrix(adjustedfaces)* matrix(adjustedfaces).transpose()
eval,evect=eigh(covmat)
evect=sortbyeigenvalue(evect) # so that largest eval is first
facespace=evect* matrix(adjustedfaces)

what is the difference btw these 2 methods? apparently they yield
different values for the facespace. which should i follow?
is it possible to calculate eigenvectors using svd()?
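
A small sketch (with random data) of how the two are related: the eigenvalues
of A*A.T are the squared singular values of A, and the eigenvectors agree with
the columns of u only up to sign, which is one reason the two facespaces can
look different:

import numpy as np

A = np.random.randn(4, 12)            # 4 mean-adjusted faces, 12 pixels each
u, s, vt = np.linalg.svd(A, 0)

w, v = np.linalg.eigh(np.dot(A, A.T)) # eigh returns ascending eigenvalues
order = w.argsort()[::-1]
w, v = w[order], v[:, order]

assert np.allclose(w, s**2)           # eigenvalues == squared singular values
for i in range(4):                    # eigenvectors match up to sign
    assert np.allclose(np.abs(u[:, i]), np.abs(v[:, i]))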

thanks
D


Re: [Numpy-discussion] confusion about eigenvector

2008-03-01 Thread [EMAIL PROTECTED]

 I dont know if this made anything any clearer. However, a simple
 example may be clearer:

thanks Arnar for the kind response,now things are a lot clearer...will
try out in code ..
D


[Numpy-discussion] image to array doubt

2008-02-29 Thread [EMAIL PROTECTED]
hi
i came across a codebase by rice univ people..in that there are some
functions for conversion btw image and vectors

1.
def image_to_vector(self, filename):
    try:
        im = Image.open(filename)
    except IOError:
        print 'couldn\'t load ' + filename
        sys.exit(1)
    self.im_size = im.size
    a = numpy.array(im.getdata())
    return a - 128

what is the purpose of a-128 here? i couldn't quite understand

2.
def vector_to_image(self, v, filename):
    v.shape = (-1,)
    a, b = min(v), max(v)
    span = max(abs(b), abs(a))
    im = Image.new('L', self.im_size)
    im.putdata((v * 127. / span) + 128)
    im.save(filename)

and why the calculations of a, b and span? why do you need the extra maths
ops inside im.putdata()? I couldn't quite follow it

can someone explain this?
thanks
D


Re: [Numpy-discussion] image to array doubt

2008-02-29 Thread [EMAIL PROTECTED]

 Robin wrote
 I'm not sure why they would be doing this - to me it looks they might
 be using Image as a convenient way to store some other kind of data...

thanks Robin,
I am wondering if there is a more straightforward way to do these..
especially the vector to image function
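
For example, a hedged sketch of a more direct route, assuming PIL 1.1.6 or
later (which provides Image.fromarray) and 8-bit greyscale data:

import numpy
from PIL import Image

def image_to_vector(filename):
    im = Image.open(filename).convert('L')
    # the pixel grid as doubles, flattened and centred on zero
    return numpy.asarray(im, dtype=float).ravel() - 128

def vector_to_image(v, size):
    # symmetric rescale into roughly 1..255 around a mid-grey of 128
    span = numpy.abs(v).max()
    pixels = (v * 127. / span + 128).reshape(size[1], size[0])
    return Image.fromarray(pixels.astype(numpy.uint8), 'L')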
D


[Numpy-discussion] PCA on set of face images

2008-02-29 Thread [EMAIL PROTECTED]
hi guys
I have a set of face images with which i want to do face recognition
using Pentland's PCA method. I gathered these steps from their docs

1. represent matrix of face images data
2. find the adjusted matrix by subtracting the mean face
3. calculate covariance matrix (cov=A*A_transpose) where A is from step 2
4. find eigenvectors and select those with highest eigenvalues
5. calculate facespace=eigenvectors*A


when it comes to implementation i have doubts as to how i should
represent the matrix of face images.
using PIL image.getdata() i can make an array of each greyscale image.
Should the matrix be like each row contains an array representing an
image? That will make a matrix with rows=numimages and
columns=numpixels

covariancematrix = A * A_transpose will create a square matrix of
shape (numimages, numimages).
Using numpy.linalg.eigh(covariancematrix) will give eigenvectors of the
same shape as the covariance matrix.

I would like to know if this is the correct way to do this.. I have no
big expertise in linear algebra, so i would be grateful if someone can
confirm the right way of doing this
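
For reference, a minimal sketch of the five steps in code (random data stands
in for real images, so only the shapes matter; treat it as unverified):

import numpy as np

faces = np.random.rand(10, 4 * 3)     # step 1: one flattened image per row

avgface = faces.mean(axis=0)          # the mean face
A = faces - avgface                   # step 2: adjusted matrix

C = np.dot(A, A.T)                    # step 3: small numimages x numimages matrix
evals, evects = np.linalg.eigh(C)

order = evals.argsort()[::-1]         # step 4: sort by decreasing eigenvalue
evects = evects[:, order]

facespace = np.dot(evects.T, A)       # step 5: one (unnormalised) eigenface per row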

RoyG


Re: [Numpy-discussion] PCA on set of face images

2008-02-29 Thread [EMAIL PROTECTED]


On Mar 1, 12:57 am, Peter Skomoroch wrote:
  I think the matlab example should be easy to translate to
  scipy/matplotlib using the montage function:

  load faces.mat
  %Form covariance matrix
  C=cov(faces');
  %build eigenvectors and eigenvalues
  [E,D] = eig(C);


hi Peter,
nice code..ran the examples..
however couldn't follow the matlab code since i have no exposure to
matlab..was using numpy etc for calcs
could you confirm the layout for the face images data? i assumed that
the initial face matrix should be
faces=a numpy matrix with N rows ie N=numofimages

row1=image1pixels as a sequence
row2=image2pixels as a sequence
...
rowN=imageNpixels as a sequence


and covariancematrix=faces*faces_transpose

is this the right way?
thanks


Re: [Numpy-discussion] confusion about eigenvector

2008-02-29 Thread [EMAIL PROTECTED]

 This example assumes that facearray is an ndarray (like you described
 in the original post ;-) ). It looks like you are using a matrix.

hi Arnar
thanks ..
a few doubts however

1. when i use, say, 10 images of 4X3 each
u, s, vt = linalg.svd(facearray, 0)
i will get vt of shape (10,12)
can't i take this as facespace? why do i need to get the transpose?
then i can take as eigface_image0= vt[0].reshape(imgwdth,imght)

2. this way (svd) is diff from the covariance matrix method. if i am to do
it using the latter, how can i get the eigenface image data?

thanks for the help
D


Re: [Numpy-discussion] confusion about eigenvector

2008-02-28 Thread [EMAIL PROTECTED]


On Feb 28, 1:27 pm, Matthieu Brucher wrote:
 If your images are 4x3, your eigenvector must be 12 long.

hi
thanks for the reply
i am using 4 images each of size 4X3
the covariance matrix obtained from adjfaces*faces_trans is 4X4 in
size and that produces the evalues and eigenvectors given here
evalues,evect=eigh(covarmat)

i will give the data i used

facemat (ndarray from data of 4 images, each 4X3)
[[ 173.   87.   88.  163.  167.   72.   75.  159.  170.  101.   88.  165.]
 [ 158.  103.  115.  152.  138.   58.   81.  153.  126.   68.   73.  143.]
 [ 180.   87.  107.  180.  167.   65.   86.  182.  113.   41.   55.  143.]
 [ 155.  117.  128.  147.  147.   70.   93.  146.  153.   65.   93.  155.]]

avgvals
[ 166.5    98.5   109.5   160.5   154.75   66.25   83.75  160.    140.5
   68.75   77.25  151.5 ]

adjfaces=matrix(facemat-avgvals)
[[  6.5   -11.5   -21.5     2.5    12.25    5.75   -8.75   -1.     29.5
   32.25   10.75   13.5 ]
 [ -8.5     4.5     5.5    -8.5   -16.75   -8.25   -2.75   -7.    -14.5
   -0.75   -4.25   -8.5 ]
 [ 13.5   -11.5    -2.5    19.5    12.25   -1.25    2.25   22.    -27.5
  -27.75  -22.25   -8.5 ]
 [-11.5    18.5    18.5   -13.5    -7.75    3.75    9.25  -14.     12.5
   -3.75   15.75    3.5 ]]

faces_trans=adjfaces.transpose()
[[  6.5    -8.5    13.5   -11.5  ]
 [-11.5     4.5   -11.5    18.5  ]
 [-21.5     5.5    -2.5    18.5  ]
 [  2.5    -8.5    19.5   -13.5  ]
 [ 12.25  -16.75   12.25   -7.75 ]
 [  5.75   -8.25   -1.25    3.75 ]
 [ -8.75   -2.75    2.25    9.25 ]
 [ -1.     -7.     22.    -14.   ]
 [ 29.5   -14.5   -27.5    12.5  ]
 [ 32.25   -0.75  -27.75   -3.75 ]
 [ 10.75   -4.25  -22.25   15.75 ]
 [ 13.5    -8.5    -8.5     3.5  ]]

covarmat=adjfaces*faces_trans
[[ 3111.8125 -1080.4375 -1636.4375  -394.9375]
 [-1080.4375   901.3125  -114.6875   293.8125]
 [-1636.4375  -114.6875  3435.3125 -1684.1875]
 [ -394.9375   293.8125 -1684.1875  1785.3125]]

evalues,evectors=eigh(covarmat)
evalues
[ -1.85852801e-13   6.31143639e+02   3.31182765e+03   5.29077871e+03]
evectors
[[ 0.5  -0.06727772   0.6496399   -0.56871936]
 [ 0.5  -0.77317718  -0.37697426   0.10043632]
 [ 0.5   0.27108233   0.31014514   0.76179023]
 [ 0.5   0.56937257  -0.58281078  -0.29350719]]
newevectmatrix
[[-0.56871936   0.6496399   -0.06727772   0.5 ]
 [ 0.10043632  -0.37697426  -0.77317718   0.5 ]
 [ 0.76179023   0.31014514   0.27108233   0.5 ]
 [-0.29350719  -0.58281078   0.56937257   0.5 ]]

i am not getting the eigenvectors of length 12 as you said
please tell me if i am doing something wrong
D


Re: [Numpy-discussion] confusion about eigenvector

2008-02-28 Thread [EMAIL PROTECTED]
 Arnar wrote
 from scipy import linalg
 facearray-=facearray.mean(0) #mean centering
 u, s, vt = linalg.svd(facearray, 0)
 scores = u*s
 facespace = vt.T

hi Arnar
when i do this i get these
u  = <class 'numpy.core.defmatrix.matrix'> (4, 4)
that matches the eigenvectors matrix in my previous data
s  = <type 'numpy.ndarray'> (4,)
and
vt = <class 'numpy.core.defmatrix.matrix'> (4, 12)

here
scores=u*s causes a matrices are not aligned error..

is there something wrong in the calculation?
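
A guess at what is happening, with a small sketch: because facearray was a
numpy.matrix, u comes back as a matrix, * means matrix multiplication, and
multiplying a (4, 4) matrix by a length-4 array fails with the alignment
error; Arnar's snippet assumed plain ndarrays, where * broadcasts elementwise:

import numpy as np

u = np.matrix(np.eye(4))
s = np.arange(4.0)

# u * s                       # matrix product: 'matrices are not aligned'
scores = np.multiply(u, s)    # elementwise ufunc, broadcasts s across columns
scores2 = np.asarray(u) * s   # or drop back to ndarrays, as the snippet assumed
assert np.allclose(scores, scores2)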
D


Re: [Numpy-discussion] finding eigenvectors etc

2008-02-20 Thread [EMAIL PROTECTED]
 Different implementations follow different conventions as to which
 is which.

thank you for the replies.. the reason why i asked was that the most
significant eigenvectors (sorted according to eigenvalues) are later
used in calculations, and then the results obtained differ in java and
python.. so i was worried as to which one to use

Your matrix is almost singular, is badly conditioned,

Matthieu, can you explain that.. i didn't quite get it..
dn



Re: [Numpy-discussion] finding eigenvectors etc

2008-02-20 Thread [EMAIL PROTECTED]

 How are you using the values? How significant are the differences?


i am using these eigenvectors to do PCA on a set of images (of faces). I
sort the eigenvectors in descending order of their eigenvalues and
this is multiplied with the orig data of some images (viz a matrix) to
obtain a facespace.
like
#pseudocode...
sortedeigenvectors=mysort(eigenvectors)
facespace=sortedeigenvectors*adjfaces  /* adjfaces is a matrix */

if i do this in python i get a facespace
[[-1028755.44341439,  1480864.32750018,  1917712.0162213 ,  -983526.60328021,
  -1662357.13091294,  -499792.41540038,   208696.97376238,  -916628.92613255,
  -1454071.95225114, -1563209.39113008,  -231969.96968212,  -768417.98606125],
 [ -866174.88336972,  1212934.33524067,   543013.86361006, -1352625.86282073,
   -309872.30710619,   466301.12884198,   216088.93319292, -1512378.8688779 ,
   2581349.03171275,  1797812.01270033,  1876754.7339826 ,   751781.8166291 ],
 [  -57026.32567001,   -69918.94570563,  -399715.51441018,  -233720.8360051 ,
    188227.41229887,   177341.47889165,   -65241.23138424,  -311917.28253664,
   1133399.70627111,  1089028.99019462,   684854.41725944,   413465.86494352],
 [  405955.15245412,   562832.78296479,   864334.63457882,   629752.80210603,
    894768.52572026,   578460.80766675,   629146.32442893,   768037.57754708,
   -485157.28573271, -1718776.11176486,  -780929.18155991,  -165391.19551137]]

whereas the same steps
in java
[
[-516653.73649950844, 274000.54127598763, -108557.2732037272,
-799041.4108906921, -495577.7478765989, -49125.38109725664,
-162041.57505147497, -917033.3002665655, 1207264.8912226136,
1384551.3481945703, 1056098.9289163304, 357801.9553511339],
[-956064.0724430305, 1424775.0801567277, 898684.8188346579,
-1385008.5401600213, -514677.038573372, 387195.56502804917,
281164.65362325957, -1512307.8891047493, 2114204.697920214,
1280391.7056360755, 1650660.0594245053, 554096.482085637],
[-666313.7580419029, 1365981.2428742633, 2011095.455319733,
-453217.29083790665, -1199981.2283586136, -358852.32104592584,
375855.4012532809, -311436.16701894277, -2033000.776565753,
-2418152.391663846, -847661.841421182, -926486.0374297247],
[593030.0669844414, 121955.63569302124, 124121.99904933537,
697146.7418886195, 1321002.514808584, 743093.1371151333,
493712.52017493406, 767889.8563902564, 487050.6874229272,
-641935.1621667973, -310387.14691965195, 246026.029544]
]

such differences cause different results in all calculations involving
the facespace

dn


[Numpy-discussion] finding eigenvectors etc

2008-02-19 Thread [EMAIL PROTECTED]
hi
i was calculating eigenvalues and eigenvectors for a covariancematrix
using numpy

adjfaces=matrix(adjarr)
faces_trans=adjfaces.transpose()
covarmat=adjfaces*faces_trans
evalues,evect=eigh(covarmat)

for a sample covarmat like
[[  1.69365981e+13, -5.44960784e+12, -9.00346400e+12, -2.48352625e+12]
 [ -5.44960784e+12,  5.08860660e+12, -8.67539205e+11,  1.22854045e+12]
 [ -9.00346400e+12, -8.67539205e+11,  1.78184943e+13, -7.94749110e+12]
 [ -2.48352625e+12,  1.22854045e+12, -7.94749110e+12,  9.20247690e+12]]

i get these
evalues
[  3.84433376e-03,  4.17099934e+12,  1.71771364e+13,  2.76980401e+13]

evect
[[ 0.5  -0.04330262   0.60041892  -0.62259297]
 [ 0.5  -0.78034307  -0.35933516   0.10928372]
 [ 0.5   0.25371931   0.3700265    0.74074753]
 [ 0.5   0.56992638  -0.6026      -0.22743827]]

what bothers me is that for the same covarmat i get a different set of
eigenvectors and eigenvalues when i use java library Jama's methods
Matrix faceM = new Matrix(faces, nrfaces,length);
Matrix faceM_transpose = faceM.transpose();
Matrix covarM = faceM.times(faceM_transpose);
EigenvalueDecomposition E = covarM.eig();
double[] eigValue = diag(E.getD().getArray());
double[][] eigVector = E.getV().getArray();

here the eigValue=
[-6.835301757686207E-4, 4.170999335736721E12, 1.7177136443134865E13,
2.7698040117669414E13]

and eigVector
[
[0.5, -0.04330262221379265, 0.6004189175979487, 0.6225929700052174],
[0.5, -0.7803430730840767, -0.3593351608695496, -0.10928371540423852],
[0.49994, 0.2537193127299541, 0.370026504572483,
-0.7407475253159538],
[0.49994, 0.5699263825679145, -0.602613008821,
0.22743827071497524]
]

I am quite confused by this difference in results.. the first element
in eigValue is different and also the signs in the last column of
eigVectors are diff..can someone tell me why this happens?
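
Two observations that may explain most of the difference: eigenvectors are
only defined up to sign (v and -v both satisfy A*v = lambda*v), and this
covarmat is so ill-conditioned that its smallest eigenvalue is at the level
of floating-point noise, so libraries can legitimately disagree there. A
quick check (a sketch, using the covarmat above):

import numpy as np

covarmat = np.array([[ 1.69365981e+13, -5.44960784e+12, -9.00346400e+12, -2.48352625e+12],
                     [-5.44960784e+12,  5.08860660e+12, -8.67539205e+11,  1.22854045e+12],
                     [-9.00346400e+12, -8.67539205e+11,  1.78184943e+13, -7.94749110e+12],
                     [-2.48352625e+12,  1.22854045e+12, -7.94749110e+12,  9.20247690e+12]])

evalues, evect = np.linalg.eigh(covarmat)
print evalues[-1] / evalues[0]    # huge ratio: the smallest eigenvalue is noise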
thanks
dn






Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-08 Thread Kevin Jacobs [EMAIL PROTECTED]
On 1/8/08, Matthieu Brucher [EMAIL PROTECTED] wrote:

 I have an AMD processor so I guess I should use ACML somehow instead.
  However, first I would prefer my code to be platform-independent, and
  second, unfortunately I haven't encountered in the numpy documentation
  (on scipy.org and numpy.scipy.org) any mention of how to use numpy
  multithreading at all (neither MKL nor ACML).



 MKL does the multithreading on its own for level 3 BLAS instructions
 (OpenMP). For ACML, the problem is that AMD does not provide a CBLAS
 interface and is not interested in doing so. With ACML, the compilation
 fails with the current Numpy, but hopefully with Scons it will work, at
 least for the LAPACK part.


[..]

That is news to me.  I compile NumPy with ACML regularly.  I haven't tried
with the current trunk but it worked recently.  I have not tried the
parallel versions, though.

-Kevin


Re: [Numpy-discussion] [C++-sig] Overloading sqrt(5.5)*myvector

2007-12-27 Thread Kevin Jacobs [EMAIL PROTECTED]
Hi Bruce,

I have to add that I don't know the answer to your question either, but I do
know that it is solvable and that once the list experts return,
enlightenment will soon follow.  My confidence comes from knowing the Python
internals for how left and right multiplication are performed.  As long as
the left __mul__ operator returns NotImplemented, then the __rmul__ method
will be attempted (see http://docs.python.org/ref/numeric-types.html).  Of
course, I don't know how to declare such a beast in Boost, having never used
it, but I'm sure it is possible.  My intuition is that the first problem you
need to solve is getting Boost to generate the appropriate __rmul__ method.
The second problem, if it even exists, is ensuring that __mul__ returns
NotImplemented.
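
For reference, the same mechanism in pure Python (a sketch of the protocol
only, not of the Boost code in question; the class and names are
hypothetical):

class vec(object):
    def __init__(self, x):
        self.x = x
    def __mul__(self, s):                    # vec * scalar
        if not isinstance(s, (int, float)):
            return NotImplemented            # tells Python to try s.__rmul__
        return vec(self.x * s)
    def __rmul__(self, s):                   # scalar * vec
        return self.__mul__(s)

v = 2.5 * vec(3.0)   # works: float.__mul__ returns NotImplemented, so
                     # Python falls back to vec.__rmul__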

Best of luck,
-Kevin



On Dec 27, 2007 10:15 AM, Bruce Sherwood [EMAIL PROTECTED] wrote:

 I should have added: This structure worked with the older version of
 VPython which used Numeric, but it doesn't work in the beta version
 which uses numpy. Since I don't know enough about either numpy or Boost,
 I'm left guessing which subsystem is the source of my difficulties, and
 clueless about how to remedy them.

 Bruce Sherwood

 Bruce Sherwood wrote:
  Thanks for the comment, which limits the range of possible solutions.
  The VPython vector class is implemented in C++, not in Python. I made up
  the simple test in my previous note to try out the solution that had
  been offered and which you have usefully ruled out. Here is the relevant
  part of the vector class, which indeed doesn't look like an ndarray:
 
  inline vector
  operator*( const double s) const throw()
  { return vector( s*x, s*y, s*z); }
 
  and here is the free function for right multiplication:
 
  inline vector
  operator*( const double s, const vector v)
  { return vector( s*v.x, s*v.y, s*v.z); }
 
  Maybe the unsolvable problem is in the Boost definitions:
 
  py::class_<vector>("vector", py::init< py::optional<double, double,
  double> >())
   .def( self * double())
   .def( double() * self)
 
  Left multiplication is fine, but right multiplication isn't.
 
  Bruce Sherwood
 
  Robert Kern wrote:
 
  Bruce Sherwood wrote:
 
 
  Thanks for the suggestion. It hadn't occurred to me to try to override
  numpy as you suggest. However, when I try the code shown below as the
  start of a test of this scheme, I get the following error:
 
  Traceback (most recent call last):
    File "C:\Documents and Settings\Bruce\My Documents\0VPythonWork\vectors.py",
  line 24, in <module>
      numpy.float64.__mul__ = new_mul
  TypeError: can't set attributes of built-in/extension type 'numpy.float64'
 
  I'm copying this to the numpy discussion list, as maybe someone there
  will see where to go starting from your suggestion.
 
 
  Like most (or all) builtin-types, the numpy float scalars do not permit
  replacing their methods from Python.
 
   I'm not familiar with vpython's vector. If you can make it not look
   like an ndarray, then you should be able to just implement __rmul__
   on vector.
 
 
 


[Numpy-discussion] any better way to normalise a matrix

2007-12-27 Thread [EMAIL PROTECTED]
in my code i am trying to normalise a matrix as below

mymatrix=matrix(...)  # items are of double type.. can be negative values
numrows,numcols=mymatrix.shape

for i in range(numrows):
    temp=mymatrix[i].max()
    for j in range(numcols):
        mymatrix[i,j]=abs(mymatrix[i,j]/temp)

# i am using abs() to make neg vals positive

this way the code takes too much time to run for a case of
numrows=25 and numcols=8100, etc..
being a beginner in numpy and python, i would like to know if this can
be done more efficiently.. can anyone advise?
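
One vectorised possibility (a hedged sketch on a plain ndarray; for a
numpy.matrix, apply numpy.asarray first). Each row's maximum is divided into
the whole row via broadcasting, replacing both Python loops:

import numpy as np

a = np.random.randn(25, 8100)            # stand-in for the real data
row_max = a.max(axis=1).reshape(-1, 1)   # per-row maxima as a column
normalised = np.abs(a / row_max)         # broadcasted divide, then abs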

thanx
dn


[Numpy-discussion] matrix multipln takes too much time

2007-12-25 Thread [EMAIL PROTECTED]
hi
i am doing some maths calculations involving matrices of double values
using numpy.matrix ,

java code for this is something like

int items=25;
int sample=5;
int totalcols=8100;
double[][] dblarrayone = new double[items][totalcols];
double[][] dblarraytwo = new double[items][totalcols];
//their elements are set elsewhere before calculation

double[][] resultarray = new double[items][sample];

for(int i=0; i<items; i++){
    for(int j=0; j<sample; j++){
        double tval=0.0;
        for(int p=0; p<totalcols; p++)
            tval += dblarrayone[j][p] * dblarraytwo[i][p];
        resultarray[i][j] = Math.abs(tval);
    }
}

so I wanted to do the same in python.. (maybe this code is not written in
the recommended way..)
i am storing the filled matrices and other values as instance variables
of a class and access them by self.whatever...

self.items=25
self.sample=5
self.totalcols=8100
#self.matrixone, self.matrixtwo are numpy matrix objects with already
#filled elements
#but for testing i filled them with zeros
self.matrixone=matrix(zeros((items,totalcols)))
self.matrixtwo=matrix(zeros((items,totalcols)))
resultmatrix=matrix(zeros((self.items,self.sample)))

for i in range(self.items):
    for j in range(self.sample):
        tval = 0.0
        for p in range(self.totalcols):
            tval += self.matrixone[j, p] * self.matrixtwo[i, p]
        resultmatrix[i, j] = abs(tval)


here I found that while the java code takes barely 110 milliseconds to
execute, the python code takes something like 53 secs!! I am baffled
by this.. can anyone advise me how i can improve this? (i want to code
in python, so I can't use c, c++, java)

dn


Re: [Numpy-discussion] matrix multipln takes too much time

2007-12-25 Thread [EMAIL PROTECTED]

 on my computer I have:
 1. 15.6 sec with your code
 2. 0.072 sec with resultmatrix2
 3. 0.040 sec with tensordot (resultmatrix3) (which is a 400x speedup)


wow, thanks!
the tensordot fn is blindingly fast..
i added/modified

resultndarray = tensordot(matrixone[:sample,:], matrixtwo.T, axes=1).T
resultmatrix = matrix(abs(resultndarray))

so i can get rid of the negative values in a real case

..your replies helped a lot guys,thanx a lot and happy X'Mas

dn


[Numpy-discussion] setting items in a matrix

2007-12-20 Thread [EMAIL PROTECTED]
hi
i am a beginner with numpy and python, so pardon me if this doubt seems
silly. i want to create a matrix with say 3 rows and 5 columns.. and then
set the values of each item in it. for this i did something like below

myarray=zeros((3,5))
#then set the items
for row in range(3):
    for col in range(5):
        myarray[row][col]=99.

mymatrix=matrix(myarray)

is this the way to do the matrix creation and value setting?  is the
use of zeros() unnecessary?  i am in the early learning stage, so
your reply would help me much
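
For comparison, a couple of loop-free sketches of the same thing:

from numpy import matrix, ones, zeros

mymatrix = matrix(99. * ones((3, 5)))   # constant-filled matrix in one step

myarray = zeros((3, 5))                 # or keep zeros() and fill in place
myarray.fill(99.)                       # equivalently: myarray[:] = 99.
mymatrix = matrix(myarray)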

dn



Re: [Numpy-discussion] numpy : your experiences?

2007-11-22 Thread [EMAIL PROTECTED]
  In particular for the simulation yes, depending on the level of detail
  of course. But only parts, eg. random number generation for certain
  distributions had to be coded in C/C++.

 Are you saying you extended the scipy/numpy tools for this?
 Do you think it would make sense to put some of that stuff on the wiki?

No, this is very special to my application and not really numpy
specific. I had to write a Metropolis-Hastings sampler, which worked
in python but was too slow. I've coded this for a specific
distribution in C++ and pass numpy arrays and python lists from and to
C++ functions using boost::python.

Bernhard




Re: [Numpy-discussion] numpy : your experiences?

2007-11-21 Thread [EMAIL PROTECTED]
 a) Can you guys tell me briefly about the kind of problems you are
 tackling with numpy and scipy?

I'm using python with numpy,scipy, pytables and matplotlib for data
analysis in the field of high energy particle physics. Most of the
work is histograming millions of  events, fitting functions to the
distributions or applying cuts to yield optimized signal/background
ratios. I often use the random number and optimization facilities for
these purposes.

Most of my colleagues use ROOT (root.cern.ch) which has also a python
binding, however, I love the simplicity of numpy's ufuncs and indexing
capabilities, which makes the code much denser and readable.

Another part is developing toy simulation and reconstruction
algorithms in python which later will be translated to C++ since the
main software framework in our collaboration is written in C++. This
includes log-likelihood track reconstruction algorithms based on the
arrival times of photons measured with photomultipliers. Simulation
involves particle generators and detector response simulations to test
the reconstruction with known inputs.

 b) Have you ever felt that numpy/scipy was slow and had to switch to
 C/C++/Fortran?

In particular for the simulation yes, depending on the level of detail
of course. But only parts, eg. random number generation for certain
distributions had to be coded in C/C++.

Since the main software for my work is coded in C++,  I often end up
writing wrappers around parts of this software to extract the data I
need for doing the analysis work in python.

 c) Do you use any form of parallel processing? Multicores? SMPs?
 Clusters? If yes how did u utilize them?

We have a cluster at our lab which I use for my computations. This is
not very difficult since the data can be split into several files and
each can be treated in the same way. One just needs to pass a script
over and over again to the cluster; this is done in a shell script or
with the tools provided by the cluster scheduling system.

Cheers! Bernhard


[Numpy-discussion] how to convert btw rgb and pixel value

2007-11-04 Thread [EMAIL PROTECTED]
hi
i wish to convert an rgb image into an array of double values.. is
there a method for that in numpy?
also i want to convert an array of doubles back into the corresponding
rgb tuples of an image
can anyone guide me?
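
A hedged sketch of both directions using PIL together with numpy (assuming
PIL 1.1.6+ for Image.fromarray; 'face.png' is only a placeholder name):

import numpy
from PIL import Image

im = Image.open('face.png').convert('RGB')
arr = numpy.asarray(im, dtype=float)       # shape (height, width, 3) of doubles

# ... process arr ...

out = Image.fromarray(arr.clip(0, 255).astype(numpy.uint8), 'RGB')
out.save('face_out.png')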
dn



Re: [Numpy-discussion] how to get/set column

2007-11-01 Thread [EMAIL PROTECTED]
It's just the other way around:
mymat[:,0]  # first column
mymat[:,1]  # second column

Take a look at the tutorial:
http://scipy.org/Tentative_NumPy_Tutorial#head-864862d3f2bb4c32f04260fac61eb4ef34788c4c

best! bernhard

On Nov 1, 7:22 am, dev new [EMAIL PROTECTED] wrote:
 is there a method for numpy arrays and matrices to get/set a particular
 column

 i know that a row can be fetched by mymat[1,:] etc
 can this be done for a column
 dn




Re: [Numpy-discussion] Getting an item in an array with its coordinates given by another array

2007-10-29 Thread [EMAIL PROTECTED]
Take a look at numpy.ix_
http://www.scipy.org/Numpy_Example_List_With_Doc#head-603de8bdb62d0412798c45fe1db0648d913c8a9c

This method creates the index array for you. You only have to specify
the coordinates in each dimension.
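
A minimal example (a sketch on the array used later in this thread):

import numpy as np

c = np.arange(3 * 4 * 5.).reshape((3, 4, 5))
sub = c[np.ix_([0, 2], [1, 3])]   # cross-product of rows 0,2 and columns 1,3
print sub.shape                   # (2, 2, 5): the trailing axis is kept whole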

Bernhard

On Oct 29, 8:46 am, Matthieu Brucher [EMAIL PROTECTED]
wrote:
   Little correction: only c[(2,3)] gives me what I expect, not c[[2,3]],
   which is even stranger.

  c[(2,3)] is the same as c[2,3] and obviously works as you expected.

 Well, this is not indicated in the documentation.

 c[[2,3]] is referred to as 'advanced indexing' in the numpy book.

  It will return elements 2 and 3 along the first dimension. To get what you
  want, you need to put it like this:

  c[[2],[3]]

  In [118]: c = N.arange(0.,3*4*5).reshape((3,4,5))

  In [119]: c[[2],[3]]

  Out[119]: array([[ 55.,  56.,  57.,  58.,  59.]])

 This is very strange to say the least, as tuples do not work in the same way.

 This does not work, however, using an ndarray holding the indices.





  In [120]: ind = N.array([[2],[3]])

  In [121]: c[ind]

  ---------------------------------------------------------------------------
  <type 'exceptions.IndexError'>          Traceback (most recent call last)

  /media/hda6/home/ck/<ipython console> in <module>()

  <type 'exceptions.IndexError'>: index (3) out of range (0<=index<=2) in
  dimension 0

  so you have to convert it to a list before:

  In [122]: c[ind.tolist()]
  Out[122]: array([[ 55.,  56.,  57.,  58.,  59.]])

 But if I have the coordinates of the points in an array, I have to reshape
 it and then convert it into a list. Or convert it into a list and then
 convert it to a tuple. I know that advanced indexing is useful, but here it
 is not coherent. tuples and lists should have the same result on the array,
 or at least it should be documented.
 So there is no way to get a sub-array based on coordinates in an array ?

 Matthieu

 --
 French PhD student
 Website :http://miles.developpez.com/
 Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92




Re: [Numpy-discussion] anyone compiling on IRIX 6.5

2007-10-24 Thread [EMAIL PROTECTED]
Jarrod Millman wrote:
 Hello,
 
 I am hoping to close a few of the remaining tickets for the upcoming
 NumPy 1.0.4 release.  Is anyone using NumPy on IRIX?  Or have access
 to IRIX?  If so, could you please take a look at this ticket:
 http://projects.scipy.org/scipy/numpy/ticket/417
 
 Thanks,
 

IRIX is a very annoying OS for me. There are numerous errors and warnings 
whenever I tried to compile. Now I always try Linux first if possible when I 
have to compile and use some programs, and IRIX is my last choice to try.

I do have access to IRIX, but I'm definitely not an expert about it. However,
I'm willing to try this job if there are no volunteers.

Regards,


Re: [Numpy-discussion] anyone compiling on IRIX 6.5

2007-10-24 Thread [EMAIL PROTECTED]

Hi David,

David Cournapeau wrote:
 [EMAIL PROTECTED] wrote:
  Jarrod Millman wrote:
   Hello,

   I am hoping to close a few of the remaining tickets for the upcoming
   NumPy 1.0.4 release.  Is anyone using NumPy on IRIX?  Or have access
   to IRIX?  If so, could you please take a look at this ticket:
   http://projects.scipy.org/scipy/numpy/ticket/417

   Thanks,

  IRIX is a very annoying OS for me. There are numerous errors and warnings
  whenever I tried to compile. Now I always try Linux first if possible when
  I have to compile and use some programs, and IRIX is my last choice to try.

  I do have access to IRIX, but I'm definitely not an expert about it.
  However, I'm willing to try this job if there are no volunteers.

 If you are willing to test, could you try the numpy.scons branch on it ?
 I don't have access to IRIX myself, so I cannot test if it works. I
 would hope that it would work out of the box (it did on solaris,
 platform where the trunk does not build right now).

 cheers,

 David


I have tried to compile numpy.scons on IRIX with python setup.py install. It
compiled with warnings, but no errors. Attached is the compile log file.

But it seems the numpy just doesn't work. The following is what I have tried:


Python 2.4.4 (#2, May 17 2004, 22:47:37) [C] on irix6
Type "help", "copyright", "credits" or "license" for more information.
>>> from numpy import *
Running from numpy source directory.
>>> a = arange(1,100,0.1)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'arange' is not defined
>>> a = array(range(10))
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'array' is not defined
>>> import numpy
>>> numpy.test()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'module' object has no attribute 'test'
>>> numpy.array
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'module' object has no attribute 'array'
>>>




Re: [Numpy-discussion] expm

2007-07-20 Thread Kevin Jacobs [EMAIL PROTECTED]

On 7/20/07, Anne Archibald [EMAIL PROTECTED] wrote:


On 20/07/07, Nils Wagner [EMAIL PROTECTED] wrote:
 lorenzo bolla wrote:
  hi all.
  is there a function in numpy to compute the exp of a matrix, similar
  to expm in matlab?
  for example:
  expm([[0,0],[0,0]]) = eye(2)
 Numpy doesn't provide expm but scipy does.
  from scipy.linalg import expm, expm2, expm3

Just as a warning, numpy does provide expm1, but it does something
different (exp(x)-1, computed directly).
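
A quick illustration of the distinction (a sketch using the example above):

import numpy as np
from scipy.linalg import expm

print expm(np.zeros((2, 2)))   # matrix exponential of the zero matrix: eye(2)
print np.expm1(0.0)            # elementwise exp(x) - 1, computed directly: 0.0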



On a separate note, I'm working to provide faster and more accurate versions
of sqrtm and expm.  The current versions do not take full advantage of
LAPACK.  Here are some preliminary benchmarks:

Ill-conditioned

linalg.sqrtm   : error=9.37e-27, 573.38 usec/pass
sqrtm_svd  : error=2.16e-28, 142.38 usec/pass
sqrtm_eig  : error=4.79e-27, 270.38 usec/pass
sqrtm_symmetric: error=1.04e-27, 239.30 usec/pass
sqrtm_symmetric2: error=2.73e-27, 190.03 usec/pass

Well-conditioned

linalg.sqrtm   : error=1.83e-29, 478.67 usec/pass
sqrtm_svd  : error=8.11e-30, 130.57 usec/pass
sqrtm_eig  : error=4.50e-30, 255.56 usec/pass
sqrtm_symmetric: error=2.78e-30, 237.61 usec/pass
sqrtm_symmetric2: error=3.35e-30, 167.27 usec/pass

Large

linalg.sqrtm   : error=5.95e-25, 8450081.68 usec/pass
sqrtm_svd  : error=1.64e-24, 151206.61 usec/pass
sqrtm_eig  : error=6.31e-24, 549837.40 usec/pass
sqrtm_symmetric: error=8.55e-25, 177422.29 usec/pass

where:

def sqrtm_svd(x):
    u,s,vt = linalg.svd(x)
    return dot(u,transpose((s**0.5)*transpose(vt)))

def sqrtm_eig(x):
    d,e = linalg.eig(x)
    d = (d**0.5).astype(float)
    return dot(e,transpose(d*e))

def sqrtm_symmetric(x,cond=1e-7):
    d,e = linalg.eigh(x)
    d[d<cond] = 0
    return dot(e,transpose((d**0.5)*e)).astype(float)

def sqrtm_symmetric2(x):
    # Not as robust due to initial Cholesky step
    l = linalg.cholesky(x,lower=1)
    u,s,vt = linalg.svd(l)
    return dot(u,transpose(s*u))

with SciPy linked against ACML.

-Kevin


Re: [Numpy-discussion] expm

2007-07-20 Thread Kevin Jacobs [EMAIL PROTECTED]

On 7/20/07, Nils Wagner [EMAIL PROTECTED] wrote:


Your sqrtm_eig(x) function won't work if x is defective.
See test_defective.py for details.



I am aware, though at least on my system, the SVD-based method is by far the
fastest and most robust (and can be made more robust still by the addition
of a relative condition number threshold).  The eig version was included
mainly for comparison.

Have you considered the algorithms proposed by

Nick Higham for various matrix functions ?

http://www.maths.manchester.ac.uk/~higham/pap-mf.html



Yep.  He is one of my heroes.  The downside is that direct Python
implementations of many of his algorithms will almost always be
significantly slower than using algorithms that leave the heavy-lifting to
LAPACK.  This is certainly the case for the current sqrtm and expm code.
Even in the Python domain, there is significant room to optimize the current
sqrtm implementation, since one of the key inner loops can be trivially
vectorized.  I'll certainly give that a shot and add it to my next round of
benchmarks.  However, I'm not sure that I want to commit to going to the
next level by developing and maintaining a C implementation of sqrtm.  While
it has the potential to achieve near-optimal performance for that method, we
may be reaching the point of diminishing returns for my needs.  I certainly
won't complain if someone else is willing to do so.

Thanks,
-Kevin


Re: [Numpy-discussion] expm

2007-07-20 Thread Kevin Jacobs [EMAIL PROTECTED]

On 7/20/07, Nils Wagner [EMAIL PROTECTED] wrote:


Your sqrtm_eig(x) function won't work if x is defective.
See test_defective.py for details.




I've added several defective matrices to my test cases and the SVD method
doesn't work as well as I'd thought (which is obvious in retrospect).  I'm
going to circle back and see what I can do to speed up Nick Higham's
algorithm to the point where it is useful for me.  Otherwise, my need is for
a fast sqrtm for symmetric positive definite inputs, so I may still end up
using the SVD approach and contributing a complementary sqrtm_nondefective
back to scipy.

-Kevin


Re: [Numpy-discussion] expm

2007-07-20 Thread Kevin Jacobs [EMAIL PROTECTED]

On 7/20/07, Charles R Harris [EMAIL PROTECTED] wrote:


I expect using sqrt(x) will be faster than x**.5.



I did test this at one point and was also surprised that sqrt(x) seemed
slower than **.5.  However I found out otherwise while preparing a timeit
script to demonstrate this observation.  Unfortunately, I didn't save the
precise script I used to explore this issue the first time.  On my system
for arrays with more than 2 elements, sqrt is indeed faster.  For smaller
arrays, the difference is negligible, but inches out in favor of **0.5.

Thanks,
-Kevin


Re: [Numpy-discussion] expm

2007-07-20 Thread Kevin Jacobs [EMAIL PROTECTED]

On 7/20/07, Kevin Jacobs [EMAIL PROTECTED] wrote:


On 7/20/07, Charles R Harris [EMAIL PROTECTED] wrote:

 I expect using sqrt(x) will be faster than x**.5.


I did test this at one point and was also surprised that sqrt(x) seemed
slower than **.5.  However I found out otherwise while preparing a timeit
script to demonstrate this observation.  Unfortunately, I didn't save the
precise script I used to explore this issue the first time.  On my system
for arrays with more than 2 elements, sqrt is indeed faster.  For smaller
arrays, the difference is negligible, but inches out in favor of **0.5.




This is just not my day.  My observations above are valid for integer
arrays, but not float arrays:

sqrt(int array)   :  6.98 usec/pass
(int array)**0.5  : 22.75 usec/pass
sqrt(float array) :  6.70 usec/pass
(float array)**0.5:  4.66 usec/pass

Generated by:

import timeit

n=10

t=timeit.Timer(stmt='sqrt(arange(3))',
               setup='from numpy import arange,array,sqrt\nx=arange(100)')
print 'sqrt(int array)   : %5.2f usec/pass' % (100*t.timeit(number=n)/n)

t=timeit.Timer(stmt='x**0.5',
               setup='from numpy import arange,array\nx=arange(100)')
print '(int array)**0.5  : %5.2f usec/pass' % (100*t.timeit(number=n)/n)

t=timeit.Timer(stmt='sqrt(arange(3))',
               setup='from numpy import arange,array,sqrt\nx=arange(100,dtype=float)')
print 'sqrt(float array) : %5.2f usec/pass' % (100*t.timeit(number=n)/n)

t=timeit.Timer(stmt='x**0.5',
               setup='from numpy import arange,array\nx=arange(100,dtype=float)')
print '(float array)**0.5: %5.2f usec/pass' % (100*t.timeit(number=n)/n)

-Kevin


Re: [Numpy-discussion] NumPy/SciPy LAPACK version

2007-07-16 Thread Kevin Jacobs [EMAIL PROTECTED]

Mea culpa on the sqrtm example; however, I still think it is wrong to get a
complex square root back when a real-valued result is expected and exists.

-Kevin


On 7/16/07, Hanno Klemm [EMAIL PROTECTED] wrote:



Kevin,

the problem appears to be that sqrtm() gives back an array, rather
than a matrix:


>>> import scipy.linalg as sl
>>> a = s.matrix([[59, 64, 69],[64, 72, 80],[69, 80, 91]])
>>> type(a)
<class 'numpy.core.defmatrix.matrix'>
>>> a
matrix([[59, 64, 69],
        [64, 72, 80],
        [69, 80, 91]])
>>> a*a - N.dot(a,a)
matrix([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]])
>>> b = sl.sqrtm(a)
>>> type(b)
<type 'numpy.ndarray'>
>>> N.dot(b,b)
array([[ 59. +1.85288457e-22j,  64. -6.61744490e-23j,  69. +1.85288457e-22j],
       [ 64. -2.64697796e-23j,  72. -3.70576914e-22j,  80. -5.29395592e-23j],
       [ 69. +1.85288457e-22j,  80. -1.32348898e-22j,  91. +2.38228016e-22j]])
>>> b*b - N.dot(b,b)
array([[-33.91512711 +1.03595922e-07j, -44.57413548 -1.82329475e-07j,
        -54.51073741 +7.87335527e-08j],
       [-44.57413548 -1.82329475e-07j, -48.15108664 +4.04046172e-07j,
        -51.27477788 -2.21716697e-07j],
       [-54.51073741 +7.87335527e-08j, -51.27477788 -2.21716697e-07j,
        -43.21448471 +1.42983144e-07j]])


This certainly is a slightly unexpected behaviour.

Hanno


Kevin Jacobs [EMAIL PROTECTED] said:


 Hi all,

 This is a bit of a SciPy question, but I thought I'd ask here since I'm
 already subscribed.  I'd like to add some new LAPACK bindings to SciPy and
 was wondering if there was a minimum version requirement for LAPACK, since
 it would be ideal if I could use some of the newer 3.0 features.  In
 addition to using some block methods only added in 3.0, it is very
 convenient to use the WORK=-1 for space queries instead of reimplementing
 the underlying logic in the calc_work module.

 The routines of most interest to me are:
 DGELSD
 DGGGLM
 DGGLSE

 I've also found that SciPy's sqrtm is broken:

 >>> a = matrix([[59, 64, 69],
 ...             [64, 72, 80],
 ...             [69, 80, 91]])
 >>> sqrtm(a)
 array([[ 5.0084801  +1.03420519e-08j,  4.40747825 -2.06841037e-08j,
          3.8064764  +1.03420519e-08j],
        [ 4.40747825 -2.06841037e-08j,  4.88353492 +4.13682075e-08j,
          5.3595916  -2.06841037e-08j],
        [ 3.8064764  +1.03420519e-08j,  5.3595916  -2.06841037e-08j,
          6.9127068  +1.03420519e-08j]])
 >>> sqrtm(a)*sqrtm(a)
 array([[ 25.08487289 +1.03595922e-07j,  19.42586452 -1.82329475e-07j,
          14.48926259 +7.87335527e-08j],
        [ 19.42586452 -1.82329475e-07j,  23.84891336 +4.04046172e-07j,
          28.72522212 -2.21716697e-07j],
        [ 14.48926259 +7.87335527e-08j,  28.72522212 -2.21716697e-07j,
          47.78551529 +1.42983144e-07j]])

 So not even close...  (and yes, it does deserve a bug report if one doesn't
 already exist)

 -Kevin


[Numpy-discussion] scipy.test() warnings, errors and failures

2007-07-07 Thread [EMAIL PROTECTED]
Hi, I just installed numpy (1.0.3) and scipy (0.5.2) on a Windows
machine running Python 2.5.1. They both complete installation, and
numpy.test() reports no errors. scipy.test() produces a huge stream
(see below) of warnings, errors (19), and failures (2), however. Also,
there's a deprecation warning when I import scipy. Is this behavior
normal? These appear to be the latest versions of everything.

Thanks,
Matt

Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit
(Intel)] on win32
Type "copyright", "credits" or "license()" for more information.


Personal firewall software may warn about the connection IDLE
makes to its subprocess using this computer's internal loopback
interface.  This connection is not visible on any external
interface and no data is sent to or received from the Internet.


IDLE 1.2.1
>>> import numpy
>>> import scipy

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\misc\__init__.py", line 25
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
>>> scipy.test()

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\ndimage\__init__.py", line 40
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\sparse\__init__.py", line 9
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\io\__init__.py", line 20
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\lib\__init__.py", line 5
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\linsolve\umfpack\__init__.py", line 7
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\linsolve\__init__.py", line 13
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\interpolate\__init__.py", line 15
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\optimize\__init__.py", line 17
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\linalg\__init__.py", line 32
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\special\__init__.py", line 22
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\stats\__init__.py", line 15
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\fftpack\__init__.py", line 21
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\integrate\__init__.py", line 16
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\lib\lapack\__init__.py", line 93
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\signal\__init__.py", line 17
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\lib\blas\__init__.py", line 61
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\maxentropy\__init__.py", line 12
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\__init__.py", line 77
    return ScipyTest(scipy).test(level, verbosity)

Re: [Numpy-discussion] best way of counting time and cputime?

2007-05-14 Thread [EMAIL PROTECTED]
 Is the genutils module not included in the standard CPython distribution?

It's not. It's a sub-module of IPython.

It's based on the resource module, though, and that comes with Python on
Linux. Just define the function Fernando posted:

  import resource

  def clock():
      """clock() -> floating point number

      Return the *TOTAL USER+SYSTEM* CPU time in seconds since the start of
      the process.  This is done via a call to resource.getrusage, so it
      avoids the wraparound problems in time.clock().
      """
      u, s = resource.getrusage(resource.RUSAGE_SELF)[:2]
      return u + s

Take a look at the module docs of resource.
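
For example, a minimal sketch of using it (the squaring loop is just
stand-in CPU-bound work):

  import time

  t0, c0 = time.time(), clock()
  sum(i*i for i in xrange(10**6))   # some CPU-bound work
  print 'wall: %.3f s  cpu: %.3f s' % (time.time() - t0, clock() - c0)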

Bernhard

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] best way of counting time and cputime?

2007-05-12 Thread [EMAIL PROTECTED]
It depends on what you're aiming at. If you want to compare different
implementations of some expressions and need to know their average
execution times, you should use the timeit module. If you want the full
execution time of a script, time.time (call it at the beginning and end
and take the difference to get the elapsed wall time) and time.clock
(under Linux, the CPU time used by the process) are the methods you want.
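
A rough sketch of both approaches (the timed expression is arbitrary):

  import time, timeit

  # average execution time of a small expression
  t = timeit.Timer('x**0.5', setup='from numpy import arange\nx = arange(100)')
  print '%.2f usec/pass' % (1e6 * t.timeit(number=100000) / 100000)

  # full elapsed time of a larger block of code
  start = time.time()
  total = sum(i*i for i in xrange(10**6))
  print 'elapsed: %.3f s' % (time.time() - start)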

Cheers! Bernhard

On May 12, 12:22 am, dmitrey [EMAIL PROTECTED] wrote:
 hi all,
 please inform me which way of counting elapsed time and cputime in
 Python is best? Previously I noticed several different ones in some
 Python sources.
 Currently I use time.time() for time.
 Thx, D.



Re: [Numpy-discussion] Assembling an array from parts

2007-04-01 Thread Kevin Jacobs [EMAIL PROTECTED]

I had to poke around before finding it too:

 bmat([[K, G], [G.T, zeros((nc, nc))]])
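
A quick sanity check with tiny stand-in blocks (a sketch; K is n x n, G is
n x nc, and the values are arbitrary):

  from numpy import bmat, eye, ones, zeros

  n, nc = 3, 2
  K = eye(n)                  # stand-in for the stiffness block
  G = ones((n, nc))           # stand-in for the constraint block
  A = bmat([[K, G], [G.T, zeros((nc, nc))]])
  print A.shape               # (5, 5)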

On 4/1/07, Bill Baxter [EMAIL PROTECTED] wrote:


What's the best way of assembling a big matrix from parts?
I'm using lagrange multipliers to enforce constraints and this kind of
matrix comes up a lot:

[[ K,   G],
[ G.T  , 0]]

In matlab you can use the syntax
   [K G; G' zeros(nc)]

In numpy I'm using
   vstack([ hstack([ K,G ]), hstack([ G.T, zeros((nc,nc)) ]) ])

Which has a lot of excess verbiage and parentheses that make it hard
to type and hard to parse what's going on.  It would be a little nicer
if there were some kind of function like 'arraystack':

  arraystack( [ [K, G], [G.T, zeros((nc,nc)) ]] )

Is there anything like this already?  I haven't found anything in the
example list like that.  But maybe concatenate() is flexible enough...

--bb