[Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Fernando Perez
Howdy,

I'm finding myself often having to index *into* arrays to set values.
As best I can see, in numpy functions/methods like diag or tri{u,l}
provide for the extraction of values from arrays, but I haven't found
their counterparts for generating the equivalent indices.

Their implementations are actually quite trivial once you think of it,
but I wonder if there's any benefit in having this type of machinery
in numpy itself. I had to think of how to do it more than once, so I
ended up writing up a few utilities for it.

Below are the naive versions I have for internal use.  If there's any
interest, I'm happy to document them to full compliance and submit them as a patch.

Cheers,

f



def mask_indices(n, mask_func, k=0):
    """Return the indices for an array, given a masking function like
    tri{u,l}.
    """
    m = np.ones((n, n), int)
    a = mask_func(m, k)
    return np.where(a != 0)
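A quick check of the helper on a small array (a self-contained restatement of the function above, assuming numpy is imported as np):

```python
import numpy as np

def mask_indices(n, mask_func, k=0):
    """Indices of an (n, n) array where mask_func (e.g. np.triu) is nonzero."""
    m = np.ones((n, n), int)
    a = mask_func(m, k)
    return np.where(a != 0)

a = np.arange(9).reshape(3, 3)
iu = mask_indices(3, np.triu)
print(a[iu])  # upper-triangle values in row-major order: [0 1 2 4 5 8]
```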


def diag_indices(n, ndim=2):
    """Return the indices to index into a diagonal.

    Examples
    --------
    >>> a = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
    >>> a
    array([[ 1,  2,  3,  4],
           [ 5,  6,  7,  8],
           [ 9, 10, 11, 12],
           [13, 14, 15, 16]])
    >>> di = diag_indices(4)
    >>> a[di] = 100
    >>> a
    array([[100,   2,   3,   4],
           [  5, 100,   7,   8],
           [  9,  10, 100,  12],
           [ 13,  14,  15, 100]])
    """
    idx = np.arange(n)
    return (idx,)*ndim


def tril_indices(n, k=0):
    """Return the indices for the lower-triangle of an (n, n) array.

    Examples
    --------
    >>> a = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
    >>> a
    array([[ 1,  2,  3,  4],
           [ 5,  6,  7,  8],
           [ 9, 10, 11, 12],
           [13, 14, 15, 16]])
    >>> dl = tril_indices(4)
    >>> a[dl] = -1
    >>> a
    array([[-1,  2,  3,  4],
           [-1, -1,  7,  8],
           [-1, -1, -1, 12],
           [-1, -1, -1, -1]])
    >>> dl = tril_indices(4, 2)
    >>> a[dl] = -10
    >>> a
    array([[-10, -10, -10,   4],
           [-10, -10, -10, -10],
           [-10, -10, -10, -10],
           [-10, -10, -10, -10]])
    """
    return mask_indices(n, np.tril, k)


def triu_indices(n, k=0):
    """Return the indices for the upper-triangle of an (n, n) array.

    Examples
    --------
    >>> a = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
    >>> a
    array([[ 1,  2,  3,  4],
           [ 5,  6,  7,  8],
           [ 9, 10, 11, 12],
           [13, 14, 15, 16]])
    >>> du = triu_indices(4)
    >>> a[du] = -1
    >>> a
    array([[-1, -1, -1, -1],
           [ 5, -1, -1, -1],
           [ 9, 10, -1, -1],
           [13, 14, 15, -1]])
    >>> du = triu_indices(4, 2)
    >>> a[du] = -10
    >>> a
    array([[ -1,  -1, -10, -10],
           [  5,  -1,  -1, -10],
           [  9,  10,  -1,  -1],
           [ 13,  14,  15,  -1]])
    """
    return mask_indices(n, np.triu, k)
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Robert Kern
On Sat, Jun 6, 2009 at 02:01, Fernando Perez fperez@gmail.com wrote:
 Howdy,

 I'm finding myself often having to index *into* arrays to set values.
 As best I can see, in numpy functions/methods like diag or tri{u,l}
 provide for the extraction of values from arrays, but I haven't found
 their counterparts for generating the equivalent indices.

 Their implementations are actually quite trivial once you think of it,
 but I wonder if there's any benefit in having this type of machinery
 in numpy itself. I had to think of how to do it more than once, so I
 ended up writing up a few utilities for it.

 Below are the naive versions I have for internal use.  If there's any
 interest, I'm happy to document them to full compliance and submit them as a patch.

+1

diag_indices() can be made more efficient, but these are fine.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] matrix multiplication

2009-06-06 Thread Gael Varoquaux
On Fri, Jun 05, 2009 at 06:02:09PM -0400, Alan G Isaac wrote:
 I think something close to this would be possible:
 add dot as an array method.
   A .dot(B) .dot(C)
 is not as pretty as
   A * B * C
 but it is much better than
   np.dot(np.dot(A,B),C)

 In fact it is so much better, that it
 might (?) be worth considering separately
 from the entire matrix discussion.

I am +1e4 on that proposition.

My 1e-2 euros.

Gaël
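A side-by-side sketch of the two spellings (this assumes an ndarray `.dot` method, which is exactly what is being proposed here):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[2.0, 0.0], [0.0, 2.0]])

nested = np.dot(np.dot(A, B), C)   # the current, hard-to-read spelling
chained = A.dot(B).dot(C)          # the proposed method chaining
assert np.allclose(nested, chained)
```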


Re: [Numpy-discussion] extract elements of an array that are contained in another array?

2009-06-06 Thread Neil Crighton
Robert Cimrman cimrman3 at ntc.zcu.cz writes:

 Anne Archibald wrote:

  1. add a keyword argument to intersect1d assume_unique; if it is not
  present, check for uniqueness and emit a warning if not unique
  2. change the warning to an exception
  Optionally:
  3. change the meaning of the function to that of intersect1d_nu if the
  keyword argument is not present
  
 You mean something like:
 
 def intersect1d(ar1, ar2, assume_unique=False):
     if not assume_unique:
         return intersect1d_nu(ar1, ar2)
     else:
         ...  # the current code
 
 intersect1d_nu could be still exported to numpy namespace, or not.
 

+1 - from the user's point of view there should just be intersect1d and
setmember1d (i.e. no '_nu' versions). The assume_unique keyword Robert suggests
can be used if speed is a problem.
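A minimal sketch of what the merged function could look like (the sort-and-compare body follows the existing arraysetops approach; the `_nu` behavior is folded in via `np.unique`):

```python
import numpy as np

def intersect1d(ar1, ar2, assume_unique=False):
    """Sorted, unique values common to both arrays."""
    if not assume_unique:
        ar1 = np.unique(ar1)
        ar2 = np.unique(ar2)
    aux = np.concatenate((ar1, ar2))
    aux.sort()
    # a value present in both inputs shows up twice in the sorted pool
    return aux[:-1][aux[1:] == aux[:-1]]

print(intersect1d([1, 2, 2, 3], [2, 3, 3, 4]))             # [2 3]
print(intersect1d([1, 2, 2], [2, 4], assume_unique=True))  # duplicates break it: [2 2]
```

The second call shows why the keyword matters: passing non-unique input with `assume_unique=True` gives a wrong answer, which is the confusion the merge is meant to remove.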

I really like in1d (no underscore) as a new name for setmember1d_nu. inarray is
another possibility. I don't like 'ain'; 'a' in front of 'in' detracts from 
readability, unlike the extra a in arange.

Can we summarise the discussion in this thread and write up a short proposal
about what we'd like to change in arraysetops, and how to make the changes? 
Then it's easy for other people to give their opinion on any changes. I can do
this if no one else has time.


Neil




Re: [Numpy-discussion] extract elements of an array that are contained in another array?

2009-06-06 Thread josef . pktd
On Sat, Jun 6, 2009 at 4:42 AM, Neil Crighton neilcrigh...@gmail.com wrote:
 Robert Cimrman cimrman3 at ntc.zcu.cz writes:

 Anne Archibald wrote:

  1. add a keyword argument to intersect1d assume_unique; if it is not
  present, check for uniqueness and emit a warning if not unique
  2. change the warning to an exception
  Optionally:
  3. change the meaning of the function to that of intersect1d_nu if the
  keyword argument is not present
 

1. merge _nu version into one function
---

 You mean something like:

 def intersect1d(ar1, ar2, assume_unique=False):
      if not assume_unique:
          return intersect1d_nu(ar1, ar2)
      else:
          ... # the current code

 intersect1d_nu could be still exported to numpy namespace, or not.


 +1 - from the user's point of view there should just be intersect1d and
 setmember1d (i.e. no '_nu' versions). The assume_unique keyword Robert 
 suggests
 can be used if speed is a problem.

+1 on rolling the _nu versions into the plain versions this way; it
would avoid a lot of the confusion.
It would not be a code-breaking API change for existing correct usage
(though there would be some speed regression without the keyword).

deprecate intersect1d_nu
^^
 intersect1d_nu could be still exported to numpy namespace, or not.
I would say not, if they are the default branch of the non _nu version

+1 on deprecation


2. alias as in
-

 I really like in1d (no underscore) as a new name for setmember1d_nu. inarray 
 is
 another possibility. I don't like 'ain'; 'a' in front of 'in' detracts from
 readability, unlike the extra a in arange.
I don't like the extra 'a's either, now that namespaces are commonly used.

alias setmember1d_nu as `in1d` or `isin1d`, because the function is an
`in` test and not a set operation
+1
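One possible `in1d` along these lines (a sketch using `searchsorted` rather than the masking trick in arraysetops; the name and signature are just the proposal above):

```python
import numpy as np

def in1d(ar1, ar2):
    """Boolean mask over ar1, True where the element also occurs in ar2."""
    ar1 = np.asarray(ar1)
    ar2 = np.unique(ar2)              # sorted, duplicates dropped
    idx = np.searchsorted(ar2, ar1)   # where each ar1 value would insert
    idx[idx == len(ar2)] = 0          # clip positions past the end
    return ar2[idx] == ar1

mask = in1d([0, 1, 2, 5], [1, 2, 3])
print(mask)   # [False  True  True False]
```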


 Can we summarise the discussion in this thread and write up a short proposal
 about what we'd like to change in arraysetops, and how to make the changes?
 Then it's easy for other people to give their opinion on any changes. I can do
 this if no one else has time.


 other points

3. behavior of other set functions
---

guarantee that setdiff1d works for non-unique arrays (even when
implementation changes), and change documentation
+1

need to check other functions
^^
union1d:  works for non-unique arrays, obvious from source

setxor1d: requires unique arrays
>>> np.setxor1d([1,2,3,3,4,5], [0,0,1,2,2,6])
array([2, 4, 5, 6])
>>> np.setxor1d(np.unique([1,2,3,3,4,5]), np.unique([0,0,1,2,2,6]))
array([0, 3, 4, 5, 6])

setxor: add keyword option and call unique by default
+1 for symmetry
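The keyword-wrapped version could look like this (the body mirrors the existing setxor1d logic; `assume_unique` is the proposed addition):

```python
import numpy as np

def setxor1d(ar1, ar2, assume_unique=False):
    """Sorted values that appear in exactly one of the two arrays."""
    if not assume_unique:
        ar1 = np.unique(ar1)
        ar2 = np.unique(ar2)
    aux = np.concatenate((ar1, ar2))
    aux.sort()
    # a value is in the xor iff it differs from both of its neighbours
    flag = np.concatenate(([True], aux[1:] != aux[:-1], [True]))
    return aux[flag[1:] & flag[:-1]]

print(setxor1d([1, 2, 3, 3, 4, 5], [0, 0, 1, 2, 2, 6]))  # [0 3 4 5 6]
```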

ediff1d and unique1d are defined for non-unique arrays


4. name of keyword


intersect1d(ar1, ar2, assume_unique=False)

alternative isunique=False  or just unique=False
+1 less to write


5. module name
---

rename arraysetops to something easier to read like setfun. I think it
would only affect internal changes since all functions are exported to
the main numpy name space
+1e-4  (I got used to arrayse_tops)


6. keep docs in sync with correct usage
-

obvious


That's my summary and opinions

Josef


 Neil




Re: [Numpy-discussion] performance matrix multiplication vs. matlab

2009-06-06 Thread Keith Goodman
On Fri, Jun 5, 2009 at 2:37 PM, Chris Colbert sccolb...@gmail.com wrote:
 I'll caution anyone from using Atlas from the repos in Ubuntu 9.04  as the
 package is broken:

 https://bugs.launchpad.net/ubuntu/+source/atlas/+bug/363510


 just build Atlas yourself, you get better performance AND threading.
 Building it is not the nightmare it sounds like. I think i've done it a
 total of four times now, both 32-bit and 64-bit builds.

 If you need help with it,  just email me off list.

That's a nice offer. I tried building ATLAS on Debian a year or two
ago and got stuck.

Clear out your inbox!


Re: [Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Keith Goodman
On Sat, Jun 6, 2009 at 12:01 AM, Fernando Perez fperez@gmail.com wrote:

 def diag_indices(n, ndim=2):
     """Return the indices to index into a diagonal.

     Examples
     --------
     >>> a = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
     >>> a
     array([[ 1,  2,  3,  4],
            [ 5,  6,  7,  8],
            [ 9, 10, 11, 12],
            [13, 14, 15, 16]])
     >>> di = diag_indices(4)
     >>> a[di] = 100
     >>> a
     array([[100,   2,   3,   4],
            [  5, 100,   7,   8],
            [  9,  10, 100,  12],
            [ 13,  14,  15, 100]])
     """
     idx = np.arange(n)
     return (idx,)*ndim

I often set the diagonal to zero. Now I can make a fill_diag function.

What do you think of passing in the array a instead of n and ndim
(diag_indices_list_2 below)?

from numpy import arange

def diag_indices(n, ndim=2):
    idx = arange(n)
    return (idx,)*ndim

def diag_indices_list(n, ndim=2):
    idx = range(n)
    return (idx,)*ndim

def diag_indices_list_2(a):
    idx = range(a.shape[0])
    return (idx,) * a.ndim

 a = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
 n = 4
 ndim = 2

 timeit diag_indices(n, ndim)
100 loops, best of 3: 1.76 µs per loop

 timeit diag_indices_list(n, ndim)
100 loops, best of 3: 1.03 µs per loop

 timeit diag_indices_list_2(a)
100 loops, best of 3: 1.21 µs per loop
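The fill_diag mentioned above is then a two-liner on top of the diag_indices idea (a sketch; newer NumPy ships `np.fill_diagonal` for the same job):

```python
import numpy as np

def fill_diag(a, value):
    """Set the main diagonal of square array `a` in place."""
    idx = np.arange(a.shape[0])
    a[(idx,) * a.ndim] = value
    return a

a = np.ones((3, 3))
fill_diag(a, 0)
print(a)   # ones everywhere except zeros on the diagonal
```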


Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Alan G Isaac
On 6/6/2009 12:41 AM Charles R Harris apparently wrote:
 Well, one could argue that. The x.T is a member of the dual, hence maps 
 vectors to the reals. Usually the reals aren't represented by 1x1 
 matrices. Just my [.02] cents. 

Of course that same perspective could
lead you to argue that a M×N matrix
is for mapping N vectors to M vectors,
not for doing matrix multiplication.

Matrix multiplication produces a
matrix result **by definition**.
Treating 1×1 matrices as equivalent
to scalars is just a convenient anomaly
in certain popular matrix-oriented
languages.

Cheers,
Alan Isaac




Re: [Numpy-discussion] performance matrix multiplication vs. matlab

2009-06-06 Thread Chris Colbert
since there is demand, and someone already emailed me, I'll put what I
did in this post. It pretty much follows whats on the scipy website,
with a couple other things I gleaned from reading the ATLAS install
guide:

and here it goes, this is valid for Ubuntu 9.04 64-bit  (# starts a
comment when working in the terminal)


download lapack 3.2.1 http://www.netlib.org/lapack/lapack.tgz
download atlas 3.8.3
http://sourceforge.net/project/downloading.php?group_id=23725&filename=atlas3.8.3.tar.bz2&a=65663372

create folder  /home/your-user-name/build/atlas   #this is where we build
create folder /home/your-user-name/build/lapack #atlas and lapack

extract the folder lapack-3.2.1 to /home/your-user-name/build/lapack
extract the contents of atlas to /home/your-user-name/build/atlas



now in the terminal:

# remove g77 and get stuff we need
sudo apt-get remove g77
sudo apt-get install gfortran
sudo apt-get install build-essential
sudo apt-get install python-dev
sudo apt-get install python-setuptools
sudo easy_install nose


# build lapack
cd /home/your-user-name/build/lapack/lapack-3.2.1
cp INSTALL/make.inc.gfortran make.inc

gedit make.inc
#
#in the make.inc file make sure the line   OPTS = -O2 -fPIC -m64
#andNOOPTS = -O0 -fPIC -m64
#the -m64 flags build 64-bit code, if you want 32-bit, simply leave
#the -m64 flags out
#

cd SRC

#this should build lapack without error
make



# build atlas

cd /home/your-user-name/build/atlas

#this is simply where we will build the atlas
#libs, you can name it what you want
mkdir Linux_X64SSE2

cd Linux_X64SSE2

#need to turn off cpu-throttling
sudo cpufreq-selector -g performance

#if you don't want 64bit code remove the -b 64 flag. replace the
#number 2400 with your CPU frequency in MHZ
#i.e. my cpu is 2.53 GHZ so i put 2530
../configure -b 64 -D c -DPentiumCPS=2400 -Fa alg -fPIC
--with-netlib-lapack=/home/your-user-name/build/lapack/lapack-3.2.1/Lapack_LINUX.a

#the configure step takes a bit, and should end without errors

 #this takes a long time, go get some coffee, it should end without error
make build

#this will verify the build, also long running
make check

#this will test the performance of your build and give you feedback on
#it. your numbers should be close to the test numbers at the end
make time

cd lib

#builds single threaded .so's
make shared

#builds multithreaded .so's
make ptshared

#copies all of the atlas libs (and the lapack lib built with atlas)
#to our lib dir
sudo  cp  *.so  /usr/local/lib/



#now we need to get and build numpy

download numpy 1.3.0
http://sourceforge.net/project/downloading.php?group_id=1369&filename=numpy-1.3.0.tar.gz&a=93506515

extract the folder numpy-1.3.0 to /home/your-user-name/build

#in the terminal

cd /home/your-user-name/build/numpy-1.3.0
cp site.cfg.example site.cfg

gedit site.cfg
###
# in site.cfg uncomment the following lines and make them look like these
[DEFAULT]
library_dirs = /usr/local/lib
include_dirs = /usr/local/include

[blas_opt]
libraries = ptf77blas, ptcblas, atlas

[lapack_opt]
libraries = lapack, ptf77blas, ptcblas, atlas
###
#if you want single threaded libs, uncomment those lines instead


#build numpy- should end without error
python setup.py build

#install numpy
python setup.py install

cd /home

sudo ldconfig

python
import numpy
numpy.test()   #this should run with no errors (skipped tests and known-fails 
are ok)
a = numpy.random.randn(6000, 6000)
numpy.dot(a, a) # look at your cpu monitor and verify all cpu cores are 
at 100% if you built with threads
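Two extra sanity checks, in Python rather than the shell (a sketch; `numpy.__config__.show()` prints the libraries the build was configured against):

```python
import time
import numpy as np

# Show the BLAS/LAPACK the build was configured with; an ATLAS-linked
# build lists atlas/ptf77blas/ptcblas here instead of falling back to
# the bundled lapack_lite.
np.__config__.show()

# Crude timing check: an optimized BLAS makes large dot products far
# faster than the unoptimized fallback.
a = np.random.randn(1000, 1000)
t0 = time.time()
b = np.dot(a, a)
print("1000x1000 dot: %.2f s" % (time.time() - t0))
```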


Celebrate with a beer!


Cheers!

Chris





On Sat, Jun 6, 2009 at 10:42 AM, Keith Goodmankwgood...@gmail.com wrote:
 On Fri, Jun 5, 2009 at 2:37 PM, Chris Colbert sccolb...@gmail.com wrote:
 I'll caution anyone from using Atlas from the repos in Ubuntu 9.04  as the
 package is broken:

 https://bugs.launchpad.net/ubuntu/+source/atlas/+bug/363510


 just build Atlas yourself, you get better performance AND threading.
 Building it is not the nightmare it sounds like. I think i've done it a
 total of four times now, both 32-bit and 64-bit builds.

 If you need help with it,  just email me off list.

 That's a nice offer. I tried building ATLAS on Debian a year or two
 ago and got stuck.

 Clear out your inbox!


[Numpy-discussion] is my numpy installation using custom blas/lapack?

2009-06-06 Thread Richard Llewellyn
Hello,

I've managed a build of lapack and atlas on Fedora 10 on a quad core, 64,
and now (...) have a numpy I can import that runs tests ok. :]I am
puzzled, however, that numpy builds and imports lapack_lite.  Does this mean
I have a problem with the build(s)?
Upon building numpy, I see the troubling output:



C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protecto
r --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC

compile options: '-c'
gcc: _configtest.c
gcc -pthread _configtest.o -L/usr/local/rich/src/scipy_build/lib -llapack
-lptf77blas -lptcblas -latlas -o _configtest
/usr/bin/ld: _configtest: hidden symbol `__powidf2' in
/usr/lib/gcc/x86_64-redhat-linux/4.3.2/libgcc.a(_powidf2.o) is reference
d by DSO
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: ld returned 1 exit status
/usr/bin/ld: _configtest: hidden symbol `__powidf2' in
/usr/lib/gcc/x86_64-redhat-linux/4.3.2/libgcc.a(_powidf2.o) is reference
d by DSO
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: ld returned 1 exit status
failure.
removing: _configtest.c _configtest.o
Status: 255
Output:
  FOUND:
libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
library_dirs = ['/usr/local/rich/src/scipy_build/lib']
language = f77
define_macros = [('NO_ATLAS_INFO', 2)]

##

I don't have root on this machine, but could pester admins for eventual
temporary access.

Thanks much for any help,
Rich


Re: [Numpy-discussion] is my numpy installation using custom blas/lapack?

2009-06-06 Thread Chris Colbert
when you build numpy, did you use site.cfg to tell it where to find
your atlas libs?

On Sat, Jun 6, 2009 at 1:02 PM, Richard Llewellynllew...@gmail.com wrote:
 Hello,

 I've managed a build of lapack and atlas on Fedora 10 on a quad core, 64,
 and now (...) have a numpy I can import that runs tests ok. :]    I am
 puzzled, however, that numpy builds and imports lapack_lite.  Does this mean
 I have a problem with the build(s)?
 Upon building numpy, I see the troubling output:

 

 C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall
 -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protecto
 r --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC

 compile options: '-c'
 gcc: _configtest.c
 gcc -pthread _configtest.o -L/usr/local/rich/src/scipy_build/lib -llapack
 -lptf77blas -lptcblas -latlas -o _configtest
 /usr/bin/ld: _configtest: hidden symbol `__powidf2' in
 /usr/lib/gcc/x86_64-redhat-linux/4.3.2/libgcc.a(_powidf2.o) is reference
 d by DSO
 /usr/bin/ld: final link failed: Nonrepresentable section on output
 collect2: ld returned 1 exit status
 /usr/bin/ld: _configtest: hidden symbol `__powidf2' in
 /usr/lib/gcc/x86_64-redhat-linux/4.3.2/libgcc.a(_powidf2.o) is reference
 d by DSO
 /usr/bin/ld: final link failed: Nonrepresentable section on output
 collect2: ld returned 1 exit status
 failure.
 removing: _configtest.c _configtest.o
 Status: 255
 Output:
   FOUND:
     libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
     library_dirs = ['/usr/local/rich/src/scipy_build/lib']
     language = f77
     define_macros = [('NO_ATLAS_INFO', 2)]

 ##

 I don't have root on this machine, but could pester admins for eventual
 temporary access.

 Thanks much for any help,
 Rich



Re: [Numpy-discussion] is my numpy installation using custom blas/lapack?

2009-06-06 Thread Richard Llewellyn
Hi Chris,
 thanks much for posting those installation instructions.  Seems similar to
what I pieced together.

I gather ATLAS not found.  Oops, drank that beer too early.

I copied Atlas libs to /usr/local/rich/src/scipy_build/lib.

This is my site.cfg.  Out of desperation I tried search_static_first = 1,
but probably of no use.

[DEFAULT]
library_dirs = /usr/local/rich/src/scipy_build/lib:$HOME/usr/galois/lib
include_dirs =
/usr/local/rich/src/scipy_build/lib/include:$HOME/usr/galois/include
search_static_first = 1

[blas_opt]
libraries = f77blas, cblas, atlas

[lapack_opt]
libraries = lapack, f77blas, cblas, atlas

[amd]
amd_libs = amd

[umfpack]
umfpack_libs = umfpack, gfortran

[fftw]
libraries = fftw3


Rich




On Sat, Jun 6, 2009 at 10:25 AM, Chris Colbert sccolb...@gmail.com wrote:

 when you build numpy, did you use site.cfg to tell it where to find
 your atlas libs?

 On Sat, Jun 6, 2009 at 1:02 PM, Richard Llewellynllew...@gmail.com
 wrote:
  Hello,
 
  I've managed a build of lapack and atlas on Fedora 10 on a quad core, 64,
  and now (...) have a numpy I can import that runs tests ok. :]I am
  puzzled, however, that numpy builds and imports lapack_lite.  Does this
 mean
  I have a problem with the build(s)?
  Upon building numpy, I see the troubling output:
 
  
 
  C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall
  -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protecto
  r --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC
 
  compile options: '-c'
  gcc: _configtest.c
  gcc -pthread _configtest.o -L/usr/local/rich/src/scipy_build/lib -llapack
  -lptf77blas -lptcblas -latlas -o _configtest
  /usr/bin/ld: _configtest: hidden symbol `__powidf2' in
  /usr/lib/gcc/x86_64-redhat-linux/4.3.2/libgcc.a(_powidf2.o) is reference
  d by DSO
  /usr/bin/ld: final link failed: Nonrepresentable section on output
  collect2: ld returned 1 exit status
  /usr/bin/ld: _configtest: hidden symbol `__powidf2' in
  /usr/lib/gcc/x86_64-redhat-linux/4.3.2/libgcc.a(_powidf2.o) is reference
  d by DSO
  /usr/bin/ld: final link failed: Nonrepresentable section on output
  collect2: ld returned 1 exit status
  failure.
  removing: _configtest.c _configtest.o
  Status: 255
  Output:
FOUND:
  libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
  library_dirs = ['/usr/local/rich/src/scipy_build/lib']
  language = f77
  define_macros = [('NO_ATLAS_INFO', 2)]
 
  ##
 
  I don't have root on this machine, but could pester admins for eventual
  temporary access.
 
  Thanks much for any help,
  Rich
 


Re: [Numpy-discussion] is my numpy installation using custom blas/lapack?

2009-06-06 Thread Chris Colbert
can you run this and post the build.log to pastebin.com:

assuming your numpy build directory is /home/numpy-1.3.0:

cd /home/numpy-1.3.0
rm -rf build
python setup.py build &> build.log


Chris


On Sat, Jun 6, 2009 at 1:37 PM, Richard Llewellynllew...@gmail.com wrote:
 Hi Chris,
  thanks much for posting those installation instructions.  Seems similar to
 what I pieced together.

 I gather ATLAS not found.  Oops, drank that beer too early.

 I copied Atlas libs to /usr/local/rich/src/scipy_build/lib.

 This is my site.cfg.  Out of desperation I tried search_static_first = 1,
 but probably of no use.

 [DEFAULT]
 library_dirs = /usr/local/rich/src/scipy_build/lib:$HOME/usr/galois/lib
 include_dirs =
 /usr/local/rich/src/scipy_build/lib/include:$HOME/usr/galois/include
 search_static_first = 1

 [blas_opt]
 libraries = f77blas, cblas, atlas

 [lapack_opt]
 libraries = lapack, f77blas, cblas, atlas

 [amd]
 amd_libs = amd

 [umfpack]
 umfpack_libs = umfpack, gfortran

 [fftw]
 libraries = fftw3


 Rich




 On Sat, Jun 6, 2009 at 10:25 AM, Chris Colbert sccolb...@gmail.com wrote:

 when you build numpy, did you use site.cfg to tell it where to find
 your atlas libs?

 On Sat, Jun 6, 2009 at 1:02 PM, Richard Llewellynllew...@gmail.com
 wrote:
  Hello,
 
  I've managed a build of lapack and atlas on Fedora 10 on a quad core,
  64,
  and now (...) have a numpy I can import that runs tests ok. :]    I am
  puzzled, however, that numpy builds and imports lapack_lite.  Does this
  mean
  I have a problem with the build(s)?
  Upon building numpy, I see the troubling output:
 
  
 
  C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe
  -Wall
  -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protecto
  r --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC
  -fPIC
 
  compile options: '-c'
  gcc: _configtest.c
  gcc -pthread _configtest.o -L/usr/local/rich/src/scipy_build/lib
  -llapack
  -lptf77blas -lptcblas -latlas -o _configtest
  /usr/bin/ld: _configtest: hidden symbol `__powidf2' in
  /usr/lib/gcc/x86_64-redhat-linux/4.3.2/libgcc.a(_powidf2.o) is reference
  d by DSO
  /usr/bin/ld: final link failed: Nonrepresentable section on output
  collect2: ld returned 1 exit status
  /usr/bin/ld: _configtest: hidden symbol `__powidf2' in
  /usr/lib/gcc/x86_64-redhat-linux/4.3.2/libgcc.a(_powidf2.o) is reference
  d by DSO
  /usr/bin/ld: final link failed: Nonrepresentable section on output
  collect2: ld returned 1 exit status
  failure.
  removing: _configtest.c _configtest.o
  Status: 255
  Output:
    FOUND:
      libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
      library_dirs = ['/usr/local/rich/src/scipy_build/lib']
      language = f77
      define_macros = [('NO_ATLAS_INFO', 2)]
 
  ##
 
  I don't have root on this machine, but could pester admins for eventual
  temporary access.
 
  Thanks much for any help,
  Rich
 


Re: [Numpy-discussion] is my numpy installation using custom blas/lapack?

2009-06-06 Thread Chris Colbert
and where exactly are you seeing atlas not found? during the build
process, or when you import numpy in python?

if it's the latter, you need to add a .conf file in /etc/ld.so.conf.d/
with the line /usr/local/rich/src/scipy_build/lib, and then run sudo
ldconfig

Chris


On Sat, Jun 6, 2009 at 1:42 PM, Chris Colbertsccolb...@gmail.com wrote:
 can you run this and post the build.log to pastebin.com:

 assuming your numpy build directory is /home/numpy-1.3.0:

 cd /home/numpy-1.3.0
 rm -rf build
 python setup.py build &> build.log


 Chris


 On Sat, Jun 6, 2009 at 1:37 PM, Richard Llewellynllew...@gmail.com wrote:
 Hi Chris,
  thanks much for posting those installation instructions.  Seems similar to
 what I pieced together.

 I gather ATLAS not found.  Oops, drank that beer too early.

 I copied Atlas libs to /usr/local/rich/src/scipy_build/lib.

 This is my site.cfg.  Out of desperation I tried search_static_first = 1,
 but probably of no use.

 [DEFAULT]
 library_dirs = /usr/local/rich/src/scipy_build/lib:$HOME/usr/galois/lib
 include_dirs =
 /usr/local/rich/src/scipy_build/lib/include:$HOME/usr/galois/include
 search_static_first = 1

 [blas_opt]
 libraries = f77blas, cblas, atlas

 [lapack_opt]
 libraries = lapack, f77blas, cblas, atlas

 [amd]
 amd_libs = amd

 [umfpack]
 umfpack_libs = umfpack, gfortran

 [fftw]
 libraries = fftw3


 Rich




 On Sat, Jun 6, 2009 at 10:25 AM, Chris Colbert sccolb...@gmail.com wrote:

 when you build numpy, did you use site.cfg to tell it where to find
 your atlas libs?

 On Sat, Jun 6, 2009 at 1:02 PM, Richard Llewellynllew...@gmail.com
 wrote:
  Hello,
 
  I've managed a build of lapack and atlas on Fedora 10 on a quad core,
  64,
  and now (...) have a numpy I can import that runs tests ok. :]    I am
  puzzled, however, that numpy builds and imports lapack_lite.  Does this
  mean
  I have a problem with the build(s)?
  Upon building numpy, I see the troubling output:
 
  
 
  C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe
  -Wall
  -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protecto
  r --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC
  -fPIC
 
  compile options: '-c'
  gcc: _configtest.c
  gcc -pthread _configtest.o -L/usr/local/rich/src/scipy_build/lib
  -llapack
  -lptf77blas -lptcblas -latlas -o _configtest
  /usr/bin/ld: _configtest: hidden symbol `__powidf2' in
  /usr/lib/gcc/x86_64-redhat-linux/4.3.2/libgcc.a(_powidf2.o) is reference
  d by DSO
  /usr/bin/ld: final link failed: Nonrepresentable section on output
  collect2: ld returned 1 exit status
  /usr/bin/ld: _configtest: hidden symbol `__powidf2' in
  /usr/lib/gcc/x86_64-redhat-linux/4.3.2/libgcc.a(_powidf2.o) is reference
  d by DSO
  /usr/bin/ld: final link failed: Nonrepresentable section on output
  collect2: ld returned 1 exit status
  failure.
  removing: _configtest.c _configtest.o
  Status: 255
  Output:
    FOUND:
      libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
      library_dirs = ['/usr/local/rich/src/scipy_build/lib']
      language = f77
      define_macros = [('NO_ATLAS_INFO', 2)]
 
  ##
 
  I don't have root on this machine, but could pester admins for eventual
  temporary access.
 
  Thanks much for any help,
  Rich
 


Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Charles R Harris
On Sat, Jun 6, 2009 at 9:29 AM, Alan G Isaac ais...@american.edu wrote:

 On 6/6/2009 12:41 AM Charles R Harris apparently wrote:
  Well, one could argue that. The x.T is a member of the dual, hence maps
  vectors to the reals. Usually the reals aren't represented by 1x1
  matrices. Just my [.02] cents.

 Of course that same perspective could
 lead you to argue that a M×N matrix
 is for mapping N vectors to M vectors,
 not for doing matrix multiplication.

 Matrix multiplication produces a
 matrix result **by definition**.
 Treating 1×1 matrices as equivalent
 to scalars is just a convenient anomaly
 in certain popular matrix-oriented
 languages.


So is eye(3)*(v.T*v) valid? If (v.T*v) is 1x1 you have incompatible
dimensions for the multiplication, whereas if it is a scalar you can
multiply eye(3) by it. The usual matrix algebra gets a bit confused here
because it isn't clear about the distinction between inner products and the
expression v.T*v which is typically used in its place.  I think the only
consistent way around this is to treat 1x1 matrices as scalars, which I
believe matlab does,  but then the expression eye(3)*(v.T*v) isn't
associative and we lose some checks.
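For concreteness, the incompatibility described here can be seen directly with numpy's current matrix class (a small demonstration of existing behavior, not a proposal):

```python
import numpy as np

eye3 = np.matrix(np.eye(3))
v = np.matrix([[1.0], [2.0], [3.0]])   # a 3x1 column vector

s = v.T * v                  # matrix product: a 1x1 matrix, [[14.]]
assert s.shape == (1, 1)

# (3,3) times (1,1): dimensions are incompatible, so numpy raises,
# whereas a language that collapses 1x1 matrices to scalars would not.
try:
    eye3 * s
    raised = False
except ValueError:
    raised = True
```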

I don't think we can change the current matrix class, to do so would break
too much code. It would be nice to extend it with an explicit inner product,
but I can't think of any simple notation for it that python would parse.

Chuck


Re: [Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Fernando Perez
On Sat, Jun 6, 2009 at 12:09 AM, Robert Kern <robert.k...@gmail.com> wrote:

 +1

OK, thanks.  I'll try to get it ready.

 diag_indices() can be made more efficient, but these are fine.

Suggestion?  Right now it's not obvious to me...

A few more questions:

- Are doctests considered enough testing for numpy, or are separate
tests also required?

- Where should these go?

- Any interest in also having the stuff below?  I'm needing to build
structured random arrays a lot (symmetric, anti-symmetric, symmetric
with  a particular diagonal, etc), and these are coming in handy.  If
you want them, I'll put the whole thing together (these use the
indexing utilities from the previous suggestion).

Thanks!


f

 Other suggested utilities.  Not fully commented yet, but if they
are wanted for numpy, will be submitted in final form.

def structured_rand_arr(size, sample_func=np.random.random,
                        ltfac=None, utfac=None, fill_diag=None):
    """Make a structured random 2-d array of shape (size,size).

    Parameters
    ----------
    size : int
      Determines the shape of the output array: (size,size).

    sample_func : function, optional.
      Must be a function which when called with a 2-tuple of ints, returns a
      2-d array of that shape.  By default, np.random.random is used, but any
      other sampling function can be used as long as it matches this API.

    utfac : float, optional
      Multiplicative factor for the upper triangular part of the matrix.

    ltfac : float, optional
      Multiplicative factor for the lower triangular part of the matrix.

    fill_diag : float, optional
      If given, use this value to fill in the diagonal.  Otherwise the diagonal
      will contain random elements.
    """
    # Make a random array from the given sampling function
    mat0 = sample_func((size,size))
    # And the empty one we'll then fill in to return
    mat = np.empty_like(mat0)
    # Extract indices for upper-triangle, lower-triangle and diagonal
    uidx = triu_indices(size,1)
    lidx = tril_indices(size,-1)
    didx = diag_indices(size)
    # Extract each part from the original and copy it to the output, possibly
    # applying multiplicative factors.  We check the factors instead of
    # defaulting to 1.0 to avoid unnecessary floating point multiplications
    # which could be noticeable for very large sizes.
    if utfac:
        mat[uidx] = utfac * mat0[uidx]
    else:
        mat[uidx] = mat0[uidx]
    if ltfac:
        mat[lidx] = ltfac * mat0.T[lidx]
    else:
        mat[lidx] = mat0.T[lidx]
    # If fill_diag was provided, use it; otherwise take the values in the
    # diagonal from the original random array.
    if fill_diag:
        mat[didx] = fill_diag
    else:
        mat[didx] = mat0[didx]

    return mat
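As a usage sketch of the symmetric case (this stands in for the tril_indices helper proposed earlier in the thread by building the same index arrays from core numpy's tril/where, so it runs on its own):

```python
import numpy as np

def symm_rand_demo(size, seed=0):
    # Build a symmetric random array by mirroring the upper triangle
    # into the strictly lower triangle, as structured_rand_arr does.
    rng = np.random.RandomState(seed)
    mat = rng.random_sample((size, size))
    # Equivalent of tril_indices(size, -1) using only core numpy
    lidx = np.where(np.tril(np.ones((size, size)), -1) != 0)
    mat[lidx] = mat.T[lidx]
    return mat

m = symm_rand_demo(4)
assert np.allclose(m, m.T)
```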


def symm_rand_arr(size, sample_func=np.random.random, fill_diag=None):
    """Make a symmetric random 2-d array of shape (size,size).

    Parameters
    ----------
    size : int
      Size of the output array.

    fill_diag : float, optional
      If given, use this value to fill in the diagonal.  Useful for
    """
    return structured_rand_arr(size, sample_func, fill_diag=fill_diag)


def antisymm_rand_arr(size, sample_func=np.random.random, fill_diag=None):
    """Make an anti-symmetric random 2-d array of shape (size,size).

    Parameters
    ----------
    size : int
      Size of the output array.
    """
    return structured_rand_arr(size, sample_func, ltfac=-1.0, fill_diag=fill_diag)


Re: [Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Fernando Perez
On Sat, Jun 6, 2009 at 8:27 AM, Keith Goodman <kwgood...@gmail.com> wrote:
 What do you think of passing in the array a instead of n and ndim
 (diag_indices_list_2 below)?

Yes, I thought of that too.  I see use cases for both though.  Would
people prefer both, or rather a flexible interface that tries to
introspect the inputs and do both in one call?
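A possible sketch of the array-accepting variant (the name diag_indices_from is hypothetical here, purely to illustrate inferring n and ndim from the input):

```python
import numpy as np

def diag_indices_from(arr):
    # Infer n and ndim from the array itself; requires all axes equal,
    # so the returned tuple indexes the main diagonal.
    if not all(d == arr.shape[0] for d in arr.shape):
        raise ValueError("all dimensions of the input must be equal")
    idx = np.arange(arr.shape[0])
    return (idx,) * arr.ndim

a = np.zeros((3, 3))
a[diag_indices_from(a)] = 1.0
```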

Cheers,

f


Re: [Numpy-discussion] is my numpy installation using custom blas/lapack?

2009-06-06 Thread Richard Llewellyn
I posted the setup.py build output to pastebin.com, though missed the
uninteresting stderr (forgot tcsh command to redirect both).
Also, used setup.py build --fcompiler=gnu95.


To be clear, I am not certain that my ATLAS libraries are not found. But
during the build starting at line 95 (pastebin.com) I see a compilation
failure, and then NO_ATLAS_INFO, 2.

I don't think I can use ldconfig without root, but have set LD_LIBRARY_PATH
to point to the scipy_build/lib until I put them somewhere else.

importing numpy works, though lapack_lite is also imported. I wonder if this
is normal even if my ATLAS was used.

Thanks,
Rich

On Sat, Jun 6, 2009 at 10:46 AM, Chris Colbert sccolb...@gmail.com wrote:

 and where exactly are you seeing atlas not found? during the build
 process, are when import numpy in python?

 if its the latter, you need to add a .conf file  in /etc/ld.so.conf.d/
  with the line /usr/local/rich/src/scipy_build/lib  and then run  sudo
 ldconfig

 Chris


 On Sat, Jun 6, 2009 at 1:42 PM, Chris Colbert <sccolb...@gmail.com> wrote:
  can you run this and post the build.log to pastebin.com:
 
  assuming your numpy build directory is /home/numpy-1.3.0:
 
  cd /home/numpy-1.3.0
  rm -rf build
  python setup.py build &> build.log
 
 
  Chris
 
 
  On Sat, Jun 6, 2009 at 1:37 PM, Richard Llewellyn <llew...@gmail.com> wrote:
  Hi Chris,
   thanks much for posting those installation instructions.  Seems similar
 to
  what I pieced together.
 
  I gather ATLAS not found.  Oops, drank that beer too early.
 
  I copied Atlas libs to /usr/local/rich/src/scipy_build/lib.
 
  This is my site.cfg.  Out of desperation I tried search_static_first =
 1,
  but probably of no use.
 
  [DEFAULT]
  library_dirs = /usr/local/rich/src/scipy_build/lib:$HOME/usr/galois/lib
  include_dirs =
  /usr/local/rich/src/scipy_build/lib/include:$HOME/usr/galois/include
  search_static_first = 1
 
  [blas_opt]
  libraries = f77blas, cblas, atlas
 
  [lapack_opt]
  libraries = lapack, f77blas, cblas, atlas
 
  [amd]
  amd_libs = amd
 
  [umfpack]
  umfpack_libs = umfpack, gfortran
 
  [fftw]
  libraries = fftw3
 
 
  Rich
 
 
 
 
  On Sat, Jun 6, 2009 at 10:25 AM, Chris Colbert sccolb...@gmail.com
 wrote:
 
  when you build numpy, did you use site.cfg to tell it where to find
  your atlas libs?
 
   On Sat, Jun 6, 2009 at 1:02 PM, Richard Llewellyn <llew...@gmail.com> wrote:
   Hello,
  
   I've managed a build of lapack and atlas on Fedora 10 on a quad core,
   64,
   and now (...) have a numpy I can import that runs tests ok. :]I
 am
   puzzled, however, that numpy builds and imports lapack_lite.  Does
 this
   mean
   I have a problem with the build(s)?
   Upon building numpy, I see the troubling output:
  
   
  
   C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe
   -Wall
   -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protecto
   r --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC
   -fPIC
  
   compile options: '-c'
   gcc: _configtest.c
   gcc -pthread _configtest.o -L/usr/local/rich/src/scipy_build/lib
   -llapack
   -lptf77blas -lptcblas -latlas -o _configtest
   /usr/bin/ld: _configtest: hidden symbol `__powidf2' in
   /usr/lib/gcc/x86_64-redhat-linux/4.3.2/libgcc.a(_powidf2.o) is
 reference
   d by DSO
   /usr/bin/ld: final link failed: Nonrepresentable section on output
   collect2: ld returned 1 exit status
   /usr/bin/ld: _configtest: hidden symbol `__powidf2' in
   /usr/lib/gcc/x86_64-redhat-linux/4.3.2/libgcc.a(_powidf2.o) is
 reference
   d by DSO
   /usr/bin/ld: final link failed: Nonrepresentable section on output
   collect2: ld returned 1 exit status
   failure.
   removing: _configtest.c _configtest.o
   Status: 255
   Output:
 FOUND:
   libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
   library_dirs = ['/usr/local/rich/src/scipy_build/lib']
   language = f77
   define_macros = [('NO_ATLAS_INFO', 2)]
  
   ##
  
   I don't have root on this machine, but could pester admins for
 eventual
   temporary access.
  
   Thanks much for any help,
   Rich
  


Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Olivier Verdier
I took that very seriously when you said that matrices were important to
you. Far be it from me to forbid numpy users from using matrices.
My point was the fact that newcomers are confused by the presence of both
matrices and arrays. I think that there should be only one
matrix/vector/tensor object in numpy. Therefore I would advocate the removal
of matrices from numpy.

*But* why not have matrices in a different component? Maybe as a part of
scipy? or somewhere else? You would be more than welcome to use them
anywhere. Note that I use components outside numpy for my teaching (scipy,
sympy, mayavi, nosetest) and I don't have any problems with that.

With my argument I endeavoured to explain the potential complications of
using matrices instead of arrays when teaching. Perhaps the strongest
argument against matrices is that you cannot use vectors. I've taught enough
matlab courses to realise the pain that this represents for students. But I
realise also that somebody else would have a different experience.

Of course x.T*y should be a 1x1 matrix, this is not an anomaly, but it is
confusing for students, because they expect a scalar. That is why I prefer
to teach with dot. Then the relation matrix/vector/scalar is crystal clear.
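The contrast in a minimal example: with arrays and dot the result of an inner product is a true scalar, while the matrix version yields a 1x1 matrix.

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])
s = np.dot(x, y)             # a true scalar: 11.0

xm = np.matrix(x).T          # 2x1 column matrix
ym = np.matrix(y).T
p = xm.T * ym                # a 1x1 matrix, not a scalar
```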

== Olivier

2009/6/5 Alan G Isaac ais...@american.edu

 On 6/5/2009 11:38 AM Olivier Verdier apparently wrote:
  I think matrices can be pretty tricky when used for
  teaching.  For instance, you have to explain that all the
  operators work component-wise, except the multiplication!
  Another caveat is that since matrices are always 2-d, the
  scalar product of two column vectors computed as  x.T
  * y will not be a scalar, but a 1x1 matrix. There is also
  the fact that you must cast all your vectors to column/row
  matrices (as in matlab). For all these reasons, I prefer
  to use arrays and dot for teaching, and I have never had
  any complaints.


 I do not understand this argument.
 You should take it very seriously when someone
 reports to you that the matrix object is a crucial to them,
 e.g., as a teaching tool.  Even if you do not find
 personally persuasive an example like
 http://mail.scipy.org/pipermail/numpy-discussion/2009-June/043001.html
 I have told you: this is important for my students.
 Reporting that your students do not complain about using
 arrays instead of matrices does not change this one bit.

 Student backgrounds differ by domain of application.  In
 economics, matrices are in *very* wide use, and
 multidimensional arrays get almost no use.  Textbooks in
 econometrics (a huge and important field, even outside of
 economics) are full of proofs using matrix algebra.
 A close match to what the students see is crucial.
 When working with multiplication or exponentiation,
 matrices do what they expect, and 2d arrays do not.

 One more point. As Python users we get used to installing
 a package here and a package there to add functionality.
 But this is not how most people looking for a matrix
 language see the world.  Removing the matrix object from
 NumPy will raise the barrier to adoption by social
 scientists, and there should be a strongly persuasive reason
 before taking such a step.

 Separately from all that, does anyone doubt that there is
 code that depends on the matrix object?  The core objection
 to a past proposal for useful change was that it could break
 extant code.  I would hope that nobody who took that
 position would subsequently propose removing the matrix
 object altogether.

 Cheers,
 Alan Isaac

 PS If x and y are column vectors (i.e., matrices), then
 x.T * y *should* be a 1×1 matrix.
 Since the * operator is doing matrix multiplication,
 this is the correct result, not an anomaly.




Re: [Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Gael Varoquaux
On Sat, Jun 06, 2009 at 11:30:37AM -0700, Fernando Perez wrote:
 On Sat, Jun 6, 2009 at 12:09 AM, Robert Kern <robert.k...@gmail.com> wrote:
 - Any interest in also having the stuff below?  I'm needing to build
 structured random arrays a lot (symmetric, anti-symmetric, symmetric
 with  a particular diagonal, etc), and these are coming in handy.  If
 you want them, I'll put the whole thing together (these use the
 indexing utilities from the previous suggestion).

I think they need examples. Right now, it is not clear at all to me what
they do.

Gaël


Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Fernando Perez
On Sat, Jun 6, 2009 at 11:03 AM, Charles R Harris <charlesr.har...@gmail.com> wrote:

 I don't think we can change the current matrix class, to do so would break
 too much code. It would be nice to extend it with an explicit inner product,
 but I can't think of any simple notation for it that python would parse.

Maybe it's time to make another push on python-dev for the pep-225
stuff for other operators?

https://cirl.berkeley.edu/fperez/static/numpy-pep225/

Last year I got pretty much zero interest from python-dev on this, but
they were very very busy with 3.0 on the horizon.  Perhaps once they
put 3.1 out would be a good time to champion this again.

It's slightly independent of the matrix class debate, but perhaps
having special operators for real matrix multiplication could ease
some of the bottlenecks of this discussion.

It would be great if someone could champion that discussion on
python-dev though, I don't see myself finding the time for it another
time around...

Cheers,

f


Re: [Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Keith Goodman
On Sat, Jun 6, 2009 at 11:30 AM, Fernando Perez fperez@gmail.com wrote:
 On Sat, Jun 6, 2009 at 12:09 AM, Robert Kern <robert.k...@gmail.com> wrote:
 diag_indices() can be made more efficient, but these are fine.

 Suggestion?  Right now it's not obvious to me...

I'm interested in a more efficient way too. Here's how I plan to adapt
the code for my personal use:

def fill_diag(arr, value):
    if arr.ndim != 2:
        raise ValueError, "Input must be 2-d."
    idx = range(arr.shape[0])
    arr[(idx,) * 2] = value
    return arr

>>> a = np.array([[1,2,3],[4,5,6],[7,8,9]])
>>> a = fill_diag(a, 0)
>>> a
array([[0, 2, 3],
       [4, 0, 6],
       [7, 8, 0]])
>>> a = fill_diag(a, np.array([1,2,3]))
>>> a
array([[1, 2, 3],
       [4, 2, 6],
       [7, 8, 3]])
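One way to make the diagonal fill cheaper for large arrays is to write along the flat view, where the main diagonal of a square array sits at a constant stride; this avoids building any index arrays (a sketch, equivalent to the index-tuple version for square inputs):

```python
import numpy as np

def fill_diag_flat(arr, value):
    # The main diagonal of an n x n array occupies positions
    # 0, n+1, 2(n+1), ... in the flat (C-order) view, so a strided
    # slice assignment replaces the fancy-indexing step entirely.
    if arr.ndim != 2 or arr.shape[0] != arr.shape[1]:
        raise ValueError("input must be a square 2-d array")
    arr.flat[::arr.shape[1] + 1] = value

a = np.arange(9).reshape(3, 3)
fill_diag_flat(a, 0)
```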


Re: [Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Keith Goodman
On Sat, Jun 6, 2009 at 11:46 AM, Keith Goodman kwgood...@gmail.com wrote:
 On Sat, Jun 6, 2009 at 11:30 AM, Fernando Perez fperez@gmail.com wrote:
 On Sat, Jun 6, 2009 at 12:09 AM, Robert Kern <robert.k...@gmail.com> wrote:
 diag_indices() can be made more efficient, but these are fine.

 Suggestion?  Right now it's not obvious to me...

 I'm interested in a more efficient way too. Here's how I plan to adapt
 the code for my personal use:

 def fill_diag(arr, value):
    if arr.ndim != 2:
         raise ValueError, "Input must be 2-d."
    idx = range(arr.shape[0])
    arr[(idx,) * 2] = value
    return arr

Maybe it is confusing to return the array since the operation is in place. So:

def fill_diag(arr, value):
    if arr.ndim != 2:
        raise ValueError, "Input must be 2-d."
    idx = range(arr.shape[0])
    arr[(idx,) * 2] = value


Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Alan G Isaac
On 6/6/2009 2:03 PM Charles R Harris apparently wrote:
 So is eye(3)*(v.T*v) valid? If (v.T*v) is 1x1 you have incompatible
 dimensions for the multiplication

Exactly.  So it is not valid.  As you point out, to make it valid
implies a loss of the associativity of matrix multiplication.
Not a good idea!

Cheers,
Alan



Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Charles R Harris
On Sat, Jun 6, 2009 at 12:34 PM, Olivier Verdier zelb...@gmail.com wrote:

 I took that very seriously when you said that matrices were important to
 you. Far be it from me to forbid numpy users from using matrices.
 My point was the fact that newcomers are confused by the presence of both
 matrices and arrays. I think that there should be only one
 matrix/vector/tensor object in numpy. Therefore I would advocate the removal
 of matrices from numpy.

 *But* why not have matrices in a different component? Maybe as a part of
 scipy? or somewhere else? You would be more than welcome to use them
 anywhere. Note that I use components outside numpy for my teaching (scipy,
 sympy, mayavi, nosetest) and I don't have any problems with that.

 With my argument I endeavoured to explain the potential complications of
 using matrices instead of arrays when teaching. Perhaps the strongest
 argument against matrices is that you cannot use vectors. I've taught enough
 matlab courses to realise the pain that this represents for students. But I
 realise also that somebody else would have a different experience.

 Of course x.T*y should be a 1x1 matrix, this is not an anomaly, but it is
 confusing for students, because they expect a scalar. That is why I prefer
 to teach with dot. Then the relation matrix/vector/scalar is crystal clear.


How about the common expression

exp((v.T*A*v)/2)

do you expect a matrix exponential here? Or should the students write

exp(<v, A*v>/2)

where <.,.> is the inner product?

Chuck


Re: [Numpy-discussion] performance matrix multiplication vs. matlab

2009-06-06 Thread Minjae Kim
Thanks for this excellent recipe.

I have not tried it out myself yet, but I will follow the instruction on
clean Ubuntu 9.04 64-bit.

Best,
Minjae

On Sat, Jun 6, 2009 at 11:59 AM, Chris Colbert sccolb...@gmail.com wrote:

 since there is demand, and someone already emailed me, I'll put what I
 did in this post. It pretty much follows whats on the scipy website,
 with a couple other things I gleaned from reading the ATLAS install
 guide:

 and here it goes, this is valid for Ubuntu 9.04 64-bit  (# starts a
 comment when working in the terminal)


 download lapack 3.2.1 http://www.netlib.org/lapack/lapack.tgz
 download atlas 3.8.3

 http://sourceforge.net/project/downloading.php?group_id=23725&filename=atlas3.8.3.tar.bz2&a=65663372

 create folder  /home/your-user-name/build/atlas   #this is where we build
 create folder /home/your-user-name/build/lapack #atlas and lapack

 extract the folder lapack-3.2.1 to /home/your-user-name/build/lapack
 extract the contents of atlas to /home/your-user-name/build/atlas



 now in the terminal:

 # remove g77 and get stuff we need
 sudo apt-get remove g77
 sudo apt-get install gfortran
 sudo apt-get install build-essential
 sudo apt-get install python-dev
 sudo apt-get install python-setuptools
 sudo easy_install nose


 # build lapack
 cd /home/your-user-name/build/lapack/lapack-3.2.1
 cp INSTALL/make.inc.gfortran make.inc

 gedit make.inc
 #
 #in the make.inc file make sure the line   OPTS = -O2 -fPIC -m64
 #andNOOPTS = -O0 -fPIC -m64
 #the -m64 flags build 64-bit code, if you want 32-bit, simply leave
 #the -m64 flags out
 #

 cd SRC

 #this should build lapack without error
 make



 # build atlas

 cd /home/your-user-name/build/atlas

 #this is simply where we will build the atlas
 #libs, you can name it what you want
 mkdir Linux_X64SSE2

 cd Linux_X64SSE2

 #need to turn off cpu-throttling
 sudo cpufreq-selector -g performance

 #if you don't want 64bit code remove the -b 64 flag. replace the
 #number 2400 with your CPU frequency in MHZ
 #i.e. my cpu is 2.53 GHZ so i put 2530
 ../configure -b 64 -D c -DPentiumCPS=2400 -Fa alg -fPIC

 --with-netlib-lapack=/home/your-user-name/build/lapack/lapack-3.2.1/Lapack_LINUX.a

 #the configure step takes a bit, and should end without errors

  #this takes a long time, go get some coffee, it should end without error
 make build

 #this will verify the build, also long running
 make check

 #this will test the performance of your build and give you feedback on
 #it. your numbers should be close to the test numbers at the end
 make time

 cd lib

 #builds single threaded .so's
 make shared

 #builds multithreaded .so's
 make ptshared

 #copies all of the atlas libs (and the lapack lib built with atlas)
 #to our lib dir
 sudo  cp  *.so  /usr/local/lib/



 #now we need to get and build numpy

 download numpy 1.3.0

 http://sourceforge.net/project/downloading.php?group_id=1369&filename=numpy-1.3.0.tar.gz&a=93506515

 extract the folder numpy-1.3.0 to /home/your-user-name/build

 #in the terminal

 cd /home/your-user-name/build/numpy-1.3.0
 cp site.cfg.example site.cfg

 gedit site.cfg
 ###
 # in site.cfg uncomment the following lines and make them look like these
 [DEFAULT]
 library_dirs = /usr/local/lib
 include_dirs = /usr/local/include

 [blas_opt]
 libraries = ptf77blas, ptcblas, atlas

 [lapack_opt]
 libraries = lapack, ptf77blas, ptcblas, atlas
 ###
 #if you want single threaded libs, uncomment those lines instead


 #build numpy- should end without error
 python setup.py build

 #install numpy
 python setup.py install

 cd /home

 sudo ldconfig

 python
 >>> import numpy
 >>> numpy.test()   #this should run with no errors (skipped tests and
 known-fails are ok)
 >>> a = numpy.random.randn(6000, 6000)
 >>> numpy.dot(a, a) # look at your cpu monitor and verify all cpu cores
 are at 100% if you built with threads


 Celebrate with a beer!


 Cheers!

 Chris





 On Sat, Jun 6, 2009 at 10:42 AM, Keith Goodman <kwgood...@gmail.com> wrote:
  On Fri, Jun 5, 2009 at 2:37 PM, Chris Colbert sccolb...@gmail.com
 wrote:
  I'll caution anyone from using Atlas from the repos in Ubuntu 9.04  as
 the
  package is broken:
 
  https://bugs.launchpad.net/ubuntu/+source/atlas/+bug/363510
 
 
  just build Atlas yourself, you get better performance AND threading.
  Building it is not the nightmare it sounds like. I think i've done it a
  total of four times now, both 32-bit and 64-bit builds.
 
  If you need help with it,  just email me off list.
 
  That's a nice offer. I tried building ATLAS on Debian a year or two
  ago and got stuck.
 
  Clear out your inbox!

Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Alan G Isaac
On 6/6/2009 2:58 PM Charles R Harris apparently wrote:
 How about the common expression
 exp((v.T*A*v)/2)
 do you expect a matrix exponential here?


I take your point that there are conveniences
to treating a 1 by 1 matrix as a scalar.
Most matrix programming languages do this, I think.
For sure GAUSS does.  The result of   x' * A * x
is a matrix (it has one row and one column) but
it functions like a scalar (and even more,
since right multiplication by it is also allowed).

While I think this is wrong, especially in a
language that readily distinguishes scalars
and matrices, I recognize that many others have
found the behavior useful.  And I confess that
when I talk about quadratic forms, I do treat
x.T * A * x as if it were scalar.

But just to be clear, how are you proposing to
implement that behavior, if you are?

Alan



Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Robert Kern
On Sat, Jun 6, 2009 at 14:59, Alan G Isaac ais...@american.edu wrote:
 On 6/6/2009 2:58 PM Charles R Harris apparently wrote:
 How about the common expression
 exp((v.T*A*v)/2)
 do you expect a matrix exponential here?


 I take your point that there are conveniences
 to treating a 1 by 1 matrix as a scalar.
 Most matrix programming languages do this, I think.
 For sure GAUSS does.  The result of   x' * A * x
 is a matrix (it has one row and one column) but
 it functions like a scalar (and even more,
 since right multiplication by it is also allowed).

 While I think this is wrong, especially in a
 language that readily distinguishes scalars
 and matrices, I recognize that many others have
 found the behavior useful.  And I confess that
 when I talk about quadratic forms, I do treat
 x.T * A * x as if it were scalar.

The old idea of introducing RowVector and ColumnVector would help
here. If x were a ColumnVector and A a Matrix, then you can introduce
the following rules:

x.T is a RowVector
RowVector * ColumnVector is a scalar
RowVector * Matrix is a RowVector
Matrix * ColumnVector is a ColumnVector
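A toy sketch of those rules with plain wrapper classes (names and API purely illustrative, not a proposal for numpy internals):

```python
import numpy as np

class RowVector(object):
    def __init__(self, data):
        self.a = np.asarray(data, dtype=float)
    def __mul__(self, other):
        if isinstance(other, ColumnVector):
            return float(np.dot(self.a, other.a))            # -> scalar
        return RowVector(np.dot(self.a, np.asarray(other)))  # row * matrix -> row

class ColumnVector(object):
    def __init__(self, data):
        self.a = np.asarray(data, dtype=float)
    @property
    def T(self):
        return RowVector(self.a)   # transposing a column gives a row

A = np.array([[2.0, 0.0], [0.0, 3.0]])
x = ColumnVector([1.0, 2.0])

s = x.T * x       # RowVector * ColumnVector -> plain Python float: 5.0
row = x.T * A     # RowVector * Matrix -> RowVector
```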

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Bruce Southey
On Sat, Jun 6, 2009 at 2:01 AM, Fernando Perez <fperez@gmail.com> wrote:
[snip]
 

 def mask_indices(n,mask_func,k=0):
     """Return the indices for an array, given a masking function like
     tri{u,l}."""
     m = np.ones((n,n),int)
     a = mask_func(m,k)
     return np.where(a != 0)


 def diag_indices(n,ndim=2):
     """Return the indices to index into a diagonal.

     Examples
     --------
     >>> a = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
     >>> a
     array([[ 1,  2,  3,  4],
            [ 5,  6,  7,  8],
            [ 9, 10, 11, 12],
            [13, 14, 15, 16]])
     >>> di = diag_indices(4)
     >>> a[di] = 100
     >>> a
     array([[100,   2,   3,   4],
            [  5, 100,   7,   8],
            [  9,  10, 100,  12],
            [ 13,  14,  15, 100]])
     """
     idx = np.arange(n)
     return (idx,)*ndim



While not trying to be negative, this raises important questions that
need to be covered because the user should not have to do trial and
error to find what actually works and what that does not. While
certain features can be fixed within Numpy, API changes should be
avoided.

Please explain the argument of 'n'?
Since you seem to be fixing it to the length of the main diagonal then
it is redundant. Otherwise why the first 'n' diagonal elements and not
the last 'n' diagonal elements. If it meant to be allow different
diagonal elements then it would need adjustment to indicate start and
stopping location.

What happens when the shape of 'a' is different from 'n'? I would
think that this means that diag_indices should be an array method or
require passing a (or shape of a to diag_indices).

What happens if the array is not square? If a is 4 by 2 then passing
n=4 will be wrong.

What about offdiagonals? That is should be clear that you are
referring to the main diagonal.

How does this address non-contiguous memory,  Fortran ordered arrays
or arrays with more than 2 dimensions?

How does this handle record and masked arrays as well as the matrix
subclass that are supported by Numpy? Presumably it does not so if it
is not an array method, then the type of input would need to be
checked.

There are probably similar issues to the other functions you propose.

Bruce


Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Charles R Harris
On Sat, Jun 6, 2009 at 1:59 PM, Alan G Isaac ais...@american.edu wrote:

 On 6/6/2009 2:58 PM Charles R Harris apparently wrote:
  How about the common expression
  exp((v.T*A*v)/2)
  do you expect a matrix exponential here?


 I take your point that there are conveniences
 to treating a 1 by 1 matrix as a scalar.
 Most matrix programming languages do this, I think.
 For sure GAUSS does.  The result of   x' * A * x
 is a matrix (it has one row and one column) but
 it functions like a scalar (and even more,
 since right multiplication by it is also allowed).


It's actually an inner product and the notation <x, A*x> would be
technically correct. More generally, it is a bilinear function of two
vectors. But the correct notation is a bit cumbersome for a student
struggling with plain old matrices ;)

Ndarrays are actually closer to the tensor ideal in that M*v would be a
contraction removing two indices from a three index tensor product. The
dot function, aka *, then functions as a contraction. In this case x.T*A*x
works just fine because A*x is 1D and x.T=x, so the final result is a scalar
(0D array). So making vectors 1D arrays would solve some problems. There
remains the construction v*v.T, which should really be treated as a tensor
product, or bivector.
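With 1-d arrays the quadratic form does contract all the way down to a scalar, and v v.T becomes an explicit outer product (a minimal illustration of the point above):

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 3.0]])
x = np.array([1.0, 2.0])

q = np.dot(x, np.dot(A, x))   # x.T A x: contracts to a scalar (0-d)
vvT = np.outer(x, x)          # the rank-1 tensor-product piece, spelled out
```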

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Changes to arraysetops

2009-06-06 Thread Neil Crighton
Thanks for the summary!  I'm +1 on points 1, 2 and 3.

+0 for points 4 and 5 (assume_unique keyword and renaming arraysetops).

Neil

PS. I think you mean deprecate, not depreciate :)



Re: [Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Robert Kern
On Sat, Jun 6, 2009 at 15:31, Bruce Southey bsout...@gmail.com wrote:

 While not trying to be negative, this raises important questions that
 need to be covered because the user should not have to do trial and
 error to find what actually works and what does not. While
 certain features can be fixed within Numpy, API changes should be
 avoided.

He's proposing additional functions, not changes to existing functions.

 How does this address non-contiguous memory,  Fortran ordered arrays

They just work. These functions create index arrays to use with fancy
indexing which works on all of these because of numpy's abstractions.
Please read the code. This is obvious.

 or arrays with more than 2 dimensions?

diag_indices(n, ndim=3), etc.

 How does this handle record and masked arrays as well as the matrix
 subclass that are supported by Numpy?

Again, these functions work via fancy indexing. We don't need to
repeat all of the documentation on fancy indexing in each of these
functions.
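(A quick check of that claim — sketch only, with the proposed helper rebuilt inline:)

```python
import numpy as np

def tril_indices(n, k=0):
    # same construction as the proposed helper: build a mask, then np.where
    return np.where(np.tril(np.ones((n, n), int), k) != 0)

idx = tril_indices(3)
c = np.zeros((3, 3))                     # C-ordered
f = np.asfortranarray(np.zeros((3, 3)))  # Fortran-ordered
c[idx] = 1
f[idx] = 1
print(np.array_equal(c, f))  # True: fancy indexing is layout-agnostic
```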

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Charles R Harris
On Sat, Jun 6, 2009 at 2:30 PM, Robert Kern robert.k...@gmail.com wrote:

 On Sat, Jun 6, 2009 at 14:59, Alan G Isaac ais...@american.edu wrote:
  On 6/6/2009 2:58 PM Charles R Harris apparently wrote:
  How about the common expression
  exp((v.t*A*v)/2)
  do you expect a matrix exponential here?
 
 
  I take your point that there are conveniences
  to treating a 1 by 1 matrix as a scalar.
  Most matrix programming languages do this, I think.
  For sure GAUSS does.  The result of   x' * A * x
  is a matrix (it has one row and one column) but
  it functions like a scalar (and even more,
  since right multiplication by it is also allowed).
 
  While I think this is wrong, especially in a
  language that readily distinguishes scalars
  and matrices, I recognize that many others have
  found the behavior useful.  And I confess that
  when I talk about quadratic forms, I do treat
  x.T * A * x as if it were scalar.

 The old idea of introducing RowVector and ColumnVector would help
 here. If x were a ColumnVector and A a Matrix, then you can introduce
 the following rules:

 x.T is a RowVector
 RowVector * ColumnVector is a scalar
 RowVector * Matrix is a RowVector
 Matrix * ColumnVector is a ColumnVector


Yes, that is another good solution. In tensor notation, RowVectors have
signature r_i, ColumnVectors c^i, and matrices M^i_j. The '*' operator is
then a contraction on adjacent indices, a result with no indices is a
scalar, and the only problem that remains is the tensor product usually
achieved by x*y.T. But making the exception that col * row is the tensor
product producing a matrix would solve that and still raise an error for
such things as col*row*row. Or we could simply require something like
bivector(x,y)
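(With plain 1-D arrays this distinction is already explicit — a sketch, values illustrative:)

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

s = np.dot(x, y)    # row * col: contraction -> scalar, 11.0
B = np.outer(x, y)  # col * row: the tensor product, asked for explicitly
print(s, B.shape)   # 11.0 (2, 2)
```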

Chuck


Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Alan G Isaac
On 6/6/2009 4:30 PM Robert Kern apparently wrote:
 The old idea of introducing RowVector and ColumnVector would help
 here. If x were a ColumnVector and A a Matrix, then you can introduce
 the following rules:
 
 x.T is a RowVector
 RowVector * ColumnVector is a scalar
 RowVector * Matrix is a RowVector
 Matrix * ColumnVector is a ColumnVector


To me, a row vector is just a matrix with a single row,
and a column vector is just a matrix with a single column.
Calling them vectors is rather redundant, since matrices
are also vectors (i.e., belong to a vector space).

I think the core of the row-vector/column-vector proposal
is really the idea that we could have 1d objects that
also have an orientation for the purposes of certain
operations. But then why not just use matrices, which
automatically provide that orientation?

Here are the 3 reasons I see:
- to allow iteration over matrices to produce a
  less surprising result (*if* you find it surprising
  that a matrix is a container of matrices, as I do)
- to allow 1d indexing of these vectors
- to introduce a scalar product

I rather doubt (?) that these justify the added complexity
of an additional array subclass.

Alan



Re: [Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Robert Kern
On Sat, Jun 6, 2009 at 13:30, Fernando Perez fperez@gmail.com wrote:
 On Sat, Jun 6, 2009 at 12:09 AM, Robert Kernrobert.k...@gmail.com wrote:

 +1

 OK, thanks.  I'll try to get it ready.

 diag_indices() can be made more efficient, but these are fine.

 Suggestion?  Right now it's not obvious to me...

Oops! Never mind. I thought it was using mask_indices like the others.
 There is a neat trick for accessing the diagonal of an existing array
(a.flat[::a.shape[1]+1]), but it won't work to implement
diag_indices().
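(The trick in question, sketched:)

```python
import numpy as np

a = np.arange(16).reshape(4, 4)
# for a C-contiguous (n, m) array, a stride of m+1 through .flat
# walks the main diagonal
a.flat[::a.shape[1] + 1] = -1
print(a.diagonal())  # [-1 -1 -1 -1]
```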

 A few more questions:

 - Are doctests considered enough testing for numpy, or are separate
 tests also required?

I don't think we have the testing machinery hooked up to test the
docstrings on the functions themselves (we made the decision to keep
examples as clean and pedagogical as possible rather than complete
tests). You can use doctests in the test directories, though.

 - Where should these go?

numpy/lib/twodim_base.py to go with their existing counterparts, I would think.

 - Any interest in also having the stuff below?  I'm needing to build
 structured random arrays a lot (symmetric, anti-symmetric, symmetric
 with  a particular diagonal, etc), and these are coming in handy.  If
 you want them, I'll put the whole thing together (these use the
 indexing utilities from the previous suggestion).

I wouldn't mind having a little gallery of matrix generators in numpy,
but someone else has already made a much more complete collection:

  http://pypi.python.org/pypi/rogues

-- 
Robert Kern



Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Robert Kern
On Sat, Jun 6, 2009 at 16:00, Alan G Isaac ais...@american.edu wrote:
 On 6/6/2009 4:30 PM Robert Kern apparently wrote:
 The old idea of introducing RowVector and ColumnVector would help
 here. If x were a ColumnVector and A a Matrix, then you can introduce
 the following rules:

 x.T is a RowVector
 RowVector * ColumnVector is a scalar
 RowVector * Matrix is a RowVector
 Matrix * ColumnVector is a ColumnVector


 To me, a row vector is just a matrix with a single row,
 and a column vector is just a matrix with a single column.
 Calling them vectors is rather redundant, since matrices
 are also vectors (i.e., belong to a vector space).

 I think the core of the row-vector/column-vector proposal
 is really the idea that we could have 1d objects that
 also have an orientation for the purposes of certain
 operations. But then why not just use matrices, which
 automatically provide that orientation?

Because (x.T * x) where x is an (n,1) matrix and * is matrix
multiplication (i.e. MM(n,1) -> MM(1,1)) is not the same thing as the
inner product of a vector (RR^n -> RR). Please see the post I was
responding to for the motivation.
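(A toy sketch of those dispatch rules — names and design are purely illustrative, not a proposed numpy API:)

```python
import numpy as np

class ColumnVector:
    def __init__(self, data):
        self.a = np.asarray(data, dtype=float)
    @property
    def T(self):
        return RowVector(self.a)

class RowVector:
    def __init__(self, data):
        self.a = np.asarray(data, dtype=float)
    def __mul__(self, other):
        if isinstance(other, ColumnVector):
            # RowVector * ColumnVector -> true scalar, not a (1, 1) matrix
            return float(np.dot(self.a, other.a))
        # RowVector * Matrix -> RowVector
        return RowVector(np.dot(self.a, other))

x = ColumnVector([1.0, 2.0])
A = np.array([[1.0, 0.0], [0.0, 2.0]])
print(x.T * x)        # 5.0
print((x.T * A) * x)  # 9.0
```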

-- 
Robert Kern



Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Alan G Isaac
 On Sat, Jun 6, 2009 at 1:59 PM, Alan G Isaac ais...@american.edu
 For sure GAUSS does.  The result of   x' * A * x
 is a matrix (it has one row and one column) but
 it functions like a scalar (and even more,
 since right multiplication by it is also allowed).


On 6/6/2009 4:32 PM Charles R Harris apparently wrote:
 It's actually an inner product


Sorry for the confusion: I was just reporting how GAUSS
treats the expression x' * A * x.

Alan




Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Charles R Harris
On Sat, Jun 6, 2009 at 3:00 PM, Alan G Isaac ais...@american.edu wrote:

 On 6/6/2009 4:30 PM Robert Kern apparently wrote:
  The old idea of introducing RowVector and ColumnVector would help
  here. If x were a ColumnVector and A a Matrix, then you can introduce
  the following rules:
 
  x.T is a RowVector
  RowVector * ColumnVector is a scalar
  RowVector * Matrix is a RowVector
  Matrix * ColumnVector is a ColumnVector


 To me, a row vector is just a matrix with a single row,
 and a column vector is just a matrix with a single column.
 Calling them vectors is rather redundant, since matrices
 are also vectors (i.e., belong to a vector space).


Well, yes, linear mappings between vector spaces are also vector spaces, but
it is useful to make the distinction. Likewise, L(x,L(y,z)) is a multilinear
map that factors through the tensor product of x,y,z. So on and so forth. At
some point all these constructions are useful. But I think it is pernicious
for a first course in matrix algebra to not distinguish between matrices and
vectors. The abstraction to general linear spaces can come later.



 I think the core of the row-vector/column-vector proposal
 is really the idea that we could have 1d objects that
 also have an orientation for the purposes of certain
 operations. But then why not just use matrices, which
 automatically provide that orientation?


Because at some point you want scalars. In matrix algebra matrices are
generally considered maps between vector spaces. Covariance matrices don't
fit that paradigm, but that is skimmed over. It's kind of a mess, actually.

Chuck


Re: [Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Keith Goodman
On Sat, Jun 6, 2009 at 2:01 PM, Robert Kern robert.k...@gmail.com wrote:
  There is a neat trick for accessing the diagonal of an existing array
 (a.flat[::a.shape[1]+1]), but it won't work to implement
 diag_indices().

Perfect. That's 3x faster.

def fill_diag(arr, value):
    if arr.ndim != 2:
        raise ValueError, "Input must be 2-d."
    if arr.shape[0] != arr.shape[1]:
        raise ValueError, 'Input must be square.'
    arr.flat[::arr.shape[1]+1] = value


Re: [Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Alan G Isaac
On 6/6/2009 6:02 PM Keith Goodman apparently wrote:
 def fill_diag(arr, value):
     if arr.ndim != 2:
         raise ValueError, "Input must be 2-d."
     if arr.shape[0] != arr.shape[1]:
         raise ValueError, 'Input must be square.'
     arr.flat[::arr.shape[1]+1] = value


You might want to check for contiguity.
See diagrv in pyGAUSS.py:
http://code.google.com/p/econpy/source/browse/trunk/pytrix/pyGAUSS.py

Alan Isaac



[Numpy-discussion] svn numpy not building on osx 10.5.6, python.org python 2.5.2

2009-06-06 Thread George Nurser
Hi,
the current svn version 7039 isn't compiling for me.
Clean checkout, old numpy directories removed from site-packages..
Same command did work for svn r 6329

[george-nursers-macbook-pro-15:~/src/numpy] agn% python setup.py
config_fc --fcompiler=gnu95 build_clib --fcompiler=gnu95 build_ext
--fcompiler=gnu95 install
Running from numpy source directory.
F2PY Version 2_7039
numpy/core/setup_common.py:81: MismatchCAPIWarning: API mismatch
detected, the C API version numbers have to be updated. Current C api
version is 3, with checksum c80bc716a6f035470a6f3f448406d9d5, but
recorded checksum for C API version 3 in codegen_dir/cversions.txt is
bf22c0d05b31625d2a7015988d61ce5a. If functions were added in the C
API, you have to update C_API_VERSION  in numpy/core/setup_common.pyc.
 MismatchCAPIWarning)
blas_opt_info:
 FOUND:
   extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
   define_macros = [('NO_ATLAS_INFO', 3)]
   extra_compile_args = ['-msse3',
'-I/System/Library/Frameworks/vecLib.framework/Headers']

lapack_opt_info:
 FOUND:
   extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
   define_macros = [('NO_ATLAS_INFO', 3)]
   extra_compile_args = ['-msse3']

running config_fc
unifing config_fc, config, build_clib, build_ext, build commands
--fcompiler options
running build_clib
customize UnixCCompiler
customize UnixCCompiler using build_clib
building 'npymath' library
compiling C sources
C compiler: gcc -arch ppc -arch i386 -isysroot
/Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double
-no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3

error: unknown file type '.src' (from 'numpy/core/src/npy_math.c.src')

--George.


Re: [Numpy-discussion] Functions for indexing into certain parts of an array (2d)

2009-06-06 Thread Alan G Isaac
On 6/6/2009 6:39 PM Robert Kern apparently wrote:
 Ah, that's the beauty of .flat; it takes care of that for you. .flat
 is not a view onto the memory directly. It is a not-quite-a-view onto
 what the memory *would* be if the array were contiguous and the memory
 directly reflected the layout as seen by Python.
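(A small sketch of that point — .flat on a non-contiguous view still behaves as if the data were laid out contiguously, and writes go through:)

```python
import numpy as np

a = np.arange(16).reshape(4, 4)
b = a[:, :2]      # non-contiguous view: every row, first two columns
b.flat[::3] = -1  # flat positions 0, 3, 6 -> (0,0), (1,1), (3,0)
print(b[0, 0], b[1, 1], b[3, 0])  # -1 -1 -1
print(a[0, 0])                    # -1: the underlying memory was written
```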


Aha.  Thanks!
Alan




[Numpy-discussion] NameError: name 'numeric' is not defined

2009-06-06 Thread Robin
Hi,

I just updated to latest numpy svn:
In [10]: numpy.__version__
Out[10]: '1.4.0.dev7039'

It seemed to build fine, but I am getting a lot of errors testing it:
--
Ran 178 tests in 0.655s

FAILED (errors=138)
Out[8]: <nose.result.TextTestResult run=178 errors=138 failures=0>

Almost all the errors look the same:
==
ERROR: test_shape (test_ctypeslib.TestNdpointer)
--
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/tests/test_ctypeslib.py", line 83, in test_shape
    self.assert_(p.from_param(np.array([[1,2]])))
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/ctypeslib.py", line 171, in from_param
    return obj.ctypes
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/__init__.py", line 27, in <module>
    __all__ += numeric.__all__
NameError: name 'numeric' is not defined

I haven't seen this before - is it something wrong with my build or
the current svn state? I am using macports python 2.5.4 on os x 10.5.7

Cheers

Robin


Re: [Numpy-discussion] NameError: name 'numeric' is not defined

2009-06-06 Thread Robin
On Sun, Jun 7, 2009 at 12:53 AM, Robin robi...@gmail.com wrote:
 I haven't seen this before - is it something wrong with my build or
 the current svn state? I am using macports python 2.5.4 on os x 10.5.7

Hmmm... after rebuilding from the same version the problem seems to
have gone away... sorry for the noise...

Robin


Re: [Numpy-discussion] matrix default to column vector?

2009-06-06 Thread Tom K.


Fernando Perez wrote:
 
 On Sat, Jun 6, 2009 at 11:03 AM, Charles R
  Harris charlesr.har...@gmail.com wrote:
 
 I don't think we can change the current matrix class, to do so would
 break
 too much code. It would be nice to extend it with an explicit inner
 product,
 but I can't think of any simple notation for it that python would parse.
 
 Maybe it's time to make another push on python-dev for the pep-225
 stuff for other operators?
 
 https://cirl.berkeley.edu/fperez/static/numpy-pep225/
 
 Last year I got pretty much zero interest from python-dev on this, but
 they were very very busy with 3.0 on the horizon.  Perhaps once they
 put 3.1 out would be a good time to champion this again.
 
 It's slightly independent of the matrix class debate, but perhaps
 having special operators for real matrix multiplication could ease
 some of the bottlenecks of this discussion.
 
 It would be great if someone could champion that discussion on
 python-dev though, I don't see myself finding the time for it another
 time around...
 

How about pep 211?
http://www.python.org/dev/peps/pep-0211/

PEP 211 proposes a single new operator (@) that could be used for matrix
multiplication.
MATLAB has elementwise versions of multiply, exponentiation, and left and
right division using a preceding . for the usual matrix versions (* ^ \
/).
PEP 225 proposes tilde versions of + - * / % **.

While PEP 225 would allow a matrix exponentiation and right divide, I think
these things are much less  common than matrix multiply.  Plus, I think
following through with the PEP 225 implementation would create a
frankenstein of a language that would be hard to read.

So, I would argue for pushing for a single new operator that can then be
used to implement dot with a binary infix operator.  We can resurrect PEP
211 or start a new PEP or whatever, the main thing is to have a proposal
that makes sense.  Actually, what do you all think of this:
  @ -- matrix multiply
  @@ -- matrix exponentiation
and we leave it at that - let's not get too greedy and try for matrix
inverse via @/ or something.  

For the nd array operator, I would propose taking the last dimension of the
left array and collapsing it with the first dimension of the right array,
so 
  shape (a0, ..., aL-1, k) @ (k, b0, ..., bM-1) -> (a0, ..., aL-1, b0, ...,
bM-1)
Does that make sense?
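(The collapsing rule described is a single-axis contraction, which can already be sketched with tensordot — shapes illustrative:)

```python
import numpy as np

a = np.ones((2, 3, 4))  # shape (a0, a1, k), with k = 4
b = np.ones((4, 5))     # shape (k, b0)

# contract the last axis of a with the first axis of b
c = np.tensordot(a, b, axes=1)
print(c.shape)  # (2, 3, 5)
```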

With this proposal, matrices go away and all our lives are sane again. :-) 
Long live the numpy ndarray!  Thanks to the creators for all your hard work
BTW - I love this stuff!

  - Tom K.



Re: [Numpy-discussion] scipy 0.7.1rc2 released

2009-06-06 Thread Adam Mercer
On Fri, Jun 5, 2009 at 06:09, David Cournapeau courn...@gmail.com wrote:

 Please test it ! I am particularly interested in results for scipy
 binaries on mac os x (do they work on ppc).

Test suite passes on Intel Mac OS X (10.5.7) built from source:

OK (KNOWNFAIL=6, SKIP=21)
<nose.result.TextTestResult run=3486 errors=0 failures=0>

Cheers

Adam


[Numpy-discussion] reciprocal(0)

2009-06-06 Thread Ralf Gommers
Hi,

I expect `reciprocal(x)` to calculate 1/x, and for input 0 to either follow
the python rules or give the np.divide(1, 0) result. However the result
returned (with numpy trunk) is:

>>> np.reciprocal(0)
-2147483648

>>> np.divide(1, 0)
0
>>> 1/0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero

The result for a zero float argument is inf as expected. I want to document
the correct behavior for integers, what should it be?
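(For reference, the float behavior I mean — a quick sketch; with the default error state the zero case warns rather than raises:)

```python
import numpy as np

print(np.reciprocal(2.0))    # 0.5
old = np.seterr(divide='ignore')  # silence the divide-by-zero warning
print(np.reciprocal(0.0))    # inf
np.seterr(**old)
```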

Cheers,
Ralf


Re: [Numpy-discussion] is my numpy installation using custom blas/lapack?

2009-06-06 Thread llewelr

Hi,

On Jun 6, 2009 3:11pm, Chris Colbert sccolb...@gmail.com wrote:

it definitely found your threaded atlas libraries. How do you know
it's numpy is using lapack_lite?


I don't, actually. But it is importing it. With python -v, this is the  
error I get if I don't set LD_LIBRARY_PATH to my scipy_build directory



import numpy.linalg.linalg # precompiled from /data10/users/rich/usr/galois/lib64/python/numpy/linalg/linalg.pyc
dlopen("/data10/users/rich/usr/galois/lib64/python/numpy/linalg/lapack_lite.so", 2);

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/data10/users/rich/usr/galois//lib64/python/numpy/__init__.py", line 130, in <module>
    import add_newdocs
  File "/data10/users/rich/usr/galois//lib64/python/numpy/add_newdocs.py", line 9, in <module>
    from lib import add_newdoc
  File "/data10/users/rich/usr/galois//lib64/python/numpy/lib/__init__.py", line 13, in <module>
    from polynomial import *
  File "/data10/users/rich/usr/galois//lib64/python/numpy/lib/polynomial.py", line 18, in <module>
    from numpy.linalg import eigvals, lstsq
  File "/data10/users/rich/usr/galois//lib64/python/numpy/linalg/__init__.py", line 47, in <module>
    from linalg import *
  File "/data10/users/rich/usr/galois//lib64/python/numpy/linalg/linalg.py", line 22, in <module>
    from numpy.linalg import lapack_lite
ImportError: liblapack.so: cannot open shared object file: No such file or directory




Here blas_opt_info seems to be missing ATLAS version.


numpy.show_config()

atlas_threads_info:
    libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/local/rich/src/scipy_build/lib']
    language = f77

blas_opt_info:
    libraries = ['lapack', 'f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/local/rich/src/scipy_build/lib']
    define_macros = [('NO_ATLAS_INFO', 2)]
    language = c

atlas_blas_threads_info:
    libraries = ['lapack', 'f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/local/rich/src/scipy_build/lib']
    language = c

lapack_opt_info:
    libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/local/rich/src/scipy_build/lib']
    define_macros = [('NO_ATLAS_INFO', 2)]
    language = f77

lapack_mkl_info:
    NOT AVAILABLE

blas_mkl_info:
    NOT AVAILABLE

mkl_info:
    NOT AVAILABLE






when I do:

python
import numpy
numpy.show_config()

atlas_threads_info:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/lib']
    language = f77

blas_opt_info:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('ATLAS_INFO', '\\3.8.3\\')]
    language = c

atlas_blas_threads_info:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/lib']
    language = c

lapack_opt_info:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('NO_ATLAS_INFO', 2)]
    language = f77

lapack_mkl_info:
    NOT AVAILABLE

blas_mkl_info:
    NOT AVAILABLE

mkl_info:
    NOT AVAILABLE


also try:

>>> a = numpy.random.randn(6000, 6000)
>>> numpy.dot(a,a)

and make sure all your cpu cores peg at 100%

Unfortunately only one cpu. What does that mean? Threaded libraries not  
used?


from top:

Cpu0 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.0%us, 0.2%sy, 0.0%ni, 99.4%id, 0.0%wa, 0.2%hi, 0.2%si, 0.0%st
Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st

Thanks much for the help.

Rich




On Sat, Jun 6, 2009 at 3:35 PM, llew...@gmail.com wrote:

 Oops. Thanks, that makes more sense:

 http://pastebin.com/m7067709b

 On Jun 6, 2009 12:15pm, Chris Colbert sccolb...@gmail.com wrote:

 i need the full link to pastebin.com in order to view your post.

 It will be something like http://pastebin.com/m6b09f05c

 chris

 On Sat, Jun 6, 2009 at 2:32 PM, Richard llewellyn llew...@gmail.com wrote:

  I posted the setup.py build output to pastebin.com, though missed the
  uninteresting stderr (forgot tcsh command to redirect both).
  Also, used setup.py build --fcompiler=gnu95.

  To be clear, I am not certain that my ATLAS libraries are not found. But
  during the build starting at line 95 (pastebin.com) I see a compilation
  failure, and then NO_ATLAS_INFO, 2.

  I don't think I can use ldconfig without root, but have set LD_LIBRARY_PATH
  to point to the scipy_build/lib until I put them somewhere else.

  importing numpy works, though lapack_lite is also imported. I wonder if this
  is normal even if my ATLAS was used.

  Thanks,
  Rich

  On Sat, Jun 6, 2009 at 10:46 AM, Chris Colbert sccolb...@gmail.com
  wrote:

   and where exactly are you seeing 


Re: [Numpy-discussion] reciprocal(0)

2009-06-06 Thread josef.pktd
On Sat, Jun 6, 2009 at 11:49 PM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:
 Hi,

 I expect `reciprocal(x)` to calculate 1/x, and for input 0 to either follow
 the python rules or give the np.divide(1, 0) result. However the result
 returned (with numpy trunk) is:

 np.reciprocal(0)
 -2147483648

 np.divide(1, 0)
 0
 1/0
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 ZeroDivisionError: integer division or modulo by zero

 The result for a zero float argument is inf as expected. I want to document
 the correct behavior for integers, what should it be?

 Cheers,
 Ralf

Add a warning not to use integers, if a nan or inf is possible in the
code, because the behavior in numpy is not very predictable.
overflow looks ok, but I really don't like the casting of nans to zero.

Josef

 x = np.array([0,1],dtype=int)

 x[1] = np.nan
 x
array([0, 0])

 x[1]= np.inf
Traceback (most recent call last):
OverflowError: cannot convert float infinity to long

 np.array([np.nan, 1],dtype=int)
array([0, 1])

 np.array([0, np.inf],dtype=int)
Traceback (most recent call last):
ValueError: setting an array element with a sequence.

 np.array([np.nan, np.inf]).astype(int)
array([-2147483648, -2147483648])


and now yours looks like an inf cast to zero

 x = np.array([0,1],dtype=int)
 x/x[0]
array([0, 0])

Masked arrays look good for this:

>>> x = np.ma.array([0,1],dtype=int)
>>> x
masked_array(data = [0 1],
             mask = False,
       fill_value = 99)

>>> x/x[0]
masked_array(data = [-- --],
             mask = [ True  True],
       fill_value = 99)
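Along the same lines, invalid float results can be masked after the fact with `np.ma.masked_invalid` — a minimal sketch, assuming a float dtype so the division produces inf rather than an integer sentinel:

```python
import numpy as np

x = np.array([0.0, 2.0])
with np.errstate(divide="ignore"):  # silence the divide-by-zero warning
    r = np.ma.masked_invalid(np.reciprocal(x))
print(r)  # the inf entry comes back masked; 0.5 survives
```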
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] reciprocal(0)

2009-06-06 Thread Ralf Gommers
You're right, that's a little inconsistent. I would also prefer to get an
overflow error for divide by 0 rather than casting to zero.

- ralf
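For the float case, numpy already lets you opt into an exception on divide-by-zero via `np.errstate` — a small sketch (this covers floats only; the integer `reciprocal(0)` behavior discussed above is a separate issue):

```python
import numpy as np

# Promote the float divide-by-zero warning to an exception.
with np.errstate(divide="raise"):
    try:
        np.reciprocal(np.array([0.0]))
    except FloatingPointError as err:
        print("raised:", err)
```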


On Sun, Jun 7, 2009 at 12:22 AM, josef.p...@gmail.com wrote:


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion