Re: [Numpy-discussion] array of matrices

2009-03-28 Thread Robert Kern
2009/3/27 Charles R Harris charlesr.har...@gmail.com:

 On Fri, Mar 27, 2009 at 4:43 PM, Robert Kern robert.k...@gmail.com wrote:

 On Fri, Mar 27, 2009 at 17:38, Bryan Cole br...@cole.uklinux.net wrote:
  I have a number of arrays of shape (N,4,4). I need to perform a
  vectorised matrix-multiplication between pairs of them, i.e.
  matrix-multiplication rules for the last two dimensions, usual
  element-wise rule for the 1st dimension (of length N).
 
  (How) is this possible with numpy?

 dot(a,b) was specifically designed for this use case.

 I think maybe he wants to treat them as stacked matrices.

Oh, right. Sorry. dot(a, b) works when a is (N, 4, 4) and b is just
(4, 4). Never mind.
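For the genuinely stacked case, where a and b are both (N, 4, 4), here is a sketch of two equivalent routes; the shapes and the broadcast-and-sum trick are illustrative, not the only way:

```python
import numpy as np

N = 10
a = np.random.rand(N, 4, 4)
b = np.random.rand(N, 4, 4)

# Element-wise over the first axis, matrix-multiply the trailing (4, 4)
# pairs. The explicit loop states the intent directly.
c_loop = np.array([np.dot(a[n], b[n]) for n in range(N)])

# A vectorised equivalent via a sum-product over the shared inner axis:
# c[n, i, k] = sum_j a[n, i, j] * b[n, j, k]
c_vec = (a[:, :, :, np.newaxis] * b[:, np.newaxis, :, :]).sum(axis=2)

assert np.allclose(c_loop, c_vec)
print(c_vec.shape)  # (10, 4, 4)
```

The vectorised form trades memory (a temporary of shape (N, 4, 4, 4)) for the Python-level loop.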

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy?

2009-03-28 Thread John Reid
I imagine I'm using 64 bit numpy as I made a vanilla install from recent 
source on a 64 bit box but how can I tell for sure? I have some problems 
creating large arrays.


In [29]: a=numpy.empty((1024, 1024, 1024), dtype='int8')
works just fine


In [30]: a=numpy.empty((1024, 1024, 2048), dtype='int8')
gives me the dimensions too large error:
ValueError: dimensions too large.


In [31]: a=numpy.empty((1024, 1024, 2047), dtype='int8')
gives me a memory error:
MemoryError:
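The failing shapes line up with a signed 32-bit size limit, assuming a 32-bit build:

```python
# int8 elements occupy one byte each, so the element count equals the
# buffer size in bytes.
print(1024 * 1024 * 1024)   # 1073741824: 1 GiB, below 2**31
print(1024 * 1024 * 2048)   # 2147483648: exactly 2**31, overflows a signed
                            # 32-bit size -> "dimensions too large"
print(1024 * 1024 * 2047)   # 2146435072: representable, but ~2 GiB cannot
                            # be carved out of a 32-bit address space
                            # -> MemoryError
```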


How can I create these large arrays? Do I need to make sure I have a 64 
bit python? How do I do that?


Thanks in advance,
John.



Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy?

2009-03-28 Thread David Cournapeau
On Sat, Mar 28, 2009 at 8:01 PM, John Reid j.r...@mail.cryst.bbk.ac.uk wrote:
 I imagine I'm using 64 bit numpy as I made a vanilla install from recent
 source on a 64 bit box but how can I tell for sure? I have some problems
 creating large arrays.

from platform import machine
print machine()

Should give you something like x86_64 for 64 bits intel/amd architecture.
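A fuller sketch also checks the pointer size of the running interpreter itself, which is what actually matters for creating large arrays:

```python
import platform
import struct

# What the OS/kernel reports:
print(platform.machine())          # e.g. 'x86_64' or 'i686'

# The pointer size of this Python build: 'P' is a C void*, so 4 bytes
# means a 32-bit interpreter and 8 bytes a 64-bit one.
print(struct.calcsize('P') * 8)    # 32 or 64

# Equivalent check via the platform module:
print(platform.architecture()[0])  # '32bit' or '64bit'
```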

David


Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy?

2009-03-28 Thread Charles R Harris
On Sat, Mar 28, 2009 at 5:01 AM, John Reid j.r...@mail.cryst.bbk.ac.uk wrote:

 I imagine I'm using 64 bit numpy as I made a vanilla install from recent
 source on a 64 bit box but how can I tell for sure? I have some problems
 creating large arrays.


What platform are you on? I'm guessing Mac. You can check python on unix
type systems with

$[char...@f9 ~]$ file `which python`
/usr/bin/python: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped

Chuck


Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy?

2009-03-28 Thread John Reid
Sorry for noise, it is my mistake. My assumption that the box is 64 bit 
was wrong :(

At least the processors are 64 bit:
Intel® Core™2 Duo Processor T9600

but uname -m reports:
i686

which as far as I understand means it thinks it is a 32 bit processor. 
If anyone knows better please let me know.

John.




Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy?

2009-03-28 Thread John Reid
David Cournapeau wrote:
 from platform import machine
 print machine()
 
 Should give you something like x86_64 for 64 bits intel/amd architecture,


In [3]: from platform import machine

In [4]: print machine()
i686


Now I'm wondering why the OS isn't 64 bit but that's not for discussion 
here I guess.

John.



Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy?

2009-03-28 Thread David Cournapeau
On Sat, Mar 28, 2009 at 8:23 PM, John Reid j.r...@mail.cryst.bbk.ac.uk wrote:
 David Cournapeau wrote:
 from platform import machine
 print machine()

 Should give you something like x86_64 for 64 bits intel/amd architecture,


 In [3]: from platform import machine

 In [4]: print machine()
 i686


 Now I'm wondering why the OS isn't 64 bit but that's not for discussion
 here I guess.

Generally, at least on linux, you have to choose a different
installation CD (or bootstrap method) depending on whether you want 32
or 64 bits OS when installing. Assuming a 64 bits capable CPU, I think
you can't run 64 bits binaries on a 32 bits OS, but the contrary is
more common (I don't really know the details - I stopped caring with
vmware :) ).

cheers,

David


Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy?

2009-03-28 Thread John Reid


Charles R Harris wrote:
 What really matters is if python is 64 bits. Most 64 bit systems also 
 run 32 bit binaries.

Are you saying that even if uname -m gives i686, I still might be able 
to build a 64 bit python and numpy?



Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy?

2009-03-28 Thread David Cournapeau
On Sat, Mar 28, 2009 at 8:32 PM, John Reid j.r...@mail.cryst.bbk.ac.uk wrote:


 Charles R Harris wrote:
 What really matters is if python is 64 bits. Most 64 bit systems also
 run 32 bit binaries.

 Are you saying that even if uname -m gives i686, I still might be able
 to build a 64 bit python and numpy?

I think he meant exactly the contrary ;)

AFAIK, you can't run 64 bits binaries on a 32 bits linux, even if your
CPU is 64 bits,

David


Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy?

2009-03-28 Thread John Reid
Charles R Harris wrote:
 What platform are you on? I'm guessing Mac. You can check python on unix 
 type systems with
 
 $[char...@f9 ~]$ file `which python`
 /usr/bin/python: ELF 32-bit LSB executable, Intel 80386, version 1 
 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped

I'm on OpenSuse:

file `which python`
/usr/local/bin/python: ELF 32-bit LSB executable, Intel 80386, version 1 
(SYSV), for GNU/Linux 2.6.4, dynamically linked (uses shared libs), for 
GNU/Linux 2.6.4, not stripped



Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy?

2009-03-28 Thread Charles R Harris
On Sat, Mar 28, 2009 at 5:32 AM, John Reid j.r...@mail.cryst.bbk.ac.uk wrote:



 Charles R Harris wrote:
  What really matters is if python is 64 bits. Most 64 bit systems also
  run 32 bit binaries.

 Are you saying that even if uname -m gives i686, I still might be able
 to build a 64 bit python and numpy?


Probably not. You need a 64 bit operating system and it doesn't look like
you have that. Did you install Suse yourself?

Chuck


Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy?

2009-03-28 Thread John Reid

Charles R Harris wrote:
 
 
 On Sat, Mar 28, 2009 at 5:32 AM, John Reid j.r...@mail.cryst.bbk.ac.uk wrote:
 
 
 
 Charles R Harris wrote:
   What really matters is if python is 64 bits. Most 64 bit systems also
   run 32 bit binaries.
 
 Are you saying that even if uname -m gives i686, I still might be able
 to build a 64 bit python and numpy?
 
 
 Probably not. You need a 64 bit operating system and it doesn't look 
 like you have that. Did you install Suse yourself?

Nope, but I do have root access to the box. I think it is probably a bit 
late to change it now considering how much has been installed already 
and the number of other users. Thanks to you and David for your help.

John.



Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64bit numpy?

2009-03-28 Thread Dinesh B Vadhia
Uhmmm!  I installed 64-bit Python (2.5x) on a Windows 64-bit Vista machine 
(yes, strange but true) hoping that the 32-bit Numpy & Scipy libraries would 
work but they didn't. 


 
From: Charles R Harris 
Sent: Saturday, March 28, 2009 4:28 AM
To: Discussion of Numerical Python 
Subject: Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64bit 
numpy?




 
On Sat, Mar 28, 2009 at 5:23 AM, John Reid j.r...@mail.cryst.bbk.ac.uk wrote:

David Cournapeau wrote:
 from platform import machine
 print machine()

 Should give you something like x86_64 for 64 bits intel/amd architecture,



  In [3]: from platform import machine

  In [4]: print machine()
  i686


  Now I'm wondering why the OS isn't 64 bit but that's not for discussion
  here I guess.


What really matters is if python is 64 bits. Most 64 bit systems also run 32 
bit binaries.

Chuck 







Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64bit numpy?

2009-03-28 Thread David Cournapeau
Dinesh B Vadhia wrote:
 Uhmmm!  I installed 64-bit Python (2.5x) on a Windows 64-bit Vista
 machine (yes, strange but true) hoping that the 32-bit Numpy & Scipy
 libraries would work but they didn't.

That's a totally different situation: in your case, python and numpy
share the same address space in one process (for all purposes, numpy is a
dll for python), and you certainly can't mix 32 and 64 bits in the same
process. What you can do is running 32 bits numpy/scipy for a 32 bits
python on windows 64 bits...

... or helping us make numpy and scipy work on windows 64 bits by
testing the experimental 64 bits builds of numpy/scipy for windows :)

cheers,

David


[Numpy-discussion] [Announce] Numpy 1.3.0 rc1

2009-03-28 Thread David Cournapeau
Hi,

I am pleased to announce the release of the rc1 for numpy
1.3.0. You can find source tarballs and installers for both Mac OS X
and Windows on the sourceforge page:

https://sourceforge.net/projects/numpy/

The release note for the 1.3.0 release are below,

The Numpy developers

=========================
NumPy 1.3.0 Release Notes
=========================

This minor release includes numerous bug fixes, official python 2.6 support, and
several new features such as generalized ufuncs.

Highlights
==========

Python 2.6 support
~~~~~~~~~~~~~~~~~~

Python 2.6 is now supported on all previously supported platforms, including
windows.

http://www.python.org/dev/peps/pep-0361/

Generalized ufuncs
~~~~~~~~~~~~~~~~~~

There is a general need for looping over not only functions on scalars
but also over functions on vectors (or arrays), as explained on
http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions. We propose to
realize this concept by generalizing the universal functions (ufuncs), and
provide a C implementation that adds ~500 lines to the numpy code base. In
current (specialized) ufuncs, the elementary function is limited to
element-by-element operations, whereas the generalized version supports
sub-array by sub-array operations. The Perl vector library PDL provides a
similar functionality and its terms are re-used in the following.

Each generalized ufunc has information associated with it that states
what the core dimensionality of the inputs is, as well as the
corresponding dimensionality of the outputs (the element-wise ufuncs have
zero core dimensions). The list of the core dimensions for all arguments
is called the signature of a ufunc. For example, the ufunc numpy.add has
signature (),()->(), defining two scalar inputs and one scalar output.

Another example is (see the GeneralLoopingFunctions page) the function
inner1d(a,b) with a signature of (i),(i)->(). This applies the inner
product along the last axis of each input, but keeps the remaining indices
intact. For example, where a is of shape (3,5,N) and b is of shape (5,N),
this will return an output of shape (3,5). The underlying elementary
function is called 3*5 times. In the signature, we specify one core
dimension (i) for each input and zero core dimensions () for the output,
since it takes two 1-d arrays and returns a scalar. By using the same name
i, we specify that the two corresponding dimensions should be of the same
size (or one of them is of size 1 and will be broadcasted).

The dimensions beyond the core dimensions are called loop dimensions. In
the above example, this corresponds to (3,5).

The usual numpy broadcasting rules apply, where the signature determines
how the dimensions of each input/output object are split into core and
loop dimensions:

While an input array has a smaller dimensionality than the corresponding
number of core dimensions, 1's are pre-pended to its shape. The core
dimensions are removed from all inputs and the remaining dimensions are
broadcasted, defining the loop dimensions. The output is given by the loop
dimensions plus the output core dimensions.
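The signature mechanics above can be mimicked with plain broadcasting; a sketch with illustrative shapes:

```python
import numpy as np

# The (i),(i)->() signature of inner1d: one core dimension i per input,
# a scalar core output, and broadcasting over the remaining (loop) dims.
a = np.ones((3, 5, 7))   # loop dims (3, 5), core dim i = 7
b = np.ones((5, 7))      # loop dims (5,), broadcast against (3, 5)

# The same result via ordinary broadcasting plus a sum over the core axis:
# out[m, n] = sum_i a[m, n, i] * b[n, i]
out = (a * b).sum(axis=-1)
print(out.shape)   # (3, 5): only the loop dimensions remain
```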

Experimental Windows 64 bits support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Numpy can now be built on windows 64 bits (amd64 only, not IA64), with
both MS compilers and mingw-w64 compilers.

This is *highly experimental*: DO NOT USE FOR PRODUCTION USE. See
INSTALL.txt, Windows 64 bits section, for more information on limitations
and how to build it by yourself.

New features
============

Formatting issues
~~~~~~~~~~~~~~~~~

Float formatting is now handled by numpy instead of the C runtime: this
enables locale-independent formatting and more robust fromstring and
related methods. Special values (inf and nan) are also more consistent
across platforms (nan vs IND/NaN, etc...), and more consistent with recent
python formatting work (in 2.6 and later).

Nan handling in max/min
~~~~~~~~~~~~~~~~~~~~~~~

The maximum/minimum ufuncs now reliably propagate nans. If one of the
arguments is a nan, then nan is returned. This affects np.min/np.max,
amin/amax and the array methods max/min. New ufuncs fmax and fmin have
been added to deal with non-propagating nans.
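A quick sketch of the difference, with illustrative values:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])

# maximum propagates the nan...
print(np.maximum(a, 2.0))   # -> [2., nan, 3.]

# ...while fmax returns the non-nan argument instead.
print(np.fmax(a, 2.0))      # -> [2., 2., 3.]

# The reductions follow the ufunc they are built on:
print(np.max(a))            # nan
print(np.fmax.reduce(a))    # 3.0
```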

Nan handling in sign
~~~~~~~~~~~~~~~~~~~~

The ufunc sign now returns nan for the sign of a nan.


New ufuncs
~~~~~~~~~~

#. fmax - same as maximum for integer types and non-nan floats. Returns the
   non-nan argument if one argument is nan and returns nan if both arguments
   are nan.
#. fmin - same as minimum for integer types and non-nan floats. Returns the
   non-nan argument if one argument is nan and returns nan if both arguments
   are nan.
#. deg2rad - converts degrees to radians, same as the radians ufunc.
#. rad2deg - converts radians to degrees, same as the degrees ufunc.
#. log2 - base 2 logarithm.
#. exp2 - base 2 exponential.
#. trunc - truncate floats to nearest integer towards zero.
#. logaddexp - add numbers stored as logarithms and return the logarithm
   of the sum.
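A short tour of these ufuncs; the values are illustrative:

```python
import numpy as np

print(np.log2(8.0))          # 3.0
print(np.exp2(3.0))          # 8.0
print(np.trunc(-1.7))        # -1.0 (towards zero, unlike floor)
print(np.deg2rad(180.0))     # pi
print(np.fmax(np.nan, 1.0))  # 1.0: the non-nan argument wins

# logaddexp adds numbers kept as logarithms, i.e. log(exp(x) + exp(y)),
# without overflowing for large x or y.
print(np.logaddexp(np.log(2.0), np.log(3.0)))  # log(5.0)
```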

[Numpy-discussion] DVCS at PyCon

2009-03-28 Thread Travis E. Oliphant

FYI from PyCon

Here at PyCon, it has been said that Python will be moving towards DVCS 
and will be using bzr or mercurial, but explicitly *not* git.   It would 
seem that *git* got the lowest score in the Developer survey that 
Brett Cannon did. 

The reasons seem to be:

  * git doesn't have good Windows clients
  * git is not written in Python


I think the sample size was pretty small to be making decisions on 
(especially when most opinions were un-informed).   I don't know if 
it matters that NumPy / SciPy use the same DVCS as Python, but it's a 
data-point.

-Travis



Re: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1

2009-03-28 Thread Robert Pyle
Hi all,

On Mar 28, 2009, at 9:26 AM, David Cournapeau wrote:
 I am pleased to announce the release of the rc1 for numpy
 1.3.0. You can find source tarballs and installers for both Mac OS X
 and Windows on the sourceforge page:

 https://sourceforge.net/projects/numpy/

I have a PPC Mac, dual G5, running 10.5.6.

The Mac OS X installer (numpy-1.3.0rc1-py2.5-macosx10.5.dmg) did not
work for me.  It said none of my disks were suitable for installation.
The last time around, numpy-1.3.0b1-py2.5-macosx10.5.dmg persisted in
installing itself into the system python rather than the Enthought
distribution that I use, so I installed that version from the source
tarball.

This time, installing from the source tarball also went smoothly.

Testing seems okay:
  np.test()
Running unit tests for numpy
NumPy version 1.3.0rc1
NumPy is installed in /Library/Frameworks/Python.framework/Versions/4.1.30101/lib/python2.5/site-packages/numpy
Python version 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008,  
15:28:32) [GCC 4.0.1 (Apple Computer, Inc. build 5370)]
nose version 0.10.3
[test progress output elided: dots with K (known failure) and S (skip) markers]
--
Ran 2030 tests in 13.930s

OK (KNOWNFAIL=2, SKIP=1)
nose.result.TextTestResult run=2030 errors=0 failures=0

Bob



Re: [Numpy-discussion] How to tell whether I am using 32 bitor 64bit numpy?

2009-03-28 Thread Dinesh B Vadhia
David

1)  32-bit Numpy/Scipy with 32-bit Python on 64-bit Windows does work.  But, it 
doesn't take advantage of memory > 2gb.

2)  Happy to help out with the experimental 64-bit builds of Numpy/Scipy.  But, 
would this be with pre-installed Windows libraries or source files as I'm not 
setup for dealing with source files?  The machine has an Intel Core2 Quad CPU 
with 8gb ram.  Strangely, the 64-bit Python 2.5x Intel version wouldn't install 
but the AMD version did.

Dinesh

 


From: David Cournapeau 
Sent: Saturday, March 28, 2009 6:16 AM
To: Discussion of Numerical Python 
Subject: Re: [Numpy-discussion] How to tell whether I am using 32 bitor 64bit 
numpy?


Dinesh B Vadhia wrote:
 Uhmmm!  I installed 64-bit Python (2.5x) on a Windows 64-bit Vista
 machine (yes, strange but true) hoping that the 32-bit Numpy & Scipy
 libraries would work but they didn't.

That's a totally different situation: in your case, python and numpy
share the same address space in one process (for all purpose, numpy is a
dll for python), and you certainly can't mix 32 and 64 bits in the same
process. What you can do is running 32 bits numpy/scipy for a 32 bits
python on windows 64 bits...

... or helping us making numpy and scipy work on windows 64 bits by
testing the experimental 64 bits builds of numpy/scipy for windows :)

cheers,

David



Re: [Numpy-discussion] DVCS at PyCon

2009-03-28 Thread David Cournapeau
Hi Travis,

On Sat, Mar 28, 2009 at 11:54 PM, Travis E. Oliphant
oliph...@enthought.com wrote:

 FYI from PyCon

 Here at PyCon, it has been said that Python will be moving towards DVCS
 and will be using bzr or mecurial, but explicitly *not* git.   It would
 seem that *git* got the lowest score in the Developer survey that
 Brett Cannon did.

It is interesting how those tools are viewed so differently in
different communities. I am too quite doubtful about the validity of
those surveys :)

 The reasons seem to be:

  * git doesn't have good Windows clients

Depending on what is meant by good windows client (GUI, IDE
integration), it is true, but then neither do bzr or hg have good
clients, so I find this statement a bit strange. What is certainly
true is that git developers care much less about windows than bzr (and
hg ?). For example, I would guess git will never care much about case
insensitive fs, etc... (I know bzr developers worked quite a bit on
this).

  * git is not written with Python

I can somewhat understand why it matters to python, but does it matter to us?

There are definitely strong arguments against git - but I don't think
being written in python is a strong one. The lack of good windows
support is a good argument against changing from svn, but very
unconvincing compared to other tools. Git has now so much more
manpower compared to hg and bzr (many more project use it: the list of
visible projects using git is becoming quite impressive) - from a 3rd
party POV, I think git is much better set up than bzr and hg. Gnome
choosing git could be significant (they made the decision a couple of
days ago).

 I think the sample size was pretty small to be making decisions on
 (especially when most opinions where un-informed).

Most people just choose the one they first use. Few people know
several DVCS. Pauli and I started a page about arguments pro/cons git
- it is still very much work in progress:

http://projects.scipy.org/numpy/wiki/GitMigrationProposal

Since few people are willing to try different systems, we also started
a few workflows (compared to svn):

http://projects.scipy.org/numpy/wiki/GitWorkflow

FWIW, I have spent some time to look at converting svn repo to git,
with proper conversion of branches, tags, and other things. I have
converted my own scikits to git as a first trial (I have numpy
converted as well, but I did not put it anywhere to avoid confusion).
This part of the problem would be relatively simple to handle.

cheers,

David


Re: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1

2009-03-28 Thread David Cournapeau
Hi Robert,

Thanks for the report.

On Sun, Mar 29, 2009 at 12:10 AM, Robert Pyle rp...@post.harvard.edu wrote:
 Hi all,

 On Mar 28, 2009, at 9:26 AM, David Cournapeau wrote:
 I am pleased to announce the release of the rc1 for numpy
 1.3.0. You can find source tarballs and installers for both Mac OS X
 and Windows on the sourceforge page:

 https://sourceforge.net/projects/numpy/

 I have a PPC Mac, dual G5, running 10.5.6.

 The Mac OS X installer (numpy-1.3.0rc1-py2.5-macosx10.5.dmg) did not
 work for me.  It said none of my disks were suitable for
 installation.

Hm, strange, I have never encountered this problem. To be sure I
understand, you could open/mount the .dmg, but the .pkg refuses to
install ?

  The last time around, numpy-1.3.0b1-py2.5-
 macosx10.5.dmg persisted in installing itself into the system python
 rather than the Enthought distribution that I use, so I installed that
 version from the source tarball.

I am afraid there is nothing I can do here - the installer can only
work with the system python I believe (or more exactly the python
version I built the package against).

Maybe people more familiar with bdist_mpkg could prove me wrong ?

cheers,

David


Re: [Numpy-discussion] How to tell whether I am using 32 bitor 64bit numpy?

2009-03-28 Thread David Cournapeau
2009/3/29 Dinesh B Vadhia dineshbvad...@hotmail.com:
 David

 1)  32-bit Numpy/Scipy with 32-bit Python on 64-bit Windows does work.  But,
  it doesn't take advantage of memory > 2gb.

Indeed. But running numpy 32 bits in python 64 bits is not possible -
and even if it were, I guess it could not handle more than 32 bits
pointers either :)


 2)  Happy to help out with the experimental 64-bit builds of Numpy/Scipy.
 But, would this be with pre-installed Windows libraries or source files as
 I'm not setup for dealing with source files?

Binaries. Building numpy and scipy from sources on windows 64 bits is
still a relatively epic battle I would not recommend to anyone :)

The machine has an Intel Core2
 Quad CPU with 8gb ram.

The windows version matters much more than the CPU (server vs xp vs
vista). I think we will only distribute binaries for python 2.6, too.

 Strangely, the 64-bit Python 2.5x Intel version
 wouldn't install but the AMD version did.

If by Intel version you mean itanium, then it is no surprise. Itanium
and amd64 are totally different CPU, and not compatible with each
other. Otherwise, I am not sure what you mean by 64
bits Intel version.

David


Re: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1

2009-03-28 Thread Robert Pyle
Hi David,

On Mar 28, 2009, at 12:04 PM, David Cournapeau wrote:

 Hi Robert,

 Thanks for the report.

 On Sun, Mar 29, 2009 at 12:10 AM, Robert Pyle  
 rp...@post.harvard.edu wrote:

 The Mac OS X installer (numpy-1.3.0rc1-py2.5-macosx10.5.dmg) did not
 work for me.  It said none of my disks were suitable for
 installation.

 Hm, strange, I have never encountered this problem. To be sure I
 understand, you could open/mount the .dmg, but the .pkg refuses to
 install ?

Yes.  When it gets to Select a Destination, I would expect my boot  
disk to get the green arrow as the installation target, but it (and  
the other three disks) have the exclamation point in the red circle.   
Same thing happened on my MacBook Pro (Intel) with its one disk.

As I noted before, however, installation from source went without  
problems on both machines.

Bob



Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Chris Colbert
going back and looking at this error:

C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall
-Wstrict-prototypes -fPIC

compile options: '-c'
gcc: _configtest.c
gcc -pthread _configtest.o -L/usr/local/atlas/lib -llapack -lptf77blas
-lptcblas -latlas -o _configtest
/usr/bin/ld: _configtest: hidden symbol `__powidf2' in
/usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: ld returned 1 exit status
/usr/bin/ld: _configtest: hidden symbol `__powidf2' in
/usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: ld returned 1 exit status
failure.
removing: _configtest.c _configtest.o


isn't that saying that _configtest.o is referencing something in libgcc which
is not linked to in the compile command?

Is this something I can add to the numpy setup script?

This problem is really beating me down. I've gone back and re-made the atlas
.so like 15 times, linking with libgcc in 15 different ways, all to no
avail.


Thanks again for any help!

Chris


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread David Cournapeau
2009/3/29 Chris Colbert sccolb...@gmail.com:
 going back and looking at this error:

 C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall
 -Wstrict-prototypes -fPIC

 compile options: '-c'
 gcc: _configtest.c
 gcc -pthread _configtest.o -L/usr/local/atlas/lib -llapack -lptf77blas
 -lptcblas -latlas -o _configtest
 /usr/bin/ld: _configtest: hidden symbol `__powidf2' in
 /usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO
 /usr/bin/ld: final link failed: Nonrepresentable section on output
 collect2: ld returned 1 exit status
 /usr/bin/ld: _configtest: hidden symbol `__powidf2' in
 /usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO
 /usr/bin/ld: final link failed: Nonrepresentable section on output
 collect2: ld returned 1 exit status
 failure.
 removing: _configtest.c _configtest.o


 isn't that saying that _configtest.o is referencing something in libgcc which
 is not linked to in the compile command?

 Is this something I can add to the numpy setup script?

 This problem is really beating me down. I've gone back and re-made the atlas
 .so like 15 times linking with libgcc in 15 different ways. all to no
 avail

The way to build shared libraries in atlas does not always work, and
some auto-detected settings are often the wrong ones.

There is unfortunately not much we can do to help - and understanding
the exact problem may be quite difficult if you are not familiar with
various build issues.

David


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Chris Colbert
Robin,

Thanks. I need to get the backport for multiprocessing on 2.5.

But now, it's more of a matter of not wanting to admit defeat.

Cheers,

Chris

On Sat, Mar 28, 2009 at 2:30 PM, Robin robi...@gmail.com wrote:

 2009/3/28 Chris Colbert sccolb...@gmail.com:
  Alright, building numpy against atlas from the repositories works, but
 this
  atlas only contains the single threaded libraries. So i would like to get
 my
  build working completely.

 It doesn't help at all with your problem - but I thought I'd point out
 there are other ways to exploit multicore machines than using threaded
 ATLAS (if that is your goal).

 For example, I use single threaded libraries and control parallel
 execution myself using multiprocessing module (this is easier for
 simple batch jobs, but might not be appropriate for your case).

 There is some information about this on the wiki:
 http://scipy.org/ParallelProgramming

 Cheers

 Robin



Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Robin
2009/3/28 Chris Colbert sccolb...@gmail.com:
 Alright, building numpy against atlas from the repositories works, but this
 atlas only contains the single threaded libraries. So i would like to get my
 build working completely.

It doesn't help at all with your problem - but I thought I'd point out
there are other ways to exploit multicore machines than using threaded
ATLAS (if that is your goal).

For example, I use single threaded libraries and control parallel
execution myself using multiprocessing module (this is easier for
simple batch jobs, but might not be appropriate for your case).

There is some information about this on the wiki:
http://scipy.org/ParallelProgramming

Cheers

Robin


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Charles R Harris
2009/3/28 Chris Colbert sccolb...@gmail.com

 this is really, really odd: half of numpy.linalg works, the other half doesn't.


 working functions:

 cholesky
 det
 inv
 norm
 solve

 non-working functions (they all hang at 100% cpu):

 eig
 eigh
 eigvals
 eigvalsh
 pinv
 lstsq
 svd



 I must be a total n00b to be the only person running into this problem :)

 Cheers,


What does your lapack make.inc file look like? Here is what I used on 64 bit
Hardy back when.


#  LAPACK make include file.   #
#  LAPACK, Version 3.1.1   #
#  February 2007   #

#
# See the INSTALL/ directory for more examples.
#
SHELL = /bin/sh
#
#  The machine (platform) identifier to append to the library names
#
PLAT = _LINUX
#
#  Modify the FORTRAN and OPTS definitions to refer to the
#  compiler and desired compiler options for your machine.  NOOPT
#  refers to the compiler options desired when NO OPTIMIZATION is
#  selected.  Define LOADER and LOADOPTS to refer to the loader and
#  desired load options for your machine.
#
FORTRAN  = gfortran
OPTS = -funroll-all-loops -O3 -fPIC
DRVOPTS  = $(OPTS)
NOOPT= -fPIC
LOADER   = gfortran
LOADOPTS =
#
# Timer for the SECOND and DSECND routines
#
# Default : SECOND and DSECND will use a call to the EXTERNAL FUNCTION ETIME
# TIMER= EXT_ETIME
# For RS6K : SECOND and DSECND will use a call to the EXTERNAL FUNCTION ETIME_
# TIMER= EXT_ETIME_
# For gfortran compiler: SECOND and DSECND will use a call to the INTERNAL FUNCTION ETIME
TIMER= INT_ETIME
# If your Fortran compiler does not provide etime (like Nag Fortran Compiler, etc...)
# SECOND and DSECND will use a call to the INTERNAL FUNCTION CPU_TIME
# TIMER= INT_CPU_TIME
# If neither of this works...you can use the NONE value... In that case, SECOND and DSECND will always return 0
# TIMER = NONE
#
#  The archiver and the flag(s) to use when building archive (library)
#  If you system has no ranlib, set RANLIB = echo.
#
ARCH = ar
ARCHFLAGS= cr
RANLIB   = ranlib
#
#  The location of the libraries to which you will link.  (The
#  machine-specific, optimized BLAS library should be used whenever
#  possible.)
#
BLASLIB  = ../../blas$(PLAT).a
LAPACKLIB= lapack$(PLAT).a
TMGLIB   = tmglib$(PLAT).a
EIGSRCLIB= eigsrc$(PLAT).a
LINSRCLIB= linsrc$(PLAT).a

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Chris Colbert
this is really, really strange: half of numpy.linalg works, the other half doesn't.

working functions:

cholesky
det
inv
norm
solve

non-working functions (they all hang at 100% cpu):

eig
eigh
eigvals
eigvalsh
pinv
lstsq
svd



I must be a total n00b to be the only person running into this problem :)

Cheers,

Chris

On Sat, Mar 28, 2009 at 2:42 PM, Chris Colbert sccolb...@gmail.com wrote:

 alright,

 so i solved the linking error by building numpy against the static atlas
 libraries instead of .so's.

 But my original problem persists. Some functions work properly, buy
 numpy.linalg.eig() still hangs.

 the build log is attached.


 Chris


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Chris Colbert
here it is: 32 bit Intrepid


#  LAPACK make include file.   #
#  LAPACK, Version 3.1.1   #
#  February 2007   #

#
SHELL = /bin/sh
#
#  The machine (platform) identifier to append to the library names
#
PLAT = _LINUX
#
#  Modify the FORTRAN and OPTS definitions to refer to the
#  compiler and desired compiler options for your machine.  NOOPT
#  refers to the compiler options desired when NO OPTIMIZATION is
#  selected.  Define LOADER and LOADOPTS to refer to the loader and
#  desired load options for your machine.
#
FORTRAN  = gfortran
OPTS = -O2 -fPIC
DRVOPTS  = $(OPTS)
NOOPT= -O0 -fPIC
LOADER   = gfortran
LOADOPTS =
#
# Timer for the SECOND and DSECND routines
#
# Default : SECOND and DSECND will use a call to the EXTERNAL FUNCTION ETIME
#TIMER= EXT_ETIME
# For RS6K : SECOND and DSECND will use a call to the EXTERNAL FUNCTION ETIME_
# TIMER= EXT_ETIME_
# For gfortran compiler: SECOND and DSECND will use a call to the INTERNAL FUNCTION ETIME
TIMER= INT_ETIME
# If your Fortran compiler does not provide etime (like Nag Fortran Compiler, etc...)
# SECOND and DSECND will use a call to the INTERNAL FUNCTION CPU_TIME
# TIMER= INT_CPU_TIME
# If neither of this works...you can use the NONE value... In that case, SECOND and DSECND will always return 0
# TIMER = NONE
#
#  The archiver and the flag(s) to use when building archive (library)
#  If you system has no ranlib, set RANLIB = echo.
#
ARCH = ar
ARCHFLAGS= cr
RANLIB   = ranlib
#
#  The location of the libraries to which you will link.  (The
#  machine-specific, optimized BLAS library should be used whenever
#  possible.)
#
BLASLIB  = ../../blas$(PLAT).a
LAPACKLIB= lapack$(PLAT).a
TMGLIB   = tmglib$(PLAT).a
EIGSRCLIB= eigsrc$(PLAT).a
LINSRCLIB= linsrc$(PLAT).a
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Chris Colbert
I notice my OPTS and NOOPT are different from yours. (I went off the
scipy.org install guide.)

Do you think that's the issue?

Cheers,

Chris
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Charles R Harris
2009/3/28 Chris Colbert sccolb...@gmail.com

 I notice my OPTS and NOOPTS are different than yours. (I went of 
 scipy.orginstall guide)

 Do you think that's the issue?


Probably not, but my experience is limited. IIRC, I also had to get the
command line for building ATLAS just right and build LAPACK separately
instead of having ATLAS do it. It took several tries and much poring through
the ATLAS instructions.

Something else to check is if you have another LAPACK/ATLAS sitting around
somewhere.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Chris Colbert
I just ran a dummy config on ATLAS and it's giving me different OPTS and
NOOPT flags than the scipy tutorial, so I'm gonna try that and report back.

Chris

2009/3/28 Charles R Harris charlesr.har...@gmail.com



 2009/3/28 Chris Colbert sccolb...@gmail.com

 I notice my OPTS and NOOPTS are different than yours. (I went of
 scipy.org install guide)

 Do you think that's the issue?


 Probably not, but my experience is limited. IIRC, I also had to get the
 command line for building ATLAS just right and build LAPACK separately
 instead of having ATLAS do it. It took several tries and much poring through
 the ATLAS instructions.

 Something else to check is if you have another LAPACK/ATLAS sitting around
 somewhere.

 Chuck



 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Charles R Harris
2009/3/28 Chris Colbert sccolb...@gmail.com

 i just ran a dummy config on atlas and its giving me different OPTS and
 NOOPTS flags than the scipy tutorial. so im gonna try that and report back


I think that I also had to explicitly specify the bit size flag on the ATLAS
command line during various builds, -b32/64 or something like that...

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Chris Colbert
yeah, I set -b 32 on atlas...

the bogus ATLAS config was telling me to set OPTS = -O -fPIC -m32 and NOOPT =
-O -fPIC -m32; this caused the make process of LAPACK to hang.

So I set OPTS = -O2 -fPIC -m32 and NOOPT = -O0 -fPIC -m32, which is the same
as all of my first attempts except for the presence of -m32. So maybe
specifying bit size here is needed too.

Chris

2009/3/28 Charles R Harris charlesr.har...@gmail.com



 2009/3/28 Chris Colbert sccolb...@gmail.com

 i just ran a dummy config on atlas and its giving me different OPTS and
 NOOPTS flags than the scipy tutorial. so im gonna try that and report back


 I think that I also had to explicitly specify the bit size flag on the
 ATLAS command line during various builds, -b32/64 or something like that...

 Chuck



 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ?

2009-03-28 Thread David Warde-Farley
Alan G Isaac wrote:
 On 3/27/2009 6:48 AM David Cournapeau apparently wrote:
 To build the numpy .dmg mac os x installer, I use a script from the
 adium project, which uses applescript and some mac os x black magic. The
 script seems to be GPL, as adium itself:
 
 
 It might be worth a query to see if the
 author would release just this script
 under the modified BSD license.
 http://trac.adiumx.com/wiki/ContactUs

Just FYI (since this seems to have been resolved), I know that most of 
the Adium team are sympathetic to BSD licensing as it makes their 
professional lives less complicated. Many also work on Growl ( 
http://growl.info ) which is BSD-licensed. The main reason for the GPL 
in Adium is the dependency (for now) on libpurple, which infects their 
whole codebase. So, in the (doubtful) event that some of Adium's code 
might prove useful to NumPy/SciPy, it probably *is* worth asking the 
developers about relicensing bits and pieces.

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Chris Colbert
YES! YES! YES! YES! HAHAHAHA! YES!

using these flags in make.inc to build LAPACK 3.1.1 worked:

OPTS = -O2 -fPIC -m32
NOOPT = -O2 -fPIC -m32

then build atlas as normal and build numpy against the static atlas
libraries (building against the .so's created by atlas causes a linking
error in numpy build.log.  Numpy will still work, but who knows what
function may be broken).

Now, off to build numpy 1.3.0rc1

Thanks for all the help gents!

Chris




On Sat, Mar 28, 2009 at 4:27 PM, Chris Colbert sccolb...@gmail.com wrote:

 yeah, I set -b 32 on atlas...

 the bogus atlas config was telling me to set OPTS = O -fPIC -m32 and NOPTS
 = O -fPIC -m32, this caused the make process of lapack to hang.

 So i set OPTS = O2 -fPIC -m32 and NOPTS = O0 -fPIC -m32.  Which is the same
 as all of my first attempts except for the presence of -m32. so maybe
 specifying bit size here is needed too.

 Chris

 2009/3/28 Charles R Harris charlesr.har...@gmail.com



 2009/3/28 Chris Colbert sccolb...@gmail.com

 i just ran a dummy config on atlas and its giving me different OPTS and
 NOOPTS flags than the scipy tutorial. so im gonna try that and report back


 I think that I also had to explicitly specify the bit size flag on the
 ATLAS command line during various builds, -b32/64 or something like that...

 Chuck



 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Charles R Harris
2009/3/28 Chris Colbert sccolb...@gmail.com

 YES! YES! YES! YES! HAHAHAHA! YES!

 using these flags in make.inc to build lapack 1.3.1 worked:

 OPTS = O2 -fPIC -m32
 NOPTS = O2 -fPIC -m32

 then build atlas as normal and build numpy against the static atlas
 libraries (building against the .so's created by atlas causes a linking
 error in numpy build.log.  Numpy will still work, but who knows what
 function may be broken).

 Now, off to build numpy 1.3.0rc1

 Thanks for all the help gents!


You might need to run ldconfig to get the dynamic linking working.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Charles R Harris
On Sat, Mar 28, 2009 at 3:34 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 2009/3/28 Chris Colbert sccolb...@gmail.com

 YES! YES! YES! YES! HAHAHAHA! YES!

 using these flags in make.inc to build lapack 1.3.1 worked:

 OPTS = O2 -fPIC -m32
 NOPTS = O2 -fPIC -m32

 then build atlas as normal and build numpy against the static atlas
 libraries (building against the .so's created by atlas causes a linking
 error in numpy build.log.  Numpy will still work, but who knows what
 function may be broken).

 Now, off to build numpy 1.3.0rc1

 Thanks for all the help gents!


 You might need to run ldconfig to get the dynamic linking working.


Oh, and the *.so libs don't install automagically, I had to explicitly copy
them into the library.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Chris Colbert
what does ldconfig do other than refresh the library path?

i copied the .so's  to /usr/local/atlas/lib and added that path to
/etc/ld.so.conf.d/scipy.conf and then did ldconfig

this was before building numpy
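(Those steps as a shell sketch, for reference; the paths are the ones given in the thread, and the commands are assumed to run with root privileges.)

```shell
# Copy the ATLAS shared libraries into place (paths from the thread).
sudo cp lib*.so /usr/local/atlas/lib/

# Tell the dynamic linker about the new directory...
echo '/usr/local/atlas/lib' | sudo tee /etc/ld.so.conf.d/scipy.conf

# ...and rebuild the linker cache.
sudo ldconfig

# Verify the libraries are now visible to the linker.
ldconfig -p | grep atlas
```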

Chris

2009/3/28 Charles R Harris charlesr.har...@gmail.com



 On Sat, Mar 28, 2009 at 3:34 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 2009/3/28 Chris Colbert sccolb...@gmail.com

 YES! YES! YES! YES! HAHAHAHA! YES!

 using these flags in make.inc to build lapack 1.3.1 worked:

 OPTS = O2 -fPIC -m32
 NOPTS = O2 -fPIC -m32

 then build atlas as normal and build numpy against the static atlas
 libraries (building against the .so's created by atlas causes a linking
 error in numpy build.log.  Numpy will still work, but who knows what
 function may be broken).

 Now, off to build numpy 1.3.0rc1

 Thanks for all the help gents!


 You might need to run ldconfig to get the dynamic linking working.


 Oh, and the *.so libs don't install automagically, I had to explicitly copy
 them into the library.

 Chuck



 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Chris Colbert
aside from a smaller numpy install size, what do i gain from linking against
the .so's vs the static libraries?

Chris

On Sat, Mar 28, 2009 at 6:09 PM, Chris Colbert sccolb...@gmail.com wrote:

 what does ldconfig do other than refresh the library path?

 i copied the .so's  to /usr/local/atlas/lib and added that path to
 /etc/ld.so.conf.d/scipy.conf and then did ldconfig

 this was before building numpy

 Chris

 2009/3/28 Charles R Harris charlesr.har...@gmail.com



 On Sat, Mar 28, 2009 at 3:34 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 2009/3/28 Chris Colbert sccolb...@gmail.com

 YES! YES! YES! YES! HAHAHAHA! YES!

 using these flags in make.inc to build lapack 1.3.1 worked:

 OPTS = O2 -fPIC -m32
 NOPTS = O2 -fPIC -m32

 then build atlas as normal and build numpy against the static atlas
 libraries (building against the .so's created by atlas causes a linking
 error in numpy build.log.  Numpy will still work, but who knows what
 function may be broken).

 Now, off to build numpy 1.3.0rc1

 Thanks for all the help gents!


 You might need to run ldconfig to get the dynamic linking working.


 Oh, and the *.so libs don't install automagically, I had to explicitly
 copy them into the library.

 Chuck



 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU

2009-03-28 Thread Chris Colbert
Just to see if it would work, I compiled against the .so's and just didn't
worry about the linking error. Then I installed numpy and ran numpy.test().

these are the results:

Ran 2030 tests in 9.778s

OK (KNOWNFAIL=1, SKIP=11)
<nose.result.TextTestResult run=2030 errors=0 failures=0>



so I guess that means it's OK.
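(A quick sanity check, not from the thread, for confirming which BLAS/LAPACK numpy actually picked up and that the previously hanging eigensolver now returns:)

```python
import numpy as np

# Shows the BLAS/LAPACK configuration numpy was built against; with a
# working ATLAS build the atlas include/library paths are listed here.
np.__config__.show()

# Cheap smoke test of the eigensolver that was hanging at 100% CPU:
# the identity matrix has all eigenvalues equal to 1.
w = np.linalg.eigvals(np.eye(3))
assert np.allclose(w, 1.0)
```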

Chris

On Sat, Mar 28, 2009 at 6:30 PM, Chris Colbert sccolb...@gmail.com wrote:

 aside from a smaller numpy install size, what do i gain from linking
 against the .so's vs the static libraries?

 Chris


 On Sat, Mar 28, 2009 at 6:09 PM, Chris Colbert sccolb...@gmail.comwrote:

 what does ldconfig do other than refresh the library path?

 i copied the .so's  to /usr/local/atlas/lib and added that path to
 /etc/ld.so.conf.d/scipy.conf and then did ldconfig

 this was before building numpy

 Chris

 2009/3/28 Charles R Harris charlesr.har...@gmail.com



 On Sat, Mar 28, 2009 at 3:34 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 2009/3/28 Chris Colbert sccolb...@gmail.com

 YES! YES! YES! YES! HAHAHAHA! YES!

 using these flags in make.inc to build lapack 1.3.1 worked:

 OPTS = O2 -fPIC -m32
 NOPTS = O2 -fPIC -m32

 then build atlas as normal and build numpy against the static atlas
 libraries (building against the .so's created by atlas causes a linking
 error in numpy build.log.  Numpy will still work, but who knows what
 function may be broken).

 Now, off to build numpy 1.3.0rc1

 Thanks for all the help gents!


 You might need to run ldconfig to get the dynamic linking working.


 Oh, and the *.so libs don't install automagically, I had to explicitly
 copy them into the library.

 Chuck



 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Failure with 1.3.0b1 under Solaris 10 SPARC

2009-03-28 Thread Jeff Blaine
Same problem with 1.3.0rc1

Jeff Blaine wrote:
 Aside from this, the website for NumPy should have a link to the
 list subscription address, not a link to the list itself (which
 cannot be posted to unless one is a member).
 
 Python 2.4.2 (#2, Dec  6 2006, 17:18:19)
 [GCC 3.3.5] on sunos5
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import numpy
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
   File "/afs/.rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/__init__.py", line 130, in ?
     import add_newdocs
   File "/afs/.rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/add_newdocs.py", line 9, in ?
     from lib import add_newdoc
   File "/afs/.rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/lib/__init__.py", line 4, in ?
     from type_check import *
   File "/afs/.rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/lib/type_check.py", line 8, in ?
     import numpy.core.numeric as _nx
   File "/afs/.rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/core/__init__.py", line 5, in ?
     import multiarray
 ImportError: ld.so.1: python: fatal: relocation error: file /afs/.rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/core/multiarray.so: symbol __builtin_isfinite: referenced symbol not found

 See build.log attached as well.
 
 
 


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Failure with 1.3.0b1 under Solaris 10 SPARC

2009-03-28 Thread Charles R Harris
On Sat, Mar 28, 2009 at 5:31 PM, Jeff Blaine jbla...@mitre.org wrote:

 Same problem with 1.3.0rc1

 Jeff Blaine wrote:
  Aside from this, the website for NumPy should have a link to the
  list subscription address, not a link to the list itself (which
  cannot be posted to unless one is a member).
 
  [quoted traceback snipped]


Google indicates that this might be a problem with a missing isfinite and
gcc 3.3.5. I think we should be detecting this, but...

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Failure with 1.3.0b1 under Solaris 10 SPARC

2009-03-28 Thread Charles R Harris
On Sat, Mar 28, 2009 at 6:35 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Sat, Mar 28, 2009 at 5:31 PM, Jeff Blaine jbla...@mitre.org wrote:

 Same problem with 1.3.0rc1

 Jeff Blaine wrote:
  Aside from this, the website for NumPy should have a link to the
  list subscription address, not a link to the list itself (which
  cannot be posted to unless one is a member).
 
  [quoted traceback snipped]


 Google indicates that this might be a problem with a missing isfinite and
 gcc 3.3.5. I think we should be detecting this, but...


What version of glibc do you have?

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1

2009-03-28 Thread Alan G Isaac
On 3/28/2009 9:26 AM David Cournapeau apparently wrote:
 I am pleased to announce the release of the rc1 for numpy
 1.3.0. You can find source tarballs and installers for both Mac OS X
 and Windows on the sourceforge page:
 https://sourceforge.net/projects/numpy/

Was the Python 2.6 Superpack intentionally omitted?

Alan Isaac


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array of matrices

2009-03-28 Thread Anne Archibald
2009/3/28 Geoffrey Irving irv...@naml.us:
 On Sat, Mar 28, 2009 at 12:47 AM, Robert Kern robert.k...@gmail.com wrote:
 2009/3/27 Charles R Harris charlesr.har...@gmail.com:

 On Fri, Mar 27, 2009 at 4:43 PM, Robert Kern robert.k...@gmail.com wrote:

 On Fri, Mar 27, 2009 at 17:38, Bryan Cole br...@cole.uklinux.net wrote:
  I have a number of arrays of shape (N,4,4). I need to perform a
  vectorised matrix-multiplication between pairs of them I.e.
  matrix-multiplication rules for the last two dimensions, usual
  element-wise rule for the 1st dimension (of length N).
 
  (How) is this possible with numpy?

 dot(a,b) was specifically designed for this use case.

 I think maybe he wants to treat them as stacked matrices.

 Oh, right. Sorry. dot(a, b) works when a is (N, 4, 4) and b is just
 (4, 4). Never mind.

 It'd be great if this operation existed as a primitive.  What do you
 think would be the best way in which to add it?  One option would be
 to add a keyword argument to dot giving a set of axes to map over.
 E.g.,

    dot(a, b, map=0) = array([dot(u,v) for u,v in zip(a,b)]) # but in C

 map isn't a very good name for the argument, though.

I think the right long-term solution is to make dot (and some other
linear algebra functions) into generalized ufuncs, so that when you
dot two multidimensional objects together, they are treated as arrays
of two-dimensional arrays, broadcasting is done on all but the last
two dimensions, and then the linear algebra is applied elementwise.
This covers basically all stacked matrices uses in a very general
way, but would require some redesigning of the linear algebra system -
for example, dot() currently works on both two- and one-dimensional
arrays, which can't work in such a setting.

The infrastructure to support such generalized ufuncs has been added
to numpy, but as far as I know no functions yet make use of it.
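For the (N, 4, 4) stacks the original poster asked about, the stacked product can already be written out (a sketch using np.einsum, which was added to numpy after this thread; in 1.3-era numpy the explicit Python loop over dot() is the fallback):

```python
import numpy as np

N = 5
a = np.random.rand(N, 4, 4)
b = np.random.rand(N, 4, 4)

# elementwise over the first axis, matrix multiplication over the last two
c = np.einsum('nij,njk->nik', a, b)

# same result as an explicit Python loop over dot()
loop = np.array([np.dot(a[n], b[n]) for n in range(N)])
assert np.allclose(c, loop)
```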

Anne


 Geoffrey
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array of matrices

2009-03-28 Thread Robert Kern
On Sat, Mar 28, 2009 at 23:15, Anne Archibald peridot.face...@gmail.com wrote:
 2009/3/28 Geoffrey Irving irv...@naml.us:
 On Sat, Mar 28, 2009 at 12:47 AM, Robert Kern robert.k...@gmail.com wrote:
 2009/3/27 Charles R Harris charlesr.har...@gmail.com:

 On Fri, Mar 27, 2009 at 4:43 PM, Robert Kern robert.k...@gmail.com wrote:

 On Fri, Mar 27, 2009 at 17:38, Bryan Cole br...@cole.uklinux.net wrote:
  I have a number of arrays of shape (N,4,4). I need to perform a
  vectorised matrix-multiplication between pairs of them I.e.
  matrix-multiplication rules for the last two dimensions, usual
  element-wise rule for the 1st dimension (of length N).
 
  (How) is this possible with numpy?

 dot(a,b) was specifically designed for this use case.

 I think maybe he wants to treat them as stacked matrices.

 Oh, right. Sorry. dot(a, b) works when a is (N, 4, 4) and b is just
 (4, 4). Never mind.

 It'd be great if this operation existed as a primitive.  What do you
 think would be the best way in which to add it?  One option would be
 to add a keyword argument to dot giving a set of axes to map over.
 E.g.,

    dot(a, b, map=0) = array([dot(u,v) for u,v in zip(a,b)]) # but in C

 map isn't a very good name for the argument, though.

 I think the right long-term solution is to make dot (and some other
 linear algebra functions) into generalized ufuncs, so that when you
 dot two multidimensional objects together, they are treated as arrays
 of two-dimensional arrays, broadcasting is done on all but the last
 two dimensions, and then the linear algebra is applied elementwise.
 This covers basically all stacked matrices uses in a very general
 way, but would require some redesigning of the linear algebra system -
 for example, dot() currently works on both two- and one-dimensional
 arrays, which can't work in such a setting.

 The infrastructure to support such generalized ufuncs has been added
 to numpy, but as far as I know no functions yet make use of it.

I don't think there is a way to do it in general with dot(). Some
cases are ambiguous. I think you will need separate matrix-matrix,
matrix-vector, and vector-vector gufuncs, to coin a term.
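The three separate signatures can be made concrete (a sketch, again via np.einsum from later numpy versions; the variable names are illustrative):

```python
import numpy as np

N = 3
mats = np.random.rand(N, 4, 4)
vecs = np.random.rand(N, 4)

# Three distinct generalized-ufunc signatures; dot() alone could not
# infer which one is meant from the operand shapes:
mm = np.einsum('nij,njk->nik', mats, mats)  # matrix-matrix: (i,j),(j,k)->(i,k)
mv = np.einsum('nij,nj->ni', mats, vecs)    # matrix-vector: (i,j),(j)->(i)
vv = np.einsum('ni,ni->n', vecs, vecs)      # vector-vector: (i),(i)->()

print(mm.shape, mv.shape, vv.shape)  # (3, 4, 4) (3, 4) (3,)
```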

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion