Re: [Numpy-discussion] SciPy Journal

2007-05-31 Thread Anne Archibald
On 31/05/07, Travis Oliphant [EMAIL PROTECTED] wrote:

 2) I think its scope should be limited to papers that describe
 algorithms and code that are in NumPy / SciPy / SciKits.   Perhaps we
 could also accept papers that describe code that depends on NumPy /
 SciPy that is also easily available.

I think there's a place for code that doesn't fit in scipy itself but
could be closely tied to it - scikits, for example, or other code that
can't go in for license reasons (such as specialization).

 3) I'd like to make a requirement for inclusion of new code in SciPy
 that it have an associated journal article describing the algorithms,
 design approach, etc.  I don't see this journal article as being
 user-interface documentation for the code.  I see this as a place to
 describe why the code is organized as it is and to detail any algorithms
 that are used.

I don't think that's a good idea. It raises the barrier to
contributing code (particularly for non-native English speakers),
which is not something we need. Certainly every major piece of code
warrants a journal article, or at least a short piece, and certainly
the author should have first shot, but I think it's not unreasonable
to allow the code to go in without the article being written. But (for
example) I implemented the Kuiper statistic and would be happy to
contribute it to scipy (once it's seen a bit more debugging), but it's
quite adequately described in the literature already, so it doesn't
seem worth writing an article about it.

Anne M. Archibald


Re: [Numpy-discussion] SciPy Journal

2007-05-31 Thread Matthieu Brucher

Hi,

 1) I'd like to get it going so that we can push out an electronic issue
 after the SciPy conference (in September)

That would be great for the visibility of numpy and scipy :)


 2) I think its scope should be limited to papers that describe
 algorithms and code that are in NumPy / SciPy / SciKits.   Perhaps we
 could also accept papers that describe code that depends on NumPy /
 SciPy that is also easily available.

 3) I'd like to make a requirement for inclusion of new code in SciPy
 that it have an associated journal article describing the algorithms,
 design approach, etc.  I don't see this journal article as being
 user-interface documentation for the code.  I see this as a place to
 describe why the code is organized as it is and to detail any algorithms
 that are used.



For this point, I have the same opinion as Anne:
- requiring an article for every piece of code raises the entry barrier,
and as Anne said, some code may be too trivial to write about;
- a peer-review process implies that an article can be rejected; what
happens if the code is accepted but not the article, or vice versa?
- perhaps encouraging new contributors to propose an article would be a
solution?


 4) The purpose of the journal as I see it is to

 a) provide someplace to document what is actually done in SciPy and
 related software.
 b) provide a teaching tool of numerical methods with actual,
 people-use-it code that would be useful to researchers, students, and
 professionals.
 c) hopefully clever new algorithms will be developed for SciPy by
 people using Python that could be show-cased here
 d) provide a peer-review publication opportunity for people who
 contribute to open-source software

+1 for everything, very good idea.

 5) We obviously need associate editors and people willing to review
 submitted articles as well as people willing to submit articles.   I
 have two articles that can be submitted within the next two months.
 What do other people have?

I could talk about the design I proposed for a generic optimizer, and
hopefully I'll have some other generic modules that could be presented.
But it's not in scipy, and it's not an official scikit at the moment.
How long should an article be? Some journals have size limits, so...
I think I could propose one in three months: English is not my mother
tongue, and I have a pressing article deadline before then.


 It is true that I could submit this stuff to other journals, but it
 seems like doing that makes the information harder to find in the
 future and not easier.  I'm also dissatisfied with how information
 exclusionary academic journals seem to be.  They are catching up, but
 they are still not as accessible as other things available on the
 internet.

 Given the open nature of most scientific research, it is remarkable that
 getting access to the information is not as easy as it should be with
 modern search engines (if your internet domain does not subscribe to the
 e-journal).



Same for me: I have a hard time figuring out why the prices are so high,
and that closes the door to very good articles :(

Matthieu


Re: [Numpy-discussion] byteswap() leaves dtype unchanged

2007-05-31 Thread Matthew Brett
Hi,

  I think I can see your point: situation 1 seems to be the more
  common and obvious case, and coming at it from outside, you would
  have thought that a.byteswap() would change both.

 I think the reason that byteswap behaves the way it does is that for
 situation 1 you often don't actually need to do anything. Just
 calculate with the things (it'll be a bit slow); as soon as the first
 copy gets made you're back to native byte order. So for those times
 you need to do it in place it's not too much trouble to byteswap and
 adjust the byte order in the dtype (you'd need to inspect the byte
 order in the first place to know it was byteswapped...)

Thanks - good point. How about the following suggestion?

For the next release:

- rename byteswap to something like byteswapbuffer
- deprecate byteswap in favor of byteswapbuffer

Update the docstrings to make the distinction between the two situations
clearer.

I think that would reduce the element of surprise here.
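
For concreteness, here is a minimal sketch of the two situations being
discussed, assuming the array API of the time (byteswap() and
newbyteorder() both on ndarrays in numpy 1.0.x):

import numpy as np

a = np.array([1, 2, 3], dtype='>i4')   # big-endian int32

# Situation 1: foreign-endian data that we want as native values.
# byteswap() swaps the bytes but leaves the dtype ('>i4') unchanged,
# so we also flip the dtype to keep the values intact:
native = a.byteswap().newbyteorder()   # values preserved

# Situation 2: reinterpret the same buffer under the opposite byte
# order without touching the bytes at all:
reinterpreted = a.newbyteorder()       # same bytes, dtype now '<i4'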

Best,

Matthew
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] build problem on Windows

2007-05-31 Thread Pearu Peterson


Albert Strasheim wrote:
 Hello
 
 I took a quick look at the code, and it seems like new_fcompiler(...)
 throws an error too soon if a Fortran compiler cannot be detected.
 
 Instead, you might want to return some kind of NoneFCompiler that throws
 an error only if the build actually tries to compile Fortran code.

Yes, this issue should be fixed in SVN now.

Pearu


[Numpy-discussion] fortran representation

2007-05-31 Thread lorenzo bolla

Hi all,
I've got an easy question for you. I looked in Travis' book, but I
couldn't figure out the answer...

If I have a 1-D array (obtained by reading a stream of numbers with
numpy.fromfile) like this:

In [150]: data
Out[150]: array([ 2.,  3.,  4.,  3.,  4.,  5.,  4.,  5.,  6.,  5.,  6.,
7.], dtype=float32)

I want it to be considered as Fortran ordered, so that when I do:

In [151]: data.shape = (3,4)

I want to get:

array([[ 2.,  3.,  4.,  5.],
  [ 3.,  4.,  5.,  6.],
  [ 4.,  5.,  6.,  7.]], dtype=float32)

But I get:

array([[ 2.,  3.,  4.,  3.],
  [ 4.,  5.,  4.,  5.],
  [ 6.,  5.,  6.,  7.]], dtype=float32)

How can I do that?
(I know I could do data.reshape(4,3).T, but it's not very elegant, and
reshaping in N-D becomes a mess!)
(I tried numpy.array(data, order='Fortran'), with no luck...)

Thank you in advance,
Lorenzo.


Re: [Numpy-discussion] [SciPy-user] SciPy Journal

2007-05-31 Thread Lou Pecora
I agree with this idea. Very good.  Although I also
agree with Anne Archibald that requiring a journal
article in order to submit code is not a good
idea.   I would be willing to contribute an article on
writing C extensions that use numpy arrays.  I
already have something on this in the SciPy cookbook,
but I bet it would reach more people in a journal.

I also suggest that articles on using packages like
matplotlib/pylab for scientific purposes also be
included.



-- Lou Pecora,   my views are my own.
---
Great spirits have always encountered violent opposition from mediocre minds. 
-Albert Einstein


   



[Numpy-discussion] build advice

2007-05-31 Thread John Hunter
A colleague of mine is trying to update our production environment
with the latest releases of numpy, scipy, mpl and ipython, and is
worried about the lag time when there is a new numpy and old scipy,
etc... as the build progresses.  This is the scheme he is considering,
which looks fine to me, but I thought I would bounce it off the list
here in case anyone has confronted or thought about this problem
before.

Alternatively, an install to a tmp dir and then a bulk cp -r should work, no?

JDH

 We're planning to be putting out a bugfix
 matplotlib point release to 0.90.1 -- can you hold off on the mpl
 install for a day or so?

Sure.  While I have your attention, do you think this install scheme
would work?  It's the body of an email I just sent to c.l.py.


At work I need to upgrade numpy, scipy, ipython and matplotlib.  They need
to be done all at once.  All have distutils setups but the new versions and
the old versions are incompatible with one another as a group because
numpy's APIs changed.  Ideally, I could just do something like:

cd ~/src
cd numpy
python setup.py install
cd ../scipy
python setup.py install
cd ../matplotlib
python setup.py install
cd ../ipython
python setup.py install

however, even if nothing goes awry, it leaves me with a fair chunk of time
where the state of the collective system is inconsistent (new numpy, old
everything else).  I'm wondering: can I stage the installs to a different
directory tree like so:

export PYTHONPATH=$HOME/local/lib/python-2.4/site-packages
cd ~/src
cd numpy
python setup.py install --prefix=$PYTHONPATH
cd ../scipy
python setup.py install --prefix=$PYTHONPATH
cd ../matplotlib
python setup.py install --prefix=$PYTHONPATH
cd ../ipython
python setup.py install --prefix=$PYTHONPATH

That would get them all built as a cohesive set.  Then I'd repeat the
installs without PYTHONPATH:

unset PYTHONPATH
cd ~/src
cd numpy
python setup.py install
cd ../scipy
python setup.py install
cd ../matplotlib
python setup.py install
cd ../ipython
python setup.py install

Presumably the compilation (the time-consuming part) is all
location-independent, so the second time the build_ext part should be fast.

Can anyone comment on the feasibility of this approach?  I guess what I'm
wondering is what dependencies there are on the installation directory.


Re: [Numpy-discussion] build advice

2007-05-31 Thread John Hunter
On 5/31/07, Matthew Brett [EMAIL PROTECTED] wrote:
 Hi,

  That would get them all built as a cohesive set.  Then I'd repeat the
  installs without PYTHONPATH:

 Is that any different from:
 cd ~/src
cd numpy
python setup.py build
cd ../scipy
python setup.py build

Well, the scipy and mpl builds need to see the new numpy build, I
think that is the issue.
JDH


Re: [Numpy-discussion] build advice

2007-05-31 Thread Matthew Brett
Ah, yes, I was typing too fast, thinking too little.

On 5/31/07, John Hunter [EMAIL PROTECTED] wrote:
 Well, the scipy and mpl builds need to see the new numpy build, I
 think that is the issue.



Re: [Numpy-discussion] SciPy Journal

2007-05-31 Thread Travis Oliphant
Anne Archibald wrote:

On 31/05/07, Travis Oliphant [EMAIL PROTECTED] wrote:

2) I think its scope should be limited to papers that describe
algorithms and code that are in NumPy / SciPy / SciKits.   Perhaps we
could also accept papers that describe code that depends on NumPy /
SciPy that is also easily available.



I think there's a place for code that doesn't fit in scipy itself but
could be closely tied to it - scikits, for example, or other code that
can't go in for license reasons (such as specialization).

I did mention scikits, but your point is well taken.

3) I'd like to make a requirement for inclusion of new code in SciPy
that it have an associated journal article describing the algorithms,
design approach, etc.  I don't see this journal article as being
user-interface documentation for the code.  I see this as a place to
describe why the code is organized as it is and to detail any algorithms
that are used.



I don't think that's a good idea. It raises the barrier to
contributing code (particularly for non-native English speakers),
which is not something we need. Certainly every major piece of code
warrants a journal article, or at least a short piece, and certainly
the author should have first shot, but I think it's not unreasonable
to allow the code to go in without the article being written. But (for
example) I implemented the Kuiper statistic and would be happy to
contribute it to scipy (once it's seen a bit more debugging), but it's
quite adequately described in the literature already, so it doesn't
seem worth writing an article about it.
  


What I envision is multiple levels of articles, much like you see full 
papers and correspondences in the literature.  Something like this 
should take no more than a 1-4 page letter that describes the algorithm 
used and references the available literature.  I would still like to see 
it, though, as a means to document what is in SciPy.

Of course, I'd like to see more code, but right now we have a problem in 
deciding what code should go into SciPy, and there seems to be no clear 
way to transition code from the sandbox into SciPy.  This would help in 
that process, I think.  I'm always open to helping somebody who may have 
difficulty with English.


-Travis



Re: [Numpy-discussion] SciPy Journal

2007-05-31 Thread Travis Oliphant
Matthieu Brucher wrote:


 For this point, I have the same opinion as Anne:
 - requiring an article for every piece of code raises the entry barrier,
 and as Anne said, some code may be too trivial to write about;
 - a peer-review process implies that an article can be rejected; what
 happens if the code is accepted but not the article, or vice versa?

I would like to avoid that.  In my mind, the SciPy Journal should reflect 
code that is actually available in the PyLab world.   But I could see 
having code that is not written about in the journal (the current state, 
for example...)

 - perhaps encouraging new contributors to propose an article would be
 a solution?

 I could talk about the design I proposed for a generic optimizer, and
 hopefully I'll have some other generic modules that could be presented.
 But it's not in scipy, and it's not an official scikit at the moment.
 How long should an article be? Some journals have size limits, so...


In my mind, we electronically publish something every 6 months to start 
with and then go from there.   The publication process amounts to a 
peer-review check on the work, a basic quality check on the typesetting 
(we will require authors to do their own typesetting), and then a 
listing of the articles for the specific edition.   Page count is not as 
important as file size.   Each article should be under a few MB.


-Travis



Re: [Numpy-discussion] fortran representation

2007-05-31 Thread Travis Oliphant
lorenzo bolla wrote:

 I want it to be considered as Fortran ordered, so that when I do:

 In [151]: data.shape = (3,4)

 I want to get:

 array([[ 2.,  3.,  4.,  5.],
        [ 3.,  4.,  5.,  6.],
        [ 4.,  5.,  6.,  7.]], dtype=float32)

 How can I do that?

 [rest of the question snipped; see the original message above.]

The Fortran ordering is a property of the act of changing the shape from
1-D to 2-D.  Assigning to the shape attribute only gives you C ordering.

However, the reshape method accepts a keyword argument:

data.reshape(3, 4, order='F')

will give you what you want.
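
For completeness, here is a runnable version of the fix, using the data
values from Lorenzo's example:

import numpy as np

data = np.array([2., 3., 4., 3., 4., 5., 4., 5., 6., 5., 6., 7.],
                dtype=np.float32)

# Fortran (column-major) interpretation of the flat buffer:
fmat = data.reshape(3, 4, order='F')
# array([[ 2.,  3.,  4.,  5.],
#        [ 3.,  4.,  5.,  6.],
#        [ 4.,  5.,  6.,  7.]], dtype=float32)

# Assigning to data.shape, by contrast, always interprets the buffer
# in C (row-major) order, which is why the original attempt failed.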

-Travis







[Numpy-discussion] Convert Numeric array to numpy array

2007-05-31 Thread Philip Riggs
I am not finding an answer to this question in the latest numpy
documentation. I have a package that still uses Numeric (GDAL with
Python bindings). Is this code valid, and will it work as expected to
convert the Numeric array to a numpy array (very simplified from my
script)?

import numpy
from Numeric import *
import gdal
import gdalconst
import gdalnumeric

tempmap = array([[1,0,1,0],[1,1,0,1]])
map = numpy.array(tempmap)
mask = piecewise(map, map > 0, (1,0))

scalarmap = numpy.array([[0.5,0.2,0.3,0.4],[0.7,0.5,0.6,0.3]])
surfacemap = scalarmap * mask
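
(For what it's worth, a numpy-native equivalent of the Numeric piecewise
call above, assuming the intent is a 0/1 mask; note that 'map' shadows
the builtin, as in the original:

mask = numpy.where(map > 0, 1, 0)
)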

Thanks
Philip

PS:

I considered converting the old bindings to use numpy arrays, but I
receive an error. I believe the problem lies in the datatype
conversion shown below. Is there an easy way to fix this?


# GDALDataType
GDT_Unknown = 0
GDT_Byte = 1
GDT_UInt16 = 2
GDT_Int16 = 3
GDT_UInt32 = 4
GDT_Int32 = 5
GDT_Float32 = 6
GDT_Float64 = 7
GDT_CInt16 = 8
GDT_CInt32 = 9
GDT_CFloat32 = 10
GDT_CFloat64 = 11
GDT_TypeCount = 12

def GDALTypeCodeToNumericTypeCode( gdal_code ):
    if gdal_code == GDT_Byte:
        return UnsignedInt8
    elif gdal_code == GDT_UInt16:
        return UnsignedInt16
    elif gdal_code == GDT_Int16:
        return Int16
    elif gdal_code == GDT_UInt32:
        return UnsignedInt32
    elif gdal_code == GDT_Int32:
        return Int32
    elif gdal_code == GDT_Float32:
        return Float32
    elif gdal_code == GDT_Float64:
        return Float64
    elif gdal_code == GDT_CInt16:
        return Complex32
    elif gdal_code == GDT_CInt32:
        return Complex32
    elif gdal_code == GDT_CFloat32:
        return Complex32
    elif gdal_code == GDT_CFloat64:
        return Complex64
    else:
        return None

def NumericTypeCodeToGDALTypeCode( numeric_code ):
    if numeric_code == UnsignedInt8:
        return GDT_Byte
    elif numeric_code == Int16:
        return GDT_Int16
    elif numeric_code == UnsignedInt16:
        return GDT_UInt16
    elif numeric_code == Int32:
        return GDT_Int32
    elif numeric_code == UnsignedInt32:
        return GDT_UInt32
    elif numeric_code == Int:
        return GDT_Int32
    elif numeric_code == UnsignedInteger:
        return GDT_UInt32
    elif numeric_code == Float32:
        return GDT_Float32
    elif numeric_code == Float64:
        return GDT_Float64
    elif numeric_code == Complex32:
        return GDT_CFloat32
    elif numeric_code == Complex64:
        return GDT_CFloat64
    else:
        return None


[Numpy-discussion] subscribe

2007-05-31 Thread sittner
subscribe
I would like to subscribe to the mailing list.



[Numpy-discussion] argument parsing in lapack_lite_zgeqrf - 64bit issue

2007-05-31 Thread Ornolfur Rognvaldsson

Hi!

I am using numpy-1.0.1 and ran into problems with 4 routines in
lapack_litemodule.c on 64-bit machines.  I traced the problem back to
ints being parsed as longs.  This has been fixed in numpy-1.0.3 for three
of the routines, but lapack_lite_zgeqrf() still has the wrong parsing.
The attached patch should fix it, I guess.

Össi
--- lapack_litemodule.c	2007-03-11 07:34:55.0 +0100
+++ lapack_litemodule.c_fixed	2007-05-31 11:12:26.0 +0200
@@ -755,7 +755,7 @@
     int lda;
     int info;
 
-    TRY(PyArg_ParseTuple(args,"llOlOOll",&m,&n,&a,&lda,&tau,&work,&lwork,&info));
+    TRY(PyArg_ParseTuple(args,"iiOiOOii",&m,&n,&a,&lda,&tau,&work,&lwork,&info));
 
     /* check objects and convert to right storage order */
     TRY(check_object(a,PyArray_CDOUBLE,"a","PyArray_CDOUBLE","zgeqrf"));
@@ -765,7 +765,7 @@
     lapack_lite_status__ = \
     FNAME(zgeqrf)(&m,&n,ZDATA(a),&lda,ZDATA(tau),ZDATA(work),&lwork,&info);
 
-    return Py_BuildValue("{s:l,s:l,s:l,s:l,s:l,s:l}","zgeqrf_",lapack_lite_status__,"m",m,"n",n,"lda",lda,"lwork",lwork,"info",info);
+    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:i,s:i}","zgeqrf_",lapack_lite_status__,"m",m,"n",n,"lda",lda,"lwork",lwork,"info",info);
 }
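
[For context: on LP64 platforms (typical 64-bit Linux) a C long is 8
bytes while an int is 4, so parsing into an int variable with the 'l'
format code writes past the variable, which is why these routines broke
only on 64-bit machines. A quick way to see the size mismatch from
Python (ctypes is in the standard library as of Python 2.5):

import ctypes
print ctypes.sizeof(ctypes.c_long), ctypes.sizeof(ctypes.c_int)
# prints "8 4" on an LP64 system
]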




Re: [Numpy-discussion] [SciPy-user] SciPy Journal

2007-05-31 Thread Nicolas Pettiaux
2007/5/31, Travis Oliphant [EMAIL PROTECTED]:

 1) I'd like to get it going so that we can push out an electronic issue
 after the SciPy conference (in September)

Such a journal is a very good idea indeed. It would also support the
credibility of python/scipy/numpy for an academic audience that
legitimates scientific production mostly through articles in journals.

 2) I think its scope should be limited to papers that describe
 algorithms and code that are in NumPy / SciPy / SciKits.   Perhaps we
 could also accept papers that describe code that depends on NumPy /
 SciPy that is also easily available.

More generally, examples of uses of scipy / numpy would be interesting
in such a journal, as well as the proceedings of the scipy conferences.

 It is true that I could submit this stuff to other journals, but it
 seems like doing that makes the information harder to find in the
 future and not easier.  I'm also dissatisfied with how information
 exclusionary academic journals seem to be.  They are catching up, but
 they are still not as accessible as other things available on the internet.

Having *one* main place where much of the information and documentation,
with peer-reviewed validation, can be found is IMHO very interesting.

Regards,

Nicolas
-- 
Nicolas Pettiaux - email: [EMAIL PROTECTED]
Utiliser des formats ouverts et des logiciels libres -
http://www.passeralinux.org


[Numpy-discussion] ATLAS,LAPACK compilation - help!

2007-05-31 Thread sittner

Hello there,
I'm new here, so excuse me if the solution is trivial:
I have installed ATLAS and LAPACK on my Ubuntu 7 dual-core Intel machine.
Now, when I try to install numpy, it tells me it doesn't find these
libraries:


$ python setup.py install
Running from numpy source directory.
F2PY Version 2_3816
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /usr/local/lib
  libraries mkl,vml,guide not found in /usr/lib
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries lapack,blas not found in /usr/local/lib/ATLAS/src/
  libraries lapack,blas not found in /usr/local/lib/ATLAS
  libraries lapack,blas not found in /usr/local/lib
  libraries lapack,blas not found in /usr/lib
  NOT AVAILABLE

atlas_blas_info:
  libraries lapack,blas not found in /usr/local/lib/ATLAS/src/
  libraries lapack,blas not found in /usr/local/lib/ATLAS
  libraries lapack,blas not found in /usr/local/lib
  libraries lapack,blas not found in /usr/lib
  NOT AVAILABLE
..
I have installed ATLAS and LAPACK with no errors.
ATLAS is in /usr/local/lib/ATLAS/:
$ ls /usr/local/lib/ATLAS
bin   doc  interfaces  Make.Linux_UNKNOWNSSE2_2  README  tune
CONFIGinclude  lib makes src
config.c  INSTALL.txt  MakefileMake.top  tst.o

so, what seems to be the problem?
thanks,
t







Re: [Numpy-discussion] [SciPy-user] SciPy Journal

2007-05-31 Thread Aldarion
Lou Pecora wrote:
 I agree with this idea. Very good.  Although I also
 agree with Anne Archibald that requiring a journal
 article in order to submit code is not a good
 idea.   I would be willing to contribute an article on
 writing C extensions that use numpy arrays.  I
 already have something on this in the SciPy cookbook,
 but I bet it would reach more people in a journal.

 I also suggest that articles on using packages like
 matplotlib/pylab for scientific purposes also be
 included.
And IPython (IPython1) :).


Re: [Numpy-discussion] Convert Numeric array to numpy array

2007-05-31 Thread Travis Oliphant
Philip Riggs wrote:

I am not finding an answer to this question in the latest numpy
documentation. I have a package that still uses Numeric (GDAL with
Python bindings). Is this code valid, and will it work as expected to
convert the Numeric array to a numpy array (very simplified from my
script)?

import numpy
from Numeric import *
import gdal
import gdalconst
import gdalnumeric

tempmap = array ([[1,0,1,0],[1,1,0,1]])
map = numpy.array(tempmap)
  

Yes, this will do the conversion.

I considered converting the old bindings to use numpy arrays, but I
receive an error. I believe the problem lies in the datatype
conversion shown below. Is there an easy way to fix this?

[GDALTypeCodeToNumericTypeCode / NumericTypeCodeToGDALTypeCode
definitions snipped; see the original message above.]

All of these Int16, UnsignedInt16, etc. names are in numpy.oldnumeric.

However, they use the new character codes, so you have to take that
into account.   You might also consider using a dictionary to do this
conversion instead of the if ... elif tree.
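
For illustration, a minimal sketch of the dictionary approach. The type
names are importable from numpy.oldnumeric per the note above; the table
is just the if/elif chain from the original message rewritten:

from numpy.oldnumeric import (UnsignedInt8, UnsignedInt16, UnsignedInt32,
                              Int16, Int32, Float32, Float64,
                              Complex32, Complex64)

# GDAL type codes, as defined at the top of the original message:
(GDT_Byte, GDT_UInt16, GDT_Int16, GDT_UInt32, GDT_Int32,
 GDT_Float32, GDT_Float64, GDT_CInt16, GDT_CInt32,
 GDT_CFloat32, GDT_CFloat64) = range(1, 12)

# One entry per GDAL type code.
GDAL_TO_NUMERIC = {
    GDT_Byte:     UnsignedInt8,
    GDT_UInt16:   UnsignedInt16,
    GDT_Int16:    Int16,
    GDT_UInt32:   UnsignedInt32,
    GDT_Int32:    Int32,
    GDT_Float32:  Float32,
    GDT_Float64:  Float64,
    GDT_CInt16:   Complex32,
    GDT_CInt32:   Complex32,
    GDT_CFloat32: Complex32,
    GDT_CFloat64: Complex64,
}

def GDALTypeCodeToNumericTypeCode(gdal_code):
    # dict.get returns None for unknown codes, matching the else branch.
    return GDAL_TO_NUMERIC.get(gdal_code)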

-Travis



Re: [Numpy-discussion] ATLAS,LAPACK compilation - help!

2007-05-31 Thread Robert Kern
[EMAIL PROTECTED] wrote:
 I'm new here, so excuse me if the solution is trivial:
 I have installed ATLAS and LAPACK on my Ubuntu 7 dual-core Intel machine.
 Now, when I try to install numpy, it tells me it doesn't find these
 libraries:
 [build log snipped; see the original message above.]
 I have installed ATLAS and LAPACK with no errors.
 ATLAS is in /usr/local/lib/ATLAS/:
 [directory listing snipped]
 so, what seems to be the problem?

You haven't actually installed ATLAS. You've just built it. Don't put the source
in /usr/local/lib/ATLAS/. Put that somewhere else, like ~/src/ATLAS/. Follow the
installation instructions in INSTALL.txt. Note this section:


There are two mandatory steps to ATLAS installation (config & build), as
well as three optional steps (test, time, install) and these steps are
described in detail below.  For the impatient, here is the basic outline:
**
   mkdir my_build_dir ; cd my_build_dir
   /path/to/ATLAS/configure [flags]
   make              ! tune and compile library
   make check        ! perform sanity tests
   make ptcheck      ! checks of threaded code for multiprocessor systems
   make time         ! provide performance summary as % of clock rate
   make install      ! Copy library and include files to other directories
**


-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] SciPy Journal

2007-05-31 Thread Christopher Barker
Anne Archibald wrote:
 I implemented the Kuiper statistic and would be happy to
 contribute it to scipy (once it's seen a bit more debugging), but it's
 quite adequately described in the literature already, so it doesn't
 seem worth writing an article about it.

It could be a very short article that refers to the seminal papers in the 
existing literature -- that would still be very helpful.

 2) I think its scope should be limited to papers that describe
 algorithms and code that are in NumPy / SciPy / SciKits.

I don't see any reason to limit it -- honestly, the problem is more 
likely to be too few articles than too many! Anything that would make 
sense at the SciPy conference should be fair game. We might want to have 
a clear structure that isolates the NumPy/SciPy/SciKits articles though.

-Chris







-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]


[Numpy-discussion] f2py with ifort, code won't compile

2007-05-31 Thread Andrew Corrigan
I'm trying to compile a simple Fortran code with f2py, but I get a bunch
of errors apparently related to my setup of python + numpy + ifort.

The final error is:
error: Command ifort -L/Users/acorriga/pythonroot/lib 
/tmp/tmp9KOZQM/tmp/tmp9KOZQM/src.linux-i686-2.5/simplemodule.o 
/tmp/tmp9KOZQM/tmp/tmp9KOZQM/src.linux-i686-2.5/fortranobject.o 
/tmp/tmp9KOZQM/simple.o 
/tmp/tmp9KOZQM/tmp/tmp9KOZQM/src.linux-i686-2.5/simple-f2pywrappers2.o 
-o ./simple.so failed with exit status 1
make: *** [simple.so] Error 1

The preceding errors all look like:
/tmp/tmp9KOZQM/tmp/tmp9KOZQM/src.linux-i686-2.5/fortranobject.o(.text+0x1b84):/tmp/tmp9KOZQM/src.linux-i686-2.5/fortranobject.c:219:
 
undefined reference to `PyErr_SetString'

I also got one other one which looks like:
/Users/acorriga/pythonroot/intel/fc/9.1.039/lib/for_main.o(.text+0x3e): 
In function `main':
: undefined reference to `MAIN__'

When I first encountered this error I was using Python 2.4.  I tried
reinstalling Python, upgrading to 2.5 in case that helped, but it
apparently did not.  I'm using Intel Fortran version 9.1 and NumPy
1.0.3.  This problem is occurring on my department's lab computers.  On
my home desktop, which has Ubuntu Feisty installed, using the Feisty
repository's python-numpy package and gfortran, the same Fortran code
compiles fine with f2py.  Any ideas what the problem is?

Thanks,
Andrew Corrigan


Re: [Numpy-discussion] subscribe

2007-05-31 Thread Gael Varoquaux
Well, just go to
http://projects.scipy.org/mailman/listinfo/numpy-discussion and enter
your email address in the form.

Nice to see someone from LKB interested in numpy. You might want to talk
to Thomas Nirrengarten, from the Haroche group, who has been learning
Python and numpy recently.

Cheers,

Gaël

On Wed, May 30, 2007 at 11:05:49AM +0200, [EMAIL PROTECTED] wrote:
 subscribe
 I would like to subscribe to the mailing list.


-- 
  Gael Varoquaux,
  Groupe d'optique atomique,
  Laboratoire Charles Fabry de l'Institut d'Optique
  Campus Polytechnique, RD 128
  91127 Palaiseau cedex FRANCE
  Tel : 33 (0) 1 64 53 33 49 - Fax : 33 (0) 1 64 53 31 01
  Labs: 33 (0) 1 64 53 33 63 - 33 (0) 1 64 53 33 62


[Numpy-discussion] GPU implementation?

2007-05-31 Thread Martin Ünsal
I was wondering if anyone has thought about accelerating NumPy with a
GPU. For example nVidia's CUDA SDK provides a feasible way to offload
vector math onto the very fast SIMD processors available on the GPU.
Currently GPUs primarily support single precision floats and are not
IEEE compliant, but still could be useful for some applications.

If there turns out to be a significant speedup over using the CPU, this
could be a very accessible way to do scientific and numerical
computation using GPUs, much easier than coding directly to the GPU APIs.

Martin




Re: [Numpy-discussion] GPU implementation?

2007-05-31 Thread Charles R Harris

On 5/31/07, Martin Ünsal [EMAIL PROTECTED] wrote:


I was wondering if anyone has thought about accelerating NumPy with a
GPU. For example nVidia's CUDA SDK provides a feasible way to offload
vector math onto the very fast SIMD processors available on the GPU.
Currently GPUs primarily support single precision floats and are not
IEEE compliant, but still could be useful for some applications.



I've thought about it, but I think it would be a heck of a lot of work.
NumPy works with subarrays a lot, and I suspect this would make it tricky
to stream through a GPU. Making good use of the several pipelines would
also require a degree of parallelism that is not there now. We would also
need computation of sin, cos, and other functions for ufuncs, and that
might not work well. For ordinary matrix/array arithmetic, the shortest
route might be a version of ATLAS/BLAS, some of LAPACK, and maybe an FFT
library written to use a GPU.

Chuck


Re: [Numpy-discussion] GPU implementation?

2007-05-31 Thread James Turner
Hi Martin,

  I was wondering if anyone has thought about accelerating NumPy with a
  GPU. For example nVidia's CUDA SDK provides a feasible way to offload
  vector math onto the very fast SIMD processors available on the GPU.
  Currently GPUs primarily support single precision floats and are not
  IEEE compliant, but still could be useful for some applications.

I wasn't actually there, but I noticed that last year's SciPy
conference page includes a talk entitled "GpuPy: Using GPUs to
Accelerate NumPy", by Benjamin Eitzen (I think I also found his Web
page via Google):

   http://www.scipy.org/SciPy2006/Schedule

I also wondered whether Benjamin, or anyone else who is interested, has
come across the Open Graphics Project (I hadn't got around to asking)?

   http://wiki.duskglow.com/tiki-index.php?page=open-graphics

This would be quite a specialized combination, but I'm sure it could
be useful to some people with high performance requirements or maybe
building some kind of special appliances.

Cheers,

James.



Re: [Numpy-discussion] GPU implementation?

2007-05-31 Thread Andrew Corrigan
Martin Ünsal martinunsal at gmail.com writes:

 
 I was wondering if anyone has thought about accelerating NumPy with a
 GPU. For example nVidia's CUDA SDK provides a feasible way to offload
 vector math onto the very fast SIMD processors available on the GPU.
 [rest snipped; see the original message above.]

I've thought about this too and think that it's a great idea. The existing
library Brook, which has a similar programming model to NumPy, proves that
it's feasible.  And Brook was originally done with OpenGL & DirectX as
backends to access the hardware.  Needless to say, that's a lot harder
than using CUDA.  Since it hasn't already been pointed out, CUDA includes
the cuBLAS and cuFFT libraries.  I don't know what the status of a LAPACK
built on top of cuBLAS is, but I'd be surprised if someone isn't already
working on it.  Also, NVIDIA has stated that double-precision hardware
will be available later this year, in case that's an issue for anyone.

I agree very much that it would make GPUs more accessible, although CUDA
has done an amazing job at that already.  I think the most helpful thing
would be if it allowed us to code against the existing array interface
from NumPy in a way that the code automatically runs on the GPU in an
optimized way - using shared memory and avoiding bank conflicts.

I'd happily contribute to such a project if someone else got it started.



Re: [Numpy-discussion] ATLAS,LAPACK compilation - help!

2007-05-31 Thread David Cournapeau
Robert Kern wrote:
 You haven't actually installed ATLAS. You've just built it. Don't put
 the source in /usr/local/lib/ATLAS/. Put that somewhere else, like
 ~/src/ATLAS/. Follow the installation instructions in INSTALL.txt.
 [rest snipped; see the original message above.]

Alternatively, if you are not familiar with compiling software (and
ATLAS can be tricky to compile and install), just install the packages
provided by Ubuntu: sudo apt-get install atlas3-sse2-dev
atlas3-base-dev, and it should be fine.

David


Re: [Numpy-discussion] GPU implementation?

2007-05-31 Thread Brian Granger
This is very much worth pursuing.  I have been working on things
related to this on and off at my day job.  I can't say specifically
what I have been doing, but I can make some general comments:

* It is very easy to wrap the different parts of CUDA using ctypes and
call it from numpy (a minimal sketch of this pattern follows after this
list).

* Compared to a recent fast Intel CPU, the speedups we see are
consistent with what the NVIDIA literature reports:   10-30x is common
and in some cases we have seen up to 170x.

* Certain parts of numpy will be very easy to accelerate: things
covered by BLAS, FFTs, ufuncs, and random variates - but each of
these will have very different speedups.

* LAPACK will be tough, extremely tough in some cases.  The main issue
is that various algorithms in LAPACK rely on different levels of BLAS
(1,2, or 3).  The algorithms in LAPACK that primarily use level 1 BLAS
functions (vector operations), like LU-decomp, are probably not worth
porting to the GPU - at least not using the BLAS that NVIDIA provides.
 On the other hand, the algorithms that use more of the level 2 and 3
BLAS functions are probably worth looking at.

* NVIDIA made a design decision in its implementation of cuBLAS and
cuFFT that is somewhat detrimental for certain algorithms.  In their
implementation, the BLAS and FFT routines can _only_ be called from
the CPU, not from code running on the GPU.  Thus if you have an
algorithm that makes many calls to cuBLAS/cuFFT, you pay a large
overhead in having to keep the main flow of the algorithm on the CPU.
It is not uncommon for this overhead to completely erode any speedup
you may have gotten on the GPU.

* For many BLAS calls, the cuBLAS won't be much faster than a good
optimized BLAS from ATLAS or Goto.
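
As promised above, a minimal sketch of the ctypes pattern. Everything
named here is hypothetical: libgpumath.so and its vec_add kernel stand
in for whatever CUDA code you have compiled into a shared library; only
ctypes and numpy.ctypeslib are real APIs.

import ctypes
import numpy as np
from numpy.ctypeslib import ndpointer

# Hypothetical shared library exposing a CUDA-accelerated kernel:
#     void vec_add(const float *a, const float *b, float *out, int n);
lib = ctypes.CDLL("./libgpumath.so")
lib.vec_add.restype = None
lib.vec_add.argtypes = [
    ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),
    ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),
    ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),
    ctypes.c_int,
]

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)
out = np.empty_like(a)
lib.vec_add(a, b, out, a.size)  # the arrays' buffers cross into C here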

Brian


On 5/31/07, Martin Ünsal [EMAIL PROTECTED] wrote:
 I was wondering if anyone has thought about accelerating NumPy with a
 GPU. [rest snipped; see the original message above.]
