[Numpy-discussion] Out-of-RAM FFTs

2009-04-01 Thread Greg Novak
Hello,
I'd like to do an FFT of a moderately large 3D cube, 1024^3.  Looking
at the run-time of smaller arrays, this is not a problem in terms of
compute time, but the array doesn't fit in memory.  So, several
questions:

1) Numerical Recipes has an out-of-memory FFT algorithm, but looking
through the numpy and scipy docs and modules, I didn't find a function
that does the same thing.  Did I miss it?  Should I get to work typing
it in?
2) I had high hopes for just memory-mapping the large array and
passing it to the standard fft function.  However, the memory-mapped
region must fit into the address space, and I don't seem to be able to
use more than 2 GB at a time.  So memory mapping doesn't seem to help
me at all.

This last issue leads to another series of things that puzzle me.  I
have an iMac running OS X 10.5 with an Intel Core 2 duo processor and
4 GB of memory.  As far as I've learned, the processor is 64 bit, the
operating system is 64 bit, so I should be able to happily memory-map
my entire disk if I want.  However, Python seems to run out of steam
when it's used 2 GB.  This is true of both 2.5 and 2.6.  What gives?
Is this a Python issue?

Thanks,
Greg


[Numpy-discussion] bad line in setup.py

2009-04-01 Thread Darren Dale
In setup.py, svn_revision(), there is a line:

log.warn("unrecognized .svn/entries format; skipping %s", base)

log is not defined in setup.py. I'm using svn-1.6.

Darren


Re: [Numpy-discussion] Out-of-RAM FFTs

2009-04-01 Thread David Cournapeau
Greg Novak wrote:
 1) Numerical Recipes has an out-of-memory FFT algorithm, but looking
 through the numpy and scipy docs and modules, I didn't find a function
 that does the same thing.  Did I miss it? 

I don't think so.

  Should I get to work typing
 it in?
   

Maybe :)

 2) I had high hopes for just memory-mapping the large array and
 passing it to the standard fft function.  However, the memory-mapped
 region must fit into the address space, and I don't seem to be able to
 use more than 2 GB at a time.  So memory mapping doesn't seem to help
 me at all.

 This last issue leads to another series of things that puzzle me.  I
 have an iMac running OS X 10.5 with an Intel Core 2 duo processor and
 4 GB of memory.  As far as I've learned, the processor is 64 bit, the
 operating system is 64 bit, so I should be able to happily memory-map
 my entire disk if I want.  However, Python seems to run out of steam
 when it's used 2 GB.  This is true of both 2.5 and 2.6.  What gives?
 Is this a Python issue?
   

Yes - the official Python binaries are 32-bit only. I don't know how
advanced/usable the 64-bit build is, but I am afraid you will have to
use an unofficial build or build it yourself.
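
A quick way to check which kind of build you are actually running - a
generic snippet, nothing OS X specific - is to ask the interpreter for
its pointer size; a 32-bit build caps the usable address space at 2-4 GB
no matter what the CPU and OS support:

import struct

# 32 on a 32-bit build, 64 on a 64-bit build (pointer size in bits)
print(struct.calcsize("P") * 8)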

I don't know if the following can help you:

http://developer.apple.com/documentation/Darwin/Conceptual/64bitPorting/intro/intro.html#//apple_ref/doc/uid/TP40001064-CH205-TPXREF101

cheers,

David


[Numpy-discussion] trouble building docs with sphinx-0.6.1

2009-04-01 Thread Darren Dale
This morning I upgraded to sphinx-0.6.1, hoping to take advantage of all the
recent work that has been done to clean up and consolidate the web of sphinx
extensions. I'm seeing segfaults when I try to build my own docs, or the
h5py docs. I tried building the numpy documentation after applying the
attached patch. After building html, updating the environment, reading
sources, I see a slew of warnings and errors, followed by:

preparing documents... done
writing output... [  0%] contents
Exception occurred:
  File "/usr/lib64/python2.6/site-packages/docutils/nodes.py", line 471, in __getitem__
    return self.attributes[key]
KeyError: 'entries'
The full traceback has been saved in /tmp/sphinx-err-RDe0NL.log, if you want
to report the issue to the author.
Please also report this if it was a user error, so that a better error
message can be provided next time.
Send reports to sphinx-...@googlegroups.com. Thanks!

Here are the contents of the referenced log file:

Traceback (most recent call last):
  File "//usr/lib64/python2.6/site-packages/sphinx/cmdline.py", line 172, in main
    app.build(all_files, filenames)
  File "//usr/lib64/python2.6/site-packages/sphinx/application.py", line 129, in build
    self.builder.build_update()
  File "//usr/lib64/python2.6/site-packages/sphinx/builders/__init__.py", line 255, in build_update
    'out of date' % len(to_build))
  File "//usr/lib64/python2.6/site-packages/sphinx/builders/__init__.py", line 310, in build
    self.write(docnames, list(updated_docnames), method)
  File "//usr/lib64/python2.6/site-packages/sphinx/builders/__init__.py", line 348, in write
    doctree = self.env.get_and_resolve_doctree(docname, self)
  File "//usr/lib64/python2.6/site-packages/sphinx/environment.py", line 995, in get_and_resolve_doctree
    prune=prune_toctrees)
  File "//usr/lib64/python2.6/site-packages/sphinx/environment.py", line 1128, in resolve_toctree
    tocentries = _entries_from_toctree(toctree, separate=False)
  File "//usr/lib64/python2.6/site-packages/sphinx/environment.py", line 1109, in _entries_from_toctree
    subtree=True):
  File "//usr/lib64/python2.6/site-packages/sphinx/environment.py", line 1109, in _entries_from_toctree
    subtree=True):
  File "//usr/lib64/python2.6/site-packages/sphinx/environment.py", line 1109, in _entries_from_toctree
    subtree=True):
  File "//usr/lib64/python2.6/site-packages/sphinx/environment.py", line 1051, in _entries_from_toctree
    refs = [(e[0], str(e[1])) for e in toctreenode['entries']]
  File "/usr/lib64/python2.6/site-packages/docutils/nodes.py", line 471, in __getitem__
    return self.attributes[key]
KeyError: 'entries'

Has anyone else tried building numpy's docs with sphinx-0.6.1? Is there any
interest in sorting these issues out before 1.3 is released?

Thanks,
Darren


numpy_doc.patch
Description: Binary data


Re: [Numpy-discussion] Out-of-RAM FFTs

2009-04-01 Thread Matthew Brett
Hi,

 1) Numerical Recipes has an out-of-memory FFT algorithm, but looking
 through the numpy and scipy docs and modules, I didn't find a function
 that does the same thing.  Did I miss it?  Should I get to work typing
 it in?

No please don't do that; I'm afraid the Numerical Recipes book has a
code license that is unusable for numpy / scipy, and it would be a
very bad thing if any of their code ended up in the code base.

Best,

Matthew


Re: [Numpy-discussion] bad line in setup.py

2009-04-01 Thread David Cournapeau
2009/4/2 Darren Dale dsdal...@gmail.com:
 In setup.py, svn_revision(), there is a line:

 log.warn("unrecognized .svn/entries format; skipping %s", base)

 log is not defined in setup.py. I'm using svn-1.6.

Damn - this should be fixed in r6830. Maybe I should have stayed with
my solution instead of using the setuptools version.
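
For reference, the missing piece is just the logger import; a sketch of
the kind of change needed (not necessarily the actual r6830 diff):

from distutils import log

# ...later, inside svn_revision(), the existing call then resolves:
log.warn("unrecognized .svn/entries format; skipping %s", base)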

cheers,

David


Re: [Numpy-discussion] trouble building docs with sphinx-0.6.1

2009-04-01 Thread Darren Dale
On Wed, Apr 1, 2009 at 11:57 AM, David Cournapeau 
da...@ar.media.kyoto-u.ac.jp wrote:

 Darren Dale wrote:
  This morning I upgraded to sphinx-0.6.1, hoping to take advantage of
  all the recent work that has been done to clean up and consolidate the
  web of sphinx extensions. I'm seeing segfaults when I try to build my
  own docs, or the h5py docs.

 Segfaults? In sphinx, or numpy-related?


I just updated my numpy svn checkout, reinstalled, and now I can build the
h5py documents without segfaulting.

 I tried building the numpy documentation after applying the attached
  patch. After building html, updating the environment, reading sources,
  I see a slew of warnings and errors, followed by:
 
  preparing documents... done
  writing output... [  0%] contents
  Exception occurred:
    File "/usr/lib64/python2.6/site-packages/docutils/nodes.py", line 471, in __getitem__
      return self.attributes[key]
  KeyError: 'entries'
  The full traceback has been saved in /tmp/sphinx-err-RDe0NL.log, if
  you want to report the issue to the author.
  Please also report this if it was a user error, so that a better error
  message can be provided next time.
  Send reports to sphinx-...@googlegroups.com. Thanks!

 This often happens for non-clean build. The only solution I got so far
 was to start the doc build from scratch...


Do you mean to delete my doc/build directory and run make html? I've
already tried that.

Darren


Re: [Numpy-discussion] trouble building docs with sphinx-0.6.1

2009-04-01 Thread David Cournapeau
Darren Dale wrote:

 Do you mean to delete my doc/build directory and run make html? I've
 already tried that.

Yes, I meant that, and no, I don't have any other suggestion :(

David


Re: [Numpy-discussion] trouble building docs with sphinx-0.6.1

2009-04-01 Thread Pierre GM

On Apr 1, 2009, at 11:57 AM, David Cournapeau wrote:

 preparing documents... done
 writing output... [  0%] contents
 Exception occurred:
   File "/usr/lib64/python2.6/site-packages/docutils/nodes.py", line 471, in __getitem__
     return self.attributes[key]
 KeyError: 'entries'
 The full traceback has been saved in /tmp/sphinx-err-RDe0NL.log, if
 you want to report the issue to the author.
 Please also report this if it was a user error, so that a better  
 error
 message can be provided next time.
 Send reports to sphinx-...@googlegroups.com. Thanks!

 This often happens for non-clean build. The only solution I got so far
 was to start the doc build from scratch...

David, that won't work here - there is indeed a bug.
Part of it comes from numpydoc, which isn't completely compatible with
Sphinx 0.6.1. In particular, the code doesn't know what to do with this
'entries' parameter.
Part of it comes from Sphinx: Georg said he made the 'entries'
parameter optional, but that doesn't solve everything. Matt Knox
actually came across the 'best' solution.
Edit sphinx/environment.py, line 1051, and replace

refs = [(e[0], str(e[1])) for e in toctreenode['entries']]

with

refs = [(e[0], str(e[1])) for e in toctreenode.get('entries', [])]



 Has anyone else tried building numpy's docs with sphinx-0.6.1? Is
 there any interest in sorting these issues out before 1.3 is  
 released?

 I am afraid it is too late for the 1.3 release,

 cheers,

 David


Re: [Numpy-discussion] trouble building docs with sphinx-0.6.1

2009-04-01 Thread Darren Dale
On Wed, Apr 1, 2009 at 12:26 PM, Pierre GM pgmdevl...@gmail.com wrote:


 On Apr 1, 2009, at 11:57 AM, David Cournapeau wrote:
 
  preparing documents... done
  writing output... [  0%] contents
  Exception occurred:
    File "/usr/lib64/python2.6/site-packages/docutils/nodes.py", line 471, in __getitem__
      return self.attributes[key]
  KeyError: 'entries'
  The full traceback has been saved in /tmp/sphinx-err-RDe0NL.log, if
  you want to report the issue to the author.
  Please also report this if it was a user error, so that a better
  error
  message can be provided next time.
  Send reports to sphinx-...@googlegroups.com. Thanks!
 
  This often happens for non-clean build. The only solution I got so far
  was to start the doc build from scratch...

 David, that won't work here - there is indeed a bug.
 Part of it comes from numpydoc, which isn't completely compatible with
 Sphinx 0.6.1. In particular, the code doesn't know what to do with this
 'entries' parameter.
 Part of it comes from Sphinx: Georg said he made the 'entries'
 parameter optional, but that doesn't solve everything. Matt Knox
 actually came across the 'best' solution.
 Edit sphinx/environment.py, line 1051, and replace

 refs = [(e[0], str(e[1])) for e in toctreenode['entries']]

 with

 refs = [(e[0], str(e[1])) for e in toctreenode.get('entries', [])]


Thanks Pierre, I think that did it.


[Numpy-discussion] 1.3.0 rc1 MATHLIB env variable / bad compiler flags

2009-04-01 Thread Mark Sienkiewicz
I have this configuration:

numpy 1.3.0 rc1
Solaris 10
Python 2.5.4 compiled as a 64 bit executable

When I try to install numpy, it says:

C compiler: cc -DNDEBUG -O -xarch=native64 -xcode=pic32

compile options: '-Inumpy/core/src -Inumpy/core/include 
-I/usr/stsci/Python-2.5.4/include/python2.5 -c'
cc: _configtest.c
cc _configtest.o -lm -o _configtest
ld: fatal: file _configtest.o: wrong ELF class: ELFCLASS64
ld: fatal: File processing errors. No output written to _configtest
ld: fatal: file _configtest.o: wrong ELF class: ELFCLASS64
ld: fatal: File processing errors. No output written to _configtest
failure.

  ...

    mathlibs = check_mathlib(config_cmd)
  File "numpy/core/setup.py", line 253, in check_mathlib
    raise EnvironmentError("math library missing; rerun "
EnvironmentError: math library missing; rerun setup.py after setting the
MATHLIB env variable

Of course, the problem is that it is using the wrong compiler flags 
during the link phase, so nothing I set MATHLIB to can possibly work.

I found that I can get it to compile by creating these shell scripts:

% cat cc
#!/bin/sh
/opt/SUNWspro-6u2/bin/cc -xarch=native64 -xcode=pic32 $*

% cat f90
#!/bin/sh
/opt/SUNWspro-6u2/bin/f90 -xarch=native64 -xcode=pic32 $*

I think this looks like a bug.  I thought I might try to make a patch 
(since this is all about installing, so, in principle, you don't need to 
know much about numpy), but I did not get very far in figuring out how 
the install works.

The good news is that once you get it to build, it seems to work.
(IIRC, rc1 fails the same test that it had problems with in my other
email.  The fix on the trunk also works on 64-bit Solaris.)

Mark S.



Re: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1

2009-04-01 Thread Tommy Grav

On Mar 30, 2009, at 2:56 AM, David Cournapeau wrote:

 On Mon, Mar 30, 2009 at 3:36 AM, Robert Pyle  
 rp...@post.harvard.edu wrote:


 I just installed 2.5.4 from python.org, and the OS X installer still
 doesn't work.  This is on a PPC G5; I haven't tried it on my Intel
 MacBook Pro.

 I think I got it. To build numpy, I use virtualenv to make a
 bootstrap environment, but then the corresponding python path gets
 embedded in the .mpkg - so unless you have your python interpreter in
 exactly the same path as my bootstrap (which is very unlikely), it
 won't run at all. This would also explain why I never saw the problem.

This is exactly the problem. This is the error message that you get
when running the .dmg and no hard drives are available for selection.

You cannot install numpy 1.3.0rc1 on this volume.
  numpy requires /Users/david/src/dsp/numpy/1.3.x/bootstrap Python 2.5  
to install.


 I will prepare a new binary,

Any idea when a new binary will be available on sourceforge.net?

Cheers
Tommy


Re: [Numpy-discussion] Out-of-RAM FFTs

2009-04-01 Thread Charles R Harris
On Wed, Apr 1, 2009 at 9:26 AM, David Cournapeau 
da...@ar.media.kyoto-u.ac.jp wrote:

 Greg Novak wrote:
  1) Numerical Recipes has an out-of-memory FFT algorithm, but looking
  through the numpy and scipy docs and modules, I didn't find a function
  that does the same thing.  Did I miss it?

 I don't think so.

   Should I get to work typing
  it in?
 

 Maybe :)

  2) I had high hopes for just memory-mapping the large array and
  passing it to the standard fft function.  However, the memory-mapped
  region must fit into the address space, and I don't seem to be able to
  use more than 2 GB at a time.  So memory mapping doesn't seem to help
  me at all.
 
  This last issue leads to another series of things that puzzle me.  I
  have an iMac running OS X 10.5 with an Intel Core 2 duo processor and
  4 GB of memory.  As far as I've learned, the processor is 64 bit, the
  operating system is 64 bit, so I should be able to happily memory-map
  my entire disk if I want.  However, Python seems to run out of steam
  when it's used 2 GB.  This is true of both 2.5 and 2.6.  What gives?
  Is this a Python issue?
 

 Yes - the official Python binaries are 32-bit only. I don't know how
 advanced/usable the 64-bit build is, but I am afraid you will have to
 use an unofficial build or build it yourself.

 I don't know if the following can help you:


 http://developer.apple.com/documentation/Darwin/Conceptual/64bitPorting/intro/intro.html#//apple_ref/doc/uid/TP40001064-CH205-TPXREF101


There was a thread about this back when... Here it is:
http://thread.gmane.org/gmane.comp.python.numeric.general/22353
Note Michael Abshoff's directions on building 64-bit python on the mac.

Chuck


Re: [Numpy-discussion] Out-of-RAM FFTs

2009-04-01 Thread Matthieu Brucher
Hi,

In any case, the OS will have to swap a lot of your data:
- if you use floats (32 bits), the input array alone is 1024**3 * 4 bytes = 4 GB
- this does not fit inside your memory
- it fits even less once you count the fact that an FFT needs at least
one more array as large, so at least 8 GB.

So you should, in every case, split your data and load it on the fly,
by hand (after each FFT, you could swap the array axes, but that may not
be the best approach - the best being having something like 16 GB of RAM
for floats).
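
For what it's worth, that splitting can be sketched with numpy alone,
since a 3D FFT factors into independent 1D FFTs along each axis: do a
2D FFT over each stack of xy-planes, then 1D FFTs along the remaining
axis, reading and writing one slab of a memory-mapped file at a time.
A rough, untested sketch - the file name, dtype and slab size are
illustrative assumptions, and the 4 GB mapping still needs a 64-bit
build:

import numpy as np

N = 1024
CHUNK = 32  # planes held in RAM at once; tune to available memory

# complex64 cube already stored on disk (4 GB)
data = np.memmap('cube.dat', dtype=np.complex64, mode='r+',
                 shape=(N, N, N))

# pass 1: 2D FFT of each xy-plane, CHUNK planes at a time
for i in range(0, N, CHUNK):
    slab = np.array(data[i:i + CHUNK])        # copy slab into RAM
    data[i:i + CHUNK] = np.fft.fftn(slab, axes=(1, 2))

# pass 2: 1D FFT along the first axis, one yz-slab at a time
# (strided reads, so slow on disk, but memory use stays bounded)
for j in range(0, N, CHUNK):
    slab = np.array(data[:, j:j + CHUNK, :])
    data[:, j:j + CHUNK, :] = np.fft.fft(slab, axis=0)

data.flush()

The result should match what np.fft.fftn(data) would give, just
computed two axes at a time.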

Matthieu

2009/4/1 Greg Novak no...@ucolick.org:
 Hello,
 I'd like to do an FFT of a moderately large 3D cube, 1024^3.  Looking
 at the run-time of smaller arrays, this is not a problem in terms of
 compute time, but the array doesn't fit in memory.  So, several
 questions:

 1) Numerical Recipes has an out-of-memory FFT algorithm, but looking
 through the numpy and scipy docs and modules, I didn't find a function
 that does the same thing.  Did I miss it?  Should I get to work typing
 it in?
 2) I had high hopes for just memory-mapping the large array and
 passing it to the standard fft function.  However, the memory-mapped
 region must fit into the address space, and I don't seem to be able to
 use more than 2 GB at a time.  So memory mapping doesn't seem to help
 me at all.

 This last issue leads to another series of things that puzzle me.  I
 have an iMac running OS X 10.5 with an Intel Core 2 duo processor and
 4 GB of memory.  As far as I've learned, the processor is 64 bit, the
 operating system is 64 bit, so I should be able to happily memory-map
 my entire disk if I want.  However, Python seems to run out of steam
 when it's used 2 GB.  This is true of both 2.5 and 2.6.  What gives?
 Is this a Python issue?

 Thanks,
 Greg




-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] Out-of-RAM FFTs

2009-04-01 Thread Christopher Barker
Greg Novak wrote:
 This last issue leads to another series of things that puzzle me.  I
 have an iMac running OS X 10.5 with an Intel Core 2 duo processor and
 4 GB of memory.  As far as I've learned, the processor is 64 bit, the
 operating system is 64 bit, so I should be able to happily memory-map
 my entire disk if I want.  However, Python seems to run out of steam
 when it's used 2 GB.  This is true of both 2.5 and 2.6.  What gives?
 Is this a Python issue?

Not python per se, but yes, the standard builds of Python on OS-X are 
all 32 bit.

I'm pretty sure that python2.6 builds successfully as 64-bit on OS-X -- 
check the archives and/or send a note to the pythonmac list:

http://mail.python.org/mailman/listinfo/pythonmac-sig

to get more info.

You will need to build python and all the extension packages you need 
to make it work, though.

macports is worth checking out -- I think it can do 64 bit builds.

-Chris




-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] trouble building docs with sphinx-0.6.1

2009-04-01 Thread Pauli Virtanen
Wed, 01 Apr 2009 11:48:59 -0400, Darren Dale wrote:
 This morning I upgraded to sphinx-0.6.1, hoping to take advantage of all
 the recent work that has been done to clean up and consolidate the web
 of sphinx extensions. I'm seeing segfaults when I try to build my own
 docs, or the h5py docs. I tried building the numpy documentation after
 applying the attached patch. After building html, updating the
 environment, reading sources, I see a slew of warnings and errors,
 followed by:
[clip]

It was an incompatibility of Numpy's autosummary extension and 
Sphinx >= 0.6. It should now be fixed in Numpy trunk.

-- 
Pauli Virtanen



Re: [Numpy-discussion] trouble building docs with sphinx-0.6.1

2009-04-01 Thread Darren Dale
On Wed, Apr 1, 2009 at 4:40 PM, Pauli Virtanen p...@iki.fi wrote:

 Wed, 01 Apr 2009 11:48:59 -0400, Darren Dale wrote:
  This morning I upgraded to sphinx-0.6.1, hoping to take advantage of all
  the recent work that has been done to clean up and consolidate the web
  of sphinx extensions. I'm seeing segfaults when I try to build my own
  docs, or the h5py docs. I tried building the numpy documentation after
  applying the attached patch. After building html, updating the
  environment, reading sources, I see a slew of warnings and errors,
  followed by:
 [clip]

 It was an incompatibility of Numpy's autosummary extension and
 Sphinx >= 0.6. It should now be fixed in Numpy trunk.


Thanks Pauli. I noticed some bad indentation in testing.decorators, maybe
this patch could be considered:

Index: numpy/testing/decorators.py
===================================================================
--- numpy/testing/decorators.py (revision 6831)
+++ numpy/testing/decorators.py (working copy)
@@ -1,4 +1,5 @@
-"""Decorators for labeling test objects
+"""
+Decorators for labeling test objects
 
 Decorators that merely return a modified version of the original
 function object are straightforward.  Decorators that return a new
@@ -11,7 +12,8 @@
 """
 
 def slow(t):
-    """Labels a test as 'slow'.
+    """
+    Labels a test as 'slow'.
 
     The exact definition of a slow test is obviously both subjective and
     hardware-dependent, but in general any individual test that requires more
@@ -22,7 +24,8 @@
     return t
 
 def setastest(tf=True):
-    ''' Signals to nose that this function is or is not a test
+    '''
+    Signals to nose that this function is or is not a test
 
     Parameters
     ----------
@@ -47,7 +50,8 @@
     return set_test
 
 def skipif(skip_condition, msg=None):
-    ''' Make function raise SkipTest exception if skip_condition is true
+    '''
+    Make function raise SkipTest exception if skip_condition is true
 
     Parameters
     ----------
@@ -59,12 +63,12 @@
     msg : string
         Message to give on raising a SkipTest exception
 
-   Returns
-   -------
-   decorator : function
-       Decorator, which, when applied to a function, causes SkipTest
-       to be raised when the skip_condition was True, and the function
-       to be called normally otherwise.
+    Returns
+    -------
+    decorator : function
+        Decorator, which, when applied to a function, causes SkipTest
+        to be raised when the skip_condition was True, and the function
+        to be called normally otherwise.
 
     Notes
     -----
@@ -86,9 +90,9 @@
 
         def get_msg(func,msg=None):
             """Skip message with information about function being skipped."""
-            if msg is None: 
+            if msg is None:
                 out = 'Test skipped due to test condition'
-            else: 
+            else:
                 out = '\n'+msg
 
             return "Skipping test: %s%s" % (func.__name__,out)
@@ -115,32 +119,33 @@
             skipper = skipper_gen
         else:
             skipper = skipper_func
-        
+
         return nose.tools.make_decorator(f)(skipper)
 
     return skip_decorator
 
 def knownfailureif(fail_condition, msg=None):
-    ''' Make function raise KnownFailureTest exception if fail_condition is true
+    '''
+    Make function raise KnownFailureTest exception if fail_condition is true
 
     Parameters
     ----------
     fail_condition : bool or callable.
         Flag to determine whether to mark test as known failure (True)
         or not (False).  If the condition is a callable, it is used at
-        runtime to dynamically make the decision.  This is useful for 
+        runtime to dynamically make the decision.  This is useful for
         tests that may require costly imports, to delay the cost
        until the test suite is actually executed.
     msg : string
        Message to give on raising a KnownFailureTest exception
 
-   Returns
-   -------
-   decorator : function
-       Decorator, which, when applied to a function, causes SkipTest
-       to be raised when the skip_condition was True, and the function
-       to be called normally otherwise.
+    Returns
+    -------
+    decorator : function
+        Decorator, which, when applied to a function, causes SkipTest
+        to be raised when the skip_condition was True, and the function
+        to be called normally otherwise.
 
     Notes
     -----


Re: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer

2009-04-01 Thread Christopher Barker
David Cournapeau wrote:
 Christopher Barker wrote:
 It does, but we don't need a binary installer for a python that doesn't 
 have a binary installer.
 
 Yes, not now - but I would prefer to avoid having to change the process
 again when the time comes. It may not look like it, but getting a
 process which works well on all platforms, including windows, to work
 properly took me several days.

I'm not surprised it took that long -- that sounds short to me!

Anyway, if there are new python builds that are 64-bit (quad?) you won't 
have to change much -- only make sure that the libs you are linking to 
are 64 bit. I suppose you could try to get a quad-universal gfortran.a 
now, but I'd wait till you need it.

 However, I'm not sure you need to do what you're saying here. I imagine 
 this workflow:

 set up a virtualenv for, say numpy x.y.rc-z

 play around with it, get everything to build, etc. with plain old 
 setup.py build, setup.py install, etc.

 Once you are happy, run:

 /Library/Frameworks/Python.framework/Versions/2.5/bin/bdist_mpkg
   
 
 This means building the same thing twice.

Does it? I just did a test:

I set up a virtualenv with nothing in it.

I used that environment to do:

setup.py build
setup.py install

and got an apparently working numpy in my virtualenv.

Then I ran:

/Library/Frameworks/Python.framework/Versions/2.5/bin/bdist_mpkg

It did a lot, but I don't THINK it re-compiled everything. I think the 
trick is that everything it built was still in the build dir -- and it is 
the same python, even though it's not living in the same place.

I got what seems to be a functioning Universal Installer for the 
python.org python.


Having said that, it looks like it may be very easy to hack a package 
built in the virtualenv to install in the right place. In:

dist/numpy-1.3.0rc1-py2.5-macosx10.4.mpkg/Contents/Packages/

there are two mpkgs. In each of those, there is:

Contents/Info.plist

which is an xml file. In there, there is:

<key>IFPkgFlagDefaultLocation</key>

which should be set to:

<string>/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/</string>

If it was built in a virtualenv, that would be the virtualenv path.

It's a hack, but you could post-process the mpkg to change that value 
after building.
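
If anyone wants to script that, the stdlib plistlib (shipped with the
OS X pythons) can do the rewrite. A sketch - the inner .pkg name below
is made up, so substitute whatever actually sits in Contents/Packages:

import plistlib

target = ('/Library/Frameworks/Python.framework/Versions/2.5/'
          'lib/python2.5/site-packages/')
# hypothetical inner package path - adjust to the real name
info = ('dist/numpy-1.3.0rc1-py2.5-macosx10.4.mpkg/Contents/Packages/'
        'numpy-py2.5.pkg/Contents/Info.plist')

pl = plistlib.readPlist(info)             # parse the XML plist
pl['IFPkgFlagDefaultLocation'] = target   # point it at site-packages
plistlib.writePlist(pl, info)             # write it back in place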

 The docs are included in the .dmg, and yes, the doc needs to be built
 from the same installation (or more exactly the same source).

In my tests, that seems to work fine.

 True. In that case we could put the dylib somewhere obscure:

 /usr/local/lib/scipy1.6/lib/
   
 Hm, that's strange - why /usr/local/lib? It is outside the scipy
 installation.

I'm not sure, really, except that that's where Robin Dunn puts stuff for 
wxPython -- I think one reason may be that he can then point to it from 
more than one Python installation -- for instance Apple's and 
python.org's. And it could help with building from a virtualenv.

 or even:

 /Library/Frameworks/Python.framework/Versions/2.5/lib/
   
 That's potentially dangerous: since this directory is likely to be in
 LIBDIR, it means libgfortran will be taken there or from /usr/local/lib
 if the user builds numpy/scipy after installing numpy. If it is
 incompatible with the user gfortran, it will lead to weird issues, hard
 to debug.

showing what a pain all of this is! Of course, you could put it in:

/Library/Frameworks/Python.framework/Versions/2.5/lib/site_packages/scipy/lib/

or something that only scipy would know about.


 If we install something like libgfortran, it should be installed
 privately - but dynamically linking against private libraries is hard,
 because that's very platform dependent

yup -- on the Mac, it could work well, 'cause the paths to the libs are 
hard-coded when linked, so you WILL get the right one -- if it's there! 
Though this could lead to trouble when you want to build from a 
virtualenv, and install to the system location. macholib gives you 
tools to re-write the locations, but someone would have to write that code.

 gfortran hello.f -> a.out is 8 kb
 gfortran hello.f -static-libgfortran -> a.out is 130 kb
 libgfortran.a -> ~ 1.3Mb


 And thinking about it: mac os x rather encourages big binaries - fat
 binaries - so I am not sure it is a big concern.

so I guess we're back to static linking -- I guess that's why it's 
generally recommended for distributing binaries.

 Also, would it break anything if the libgfortran installed were properly 
 versioned:
   libgfortran.a.b.c
 
 versioned libraries only make sense for shared libraries, I think.

right -- I meant for the dynamic lib option.

 On
 Linux, the static library is not even publicly available (it is in
 /usr/lib/gcc/4.3.3). I wonder whether the mac os x gfortran binary did
 not make a mistake there, actually.

where are you getting that? I'm about to go on vacation, but maybe I 
could try a few things...

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov

Re: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer

2009-04-01 Thread Christopher Barker
Christopher Barker wrote:
 Anyway, if there are new python builds that are 64-bit (quad?) you won't 
 have to change much -- only make sure that the libs you are linking to 
 are 64 bit. I suppose you could try to get a quad-universal gfortran.a 
 now,

Actually, it looks like the binary at:

http://r.research.att.com/tools/

Is already a quad-universal build.

-Chris




-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


[Numpy-discussion] GSoC Proposals Due on the 3rd

2009-04-01 Thread Stéfan van der Walt
Hi all

Students interested in working on NumPy or SciPy for GSoC2009, please
note that the deadline for proposals is the 3rd of April.

http://socghop.appspot.com/document/show/program/google/gsoc2009/userguide#depth_studentapply

Let's keep those applications coming!

Cheers
Stéfan


Re: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer

2009-04-01 Thread David Cournapeau
Christopher Barker wrote:

 I'm not surprised it took that long -- that sounds short to me!

 Anyway, if there are new python builds that are 64-bit (quad?) you won't 
 have to change much -- only make sure that the libs you are linking to 
 are 64 bit. I suppose you could try to get a quad-universal gfortran.a 
 now, but I'd wait till you need it.
   

My main concern was about making sure I always built against the same
python interpreter. For now, I have hardcoded the python executable to
use for the mpkg, but that's not good enough.
 It did a lot, but I don't THINK it re-compiled everything. I think the 
 trick is that everything it built was still in the build dir -- and it is 
 the same python, even though it's not living in the same place.
   

Well, yes, it could be a totally different, incompatible python and it
would still do as you said - distutils cannot be trusted at all to do
the right thing here; it has no real dependency system. The problem is
to make sure that bdist_mpkg and the numpy installed in the virtualenv
are built against the same python binary:
- it makes sure the build is actually compatible
- it also gives a sanity check on the 'runnability' of the binary - at
least that numpy can be imported (for the doc); by building from
scratch, I lose this.

Of course, the real solution would be to automatically test the built
numpy - I unfortunately did not have the time to do this correctly.
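
In the meantime, a crude guard at the top of the release script could
at least catch a mismatched interpreter - an illustrative sketch,
assuming the classic virtualenv (which records the parent interpreter
in sys.real_prefix):

import sys

# the python the .mpkg is meant to target (assumption: python.org 2.5)
EXPECTED = '/Library/Frameworks/Python.framework/Versions/2.5'

# inside a virtualenv, sys.real_prefix is the interpreter the env was
# created from; outside one, fall back to sys.prefix itself
base = getattr(sys, 'real_prefix', sys.prefix)
if not base.startswith(EXPECTED):
    raise SystemExit('not building against the python.org 2.5 '
                     'framework: %r' % base)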

 <key>IFPkgFlagDefaultLocation</key>

 which should be set to:

 <string>/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/</string>

 If it was built in a virtualenv, that would be the virtualenv path.

 It's a hack, but you could post-process the mpkg to change that value 
 after building.
   

This has the same problem - it works as long as the virtualenv python
and the python targeted by the binary installer are the same.


 showing what a pain all of this is! Of course, you could put it in:

 /Library/Frameworks/Python.framework/Versions/2.5/lib/site_packages/scipy/lib/

 or something that only scipy would know about.
   

That's the only correct solution I can see, yes.

 On
 Linux, the static library is not even publicly available (it is in
 /usr/lib/gcc/4.3.3). I wonder whether the mac os x gfortran binary did
 not make a mistake there, actually.
 

 where are you getting that? I'm about to go on vacation, but maybe I 
 could try a few things...
   

On linux, libgfortran.a is /usr/lib/gcc/i486-linux-gnu/4.3/libgfortran.a
- this is private.
On mac OS X, it is in /usr/local/lib - this is public.

Exactly the kind of 'small' thing which ends up causing quite a
headache when you care about reliability and repeatability.

As a related problem, I am not a big fan of the Apple way of building
fat binaries; I much prefer the approach of one build per arch, merging
the binaries after the build with lipo. This is more compatible with all
the autoconf projects out there - and it means using/updating the
compilers is easy (building multi-arch binaries the 'apple' way is
really a pain - the scripts used by R to build the universal gfortran
are quite complicated). I think the above problem would not have
happened with a pristine gfortran built from sources.

cheers,

David