[Numpy-discussion] testing

2016-09-26 Thread Charles R Harris
Testing if this gets posted... Chuck


[Numpy-discussion] Testing warnings

2016-01-26 Thread Sebastian Berg
Hi all,

so I have been thinking about this a little more, and I do not think
there is a truly nice solution to the Python bug
http://bugs.python.org/issue4180 (it does not cause problems on newer
Pythons).

However, I have been so annoyed by trying to test FutureWarnings or
DeprecationWarnings in the past that I want *some* improvement. You can
do quite a lot by adding some new features, but there are also some
limitations.

I think that we must be able to:

 o Filter out warnings at the global test-run level.
 o Easily find, during development, all warnings that are not explicitly
   filtered.
 o Test any (almost any?) warning, even those that would be filtered
   globally.

The next set of considerations is whether we want to:

 o Be able to *print* warnings during test runs (in release mode)?
 o Avoid repeating the global warning filters when filtering additional
   warnings in an individual test?
 o Be able to count certain warnings but ignore others (not the globally
   filtered ones, though)?
 o Filter warnings by module? (might be hard or impossible)

And one further option:
 o Could we accept that testing warnings is *only* reliably possible on
   Python 3.4+? That would, however, mean having to fully *skip* tests
   that check that specific warnings are given.

The first set of points can be achieved by setting all warnings to errors
at the global level and making the local tests as specific as possible. I
could go ahead with that; there will likely be some ugly spots, but it
should work, and it does not require any fancy new hacks.

As far as I can see, the second set of points requires new features such as
those in my current PR, so I want to know whether we can/want to go ahead
with that kind of idea [1].

Personally, I cannot accept not providing the first set of points.

From the second set, I would like some of the points (I am not sure about
printing warnings in release mode), and skipping tests on Python 2 seems to
me even worse than ugly hacks.
Getting there is a bit uglier (it requires a new context manager, as far as
I can see), and I tend to think it is worth the trouble, but I don't think
it is vital.

In other words, I don't care too much about those points, but I want to
get somewhere, because I have been bitten often enough by the annoying and,
in my opinion, simply unacceptable (on Python 2) use of "ignore" warning
filters in tests. The current state makes it almost impossible to find
warnings raised by our own tests: in the best case they get fixed much,
much later, when the change actually occurs; in the worst case we never
find our own real bugs.

So where to go? :)

- Sebastian


[1] I still need to fix the module filtering point; module filtering does
not work reliably at the moment. I think it can be fixed at least 99.5% of
the way, but the fix is not too pretty (not that the user should notice).





[Numpy-discussion] testing

2015-09-18 Thread Chip Parker
Let's see if the instructions at
http://www.jamesh.id.au/articles/mailman-spamassassin/, which were
written in 2003, are still biting us.

-- 
Chip Parker
DevOps Engineer


[Numpy-discussion] testing

2015-09-18 Thread Chip Parker
This would be an email.

-- 
Chip Parker
DevOps Engineer


[Numpy-discussion] testing numpy with downstream testsuites (was: Re: Notes from the numpy dev meeting at scipy 2015)

2015-08-26 Thread Nathaniel Smith
[Popping this off to its own thread to try and keep things easier to follow]

On Tue, Aug 25, 2015 at 9:52 AM, Nathan Goldbaum nathan12...@gmail.com wrote:
   - Lament: it would be really nice if we could get more people to
 test our beta releases, because in practice right now 1.x.0 ends
 up being where we actually discover all the bugs, and 1.x.1 is
 where it actually becomes usable. Which sucks, and makes it
 difficult to have a solid policy about what counts as a
 regression, etc. Is there anything we can do about this?

 Just a note in here - have you all thought about running the test suites for
 downstream projects as part of the numpy test suite?

I don't think it came up, but it's not a bad idea! The main problems I
can foresee are:
1) Since we don't know the downstream code, it can be hard to
interpret test suite failures. OTOH for changes we're uncertain of we
already do often end up running some downstream test suites by hand,
so it can only be an improvement on that...
2) Sometimes everyone including downstream agrees that breaking
something is actually a good idea and they should just deal, but what
do you do then?

These both seem solvable though.

I guess a good strategy would be to compile a travis-compatible wheel
of $PACKAGE version $latest-stable against numpy 1.x, and then in the
1.(x+1) development period numpy would have an additional travis run
which, instead of running the numpy test suite, instead does:
  pip install .
  pip install $PACKAGE-$latest-stable.whl
  python -c 'import package; package.test()' # adjust as necessary
? Where $PACKAGE is something like scipy / pandas / astropy / ...
matplotlib would be nice but maybe impractical...?
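As a rough sketch, the extra run could boil down to a small script like the
following (hypothetical; the wheel filename and package name are only
examples of what a prebuilt $PACKAGE wheel might look like):

    import subprocess, sys

    WHEEL = "scipy-0.16.0-cp27-none-linux_x86_64.whl"   # assumed prebuilt downstream wheel
    PACKAGE = "scipy"

    # install this numpy checkout, then the pinned downstream build, then run its tests
    subprocess.check_call([sys.executable, "-m", "pip", "install", "."])
    subprocess.check_call([sys.executable, "-m", "pip", "install", WHEEL])
    subprocess.check_call([sys.executable, "-c",
                           "import %s; %s.test()" % (PACKAGE, PACKAGE)])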

Maybe someone else will have objections but it seems like a reasonable
idea to me. Want to put together a PR? Aside from fame and fortune
and our earnest appreciation, your reward is you get to make sure that
the packages you care about are included so that we break them less
often in the future ;-).

-n

-- 
Nathaniel J. Smith -- http://vorpus.org


Re: [Numpy-discussion] testing numpy with downstream testsuites (was: Re: Notes from the numpy dev meeting at scipy 2015)

2015-08-26 Thread Jens Nielsen
As a Matplotlib developer I try to test our code manually with all betas
and RCs of new numpy versions.
(I have already pushed fixes for a few new deprecation warnings with
1.10beta1, which otherwise passes our test suite.
I forgot to report this back since there were no issues to report.)
However, we could actually do this automatically if numpy betas were
uploaded as prereleases on pypi.

We are already using Travis's allow-failure mode to test Python 3.5 betas
and RCs, along with all our dependencies installed with `pip --pre`:
https://pip.pypa.io/en/latest/reference/pip_install.html#pre-release-versions

Putting prereleases on pypi would thus automate most of the testing of new
Numpy versions for us.

Best
Jens

On Wed, Aug 26, 2015 at 07:59, Nathaniel Smith n...@pobox.com wrote:

 [Popping this off to its own thread to try and keep things easier to
 follow]

 On Tue, Aug 25, 2015 at 9:52 AM, Nathan Goldbaum nathan12...@gmail.com
 wrote:
- Lament: it would be really nice if we could get more people to
  test our beta releases, because in practice right now 1.x.0 ends
  up being where we actually the discover all the bugs, and 1.x.1 is
  where it actually becomes usable. Which sucks, and makes it
  difficult to have a solid policy about what counts as a
  regression, etc. Is there anything we can do about this?
 
  Just a note in here - have you all thought about running the test suites
 for
  downstream projects as part of the numpy test suite?

 I don't think it came up, but it's not a bad idea! The main problems I
 can foresee are:
 1) Since we don't know the downstream code, it can be hard to
 interpret test suite failures. OTOH for changes we're uncertain of we
 already do often end up running some downstream test suites by hand,
 so it can only be an improvement on that...
 2) Sometimes everyone including downstream agrees that breaking
 something is actually a good idea and they should just deal, but what
 do you do then?

 These both seem solvable though.

 I guess a good strategy would be to compile a travis-compatible wheel
 of $PACKAGE version $latest-stable against numpy 1.x, and then in the
 1.(x+1) development period numpy would have an additional travis run
 which, instead of running the numpy test suite, instead does:
   pip install .
   pip install $PACKAGE-$latest-stable.whl
   python -c 'import package; package.test()' # adjust as necessary
 ? Where $PACKAGE is something like scipy / pandas / astropy / ...
 matplotlib would be nice but maybe impractical...?

 Maybe someone else will have objections but it seems like a reasonable
 idea to me. Want to put together a PR? Asides from fame and fortune
 and our earnest appreciation, your reward is you get to make sure that
 the packages you care about are included so that we break them less
 often in the future ;-).

 -n

 --
 Nathaniel J. Smith -- http://vorpus.org


Re: [Numpy-discussion] testing numpy with downstream testsuites (was: Re: Notes from the numpy dev meeting at scipy 2015)

2015-08-26 Thread Matthew Brett
Hi,

On Wed, Aug 26, 2015 at 7:59 AM, Nathaniel Smith n...@pobox.com wrote:
 [Popping this off to its own thread to try and keep things easier to follow]

 On Tue, Aug 25, 2015 at 9:52 AM, Nathan Goldbaum nathan12...@gmail.com 
 wrote:
   - Lament: it would be really nice if we could get more people to
 test our beta releases, because in practice right now 1.x.0 ends
 up being where we actually the discover all the bugs, and 1.x.1 is
 where it actually becomes usable. Which sucks, and makes it
 difficult to have a solid policy about what counts as a
 regression, etc. Is there anything we can do about this?

 Just a note in here - have you all thought about running the test suites for
 downstream projects as part of the numpy test suite?

 I don't think it came up, but it's not a bad idea! The main problems I
 can foresee are:
 1) Since we don't know the downstream code, it can be hard to
 interpret test suite failures. OTOH for changes we're uncertain of we
 already do often end up running some downstream test suites by hand,
 so it can only be an improvement on that...
 2) Sometimes everyone including downstream agrees that breaking
 something is actually a good idea and they should just deal, but what
 do you do then?

 These both seem solvable though.

 I guess a good strategy would be to compile a travis-compatible wheel
 of $PACKAGE version $latest-stable against numpy 1.x, and then in the
 1.(x+1) development period numpy would have an additional travis run
 which, instead of running the numpy test suite, instead does:
   pip install .
   pip install $PACKAGE-$latest-stable.whl
   python -c 'import package; package.test()' # adjust as necessary
 ? Where $PACKAGE is something like scipy / pandas / astropy / ...
 matplotlib would be nice but maybe impractical...?

 Maybe someone else will have objections but it seems like a reasonable
 idea to me. Want to put together a PR? Asides from fame and fortune
 and our earnest appreciation, your reward is you get to make sure that
 the packages you care about are included so that we break them less
 often in the future ;-).

One simple way to get going would be for the release manager to
trigger a build from this repo:

https://github.com/matthew-brett/travis-wheel-builder

This build would then upload a wheel to:

http://travis-wheels.scikit-image.org/

The upstream packages would have a test grid which included an entry
with something like:

pip install -f http://travis-wheels.scikit-image.org --pre numpy

Cheers,

Matthew


Re: [Numpy-discussion] testing numpy with downstream testsuites (was: Re: Notes from the numpy dev meeting at scipy 2015)

2015-08-26 Thread Jeff Reback
Pandas has for quite a while had a travis build where we install numpy
master and then run our test suite.

e.g. here: https://travis-ci.org/pydata/pandas/jobs/77256007

Over the last year this has uncovered a couple of changes which affected
pandas (mainly using something deprecated which was turned off :)

This was pretty simple to set up. Note that it adds 2+ minutes to the
build (though our builds take a while anyhow, so it's not a big deal).



On Wed, Aug 26, 2015 at 7:14 AM, Matthew Brett matthew.br...@gmail.com
wrote:

 Hi,

 On Wed, Aug 26, 2015 at 7:59 AM, Nathaniel Smith n...@pobox.com wrote:
  [Popping this off to its own thread to try and keep things easier to
 follow]
 
  On Tue, Aug 25, 2015 at 9:52 AM, Nathan Goldbaum nathan12...@gmail.com
 wrote:
- Lament: it would be really nice if we could get more people to
  test our beta releases, because in practice right now 1.x.0 ends
  up being where we actually the discover all the bugs, and 1.x.1 is
  where it actually becomes usable. Which sucks, and makes it
  difficult to have a solid policy about what counts as a
  regression, etc. Is there anything we can do about this?
 
  Just a note in here - have you all thought about running the test
 suites for
  downstream projects as part of the numpy test suite?
 
  I don't think it came up, but it's not a bad idea! The main problems I
  can foresee are:
  1) Since we don't know the downstream code, it can be hard to
  interpret test suite failures. OTOH for changes we're uncertain of we
  already do often end up running some downstream test suites by hand,
  so it can only be an improvement on that...
  2) Sometimes everyone including downstream agrees that breaking
  something is actually a good idea and they should just deal, but what
  do you do then?
 
  These both seem solvable though.
 
  I guess a good strategy would be to compile a travis-compatible wheel
  of $PACKAGE version $latest-stable against numpy 1.x, and then in the
  1.(x+1) development period numpy would have an additional travis run
  which, instead of running the numpy test suite, instead does:
pip install .
pip install $PACKAGE-$latest-stable.whl
python -c 'import package; package.test()' # adjust as necessary
  ? Where $PACKAGE is something like scipy / pandas / astropy / ...
  matplotlib would be nice but maybe impractical...?
 
  Maybe someone else will have objections but it seems like a reasonable
  idea to me. Want to put together a PR? Asides from fame and fortune
  and our earnest appreciation, your reward is you get to make sure that
  the packages you care about are included so that we break them less
  often in the future ;-).

 One simple way to get going would be for the release manager to
 trigger a build from this repo:

 https://github.com/matthew-brett/travis-wheel-builder

 This build would then upload a wheel to:

 http://travis-wheels.scikit-image.org/

 The upstream packages would have a test grid which included an entry
 with something like:

 pip install -f http://travis-wheels.scikit-image.org --pre numpy

 Cheers,

 Matthew


Re: [Numpy-discussion] Testing of scipy

2015-01-30 Thread Carl Kleffner
Hi Colin.

this is an interesting test with different hardware.

As a summary:

- Python-2.7 amd64
- numpy-1.9.1.openblas:  OK
- scipy-0.15.1.openblas: 2 errors, 11 failures
- CPU: AMD A8-5600K APU (Piledriver)

scipy errors and failures due to piledriver:

(1) ERROR: test_improvement (test_quadpack.TestCtypesQuad)
 WindowsError: [Error 193] %1 is not a valid Win32 application

(2) ERROR: test_typical (test_quadpack.TestCtypesQuad)
 WindowsError: [Error 193] %1 is not a valid Win32 application

(3) FAIL: test_interpolate.TestInterp1D.test_circular_refs
 ReferenceError: Remaining reference(s) to object

(4) FAIL: test__gcutils.test_assert_deallocated
 ReferenceError: Remaining reference(s) to object

other failures are known failures due to mingw-w64 / openblas build.

(1) and (2) are a problem with ctypes.util.find_msvcrt(); this method seems
to be buggy. Not a scipy problem. Maybe the test could be enhanced (see the
snippet below).

(3) and (4) are a problem due to garbage collection. No idea why.

Maybe you can file a bug for (1) ... (4).
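For reference, one way to poke at (1) and (2) outside the scipy test suite
(a sketch only; it assumes a Windows/mingw-w64 setup where find_msvcrt()
returns a name that cannot actually be loaded):

    import ctypes
    import ctypes.util

    name = ctypes.util.find_msvcrt()          # None on non-MSVC builds, a DLL name otherwise
    print("find_msvcrt() -> %r" % (name,))
    if name:
        # may raise "[Error 193] %1 is not a valid Win32 application" if the
        # returned name is not a loadable 64-bit DLL
        ctypes.CDLL(name)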

Carl

2015-01-29 19:36 GMT+01:00 cjw c...@ncf.ca:

  Carl,

 I have already sent the test result for numpy.  Here is the test result
 for scipy.

 I hope that it helps.

 Colin W.

 --
 *** Python 2.7.9 (default, Dec 10 2014, 12:28:03) [MSC v.1500 64 bit
 (AMD64)] on win32. ***
 
 [Dbg]
 C:\Python27\lib\site-packages\numpy\core\__init__.py:6: Warning: Numpy
 64bit experimental build with Mingw-w64 and OpenBlas.
   from . import multiarray
 Running unit tests for scipy
 NumPy version 1.9.1
 NumPy is installed in C:\Python27\lib\site-packages\numpy
 SciPy version 0.15.1
 SciPy is installed in C:\Python27\lib\site-packages\scipy
 Python version 2.7.9 (default, Dec 10 2014, 12:28:03) [MSC v.1500 64 bit
 (AMD64)]
 nose version 1.3.4
 C:\Python27\lib\site-packages\numpy\lib\utils.py:95: DeprecationWarning:
 `scipy.lib.blas` is deprecated, use `scipy.linalg.blas` instead!
   warnings.warn(depdoc, DeprecationWarning)
 C:\Python27\lib\site-packages\numpy\lib\utils.py:95: DeprecationWarning:
 `scipy.lib.lapack` is deprecated, use `scipy.linalg.lapack` instead!
   warnings.warn(depdoc, DeprecationWarning)
 C:\Python27\lib\site-packages\numpy\lib\utils.py:95: DeprecationWarning:
 `scipy.weave` is deprecated, use `weave` instead!
   warnings.warn(depdoc, DeprecationWarning)

[Numpy-discussion] Testing

2013-10-27 Thread Neil Girdhar
How do I test a patch that I've made locally?  I can't seem to import numpy
locally:

Error importing numpy: you should not try to import numpy from
its source directory; please exit the numpy source tree, and relaunch
your python interpreter from there.


Re: [Numpy-discussion] Testing

2013-10-27 Thread Nathaniel Smith
On Sun, Oct 27, 2013 at 10:59 PM, Neil Girdhar mistersh...@gmail.com wrote:
 How do I test a patch that I've made locally?  I can't seem to import numpy
 locally:

 Error importing numpy: you should not try to import numpy from
 its source directory; please exit the numpy source tree, and
 relaunch
 your python intepreter from there.

python runtests.py --help

-n


Re: [Numpy-discussion] Testing

2013-10-27 Thread Charles R Harris
On Sun, Oct 27, 2013 at 4:59 PM, Neil Girdhar mistersh...@gmail.com wrote:

 How do I test a patch that I've made locally?  I can't seem to import
 numpy locally:

 Error importing numpy: you should not try to import numpy from
 its source directory; please exit the numpy source tree, and
 relaunch
 your python intepreter from there.



If you are running current master do

python runtests.py --help

Chuck





Re: [Numpy-discussion] Testing

2013-10-27 Thread Neil Girdhar
Ah, sorry, didn't see that I can do that from runtests!!  Thanks!!


On Sun, Oct 27, 2013 at 7:13 PM, Neil Girdhar mistersh...@gmail.com wrote:

 Since I am trying to add a printoptions context manager, I would like to
 test it.  Should I add tests, or can I somehow use it from an ipython shell?


 On Sun, Oct 27, 2013 at 7:12 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:




 On Sun, Oct 27, 2013 at 4:59 PM, Neil Girdhar mistersh...@gmail.comwrote:

 How do I test a patch that I've made locally?  I can't seem to import
 numpy locally:

 Error importing numpy: you should not try to import numpy from
 its source directory; please exit the numpy source tree, and
 relaunch
 your python intepreter from there.



 If you are running current master do

 python runtests.py --help

 Chuck





Re: [Numpy-discussion] Testing

2013-10-27 Thread Neil Girdhar
Since I am trying to add a printoptions context manager, I would like to
test it.  Should I add tests, or can I somehow use it from an ipython shell?
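For reference, a context manager along those lines can be sketched on top of
the existing get/set_printoptions pair (this is just the idea, not the PR's
actual implementation; NumPy later gained np.printoptions, which does
essentially this):

    import contextlib
    import numpy as np

    @contextlib.contextmanager
    def printoptions(**kwargs):
        old = np.get_printoptions()        # remember the current settings
        np.set_printoptions(**kwargs)
        try:
            yield
        finally:
            np.set_printoptions(**old)     # restore them even if the block raises

    with printoptions(precision=2, suppress=True):
        print(np.array([np.pi, 1e-8]))     # prints [ 3.14  0.  ]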


On Sun, Oct 27, 2013 at 7:12 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:




 On Sun, Oct 27, 2013 at 4:59 PM, Neil Girdhar mistersh...@gmail.comwrote:

 How do I test a patch that I've made locally?  I can't seem to import
 numpy locally:

 Error importing numpy: you should not try to import numpy from
 its source directory; please exit the numpy source tree, and
 relaunch
 your python intepreter from there.



 If you are running current master do

 python runtests.py --help

 Chuck





Re: [Numpy-discussion] testing with amd libm/acml

2012-11-08 Thread Francesc Alted
On 11/7/12 8:41 PM, Neal Becker wrote:
 Would you expect numexpr without MKL to give a significant boost?

Yes.  Have a look at how numexpr's own multi-threaded virtual machine 
compares with numexpr using VML:

http://code.google.com/p/numexpr/wiki/NumexprVML

As it can be seen, the best results are obtained by using the 
multi-threaded VM in numexpr in combination with a single-threaded VML 
engine.  Caution: I did these benchmarks some time ago (couple of 
years?), so it might be that multi-threaded VML would have improved by 
now.  If performance is critical, some experiments should be done first 
so as to find the optimal configuration.

At any rate, VML will let you optimally leverage the SIMD
instructions in the cores, allowing you to compute, for example, exp() in 1
or 2 clock cycles per element (depending on the vector length, the number of
cores in your system and the data precision):

http://software.intel.com/sites/products/documentation/hpc/mkl/vml/functions/exp.html

Pretty amazing.

-- 
Francesc Alted



Re: [Numpy-discussion] testing with amd libm/acml

2012-11-08 Thread Francesc Alted
On 11/8/12 12:35 AM, Chris Barker wrote:
 On Wed, Nov 7, 2012 at 11:41 AM, Neal Becker ndbeck...@gmail.com wrote:
 Would you expect numexpr without MKL to give a significant boost?
 It can, depending on the use case:
   -- It can remove a lot of uneccessary temporary creation.
   -- IIUC, it works on blocks of data at a time, and thus can keep
 things in cache more when working with large data sets.

Well, the temporaries are still created, but the thing is that, by
working with small blocks at a time, these temporaries fit in the CPU cache,
preventing copies into main memory.  I like to call this the 'blocking
technique', as explained in slide 26 (and following) of:

https://python.g-node.org/wiki/_media/starving_cpu/starving-cpu.pdf

A better technique is to reduce the block size to the minimal expression
(1 element), so temporaries are stored in CPU registers instead of
small blocks in cache, hence avoiding copies even in *cache*.  Numba
(https://github.com/numba/numba) follows this approach, which is pretty
optimal, as can be seen in slide 37 of the lecture above.
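A toy version of the blocking idea (not numexpr's actual machinery; the
expression and block size are arbitrary):

    import numpy as np

    def blocked_eval(a, b, out, block=4096):
        # evaluate exp(a * b) chunk by chunk so the temporary stays cache-sized
        for start in range(0, a.size, block):
            sl = slice(start, start + block)
            tmp = a[sl] * b[sl]            # temporary is at most `block` elements long
            np.exp(tmp, out=out[sl])
        return out

    a = np.random.rand(1000000)
    b = np.random.rand(1000000)
    res = blocked_eval(a, b, np.empty_like(a))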

-- It can (optionally) use multiple threads for easy parallelization.

No, the *total* number of cores detected in the system is the default in
numexpr; if you want fewer, you will need to use the
set_num_threads(nthreads) function.  But agreed, sometimes using too
many threads can effectively be counter-productive.

-- 
Francesc Alted



Re: [Numpy-discussion] testing with amd libm/acml

2012-11-08 Thread Dag Sverre Seljebotn
On 11/07/2012 08:41 PM, Neal Becker wrote:
 Would you expect numexpr without MKL to give a significant boost?

If you need higher performance than what numexpr can give without using 
MKL, you could look at code such as this:

https://github.com/herumi/fmath/blob/master/fmath.hpp#L480

But that means going to C (e.g., by wrapping that function in Cython). 
Pay attention to what range you evaluate the function over, though (my eyes
may deceive me, but it seems that the test program only tests arguments
drawn from the standard Gaussian, which is a bit limited...).

Dag Sverre


Re: [Numpy-discussion] testing with amd libm/acml

2012-11-08 Thread Chris Barker
On Thu, Nov 8, 2012 at 2:22 AM, Francesc Alted franc...@continuum.io wrote:

   -- It can remove a lot of uneccessary temporary creation.

 Well, the temporaries are still created, but the thing is that, by
 working with small blocks at a time, these temporaries fit in CPU cache,
 preventing copies into main memory.

hmm -- I thought it was smart enough to remove some unnecessary
temporaries altogether. Shows what I know. But apparently it does,
indeed, avoid creating the full-size temporary arrays.

pretty cool stuff, in any case.

-Chris


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] testing with amd libm/acml

2012-11-08 Thread Francesc Alted
On 11/8/12 1:41 PM, Dag Sverre Seljebotn wrote:
 On 11/07/2012 08:41 PM, Neal Becker wrote:
 Would you expect numexpr without MKL to give a significant boost?
 If you need higher performance than what numexpr can give without using
 MKL, you could look at code such as this:

 https://github.com/herumi/fmath/blob/master/fmath.hpp#L480

Hey, that's cool.  I was a bit disappointed not finding this sort of 
work in open space.  It seems that this lacks threading support, but 
that should be easy to implement by using OpenMP directives.

-- 
Francesc Alted



Re: [Numpy-discussion] testing with amd libm/acml

2012-11-08 Thread Dag Sverre Seljebotn
On 11/08/2012 06:06 PM, Francesc Alted wrote:
 On 11/8/12 1:41 PM, Dag Sverre Seljebotn wrote:
 On 11/07/2012 08:41 PM, Neal Becker wrote:
 Would you expect numexpr without MKL to give a significant boost?
 If you need higher performance than what numexpr can give without using
 MKL, you could look at code such as this:

 https://github.com/herumi/fmath/blob/master/fmath.hpp#L480

 Hey, that's cool.  I was a bit disappointed not finding this sort of
 work in open space.  It seems that this lacks threading support, but
 that should be easy to implement by using OpenMP directives.

IMO this is the wrong place to introduce threading; each thread should 
call expd_v on its chunks. (Which I think is how you said numexpr 
currently uses VML anyway.)

Dag Sverre





Re: [Numpy-discussion] testing with amd libm/acml

2012-11-08 Thread Francesc Alted
On 11/8/12 6:38 PM, Dag Sverre Seljebotn wrote:
 On 11/08/2012 06:06 PM, Francesc Alted wrote:
 On 11/8/12 1:41 PM, Dag Sverre Seljebotn wrote:
 On 11/07/2012 08:41 PM, Neal Becker wrote:
 Would you expect numexpr without MKL to give a significant boost?
 If you need higher performance than what numexpr can give without using
 MKL, you could look at code such as this:

 https://github.com/herumi/fmath/blob/master/fmath.hpp#L480
 Hey, that's cool.  I was a bit disappointed not finding this sort of
 work in open space.  It seems that this lacks threading support, but
 that should be easy to implement by using OpenMP directives.
 IMO this is the wrong place to introduce threading; each thread should
 call expd_v on its chunks. (Which I think is how you said numexpr
 currently uses VML anyway.)

Oh sure, but then you need a blocked engine for performing the 
computations too.  And yes, by default numexpr uses its own threading 
code rather than the existing one in VML (but that can be changed by 
playing with set_num_threads/set_vml_num_threads).  It always struck 
me as a little strange that the internal threading in numexpr was more 
efficient than the VML one, but I suppose this is because the latter is more 
optimized to deal with large blocks instead of the medium-sized (4K) blocks 
used in numexpr.

-- 
Francesc Alted



Re: [Numpy-discussion] testing with amd libm/acml

2012-11-08 Thread Dag Sverre Seljebotn
On 11/08/2012 06:59 PM, Francesc Alted wrote:
 On 11/8/12 6:38 PM, Dag Sverre Seljebotn wrote:
 On 11/08/2012 06:06 PM, Francesc Alted wrote:
 On 11/8/12 1:41 PM, Dag Sverre Seljebotn wrote:
 On 11/07/2012 08:41 PM, Neal Becker wrote:
 Would you expect numexpr without MKL to give a significant boost?
 If you need higher performance than what numexpr can give without using
 MKL, you could look at code such as this:

 https://github.com/herumi/fmath/blob/master/fmath.hpp#L480
 Hey, that's cool.  I was a bit disappointed not finding this sort of
 work in open space.  It seems that this lacks threading support, but
 that should be easy to implement by using OpenMP directives.
 IMO this is the wrong place to introduce threading; each thread should
 call expd_v on its chunks. (Which I think is how you said numexpr
 currently uses VML anyway.)

 Oh sure, but then you need a blocked engine for performing the
 computations too.  And yes, by default numexpr uses its own threading

I just meant that you can use a chunked OpenMP for-loop wherever in your 
code that you call expd_v. A five-line blocked engine, if you like :-)

IMO that's the right location since entering/exiting OpenMP blocks takes 
some time.

 code rather than the existing one in VML (but that can be changed by
 playing with set_num_threads/set_vml_num_threads).  It always stroked to
 me as a little strange that the internal threading in numexpr was more
 efficient than VML one, but I suppose this is because the latter is more
 optimized to deal with large blocks instead of those of medium size (4K)
 in numexpr.

I don't know enough about numexpr to understand this :-)

I guess I just don't see the motivation to use VML threading or why it 
should be faster? If you pass a single 4K block to a threaded VML call 
then I could easily see lots of performance problems: a) 
starting/stopping threads or signalling the threads of a pool is a 
constant overhead per parallel section, b) unless you're very careful 
to only have VML touch the data, and VML always schedules elements in 
the exact same way, you're going to have the cache lines of that 4K 
block shuffled between L1 caches of different cores for different 
operations...

As I said, I'm mostly ignorant about how numexpr works, that's probably 
showing :-)

Dag Sverre


Re: [Numpy-discussion] testing with amd libm/acml

2012-11-08 Thread Dag Sverre Seljebotn
On 11/08/2012 07:55 PM, Dag Sverre Seljebotn wrote:
 On 11/08/2012 06:59 PM, Francesc Alted wrote:
 On 11/8/12 6:38 PM, Dag Sverre Seljebotn wrote:
 On 11/08/2012 06:06 PM, Francesc Alted wrote:
 On 11/8/12 1:41 PM, Dag Sverre Seljebotn wrote:
 On 11/07/2012 08:41 PM, Neal Becker wrote:
 Would you expect numexpr without MKL to give a significant boost?
 If you need higher performance than what numexpr can give without
 using
 MKL, you could look at code such as this:

 https://github.com/herumi/fmath/blob/master/fmath.hpp#L480
 Hey, that's cool.  I was a bit disappointed not finding this sort of
 work in open space.  It seems that this lacks threading support, but
 that should be easy to implement by using OpenMP directives.
 IMO this is the wrong place to introduce threading; each thread should
 call expd_v on its chunks. (Which I think is how you said numexpr
 currently uses VML anyway.)

 Oh sure, but then you need a blocked engine for performing the
 computations too.  And yes, by default numexpr uses its own threading

 I just meant that you can use a chunked OpenMP for-loop wherever in your
 code that you call expd_v. A five-line blocked engine, if you like :-)

 IMO that's the right location since entering/exiting OpenMP blocks takes
 some time.

 code rather than the existing one in VML (but that can be changed by
 playing with set_num_threads/set_vml_num_threads).  It always stroked to
 me as a little strange that the internal threading in numexpr was more
 efficient than VML one, but I suppose this is because the latter is more
 optimized to deal with large blocks instead of those of medium size (4K)
 in numexpr.

 I don't know enough about numexpr to understand this :-)

 I guess I just don't see the motivation to use VML threading or why it
 should be faster? If you pass a single 4K block to a threaded VML call
 then I could easily see lots of performance problems: a)
 starting/stopping threads or signalling the threads of a pool is a
 constant overhead per parallel section, b) unless you're very careful
 to only have VML touch the data, and VML always schedules elements in
 the exact same way, you're going to have the cache lines of that 4K
 block shuffled between L1 caches of different cores for different
 operations...

c) Your effective block size is then 4KB/ncores.

(Unless you scale the block size by ncores).

DS


Re: [Numpy-discussion] testing with amd libm/acml

2012-11-08 Thread Francesc Alted
On 11/8/12 7:55 PM, Dag Sverre Seljebotn wrote:
 On 11/08/2012 06:59 PM, Francesc Alted wrote:
 On 11/8/12 6:38 PM, Dag Sverre Seljebotn wrote:
 On 11/08/2012 06:06 PM, Francesc Alted wrote:
 On 11/8/12 1:41 PM, Dag Sverre Seljebotn wrote:
 On 11/07/2012 08:41 PM, Neal Becker wrote:
 Would you expect numexpr without MKL to give a significant boost?
 If you need higher performance than what numexpr can give without using
 MKL, you could look at code such as this:

 https://github.com/herumi/fmath/blob/master/fmath.hpp#L480
 Hey, that's cool.  I was a bit disappointed not finding this sort of
 work in open space.  It seems that this lacks threading support, but
 that should be easy to implement by using OpenMP directives.
 IMO this is the wrong place to introduce threading; each thread should
 call expd_v on its chunks. (Which I think is how you said numexpr
 currently uses VML anyway.)
 Oh sure, but then you need a blocked engine for performing the
 computations too.  And yes, by default numexpr uses its own threading
 I just meant that you can use a chunked OpenMP for-loop wherever in your
 code that you call expd_v. A five-line blocked engine, if you like :-)

 IMO that's the right location since entering/exiting OpenMP blocks takes
 some time.

Yes, this is precisely what I meant in the first place.

 code rather than the existing one in VML (but that can be changed by
 playing with set_num_threads/set_vml_num_threads).  It always stroked to
 me as a little strange that the internal threading in numexpr was more
 efficient than VML one, but I suppose this is because the latter is more
 optimized to deal with large blocks instead of those of medium size (4K)
 in numexpr.
 I don't know enough about numexpr to understand this :-)

 I guess I just don't see the motivation to use VML threading or why it
 should be faster? If you pass a single 4K block to a threaded VML call
 then I could easily see lots of performance problems: a)
 starting/stopping threads or signalling the threads of a pool is a
 constant overhead per parallel section, b) unless you're very careful
 to only have VML touch the data, and VML always schedules elements in
 the exact same way, you're going to have the cache lines of that 4K
 block shuffled between L1 caches of different cores for different
 operations...

 As I said, I'm mostly ignorant about how numexpr works, that's probably
 showing :-)

No, on the contrary, you rather hit the core of the issue (or part of 
it).  On one hand, VML needs large blocks in order to maximize the 
performance of the pipeline, and on the other hand numexpr tries to 
minimize the block size in order to make temporaries as small as possible 
(so avoiding the use of the higher-level caches).  From this tension 
(and some benchmarking work) the size of 4K was derived (btw, this is the 
number of *elements*, so the size is actually either 16 KB or 32 KB for 
single and double precision respectively).  Incidentally, for 
numexpr with no VML support, the size is reduced to 1K elements (and 
perhaps it could be reduced a bit more, but anyway).

Anyway, this is way too low level to be discussed here, although we can 
continue on the numexpr list if you are interested in more details.

-- 
Francesc Alted



[Numpy-discussion] testing with amd libm/acml

2012-11-07 Thread Neal Becker
I'm trying to do a bit of benchmarking to see if amd libm/acml will help me.

I got an idea that instead of building all of numpy/scipy and all of my custom 
modules against these libraries, I could simply use:

LD_PRELOAD=/opt/amdlibm-3.0.2/lib/dynamic/libamdlibm.so:/opt/acml5.2.0/gfortran64/lib/libacml.so <my program here>

I'm hoping that both numpy and my own dll's then will take advantage of these 
libraries.

Do you think this will work?
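One crude way to find out is to time a transcendental ufunc with and without
the preload and compare (a sketch; the array size and any speedup are
obviously setup-dependent):

    import time
    import numpy as np

    x = np.random.rand(10000000)
    t0 = time.time()
    np.exp(x)
    print("np.exp on 1e7 doubles: %.3f s" % (time.time() - t0))
    # run this once normally and once under the LD_PRELOAD above; if the
    # timing does not change, the preloaded libm is probably not being used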



Re: [Numpy-discussion] testing with amd libm/acml

2012-11-07 Thread David Cournapeau
On Wed, Nov 7, 2012 at 12:35 PM, Neal Becker ndbeck...@gmail.com wrote:
 I'm trying to do a bit of benchmarking to see if amd libm/acml will help me.

 I got an idea that instead of building all of numpy/scipy and all of my custom
 modules against these libraries, I could simply use:

 LD_PRELOAD=/opt/amdlibm-3.0.2/lib/dynamic/libamdlibm.so:/opt/acml5.2.0/gfortran64/lib/libacml.so
 my program here

 I'm hoping that both numpy and my own dll's then will take advantage of these
 libraries.

 Do you think this will work?

Quite unlikely depending on your configuration, because those
libraries are rarely if ever ABI compatible (that's why it is such a
pain to support).

David


Re: [Numpy-discussion] testing with amd libm/acml

2012-11-07 Thread Neal Becker
David Cournapeau wrote:

 On Wed, Nov 7, 2012 at 12:35 PM, Neal Becker ndbeck...@gmail.com wrote:
 I'm trying to do a bit of benchmarking to see if amd libm/acml will help me.

 I got an idea that instead of building all of numpy/scipy and all of my
 custom modules against these libraries, I could simply use:

 
LD_PRELOAD=/opt/amdlibm-3.0.2/lib/dynamic/libamdlibm.so:/opt/acml5.2.0/gfortran64/lib/libacml.so
 my program here

 I'm hoping that both numpy and my own dll's then will take advantage of these
 libraries.

 Do you think this will work?
 
 Quite unlikely depending on your configuration, because those
 libraries are rarely if ever ABI compatible (that's why it is such a
 pain to support).
 
 David

When you say quite unlikely (to work), you mean 

a) unlikely that libm/acml will be used to resolve symbols in numpy/dlls at 
runtime (e.g., exp)?

or 

b) program may produce wrong results and/or crash ?



Re: [Numpy-discussion] testing with amd libm/acml

2012-11-07 Thread David Cournapeau
On Wed, Nov 7, 2012 at 1:56 PM, Neal Becker ndbeck...@gmail.com wrote:
 David Cournapeau wrote:

 On Wed, Nov 7, 2012 at 12:35 PM, Neal Becker ndbeck...@gmail.com wrote:
 I'm trying to do a bit of benchmarking to see if amd libm/acml will help me.

 I got an idea that instead of building all of numpy/scipy and all of my
 custom modules against these libraries, I could simply use:


 LD_PRELOAD=/opt/amdlibm-3.0.2/lib/dynamic/libamdlibm.so:/opt/acml5.2.0/gfortran64/lib/libacml.so
 my program here

 I'm hoping that both numpy and my own dll's then will take advantage of 
 these
 libraries.

 Do you think this will work?

 Quite unlikely depending on your configuration, because those
 libraries are rarely if ever ABI compatible (that's why it is such a
 pain to support).

 David

 When you say quite unlikely (to work), you mean

 a) unlikely that libm/acml will be used to resolve symbols in numpy/dlls at
 runtime (e.g., exp)?

 or

 b) program may produce wrong results and/or crash ?

Both, actually. That's not something I would use myself. Did you try
OpenBLAS? It is open source, simple to build, and is pretty fast.

David


Re: [Numpy-discussion] testing with amd libm/acml

2012-11-07 Thread Neal Becker
David Cournapeau wrote:

 On Wed, Nov 7, 2012 at 1:56 PM, Neal Becker ndbeck...@gmail.com wrote:
 David Cournapeau wrote:

 On Wed, Nov 7, 2012 at 12:35 PM, Neal Becker ndbeck...@gmail.com wrote:
 I'm trying to do a bit of benchmarking to see if amd libm/acml will help
 me.

 I got an idea that instead of building all of numpy/scipy and all of my
 custom modules against these libraries, I could simply use:


 
LD_PRELOAD=/opt/amdlibm-3.0.2/lib/dynamic/libamdlibm.so:/opt/acml5.2.0/gfortran64/lib/libacml.so
 my program here

 I'm hoping that both numpy and my own dll's then will take advantage of
 these libraries.

 Do you think this will work?

 Quite unlikely depending on your configuration, because those
 libraries are rarely if ever ABI compatible (that's why it is such a
 pain to support).

 David

 When you say quite unlikely (to work), you mean

 a) unlikely that libm/acml will be used to resolve symbols in numpy/dlls at
 runtime (e.g., exp)?

 or

 b) program may produce wrong results and/or crash ?
 
 Both, actually. That's not something I would use myself. Did you try
 openblas ? It is open source, simple to build, and is pretty fast,
 
 David

Actually, for my current work, I'm more concerned with speeding up operations 
such as exp, log and basic vector arithmetic.  Any thoughts on that?



Re: [Numpy-discussion] testing with amd libm/acml

2012-11-07 Thread Neal Becker
David Cournapeau wrote:

 On Wed, Nov 7, 2012 at 1:56 PM, Neal Becker ndbeck...@gmail.com wrote:
 David Cournapeau wrote:

 On Wed, Nov 7, 2012 at 12:35 PM, Neal Becker ndbeck...@gmail.com wrote:
 I'm trying to do a bit of benchmarking to see if amd libm/acml will help
 me.

 I got an idea that instead of building all of numpy/scipy and all of my
 custom modules against these libraries, I could simply use:


 
LD_PRELOAD=/opt/amdlibm-3.0.2/lib/dynamic/libamdlibm.so:/opt/acml5.2.0/gfortran64/lib/libacml.so
 my program here

 I'm hoping that both numpy and my own dll's then will take advantage of
 these libraries.

 Do you think this will work?

 Quite unlikely depending on your configuration, because those
 libraries are rarely if ever ABI compatible (that's why it is such a
 pain to support).

 David

 When you say quite unlikely (to work), you mean

 a) unlikely that libm/acml will be used to resolve symbols in numpy/dlls at
 runtime (e.g., exp)?

 or

 b) program may produce wrong results and/or crash ?
 
 Both, actually. That's not something I would use myself. Did you try
 openblas ? It is open source, simple to build, and is pretty fast,
 
 David

In my current work, probably the largest bottlenecks are the 'max*'
operations, which compute

log(\sum_i e^{x_i})
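For reference, a plain-NumPy version of that kernel, shifted by the maximum
to avoid overflow (an illustration, not code from this thread):

    import numpy as np

    def max_star(x):
        # numerically stable log(sum(exp(x))): subtract the max before exponentiating
        x = np.asarray(x, dtype=float)
        m = x.max()
        return m + np.log(np.exp(x - m).sum())

    print(max_star(np.array([1000.0, 1000.0])))   # ~1000.6931, no overflow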




Re: [Numpy-discussion] testing with amd libm/acml

2012-11-07 Thread Dag Sverre Seljebotn
On 11/07/2012 03:30 PM, Neal Becker wrote:
 David Cournapeau wrote:

 On Wed, Nov 7, 2012 at 1:56 PM, Neal Becker ndbeck...@gmail.com wrote:
 David Cournapeau wrote:

 On Wed, Nov 7, 2012 at 12:35 PM, Neal Becker ndbeck...@gmail.com wrote:
 I'm trying to do a bit of benchmarking to see if amd libm/acml will help
 me.

 I got an idea that instead of building all of numpy/scipy and all of my
 custom modules against these libraries, I could simply use:



 LD_PRELOAD=/opt/amdlibm-3.0.2/lib/dynamic/libamdlibm.so:/opt/acml5.2.0/gfortran64/lib/libacml.so
 my program here

 I'm hoping that both numpy and my own dll's then will take advantage of
 these libraries.

 Do you think this will work?

 Quite unlikely depending on your configuration, because those
 libraries are rarely if ever ABI compatible (that's why it is such a
 pain to support).

 David

 When you say quite unlikely (to work), you mean

 a) unlikely that libm/acml will be used to resolve symbols in numpy/dlls at
 runtime (e.g., exp)?

 or

 b) program may produce wrong results and/or crash ?

 Both, actually. That's not something I would use myself. Did you try
 openblas ? It is open source, simple to build, and is pretty fast,

 David

 In my current work, probably the largest bottlenecks are 'max*',  which are

 log (\sum e^(x_i))

numexpr with Intel VML is the solution I know of that doesn't require 
you to dig into compiling C code yourself. Did you look into that or is 
using Intel VML/MKL not an option?

Fast exps depend on the CPU evaluating many exp's at the same time (both 
explicitly through vector registers, and implicitly through pipelining); 
even if you get what you are trying to do to work (which I think is 
unlikely), the approach is inherently slow, since passing a single number 
at a time through the exp function can't be efficient.

Dag Sverre


Re: [Numpy-discussion] testing with amd libm/acml

2012-11-07 Thread Neal Becker
Would you expect numexpr without MKL to give a significant boost?



Re: [Numpy-discussion] testing with amd libm/acml

2012-11-07 Thread Chris Barker
On Wed, Nov 7, 2012 at 11:41 AM, Neal Becker ndbeck...@gmail.com wrote:
 Would you expect numexpr without MKL to give a significant boost?

It can, depending on the use case:
 -- It can remove a lot of unnecessary temporary creation.
 -- IIUC, it works on blocks of data at a time, and thus can keep
things in cache more when working with large data sets.
  -- It can (optionally) use multiple threads for easy parallelization.

All you can do is try it on your use-case and see what you get. It's a
pretty light lift to try.
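For example, a minimal try-out could look like this (it assumes numexpr is
installed; the expression itself is arbitrary):

    import numpy as np
    import numexpr as ne

    a = np.random.rand(1000000)
    b = np.random.rand(1000000)
    plain = 2.0 * a + 3.0 * b * np.exp(-a)              # several full-size temporaries
    fast = ne.evaluate("2.0 * a + 3.0 * b * exp(-a)")   # evaluated blockwise, optionally threaded
    assert np.allclose(plain, fast)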

-Chris






-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Testing the python buffer protocol (bf_getbuffer / tp_as_buffer)

2011-12-17 Thread Soeren Sonnenburg
Doesn't work, complaining that the object has no __buffer__ attribute.

Digging into the numpy c code it seems numpy doesn't even support the
buffer protocol but only the deprecated (old) one
http://docs.python.org/c-api/objbuffer.html .

At least there is no PyObject_CheckBuffer() call anywhere, but frombuffer
in the numpy C code checks for

    (Py_TYPE(buf)->tp_as_buffer->bf_getwritebuffer == NULL &&
     Py_TYPE(buf)->tp_as_buffer->bf_getreadbuffer == NULL)

So it needs bf_read/writebuffer to be set instead of bf_getbuffer and the array 
buffer protocol :-(

Soeren

On Sat, 2011-12-17 at 03:20 +0100, Torgil Svensson wrote:
 What happens if you use
 
 y=numpy.frombuffer(x) ?
 
 //Torgil
 
 
 On Sat, Dec 17, 2011 at 1:41 AM, Soeren Sonnenburg so...@debian.org wrote:
  Hi,
 
  I've implemented the buffer protocol
  (http://www.python.org/dev/peps/pep-3118/) for some matrix class and
  when I manually call PyObject_GetBuffer on that object I see that I get
  the right matrix.
 
  Now I'd like to see numpy use the buffer protocol of my class. Does
  anyone know how to test that? What do I need to write, just
 
  x=MyMatrix([1,2,3])
  y=numpy.array(x)
 
  (that doesn't call the buffer function though - so it must be sth else)?
 
  Any ideas?
  Soeren
  --
  For the one fact about the future of which we can be certain is that it
  will be utterly fantastic. -- Arthur C. Clarke, 1962
 

-- 
For the one fact about the future of which we can be certain is that it
will be utterly fantastic. -- Arthur C. Clarke, 1962




Re: [Numpy-discussion] Testing the python buffer protocol (bf_getbuffer / tp_as_buffer)

2011-12-17 Thread mark florisson
What version of numpy are you using? IIRC the new buffer protocol has
been supported since numpy 1.5.

On 17 December 2011 08:42, Soeren Sonnenburg so...@debian.org wrote:
 Doesn't work, complaining that the object has no __buffer__ attribute.

 Digging into the numpy c code it seems numpy doesn't even support the
 buffer protocol but only the deprecated (old) one
 http://docs.python.org/c-api/objbuffer.html .

 At least there is nowhere a PyObject_CheckBuffer() call but frombuffer
 in the numpy C code checks for

 (Py_TYPE(buf)->tp_as_buffer->bf_getwritebuffer == NULL &&
  Py_TYPE(buf)->tp_as_buffer->bf_getreadbuffer == NULL)

 .

 So it needs bf_read/writebuffer to be set instead of bf_getbuffer and the 
 array buffer protocol :-(

 Soeren

 On Sat, 2011-12-17 at 03:20 +0100, Torgil Svensson wrote:
 What happens if you use

 y=numpy.frombuffer(x) ?

 //Torgil


 On Sat, Dec 17, 2011 at 1:41 AM, Soeren Sonnenburg so...@debian.org wrote:
  Hi,
 
  I've implemented the buffer protocol
  (http://www.python.org/dev/peps/pep-3118/) for some matrix class and
  when I manually call PyObject_GetBuffer on that object I see that I get
  the right matrix.
 
  Now I'd like to see numpy use the buffer protocol of my class. Does
  anyone know how to test that? What do I need to write, just
 
  x=MyMatrix([1,2,3])
  y=numpy.array(x)
 
  (that doesn't call the buffer function though - so it must be sth else)?
 
  Any ideas?
  Soeren
  --
  For the one fact about the future of which we can be certain is that it
  will be utterly fantastic. -- Arthur C. Clarke, 1962
 


 --
 For the one fact about the future of which we can be certain is that it
 will be utterly fantastic. -- Arthur C. Clarke, 1962



Re: [Numpy-discussion] Testing the python buffer protocol (bf_getbuffer / tp_as_buffer)

2011-12-17 Thread Pauli Virtanen
17.12.2011 09:42, Soeren Sonnenburg wrote:
 Doesn't work, complaining that the object has no __buffer__ attribute.

 Digging into the numpy c code it seems numpy doesn't even support the
 buffer protocol but only the deprecated (old) one
 http://docs.python.org/c-api/objbuffer.html .
[clip]

Since Numpy version 1.5, the new buffer protocol is supported.

-- 
Pauli Virtanen



Re: [Numpy-discussion] Testing the python buffer protocol (bf_getbuffer / tp_as_buffer)

2011-12-17 Thread Soeren Sonnenburg
On Sat, 2011-12-17 at 15:29 +0100, Pauli Virtanen wrote:
 17.12.2011 09:42, Soeren Sonnenburg kirjoitti:
  Doesn't work, complaining that the object has no __buffer__ attribute.
 
  Digging into the numpy c code it seems numpy doesn't even support the
  buffer protocol but only the deprecated (old) one
  http://docs.python.org/c-api/objbuffer.html .
 [clip]
 
 Since Numpy version 1.5, the new buffer protocol is supported.

I've looked at the source code of numpy 1.6.1 and couldn't find the
respective code... I guess I must be doing something wrong but there
really was no call to PyObject_CheckBuffer() ...


The problem is I don't really know what is supposed to happen if the new
buffer protocol is supported by some class named say Foo. Could I then
do

x=Foo([1,2,3])

numpy.array([2,2,2])+x

and such operations?

Soeren
-- 
For the one fact about the future of which we can be certain is that it
will be utterly fantastic. -- Arthur C. Clarke, 1962




Re: [Numpy-discussion] Testing the python buffer protocol (bf_getbuffer / tp_as_buffer)

2011-12-17 Thread Pauli Virtanen
18.12.2011 00:49, Soeren Sonnenburg wrote:
[clip]
 I've looked at the source code of numpy 1.6.1 and couldn't find the
 respective code... I guess I must be doing something wrong but there
 really was no call to PyObject_CheckBuffer() ...

Look for PyObject_GetBuffer

 The problem is I don't really know what is supposed to happen if the new
 buffer protocol is supported by some class named say Foo. Could I then
 do

 x=Foo([1,2,3])

 numpy.array([2,2,2])+x

 and such operations?

Yes. You can try it out with Python's builtin memoryview class.
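For instance (a sketch; any object exposing the new buffer protocol would do
in place of the bytearray):

    import numpy as np

    buf = memoryview(bytearray(b"\x01\x02\x03\x04"))
    arr = np.asarray(buf)        # consumed via PEP 3118 on numpy >= 1.5
    print(arr)                   # [1 2 3 4], dtype=uint8
    arr[0] = 9                   # writes through to the underlying bytearray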

-- 
Pauli Virtanen



[Numpy-discussion] Testing the python buffer protocol (bf_getbuffer / tp_as_buffer)

2011-12-16 Thread Soeren Sonnenburg
Hi,

I've implemented the buffer protocol
(http://www.python.org/dev/peps/pep-3118/) for some matrix class and
when I manually call PyObject_GetBuffer on that object I see that I get
the right matrix.

Now I'd like to see numpy use the buffer protocol of my class. Does
anyone know how to test that? What do I need to write, just

x=MyMatrix([1,2,3])
y=numpy.array(x)

(that doesn't call the buffer function though - so it must be sth else)?

Any ideas?
Soeren
-- 
For the one fact about the future of which we can be certain is that it
will be utterly fantastic. -- Arthur C. Clarke, 1962




Re: [Numpy-discussion] Testing the python buffer protocol (bf_getbuffer / tp_as_buffer)

2011-12-16 Thread Torgil Svensson
What happens if you use

y=numpy.frombuffer(x) ?

//Torgil


On Sat, Dec 17, 2011 at 1:41 AM, Soeren Sonnenburg so...@debian.org wrote:
 Hi,

 I've implemented the buffer protocol
 (http://www.python.org/dev/peps/pep-3118/) for some matrix class and
 when I manually call PyObject_GetBuffer on that object I see that I get
 the right matrix.

 Now I'd like to see numpy use the buffer protocol of my class. Does
 anyone know how to test that? What do I need to write, just

 x=MyMatrix([1,2,3])
 y=numpy.array(x)

 (that doesn't call the buffer function though - so it must be sth else)?

 Any ideas?
 Soeren
 --
 For the one fact about the future of which we can be certain is that it
 will be utterly fantastic. -- Arthur C. Clarke, 1962

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread Ralf Gommers
Hi,

I built an installer for OS X and did some testing on a clean computer. All
NumPy tests pass. SciPy (0.7.1 binary) gives a number of errors and
failures, I copied one of each type below. For full output see
http://pastebin.com/eEcwkzKr . To me it looks like the failures are
harmless, and the kdtree errors are not related to changes in NumPy. Is that
right?

I also installed Matplotlib (0.99.1.1 binary), but I did not find a way to
test just the binary install except manually. Created some plots, looked
fine. Then I ran the test script examples/tests/backend_driver.py from an
svn checkout, but my laptop died before the tests finished (at least 25
mins). Basic output was:
example_name.py 1.123 0
example_name2.py   0.987 0
...
Can anyone tell me what the best way is to test the MPL binary?

Cheers,
Ralf




==
ERROR: Failure: ValueError (numpy.dtype does not appear to be the correct
type object)
--
Traceback (most recent call last):
  File
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/loader.py,
line 379, in loadTestsFromName
addr.filename, addr.module)
  File
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/importer.py,
line 39, in importFromPath
return self.importFromDir(dir_path, fqname)
  File
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/importer.py,
line 86, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
  File
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/cluster/__init__.py,
line 9, in <module>
import vq, hierarchy
  File
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/cluster/hierarchy.py,
line 199, in <module>
import scipy.spatial.distance as distance
  File
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/spatial/__init__.py,
line 7, in <module>
from ckdtree import *
  File numpy.pxd, line 30, in scipy.spatial.ckdtree
(scipy/spatial/ckdtree.c:6087)
ValueError: numpy.dtype does not appear to be the correct type object

==
ERROR: test_kdtree.test_random_compiled.test_approx
--
Traceback (most recent call last):
  File
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/case.py,
line 364, in setUp
try_run(self.inst, ('setup', 'setUp'))
  File
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/util.py,
line 487, in try_run
return func()
  File
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/spatial/tests/test_kdtree.py,
line 133, in setUp
self.kdtree = cKDTree(self.data)
  File ckdtree.pyx, line 214, in scipy.spatial.ckdtree.cKDTree.__init__
(scipy/spatial/ckdtree.c:1563)
NameError: np


==
FAIL: test_asfptype (test_base.TestBSR)
--
Traceback (most recent call last):
  File
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/tests/test_base.py,
line 242, in test_asfptype
assert_equal( A.dtype , 'int32' )
  File
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/testing/utils.py,
line 284, in assert_equal
raise AssertionError(msg)
AssertionError:
Items are not equal:
 ACTUAL: dtype('int32')
 DESIRED: 'int32'


==
FAIL: test_nrdtrisd (test_basic.TestCephes)
--
Traceback (most recent call last):
  File
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/tests/test_basic.py,
line 349, in test_nrdtrisd
assert_equal(cephes.nrdtrisd(0.5,0.5,0.5),0.0)
  File
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/testing/utils.py,
line 301, in assert_equal
raise AssertionError(msg)
AssertionError:
Items are not equal:
 ACTUAL: -0
 DESIRED: 0.0

--
Ran 2585 tests in 46.196s

FAILED (KNOWNFAIL=4, SKIP=31, errors=28, failures=17)
Out[2]: nose.result.TextTestResult run=2585 errors=28 failures=17
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread josef . pktd
On Fri, Feb 26, 2010 at 12:00 PM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:
 Hi,

 I built an installer for OS X and did some testing on a clean computer. All
 NumPy tests pass. SciPy (0.7.1 binary) gives a number of errors and
 failures, I copied one of each type below. For full output see
 http://pastebin.com/eEcwkzKr . To me it looks like the failures are
 harmless, and the kdtree errors are not related to changes in NumPy. Is that
 right?

 I also installed Matplotlib (0.99.1.1 binary), but I did not find a way to
 test just the binary install except manually. Created some plots, looked
 fine. Then I ran the test script examples/tests/backend_driver.py from an
 svn checkout, but my laptop died before the tests finished (at least 25
 mins). Basic output was:
 example_name.py 1.123 0
 example_name2.py   0.987 0
 ...
 Can anyone tell me what the best way is to test the MPL binary?

 Cheers,
 Ralf




 ==
 ERROR: Failure: ValueError (numpy.dtype does not appear to be the correct
 type object)
 --
 Traceback (most recent call last):
   File
 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/loader.py,
 line 379, in loadTestsFromName
     addr.filename, addr.module)
   File
 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/importer.py,
 line 39, in importFromPath
     return self.importFromDir(dir_path, fqname)
   File
 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/importer.py,
 line 86, in importFromDir
     mod = load_module(part_fqname, fh, filename, desc)
   File
 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/cluster/__init__.py,
 line 9, in <module>
     import vq, hierarchy
   File
 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/cluster/hierarchy.py,
 line 199, in <module>
     import scipy.spatial.distance as distance
   File
 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/spatial/__init__.py,
 line 7, in <module>
     from ckdtree import *
   File numpy.pxd, line 30, in scipy.spatial.ckdtree
 (scipy/spatial/ckdtree.c:6087)
 ValueError: numpy.dtype does not appear to be the correct type object


This looks like the cython type-check problem; ckdtree.c doesn't look
compatible with your numpy version.

In this case the next errors might be follow-up errors because of an
incomplete import

Josef


 ==
 ERROR: test_kdtree.test_random_compiled.test_approx
 --
 Traceback (most recent call last):
   File
 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/case.py,
 line 364, in setUp
     try_run(self.inst, ('setup', 'setUp'))
   File
 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/util.py,
 line 487, in try_run
     return func()
   File
 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/spatial/tests/test_kdtree.py,
 line 133, in setUp
     self.kdtree = cKDTree(self.data)
   File ckdtree.pyx, line 214, in scipy.spatial.ckdtree.cKDTree.__init__
 (scipy/spatial/ckdtree.c:1563)
 NameError: np


 ==
 FAIL: test_asfptype (test_base.TestBSR)
 --
 Traceback (most recent call last):
   File
 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/tests/test_base.py,
 line 242, in test_asfptype
     assert_equal( A.dtype , 'int32' )
   File
 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/testing/utils.py,
 line 284, in assert_equal
     raise AssertionError(msg)
 AssertionError:
 Items are not equal:
  ACTUAL: dtype('int32')
  DESIRED: 'int32'


 ==
 FAIL: test_nrdtrisd (test_basic.TestCephes)
 --
 Traceback (most recent call last):
   File
 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/tests/test_basic.py,
 line 349, in test_nrdtrisd
     assert_equal(cephes.nrdtrisd(0.5,0.5,0.5),0.0)
   File
 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/testing/utils.py,
 line 301, in assert_equal
     raise AssertionError(msg)
 AssertionError:
 Items are not equal:
  ACTUAL: -0
  DESIRED: 0.0

 --
 Ran 2585 tests in 46.196s

 FAILED (KNOWNFAIL=4, 

Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread Pauli Virtanen
Fri, 2010-02-26 at 12:09 -0500, josef.p...@gmail.com wrote:
 On Fri, Feb 26, 2010 at 12:00 PM, Ralf Gommers
[clip]
  ValueError: numpy.dtype does not appear to be the correct type object
 
 This looks like the cython type check problem, ckdtree.c  doesn't look
 compatible with your numpy version

Or, rather, the Scipy binary is not compatible with the Numpy you built,
because of a differing size of the PyArray_Descr structure.
Recompilation of Scipy would fix this, but if the aim is to produce a
binary-compatible release, then something is still wrong.

-- 
Pauli Virtanen

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread josef . pktd
On Fri, Feb 26, 2010 at 12:19 PM, Pauli Virtanen p...@iki.fi wrote:
 pe, 2010-02-26 kello 12:09 -0500, josef.p...@gmail.com kirjoitti:
 On Fri, Feb 26, 2010 at 12:00 PM, Ralf Gommers
 [clip]
  ValueError: numpy.dtype does not appear to be the correct type object

 This looks like the cython type check problem, ckdtree.c  doesn't look
 compatible with your numpy version

 Or, rather, the Scipy binary is not compatible with the Numpy you built,
 because of a differing size of the PyArray_Descr structure.
 Recompilation of Scipy would fix this, but if the aim is to produce a
 binary-compatible release, then something is still wrong.

recompiling wouldn't be enough, the cython c files also need to be
regenerated for a different numpy version.
(If I understand the problem correctly.)

Josef


 --
 Pauli Virtanen

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread Pauli Virtanen
Fri, 2010-02-26 at 12:26 -0500, josef.p...@gmail.com wrote:
[clip]
 recompiling wouldn't be enough, the cython c files also need to be
 regenerated for a different numpy version.
 (If I understand the problem correctly.)

No. The Cython-generated sources just use sizeof(PyArray_Descr), the
value is not hardcoded, so it's a compile-time issue.

-- 
Pauli Virtanen

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread Charles R Harris
On Fri, Feb 26, 2010 at 10:34 AM, Pauli Virtanen p...@iki.fi wrote:

 pe, 2010-02-26 kello 12:26 -0500, josef.p...@gmail.com kirjoitti:
 [clip]
  recompiling wouldn't be enough, the cython c files also need to be
  regenerated for a different numpy version.
  (If I understand the problem correctly.)

 No. The Cython-generated sources just use sizeof(PyArray_Descr), the
 value is not hardcoded, so it's a compile-time issue.


So Ralf needs to be sure that scipy was compiled against, say, numpy 1.3.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread josef . pktd
On Fri, Feb 26, 2010 at 12:41 PM, Charles R Harris
charlesr.har...@gmail.com wrote:


 On Fri, Feb 26, 2010 at 10:34 AM, Pauli Virtanen p...@iki.fi wrote:

 pe, 2010-02-26 kello 12:26 -0500, josef.p...@gmail.com kirjoitti:
 [clip]
  recompiling wouldn't be enough, the cython c files also need to be
  regenerated for a different numpy version.
  (If I understand the problem correctly.)

 No. The Cython-generated sources just use sizeof(PyArray_Descr), the
 value is not hardcoded, so it's a compile-time issue.

 So Ralf need to be sure that scipy was compiled against, say, numpy1.3.

I think I mixed up some things then:
the scipy 0.7.1 cython files should be regenerated with the latest cython
release so that they don't check the sizeof anymore.
Then, a scipy 0.7.1 build against numpy 1.3 would also work without
recompiling against numpy 1.4.1

Is this correct?

Josef


 Chuck


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread Charles R Harris
On Fri, Feb 26, 2010 at 10:44 AM, josef.p...@gmail.com wrote:

 On Fri, Feb 26, 2010 at 12:41 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
 
  On Fri, Feb 26, 2010 at 10:34 AM, Pauli Virtanen p...@iki.fi wrote:
 
  pe, 2010-02-26 kello 12:26 -0500, josef.p...@gmail.com kirjoitti:
  [clip]
   recompiling wouldn't be enough, the cython c files also need to be
   regenerated for a different numpy version.
   (If I understand the problem correctly.)
 
  No. The Cython-generated sources just use sizeof(PyArray_Descr), the
  value is not hardcoded, so it's a compile-time issue.
 
  So Ralf need to be sure that scipy was compiled against, say, numpy1.3.

 I think I mixed up some things then,
 scipy 0.7.1 cython files should be regenerated with the latest cython
 release so that it doesn't check the sizeof anymore.
 Then, a scipy 0.7.1 build against numpy 1.3 would also work without
 recompiling against numpy 1.4.1

 Is this correct?


Yes, but the aim of 1.4.1 is that it should work with the existing binaries
of scipy, i.e., it should be backward compatible with no changes in dtype
sizes and such so that even files generated with the old cython shouldn't
cause problems.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread josef . pktd
On Fri, Feb 26, 2010 at 12:50 PM, Charles R Harris
charlesr.har...@gmail.com wrote:


 On Fri, Feb 26, 2010 at 10:44 AM, josef.p...@gmail.com wrote:

 On Fri, Feb 26, 2010 at 12:41 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
 
  On Fri, Feb 26, 2010 at 10:34 AM, Pauli Virtanen p...@iki.fi wrote:
 
  pe, 2010-02-26 kello 12:26 -0500, josef.p...@gmail.com kirjoitti:
  [clip]
   recompiling wouldn't be enough, the cython c files also need to be
   regenerated for a different numpy version.
   (If I understand the problem correctly.)
 
  No. The Cython-generated sources just use sizeof(PyArray_Descr), the
  value is not hardcoded, so it's a compile-time issue.
 
  So Ralf need to be sure that scipy was compiled against, say, numpy1.3.

 I think I mixed up some things then,
 scipy 0.7.1 cython files should be regenerated with the latest cython
 release so that it doesn't check the sizeof anymore.
 Then, a scipy 0.7.1 build against numpy 1.3 would also work without
 recompiling against numpy 1.4.1

 Is this correct?


 Yes, but the aim of 1.4.1 is that it should work with the existing binaries
 of scipy, i.e., it should be backward compatible with no changes in dtype
 sizes and such so that even files generated with the old cython shouldn't
 cause problems.

We had this discussion but David said that this is impossible, binary
compatibility doesn't remove the (old) cython problem.

Josef


 Chuck


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread Charles R Harris
On Fri, Feb 26, 2010 at 10:53 AM, josef.p...@gmail.com wrote:

 On Fri, Feb 26, 2010 at 12:50 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
 
  On Fri, Feb 26, 2010 at 10:44 AM, josef.p...@gmail.com wrote:
 
  On Fri, Feb 26, 2010 at 12:41 PM, Charles R Harris
  charlesr.har...@gmail.com wrote:
  
  
   On Fri, Feb 26, 2010 at 10:34 AM, Pauli Virtanen p...@iki.fi wrote:
  
   pe, 2010-02-26 kello 12:26 -0500, josef.p...@gmail.com kirjoitti:
   [clip]
recompiling wouldn't be enough, the cython c files also need to be
regenerated for a different numpy version.
(If I understand the problem correctly.)
  
   No. The Cython-generated sources just use sizeof(PyArray_Descr), the
   value is not hardcoded, so it's a compile-time issue.
  
   So Ralf need to be sure that scipy was compiled against, say,
 numpy1.3.
 
  I think I mixed up some things then,
  scipy 0.7.1 cython files should be regenerated with the latest cython
  release so that it doesn't check the sizeof anymore.
  Then, a scipy 0.7.1 build against numpy 1.3 would also work without
  recompiling against numpy 1.4.1
 
  Is this correct?
 
 
  Yes, but the aim of 1.4.1 is that it should work with the existing
 binaries
  of scipy, i.e., it should be backward compatible with no changes in dtype
  sizes and such so that even files generated with the old cython shouldn't
  cause problems.

 We had this discussion but David said that this is impossible, binary
 compatibility doesn't remove the (old) cython problem.


Depends on what you mean by binary compatibility. If something is added to
the end of a structure, then it is still backwards compatible because the
offsets of old entries don't change, but the old cython will fail in that
case because the size changes. If the sizes are all the same, then there
should be no problem and that is what we are shooting for. There are
additions to the c_api in 1.4 but I think that structure is private.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread Charles R Harris
On Fri, Feb 26, 2010 at 11:26 AM, Charles R Harris 
charlesr.har...@gmail.com wrote:



 On Fri, Feb 26, 2010 at 10:53 AM, josef.p...@gmail.com wrote:

 On Fri, Feb 26, 2010 at 12:50 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
 
  On Fri, Feb 26, 2010 at 10:44 AM, josef.p...@gmail.com wrote:
 
  On Fri, Feb 26, 2010 at 12:41 PM, Charles R Harris
  charlesr.har...@gmail.com wrote:
  
  
   On Fri, Feb 26, 2010 at 10:34 AM, Pauli Virtanen p...@iki.fi wrote:
  
   pe, 2010-02-26 kello 12:26 -0500, josef.p...@gmail.com kirjoitti:
   [clip]
recompiling wouldn't be enough, the cython c files also need to be
regenerated for a different numpy version.
(If I understand the problem correctly.)
  
   No. The Cython-generated sources just use sizeof(PyArray_Descr), the
   value is not hardcoded, so it's a compile-time issue.
  
   So Ralf need to be sure that scipy was compiled against, say,
 numpy1.3.
 
  I think I mixed up some things then,
  scipy 0.7.1 cython files should be regenerated with the latest cython
  release so that it doesn't check the sizeof anymore.
  Then, a scipy 0.7.1 build against numpy 1.3 would also work without
  recompiling against numpy 1.4.1
 
  Is this correct?
 
 
  Yes, but the aim of 1.4.1 is that it should work with the existing
 binaries
  of scipy, i.e., it should be backward compatible with no changes in
 dtype
  sizes and such so that even files generated with the old cython
 shouldn't
  cause problems.

 We had this discussion but David said that this is impossible, binary
 compatibility doesn't remove the (old) cython problem.


 Depends on what you mean by binary compatibility. If something is added to
 the end of a structure, then it is still backwards compatible because the
 offsets of old entries don't change, but the old cython will fail in that
 case because the size changes. If the sizes are all the same, then there
 should be no problem and that is what we are shooting for. There are
 additions to the c_api in 1.4 but I think that structure is private.


I note that there are still traces of datetime in the 1.4.x public include
files, although the desc size looks right.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread David Cournapeau
On Sat, Feb 27, 2010 at 2:44 AM,  josef.p...@gmail.com wrote:


 I think I mixed up some things then,
 scipy 0.7.1 cython files should be regenerated with the latest cython
 release so that it doesn't check the sizeof anymore.
 Then, a scipy 0.7.1 build against numpy 1.3 would also work without
 recompiling against numpy 1.4.1

 Is this correct?

Yes, this is correct. It is impossible to create a numpy 1.4.x which
is compatible with the *existing* scipy binary, because several
structures have been growing (not only because of datetime).

The cython changes have already been incorporated in scipy 0.7.x
branch, so in the end, what should be done is a new 0.7.2 scipy binary
built against numpy 1.3.0, which will then be compatible with both
numpy 1.3 and 1.4 binaries,

cheers,

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread Ralf Gommers
On Sat, Feb 27, 2010 at 8:17 AM, David Cournapeau courn...@gmail.comwrote:

 On Sat, Feb 27, 2010 at 2:44 AM,  josef.p...@gmail.com wrote:

 
  I think I mixed up some things then,
  scipy 0.7.1 cython files should be regenerated with the latest cython
  release so that it doesn't check the sizeof anymore.
  Then, a scipy 0.7.1 build against numpy 1.3 would also work without
  recompiling against numpy 1.4.1
 
  Is this correct?

 Yes, this is correct. It is impossible to create a numpy 1.4.x which
 is compatible with the *existing* scipy binary, because several
 structures have been growing (not only because of datetime).

 The cython changes have already been incorporated in scipy 0.7.x
 branch, so in the end, what should be done is a new 0.7.2 scipy binary
 built against numpy 1.3.0, which will then be compatible with both
 numpy 1.3 and 1.4 binaries,

 Hmm, I remember you saying this a while ago and I'm sure you're right. But
it got lost in the noise, and like Charles I thought the aim was to produce
a 1.4.x binary compatible with what's out there now. This is also what you
said on Wednesday:

quote
So here is how I see things in the near future for release:
- compile a simple binary installer for mac os x and windows (no need
for doc or multiple archs) from 1.4.x
- test this with the scipy binary out there (running the full test
suites), ideally other well known packages as well (matplotlib,
pytables, etc...).
unquote

So now this seems to be impossible. I'm not so sure, then, that we're not
adding even more confusion with yet another incompatible binary...

Cheers,
Ralf
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread David Cournapeau
On Sat, Feb 27, 2010 at 11:59 AM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:


 quote
 So here is how I see things in the near future for release:
 - compile a simple binary installer for mac os x and windows (no need
 for doc or multiple archs) from 1.4.x
 - test this with the scipy binary out there (running the full test
 suites), ideally other well known packages as well (matplotlib,
 pytables, etc...).
 unquote

 So now this seems to be impossible. I'm not so sure, then, that we're not
 adding even more confusion with yet another incompatible binary...

Sorry, I should have been clearer in the above quoted list. There were
two issues with numpy 1.4.0, one caused by datetime, and one caused by
other changes to growing structures. The second one is ok for most
cases, but cython < 0.12.1 was too strict in checking some structure
size, meaning any extension built from cython < 0.12.1 will refuse to
import. There is nothing we can do for this one.

So the plan I had in mind was:
 - release fixed numpy 1.4.1
 - release a new scipy 0.7.2 built against numpy 1.3.0, which would be
compatible with both existing 1.3.0 and the new 1.4.1

Is this clearer ?

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread Ralf Gommers
On Sat, Feb 27, 2010 at 2:33 PM, David Cournapeau courn...@gmail.comwrote:


 Sorry, I should have been clearer in the above quoted list. There were
 two issues with numpy 1.4.0, one caused by datetime, and one caused by
 other changes to growing structures. The second one is ok for most
 cases, but cython < 0.12.1 was too strict in checking some structure
 size, meaning any extension built from cython < 0.12.1 will refuse to
 import. There is nothing we can do for this one.

 So the plan I had in mind was:
  - release fixed numpy 1.4.1
  - release a new scipy 0.7.2 built against numpy 1.3.0, which would be
 compatible with both existing 1.3.0 and the new 1.4.1

 Is this clearer ?


Yes that is clear. Would it make sense to first release scipy 0.7.2 though?
Then numpy 1.4.1 can be tested against it and we can be sure it works. The
other way around it's not possible to test. Or can you tell from the test
output I posted that it should be okay?

Ralf
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] testing binary installer for OS X

2010-02-26 Thread David Cournapeau
On Sat, Feb 27, 2010 at 3:43 PM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:


 Yes that is clear. Would it make sense to first release scipy 0.7.2 though?
 Then numpy 1.4.1 can be tested against it and we can be sure it works. The
 other way around it's not possible to test.

Yes it is, you just have to build scipy against numpy 1.3.0.

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Testing for close to zero?

2009-01-19 Thread Jonathan Taylor
Hi,

When solving a quadratic equation I get that alpha =
-3.78336776728e-31 which I believe to be far below machine precision:

finfo(float).eps
2.2204460492503131e-16

But an if statement like:

if alpha == 0:
   ...

does not catch this.  Is there a better way to check for things that
are essentially zero or should I really be using

if np.abs(alpha) < finfo(float).eps:
   ...

?

Thanks for any help.
Jonathan.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing for close to zero?

2009-01-19 Thread Robert Kern
On Mon, Jan 19, 2009 at 14:43, Jonathan Taylor
jonathan.tay...@utoronto.ca wrote:
 Hi,

 When solving a quadratic equation I get that alpha =
 -3.78336776728e-31 which I believe to be far below machine precision:

 finfo(float).eps
 2.2204460492503131e-16

 But an if statement like:

 if alpha == 0:
   ...

 does not catch this.  Is there a better way to check for things that
 are essentially zero or should I really be using

 if np.abs(alpha) < finfo(float).eps:
   ...

Almost. You should scale eps by some estimate of the size of the
problem. Exactly how you should do this depends on the problem,
though. Errors accumulate in different ways depending on the
operations you perform on the numbers. Multiplying eps by
max(abs(array_of_inputs)) is probably a reasonable starting point.
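
Roughly, for the quadratic case (the coefficients below are made up):

>>> import numpy as np
>>> alpha = -3.78336776728e-31
>>> coeffs = np.array([1.0, -2.5, 1.5])   # whatever went into the solve
>>> tol = np.finfo(float).eps * np.max(np.abs(coeffs))
>>> np.abs(alpha) < tol                   # alpha is zero at the scale of the inputs
True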

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing for close to zero?

2009-01-19 Thread Jonathan Taylor
Interesting.  That makes sense and I suppose that also explains why
there is no function to do this sort of thing for you.

Jon.

On Mon, Jan 19, 2009 at 3:55 PM, Robert Kern robert.k...@gmail.com wrote:
 On Mon, Jan 19, 2009 at 14:43, Jonathan Taylor
 jonathan.tay...@utoronto.ca wrote:
 Hi,

 When solving a quadratic equation I get that alpha =
 -3.78336776728e-31 which I believe to be far below machine precision:

 finfo(float).eps
 2.2204460492503131e-16

 But an if statement like:

 if alpha == 0:
   ...

 does not catch this.  Is there a better way to check for things that
 are essentially zero or should I really be using

 if np.abs(alpha) < finfo(float).eps:
   ...

 Almost. You should scale eps by some estimate of the size of the
 problem. Exactly how you should do this depends on the problem,
 though. Errors accumulate in different ways depending on the
 operations you perform on the numbers. Multiplying eps by
 max(abs(array_of_inputs)) is probably a reasonable starting point.

 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
  -- Umberto Eco
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing for close to zero?

2009-01-19 Thread Charles R Harris
On Mon, Jan 19, 2009 at 7:23 PM, Jonathan Taylor 
jonathan.tay...@utoronto.ca wrote:

 Interesting.  That makes sense and I suppose that also explains why
 there is no function to do this sort of thing for you.


A combination of relative and absolute errors is another common solution,
i.e., test against relerr*max(abs(array_of_inputs)) + abserr. In cases like
this relerr is typically eps and abserr tends to be something like 1e-12,
which keeps you from descending towards zero any further than you need to.
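
In code the combined test is just something like this (a throwaway helper,
names made up; pick relerr and abserr to suit the problem):

>>> import numpy as np
>>> def effectively_zero(x, inputs, relerr=np.finfo(float).eps, abserr=1e-12):
...     # combined relative + absolute tolerance, scale taken from the inputs
...     return abs(x) <= relerr * np.max(np.abs(inputs)) + abserr
...
>>> effectively_zero(-3.78336776728e-31, [1.0, -2.5, 1.5])
True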

Chuck



 Jon.

 On Mon, Jan 19, 2009 at 3:55 PM, Robert Kern robert.k...@gmail.com
 wrote:
  On Mon, Jan 19, 2009 at 14:43, Jonathan Taylor
  jonathan.tay...@utoronto.ca wrote:
  Hi,
 
  When solving a quadratic equation I get that alpha =
  -3.78336776728e-31 which I believe to be far below machine precision:
 
  finfo(float).eps
  2.2204460492503131e-16
 
  But an if statement like:
 
  if alpha == 0:
...
 
  does not catch this.  Is there a better way to check for things that
  are essentially zero or should I really be using
 
  if np.abs(alpha) < finfo(float).eps:
...
 
  Almost. You should scale eps by some estimate of the size of the
  problem. Exactly how you should do this depends on the problem,
  though. Errors accumulate in different ways depending on the
  operations you perform on the numbers. Multiplying eps by
  max(abs(array_of_inputs)) is probably a reasonable starting point.
 


  --
  Robert Kern
 
  I have come to believe that the whole world is an enigma, a harmless
  enigma that is made terrible by our own mad attempt to interpret it as
  though it had an underlying truth.
   -- Umberto Eco
  ___
  Numpy-discussion mailing list
  Numpy-discussion@scipy.org
  http://projects.scipy.org/mailman/listinfo/numpy-discussion
 
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing for close to zero?

2009-01-19 Thread Robert Kern
On Mon, Jan 19, 2009 at 22:09, Charles R Harris
charlesr.har...@gmail.com wrote:


 On Mon, Jan 19, 2009 at 7:23 PM, Jonathan Taylor
 jonathan.tay...@utoronto.ca wrote:

 Interesting.  That makes sense and I suppose that also explains why
 there is no function to do this sort of thing for you.

 A combination of relative and absolute errors is another common solution,
 i.e., test against relerr*max(abs(array_of_inputs)) + abserr. In cases like
 this relerr is typically eps and abserr tends to be something like 1e-12,
 which keeps you from descending towards zero any further than you need to.

I don't think the absolute error term is appropriate in this case. If
all of my inputs are of the size 1e-12, I would expect a result of
1e-14 to be significantly far from 0.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing for close to zero?

2009-01-19 Thread Charles R Harris
On Mon, Jan 19, 2009 at 9:17 PM, Robert Kern robert.k...@gmail.com wrote:

 On Mon, Jan 19, 2009 at 22:09, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
 
  On Mon, Jan 19, 2009 at 7:23 PM, Jonathan Taylor
  jonathan.tay...@utoronto.ca wrote:
 
  Interesting.  That makes sense and I suppose that also explains why
  there is no function to do this sort of thing for you.
 
  A combination of relative and absolute errors is another common solution,
  i.e., test against relerr*max(abs(array_of_inputs)) + abserr. In cases
 like
  this relerr is typically eps and abserr tends to be something like 1e-12,
  which keeps you from descending towards zero any further than you need
 to.

 I don't think the absolute error term is appropriate in this case. If
 all of my inputs are of the size 1e-12, I would expect a result of
 1e-14 to be significantly far from 0.


Sure, that's why you *chose* constants appropriate to the problem. As to
this case, I don't know what the quadratic is or what methods are being used
to solve it, or even if the methods are appropriate. So the comment was
general and I think many numeric methods for solving equations use some
variant of the combination. For instance, the 1D zero finders in Scipy use
it.
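
(For instance, brentq in recent scipy versions exposes both knobs; it stops
roughly when the bracket shrinks below xtol + rtol*abs(root).)

import numpy as np
from scipy.optimize import brentq

f = lambda x: x**3 - 2.0*x                  # has a root at sqrt(2)
root = brentq(f, 1.0, 2.0,                  # bracketing interval
              xtol=1e-12,                   # absolute part of the tolerance
              rtol=4*np.finfo(float).eps)   # relative part of the tolerance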

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing for close to zero?

2009-01-19 Thread Robert Kern
On Mon, Jan 19, 2009 at 23:36, Charles R Harris
charlesr.har...@gmail.com wrote:

 On Mon, Jan 19, 2009 at 9:17 PM, Robert Kern robert.k...@gmail.com wrote:

 On Mon, Jan 19, 2009 at 22:09, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
 
  On Mon, Jan 19, 2009 at 7:23 PM, Jonathan Taylor
  jonathan.tay...@utoronto.ca wrote:
 
  Interesting.  That makes sense and I suppose that also explains why
  there is no function to do this sort of thing for you.
 
  A combination of relative and absolute errors is another common
  solution,
  i.e., test against relerr*max(abs(array_of_inputs)) + abserr. In cases
  like
  this relerr is typically eps and abserr tends to be something like
  1e-12,
  which keeps you from descending towards zero any further than you need
  to.

 I don't think the absolute error term is appropriate in this case. If
 all of my inputs are of the size 1e-12, I would expect a result of
 1e-14 to be significantly far from 0.

 Sure, that's why you *chose* constants appropriate to the problem.

But that's what eps*max(abs(array_of_inputs)) is supposed to do.

In the formulation that you are using (e.g. that of
assert_arrays_almost_equal()), the absolute error comes into play when
you are comparing two numbers in ignorance of the processes that
created them. The relative error in that formula is being adjusted by
the size of the two numbers (*not* the inputs to the algorithm). The
two numbers may be close to 0, but the relevant inputs to the
algorithm may be ~1, let's say. In that case, you need the absolute
error term to provide the scale information that is otherwise not
present in the comparison.

But if you know what the inputs to the calculation were, you can
estimate the scale factor for the relative tolerance directly
(rigorously, if you've done the numerical analysis) and the absolute
tolerance is supernumerary.
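
A tiny illustration of the point, using the classic 0.1 + 0.2 example:

>>> import numpy as np
>>> eps = np.finfo(float).eps
>>> r = (0.1 + 0.2) - 0.3           # mathematically zero, but the inputs are ~0.3
>>> r
5.551115123125783e-17
>>> abs(r) <= eps * abs(r)          # relative to the result alone there is no scale
False
>>> abs(r) <= eps * 0.3             # a scale estimated from the inputs is enough
True
>>> abs(r) <= eps * abs(r) + 1e-12  # or let an absolute term supply the scale
True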

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing for close to zero?

2009-01-19 Thread Charles R Harris
On Mon, Jan 19, 2009 at 10:48 PM, Robert Kern robert.k...@gmail.com wrote:

 On Mon, Jan 19, 2009 at 23:36, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
  On Mon, Jan 19, 2009 at 9:17 PM, Robert Kern robert.k...@gmail.com
 wrote:
 
  On Mon, Jan 19, 2009 at 22:09, Charles R Harris
  charlesr.har...@gmail.com wrote:
  
  
   On Mon, Jan 19, 2009 at 7:23 PM, Jonathan Taylor
   jonathan.tay...@utoronto.ca wrote:
  
   Interesting.  That makes sense and I suppose that also explains why
   there is no function to do this sort of thing for you.
  
   A combination of relative and absolute errors is another common
   solution,
   i.e., test against relerr*max(abs(array_of_inputs)) + abserr. In cases
   like
   this relerr is typically eps and abserr tends to be something like
   1e-12,
   which keeps you from descending towards zero any further than you need
   to.
 
  I don't think the absolute error term is appropriate in this case. If
  all of my inputs are of the size 1e-12, I would expect a result of
  1e-14 to be significantly far from 0.
 
  Sure, that's why you *chose* constants appropriate to the problem.

 But that's what eps*max(abs(array_of_inputs)) is supposed to do.

 In the formulation that you are using (e.g. that of
 assert_arrays_almost_equal()), the absolute error comes into play when
 you are comparing two numbers in ignorance of the processes that
 created them. The relative error in that formula is being adjusted by
 the size of the two numbers (*not* the inputs to the algorithm). The
 two numbers may be close to 0, but the relevant inputs to the
 algorithm may be ~1, let's say. In that case, you need the absolute
 error term to provide the scale information that is otherwise not
 present in the comparison.

 But if you know what the inputs to the calculation were, you can
 estimate the scale factor for the relative tolerance directly
 (rigorously, if you've done the numerical analysis) and the absolute
 tolerance is supernumerary.


So you do bisection on an oddball curve, and 512 iterations later you hit
zero... Or you do numeric integration where there is lots of cancellation.
These problems aren't new, and the mixed method for tolerance has been quite
standard for many years. I don't see why you want to argue
about it; if you don't like the combined method, set the absolute error to
zero, problem solved.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing for close to zero?

2009-01-19 Thread Robert Kern
On Tue, Jan 20, 2009 at 00:21, Charles R Harris
charlesr.har...@gmail.com wrote:

 On Mon, Jan 19, 2009 at 10:48 PM, Robert Kern robert.k...@gmail.com wrote:

 On Mon, Jan 19, 2009 at 23:36, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
  On Mon, Jan 19, 2009 at 9:17 PM, Robert Kern robert.k...@gmail.com
  wrote:
 
  On Mon, Jan 19, 2009 at 22:09, Charles R Harris
  charlesr.har...@gmail.com wrote:
  
  
   On Mon, Jan 19, 2009 at 7:23 PM, Jonathan Taylor
   jonathan.tay...@utoronto.ca wrote:
  
   Interesting.  That makes sense and I suppose that also explains why
   there is no function to do this sort of thing for you.
  
   A combination of relative and absolute errors is another common
   solution,
   i.e., test against relerr*max(abs(array_of_inputs)) + abserr. In
   cases
   like
   this relerr is typically eps and abserr tends to be something like
   1e-12,
   which keeps you from descending towards zero any further than you
   need
   to.
 
  I don't think the absolute error term is appropriate in this case. If
  all of my inputs are of the size 1e-12, I would expect a result of
  1e-14 to be significantly far from 0.
 
  Sure, that's why you *chose* constants appropriate to the problem.

 But that's what eps*max(abs(array_of_inputs)) is supposed to do.

 In the formulation that you are using (e.g. that of
 assert_arrays_almost_equal()), the absolute error comes into play when
 you are comparing two numbers in ignorance of the processes that
 created them. The relative error in that formula is being adjusted by
 the size of the two numbers (*not* the inputs to the algorithm). The
 two numbers may be close to 0, but the relevant inputs to the
 algorithm may be ~1, let's say. In that case, you need the absolute
 error term to provide the scale information that is otherwise not
 present in the comparison.

 But if you know what the inputs to the calculation were, you can
 estimate the scale factor for the relative tolerance directly
 (rigorously, if you've done the numerical analysis) and the absolute
 tolerance is supernumerary.


 So you do bisection on an oddball curve,  512 iterations later you hit
 zero... Or you do numeric integration where there is lots of cancellation.
 These problems aren't new and the mixed method for tolerance is quite
 standard and has been for many years. I don't see why you want to argue
 about it, if you don't like the combined method, set the absolute error to
 zero, problem solved.

I think we're talking about different things. I'm talking about the
way to estimate a good value for the absolute error. My
array_of_inputs was not the values that you are comparing to zero, but
the inputs to the algorithm that created the value you are comparing
to zero.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing for close to zero?

2009-01-19 Thread Charles R Harris
On Mon, Jan 19, 2009 at 11:26 PM, Robert Kern robert.k...@gmail.com wrote:

 On Tue, Jan 20, 2009 at 00:21, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
  On Mon, Jan 19, 2009 at 10:48 PM, Robert Kern robert.k...@gmail.com
 wrote:
 
  On Mon, Jan 19, 2009 at 23:36, Charles R Harris
  charlesr.har...@gmail.com wrote:
  
   On Mon, Jan 19, 2009 at 9:17 PM, Robert Kern robert.k...@gmail.com
   wrote:
  
   On Mon, Jan 19, 2009 at 22:09, Charles R Harris
   charlesr.har...@gmail.com wrote:
   
   
On Mon, Jan 19, 2009 at 7:23 PM, Jonathan Taylor
jonathan.tay...@utoronto.ca wrote:
   
Interesting.  That makes sense and I suppose that also explains
 why
there is no function to do this sort of thing for you.
   
A combination of relative and absolute errors is another common
solution,
i.e., test against relerr*max(abs(array_of_inputs)) + abserr. In
cases
like
this relerr is typically eps and abserr tends to be something like
1e-12,
which keeps you from descending towards zero any further than you
need
to.
  
   I don't think the absolute error term is appropriate in this case. If
   all of my inputs are of the size 1e-12, I would expect a result of
   1e-14 to be significantly far from 0.
  
   Sure, that's why you *chose* constants appropriate to the problem.
 
  But that's what eps*max(abs(array_of_inputs)) is supposed to do.
 
  In the formulation that you are using (e.g. that of
  assert_arrays_almost_equal()), the absolute error comes into play when
  you are comparing two numbers in ignorance of the processes that
  created them. The relative error in that formula is being adjusted by
  the size of the two numbers (*not* the inputs to the algorithm). The
  two numbers may be close to 0, but the relevant inputs to the
  algorithm may be ~1, let's say. In that case, you need the absolute
  error term to provide the scale information that is otherwise not
  present in the comparison.
 
  But if you know what the inputs to the calculation were, you can
  estimate the scale factor for the relative tolerance directly
  (rigorously, if you've done the numerical analysis) and the absolute
  tolerance is supernumerary.
 
 
  So you do bisection on an oddball curve,  512 iterations later you hit
  zero... Or you do numeric integration where there is lots of
 cancellation.
  These problems aren't new and the mixed method for tolerance is quite
  standard and has been for many years. I don't see why you want to argue
  about it, if you don't like the combined method, set the absolute error
 to
  zero, problem solved.

 I think we're talking about different things. I'm talking about the
 way to estimate a good value for the absolute error. My
 array_of_inputs was not the values that you are comparing to zero, but
 the inputs to the algorithm that created the value you are comparing
 to zero.


Ah. But that won't generally work for polynomials, they are too ill
conditioned with respect to the coefficients. Even quadratics solved using
the standard formula with the +/- can be ill conditioned. And that isn't to
mention that the zeros are scale invariant, i.e., you can multiply the whole
equation by some ginormous number and the zeros will remain the same. It's
fun for a rainy day to check the scale invariance of the zero estimates of
various solution algorithms.
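
For example (np.roots here just as a stand-in for whatever solver is being
poked at):

import numpy as np

p = np.array([1.0, -3.0, 2.0])   # x**2 - 3x + 2 = (x - 1)(x - 2)
print(np.roots(p))               # [ 2.  1.]
print(np.roots(1e12 * p))        # scaling the whole equation leaves the zeros unchanged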

On the other hand, that method of estimating the error might work for
integrals if the result scales with the input parameters.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Testing: Failed examples don't raise errors on buildbot.

2008-07-20 Thread Charles R Harris
Alan, Stefan

Not raising errors seems OK for examples, but some of the unit tests are
also implemented as doctests, and their failures are hidden in the logs. I'm
not sure what to do about this, but thought it worth pointing out. Also, it
would be nice if skipped tests didn't generate large bits of printout; it
makes it hard to find relevant failures.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing: Failed examples don't raise errors on buildbot.

2008-07-20 Thread Alan McIntyre
On Sun, Jul 20, 2008 at 9:17 PM, Alan McIntyre [EMAIL PROTECTED] wrote:
 The skipped test verbosity is annoying; I'll see if there's a way to
 make that a bit cleaner-looking for some low verbosity level.

The latest release version of nose from easy_install (0.10.3) doesn't
generate that verbose output for skipped tests.  Should we move up to
requiring 0.10.3 for tests?
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing: Failed examples don't raise errors on buildbot.

2008-07-20 Thread Robert Kern
On Sun, Jul 20, 2008 at 21:47, Alan McIntyre [EMAIL PROTECTED] wrote:
 On Sun, Jul 20, 2008 at 9:17 PM, Alan McIntyre [EMAIL PROTECTED] wrote:
 The skipped test verbosity is annoying; I'll see if there's a way to
 make that a bit cleaner-looking for some low verbosity level.

 The latest release version of nose from easy_install (0.10.3) doesn't
 generate that verbose output for skipped tests.  Should we move up to
 requiring 0.10.3 for tests?

I don't think aesthetics are worth requiring a particular version.
numpy doesn't need it; the users can decide whether they want it or
not. We should try to have it installed on the buildbots, though,
since we *are* the users in that case.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing: Failed examples don't raise errors on buildbot.

2008-07-20 Thread Alan McIntyre
On Sun, Jul 20, 2008 at 10:56 PM, Robert Kern [EMAIL PROTECTED] wrote:
 I don't think aesthetics are worth requiring a particular version.
 numpy doesn't need it; the users can decide whether they want it or
 not. We should try to have it installed on the buildbots, though,
 since we *are* the users in that case.

Actually, I was considering asking to move the minimum nose version up
to 0.10.3 simply because it's the current version, even before this aesthetic
issue came up.  There are about 30 bug fixes between 0.10.0 and 0.10.3,
including one that fixed some situations in which exceptions were
being hidden and one that makes the coverage reporting more accurate.
It's not a big deal, though.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing: Failed examples don't raise errors on buildbot.

2008-07-20 Thread Gael Varoquaux
On Sun, Jul 20, 2008 at 11:09:04PM -0400, Alan McIntyre wrote:
 Actually I was considering asking to move the minimum nose version up
 to 0.10.3 just because it's the current version before this aesthetic
 issue came up.  There's about 30 bug fixes between 0.10.0 and 0.10.3,
 including one that fixed some situations in which exceptions were
 being hidden and one that makes the coverage reporting more accurate.
 It's not a big deal, though.

There might be a case to move to 10.3, considering the large number of
bug fixes, but in general I think it is a bad idea to require leading-edge
packages. The reason is that you would like people to be able to
rely on packaged versions of the different tools to build and test a
package. By packaged versions, I mean versions in the repositories of the
main Linux distributions, and MacPorts and Fink. Each time we require
something outside a repository, we lose testers.

Gaël
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing: Failed examples don't raise errors on buildbot.

2008-07-20 Thread Alan McIntyre
On Sun, Jul 20, 2008 at 11:17 PM, Gael Varoquaux
[EMAIL PROTECTED] wrote:
 There might be a case to move to 10.3, considering the large amount of
 bug fixes, but in general I think it is a bad idea to require leading
 edge packages. The reason being that you would like people to be able to
 rely on packaged version of the different tools to build an test a
 package. By packaged versions, I mean versions in the repositories of the
 main linux distributions, and macport and fink. Each time we require
 something outside a repository, we loose testers.

Fair enough; does anybody have any idea which version of nose is
generally available from distributions like the ones you mentioned?
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing: Failed examples don't raise errors on buildbot.

2008-07-20 Thread Gael Varoquaux
On Sun, Jul 20, 2008 at 11:19:57PM -0400, Alan McIntyre wrote:
 On Sun, Jul 20, 2008 at 11:17 PM, Gael Varoquaux
 [EMAIL PROTECTED] wrote:
  There might be a case to move to 10.3, considering the large amount of
  bug fixes, but in general I think it is a bad idea to require leading
  edge packages. The reason being that you would like people to be able to
  rely on packaged version of the different tools to build an test a
  package. By packaged versions, I mean versions in the repositories of the
  main linux distributions, and macport and fink. Each time we require
  something outside a repository, we loose testers.

 Fair enough; does anybody have any idea which version of nose is
 generally available from distributions like the ones you mentioned?

Ubuntu hardy (current): 10.0 (http://packages.ubuntu.com)
Ubuntu intrepid (next): 10.3 (http://packages.ubuntu.com)
Debian unstable:        10.3 (http://packages.dbian.com)
Fedora 8:   10.0 (https://admin.fedoraproject.org/pkgdb/)

For the rest I can't figure out how to get the information. I suspect we
can standardise on things around six months old. Debian unstable tracks
upstream closely, Ubuntu and Fedora have a release cycle of 6 months, I
don't know about SUSE, but I think it is similar, and MacPorts, Fink, and
Gentoo track upstream closely.

Gaël
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing: Failed examples don't raise errors on buildbot.

2008-07-20 Thread Alan McIntyre
On Sun, Jul 20, 2008 at 11:34 PM, Gael Varoquaux
[EMAIL PROTECTED] wrote:
 For the rest I can't figure out how to get the information. I suspect we
 can standardise on things around six month old. Debian unstable tracks
 closely upstream, Ubuntu and Fedora have a release cycle of 6 months, I
 don't know about SUSE, but I think it is similar, and macports, fink, or
 Gentoo trac closely upstream.

It looks like Macports is at 0.10.1: http://py-nose.darwinports.com/

So it looks like 0.10.0 should still be a safe bet for being generally
available.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Testing -heads up with #random

2008-07-17 Thread Fernando Perez
Hi Alan,

I was trying to reuse your #random checker for ipython but kept
running into problems.  Is it working for you in numpy in actual code?
 Because in the entire SVN tree I only see it mentioned here:

maqroll[numpy] grin #random
./numpy/testing/nosetester.py:
   43 : if #random in want:
   67 : # #random directive to allow executing a command
while ignoring its
  375 : # try the #random directive on the output line
  379 : <BadExample object at 0x084D05AC>  #random: may vary on your system
maqroll[numpy]
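
For reference, the intended usage appears to be a doctest along these lines
(a sketch only, mirroring the directive on the line-379 hit above; the array
values are obviously arbitrary):

def example():
    """
    >>> import numpy as np
    >>> np.random.rand(2)
    array([ 0.617,  0.429])  #random: may vary on your system
    """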

I'm asking because I suspect it is NOT working for numpy.  The reason
is some really nasty, silent exception trapping being done by nose.
In nose's loadTestsFromModule,  which you've overridden to include:

yield NumpyDocTestCase(test,
   optionflags=optionflags,
   checker=NumpyDoctestOutputChecker())

it's likely that this line can cause an exception (at least it was
doing it for me in ipython, because this class inherits from npd but
tries to directly call __init__ from doctest.DocTestCase).
Unfortunately, nose  will  silently swallow *any* exception there,
simply ignoring your tests and not even telling you what happened.
Very, very annoying.  You can see if you have an exception by doing
something like

try:
    dt = DocTestCase(test,
                     optionflags=optionflags,
                     checker=checker)
except:
    from IPython import ultraTB
    ultraTB.AutoFormattedTB()()
yield dt

to force a traceback printing.

Anyway, I mention this because I just wasted a good chunk of time
fighting this one for ipython, where I need the #random functionality.
 It seems it's not used in numpy yet, but I imagine it will soon, and
I figured I'd save you some time.

Cheers,

f
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Testing -heads up with #random

2008-07-17 Thread Alan McIntyre
On Thu, Jul 17, 2008 at 4:25 AM, Fernando Perez [EMAIL PROTECTED] wrote:
 I was trying to reuse your #random checker for ipython but kept
 running into problems.  Is it working for you in numpy in actual code?
  Because in the entire SVN tree I only see it mentioned here:

 maqroll[numpy] grin #random
 ./numpy/testing/nosetester.py:
   43 : if #random in want:
   67 : # #random directive to allow executing a command
 while ignoring its
  375 : # try the #random directive on the output line
  379 : BadExample object at 0x084D05AC  #random: may vary on your system
 maqroll[numpy]

The second example is a doctest for the feature; for me it fails if
#random is removed, and passes otherwise.

 I'm asking because I suspect it is NOT working for numpy.  The reason
 is some really nasty, silent exception trapping being done by nose.
 In nose's loadTestsFromModule,  which you've overridden to include:

Ah, thanks; I recall seeing a comment somewhere about nose swallowing
exceptions in code under test, but I didn't know it would do things
like that.

 Unfortunately, nose  will  silently swallow *any* exception there,
 simply ignoring your tests and not even telling you what happened.
 Very, very annoying.  You can see if you have an exception by doing
 something like

I added that to my local nosetester.py, but it didn't turn up any
exceptions.  I'll keep it in my working copy so I'm not as likely to
miss some problem in the future.

 Anyway, I mention this because I just wasted a good chunk of time
 fighting this one for ipython, where I need the #random functionality.
  It seems it's not used in numpy yet, but I imagine it will soon, and
 I figured I'd save you some time.

Thanks :)
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion