Re: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements.

2015-06-19 Thread Charles R Harris
On Fri, Jun 19, 2015 at 3:05 PM, Sturla Molden wrote:

> Charles R Harris  wrote:
>
> > I'm looking to change some numpy deprecations into errors as well as
> remove
> > some deprecated functions. The problem I see is that
> > SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, really,
> old.
> > So the question is, does "support" mean compiles with earlier versions
> > of Numpy ?
>
> It means there is a Travis CI build with NumPy 1.6.2. So any change to the
> SciPy source code must compile with NumPy 1.6 and any later version of
> NumPy.
>
> There is no Travis CI build with NumPy 1.5. I don't think we know for sure
> if it is really compatible with the current SciPy.
>

I guess this also raises the question of which versions of SciPy NumPy needs
to support. I'm thinking of removing noprefix.h, but it costs nothing to
leave it in, as it must be explicitly included by anyone who needs it. Hmm,
maybe best to leave it be, although I suspect anyone using it could just as
well use an earlier version of NumPy.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements.

2015-06-19 Thread josef.pktd
On Fri, Jun 19, 2015 at 4:08 PM, Charles R Harris  wrote:

> Hi All,
>
> I'm looking to change some numpy deprecations into errors as well as
> remove some deprecated functions. The problem I see is that
> SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, really, old.
> So the question is, does "support" mean compiles with earlier versions
> of Numpy ? If that is the case there is very little that can be done about
> deprecation. OTOH, if it means Scipy can be compiled with more recent numpy
> versions but used with earlier Numpy versions (which is a good feat), I'd
> like to know. I'd also like to know what the interface requirements are, as
> I'd like to remove old_defines.h
>

NumPy 1.6, I think, is still accurate:
https://github.com/scipy/scipy/pull/4265

As far as I know, you can never compile against a newer and run with an
older version.

We had a discussion recently about backwards versus forwards binary
compatibility.
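As a sketch (mine, not code from the thread) of what that direction of compatibility means in practice from the Python side: an extension built against a newer NumPy cannot be assumed to run on an older one, so projects typically enforce a minimum version at import time. `MIN_NUMPY` below is a hypothetical value echoing the 1.6.2 floor discussed in this thread; `NumpyVersion` is the comparison helper that ships in `numpy.lib` (since NumPy 1.9).

```python
# A minimal runtime version floor, assuming a hypothetical MIN_NUMPY of 1.6.2.
import numpy as np
from numpy.lib import NumpyVersion

MIN_NUMPY = '1.6.2'

# NumpyVersion compares version strings correctly, including dev/rc suffixes.
if NumpyVersion(np.__version__) < MIN_NUMPY:
    raise ImportError('NumPy >= %s required, found %s'
                      % (MIN_NUMPY, np.__version__))
```

This only guards the Python-level contract; binary (C-API) compatibility still has to be handled at build time.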

Josef




>
> Chuck
>


Re: [Numpy-discussion] I can't tell if Numpy is configured properly with show_config()

2015-06-19 Thread Sturla Molden
Elliot Hallmark  wrote:

> And I can't help but wonder if there is further configuration I need
> to make numpy faster, or if this is just a difference between our
> machines.

Try to build NumPy with Intel MKL or OpenBLAS instead. 

ATLAS is only efficient on the host computer on which it is built, and even
there it is not very fast (but far better than the reference BLAS). 

Sturla



[Numpy-discussion] I can't tell if Numpy is configured properly with show_config()

2015-06-19 Thread Elliot Hallmark
Debian Sid, 64-bit.  I was trying to fix the problem of np.dot running very
slow.

I ended up uninstalling numpy, installing libatlas3-base through apt-get
and re-installing numpy.  The performance of dot is greatly improved!  But
I can't tell by any other means whether numpy is set up correctly.
Compare the fast installation with one in a virtualenv that is still slow:

###
fast one
###

In [1]: import time, numpy

In [2]: n=1000

In [3]: A = numpy.random.rand(n,n)

In [4]: B = numpy.random.rand(n,n)

In [5]: then = time.time(); C=numpy.dot(A,B); print time.time()-then
0.306427001953

In [6]: numpy.show_config()
blas_info:
libraries = ['blas']
library_dirs = ['/usr/lib']
language = f77
lapack_info:
libraries = ['lapack']
library_dirs = ['/usr/lib']
language = f77
atlas_threads_info:
  NOT AVAILABLE
blas_opt_info:
libraries = ['blas']
library_dirs = ['/usr/lib']
language = f77
define_macros = [('NO_ATLAS_INFO', 1)]
atlas_blas_threads_info:
  NOT AVAILABLE
openblas_info:
  NOT AVAILABLE
lapack_opt_info:
libraries = ['lapack', 'blas']
library_dirs = ['/usr/lib']
language = f77
define_macros = [('NO_ATLAS_INFO', 1)]
atlas_info:
  NOT AVAILABLE
lapack_mkl_info:
  NOT AVAILABLE
blas_mkl_info:
  NOT AVAILABLE
atlas_blas_info:
  NOT AVAILABLE
mkl_info:
  NOT AVAILABLE

###
slow one
###

In [1]: import time, numpy

In [2]: n=1000

In [3]: A = numpy.random.rand(n,n)

In [4]: B = numpy.random.rand(n,n)

In [5]: then = time.time(); C=numpy.dot(A,B); print time.time()-then
7.88430500031

In [6]: numpy.show_config()
blas_info:
libraries = ['blas']
library_dirs = ['/usr/lib']
language = f77
lapack_info:
libraries = ['lapack']
library_dirs = ['/usr/lib']
language = f77
atlas_threads_info:
  NOT AVAILABLE
blas_opt_info:
libraries = ['blas']
library_dirs = ['/usr/lib']
language = f77
define_macros = [('NO_ATLAS_INFO', 1)]
atlas_blas_threads_info:
  NOT AVAILABLE
openblas_info:
  NOT AVAILABLE
lapack_opt_info:
libraries = ['lapack', 'blas']
library_dirs = ['/usr/lib']
language = f77
define_macros = [('NO_ATLAS_INFO', 1)]
atlas_info:
  NOT AVAILABLE
lapack_mkl_info:
  NOT AVAILABLE
blas_mkl_info:
  NOT AVAILABLE
atlas_blas_info:
  NOT AVAILABLE
mkl_info:
  NOT AVAILABLE

#

Further, in the following comparison between plain CPython and converting to
a numpy array for one operation, I get CPython being faster by the same
amount in both environments.  But another user found numpy faster.

In [1]: import numpy as np

In [2]: pts = range(100,1000)

In [3]: pts[100] = 0

In [4]: %timeit pts_arr = np.array(pts); mini = np.argmin(pts_arr)
1 loops, best of 3: 129 µs per loop

In [5]: %timeit mini = sorted(enumerate(pts))[0][1]
1 loops, best of 3: 89.2 µs per loop

The other user got

In [29]: %timeit pts_arr = np.array(pts); mini = np.argmin(pts_arr)
1 loops, best of 3: 37.7 µs per loop

In [30]: %timeit mini = sorted(enumerate(pts))[0][1]
1 loops, best of 3: 69.2 µs per loop

And I can't help but wonder if there is further configuration I need
to make numpy faster, or if this is just a difference between our
machines.

In the future, should I ignore show_config() and just do this dot product
test?
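For what it's worth, the interactive session above can be wrapped into a small standalone script (a rough smoke test, not a rigorous benchmark; thresholds will vary by machine):

```python
import time
import numpy

n = 1000
a = numpy.random.rand(n, n)
b = numpy.random.rand(n, n)

start = time.time()
c = numpy.dot(a, b)
elapsed = time.time() - start

# With an optimized BLAS (ATLAS/OpenBLAS/MKL) this matrix product typically
# finishes well under a second; several seconds points at the unoptimized
# reference BLAS.
print('dot of %dx%d matrices took %.3f s' % (n, n, elapsed))
```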

Any guidance would be appreciated.

Thanks,
  Elliot


Re: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements.

2015-06-19 Thread Sturla Molden
Charles R Harris  wrote:

> I'm looking to change some numpy deprecations into errors as well as remove
> some deprecated functions. The problem I see is that
> SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, really, old.
> So the question is, does "support" mean compiles with earlier versions
> of Numpy ?

It means there is a Travis CI build with NumPy 1.6.2. So any change to the
SciPy source code must compile with NumPy 1.6 and any later version of
NumPy. 

There is no Travis CI build with NumPy 1.5. I don't think we know for sure
if it is really compatible with the current SciPy.

Sturla



Re: [Numpy-discussion] Python 3 and isinstance(np.int64(42), int)

2015-06-19 Thread Chris Barker
On Wed, Jun 17, 2015 at 11:13 PM, Nathaniel Smith  wrote:

>  there's some
> argument that in Python, doing explicit type checks like this is
> usually a sign that one is doing something awkward,


I tend to agree with that.

On the other hand, numpy itself is kind-of sort-of statically typed. But in
that case, if you need to know the type of an array -- check the array's
dtype.
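A short sketch of what "check the array's dtype" looks like in practice (my illustration, not from the original mail):

```python
import numpy as np

a = np.zeros(7, dtype=np.int64)

# Instead of isinstance() checks on scalars pulled out of the array,
# inspect the array's dtype directly:
assert a.dtype == np.int64

# Or test against an abstract kind, which covers all integer widths:
assert np.issubdtype(a.dtype, np.integer)
```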

Also:

 >>> a = np.zeros(7, int)
 >>> n = a[3]
 >>> type(n)
 <type 'numpy.int64'>

I never liked declaring numpy arrays with the Python types like "int" or
"float" -- in numpy you usually care about the exact type, so you should
simply use "int64" if you want a 64-bit int, and "float64" if you want a
64-bit float. Granted, Python floats have always been float64 (on all
platforms??), and Python ints used to be a reasonable int type, but now
that Python ints are arbitrary-precision in py3, it really makes sense to
be explicit.

And now that I think about it, in py2, int is 32 bit on win64 and 64 bit on
*nix64 -- so you're really better off being explicit with your numpy arrays.
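A small illustration of that point (mine, not from the original mail): the platform-dependent part is the default integer, which historically followed the C long, so spelling out the width removes the ambiguity.

```python
import numpy as np

# np.zeros(3, int) historically used the platform default integer (C long):
# 32-bit on 64-bit Windows, 64-bit on 64-bit Linux/macOS.  Explicit widths
# behave the same everywhere:
a = np.zeros(3, dtype=np.int64)
b = np.zeros(3, dtype=np.float64)

assert a.dtype.itemsize == 8   # always 8 bytes, on any platform
assert b.dtype.itemsize == 8
```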

-CHB


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov


[Numpy-discussion] Clarification sought on Scipy Numpy version requirements.

2015-06-19 Thread Charles R Harris
Hi All,

I'm looking to change some numpy deprecations into errors as well as remove
some deprecated functions. The problem I see is that SciPy claims to support
Numpy >= 1.5, and Numpy 1.5 is really, really old. So the question is, does
"support" mean compiles with earlier versions of Numpy? If that is the case,
there is very little that can be done about deprecation. OTOH, if it means
SciPy can be compiled with more recent Numpy versions but used with earlier
Numpy versions (which would be quite a feat), I'd like to know. I'd also
like to know what the interface requirements are, as I'd like to remove
old_defines.h.

Chuck


Re: [Numpy-discussion] Flag for np.tile to use as_strided to reduce memory

2015-06-19 Thread Stephan Hoyer
On Fri, Jun 19, 2015 at 10:39 AM, Sebastian Berg  wrote:

> No, what tile does cannot be represented that way. If it was possible
> you can achieve the same using `np.broadcast_to` basically, which was
> just added though. There are some other things you can do, like rolling
> window (adding dimensions), maybe some day we should add that (or you
> want to take a shot ;)).
>
> - Sebastian
>

The one case where np.tile could be done using stride tricks is if the
dimension you want to repeat has size 1 or does not currently exist.
np.broadcast_to was an attempt to make this stuff less awkward, though it
still requires mixing in transposes.
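That special case can be sketched like so (my example; np.broadcast_to was added in NumPy 1.10):

```python
import numpy as np

a = np.arange(3)                      # shape (3,)

tiled = np.tile(a, (4, 1))            # real copy: 4x the memory
viewed = np.broadcast_to(a, (4, 3))   # zero-copy, read-only view

assert (tiled == viewed).all()
assert viewed.strides[0] == 0         # the repeated axis costs no memory
```

The view is read-only, which is exactly the safety broadcast_to adds over raw as_strided.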


Re: [Numpy-discussion] Flag for np.tile to use as_strided to reduce memory

2015-06-19 Thread Sebastian Berg
On Fr, 2015-06-19 at 10:06 +0200, Freddy Rietdijk wrote:
> Hi,
> 
> 
> Having read that it is possible to basically 'copy' elements along an
> axis without actually copying the values by making use of the strides,
> I wonder whether it is possible to add this as an option to np.tile.
> 

No, what tile does cannot be represented that way. If it was possible
you can achieve the same using `np.broadcast_to` basically, which was
just added though. There are some other things you can do, like rolling
window (adding dimensions), maybe some day we should add that (or you
want to take a shot ;)).

- Sebastian

> 
> It would be easier than having to use as_strided or broadcast_arrays
> to repeat data without actually replicating it.
> 
> 
> http://stackoverflow.com/questions/23695851/python-repeating-numpy-array-without-replicating-data
> 
> https://scipy-lectures.github.io/advanced/advanced_numpy/#example-fake-dimensions-with-strides
> 
> 
> 
> Frederik





[Numpy-discussion] Flag for np.tile to use as_strided to reduce memory

2015-06-19 Thread Freddy Rietdijk
Hi,

Having read that it is possible to basically 'copy' elements along an axis
without actually copying the values by making use of the strides, I wonder
whether it is possible to add this as an option to np.tile.

It would be easier than having to use as_strided or broadcast_arrays to
repeat data without actually replicating it.

http://stackoverflow.com/questions/23695851/python-repeating-numpy-array-without-replicating-data
https://scipy-lectures.github.io/advanced/advanced_numpy/#example-fake-dimensions-with-strides
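The trick those links describe can be sketched with as_strided (my illustration; handle with care, since every "repeated" row aliases the same memory, so writes through the view show up in all rows):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(3)

# 'Repeat' a four times along a new leading axis without copying, by giving
# that axis a stride of 0 bytes:
b = as_strided(a, shape=(4, 3), strides=(0, a.strides[0]))

assert b.shape == (4, 3)
assert (b == a).all()   # every row is a view of the same data as `a`
```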

Frederik