This was originally posted on Stack Overflow
(https://stackoverflow.com/questions/28853740/numpy-array-casting-ruled-not-safe),
and it was suggested that it is probably a bug in numpy.take.
Python 2.7.8 |Anaconda 2.1.0 (32-bit)| (default, Jul 2 2014, 15:13:35)
[MSC v.1500 32 bit (Intel)] on win32
In a 64-bit environment, is it possible to universally set the dtype to 32-bit
for all ints, floats, etc., to avoid setting the dtype individually for each
array object and calculation?
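As far as I know, NumPy has no global switch for a 32-bit default dtype; a common workaround is a thin creation helper that pins the dtype once. A minimal sketch (the helper name `array32` is hypothetical, not a NumPy API):

```python
import numpy as np

# Hypothetical helper: pin float32 at creation time so downstream
# arithmetic between float32 operands stays float32.
def array32(data):
    return np.asarray(data, dtype=np.float32)

a = array32([1.0, 2.0, 3.0])
b = array32([4.0, 5.0, 6.0])
c = a * b  # float32 * float32 -> float32, no per-array dtype bookkeeping
```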
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
Agree that OpenBLAS is the most favorable route instead of starting from
scratch.
Btw, why is sparse BLAS not included? I've always been under the
impression that scipy sparse supports BLAS - no?
___
NumPy-Discussion mailing list
Scipy sparse uses matrices - I was under the impression that scipy sparse only
works with matrices. Or have things moved on?
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Does the numpy indexing refactoring address the performance of fancy indexing
highlighted in Wes McKinney's blog some years back -
http://wesmckinney.com/blog/?p=215 - where numpy.take() was shown to be
preferable to fancy indexing?
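For anyone wanting to reproduce the comparison, a quick sanity check (not a rigorous benchmark) that the two forms are interchangeable before timing them with timeit:

```python
import numpy as np

# take() and fancy indexing select the same elements; any speed
# difference can then be measured with timeit on these two lines.
a = np.arange(1000000)
idx = np.random.randint(0, a.size, size=10000)

fancy = a[idx]      # fancy (advanced) indexing
taken = a.take(idx) # numpy.take equivalent
assert np.array_equal(fancy, taken)
```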
___
Francesc: Does numexpr support scipy sparse matrices?
___
This topic gets discussed often with NumPy developers, but since the
requirement for an optimized BLAS is pretty common these days, how about
distributing NumPy with OpenBLAS by default? People who don't want optimized
BLAS or OpenBLAS can then edit the site.cfg file to add/remove it. I can
Francesc: Congratulations, and I will definitely be benchmarking Numexpr soon.
Will similar performance improvements be seen with OpenBLAS as with MKL?
___
For me, binary data with respect to arrays means that the data values are
[0|1]. Is this what is meant in "The compression process is carried out
internally by Blosc, a high-performance compressor that is optimized for
binary data."?
___
When using vstack or hstack on large arrays, are there any performance
penalties, e.g. does it take longer or make a copy of an array during the
operation?
___
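On the copying question above: since the inputs occupy separate buffers, vstack cannot generally return a view, so the result is a freshly allocated copy of both. A small sketch demonstrating this with np.shares_memory:

```python
import numpy as np

# vstack allocates a new array and copies both inputs into it;
# neither input shares its buffer with the result.
a = np.ones((1000, 3))
b = np.zeros((2000, 3))
stacked = np.vstack((a, b))

assert stacked.shape == (3000, 3)
assert not np.shares_memory(stacked, a)
assert not np.shares_memory(stacked, b)
```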
I want to write a general exception handler to warn if too much data is being
loaded for the RAM size of a machine for a numpy array operation to succeed.
For example, the program multiplies two floating-point arrays A and B which
are populated with loadtxt. While the data is
If A is very large and B is very small, then np.concatenate((A, B)) will copy
B's data over to A, which would take less time than the other way around - is
that so?
Does 'memory order' mean that it depends on sufficient contiguous
memory being available for B, otherwise it will be fragmented, or
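One point worth checking empirically: concatenate does not grow A in place; it allocates a new result array and copies both inputs into it, regardless of which operand is larger. A quick sketch:

```python
import numpy as np

# concatenate returns a freshly allocated array; neither input's
# buffer is reused, so the relative sizes of A and B only affect
# how much data gets copied, not whether a copy happens.
A = np.arange(6)
B = np.arange(2)
C = np.concatenate((A, B))

assert C.size == A.size + B.size
assert not np.shares_memory(C, A)
assert not np.shares_memory(C, B)
```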
So, with the example case, the approximate memory cost for an in-place
operation would be:
A *= B : 2N
But, if the original A or B is to remain unchanged then it will be:
C = A * B : 3N ?
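The in-place versus out-of-place distinction above can be made concrete: `A *= B` writes the result into A's existing buffer (two arrays live: A and B), while `C = A * B` allocates a third array. A minimal sketch:

```python
import numpy as np

A = np.ones(5)
B = np.full(5, 2.0)

buf = A
A *= B                  # in-place: result written into A's existing buffer
assert A is buf         # same array object, no new allocation for the result

C = A * B               # out-of-place: a third array is allocated for C
assert C is not A and C is not B
```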
___
Francesc: Thanks. I looked at numexpr a few years back but it didn't support
array slicing/indexing. Has that changed?
___
Probably a loaded question, but is there a significant performance difference
between using MKL (or OpenBLAS) on multi-core CPUs and cuBLAS on GPUs? Does
anyone have recent experience or a link to an independent benchmark?
___
Jerome, thanks for the swift response and tests. Crikey, that is a significant
difference at first glance. Would it be possible to compare a BLAS computation,
e.g. a matrix-vector or matrix-matrix calculation? Thanks!
___
Use site.cfg.example as a template to create a new site.cfg. For OpenBLAS,
uncomment:
[openblas]
library_dirs = /opt/OpenBLAS/lib
include_dirs = /opt/OpenBLAS/include
Also, uncomment the default section:
[DEFAULT]
library_dirs = /usr/local/lib
include_dirs = /usr/local/include
That should do it -
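After rebuilding, it's worth confirming that NumPy actually linked against OpenBLAS. A quick check using NumPy's build-config introspection:

```python
import numpy as np

# Prints the BLAS/LAPACK libraries and search paths NumPy was built
# against; the OpenBLAS paths from site.cfg should appear here.
np.__config__.show()
```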