[Numpy-discussion] numpy array casting ruled not safe

2015-03-07 Thread Dinesh Vadhia
This was originally posted on SO (https://stackoverflow.com/questions/28853740/numpy-array-casting-ruled-not-safe) and it was suggested that it is probably a bug in numpy.take. Python 2.7.8 |Anaconda 2.1.0 (32-bit)| (default, Jul 2 2014, 15:13:35) [MSC v.1500 32 bit (Intel)] on win32
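A minimal sketch of the casting rule behind the report: uint64 indices cannot be cast to a signed index type under the 'safe' rule, which is what trips numpy.take on 32-bit builds. The workaround shown (an explicit astype to np.intp) assumes the indices are known to fit in the platform index type.

```python
import numpy as np

# The 'safe' rule rejects uint64 -> int64 because large values could
# overflow; 'same_kind' allows it since both are integer kinds.
print(np.can_cast(np.uint64, np.int64, casting='safe'))       # False
print(np.can_cast(np.uint64, np.int64, casting='same_kind'))  # True

# Workaround sketch: cast indices to the platform index type explicitly
# before calling take (assumes the index values fit in np.intp).
a = np.arange(5)
ind = np.array([0, 2, 4], dtype=np.uint64)
print(a.take(ind.astype(np.intp)))  # [0 2 4]
```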

[Numpy-discussion] Can dtype be set universally?

2014-05-22 Thread Dinesh Vadhia
In a 64-bit environment, is it possible to universally set the dtype to 32-bit for all ints, floats, etc., to avoid setting the dtype individually for each array object and calculation?
NumPy-Discussion mailing list: NumPy-Discussion@scipy.org
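NumPy has no global switch for the default dtype; a common workaround is to centralize it yourself. A minimal sketch, with hypothetical helper names (`FLOAT`, `array`, `zeros` are inventions for illustration, not NumPy API):

```python
import numpy as np

# Hypothetical project-wide defaults, defined once in your own module.
FLOAT = np.float32
INT = np.int32

def array(data, dtype=FLOAT):
    return np.asarray(data, dtype=dtype)

def zeros(shape, dtype=FLOAT):
    return np.zeros(shape, dtype=dtype)

a = array([1.0, 2.0, 3.0])
print(a.dtype)  # float32

# Caveat: mixing with a float64 array still promotes the result,
# so intermediates must be watched too.
print((a + np.ones(3)).dtype)  # float64
```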

Re: [Numpy-discussion] The BLAS problem

2014-04-12 Thread Dinesh Vadhia
Agree that OpenBLAS is the most favorable route instead of starting from scratch. By the way, why is sparse BLAS not included? I've always been under the impression that scipy sparse supports BLAS - no?

Re: [Numpy-discussion] deprecate numpy.matrix

2014-02-10 Thread Dinesh Vadhia
Scipy sparse uses matrices - I was under the impression that scipy sparse only works with matrices, or have things moved on?

Re: [Numpy-discussion] Indexing changes in 1.9

2014-02-03 Thread Dinesh Vadhia
Does the numpy indexing refactoring address the performance of fancy indexing highlighted in Wes McKinney's blog some years back - http://wesmckinney.com/blog/?p=215 - where numpy.take() was shown to be preferable to fancy indexing?
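A rough benchmark sketch for comparing the two spellings; the relative timings depend heavily on the NumPy version, array sizes, and hardware, so this is a way to measure, not a claim about the result:

```python
import numpy as np
import timeit

a = np.random.rand(1_000_000)
idx = np.random.randint(0, len(a), size=500_000)

# Time fancy indexing vs. np.take on the same index array.
t_fancy = timeit.timeit(lambda: a[idx], number=20)
t_take = timeit.timeit(lambda: a.take(idx), number=20)
print(f"fancy: {t_fancy:.4f}s  take: {t_take:.4f}s")

# Both spellings produce identical values either way.
assert np.array_equal(a[idx], a.take(idx))
```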

Re: [Numpy-discussion] ANN: numexpr 2.3 (final) released

2014-01-27 Thread Dinesh Vadhia
Francesc: Does numexpr support scipy sparse matrices?

[Numpy-discussion] MKL and OpenBLAS

2014-01-26 Thread Dinesh Vadhia
This conversation gets discussed often with Numpy developers, but since the requirement for an optimized BLAS is pretty common these days, how about distributing Numpy with OpenBLAS by default? People who don't want optimized BLAS or OpenBLAS can then edit the site.cfg file to add/remove. I can

Re: [Numpy-discussion] ANN: numexpr 2.3 (final) released

2014-01-26 Thread Dinesh Vadhia
Francesc: Congratulations - will definitely be benchmarking Numexpr soon. Will similar performance improvements be seen with OpenBLAS as with MKL?

Re: [Numpy-discussion] ANN: BLZ 0.6.1 has been released

2014-01-26 Thread Dinesh Vadhia
For me, binary data wrt arrays means that the data values are [0|1]. Is this what is meant in "The compression process is carried out internally by Blosc, a high-performance compressor that is optimized for binary data."?

[Numpy-discussion] vstack and hstack performance penalty

2014-01-24 Thread Dinesh Vadhia
When using vstack or hstack for large arrays, are there any performance penalties, e.g. does it take longer time-wise or make a copy of an array during the operation?
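A small sketch answering the copy part of the question: vstack and hstack always allocate a fresh result array and copy both inputs into it, which `np.shares_memory` makes easy to check.

```python
import numpy as np

a = np.ones((1000, 100))
b = np.zeros((1000, 100))

v = np.vstack((a, b))   # shape (2000, 100): new allocation
h = np.hstack((a, b))   # shape (1000, 200): new allocation

print(v.shape, h.shape)
# Neither result shares memory with its inputs: the data was copied.
print(np.shares_memory(v, a))  # False
print(np.shares_memory(h, b))  # False
```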

[Numpy-discussion] Catching out-of-memory error before it happens

2014-01-24 Thread Dinesh Vadhia
I want to write a general exception handler to warn if too much data is being loaded for the RAM size of a machine for a successful numpy array operation to take place. For example, the program multiplies two floating point arrays A and B which are populated with loadtxt. While the data is
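One way to sketch such a pre-flight check: estimate the bytes the result of `A * B` would need from the broadcast shape and promoted dtype, and compare against a budget. The budget value below is a made-up example; in practice you would query the OS for available memory.

```python
import numpy as np

def estimate_product_bytes(a, b):
    # The result of A * B has the broadcast shape and the promoted dtype.
    shape = np.broadcast_shapes(a.shape, b.shape)
    dtype = np.result_type(a, b)
    return int(np.prod(shape)) * dtype.itemsize

A = np.ones((1000, 1000), dtype=np.float64)
B = np.ones((1000, 1000), dtype=np.float64)

needed = estimate_product_bytes(A, B)
print(needed)  # 8000000 bytes for a (1000, 1000) float64 result

budget = 512 * 1024**2  # assumed available RAM, for illustration only
if needed > budget:
    raise MemoryError(f"result would need {needed} bytes")
C = A * B
```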

Re: [Numpy-discussion] vstack and hstack performance penalty

2014-01-24 Thread Dinesh Vadhia
If A is very large and B is very small, then np.concatenate((A, B)) will copy B's data over to A, which would take less time than the other way around - is that so? Does 'memory order' mean that it depends on sufficient contiguous memory being available for B, otherwise it will be fragmented, or

Re: [Numpy-discussion] Catching out-of-memory error before it happens

2014-01-24 Thread Dinesh Vadhia
So, with the example case, the approximate memory cost for an in-place operation would be A *= B : 2N. But if the original A or B is to remain unchanged, then it will be C = A * B : 3N?
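The accounting in the question can be sketched concretely: with A and B each occupying N bytes, the in-place form writes the result into A's existing buffer (~2N total), while `C = A * B` allocates a third N-byte array (~3N at peak).

```python
import numpy as np

N = np.ones(1_000_000).nbytes  # bytes in one float64 array: 8 MB here

A = np.ones(1_000_000)
B = np.ones(1_000_000)
A *= B                         # in-place: no new allocation, ~2N total

A2 = np.ones(1_000_000)
C = A2 * B                     # fresh result array: ~3N at peak
print(N, C.nbytes)             # 8000000 8000000
```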

Re: [Numpy-discussion] Catching out-of-memory error before it happens

2014-01-24 Thread Dinesh Vadhia
Francesc: Thanks. I looked at numexpr a few years back but it didn't support array slicing/indexing. Has that changed?

[Numpy-discussion] MKL + CPU, GPU + cuBLAS comparison

2013-11-26 Thread Dinesh Vadhia
Probably a loaded question, but is there a significant performance difference between using MKL (or OpenBLAS) on multi-core CPUs and cuBLAS on GPUs? Does anyone have recent experience or a link to an independent benchmark?

Re: [Numpy-discussion] MKL + CPU, GPU + cuBLAS comparison

2013-11-26 Thread Dinesh Vadhia
Jerome, thanks for the swift response and tests. Crikey, that is a significant difference at first glance. Would it be possible to compare a BLAS computation, e.g. a matrix-vector or matrix-matrix calculation? Thx!

Re: [Numpy-discussion] ANN: NumPy 1.8.0 release.

2013-10-31 Thread Dinesh Vadhia
Use site.cfg.example as a template to create a new site.cfg. For OpenBLAS, uncomment:
[openblas]
library_dirs = /opt/OpenBLAS/lib
include_dirs = /opt/OpenBLAS/include
Also, uncomment the default section:
[DEFAULT]
library_dirs = /usr/local/lib
include_dirs = /usr/local/include
That should do it -
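After rebuilding with the edited site.cfg, a quick sanity check is to ask NumPy which BLAS it actually linked against (the output format varies across NumPy versions):

```python
import numpy as np

# Prints the build-time library configuration, including the BLAS/LAPACK
# sections; look for the OpenBLAS paths you set in site.cfg.
np.show_config()
```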