On Thu, Feb 21, 2013 at 4:16 AM, Matyáš Novák lo...@centrum.cz wrote:
You could also look into OpenBLAS, which is easier to build and
generally faster than ATLAS. (But alas, not supported by NumPy/SciPy AFAIK.)
It looks like OpenBLAS is BSD-licensed, and thus compatible with numpy/scipy.
On 02/22/2013 05:52 PM, Chris Barker - NOAA Federal wrote:
On Thu, Feb 21, 2013 at 4:16 AM, Matyáš Novák lo...@centrum.cz wrote:
You could also look into OpenBLAS, which is easier to build and
generally faster than ATLAS. (But alas, not supported by NumPy/SciPy AFAIK.)
It looks like OpenBLAS
from scipy.linalg.blas import fblas
dgemm = fblas.dgemm._cpointer
sgemm = fblas.sgemm._cpointer
OK, but this gives me a PyCObject. How do I make it a function pointer of the correct type in Cython?
Thanks again
Sergio
NumPy-Discussion
On 22.02.2013 19:54, Sergio Callegari wrote:
from scipy.linalg.blas import fblas
dgemm = fblas.dgemm._cpointer
sgemm = fblas.sgemm._cpointer
OK, but this gives me a PyCObject. How do I make it a function pointer of the correct type in Cython?
I do not know how to do it in Cython. I coded it
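For reference, the address inside such a capsule can be read out with the CPython C API. A minimal ctypes sketch (Python 3, where PyCapsule replaced the PyCObject mentioned in the thread; on Python 2 the analogous call is PyCObject_AsVoidPtr, and in Cython one would cimport cpython.pycapsule instead):

```python
import ctypes

# PyCapsule_GetPointer is the documented C API call for unwrapping a capsule.
_get_pointer = ctypes.pythonapi.PyCapsule_GetPointer
_get_pointer.restype = ctypes.c_void_p
_get_pointer.argtypes = [ctypes.py_object, ctypes.c_char_p]

def capsule_address(capsule, name=None):
    """Return the raw C address wrapped by a PyCapsule (name must match
    the name the capsule was created with; None matches an unnamed one)."""
    return _get_pointer(capsule, name)
```

The resulting integer can then be cast to a typed function pointer on the Cython or ctypes side.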
On 22 Feb 2013 16:53, Chris Barker - NOAA Federal chris.bar...@noaa.gov
wrote:
On Thu, Feb 21, 2013 at 4:16 AM, Matyáš Novák lo...@centrum.cz wrote:
You could also look into OpenBLAS, which is easier to build and
generally faster than ATLAS. (But alas, not supported by NumPy/SciPy AFAIK.)
I just read a web page on how to embed python in an application[1].
They explain that we can keep the symbols exported even if we
statically link the BLAS library in scipy. This makes me think we could
just change how we compile the lib that links with BLAS, and we would be
able to reuse it for other
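Whether such symbols are actually visible to other extensions can be checked from Python itself. A small sketch (POSIX only: ctypes.CDLL(None) opens the running process's own handle, which is exactly the namespace other extensions would see; printf is used below only because it is reliably exported):

```python
import ctypes

def find_exported(symbol):
    """Return the address of `symbol` if the running process exports it,
    else None. On POSIX, CDLL(None) is dlopen(NULL): the global symbol
    table, where a statically linked but still-exported BLAS would show up."""
    try:
        func = getattr(ctypes.CDLL(None), symbol)
    except (AttributeError, OSError):
        return None
    return ctypes.cast(func, ctypes.c_void_p).value
```

After importing scipy, one could probe for e.g. "dgemm_" this way to see whether the BLAS symbols were kept exported.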
You could also look into OpenBLAS, which is easier to build and
generally faster than ATLAS. (But alas, not supported by NumPy/SciPy AFAIK.)
Hi,
maybe it is not officially supported, so the integration into numpy is a bit
tricky
(after long tries I had success with exporting BLAS and LAPACK
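One way this integration is commonly attempted is via numpy's site.cfg before building. A sketch, assuming a NumPy version whose numpy.distutils recognizes an [openblas] section (paths are illustrative, not from the thread):

```ini
[openblas]
libraries = openblas
library_dirs = /opt/OpenBLAS/lib
include_dirs = /opt/OpenBLAS/include
```

With older NumPy versions that lack this section, people have instead pointed the [default] or ATLAS entries at the OpenBLAS library.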
Dag Sverre Seljebotn d.s.seljebotn at astro.uio.no writes:
On 02/18/2013 05:26 PM, rif wrote:
I have no answer to the question, but I was curious as to why directly
calling the cblas would be 10x-20x slower in the first place. That
seems surprising, although I'm just learning about
V. Armando Solé sole at esrf.fr writes:
from scipy.linalg.blas import fblas
dgemm = fblas.dgemm._cpointer
sgemm = fblas.sgemm._cpointer
I'm going to try and benchmark it asap.
Thanks
On 02/20/2013 10:18 AM, Sergio wrote:
Dag Sverre Seljebotn d.s.seljebotn at astro.uio.no writes:
On 02/18/2013 05:26 PM, rif wrote:
I have no answer to the question, but I was curious as to why directly
calling the cblas would be 10x-20x slower in the first place. That
seems surprising,
Hi,
We also have the same problem for Theano. Having one reusable BLAS on
Windows would be useful to many projects. Also, if possible, try to make it
accessible from C/C++ too, not just Cython.
Fred
On Feb 20, 2013 5:15 AM, Dag Sverre Seljebotn d.s.seljeb...@astro.uio.no
wrote:
On 02/20/2013
Hi,
I have a project that includes a cython script which in turn does some direct
access to a couple of cblas functions. This is necessary, since some matrix
multiplications need to be done inside a tight loop that gets called thousands
of times. Speedup wrt calling scipy.linalg.blas.cblas
I have no answer to the question, but I was curious as to why directly
calling the cblas would be 10x-20x slower in the first place. That seems
surprising, although I'm just learning about python numerics.
On Mon, Feb 18, 2013 at 7:38 AM, Sergio Callegari
sergio.calleg...@gmail.com wrote:
Hi Sergio,
I faced a similar problem one year ago. I solved it by writing a C function
that receives a pointer to the relevant linear algebra routine I needed.
Numpy does not offer direct access to the underlying library
functions, but scipy does:
from scipy.linalg.blas import fblas
dgemm =
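The "pass the routine's address to C" approach can also be mimicked entirely in ctypes. A sketch, not the thread's exact code: given a raw function address, build a prototype and call through it. With fblas.dgemm._cpointer one would use dgemm's Fortran signature; here libc's abs() keeps the demo self-contained:

```python
import ctypes

# POSIX: CDLL(None) is a handle to the running process; take abs()'s address
# as a stand-in for the address scipy hands out for dgemm.
libc = ctypes.CDLL(None)
abs_address = ctypes.cast(libc.abs, ctypes.c_void_p).value

# Wrap the bare address in a typed, callable function pointer.
abs_prototype = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)
c_abs = abs_prototype(abs_address)
```

For dgemm the prototype would take char* and pointer arguments throughout (Fortran passes everything by reference), which is exactly what a receiving C function would declare.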
On 02/18/2013 05:26 PM, rif wrote:
I have no answer to the question, but I was curious as to why directly
calling the cblas would be 10x-20x slower in the first place. That
seems surprising, although I'm just learning about python numerics.
The statement was that directly (on the Cython
On 02/18/2013 05:28 PM, Dag Sverre Seljebotn wrote:
On 02/18/2013 05:26 PM, rif wrote:
I have no answer to the question, but I was curious as to why directly
calling the cblas would be 10x-20x slower in
The statement was that directly (on the Cython level) calling cblas is
10x-20x slower than
But I'd hope that the overhead for going through the wrappers is constant,
rather than dependent on the size, so that for large matrices you'd get
essentially equivalent performance?
On Mon, Feb 18, 2013 at 8:28 AM, Dag Sverre Seljebotn
d.s.seljeb...@astro.uio.no wrote:
On 02/18/2013 05:26
On Mon, Feb 18, 2013 at 9:28 AM, Dag Sverre Seljebotn
d.s.seljeb...@astro.uio.no wrote:
On 02/18/2013 05:26 PM, rif wrote:
I have no answer to the question, but I was curious as to why directly
calling the cblas would be 10x-20x slower in the first place. That
seems surprising, although
On 02/18/2013 05:29 PM, rif wrote:
But I'd hope that the overhead for going through the wrappers is
constant, rather than dependent on the size, so that for large matrices
you'd get essentially equivalent performance?
That is correct.
Ah, so then the quality of the BLAS matters much less in
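The amortization argument can be made concrete with a back-of-the-envelope model (the per-call cost and throughput numbers below are purely illustrative assumptions, not measurements from the thread):

```python
def overhead_fraction(n, per_call_us=5.0, gflops=10.0):
    """Fraction of total time spent in fixed per-call wrapper overhead
    for an n-by-n dgemm. Assumes a constant per-call cost (per_call_us
    microseconds) and a kernel throughput of `gflops` GFLOP/s; dgemm
    performs roughly 2*n**3 floating-point operations."""
    kernel_us = 2.0 * n**3 / (gflops * 1e3)  # kernel time in microseconds
    return per_call_us / (per_call_us + kernel_us)
```

Under these assumptions the wrapper dominates for tiny matrices but becomes negligible well before n = 1000, which is why the overhead only matters inside tight loops over small operands.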
On 18.02.2013 19:20, Dag Sverre Seljebotn wrote:
On 02/18/2013 05:29 PM, rif wrote:
But I'd hope that the overhead for going through the wrappers is
constant, rather than dependent on the size, so that for large matrices
you'd get essentially equivalent performance?
That is correct.
Ah,
On 02/18/2013 06:48 PM, Pauli Virtanen wrote:
On 18.02.2013 19:20, Dag Sverre Seljebotn wrote:
On 02/18/2013 05:29 PM, rif wrote:
But I'd hope that the overhead for going through the wrappers is
constant, rather than dependent on the size, so that for large matrices
you'd get essentially
On 18.02.2013 20:41, Dag Sverre Seljebotn wrote:
[clip]
I think there should be a new project, pylapack or similar, for this,
outside of NumPy and SciPy. NumPy and SciPy could try to import it, and
if found, fetch a function pointer table. (If not found, just stay with
what has been
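The proposed lookup protocol is simple enough to sketch. (The package name "pylapack" is the thread's hypothetical; no such project exists, so the import is expected to fail and the fallback to be used.)

```python
def get_blas_table(fallback_table):
    """Consumers try the shared pointer table first and fall back to
    whatever BLAS they bundle themselves, as the proposal describes."""
    try:
        import pylapack  # hypothetical project name from the thread
    except ImportError:
        return fallback_table
    return pylapack.pointer_table()  # hypothetical API: name -> address
```

The table itself could be as plain as a dict mapping routine names ("dgemm", "dgesv", ...) to capsule-wrapped addresses.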
On 02/18/2013 09:23 PM, Pauli Virtanen wrote:
On 18.02.2013 20:41, Dag Sverre Seljebotn wrote:
[clip]
I think there should be a new project, pylapack or similar, for this,
outside of NumPy and SciPy. NumPy and SciPy could try to import it, and
if found, fetch a function pointer table. (If
On 18.02.2013 21:23, Pauli Virtanen wrote:
On 18.02.2013 20:41, Dag Sverre Seljebotn wrote:
[clip]
I think there should be a new project, pylapack or similar, for this,
outside of NumPy and SciPy. NumPy and SciPy could try to import it, and
if found, fetch a function pointer table. (If
On 18.02.2013 23:29, V. Armando Sole wrote:
[clip]
I find Dag's approach more appealing.
SciPy can be problematic (Windows 64-bit), and if one could offer access
to the linear algebra functions without needing SciPy I would certainly
prefer it.
Well, the two approaches are not exclusive.
On 18.02.2013 22:47, Pauli Virtanen wrote:
On 18.02.2013 23:29, V. Armando Sole wrote:
[clip]
I find Dag's approach more appealing.
SciPy can be problematic (Windows 64-bit), and if one could offer access
to the linear algebra functions without needing SciPy I would certainly
prefer