Re: [Numpy-discussion] numpy vbench-marking, compiler comparison

2013-11-26 Thread Daπid
Have you tried it on an Intel CPU? I have both an i5 quad-core and an i7
octo-core where I could run it over the weekend. One may expect some compiler
magic taking advantage of the more advanced features, especially on the i7.

/David
On Nov 25, 2013 8:16 PM, Julian Taylor jtaylor.deb...@googlemail.com
wrote:

 On 25.11.2013 02:32, Yaroslav Halchenko wrote:
 
  On Tue, 15 Oct 2013, Nathaniel Smith wrote:
  What do you have to lose?
 
  btw -- fresh results are here
 http://yarikoptic.github.io/numpy-vbench/ .
 
  I have tuned the benchmarking so it now reflects the best performance across
  multiple executions of the whole battery, thus eliminating the spurious
  variance you get when the estimate comes from a single run. Eventually I
  expect many of those curves to become even cleaner.
 
  On another note, what do you think of moving the vbench benchmarks
  into the main numpy tree? We already require everyone who submits a
  bug fix to add a test; there are a bunch of speed enhancements coming
  in these days and it would be nice if we had some way to ask people to
  submit a benchmark along with each one so that we know that the
  enhancement stays enhanced...
 
  On this positive note (it is boring to start a new thread, isn't it?) --
  would you be interested in me transferring numpy-vbench over to
  github.com/numpy ?
 
  as of today, plots on http://yarikoptic.github.io/numpy-vbench  should
  be updating 24x7 (just a loop, thus no time guarantee after you submit
  new changes).
 
  Besides benchmarking new benchmarks (your PRs would still be very
  welcome; so far it has just been me and Julian T.) and new revisions, that
  process also goes through a random sample of previously benchmarked
  revisions and re-runs their benchmarks, thus improving the ultimate 'min'
  timing estimate.  So you can already see that many plots have become much
  'cleaner', although there might now be a bit of bias in the estimates for
  recent revisions, since they have not yet accumulated as many independent
  runs as older revisions.
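
(A rough sketch of the 'best across repeated runs' idea described above -- this
is not the actual numpy-vbench code; timeit is only used to illustrate why the
minimum is a stable estimate:)

import timeit
import numpy as np

a = np.random.rand(10000)
# Five independent timing runs; the minimum is least affected by other
# load on the machine, so it is the most stable per-call estimate.
runs = timeit.repeat(lambda: a.sum(), number=1000, repeat=5)
print("best per-call time: %.2f us" % (min(runs) / 1000 * 1e6))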
 

 Using the vbench I created a comparison of gcc and clang with different
 options.
 Cliffnotes:
 * gcc -O2 performs 5-10% better than -O3 in most benchmarks, except in a
 few select cases where the vectorizer does its magic.
 * gcc and clang are very close in performance, but in the cases where one
 compiler wins by a large margin it is mostly gcc that wins.

 I have collected some interesting plots on this notebook:
 http://nbviewer.ipython.org/7646615


[Numpy-discussion] MKL + CPU, GPU + cuBLAS comparison

2013-11-26 Thread Dinesh Vadhia
Probably a loaded question, but is there a significant performance difference
between using MKL (or OpenBLAS) on multi-core CPUs and cuBLAS on GPUs? Does
anyone have recent experience or a link to an independent benchmark?



Re: [Numpy-discussion] MKL + CPU, GPU + cuBLAS comparison

2013-11-26 Thread Jerome Kieffer
On Tue, 26 Nov 2013 01:02:40 -0800
Dinesh Vadhia dineshbvad...@hotmail.com wrote:

 Probably a loaded question, but is there a significant performance difference
 between using MKL (or OpenBLAS) on multi-core CPUs and cuBLAS on GPUs?
 Does anyone have recent experience or a link to an independent benchmark?
 

Using NumPy (Xeon 5520, 2.2 GHz):

In [1]: import numpy
In [2]: shape = (450,450,450)
In [3]: start = numpy.random.random(shape).astype(numpy.complex128)
In [4]: %timeit result = numpy.fft.fftn(start)
1 loops, best of 3: 10.2 s per loop

Using FFTW (8 threads, 2x quad-core):

In [5]: import fftw3
In [7]: result = numpy.empty_like(start)
In [8]: fft = fftw3.Plan(start, result, direction='forward', flags=['measure'], 
nthreads=8)
In [9]: %timeit fft()
1 loops, best of 3: 887 ms per loop

Using CuFFT (GeForce Titan):
1) with 2 transfers:
In [10]: import pycuda,pycuda.gpuarray as gpuarray,scikits.cuda.fft as 
cu_fft,pycuda.autoinit
In [11]: cuplan = cu_fft.Plan(start.shape, numpy.complex128, numpy.complex128)
In [12]: d_result = gpuarray.empty(start.shape, start.dtype)
In [13]: d_start = gpuarray.empty(start.shape, start.dtype)
In [14]: def cuda_fft(start):
    ...:     d_start.set(start)
    ...:     cu_fft.fft(d_start, d_result, cuplan)
    ...:     return d_result.get()
    ...:
In [15]: %timeit cuda_fft(start)
1 loops, best of 3: 1.7 s per loop

2) with 1 transfer:
In [18]: def cuda_fft_2():
    ...:     cu_fft.fft(d_start, d_result, cuplan)
    ...:     return d_result.get()
    ...:
In [20]: %timeit cuda_fft_2()
1 loops, best of 3: 1.05 s per loop

3) Without transfer:
In [22]: def cuda_fft_3():
    ...:     cu_fft.fft(d_start, d_result, cuplan)
    ...:     pycuda.autoinit.context.synchronize()
    ...:

In [23]: %timeit cuda_fft_3()
1 loops, best of 3: 202 ms per loop

Conclusion:
A GeForce Titan (1000€) can be 4x faster than a pair of Xeon 5520s (2x 250€)
if your data are already on the GPU.
Note: plan calculation is much faster on the GPU than on the CPU.
-- 
Jérôme Kieffer
tel +33 476 882 445


Re: [Numpy-discussion] MKL + CPU, GPU + cuBLAS comparison

2013-11-26 Thread regikeyz .
HI GUYS,
PLEASE COULD YOU UNSUBSCRIBE ME FROM THESE EMAILS
I can't find the link at the bottom.
Thank you.





Re: [Numpy-discussion] MKL + CPU, GPU + cuBLAS comparison

2013-11-26 Thread Dinesh Vadhia
Jerome, thanks for the swift response and tests.  Crikey, that is a significant
difference at first glance.  Would it be possible to compare a BLAS computation,
e.g. a matrix-vector or matrix-matrix calculation? Thanks!


Re: [Numpy-discussion] MKL + CPU, GPU + cuBLAS comparison

2013-11-26 Thread Frédéric Bastien
We have such a benchmark in Theano:

https://github.com/Theano/Theano/blob/master/theano/misc/check_blas.py#L177
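
For a rough idea of what such a comparison measures, a minimal NumPy-only
sketch of the matrix-matrix timing (not the Theano script above; the size and
repeat count are arbitrary choices) could be:

import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.time()
for _ in range(5):
    c = np.dot(a, b)   # dispatched to whatever BLAS numpy was built against
dt = (time.time() - t0) / 5
print("dgemm %dx%d: %.3f s, %.1f GFLOPS" % (n, n, dt, 2.0 * n**3 / dt / 1e9))

Running the same measurement against numpy builds linked to MKL or OpenBLAS,
and against a cuBLAS-backed dot (e.g. via scikits.cuda), would give the
comparison asked about above.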

HTH

Fred

On Tue, Nov 26, 2013 at 7:10 AM, Dinesh Vadhia
dineshbvad...@hotmail.com wrote:
  Jerome, thanks for the swift response and tests.  Crikey, that is a
  significant difference at first glance.  Would it be possible to compare a
  BLAS computation, e.g. a matrix-vector or matrix-matrix calculation? Thanks!



[Numpy-discussion] UNSUBSCRIBE Re: MKL + CPU, GPU + cuBLAS comparison

2013-11-26 Thread regikeyz .
UNSUBSCRIBE




Re: [Numpy-discussion] UNSUBSCRIBE Re: MKL + CPU, GPU + cuBLAS comparison

2013-11-26 Thread Paul Hobson
We can't manage your account for you. Click here:
http://mail.scipy.org/mailman/listinfo/numpy-discussion
to unsubscribe yourself.
-paul




Re: [Numpy-discussion] numpy vbench-marking, compiler comparison

2013-11-26 Thread Julian Taylor
There isn't that much code in numpy that profits from modern x86
instruction sets; even the simple arithmetic loops are strided and thus
cannot be auto-vectorized by the compiler. They have been vectorized
manually in 1.8 using SSE2, and it is on my todo list to add
runtime-detected AVX support.
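
As a rough, machine-dependent illustration of the point (this is not part of
the vbench suite): the same operation on contiguous data, which the 1.8 SSE2
loops can handle, versus a strided view:

import timeit
import numpy as np

a = np.random.rand(1000000)
b = np.random.rand(1000000)

t_contig = timeit.timeit(lambda: a + b, number=200)
t_strided = timeit.timeit(lambda: a[::2] + b[::2], number=200)
# The strided version touches only half the elements, yet typically does not
# come close to being twice as fast.
print("contiguous: %.3f s   strided: %.3f s" % (t_contig, t_strided))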


On 26.11.2013 09:57, Daπid wrote:
 Have you tried it on an Intel CPU? I have both an i5 quad-core and an i7
 octo-core where I could run it over the weekend. One may expect some
 compiler magic taking advantage of the more advanced features, especially on the i7.

 


[Numpy-discussion] PyArray_BASE equivalent in python

2013-11-26 Thread Peter Rennert
Hi,

As the title says, I am looking for a way to set, in Python, the base of
an ndarray to an object.

Use case is porting qimage2ndarray to PySide where I want to do 
something like:

In [1]: from PySide import QtGui

In [2]: image = 
QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

In [3]: import numpy as np

In [4]: a = np.frombuffer(image.bits())

-- I would like to do something like:
In [5]: a.base = image

-- to avoid situations such as:
In [6]: del image

In [7]: a
Segmentation fault (core dumped)

The current implementation of qimage2ndarray uses a C function to do

 PyArray_BASE(sipRes) = image;
 Py_INCREF(image);

But I want to avoid having to install compilers, headers, etc. on the target
machines of my code just for these two lines of code.

Thanks,

P



Re: [Numpy-discussion] PyArray_BASE equivalent in python

2013-11-26 Thread Nathaniel Smith
On Tue, Nov 26, 2013 at 11:54 AM, Peter Rennert p.renn...@cs.ucl.ac.uk wrote:
 Hi,

 As the title says, I am looking for a way to set, in Python, the base of
 an ndarray to an object.

 Use case is porting qimage2ndarray to PySide where I want to do
 something like:

 In [1]: from PySide import QtGui

 In [2]: image =
 QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

 In [3]: import numpy as np

 In [4]: a = np.frombuffer(image.bits())

 -- I would like to do something like:
 In [5]: a.base = image

 -- to avoid situations such as:
 In [6]: del image

 In [7]: a
 Segmentation fault (core dumped)

This is a bug in PySide -- the buffer object returned by image.bits()
needs to hold a reference to the original image. Please report a bug
to them. You will also get a segfault from code that doesn't use numpy
at all, by doing things like:

bits = image.bits()
del image
# anything involving the bits object

As a workaround, you can write a little class with an
__array_interface__ attribute that points to the image's contents, and
then call np.asarray() on this object. The resulting array will have
your object as its .base, and then your object can hold onto whatever
references it wants.

-n


Re: [Numpy-discussion] PyArray_BASE equivalent in python

2013-11-26 Thread Peter Rennert
Brilliant thanks, I will try out the little class approach.



Re: [Numpy-discussion] PyArray_BASE equivalent in python

2013-11-26 Thread Peter Rennert
I probably did something wrong, but it does not work the way I tried it. I
am not sure if you meant it like this, but I tried to subclass from 
ndarray first, but then I do not have access to __array_interface__. Is 
this what you had in mind?

from PySide import QtGui
import numpy as np

class myArray():
    def __init__(self, shape, bits, strides):
        self.__array_interface__ = \
            {'data': bits,
             'typestr': 'i32',
             'descr': [('', 'f8')],
             'shape': shape,
             'strides': strides,
             'version': 3}

image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

b = myArray((image.width(), image.height()), image.bits(), 
(image.bytesPerLine(), 4))
b = np.asarray(b)

b.base
#<read-write buffer ptr 0x7fd744c4b010, size 1478400 at 0x264e9f0>

del image

b
# booom #



Re: [Numpy-discussion] PyArray_BASE equivalent in python

2013-11-26 Thread Robert Kern
On Tue, Nov 26, 2013 at 9:37 PM, Peter Rennert p.renn...@cs.ucl.ac.uk
wrote:

 I probably did something wrong, but it does not work the way I tried it. I
 am not sure if you meant it like this, but I tried to subclass from
 ndarray first, but then I do not have access to __array_interface__. Is
 this what you had in mind?

 from PySide import QtGui
 import numpy as np

 class myArray():
  def __init__(self, shape, bits, strides):

You need to pass in the image as well and keep a reference to it.

  self.__array_interface__ = \
  {'data': bits,
   'typestr': 'i32',
   'descr': [('', 'f8')],
   'shape': shape,
   'strides': strides,
   'version': 3}

Most of these are wrong. Something like the following should suffice:


class QImageArray(object):
def __init__(self, qimage):
shape = (qimage.height(), qimage.width(), -1)
# Generate an ndarray from the image bits and steal its
# __array_interface__ information.
arr = np.frombuffer(qimage.bits(), dtype=np.uint8).reshape(shape)
self.__array_interface__ = arr.__array_interface__
# Keep the QImage alive.
self.qimage = qimage
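
A hypothetical usage sketch (assuming the QtGui and np imports from earlier in
the thread; the image path is just the one used above):

qimage = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')
arr = np.asarray(QImageArray(qimage))

del qimage   # arr.base is the QImageArray, which still holds the QImage
print(arr.shape)       # (height, width, bytes per pixel) if rows are unpadded
print(type(arr.base))  # the QImageArray wrapper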

--
Robert Kern


Re: [Numpy-discussion] PyArray_BASE equivalent in python

2013-11-26 Thread Nathaniel Smith
On Tue, Nov 26, 2013 at 1:37 PM, Peter Rennert p.renn...@cs.ucl.ac.uk wrote:
 I probably did something wrong, but it does not work the way I tried it. I
 am not sure if you meant it like this, but I tried to subclass from
 ndarray first, but then I do not have access to __array_interface__. Is
 this what you had in mind?

 from PySide import QtGui
 import numpy as np

 class myArray():
  def __init__(self, shape, bits, strides):
  self.__array_interface__ = \
  {'data': bits,
   'typestr': 'i32',
   'descr': [('', 'f8')],
   'shape': shape,
   'strides': strides,
   'version': 3}

You need this object to also hold a reference to the image object --
the idea is that so long as the array lives it will hold a ref to this
object in .base, and then this object holds the image alive. But...

 image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

 b = myArray((image.width(), image.height()), image.bits(),
 (image.bytesPerLine(), 4))
 b = np.asarray(b)

 b.base
 #<read-write buffer ptr 0x7fd744c4b010, size 1478400 at 0x264e9f0>

...this isn't promising: it suggests that numpy cleverly cuts out the
middle-man when you give it a buffer object, since it knows that buffer
objects are supposed to take care of memory management themselves.

You might have better luck using the raw pointer two-tuple form for
the data field. You can't get these pointers directly from a buffer
object, but numpy will give them to you. So you can use something like

  'data': np.asarray(bits).__array_interface__['data']
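
A hedged sketch of that raw-pointer form (the wrapper class name is invented,
np.frombuffer is used to dig out the pointer, and the shape assumes one byte
per element):

import numpy as np

class BitsWrapper(object):
    def __init__(self, image):
        bits = image.bits()
        # numpy exposes the underlying pointer of the buffer object for us.
        ptr, readonly = np.frombuffer(bits, dtype=np.uint8).__array_interface__['data']
        self.__array_interface__ = {
            'data': (ptr, readonly),   # raw pointer two-tuple form
            'typestr': '|u1',
            'shape': (image.height(), image.bytesPerLine()),
            'version': 3,
        }
        self.image = image             # keep the QImage alive

np.asarray() on such an object then gives an array whose .base keeps the
wrapper, and hence the QImage, alive.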

-n

-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org


Re: [Numpy-discussion] PyArray_BASE equivalent in python

2013-11-26 Thread Peter Rennert
Btw, I just wanted to file a bug at PySide, but it might be alright at 
their end, because I can do this:

from PySide import QtGui

image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

a = image.bits()

del image

a
#<read-write buffer ptr 0x7f5fe0034010, size 1478400 at 0x3c1a6b0>




Re: [Numpy-discussion] PyArray_BASE equivalent in python

2013-11-26 Thread Nathaniel Smith
On Tue, Nov 26, 2013 at 2:55 PM, Peter Rennert p.renn...@cs.ucl.ac.uk wrote:
 Btw, I just wanted to file a bug at PySide, but it might be alright at
 their end, because I can do this:

 from PySide import QtGui

 image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

 a = image.bits()

 del image

 a
 #<read-write buffer ptr 0x7f5fe0034010, size 1478400 at 0x3c1a6b0>

That just means that the buffer still has a pointer to the QImage's
old memory. It doesn't mean that following that pointer won't crash.
Try str(a) or something that actually touches the buffer contents...

-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org