Re: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python

2016-09-02 Thread Thiago Franco Moraes
I think you can use ffi.from_buffer and ffi.cast from cffi.
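For the record, a minimal sketch of what that looks like (assuming the cffi package is installed; the array and the write are only for illustration):

```python
import numpy as np
from cffi import FFI

ffi = FFI()
a = np.arange(5.0)               # contiguous float64 array
buf = ffi.from_buffer(a)         # zero-copy view of a's memory
ptr = ffi.cast("double *", buf)  # typed pointer you can hand to a C function
ptr[2] = 42.0                    # writes through to the numpy array
print(a[2])                      # 42.0
```

The pointer is only valid while the array (and buf) stay alive, and the array must be contiguous.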


On Fri, Sep 2, 2016 at 8:53 AM Carl Kleffner  wrote:

> fork / extension of cffiwrap:
>
>
> *"cfficloak - A simple but flexible module for creating object-oriented,
> pythonic CFFI wrappers.This is an extension of
> https://bitbucket.org/memotype/cffiwrap
> "*
>
> 2016-09-02 13:46 GMT+02:00 Sebastian Haase :
>
>> How do these two relate to each other !?
>> - Sebastian
>>
>>
>> On Fri, Sep 2, 2016 at 12:33 PM, Carl Kleffner 
>> wrote:
>>
>>> maybe https://bitbucket.org/memotype/cffiwrap or
>>> https://github.com/andrewleech/cfficloak helps?
>>>
>>> C.
>>>
>>>
>>> 2016-09-02 11:16 GMT+02:00 Nathaniel Smith :
>>>
 On Fri, Sep 2, 2016 at 1:16 AM, Peter Creasey
  wrote:
 >> Date: Wed, 31 Aug 2016 13:28:21 +0200
 >> From: Michael Bieri 
 >>
 >> I'm not quite sure which approach is state-of-the-art as of 2016.
 How would
 >> you do it if you had to make a C/C++ library available in Python
 right now?
 >>
 >> In my case, I have a C library with some scientific functions on
 matrices
 >> and vectors. You will typically call a few functions to configure the
 >> computation, then hand over some pointers to existing buffers
 containing
 >> vector data, then start the computation, and finally read back the
 data.
 >> The library also can use MPI to parallelize.
 >>
 >
 > Depending on how minimal and universal you want to keep things, I use
 > the ctypes approach quite often, i.e. treat your numpy inputs and
 > outputs as arrays of doubles etc. using the ndpointer(...) syntax. I
 > find it works well if you have a small number of well-defined
 > functions (not too many options) which are numerically very heavy.
 > With this approach I usually wrap each method in python to check the
 > inputs for contiguity, pass in the sizes etc. and allocate the numpy
 > array for the result.
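A minimal sketch of this ndpointer pattern, using libc's memcpy as a stand-in for a real numerically heavy C routine (the wrapper name is made up):

```python
import ctypes
import ctypes.util
import numpy as np
from numpy.ctypeslib import ndpointer

# libc's memcpy stands in for a scientific C function here.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
libc.memcpy.restype = ctypes.c_void_p
libc.memcpy.argtypes = [
    ndpointer(np.float64, flags="C_CONTIGUOUS,WRITEABLE"),  # dest buffer
    ndpointer(np.float64, flags="C_CONTIGUOUS"),            # src buffer
    ctypes.c_size_t,                                        # byte count
]

def copy_array(src):
    # The Python wrapper checks dtype/contiguity and allocates the output.
    src = np.ascontiguousarray(src, dtype=np.float64)
    dest = np.empty_like(src)
    libc.memcpy(dest, src, src.nbytes)
    return dest

print(copy_array(np.arange(4.0)))  # [0. 1. 2. 3.]
```

Declaring argtypes with ndpointer makes ctypes reject arrays of the wrong dtype or memory layout at call time.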

 FWIW, the broader Python community seems to have largely deprecated
 ctypes in favor of cffi. Unfortunately I don't know if anyone has
 written helpers like numpy.ctypeslib for cffi...

 -n

 --
 Nathaniel J. Smith -- https://vorpus.org
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Research position in the Brazilian Research Institute for Science and Neurotechnology - BRAINN

2015-04-08 Thread Thiago Franco Moraes
Research position in the Brazilian Research Institute for Science and
Neurotechnology – BRAINN

Postdoc researcher to work with software development for medical imaging

The Brazilian Research Institute for Neuroscience and Neurotechnology
(BRAINN) (www.brainn.org.br) focuses on the investigation of basic
mechanisms leading to epilepsy and stroke, and the injury mechanisms that
follow disease onset and progression. This research has important
applications related to prevention, diagnosis, treatment and rehabilitation
and will serve as a model for better understanding normal and abnormal
brain function. The BRAINN Institute is composed of 10 institutions from
Brazil and abroad and is hosted by the State University of Campinas
(UNICAMP). Among the associated institutions is the Renato Archer
Information Technology Center (CTI), which has a specialized team in
open-source software development for medical imaging
(www.cti.gov.br/invesalius) and 3D printing applications for healthcare.
CTI is located close to UNICAMP in the city of Campinas, State of São
Paulo, in a highly technological region of Brazil, and is looking for a
postdoc researcher to work on software development for medical imaging
related to the imaging analysis, diagnosis and treatment of brain
diseases. The postdoc position is for two years, with the possibility of
being renewed for two more years.

Education
- PhD in computer science, computer engineering, mathematics, physics or
related.

Requirements
- Digital image processing (Medical imaging)
- Computer graphics (basic)

Benefits
6,143.40 Reais per month, free of taxes (about US$ 2,000.00);
15% technical reserve for conference participation and the acquisition
of specific materials;

Interested
Send curriculum to: jorge.si...@cti.gov.br with subject “Postdoc position”
Review of applications will begin April 30, 2015 and continue until the
position is filled.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Research position in the Brazilian Research Institute for Science and Neurotechnology – BRAINN

2014-07-22 Thread Thiago Franco Moraes
*Research position in the Brazilian Research Institute for Science and
Neurotechnology – BRAINN Postdoc researcher to work with software
development for medical imaging*

The Brazilian Research Institute for Neuroscience and Neurotechnology
(BRAINN) (www.brainn.org.br) focuses on the investigation of basic
mechanisms leading to epilepsy and stroke, and the injury mechanisms that
follow disease onset and progression. This research has important
applications related to prevention, diagnosis, treatment and rehabilitation
and will serve as a model for better understanding normal and abnormal
brain function. The BRAINN Institute is composed of 10 institutions from
Brazil and abroad and is hosted by the State University of Campinas
(UNICAMP). Among the associated institutions is the Renato Archer
Information Technology Center (CTI), which has a specialized team in
open-source software development for medical imaging
(www.cti.gov.br/invesalius) and 3D printing applications for healthcare.
CTI is located close to UNICAMP in the city of Campinas, State of São
Paulo, in a highly technological region of Brazil, and is looking for a
postdoc researcher to work on software development for medical imaging
related to the imaging analysis, diagnosis and treatment of brain
diseases. The postdoc position is for two years, with the possibility of
being renewed for two more years.

*Education*
- PhD in computer science, computer engineering, mathematics, physics or
related.

*Requirements*
- Digital image processing (Medical imaging)
- Computer graphics (basic)

*Benefits*
6,143.40 Reais per month, free of taxes (about US$ 2,800.00);
15% technical reserve for conference participation and the acquisition
of specific materials;

*Interested*
Send curriculum to: jorge.si...@cti.gov.br with subject “Postdoc position”
Review of applications will begin August 1, 2014 and continue until the
position is filled.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] question about scipy superpack

2012-08-07 Thread Thiago Franco Moraes
A little off-topic, but related: which Python version do you recommend
installing on Mac OS X 10.8? The native one? The one from python.org?
Or the one compiled via Homebrew? And do you think it's better to use
the 32-bit or 64-bit build?

Thanks!

On Tue, Aug 7, 2012 at 12:35 PM, Chris Barker chris.bar...@noaa.gov wrote:
 On Mon, Aug 6, 2012 at 8:51 PM, Tom Krauss thomas.p.kra...@gmail.com wrote:
 I got a new job, and a new mac book pro on which I just installed Mac OS X
 10.8.

 congrats -- on the job, and on an employer that gets you a mac!

 I need to run SWIG to generate a shared object from C++ source that works
 with numpy.i.  I'm considering installing the Scipy Superpack, but I have a
 question.  If I install the Scipy Superpack, which has most of the packages
 I need, plus some others, will it be able to find numpy/arrayobject.h

 It's probably there, yes, and you should be able to find it with:

 numpy.get_include()

 (use that in your setup.py)
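For instance, something like the following (module and source names here are hypothetical; a modern setup would use setuptools rather than distutils):

```python
import os
import numpy
from setuptools import Extension

inc = numpy.get_include()  # directory containing numpy/arrayobject.h
ext = Extension(
    "_mylib",                    # hypothetical SWIG-generated module
    sources=["mylib_wrap.cxx"],  # hypothetical SWIG wrapper source
    include_dirs=[inc],
)
print(os.path.exists(os.path.join(inc, "numpy", "arrayobject.h")))  # True
```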

 the source files needed by gcc to compile the swig-generated C++ wrapper?

 The trick here is which gcc -- Apple moves forward fast and is on
 the bleeding edge with gcc -- the latest Xcode uses LLVM, which is not
 compatible with older Python builds.

 I *think* the superpack is built against the python.org Python builds (32 bit?)

 Anyway, the python.org 32-bit build requires an older gcc for building
 extensions -- you can get Xcode 3 from the Apple Developer connection if
 you dig for it -- it works fine on 10.7, and I hope it does on 10.8.

 I'm not totally sure about the 32/64 bit Intel build.

 The pythonmac list will be a help here.

 Good luck,

 -Chris




 --

 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR(206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115   (206) 526-6317   main reception

 chris.bar...@noaa.gov
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Calculating density based on distance

2012-01-14 Thread Thiago Franco Moraes
On Saturday, 14 January 2012, Benjamin Root ben.r...@ou.edu wrote:


 On Saturday, January 14, 2012, Thiago Franco de Moraes 
totonixs...@gmail.com wrote:
 Hi all,

 I have the following problem:

 Given an array with dimensions Nx3, where N is generally greater than
 1,000,000: for each item in this array I have to calculate its density,
 where its density is the number of items from the same array within a
 distance less than a given r. The items are the rows of the array.

 I was not able to think of a solution to this using one or two functions
 from Numpy. Then I wrote this code http://pastebin.com/iQV0bMNy . The
 problem is that it is very slow. So I tried to implement it in Cython,
 here is the result http://pastebin.com/zTywzjyM , but it is still very slow.

 Is there a better and faster way of doing that? Is there something in my
 Cython implementation I can do to perform better?

 Thanks!

 Have you looked at scipy.spatial.KDTree?  It can efficiently load up a
data structure that lets you easily determine the spatial relationship
between datapoints.

 Ben Root

Thanks, Ben, I'm going to do that.
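For the archives, a sketch of that KDTree approach (the point data and radius below are made up):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((1000, 3))  # stand-in for the Nx3 array
r = 0.1

tree = cKDTree(points)
# For each row: indices of all points within distance r (self included).
neighbors = tree.query_ball_point(points, r)
density = np.array([len(idx) for idx in neighbors])
```

Subtract 1 from each count if the point itself should not contribute to its own density.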
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Smoothing binary volumes

2011-09-06 Thread Thiago Franco Moraes
Hi all,

I've been implementing the algorithm from the paper "Reducing
Aliasing Artifacts in Iso-Surfaces of Binary Volumes" by Ross T.
Whitaker. I develop an open-source application which works with
segmentation of CT and MRI medical images, and the result of
segmentation is a binary volume image. The problem is that the surfaces
generated from binary images are rough and have staircase
artifacts. Below are some examples before and after using the algorithm:

Binary ball - http://i.imgur.com/DlIwP.png
Smoothed ball - http://i.imgur.com/G04zN.png

Binary CT - http://i.imgur.com/Ah1LB.png
Smoothed CT - http://i.imgur.com/ps1Nz.png

The algorithm works on the volumetric image. Afterwards I use
Marching Cubes (from VTK) to generate the surface.

I could use a Gaussian filter, but some fine features could be lost.

The source code is here [1]. My first attempt used pure Python, but it
consumed a lot of memory. Now I'm using Cython; it's not complete yet.
Maybe I will try to code it directly in C.

I'm sending this email to this mailing list because someone might be
interested in this, and maybe it could be included in some Python
library, like mahotas.

Thanks!

[1] - https://github.com/tfmoraes/binvol
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Adjacent matrix

2011-06-15 Thread Thiago Franco Moraes
On Tue, Jun 14, 2011 at 6:30 PM, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
 On Tue, Jun 14, 2011 at 10:29:38AM -0300, Thiago Franco Moraes wrote:
 I don't know if I understand. The idea is to use _make_edges_3d to
 give me the connectivity, isn't it? Like for example, a 3x3 image:

 0, 1, 2
 3, 4, 5
 6, 7, 8

 If I use _make_edges_3d(3, 3) I get:

 array([[0, 1, 3, 4, 6, 7, 0, 1, 2, 3, 4, 5],
        [1, 2, 4, 5, 7, 8, 3, 4, 5, 6, 7, 8]])

 So, for example, pixel 4 is connected with 3, 5, 7 and 1
 (connectivity 4). Is that right?

 Yes, you got the idea.

 The problem is that I need to work with connectivity 26 in 3D images, or 8
 in 2D images. I don't understand the code well enough to adapt it to
 the connectivities I mentioned (8 and 26).

 The code is for 4-connectivity (or 6 in 3D). You will need to modify the
 _make_edges_3d function to get the connectivity you want. Hopefully this
 will get the ball rolling, and by looking at the code you will figure out
 how to modify it.

 HTH,

 Gaël

Thanks Gaël. To solve my problem I'm using mahotas.labeled.borders,
which was proposed by Luis. However, the ideas from _make_edges_3d may
be useful to me in other problems I'm having. Thanks again.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Adjacent matrix

2011-06-14 Thread Thiago Franco Moraes
On Tue, Jun 14, 2011 at 8:06 AM, Luis Pedro Coelho l...@cmu.edu wrote:
 On Monday, June 13, 2011 03:55:46 PM Thiago Franco Moraes wrote:
 Find all of the grid points in that lie adjacent to one or more grid
 points of opposite values.

 This is the code used to calculate that matrix:
 [spin]
 Where nx, ny and nz are the x, y, z dimensions from the im input
 binary matrix. I think this operation is called bwperim.

 My package, mahotas, supports it directly:

 mahotas.bwperim(bw, n)

 at
 http://packages.python.org/mahotas/

 HTH,
 Luis

Thanks Luis. But mahotas.bwperim only supports 2D images; my images
are 3D, from computerized tomography (often very large), and I need
connectivity 26.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Adjacent matrix

2011-06-14 Thread Thiago Franco Moraes
On Mon, Jun 13, 2011 at 5:03 PM, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
 Hi,

 You can probably find some inspiration from
 https://github.com/scikit-learn/scikit-learn/blob/master/scikits/learn/feature_extraction/image.py

 Gaël

Hi Gaël,

I don't know if I understand. The idea is to use _make_edges_3d to
give me the connectivity, isn't it? Like for example, a 3x3 image:

0, 1, 2
3, 4, 5
6, 7, 8

If I use _make_edges_3d(3, 3) I get:

array([[0, 1, 3, 4, 6, 7, 0, 1, 2, 3, 4, 5],
       [1, 2, 4, 5, 7, 8, 3, 4, 5, 6, 7, 8]])

So, for example, pixel 4 is connected with 3, 5, 7 and 1
(connectivity 4). Is that right?

The problem is that I need to work with connectivity 26 in 3D images, or 8
in 2D images. I don't understand the code well enough to adapt it to
the connectivities I mentioned (8 and 26). Do you think it is possible?
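One possible sketch, not taken from scikit-learn itself: extend the axis-aligned edge families with the two diagonal families to get 8-connectivity on a 2D grid (the function name is made up):

```python
import numpy as np

def make_edges_2d_8(h, w):
    # Node ids laid out row-major on an h x w grid.
    idx = np.arange(h * w).reshape(h, w)
    right = np.vstack([idx[:, :-1].ravel(), idx[:, 1:].ravel()])     # horizontal
    down  = np.vstack([idx[:-1, :].ravel(), idx[1:, :].ravel()])     # vertical
    diag1 = np.vstack([idx[:-1, :-1].ravel(), idx[1:, 1:].ravel()])  # "\" diagonals
    diag2 = np.vstack([idx[:-1, 1:].ravel(), idx[1:, :-1].ravel()])  # "/" diagonals
    return np.hstack([right, down, diag1, diag2])

edges = make_edges_2d_8(3, 3)
# Pixel 4 is now also connected to 0, 2, 6 and 8.
```

The same idea extends to 26-connectivity in 3D by enumerating the 13 distinct neighbour offsets instead of 4.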

Thanks Gaël,


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Adjacent matrix

2011-06-14 Thread Thiago Franco Moraes
On Tue, Jun 14, 2011 at 11:09 AM, Luis Pedro Coelho l...@cmu.edu wrote:
 On Tuesday, June 14, 2011 08:50:47 AM Thiago Franco Moraes wrote:
 On Tue, Jun 14, 2011 at 8:06 AM, Luis Pedro Coelho l...@cmu.edu wrote:
  On Monday, June 13, 2011 03:55:46 PM Thiago Franco Moraes wrote:
  Find all of the grid points in that lie adjacent to one or more grid
  points of opposite values.
 
  This is the code used to calculate that matrix:
  [spin]
 
  Where nx, ny and nz are the x, y, z dimensions from the im input
  binary matrix. I think this operation is called bwperim.
 
  My package, mahotas, supports it directly:
 
  mahotas.bwperim(bw, n)
 
  at
  http://packages.python.org/mahotas/
 
  HTH,
  Luis

 Thanks Luis. But mahotas.bwperim only supports 2D images, my images
 are 3D from Computerized Tomography (often very large) and I need
 connectivity 26.

 If the images are binary, an alternative is mahotas.labeled.borders()

 It returns the pixels which have more than one value in their neighbourhoods.

 HTH
 Luis

Thanks again Luis. I should add that the version which works with 3D
images is the one from git (https://github.com/luispedro/mahotas/); the
one installed with pip only works with 2D images.



___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Adjacent matrix

2011-06-13 Thread Thiago Franco Moraes
Hi all,

I'm reproducing an algorithm from a paper. This paper takes as input a
binary volumetric matrix. In one step of the paper, an adjacency matrix
is calculated from this binary volumetric matrix; this adjacency matrix
is calculated as below:

"Find all of the grid points that lie adjacent to one or more grid
points of opposite values."

This is the code used to calculate that matrix:

    int x, y, z, x_, y_, z_, addr, addr2;

    for (z = 1; z < nz-1; z++)
        for (y = 1; y < ny-1; y++)
            for (x = 1; x < nx-1; x++)
                for (z_ = z-1; z_ <= z+1; z_++)
                    for (y_ = y-1; y_ <= y+1; y_++)
                        for (x_ = x-1; x_ <= x+1; x_++)
                        {
                            addr  = z*nx*ny + y*nx + x;
                            addr2 = z_*nx*ny + y_*nx + x_;
                            if (im[addr] != im[addr2])
                                out[addr] = 1;
                        }

Where nx, ny and nz are the x, y, z dimensions from the im input
binary matrix. I think this operation is called bwperim.

Yes, I can reproduce that code in Python, but it would not perform well.
My question is: is there a way to do this using numpy or scipy tools?

Thanks!

PS: The indexing starts at 1 because these were MATLAB arrays, although
the code here is in C.
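For what it's worth, one vectorized sketch using scipy.ndimage (the function name is made up): in a binary volume, a voxel has a 26-neighbour of the opposite value exactly when the 3x3x3 maximum and minimum around it differ.

```python
import numpy as np
from scipy import ndimage

def bwperim26(im):
    mx = ndimage.maximum_filter(im, size=3)
    mn = ndimage.minimum_filter(im, size=3)
    out = (mx != mn).astype(np.uint8)
    # The C loops skip the border voxels, so clear them here too.
    out[0, :, :] = out[-1, :, :] = 0
    out[:, 0, :] = out[:, -1, :] = 0
    out[:, :, 0] = out[:, :, -1] = 0
    return out

im = np.zeros((5, 5, 5), dtype=np.uint8)
im[2, 2, 2] = 1
perim = bwperim26(im)  # marks the voxel and its 26 neighbours
```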
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Memmap with multiprocessing

2011-04-27 Thread Thiago Franco Moraes
On Tue, Apr 26, 2011 at 6:21 PM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:
 On Mon, Apr 25, 2011 at 1:16 PM, Thiago Franco Moraes
 totonixs...@gmail.com wrote:
 Hi,

 Has anyone confirmed if this is a bug? Should I post this in the bug tracker?

 I see the same thing with recent master. Something very strange is
 going on in the memmap.__array_finalize__ method under Windows. Can
 you file a bug?

 Ralf

Hi Ralf,

Done http://projects.scipy.org/numpy/ticket/1809

Thanks!

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Memmap with multiprocessing

2011-04-25 Thread Thiago Franco Moraes
Hi,

Has anyone confirmed if this is a bug? Should I post this in the bug tracker?

Thanks!

On Tue, Apr 19, 2011 at 9:01 PM, Thiago Franco de Moraes
totonixs...@gmail.com wrote:
 Hi all,

 I'm having an error using memmap objects shared among processes created
 by the multiprocessing module. This error only happens on Windows with
 numpy 1.5 or above; it doesn't happen with numpy 1.4.1, nor on Linux or
 Mac OS X. The error is demonstrated by this little example script here
 https://gist.github.com/929168 , and the traceback is below:

 Process Process-1:
 Traceback (most recent call last):
   File "C:\Python26\Lib\multiprocessing\process.py", line 232, in _bootstrap
     self.run()
   File "C:\Python26\Lib\multiprocessing\process.py", line 88, in run
     self._target(*self._args, **self._kwargs)
   File "C:\Documents and Settings\phamorim\Desktop\test.py", line 7, in print_matrix
     print matrix
   File "C:\Python26\Lib\site-packages\numpy\core\numeric.py", line 1379, in array_str
     return array2string(a, max_line_width, precision, suppress_small, ' ', "", str)
   File "C:\Python26\Lib\site-packages\numpy\core\arrayprint.py", line 309, in array2string
     separator, prefix)
   File "C:\Python26\Lib\site-packages\numpy\core\arrayprint.py", line 189, in _array2string
     data = _leading_trailing(a)
   File "C:\Python26\Lib\site-packages\numpy\core\arrayprint.py", line 162, in _leading_trailing
     min(len(a), _summaryEdgeItems))]
   File "C:\Python26\Lib\site-packages\numpy\core\memmap.py", line 257, in __array_finalize__
     self.filename = obj.filename
 AttributeError: 'memmap' object has no attribute 'filename'
 Exception AttributeError: AttributeError("'NoneType' object has no attribute
 'tell'",) in <bound method memmap.__del__ of memmap([0, 0, 0, ..., 0, 0, 0],
 dtype=int16)> ignored

 I don't know if it's a bug, but I thought it was important to report
 because version 1.4.1 was working and 1.5.0 and above were not.

 Thanks!


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion