Hi Vincent,
(1) A Fortran compiler isn't necessary for numpy, but is for scipy,
which isn't ported to python 3 yet.
(2) Could you put up on pastebin or somewhere online the full error
you got?
The problem isn't one of not finding the Python.h header file, which
will be present in the Python
On Jun 7, 2010, at 5:19 PM, Vincent Davis wrote:
Here is a link to the full output after typing python setup.py build.
https://docs.google.com/Doc?docid=0AVQgwG2qUDgdZGYyaGo0NjNfMjI5Z3BraHd6ZDghl=en
that's just bringing up an empty document page for me...
This is unexpected, from the error log:
/Library/Frameworks/Python.framework/Versions/3.1/include/python3.1/
Python.h:11:20: error: limits.h: No such file or directory
No good... it can't find basic system headers. Perhaps it's due to the
MACOSX_DEPLOYMENT_TARGET environment variable that
On Tue, Jun 8, 2010 at 7:58 AM, Zachary Pincus zachary.pin...@yale.edu
wrote:
This is unexpected, from the error log:
/Library/Frameworks/Python.framework/Versions/3.1/include/python3.1/
Python.h:11:20: error: limits.h: No such file or directory
No good... it can't find basic system
Failed again, I have attached the output including the execution of
the above commands.
Thanks for link to the environment variables, I need to read that.
In the attached file (and the one from the next email too) I didn't
see the
MACOSX_DEPLOYMENT_TARGET=10.4
export
match(v1, v2) returns a boolean array of length len(v1), indicating
whether element i of v1 is in v2.
You want numpy.in1d (and friends, probably, like numpy.unique and the
others that are all collected in numpy.lib.arraysetops...)
Definition: numpy.in1d(ar1, ar2, assume_unique=False)
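A minimal sketch of numpy.in1d doing exactly the match() described above:

```python
import numpy as np

v1 = np.array([0, 1, 2, 5, 0])
v2 = np.array([0, 2])

# True at position i exactly when v1[i] appears somewhere in v2.
mask = np.in1d(v1, v2)
print(mask)  # [ True False  True False  True]
```

(Newer numpy also offers numpy.isin, which handles multidimensional input with the same semantics.)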
Hi Ionut,
Check out the tabular package:
http://parsemydata.com/tabular/index.html
It seems to be basically what you want... it does pivot tables (aka
crosstabulation), it's built on top of numpy, and has simple data IO
tools.
Also check out this discussion on pivot tables from the numpy
A[:5,:5] shows the data I want, but it's not contiguous in memory.
A.resize(5,5) is contiguous, but does not contain the data I want.
How to get both efficiently?
A[:5,:5].copy()
will give you a new, contiguous array that has the same contents as
A[:5,:5], but in a new chunk of memory. Is
, 44]])
In [41]: b.flags.c_contiguous
Out[41]: True
In [42]: b.flags.owndata
Out[42]: False
Zach
On Wed, Aug 4, 2010 at 5:20 PM, Zachary Pincus zachary.pin...@yale.edu
wrote:
A[:5,:5] shows the data I want, but it's not contiguous in memory.
A.resize(5,5) is contiguous, but do
). Other slices won't have this property... A[:] = A[::-1]
e.g. will fail totally.
On Aug 4, 2010, at 11:52 AM, Zachary Pincus wrote:
Yes it is, but is there a way to do it in-place?
So you want the first 25 elements of the array (in a flat contiguous
view) to contain the 25 elements
indices = argsort(a1)
ranks = zeros_like(indices)
ranks[indices] = arange(len(indices))
Doesn't answer your original question directly, but I only recently
learned from this list that the following does the same as the above:
ranks = a1.argsort().argsort()
Will wonders never cease...
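Both ranking recipes can be checked side by side; a small sketch:

```python
import numpy as np

a1 = np.array([30, 10, 20])

# Two-step ranking, as in the recipe above:
indices = np.argsort(a1)
ranks = np.zeros_like(indices)
ranks[indices] = np.arange(len(indices))
print(ranks)  # [2 0 1]

# The double-argsort one-liner produces the same ranks:
assert (ranks == a1.argsort().argsort()).all()
```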
So
In the end, the question was: is it worth adding start= and stop=
markers
into loadtxt to allow grabbing sections of a file between two known
headers? I imagine it's something that people come up against
regularly.
Simple enough to wrap your file in a new file-like object that stops
Though, really, it's annoying that numpy.loadtxt needs both the
readline function *and* the iterator protocol. If it just used
iterators, you could do:
def truncator(fh, delimiter='END'):
    for line in fh:
        if line.strip() == delimiter:
            break
        yield line
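For what it's worth, current numpy.loadtxt does accept a plain iterable of lines, so a generator like the one above can be used directly; a sketch:

```python
import io
import numpy as np

def truncator(fh, delimiter='END'):
    # Pass lines through until the delimiter line is reached.
    for line in fh:
        if line.strip() == delimiter:
            break
        yield line

fh = io.StringIO("1 2\n3 4\nEND\n5 6\n")
data = np.loadtxt(truncator(fh))
print(data)  # [[1. 2.]
             #  [3. 4.]]
```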
On Sep 17, 2010, at 3:59 PM, Benjamin Root wrote:
So, this code will still raise an error for an empty file.
Personally, I consider that a bug because I would expect to receive
an empty array. I could understand raising an error for a non-empty
file that does not contain anything
Though, really, it's annoying that numpy.loadtxt needs both the
readline function *and* the iterator protocol. If it just used
iterators, you could do:
def truncator(fh, delimiter='END'):
    for line in fh:
        if line.strip() == delimiter:
            break
        yield line
Hi all,
I looked at line 21902 of dlapack_lite.c, it is,
for (niter = iter; niter <= 20; ++niter) {
Indeed the upper limit for iterations in the
linalg.svd code is set for 20. For now I will go with
my method (on earlier post) of squaring the matrix and
then doing svd when the
Hi,
How about a combination of sort, followed by searchsorted right/left
using the bin boundaries as keys? The difference of the two
resulting vectors is the bin value. Something like:
In [1]: data = arange(100)
In [2]: bins = [0,10,50,70,100]
In [3]: lind = data.searchsorted(bins)
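The suggestion above can be sketched end-to-end; the diff of the searchsorted results gives the per-bin counts:

```python
import numpy as np

data = np.arange(100)
bins = [0, 10, 50, 70, 100]

data.sort()  # searchsorted assumes sorted input (already true here)
lind = data.searchsorted(bins, side='left')
counts = np.diff(lind)
print(counts)  # [10 40 20 30]
```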
Hi, and thanks for the suggestion!
How many bits per pixel does your camera actually generate !?
If it's for example a 12 bit camera, you could just fill in directly
into 4096 preallocated bins.
You would not need any sorting !!
That's what I did for a 16 bit camera -- but I wrote it in C and
But even if indices = array, one still needs to do something like:
for index in indices: histogram[index] += 1
Which is slow in python and fast in C.
I thought of a broadcasting approach... what are the chances that a
simple
bins[:] = 0
bins[ img.flat ] += 1
That doesn't work
Hello,
But even if indices = array, one still needs to do something like:
for index in indices: histogram[index] += 1
numpy.bincount?
That is indeed what I was looking for! I knew I'd seen such a function.
However, the speed is a bit disappointing. I guess the sorting isn't
too much of a
Combining Sebastian and Jae-Joon's suggestions, I have something that
might work:
timeit numpy.bincount(array.flat)
10 loops, best of 3: 28.2 ms per loop
This is close enough to video-rate... And I can then combine bins as
needed to get a particular bin count/range after the fact.
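A sketch of the bincount approach for a camera frame (the 12-bit depth and array sizes are illustrative, not from the thread):

```python
import numpy as np

# Hypothetical 12-bit camera frame.
img = np.random.randint(0, 4096, size=(512, 512))

# One bin per possible pixel value; bincount needs no sorting.
hist = np.bincount(img.ravel(), minlength=4096)

# Combine bins after the fact, e.g. 16 raw values per displayed bin:
coarse = hist.reshape(256, 16).sum(axis=1)
print(coarse.sum() == img.size)  # True
```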
Thanks,
Hello all,
I need to allocate a numpy array that I will then pass to a camera
driver (via ctypes) so that the driver can fill the array with pixels.
The catch is that the driver requires that rows of pixels start at
4-byte boundaries.
The sample C++ code given for allocating memory for
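A pure-numpy sketch of row-aligned allocation, assuming the driver only needs each row to start on a 4-byte boundary (the function name and approach are mine, not the thread's): over-allocate a byte buffer, slice from the first aligned byte, and pad each row out to a multiple of the alignment.

```python
import numpy as np

def aligned_rows(nrows, ncols, dtype=np.uint8, alignment=4):
    """Return an (nrows, ncols) array whose every row starts on an
    `alignment`-byte boundary.  A sketch, not production code."""
    dtype = np.dtype(dtype)
    rowbytes = ncols * dtype.itemsize
    # Pad each row's byte length up to the next multiple of `alignment`.
    stride = -(-rowbytes // alignment) * alignment
    buf = np.empty(nrows * stride + alignment, dtype=np.uint8)
    # Offset to the first aligned byte of the buffer.
    start = -buf.ctypes.data % alignment
    full = buf[start:start + nrows * stride].reshape(nrows, stride)
    # Keep only the leading `rowbytes` of each padded row.
    return full[:, :rowbytes].view(dtype)

a = aligned_rows(16, 7)
print(a.ctypes.data % 4, a.strides[0] % 4)  # 0 0
```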
Hello all,
I'm writing up a general function to allocate aligned numpy arrays
(I'll post it shortly, as Anne suggested that such a function would be
useful).
However, I've run into trouble with using ndarray.view() in odd
corner-cases:
In : numpy.__version__
Out: '1.1.0.dev5077'
In : a =
It works for me:
x = arange(0,10)
scale=1
loc=1
norm = 1 / (scale * sqrt(2 * pi))
y = norm * exp(-power((x - loc), 2) / (2 * scale**2))
y
array([ 1.46762663e-01, 3.98942280e-01, 1.46762663e-01,
5.39909665e-02, 2.68805194e-03, 1.33830226e-04,
9.01740968e-07,
Hi all,
Actually -- it seems like view() doesn't work with strided arrays at
all. (?)
In : a = numpy.ones((4,32), dtype=numpy.uint8)
In : a.view(numpy.uint16).shape
Out: (4, 16)
In : a[:,:16].view(numpy.uint16)
ValueError: new type not compatible with array.
I think this might be a
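A workaround sketch: copying the slice first makes it contiguous, after which the dtype-changing view succeeds (recent numpy versions may also accept the strided view directly when the last axis is contiguous):

```python
import numpy as np

a = np.ones((4, 32), dtype=np.uint8)

# The full array views fine: 32 uint8 bytes per row -> 16 uint16 values.
assert a.view(np.uint16).shape == (4, 16)

# Copy the strided slice so it is contiguous, then reinterpret:
b = np.ascontiguousarray(a[:, :16]).view(np.uint16)
print(b.shape)   # (4, 8)
print(b[0, 0])   # 257, i.e. two 0x01 bytes read as one uint16
```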
Hello all,
Attached is code (plus tests) for allocating aligned arrays -- I think
this addresses all the requests in this thread, with regard to
allowing for different kinds of alignment. Thanks Robert and Anne for
your help and suggestions. Hopefully this will be useful.
The core is a
On May 17, 2008, at 9:34 AM, David Cournapeau wrote:
Nripun Sredar wrote:
I have a sparse matrix 416x52. I tried to factorize this matrix using
svd from numpy. But it didn't produce a result and looked like it is
in an infinite loop.
I tried a similar operation using random numbers in the
Thanks for the tips! This is very helpful.
Specifically, I have a package that uses numpy and numpy.distutils to
built itself. Unfortunately, there are some pure-C libraries that I
call using ctypes, and as these libraries are not python
extensions, it is hard to get distutils to build
Hello all,
I've been toying around with bundling up a numpy-using python program
for windows by using py2exe. All in all, it works great, except for
one thing: the numpy superpack installer for windows has (correctly)
selected SSE3 binary libraries to install on my machine. This causes
I'd like to shift the columns of a 2d array one column to the right.
Is there a way to do that without making a copy?
I think what you want is numpy.roll?
Definition: numpy.roll(a, shift, axis=None)
Docstring:
Roll the elements in the array by 'shift' positions along
the given
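A quick sketch of numpy.roll on columns; note that roll returns a new array, so it doesn't avoid the copy the poster asked about:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# Shift each row one column to the right (wrapping around).
shifted = np.roll(a, 1, axis=1)
print(shifted)
# [[2 0 1]
#  [5 3 4]]
```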
build my own as required?
Thanks,
Zach
On Jun 4, 2008, at 1:37 PM, Zachary Pincus wrote:
Hello all,
I've been toying around with bundling up a numpy-using python program
for windows by using py2exe. All in all, it works great, except for
one thing: the numpy superpack installer for windows
Hello all,
I've noticed that py2app and py2exe work strangely on my project,
which (having fortran code) is driven by the numpy distutils. Now,
these two distutils commands need to peek into the build/lib.
[whatever] directories to grab the files to package up. Indeed, the
docs for py2exe
Is there any efficient implementation of bilateral filter that works
smoothly with numpy?
Not that I know of...
Of course, if you were to write one, I'm sure there would be some
interest in it! I would recommend looking into the tools in
scipy.ndimage for basic image-processing support,
Attached here my cython implementation of the bilateral filter,
which is my first cython program. I would ask for the following:
Thanks for the code! I know it will be of use to me. (Do you place any
particular license on it?)
Zach
On Aug 5, 2008, at 9:38 AM, Nadav Horesh wrote:
Hello all,
I have a bizarre bug that causes numpy to segfault, but:
- only when run under ipython
- only when numpy is imported *after* another library (that does
not import numpy)
Here's what the code looks like.
Crashes (only in ipython):
import celltool.utility.pil_lite.Image as Image,
Hmm, I may have identified this by other means. See my next email...
Zach
On Aug 7, 2008, at 3:22 PM, Zachary Pincus wrote:
Hello all,
I have a bizarre bug that causes numpy to segfault, but:
- only when run under ipython
- only when numpy is imported *after* another library (that does
Hello all,
As per my previous email, I encountered a strange sometimes-segfault
when using 'numpy.array(thing)' to convert 'thing' (which provides an
__array_interface__) to a numpy array.
The offending __array_interface__ has a 'data' item that is a python
string (not, as specified in the
Hello all,
numpy.unpackbits has a docstring that states that it returns a boolean
array, but the function instead returns a uint8 array. Should I enter
this in trac as a documentation bug or a functionality bug?
Also, numpy.packbits will not accept a bool-typed array as input (only
the following code drives python into an endless loop:
import numpy
numpy.fromstring('abcd', dtype = float, sep = ' ')
It works on OS X 10.5.4 with a today's SVN head of numpy:
In [1]: import numpy
In [2]: numpy.fromstring('abcd', dtype = float, sep = ' ')
Out[2]: array([ 0.])
In [3]:
Hi Dan,
Your approach generates numerous large temporary arrays and lists. If
the files are large, the slowdown could be because all that memory
allocation is causing some VM thrashing. I've run into that at times
parsing large text files.
Perhaps better would be to iterate through the
This is similar to what I tried originally! Unfortunately, repeatedly
appending to a list seems to be very slow... I guess Python keeps
reallocating and copying the list as it grows. (It would be nice to
be
able to tune the increments by which the list size increases.)
Robert's right, as
is it really necessary to label these dmg's for 10.5 only?
No. This is done automatically by the tool used to build the mpkg.
I'll look at changing this to 10.4, thanks for the reminder.
If the dmg name is generated from the distribution name that the
python distutils makes (e.g.
import numpy
linsp = numpy.linspace
red = linsp(0, 255, 50)
green = linsp(125, 150, 50)
blue = linsp(175, 255, 50)
array's elements are float. How do I convert them into integer?
I need to build a new array from red, green, blue. like this:
[[ red[0], green[0], blue[0] ],
[ red[1],
I'm looking for a way to accomplish the following task without lots
of loops involved, which are really slowing down my code.
I have a 128x512 array which I want to break down into 2x2 squares.
Then, for each 2x2 square I want to do some simple calculations
such as finding the maximum
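One common loop-free approach (a sketch, using the array size from the post): reshape so each 2x2 square gets its own pair of axes, then reduce over those axes:

```python
import numpy as np

A = np.arange(128 * 512, dtype=float).reshape(128, 512)

# Give each 2x2 square its own pair of axes, then reduce over them:
tiles = A.reshape(64, 2, 256, 2)
block_max = tiles.max(axis=(1, 3))    # shape (64, 256)

assert block_max[0, 0] == A[:2, :2].max()
```

The same reshape works for sums, means, or any other axis-wise reduction.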
This almost works. Is there a way to do some masking on tiles, for
instance taking the maximum height of each 2x2 square that is an
odd number? I've tried playing around with masking and where, but
they don't return an array of the original size and shape of tiles
below.
Could you provide
Hi Alan,
Traceback (most recent call last):
File "/usr/local/lib/python2.5/site-packages/enthought.traits-2.0.4-py2.5-linux-i686.egg/enthought/traits/trait_notifiers.py", line 325, in call_1
self.handler( object )
File "TrimMapl_1.py", line 98, in _Run_fired
outdata =
Hi, probably a basic question, but I'm looking for a neat way to sum
all the positive values in an array of floats. I'm currently doing it
the hard way, but am hoping there is some cunning and elegant syntax I
can use instead
Fancy indexing's my favorite cunning and elegant syntax:
a =
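A sketch of the fancy-indexing recipe:

```python
import numpy as np

a = np.array([1.5, -2.0, 3.0, -0.5])

# Boolean fancy indexing keeps only the positive entries:
print(a[a > 0].sum())  # 4.5
```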
Hello all,
I'm doing something silly with images and am unable to figure out the
right way to express this with fancy indexing -- or anything other
than a brute for-loop for that matter.
The basic gist is that I have an array representing n images, of shape
(n, x, y). I also have a map of
You need to give an array for each axis. Each of these arrays will be
broadcast against each other to form three arrays of the desired shape
of composite. This is discussed in the manual here:
http://mentat.za.net/numpy/refguide/indexing.xhtml#indexing-multi-dimensional-arrays
http://mentat.za.net/numpy/numpy_advanced_slides/
Those slides are really useful! Thanks a ton.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
Hi,
The PIL has some fundamental architectural problems that prevent it
from dealing easily with 16-bit TIFFs, which are exacerbated on
little-endian platforms. Add to this a thin sheen of various byte-order bugs
and other problems in the __array_interface__, and it's really hard to
get
Hi all,
I'm curious about how to control compiler options for mingw builds of
numpy on windows... Specifically, I want to build binaries without SSE
support, so that they can run on older hardware.
Setting a CFLAGS variable on the command-line doesn't appear to do
anything, but perhaps
I'm curious about how to control compiler options for mingw builds of
numpy on windows... Specifically, I want to build binaries without
SSE
support, so that they can run on older hardware.
The windows binaries of numpy can run on machines without SSE support.
If for some reason you want
Hi Pierre,
I've tested the new loadtxt briefly. Looks good, except that there's a
minor bug when trying to use a specific white-space delimiter (e.g.
\t) while still allowing other white-space within fields
(e.g. spaces).
Specifically, on line 115 in LineSplitter, we have:
I needed it to help me fix a couple of bugs for old CPUs, so it
ended up being implemented in the nsis script for scipy now (I will
add it to numpy installers too). So from now on, any new releases of
both numpy and scipy installers can be overridden:
installer-name.exe /arch native -
This looks really cool -- thanks Luis.
Definitely keep us posted as this progresses, too.
Zach
On Dec 29, 2008, at 4:41 PM, Luis Pedro Coelho wrote:
On Monday 29 December 2008 14:51:48 Luis Pedro Coelho wrote:
I will make the git repository publicly available once I figure out
how to
do
Hi,
intersect1d and setmember1d don't give the expected results when
there are duplicate values in either array, because they work by
sorting the data and subtracting the previous value. Is there an
alternative in numpy to get the indices of intersected values?
From the docstring for setmember1d
Hi all,
I just grabbed the latest bilateral filter from Stéfan's repository,
but I can't get it to work! I'm using a recent numpy SVN and the
latest release of cython...
In [10]: bl = bilateral.bilateral(image, 2, 150)
releases of numpy and cython on a linux box (do you use Mac?). I am
attaching the package I have on my PC, for the small chance it would
help.
Nadav.
-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Zachary Pincus
Sent: Fri 27-February-09 22:26
To: Discussion
Well, the latest cython doesn't help -- both errors still appear as
below. (Also, the latest cython can't run the numpy tests either.) I'm
befuddled.
Zach
On Feb 28, 2009, at 5:18 PM, Zachary Pincus wrote:
Hi all,
So, I re-grabbed the latest bilateral code from the repository, and
did
Well, the latest cython doesn't help -- both errors still appear as
below. (Also, the latest cython can't run the numpy tests either.)
I'm
befuddled.
That's pretty weird. Did you remove the .so that was built, as well as
any source files, before doing build_ext with the latest Cython?
).
As you said, probably the cython list is a better place to look for
an answer. I would be happy to see how this issue resolved.
Nadav
-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Zachary Pincus
Sent: Sun 01-March-09 20:59
To: Discussion of Numerical Python
Subject: Re
2009/3/1 Zachary Pincus zachary.pin...@yale.edu:
Dag, the cython person who seems to deal with the numpy stuff, had
this to say:
- cimport and import are different things; you need both.
- The dimensions field is in Cython renamed shape to be closer
to the Python interface. This is done
Hi Stéfan,
http://github.com/stefanv/bilateral.git
Cool! Does this, out of curiosity, break things for you? (Or Nadav?)
I wish I had some way to test. Do you maybe have a short example that
I can convert to a test?
Here's my test case for basic working-ness (e.g. non
exception-throwing)
did you have a look at OpenCV?
http://sourceforge.net/projects/opencvlibrary
Since a couple of weeks, we have implemented the numpy array
interface so data exchange is easy [check out from SVN].
Oh fantastic! That is great news indeed.
Zach
Hi David,
Thanks again for bundling in the architecture-specification flag into
the numpy superpack installers: being able to choose sse vs. nosse
installs is really helpful to me, and from what I hear, many others as
well!
Anyhow, I just noticed (sorry I didn't see this before the
I have two 3D density maps (meaning volumetric data, each one
essentially a IxJxK matrix containing real values that sum to one) and
want to find the translation between the two that maximises
correlation.
This can be done by computing the correlation between the two
(correlation theorem -
Does it work to use a cutoff of half the size of the input arrays in
each dimension? This is equivalent to calculating both shifts (the
positive and negative) and using whichever has a smaller absolute
value.
no, unfortunately the cutoff is not half of the dimensions.
Explain more about
Hi John,
First, did you build your own Python 2.6 or install from a binary?
When you type python at the command prompt, which python runs? (You
can find this out by running which python from the command line.)
Second, it appears that numpy is *already installed* for a non-apple
python 2.5
Hi Johannes,
According to http://www.pygtk.org/pygtk2reference/class-gdkpixbuf.html ,
the pixels_array is a numeric python array (a
predecessor to numpy). The upshot is that perhaps the nice
broadcasting machinery will work fine:
pb_pixels[...] = fits_pixels[..., numpy.newaxis]
This might
scipy.ndimage.zoom (and related interpolation functions) would be a
good bet -- different orders of interpolation are available, too,
which can be useful.
Zach
On May 4, 2009, at 11:40 AM, Johannes Bauer wrote:
Hello list,
is there a possibility to scale an array by interpolation,
You might want also to look into scipy.ndimage.zoom.
Zach
On Jul 9, 2009, at 9:42 AM, Thomas Hrabe wrote:
Hi all,
I am not a newbie to python and numpy, but somehow I cannot
find a proper solution for my interpolation problem without coding it
explicitly myself.
All I want to do
Might want to look into masked arrays: numpy.ma.array.
a = numpy.array([1,5,4,99])
b = numpy.array([3,7,2,8])
arr = numpy.array([a, b])
masked = numpy.ma.array(arr, mask = arr==99)
masked.mean(axis=0)
masked_array(data = [2.0 6.0 3.0 8.0],
mask = [False False False False],
Does numpy have functions to convert between e.g. an array of uint32
and
uint8, where the uint32 array is a packed version of the uint8 array
(selecting little/big endian)?
You could use the ndarray constructor to look at the memory differently:
In : a = numpy.arange(240, 260,
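ndarray.view reinterprets the same memory under a different dtype, which covers the packed uint32/uint8 case; a sketch (byte order follows the machine):

```python
import numpy as np

a32 = np.array([0x01020304], dtype=np.uint32)

# Same four bytes, read as four uint8 values (order depends on endianness):
a8 = a32.view(np.uint8)
print(a8)  # [4 3 2 1] on a little-endian machine

# The round trip recovers the packed value:
assert a8.view(np.uint32)[0] == 0x01020304
```

For an explicit little/big-endian choice regardless of the host, views of the endian-specific dtypes '<u4' and '>u4' can be used instead.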
We have a need to to generate half-size version of RGB images as
quickly
as possible.
How good do these need to look? You could just throw away every other
pixel... image[::2, ::2].
Failing that, you could also try using ndimage's convolve routines to
run a 2x2 box filter over the
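Two quick sketches: pixel dropping, and a 2x2 box-filter average done with a reshape instead of ndimage's convolve (the image size is illustrative):

```python
import numpy as np

img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

# Cheapest: drop every other pixel.
half = img[::2, ::2]

# Nicer: average each 2x2 block (a box filter), then downsample.
boxed = img.reshape(240, 2, 320, 2, 3).mean(axis=(1, 3)).astype(np.uint8)
print(half.shape, boxed.shape)  # (240, 320, 3) (240, 320, 3)
```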
I believe that pretty generic connected-component finding is already
available with scipy.ndimage.label, as David suggested at the
beginning of the thread...
This function takes a binary array (e.g. zeros where the background
is, non-zero where foreground is) and outputs an array where each
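A minimal sketch of scipy.ndimage.label on a toy binary array (assumes scipy is installed; the default connectivity is 4-connected):

```python
import numpy as np
from scipy import ndimage

binary = np.array([[1, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 1]])

# Each connected foreground region gets its own integer label.
labels, n = ndimage.label(binary)
print(n)       # 2
print(labels)
# [[1 1 0 0]
#  [0 0 0 2]
#  [0 0 2 2]]
```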
Hello,
a < b < c (or any equivalent expression) is python syntactic sugar for
(a < b) and (b < c).
Now, for numpy arrays, a < b gives an array with boolean True or False
where the elements of a are less than those of b. So this gives us two
arrays that python now wants to and together. To do
Wow. Once again, Apple makes using python unnecessarily difficult.
Someone needs a whack with a clue bat.
Well, some tools from the operating system use numpy and other python
modules. And upgrading one of these modules might conceivably break
that dependency, leading to breakage in the
Unless I read your request or the documentation wrong, h5py already
supports pulling specific fields out of compound data types:
http://h5py.alfven.org/docs-1.1/guide/hl.html#id3
For compound data, you can specify multiple field names alongside
the numeric slices:
dset['FieldA']