The `cdist` function in `scipy.spatial.distance` does what you want, and takes ~1 ms on
my machine.
In [1]: import numpy as np
In [2]: from scipy.spatial.distance import cdist
In [3]: a = np.random.random((340, 2))
In [4]: b = np.random.random((329, 2))
In [5]: c = cdist(a, b)
In [6]: c.shape
Out[6]: (340, 329)
It certainly does. Here is mine, showing that numpy is linked against mkl:
In [2]: np.show_config()
lapack_opt_info:
libraries = ['mkl_lapack95', 'mkl_intel', 'mkl_intel_thread',
'mkl_core', 'mkl_p4m', 'mkl_p4p', 'pthread']
library_dirs =
On Fri, Feb 25, 2011 at 12:52 PM, Joe Kington jking...@wisc.edu wrote:
Do you expect to have very large integer values, or only values over a
limited range?
If your integer values will fit into the 16-bit range (or even 32-bit, if
you're on a 64-bit machine; the default dtype is float64...)
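A quick sketch of the memory difference between the default float64 and a smaller integer dtype (array sizes are made up for illustration):

```python
import numpy as np

# float64 is the default dtype, 8 bytes per element
a = np.zeros(1000)
# int16 holds values in [-32768, 32767] at 2 bytes per element
b = np.zeros(1000, dtype=np.int16)

print(a.nbytes)  # 8000
print(b.nbytes)  # 2000
```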
I've got an issue where trying to pass a numpy array to one of my
cython functions fails, with the exception saying that 'int objects
are not iterable'.
So somehow, my array is going from being perfectly OK (I can display
the image and print its shape and size), to going bad right before the
This problem is solved. Lisandro spent a bunch of time with me helping
to track it down. Thanks Lisandro!
On Mon, Nov 9, 2009 at 6:49 PM, Chris Colbert sccolb...@gmail.com wrote:
I've got an issue where trying to pass a numpy array to one of my
cython functions fails, with the exception
Cool. Thanks!
I will take a look at this. We have some code in scikits.image that
creates a QImage from the numpy data buffer for display. But I have
only implemented it for RGB888 so far. So you may have saved me some
time :)
Cheers!
Chris
2009/12/2 Hans Meine me...@informatik.uni-hamburg.de:
Why can't the divisor constant just be made an optional kwarg that
defaults to zero?
It won't break any existing code, and will let everybody who wants the
other behavior have it.
On Thu, Dec 3, 2009 at 1:49 PM, Colin J. Williams c...@ncf.ca wrote:
Yogesh,
Could you explain the rationale
not to mention that the idea probably isn't going to work if his
problem is non-linear ;)
On Thu, Dec 10, 2009 at 7:36 PM, Norbert Nemec
norbert.nemec.l...@gmx.de wrote:
Dag Sverre Seljebotn wrote:
I haven't heard of anything, but here's what I'd do:
- Use np.int64
- Multiply all inputs
On Sat, Dec 19, 2009 at 6:43 AM, Charles R Harris charlesr.har...@gmail.com
wrote:
On Fri, Dec 18, 2009 at 10:20 PM, Wayne Watson
sierra_mtnv...@sbcglobal.net wrote:
This program gives me the message following it:
Program==
import numpy as np
from numpy import
Perhaps it's my inability to properly use OpenMP, but when working on
scikits.image on algorithms doing per-pixel manipulation with numpy arrays
(using Cython), I saw better performance using Python threads and releasing
the GIL than I did with OpenMP. I found the OpenMP overhead to be quite
In [4]: %timeit a = np.random.randint(0, 20, 100)
10 loops, best of 3: 4.32 us per loop
In [5]: %timeit (a >= 10).sum()
10 loops, best of 3: 7.32 us per loop
In [8]: %timeit np.where(a >= 10)
10 loops, best of 3: 5.36 us per loop
Am I missing something?
On Wed, Feb 24, 2010 at 12:50 PM,
This is how I always do it:
In [1]: import numpy as np
In [3]: tmat = np.array([[0., 1., 0., 5.],[0., 0., 1., 3.],[1., 0., 0.,
2.]])
In [4]: tmat
Out[4]:
array([[ 0., 1., 0., 5.],
[ 0., 0., 1., 3.],
[ 1., 0., 0., 2.]])
In [5]: points = np.random.random((5, 3))
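The truncated example presumably goes on to apply tmat to the points; one way to do that for a 3x4 affine matrix (my guess at where the example was headed, not the original author's code):

```python
import numpy as np

tmat = np.array([[0., 1., 0., 5.],
                 [0., 0., 1., 3.],
                 [1., 0., 0., 2.]])
points = np.random.random((5, 3))

# split the 3x4 matrix into its rotation part (first 3 columns) and
# translation part (last column), then apply it to every point at once
transformed = np.dot(points, tmat[:, :3].T) + tmat[:, 3]
print(transformed.shape)  # (5, 3)
```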
On Fri, Apr 2, 2010 at 3:03 PM, Erik Tollerud erik.tolle...@gmail.comwrote:
you could try something like this (untested):
if __name__ == '__main__':
    try:
        import numpy
    except ImportError:
        import subprocess
        subprocess.check_call(['easy_install', 'numpy'])  # will
On Sat, Apr 3, 2010 at 12:17 AM, josef.p...@gmail.com wrote:
On Fri, Apr 2, 2010 at 11:45 PM, Chris Colbert sccolb...@gmail.com wrote:
On Fri, Apr 2, 2010 at 3:03 PM, Erik Tollerud erik.tolle...@gmail.com
wrote:
you could try something like this (untested):
if __name__ == '__main__
On Sat, Apr 3, 2010 at 12:52 PM, Antoine Pairet li...@pairet.be wrote:
On Sat, 2010-04-03 at 11:04 -0500, Warren Weckesser wrote:
Don't include that last numpy in the path. E.g.
export
PYTHONPATH=$PYTHONPATH:/home/pcpm/pairet/pythonModules/numpy/lib64/python2.4/site-packages
numpy is
On Sat, Apr 3, 2010 at 5:50 PM, Chris Colbert sccolb...@gmail.com wrote:
On Sat, Apr 3, 2010 at 12:52 PM, Antoine Pairet li...@pairet.be wrote:
On Sat, 2010-04-03 at 11:04 -0500, Warren Weckesser wrote:
Don't include that last numpy in the path. E.g.
export
PYTHONPATH=$PYTHONPATH:/home
On Tue, May 4, 2010 at 12:20 PM, S. Chris Colbert sccolb...@gmail.comwrote:
On Thu, 2009-03-12 at 19:59 +0100, Dag Sverre Seljebotn wrote:
(First off, is it OK to continue polling the NumPy list now and then on
Cython language decisions? Or should I expect that any interested Cython
users
I had this problem back in 2009 when building Enthought Enable, and was
happy with a work around. It just bit me again, and I finally got around to
drilling down to the problem.
On Linux, if one uses the numpy/site.cfg [default] section when building
from source to specify local library
On Wed, May 12, 2010 at 11:06 PM, Chris Colbert sccolb...@gmail.com wrote:
I had this problem back in 2009 when building Enthought Enable, and was
happy with a work around. It just bit me again, and I finally got around to
drilling down to the problem.
On linux, if one uses the numpy
Yes, concatenate is doing other work under the covers. In short, it supports
concatenating a list of arbitrary Python sequences into an array and does
checking on each element of the tuple to ensure it is valid to concatenate.
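A small illustration of that flexibility: concatenate will happily join a list, a tuple, and an ndarray in one call, validating each element along the way.

```python
import numpy as np

# a mix of array-like sequences joined into one array
out = np.concatenate(([1, 2], (3, 4), np.array([5])))
print(out)  # [1 2 3 4 5]
```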
On Tue, Aug 17, 2010 at 9:03 AM, Zbyszek Szmek zbys...@in.waw.pl
a Image.tostring() anyway, which negates any
benefit.
Thanks in advance for the help!
S. Chris Colbert
Rehabilitation Robotics Laboratory
University of South Florida
Code ###
import VideoCapture
import wx
import time
import threading
import numpy as np
import Image
class WebCamWorker
.
On Tue, Feb 24, 2009 at 9:15 PM, Chris Colbert sccolb...@gmail.com
wrote:
Hi all,
I'm new to the mailing list and relatively new (~1 year) to python/numpy. I
would appreciate any insight any of you may have here. The last 8 hours
of
digging through the docs has left me, finally, stuck
-segment buffer interface. i.e. no
buffer(revpixels) is needed; just simply revpixels.
array.copy() is also 50% faster than array.tostring() on my machine.
Chris
On Tue, Feb 24, 2009 at 9:27 PM, Chris Colbert sccolb...@gmail.com wrote:
thanks for both answers!
Lisandro, you're right, I should have
thanks!
On Wed, Feb 25, 2009 at 1:22 AM, Andrew Straw straw...@astraw.com wrote:
Given what you're doing, may I also suggest having a look at
http://code.astraw.com/projects/motmot/wxglvideo.html
-Andrew
Chris Colbert wrote:
As an update for any future googlers:
the problem
In addition to what Robert said, you also only need to calculate six
transcendentals:
cx = cos(tx)
sx = sin(tx)
cy = cos(ty)
sy = sin(ty)
cz = cos(tz)
sz = sin(tz)
you are making sixteen transcendental calls in your loop each time.
I can also recommend Chapter 2 of Introduction to Robotics:
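Those six cached values are enough to assemble the full rotation matrix; a sketch assuming an Rx·Ry·Rz composition (the axis order and angles here are illustrative, not from the original post):

```python
from math import cos, sin
import numpy as np

tx, ty, tz = 0.1, 0.2, 0.3  # example angles

# compute the six transcendentals exactly once
cx, sx = cos(tx), sin(tx)
cy, sy = cos(ty), sin(ty)
cz, sz = cos(tz), sin(tz)

# assemble Rx(tx) @ Ry(ty) @ Rz(tz) from the cached values
R = np.array([
    [cy * cz,                -cy * sz,                sy      ],
    [sx * sy * cz + cx * sz, -sx * sy * sz + cx * cz, -sx * cy],
    [-cx * sy * cz + sx * sz, cx * sy * sz + sx * cz,  cx * cy],
])
```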
Since you only need to calculate the sine or cosine of a single value (not
an array of values), I would recommend using the sine and cosine functions of
the Python standard math library, as they are a full order of magnitude faster
(at least on my Core 2 Windows Vista box).
i.e. import math as m
m.sin
a = np.array([1, 2, 3])
b = str(a).replace('[', '').replace(']', '')
there's probably a better way, but it works.
On Tue, Mar 10, 2009 at 12:21 PM, Mark Bakker mark...@gmail.com wrote:
Hello,
I want to convert an array to a string.
I like array2string, but it puts these annoying square brackets
As long as we all agree that e has a value of 2.71828 18284 59045 23536, it's
just a matter of semantics.
The constant you reference is indicated by a lowercase Greek gamma.
Chris
On Wed, Mar 11, 2009 at 11:39 AM, Charles R Harris
charlesr.har...@gmail.com wrote:
Traditionally, Euler's constant is
I don't know the correct answer... but I imagine it would be fairly easy to
compile a couple of representative scripts on each compiler and compare their
performance.
On Wed, Mar 11, 2009 at 4:29 PM, Sebastian Haase ha...@msg.ucsf.edu wrote:
Hi,
I was wondering if people could comment on which
There has already been a port of the Robotics Toolbox for MATLAB into Python,
which is built on numpy:
http://code.google.com/p/robotics-toolbox-python/
which contains all the functions you are describing.
Chris
On Wed, Mar 4, 2009 at 6:10 PM, Gareth Elston
gareth.elston.fl...@googlemail.com
Hey Everyone,
I built Lapack and Atlas from source last night on a C2D running 32-bit
Linux Mint 6. I ran 'make check' and 'make time' on the lapack build, and
ran the dynamic LU decomp test on atlas. Both packages checked out fine.
Then, I built numpy and scipy against them using the
numpy.test() doesn't return (after 2 hours of running at 100%, at least). I
imagine it's hanging on this eig function as well.
Chris
On Fri, Mar 27, 2009 at 10:12 AM, David Cournapeau
da...@ar.media.kyoto-u.ac.jp wrote:
Chris Colbert wrote:
Hey Everyone,
I built Lapack and Atlas from
-u.ac.jp wrote:
Chris Colbert wrote:
numpy.test() doesn't return (after 2 hours of running at 100% at
least). I imagine its hanging on this eig function as well.
Can you run the following test ?
nosetests -v -s test_build.py (in numpy/linalg).
If it fails, it is almost surely a problem in the way
I compiled everything with gfortran. I don't even have g77 on my system.
On Fri, Mar 27, 2009 at 11:18 AM, Chris Colbert sccolb...@gmail.com wrote:
here are the results from that test:
test_lapack (test_build.TestF77Mismatch) ... ok
I built Atlas 3.8.3 which I assume is the newest release.
Chris
2009/3/27 Charles R Harris charlesr.har...@gmail.com
2009/3/27 Chris Colbert sccolb...@gmail.com
Hey Everyone,
I built Lapack and Atlas from source last night on a C2D running 32-bit
Linux Mint 6. I ran 'make check
Atlas 3.8.3 and Lapack 3.1.1
On Fri, Mar 27, 2009 at 11:05 AM, David Cournapeau
da...@ar.media.kyoto-u.ac.jp wrote:
Chris Colbert wrote:
I compiled everything with gfortran. I dont even have g77 on my system.
Ok. Which version of atlas and lapack are you using ? Lapack 3.2 is
known
David,
The log was too big for the list, so I sent it to your email address
directly.
Chris
2009/3/27 Chris Colbert sccolb...@gmail.com
David,
The log is attached.
Thanks for giving me the bash command. I would have never figured that one
out
Chris
On Fri, Mar 27, 2009 at 11:23
So you think it's a problem with gcc?
I'm using version 4.3.1 shipped with the Ubuntu 8.10 distro.
Chris
On Fri, Mar 27, 2009 at 11:56 AM, David Cournapeau
da...@ar.media.kyoto-u.ac.jp wrote:
Chris Colbert wrote:
David,
The log was too big for the list, so I sent it to your email address
, 2009 at 12:05 PM, David Cournapeau
da...@ar.media.kyoto-u.ac.jp wrote:
Chris Colbert wrote:
So you think its a problem with gcc?
That's my guess, yes.
im using version 4.3.1 shipped with the ubuntu 8.10 distro.
I thought you were using Mint? If you are using Ubuntu, then it is very
forgive my ignorance, but wouldn't installing atlas from the repositories
defeat the purpose of installing atlas at all, since the build process
optimizes it to your own cpu timings?
Chris
On Fri, Mar 27, 2009 at 12:43 PM, David Cournapeau courn...@gmail.comwrote:
2009/3/28 Chris Colbert
, David Cournapeau
da...@ar.media.kyoto-u.ac.jp wrote:
Chris Colbert wrote:
forgive my ignorance, but wouldn't installing atlas from the
repositories defeat the purpose of installing atlas at all, since the
build process optimizes it to your own cpu timings?
Yes and no. Yes, it will be slower
have also
linked to the single threaded counterparts in the section above? (I assumed
one would be overridden by the other)
Other than those, I followed closely the instructions on scipy.org.
Chris
On Fri, Mar 27, 2009 at 12:57 PM, Chris Colbert sccolb...@gmail.com wrote:
this is true
/msg00033.html
it says that the library was not built correctly.
does this mean my atlas .so's (which I built via 'make ptshared') are
incorrect?
I suppose I could just grab atlas from the repositories, but that would be
admitting defeat.
Chris
On Fri, Mar 27, 2009 at 1:09 PM, Chris Colbert sccolb
files.
I've attached both makefiles to this message, if anyone could take a look
and see if something obvious is amiss.
Thanks,
Chris
On Fri, Mar 27, 2009 at 10:32 PM, Chris Colbert sccolb...@gmail.com wrote:
OK, I'm getting the same error on an install of straight Ubuntu 8.10
the guy
going back and looking at this error:
C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall
-Wstrict-prototypes -fPIC
compile options: '-c'
gcc: _configtest.c
gcc -pthread _configtest.o -L/usr/local/atlas/lib -llapack -lptf77blas
-lptcblas -latlas -o _configtest
Robin,
Thanks. I need to get the backport for multiprocessing on 2.5.
But now, it's more of a matter of not wanting to admit defeat
Cheers,
Chris
On Sat, Mar 28, 2009 at 2:30 PM, Robin robi...@gmail.com wrote:
2009/3/28 Chris Colbert sccolb...@gmail.com:
Alright, building numpy
:)
Cheers,
Chris
On Sat, Mar 28, 2009 at 2:42 PM, Chris Colbert sccolb...@gmail.com wrote:
alright,
so I solved the linking error by building numpy against the static atlas
libraries instead of the .so's.
But my original problem persists. Some functions work properly, but
numpy.linalg.eig() still
here it is: 32 bit Intrepid
# LAPACK make include file. #
# LAPACK, Version 3.1.1 #
# February 2007
I notice my OPTS and NOOPTS are different than yours. (I went off the
scipy.org install guide.)
Do you think that's the issue?
Cheers,
Chris
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
I just ran a dummy config on atlas and it's giving me different OPTS and
NOOPTS flags than the scipy tutorial, so I'm gonna try that and report back.
Chris
2009/3/28 Charles R Harris charlesr.har...@gmail.com
2009/3/28 Chris Colbert sccolb...@gmail.com
I notice my OPTS and NOOPTS
of -m32. so maybe
specifying bit size here is needed too.
Chris
2009/3/28 Charles R Harris charlesr.har...@gmail.com
2009/3/28 Chris Colbert sccolb...@gmail.com
i just ran a dummy config on atlas and its giving me different OPTS and
NOOPTS flags than the scipy tutorial. so im gonna try
build.log. Numpy will still work, but who knows what
function may be broken).
Now, off to build numpy 1.3.0rc1
Thanks for all the help gents!
Chris
On Sat, Mar 28, 2009 at 4:27 PM, Chris Colbert sccolb...@gmail.com wrote:
yeah, I set -b 32 on atlas...
the bogus atlas config was telling me
at 3:34 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
2009/3/28 Chris Colbert sccolb...@gmail.com
YES! YES! YES! YES! HAHAHAHA! YES!
using these flags in make.inc to build lapack 3.1.1 worked:
OPTS = -O2 -fPIC -m32
NOOPTS = -O2 -fPIC -m32
then build atlas as normal and build numpy
aside from a smaller numpy install size, what do I gain from linking against
the .so's vs the static libraries?
Chris
On Sat, Mar 28, 2009 at 6:09 PM, Chris Colbert sccolb...@gmail.com wrote:
what does ldconfig do other than refresh the library path?
I copied the .so's to /usr/local/atlas
that means it's OK...
Chris
On Sat, Mar 28, 2009 at 6:30 PM, Chris Colbert sccolb...@gmail.com wrote:
aside from a smaller numpy install size, what do i gain from linking
against the .so's vs the static libraries?
Chris
On Sat, Mar 28, 2009 at 6:09 PM, Chris Colbert sccolb...@gmail.comwrote
if you built numpy from source with a site.cfg file pointing to your atlas
libraries, numpy.dot() will use that library natively. No need to import
_dotblas.
Chris
On Tue, Apr 14, 2009 at 6:16 PM, Mathew Yeates myea...@jpl.nasa.gov wrote:
Hi
The line
from _dotblas import dot . is giving
You could always build atlas from source; the benefit of that is that you
can build it with threading enabled, vs the Ubuntu packages, which don't scale
across cores (at least on my machine they didn't).
The build instructions on scipy.org are fairly complete, if you give it a go
and have any
Hi all,
I'm building numpy 1.3.0 from source with libatlas-sse2-dev from the jaunty
repos. I'm running into 16 failures when running the nose tests.
This is a fresh install of 9.04 and I've installed the following packages from the repos:
build-essential
swig
gfortran
python-dev
libatlas-sse2-dev
libatlas-base-dev
My case is only for 2d, but it should apply to Nd as well.
It would be convenient if np.max would return a tuple of the max value and
its Nd location indices.
Is there an easier way than just using the 1d flattened array max index
(np.argmax) and calculating its corresponding Nd location?
Chris
but this gives me just the locations of the column/row maximums.
I need the (x,y) location of the array maximum.
Chris
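For future googlers: np.unravel_index maps the flat np.argmax result straight back to N-d coordinates, which answers the question above directly.

```python
import numpy as np

a = np.random.random((40, 50))

# argmax on the flattened array, then map the flat index back to (row, col)
idx = np.unravel_index(np.argmax(a), a.shape)
print(a[idx] == a.max())  # True
```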
On Sun, May 3, 2009 at 4:34 PM, josef.p...@gmail.com wrote:
On Sun, May 3, 2009 at 3:30 PM, Chris Colbert sccolb...@gmail.com
wrote:
my case is only for 2d, but should
wait, never mind. You're right. Thanks!
On Sun, May 3, 2009 at 6:30 PM, Chris Colbert sccolb...@gmail.com wrote:
but this gives me just the locations of the column/row maximums.
I need the (x,y) location of the array maximum.
Chris
On Sun, May 3, 2009 at 4:34 PM, josef.p...@gmail.com
Let's say I have a histogram of a color image that is of size [16, 16, 16].
Now, I have a function that converts my rgb image into the format where each
rgb color (i.e. img[x, y, :] = (r, g, b)) is an integer in range(0, 16).
I want to create a new 2d array where new2darray[x, y] = hist[img[x,y,
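Assuming the image really has been quantized to integers in range(16), the whole lookup can be done with one fancy-indexing expression, no per-pixel loop (dummy data for illustration):

```python
import numpy as np

hist = np.random.random((16, 16, 16))
img = np.random.randint(0, 16, (10, 12, 3))  # quantized "RGB" image

# index the 3d histogram with the three channel planes at once:
# each output pixel is hist[r, g, b] for that pixel's channel values
new2d = hist[img[..., 0], img[..., 1], img[..., 2]]
print(new2d.shape)  # (10, 12)
```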
at 7:34 PM, Chris Colbert sccolb...@gmail.com wrote:
Lets say I have histogram of a color image that is of size [16, 16, 16].
Now, I have a function that converts my rgb image into the format where
each rgb color (i.e. img[x, y, :] = (r, g, b)) is an integer in the range(0,
16)
I want create
In my endless pursuit of performance, I'm searching for a quick way to create
a 3d histogram from a 3d rgb image.
Here is what I have so far for a (16,16,16) 3d histogram:
def hist3d(imgarray):
    histarray = N.zeros((16, 16, 16))
    temp = imgarray.copy()
    (i, j) = imgarray.shape[0:2]
asking.
Chris
2009/5/3 Stéfan van der Walt ste...@sun.ac.za
Hi Chris
2009/5/4 Chris Colbert sccolb...@gmail.com:
in my endless pursuit of perfomance, i'm searching for a quick way to
create
a 3d histogram from a 3d rgb image.
Does histogramdd do what you want?
Regards
Stéfan
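histogramdd does indeed cover this; a sketch producing a (16, 16, 16) RGB histogram (the dummy image and bin ranges are my assumptions, not the original code):

```python
import numpy as np

rgb = np.random.randint(0, 256, (64, 64, 3))  # a dummy RGB image

# flatten to an (N, 3) list of colors, then bin each channel into 16 bins
pixels = rgb.reshape(-1, 3)
hist, edges = np.histogramdd(pixels, bins=(16, 16, 16),
                             range=((0, 256), (0, 256), (0, 256)))
print(hist.shape)  # (16, 16, 16)
print(hist.sum())  # 4096.0, one count per pixel
```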
framerate, but good enough for prototyping.
Thanks!
Chris
On Sun, May 3, 2009 at 8:36 PM, josef.p...@gmail.com wrote:
On Sun, May 3, 2009 at 8:15 PM, Chris Colbert sccolb...@gmail.com
wrote:
in my endless pursuit of perfomance, i'm searching for a quick way to
create
a 3d histogram from a 3d
I'll take a look at them over the next few days and see what I can hack out.
Chris
On Mon, May 4, 2009 at 3:18 PM, David Huard david.hu...@gmail.com wrote:
On Mon, May 4, 2009 at 7:00 AM, josef.p...@gmail.com wrote:
On Mon, May 4, 2009 at 12:31 AM, Chris Colbert sccolb...@gmail.com
wrote
:
On Mon, May 4, 2009 at 4:18 PM, josef.p...@gmail.com wrote:
On Mon, May 4, 2009 at 4:00 PM, Chris Colbert sccolb...@gmail.com
wrote:
i'll take a look at them over the next few days and see what i can hack
out.
Chris
On Mon, May 4, 2009 at 3:18 PM, David Huard david.hu...@gmail.com
I just realized I don't need the line:
cdef int z = img.shape(2)
It's left over from tinkering. Sorry. And I should probably convert the out
array to type float to handle large data sets.
Chris
On Wed, May 6, 2009 at 7:30 PM, josef.p...@gmail.com wrote:
On Wed, May 6, 2009 at 6:06 PM, Chris
, josef.p...@gmail.com wrote:
On Wed, May 6, 2009 at 7:39 PM, Chris Colbert sccolb...@gmail.com wrote:
i just realized I don't need the line:
cdef int z = img.shape(2)
it's left over from tinkering. sorry. And i should probably convert the
out
array to type float to handle large data sets
Suppose I have two arrays, n and t; both are 1-D arrays.
For each value in t, I need to use it to perform an element-wise scalar
operation on every value in n and then sum the results into a single scalar
to be stored in the output array.
Is there any way to do this without the for loop, like
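If the per-element operation is something numpy can broadcast (multiplication shown here as a stand-in for the real function, which the thread says is more complicated), the loop collapses to one broadcasted expression:

```python
import numpy as np

n = np.random.random(100)
t = np.random.random(50)

# t[:, None] has shape (50, 1); broadcasting against n gives (50, 100),
# then summing over axis 1 leaves one scalar per value of t
out = (t[:, None] * n).sum(axis=1)
print(out.shape)  # (50,)
```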
:
On Thu, May 7, 2009 at 12:39 PM, Chris Colbert sccolb...@gmail.com
wrote:
suppose i have two arrays: n and t, both are 1-D arrays.
for each value in t, I need to use it to perform an element wise scalar
operation on every value in n and then sum the results into a single
scalar
*', arg2)
tempval2 = eval(transform, arg2)*0.5
fval = (exp(b) / t) * (tempval2 + rsum)
f.append(fval)
/code #
On Thu, May 7, 2009 at 1:04 PM, Chris Colbert sccolb...@gmail.com wrote:
unfortunately, the actual function being processed is not so simple
how it
works either, but it doesn't work without it.
I've just about got something working using broadcasting and will post it
soon.
chris
On Thu, May 7, 2009 at 1:37 PM, josef.p...@gmail.com wrote:
On Thu, May 7, 2009 at 1:08 PM, Chris Colbert sccolb...@gmail.com wrote:
let me just post my
:
On Thu, May 7, 2009 at 2:11 PM, Chris Colbert sccolb...@gmail.com wrote:
alright, I got it working. Thanks!
This version is astonishingly 1900x faster than my original
implementation, which had two for loops. Both versions are below:
thanks again!
### new fast code
b = 4.7
that's essentially what the eval statement does.
On Thu, May 7, 2009 at 4:22 PM, josef.p...@gmail.com wrote:
On Thu, May 7, 2009 at 3:39 PM, josef.p...@gmail.com wrote:
On Thu, May 7, 2009 at 3:10 PM, Chris Colbert sccolb...@gmail.com
wrote:
the user of the program inputs the transform
Seljebotn da...@student.matnat.uio.no
Stéfan van der Walt wrote:
2009/5/7 Chris Colbert sccolb...@gmail.com:
This was really my first attempt at doing anything constructive with
Cython.
It was actually unbelievably easy to work with. I think I spent less
time
working on this than I did trying
Now man up and buy him his beer!
2009/5/8 Stéfan van der Walt ste...@sun.ac.za
Hi all,
David Cournapeau got gcov working with NumPy! Well done, David!
http://cournape.wordpress.com/2009/05/08/first-steps-toward-c-code-coverage-in-numpy/
Regards
Stéfan
At least I think this is strange behavior.
When convolving an image with a large kernel, it's known that it's faster to
perform the operation as multiplication in the frequency domain. The below
code example shows that the results of my 2d filtering are shifted from the
expected value by a distance of 1/2
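For anyone hitting the same half-kernel shift: it usually comes from multiplying by the FFT of a kernel whose center is not at element (0, 0). A minimal numpy sketch (the box kernel and sizes are made up for illustration, not the original code):

```python
import numpy as np

img = np.random.random((64, 64))

# a normalized 7x7 box kernel, centered in a 64x64 array
kernel = np.zeros((64, 64))
kernel[29:36, 29:36] = 1.0 / 49.0

# multiplying raw FFTs convolves against the kernel as stored, which
# displaces the output by half the kernel extent; ifftshift moves the
# kernel's center to element (0, 0) first, removing the offset
out = np.real(np.fft.ifft2(np.fft.fft2(img) *
                           np.fft.fft2(np.fft.ifftshift(kernel))))
```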
der Walt ste...@sun.ac.za
Hi Chris
2009/5/11 Chris Colbert sccolb...@gmail.com:
When convolving an image with a large kernel, its know that its faster to
perform the operation as multiplication in the frequency domain. The
below
code example shows that the results of my 2d filtering
Thanks Stefan.
2009/5/11 Stéfan van der Walt ste...@sun.ac.za
2009/5/11 Chris Colbert sccolb...@gmail.com:
Does the scipy implementation do this differently? I thought that since FFTW
support has been dropped, scipy and numpy use the same routines...
Just to be clear, I
This is interesting.
I have always done RGB imaging with numpy using arrays of shape (height,
width, 3). In fact, this is the form that PIL gives when calling
np.asarray() on a PIL image.
It does seem more efficient to be able to do a[0],a[1],a[2] to get the R, G,
and B channels respectively.
The reason for all this is that the bitmap image format specifies the image
origin as the lower-left corner. This is the convention used by PIL. The
origin of a numpy array is the upper-left corner. Matplotlib does not
handle this discrepancy in the function pil_to_array, which is called
On 64-bit ubuntu 9.04 and Python 2.6, I built numpy from source against
atlas and lapack (everything 64bit).
To install, I used: sudo python setup.py install --prefix /usr/local
but then Python doesn't find the numpy module, even though it exists in
/usr/local/lib/python2.6/site-packages
Do I
Building without the prefix flag works for me as well; just wondering why
this doesn't...
Chris
On Mon, Jun 1, 2009 at 4:47 PM, Skipper Seabold jsseab...@gmail.com wrote:
On Mon, Jun 1, 2009 at 4:37 PM, Chris Colbert sccolb...@gmail.com wrote:
On 64-bit ubuntu 9.04 and Python 2.6, I built
Thanks Robert,
the directory indeed wasn't in the $PATH variable.
Cheers,
Chris
On Mon, Jun 1, 2009 at 5:12 PM, Robert Kern robert.k...@gmail.com wrote:
On Mon, Jun 1, 2009 at 15:37, Chris Colbert sccolb...@gmail.com wrote:
On 64-bit ubuntu 9.04 and Python 2.6, I built numpy from source
the directory wasn't on the python path either. I added a site-packages.pth
file to /usr/local/lib/python2.6/dist-packages with the line
/usr/local/lib/python2.6/site-packages
Not elegant, but it worked.
Chris
On Mon, Jun 1, 2009 at 5:44 PM, Chris Colbert sccolb...@gmail.com wrote:
yeah, I
Sebastian is right.
Since Matlab r2007 (I think that's the version) it has included support for
multi-core architectures. On my Core 2 Quad here at the office, r2008b has no
problem utilizing 100% CPU for large matrix multiplications.
If you download and build atlas and lapack from source and
different on Windows, AFAIK.
chris
On Thu, Jun 4, 2009 at 4:54 PM, Chris Colbert sccolb...@gmail.com wrote:
Sebastian is right.
Since Matlab r2007 (i think that's the version) it has included support for
multi-core architecture. On my core2 Quad here at the office, r2008b has no
problem utilizing 100
How about just introducing a slightly different syntax which tells numpy to
handle the array like a matrix?
Something along the lines of this:
A = array[[..]]
B = array[[..]]
elementwise multiplication (as it currently is):
C = A * B
matrix multiplication:
C = {A} * {B}
or
C = [A] * [B]
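For reference, the two operations the proposed syntax would distinguish, written with the existing API:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])

elementwise = A * B    # Hadamard (element-by-element) product
matmul = np.dot(A, B)  # true matrix product

print(elementwise[0, 0])  # 5.0
print(matmul[0, 0])       # 19.0
```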
well, it sounded like a good idea.
Oh, well.
On Fri, Jun 5, 2009 at 5:28 PM, Robert Kern robert.k...@gmail.com wrote:
On Fri, Jun 5, 2009 at 16:24, Chris Colbert sccolb...@gmail.com wrote:
How about just introducing a slightly different syntax which tells numpy
to
handle the array like
and verify all cpu cores are
at 100% if you built with threads
Celebrate with a beer!
Cheers!
Chris
On Sat, Jun 6, 2009 at 10:42 AM, Keith Goodmankwgood...@gmail.com wrote:
On Fri, Jun 5, 2009 at 2:37 PM, Chris Colbert sccolb...@gmail.com wrote:
I'll caution anyone against using Atlas from
when you build numpy, did you use site.cfg to tell it where to find
your atlas libs?
On Sat, Jun 6, 2009 at 1:02 PM, Richard Llewellynllew...@gmail.com wrote:
Hello,
I've managed a build of lapack and atlas on Fedora 10 on a quad core, 64,
and now (...) have a numpy I can import that runs
, cblas, atlas
[amd]
amd_libs = amd
[umfpack]
umfpack_libs = umfpack, gfortran
[fftw]
libraries = fftw3
Rich
On Sat, Jun 6, 2009 at 10:25 AM, Chris Colbert sccolb...@gmail.com wrote:
when you build numpy, did you use site.cfg to tell it where to find
your atlas libs?
On Sat, Jun 6
[lapack_opt]
libraries = lapack, f77blas, cblas, atlas
[amd]
amd_libs = amd
[umfpack]
umfpack_libs = umfpack, gfortran
[fftw]
libraries = fftw3
Rich
On Sat, Jun 6, 2009 at 10:25 AM, Chris Colbert sccolb...@gmail.com wrote:
when you build numpy, did you use site.cfg to tell
in site.cfg when you built it.
Change your site.cfg, rebuild, reinstall, and you should be fine.
Chris
On Sun, Jun 7, 2009 at 12:11 AM, llew...@gmail.com wrote:
Hi,
On Jun 6, 2009 3:11pm, Chris Colbert sccolb...@gmail.com wrote:
it definitely found your threaded atlas libraries. How do you
thanks for catching the typos!
Chris
On Sun, Jun 7, 2009 at 4:20 AM, Gabriel Beckersbeck...@orn.mpg.de wrote:
On Sat, 2009-06-06 at 12:59 -0400, Chris Colbert wrote:
../configure -b 64 -D c -DPentiumCPS=2400 -Fa -alg -fPIC
--with-netlib-lapack=/home/your-user-name/build/lapack/lapack-3.2.1
reported no errors.
Gabriel
On Sun, 2009-06-07 at 10:20 +0200, Gabriel Beckers wrote:
On Sat, 2009-06-06 at 12:59 -0400, Chris Colbert wrote:
../configure -b 64 -D c -DPentiumCPS=2400 -Fa -alg -fPIC
--with-netlib-lapack=/home/your-user-name/build/lapack/lapack-3.2.1/Lapack_LINUX.a
Many
Do you mean that the values in the kernel depend on the kernel's
position relative to the data to be convolved, or that the kernel is
not composed of homogeneous values but otherwise does not change as it
is slid around the source data?
If the case is the latter, you may be better off doing the
Can you hold the entire file in memory as a single array with room to spare?
If so, you could use multiprocessing and load a bunch of smaller
arrays, then join them all together.
It won't be super fast, because serializing a numpy array is somewhat
slow when using multiprocessing. That said, it's
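A rough sketch of that load-in-chunks-and-join approach (the file name, chunking scheme, and use of np.fromfile's byte offset are all illustrative assumptions, not the poster's code):

```python
import numpy as np
from multiprocessing import Pool


def load_chunk(args):
    # hypothetical loader: read `count` float64 values starting at
    # element `offset` (np.fromfile's offset is in bytes, so scale by 8)
    fname, offset, count = args
    return np.fromfile(fname, dtype=np.float64, count=count, offset=offset * 8)


def load_parallel(fname, total, nchunks=4):
    """Load `total` float64 values from fname in nchunks parallel reads."""
    step = total // nchunks
    tasks = [(fname, i * step, step) for i in range(nchunks)]
    with Pool(nchunks) as pool:
        parts = pool.map(load_chunk, tasks)
    # join the smaller arrays back into one contiguous array
    return np.concatenate(parts)
```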