Hi
I have indices into an array that I'd like to split into sequential runs,
e.g.
[1,2,3,10,11] -> [1,2,3], [10,11]
How do I do this?
-Mathew
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
-=- Olivier
2011/6/2 Mathew Yeates mat.yea...@gmail.com
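Olivier's reply was truncated in the archive; a common NumPy idiom for this split (a sketch of the usual approach, not necessarily what he posted) breaks the array wherever consecutive elements differ by more than 1:

```python
import numpy as np

# split an index array into runs of consecutive values
idx = np.array([1, 2, 3, 10, 11])
# positions where the gap exceeds 1 mark the start of a new run
runs = np.split(idx, np.where(np.diff(idx) != 1)[0] + 1)
# runs -> [array([1, 2, 3]), array([10, 11])]
```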
Hi
I have installed a new version of Python27 in a new directory. I want to get
this info into the registry so, when I install Numpy, it will use my new
Python
TIA
-Mathew
I *am* using the windows installer.
On Thu, May 19, 2011 at 11:14 AM, Alan G Isaac alan.is...@gmail.com wrote:
On 5/19/2011 2:07 PM, Mathew Yeates wrote:
I have installed a new version of Python27 in a new directory. I want to get
this info into the registry so, when I install Numpy
Right. The Registry keys point to the old Python27.
On Thu, May 19, 2011 at 11:23 AM, Alan G Isaac alan.is...@gmail.com wrote:
On 5/19/2011 2:15 PM, Mathew Yeates wrote:
I *am* using the windows installer.
And you find that it does not find your most recent
Python 2.7 install, for which you
cool. just what I was looking for
On Thu, May 19, 2011 at 2:15 PM, Alan G Isaac alan.is...@gmail.com wrote:
On 5/19/2011 2:24 PM, Mathew Yeates wrote:
The Registry keys point to the old Python27.
Odd. The default installation settings
should have reset this. Or so I believed.
Maybe
Solved. Sort of. When I compiled by hand and switched /MD to /MT it
worked. It would still be nice if I could control the compiler options
f2py passes to cl.exe
-Mathew
On Thu, May 19, 2011 at 3:05 PM, Mathew Yeates mat.yea...@gmail.com wrote:
Hi
I am trying to run f2py and link to some
okay. To get it all to work I edited msvc9compiler.py and changed /MD
to /MT. This still led to a different error, having to do with
mt.exe, which does not come with MSVC 2008 Express. I fixed this by
commenting out the /MANIFEST stuff in msvc9compiler.py
On Thu, May 19, 2011 at 6:25 PM, Mathew Yeates
Hi
What is the current method of using ndimage on a Tiff file? I've seen
different methods using ndimage itself, scipy.misc, and PIL.
Mathew
some of its
transformations from the start, e.g.
import numpy as np
from PIL import Image

def imread(fname, mode='RGBA'):
    return np.asarray(Image.open(fname).convert(mode))
to ensure that you always get 4-channel images, even for images that
were initially RGB or grayscale.
HTH,
Dan
On Tue, Apr 26, 2011 at 2:00 PM, Mathew
I have
subroutine foo (a)
integer a
print*, 'Hello from Fortran!'
print*, 'a=', a
a=2
end
and from python I want to do
a=1
foo(a)
and I want a's value to now be 2.
How do I do this?
Mathew
...@gmail.com wrote:
On Tue, Apr 12, 2011 at 9:06 PM, Mathew Yeates mat.yea...@gmail.com wrote:
I have
subroutine foo (a)
integer a
print*, 'Hello from Fortran!'
print*, 'a=', a
a=2
end
and from python I want to do
a=1
foo(a)
and I want a's value to now be 2.
How
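The reply above is cut off in the archive; the standard f2py approach (a sketch of the usual directive, not necessarily the answer that was posted) is to mark the argument intent(in,out), so the modified value is returned to Python rather than mutated in place:

```fortran
      subroutine foo (a)
      integer a
cf2py intent(in,out) a
      print*, 'Hello from Fortran!'
      a = 2
      end
```

After building with something like `f2py -c -m demo foo.f`, Python receives the new value as a return: `a = demo.foo(1)` leaves `a == 2`. Plain Python integers are immutable, so `foo(a)` can never rebind the caller's `a` in place; returning the value is the idiomatic workaround.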
I'm trying to do something ... unusual.
gdb supports scripting with Python. From within my python script, I can
get the address of a contiguous area of memory that stores a fortran
array. I want to create a NumPy array using frombuffer. I see that
the CPython API supports the creation of a buffer,
Thank a lot. I was wading through the Python C API. This is much simpler.
-Mathew
On Fri, Sep 24, 2010 at 10:21 AM, Zachary Pincus
zachary.pin...@yale.edu wrote:
I'm trying to do something ... unusual.
gdb supports scripting with Python. From within my python script, I can
get the address of
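Zachary's answer is truncated in the archive; a sketch of the usual ctypes route (my reconstruction): given a raw address and an element count, wrap the memory without copying.

```python
import ctypes
import numpy as np

# stand-in for the address gdb would hand us: here we take the address
# of an existing array so the example is self-contained
src = np.arange(5, dtype=np.float64)
address, n = src.ctypes.data, src.size

# wrap the raw memory as a ctypes array, then view it with frombuffer
buf = (ctypes.c_double * n).from_address(address)
view = np.frombuffer(buf, dtype=np.float64)
# view shares memory with src: a write through one is seen by the other
```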
thanks. I am also getting an error in ndi.mean
Were you getting the error
RuntimeError: data type not supported?
-Mathew
On Wed, Jun 2, 2010 at 9:40 AM, Wes McKinney wesmck...@gmail.com wrote:
On Wed, Jun 2, 2010 at 3:41 AM, Vincent Schut sc...@sarvision.nl wrote:
On 06/02/2010 04:52 AM,
I'm on Windows, using a precompiled binary. I never built numpy/scipy on
Windows.
On Wed, Jun 2, 2010 at 10:45 AM, Wes McKinney wesmck...@gmail.com wrote:
On Wed, Jun 2, 2010 at 1:23 PM, Mathew Yeates mat.yea...@gmail.com
wrote:
thanks. I am also getting an error in ndi.mean
Were you
Nope. This version didn't work either.
If you're on Python 2.6 the binary on here might work for you:
http://www.lfd.uci.edu/~gohlke/pythonlibs/
It looks recent enough to have the rewritten ndimage
Hi
Can anyone think of a clever (non-looping) solution to the following?
I have a list of latitudes, a list of longitudes, and a list of data values.
All lists are the same length.
I want to compute an average of data values for each lat/lon pair. e.g. if
lat[1001] lon[1001] = lat[2001] [lon
I guess it's as fast as I'm going to get. I don't really see any other way.
BTW, the lat/lons are integers.
-Mathew
On Tue, Jun 1, 2010 at 1:49 PM, Zachary Pincus zachary.pin...@yale.eduwrote:
Hi
Can anyone think of a clever (non-looping) solution to the following?
I have a list of
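The posted answer didn't survive the truncation; one standard loop-free sketch of the grouping uses `np.unique` plus `np.bincount` (the sample data below is invented):

```python
import numpy as np

# made-up sample: integer lat/lon pairs, with one pair repeated
lat = np.array([10, 10, 20, 10])
lon = np.array([5, 5, 7, 5])
data = np.array([1.0, 3.0, 10.0, 5.0])

# encode each (lat, lon) pair as one integer key, label the unique
# pairs, then sum and count per label with bincount
keys = lat * 361 + lon  # collision-free as long as 0 <= lon < 361
pairs, labels = np.unique(keys, return_inverse=True)
means = np.bincount(labels, weights=data) / np.bincount(labels)
# means[i] is the average of the data values for the i-th unique pair
```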
There is definitely something wrong with matplotlib/numpy. Consider the
following
from numpy import *
mydata=memmap('map.dat',dtype=float64,mode='w+',shape=56566500)
del mydata
I can now remove the file map.dat with (from the command line) $rm map.dat
However
If I plot mydata before the line
19, 2009 at 10:30 AM, John Hunter jdh2...@gmail.com wrote:
On Nov 19, 2009, at 11:57 AM, Robert Kern robert.k...@gmail.com wrote:
On Thu, Nov 19, 2009 at 11:52, Mathew Yeates mat.yea...@gmail.com
wrote:
There is definitely something wrong with matplotlib/numpy. Consider
the
following
I am running my gtk app from python. I am deleting the canvas and running
gc.collect(). I still seem to have a reference to my memmapped data.
Any other hints?
-Mathew
On Thu, Nov 19, 2009 at 10:42 AM, John Hunter jdh2...@gmail.com wrote:
On Nov 19, 2009, at 12:35 PM, Mathew Yeates
yes, a GTK app from the python shell. And not using the toolbar.
I'll see if I can extract out a sample of code that demonstrates the problem
I'm having.
Thx
Mathew
On Thu, Nov 19, 2009 at 10:56 AM, John Hunter jdh2...@gmail.com wrote:
On Nov 19, 2009, at 12:53 PM, Mathew Yeates mat.yea
Hi
I have a line of matplotlib code
    self.ax.plot(plot_data, mif)
that causes the line
    self.data = numpy.zeros(shape=dims)
to throw a MemoryError exception.
(if I comment out the first line I get no error.)
This is on a windows xp machine with latest numpy and the latest matplotlib.
I
The value of dims is constant and not particularly large. I also checked to
make sure I wasn't running out of memory. Are there other reasons for this
error?
Mathew
On Wed, Nov 18, 2009 at 1:51 PM, Robert Kern robert.k...@gmail.com wrote:
On Wed, Nov 18, 2009 at 15:48, Mathew Yeates mat.yea
also, the exception is only thrown when I plot something first. I wonder if
matplotlib is messing something up.
On Wed, Nov 18, 2009 at 2:13 PM, Mathew Yeates mat.yea...@gmail.com wrote:
The value of dims is constant and not particularly large. I also checked to
make sure I wasn't running out
It turns out I *was* running out of memory. My dimensions would require 3.5
gig and my plot must have used up some memory.
On Wed, Nov 18, 2009 at 2:43 PM, Charles R Harris charlesr.har...@gmail.com
wrote:
On Wed, Nov 18, 2009 at 3:13 PM, Mathew Yeates mat.yea...@gmail.comwrote
What limits are there on file size when using memmap?
-Mathew
For a 64 bit machine, does this mean I am limited to 4 GB?
-Mathew
On Wed, Nov 18, 2009 at 3:48 PM, Robert Kern robert.k...@gmail.com wrote:
On Wed, Nov 18, 2009 at 17:43, Mathew Yeates mat.yea...@gmail.com wrote:
What limits are there on file size when using memmap?
With a modern
, Mathew Yeates myea...@jpl.nasa.gov
mailto:myea...@jpl.nasa.gov wrote:
I have a function f(x,y) which produces N values [v1,v2,v3 vN]
where some of the values are None (only found after evaluation)
each evaluation of f is expensive and N is large.
I want N x,y pairs which
Sebastian Walter wrote:
N optimization problems. This is very unusual! Typically the problem
at hand can be formulated as *one* optimization problem.
yes, this is really not so much an optimization problem as it is a
vectorization problem.
I am trying to avoid
1) Evaluate f over and over
if your function has many
local minima in some of the N output dimensions.
Mathew Yeates wrote:
Sebastian Walter wrote:
N optimization problems. This is very unusual! Typically the problem
at hand can be formulated as *one* optimization problem.
yes, this is really not so
What is the canonical method for
dealing with missing values?
Suppose f(x,y) returns None for some (x,y) pairs (unknown until
evaluation). I don't like the idea of setting the return to some small
value as this may create local maxima in the solution space.
Do any of the scipy packages deal
I have a function f(x,y) which produces N values [v1,v2,v3 vN]
where some of the values are None (only found after evaluation)
each evaluation of f is expensive and N is large.
I want N x,y pairs which produce the optimal value in each column.
A brute force approach would be to generate
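The thread never settles this, but one conventional NumPy treatment (my sketch, with invented values) is to store the missing evaluations as NaN rather than None, then use the nan-aware reductions so a missing entry can never masquerade as an optimum:

```python
import numpy as np

# invented example: 3 candidate (x, y) pairs, N = 3 output columns,
# with NaN marking evaluations that came back missing
vals = np.array([[1.0, np.nan, 3.0],
                 [4.0, 2.0, np.nan],
                 [0.5, 5.0, 1.0]])

# per-column index of the best valid value; NaNs are ignored
best_rows = np.nanargmax(vals, axis=0)
```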
I know this must be trivial but I can't seem to get it right
I have N 2x2 arrays which perform a rotation. I also have N xy pairs to
transform. What is the simplest way to perform the transformation
without looping?
Thanks from someone about to punch their screen.
I should add, I'm starting with N rotation angles. So I should rephrase
and say I'm starting with N angles and N xy pairs.
Mathew Yeates wrote:
I know this must be trivial but I can't seem to get it right
I have N 2x2 arrays which perform a rotation. I also have N xy pairs to
transpose
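A sketch of the loop-free version (my own construction; the angles and points are invented): build all N rotation matrices at once from the angles, then contract with einsum.

```python
import numpy as np

theta = np.array([0.0, np.pi / 2])       # N rotation angles
xy = np.array([[1.0, 0.0], [1.0, 0.0]])  # N (x, y) pairs

c, s = np.cos(theta), np.sin(theta)
# stack into shape (N, 2, 2): one rotation matrix per angle
R = np.array([[c, -s], [s, c]]).transpose(2, 0, 1)
# apply the i-th matrix to the i-th pair, with no Python loop
rotated = np.einsum('nij,nj->ni', R, xy)
```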
well, this isn't a perfect solution. polyfit is better because it
determines rank based on condition values, finds the eigenvalues,
etc. But unless it can be vectorized without Python looping, it's too slow
for me to use
Mathew
josef.p...@gmail.com wrote:
If you remove the mean from
Hi
I posted something about this earlier
Say I have 2 arrays X and Y with shapes (N,3) where N is large
I am doing the following
for row in range(N):
    result = polyfit(X[row,:], Y[row,:], 1, full=True)  # fit 3 points with a line
This takes forever and I was hoping to find a way to speed things
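Keith's posted solution isn't preserved here, but a loop-free degree-1 fit per row can be done with the closed-form least-squares slope and intercept (my sketch; the sample rows are invented):

```python
import numpy as np

# invented (N, 3) sample rows: y = 2x + 1 and y = 2x - 2
X = np.array([[0.0, 1.0, 2.0], [1.0, 2.0, 3.0]])
Y = np.array([[1.0, 3.0, 5.0], [0.0, 2.0, 4.0]])

# closed-form least-squares slope/intercept, computed per row at once
xm = X.mean(axis=1, keepdims=True)
ym = Y.mean(axis=1, keepdims=True)
slope = ((X - xm) * (Y - ym)).sum(axis=1) / ((X - xm) ** 2).sum(axis=1)
intercept = ym.ravel() - slope * xm.ravel()
```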
sheer genius. Done in the blink of an eye and my original was taking 20
minutes!
Keith Goodman wrote:
On 4/21/09, Mathew Yeates myea...@jpl.nasa.gov wrote:
Hi
I posted something about this earlier
Say I have 2 arrays X and Y with shapes (N,3) where N is large
I am doing the following
Hi
The line
    from _dotblas import dot
is giving me an import error. When I looked at the symbols in _dotblas.so
I only see things like CFLOAT_dot.
When I trace the system calls I see that my optimized ATLAS libraries
are being accessed, but immediately after opening libatlas.so I get an
Hi,
I understand how to fit the points (x1,y1), (x2,y2), (x3,y3) with a line
using polyfit. But what if I want to perform this task on every row of
an array?
For instance
[[x1,x2,x3],
[s1,s2,s3]]
[[y1,y2,y3,],
[r1,r2,r3]]
and I want the results to be the coefficients [a,b,c] and [d,e,f]
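With exactly 3 points and 3 coefficients per row, the fit is an exact solve, and the per-row Vandermonde systems can be handled in one batched call (my sketch, with invented data):

```python
import numpy as np

# one row shown; any number of rows works the same way
X = np.array([[0.0, 1.0, 2.0]])
Y = np.array([[1.0, 2.0, 5.0]])  # y = x**2 + 1 at those points

# shape (N, 3, 3): one Vandermonde matrix per row
V = np.stack([X**2, X, np.ones_like(X)], axis=-1)
# batched solve gives rows of coefficients [a, b, c]
coeffs = np.linalg.solve(V, Y[..., None])[..., 0]
```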
Hi
I have 2 vectors A and B. For each value in A I want to find the location
in B of the same value. Both A and B have unique elements.
Of course I could do something like
for index in range(len(A)):
    v = A[index]
    location = numpy.where(B == v)
But I have very large lists and it will take too
[EMAIL PROTECTED]
wrote:
On Fri, Oct 24, 2008 at 3:48 PM, Mathew Yeates [EMAIL PROTECTED]wrote:
Hi
I have 2 vectors A and B. For each value in A I want to find the location
in B of the same value. Both A and B have unique elements.
Of course I could something like
For each index
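The reply content didn't survive; the usual O(n log n) idiom (a sketch under the thread's assumptions: unique elements, every A value present in B) is argsort plus searchsorted:

```python
import numpy as np

A = np.array([30, 10, 20])
B = np.array([10, 20, 30, 40])

# sort B once, binary-search all of A, then map back to B's order
order = np.argsort(B)
locations = order[np.searchsorted(B, A, sorter=order)]
# B[locations] == A element-wise
```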
Is there a routine in scipy for telling whether a point is inside a
convex 4 sided polygon?
Mathew
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
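No such routine is pointed to in the thread; a self-contained cross-product sign test works for any convex polygon with counter-clockwise vertices (my sketch, my function name):

```python
import numpy as np

def in_convex_quad(pt, verts):
    """True if pt is inside (or on) a convex polygon given CCW."""
    v = np.asarray(verts, dtype=float)
    p = np.asarray(pt, dtype=float)
    edges = np.roll(v, -1, axis=0) - v   # edge vectors
    rel = p - v                          # point relative to each vertex
    # z-component of edge x rel; all non-negative <=> inside, for CCW
    cross = edges[:, 0] * rel[:, 1] - edges[:, 1] * rel[:, 0]
    return bool((cross >= 0).all())

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```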
On an AMD x86_64 with ATLAS installed I am getting errors like
ValueError: On entry to DLASD0 parameter number 9 had an illegal value
ValueError: On entry to ILAENV parameter number 2 had an illegal value
Anybody seen this before?
Mathew
Hi
In my site.cfg I have
[DEFAULT]
library_dirs = /home/ossetest/lib64:/home/ossetest/lib
include_dirs = /home/ossetest/include
[fftw]
libraries = fftw3
but libfftw3.a isn't being accessed.
ls -lu ~/lib/libfftw3.a
-rw-r--r-- 1 ossetest ossetest 1572628 Jul 26 15:02
/home/ossetest/lib/libfftw3.a
I'm getting this too
Ticket #652 ... ok
Ticket #662 ... Segmentation fault
Robert Kern wrote:
On Tue, Jul 29, 2008 at 14:16, James Turner [EMAIL PROTECTED] wrote:
I have built NumPy 1.1.0 on RedHat Enterprise 3 (Linux 2.4.21
with gcc 3.2.3 and glibc 2.3.2) and Python 2.5.1. When I run
my set up is similar. Same cpu's. Except I am using atlas 3.9.1 and gcc
4.2.4
James Turner wrote:
Are you using ATLAS? If so, where did you get it and what cpu do you have?
Yes. I have Atlas 3.8.2. I think I got it from
http://math-atlas.sourceforge.net. I also included Lapack 3.1.1
I am using an ATLAS 64 bit lapack 3.9.1.
My cpu (4 cpus)
-
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU X5460 @ 3.16GHz
stepping: 6
cpu MHz
more info
when /linalg.py(872)eigh() calls dsyevd I crash
James Turner wrote:
Thanks everyone. I think I might try using the Netlib BLAS, since
it's a server installation... but please let me know if you'd like
me to troubleshoot this some more (the sooner the easier).
James.
Charles R Harris wrote:
This smells like an ATLAS problem.
I don't think so. I crash in a call to dsyevd, which is part of lapack but
not atlas. Also, when I commented out the call to test_eigh_build I got
zillions of errors like (look at the second one; warnings wasn't imported?)
oops. It is ATLAS. I was able to run with a nonoptimized lapack.
Mathew Yeates wrote:
Charles R Harris wrote:
This smells like an ATLAS problem.
I don't think so. I crash in a call to dsyevd, which is part of lapack but
not atlas. Also, when I commented out the call to test_eigh_build
What got fixed?
Robert Kern wrote:
On Tue, Jul 29, 2008 at 17:41, Mathew Yeates [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
This smells like an ATLAS problem.
I don't think so. I crash in a call to dsyevd, which is part of lapack but
not atlas. Also, when I commented out
Hi
Okay, here's a weird one. In Fortran you can specify the upper/lower
bounds of an array
e.g. REAL A(3:7)
What would be the best way to translate this to a Numpy array? I would
like to do something like
A=numpy.zeros(shape=(5,))
and have the expression A[3] actually return A[0].
Or
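NumPy has no native lower-bound support; a thin wrapper that shifts the index is one sketch (the class and its name are mine, not something the thread settled on):

```python
import numpy as np

class OffsetArray:
    """Emulates Fortran's REAL A(3:7): valid indices run lower..lower+len-1."""
    def __init__(self, data, lower):
        self.data = np.asarray(data)
        self.lower = lower
    def __getitem__(self, i):
        return self.data[i - self.lower]
    def __setitem__(self, i, value):
        self.data[i - self.lower] = value

A = OffsetArray(np.zeros(5), lower=3)
A[3] = 1.5   # lands in underlying element 0
```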
Which reference manual?
René Bastian wrote:
Le Mercredi 26 Décembre 2007 21:22, Mathew Yeates a écrit :
Hi
I've been looking at fromfunction and itertools but I'm flummoxed.
I have an arbitrary number of lists. I want to form all possible
combinations from all lists. So if
r1=[dog,cat
yes, I came up with this and may use it. Seems like it would be insanely
slow but my problem is small enough that it might be okay.
Thanks
Keith Goodman wrote:
On Dec 26, 2007 12:22 PM, Mathew Yeates [EMAIL PROTECTED] wrote:
I have an arbitrary number of lists. I want to form all
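The modern idiom for all combinations across an arbitrary number of lists is `itertools.product` (which may postdate this 2007 thread); a sketch, completing the truncated `r1=[dog,cat` example with invented values:

```python
import itertools

r1 = ['dog', 'cat']
r2 = ['red', 'blue']

# one tuple per combination, in nested-loop order
combos = list(itertools.product(r1, r2))
# -> [('dog', 'red'), ('dog', 'blue'), ('cat', 'red'), ('cat', 'blue')]
```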
Thanks Chuck.
Charles R Harris wrote:
On Dec 26, 2007 2:30 PM, Charles R Harris [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
On Dec 26, 2007 1:45 PM, Keith Goodman [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
On Dec 26, 2007 12:22 PM, Mathew Yeates [EMAIL
Anybody know of any tricks for handling something like
z[0] = 1.0
for i in range(100):
    out[i] = func1(z[i])
    z[i+1] = func2(out[i])
??
Anybody know how to contact the pycdf author? His name is Gosselin, I
think. There are hardcoded values that cause pycdf to segfault when
using large strings.
Mathew
Hi
When I try
import numpy
id(numpy.dot) == id(numpy.core.multiarray.dot)
I get True. But I have liblapack.a installed in ~/lib and I put the lines
[DEFAULT]
library_dirs = /home/myeates/lib
include_dirs = /home/myeates/include
in site.cfg.
In fact, when I build and run a system trace I see that
yes, I get
from numpy.core import _dotblas
ImportError: No module named multiarray
Now what?
uname -a
Linux 2.6.9-55.0.2.EL #1 Tue Jun 12 17:47:10 EDT 2007 i686 athlon i386
GNU/Linux
Robert Kern wrote:
Mathew Yeates wrote:
Hi
When I try
import numpy
id(numpy.dot) == id
yes
Robert Kern wrote:
Mathew Yeates wrote:
yes, I get
from numpy.core import _dotblas
ImportError: No module named multiarray
That's just weird. Can you import numpy.core.multiarray by itself?
oops. sorry
from numpy.core import _dotblas
ImportError:
/home/myeates/lib/python2.5/site-packages/numpy/core/_dotblas.so:
undefined symbol: cblas_zaxpy
Robert Kern wrote:
Mathew Yeates wrote:
yes, I get
from numpy.core import _dotblas
ImportError: No module named multiarray
= [('NO_ATLAS_INFO', 1)]
language = f77
running config
Robert Kern wrote:
Mathew Yeates wrote:
oops. sorry
from numpy.core import _dotblas
ImportError:
/home/myeates/lib/python2.5/site-packages/numpy/core/_dotblas.so:
undefined symbol: cblas_zaxpy
Okay, yes, that's the problem
more info. My blas library has zaxpy defined but not cblas_zaxpy
Mathew Yeates wrote:
my site.cfg is just
[DEFAULT]
library_dirs = /home/myeates/lib
include_dirs = /home/myeates/include
python setup.py config gives
F2PY Version 2_3979
blas_opt_info:
blas_mkl_info:
libraries mkl,vml
I'm the one who created libblas.a so I must have done something wrong.
This is lapack-3.1.1.
Robert Kern wrote:
If your BLAS is just the reference BLAS, don't bother with _dotblas. It won't be
any faster than the default implementation in numpy. You only get a win if you
are using an
Thanks Robert
I have a deadline and don't have time to install ATLAS. Instead I'm
installing clapack. Is this the correct thing to do?
Mathew
Robert Kern wrote:
Mathew Yeates wrote:
I'm the one who created libblas.a so I must have done something wrong.
This is lapack-3.1.1
I guess I can't blame lapack. My system has atlas so I recompiled numpy
pointing to atlas. Now
id(numpy.dot) == id(numpy.core.multiarray.dot) is False
However, when I run decomp.svd on a 25 by 25 identity matrix, it hangs when
gesdd is called (line 501 of linalg/decomp.py).
Anybody else seeing
never returns
Charles R Harris wrote:
On 8/29/07, *Mathew Yeates* [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
I guess I can't blame lapack. My system has atlas so I recompiled
numpy
pointing to atlas. Now
id(numpy.dot) == id(numpy.core.multiarray.dot) is False
Does anyone know how to run
python setup.py build
and have gfortran used? It is in my path.
Mathew
result
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
Something is *broken*.
Robert Kern wrote:
Mathew Yeates wrote:
Does anyone know how to run
python setup.py build
and have gfortran used? It is in my path.
python setup.py config_fc --fcompiler=gnu95 build
--prefix=/u/vento0/myeates
Thread model: posix
gcc version 4.2.0
-bash-3.1$ python setup.py config_fc --fcompiler=gnu95 build 2>&1 | tee out
Robert Kern wrote:
Mathew Yeates wrote:
result
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
Something is *broken
/usr/bin/g77
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler using config
compiling '_configtest.c':
Robert Kern wrote:
Mathew Yeates wrote:
-bash-3.1$ python
Even more info!
I am using numpy checked out from svn on Wed or Thurs.
Mathew Yeates wrote:
More info:
I tried Chris' suggestion , i.e. export F77=gfortran
And now I get
Found executable /u/vento0/myeates/bin/gfortran
gnu: no Fortran 90 compiler found
Found executable /usr/bin/g77
Mathew
I have gfortran installed in my path. But when I run python setup.py
build I get
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
The output of python